Oracle Cloud Infrastructure User Guide


March 22, 2021


Copyright
Copyright © 2016, 2020, Oracle and/or its affiliates. All rights reserved.
This software and related documentation are provided under a license agreement containing restrictions on use and
disclosure and are protected by intellectual property laws. Except as expressly permitted in your license agreement
or allowed by law, you may not use, copy, reproduce, translate, broadcast, modify, license, transmit, distribute,
exhibit, perform, publish, or display any part, in any form, or by any means. Reverse engineering, disassembly, or
decompilation of this software, unless required by law for interoperability, is prohibited.
The information contained herein is subject to change without notice and is not warranted to be error-free. If you find
any errors, please report them to us in writing.
If this is software or related documentation that is delivered to the U.S. Government or anyone licensing it on behalf
of the U.S. Government, then the following notice is applicable:
U.S. GOVERNMENT END USERS: Oracle programs, including any operating system, integrated software, any
programs installed on the hardware, and/or documentation, delivered to U.S. Government end users are "commercial
computer software" pursuant to the applicable Federal Acquisition Regulation and agency-specific supplemental
regulations. As such, use, duplication, disclosure, modification, and adaptation of the programs, including any
operating system, integrated software, any programs installed on the hardware, and/or documentation, shall be
subject to license terms and license restrictions applicable to the programs. No other rights are granted to the U.S.
Government.
This software or hardware is developed for general use in a variety of information management applications. It is
not developed or intended for use in any inherently dangerous applications, including applications that may create a
risk of personal injury. If you use this software or hardware in dangerous applications, then you shall be responsible
to take all appropriate fail-safe, backup, redundancy, and other measures to ensure its safe use. Oracle Corporation
and its affiliates disclaim any liability for any damages caused by use of this software or hardware in dangerous
applications.
Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their
respective owners.
Intel and Intel Xeon are trademarks or registered trademarks of Intel Corporation. All SPARC trademarks are used
under license and are trademarks or registered trademarks of SPARC International, Inc. AMD, Opteron, the AMD
logo, and the AMD Opteron logo are trademarks or registered trademarks of Advanced Micro Devices. UNIX is a
registered trademark of The Open Group.
This software or hardware and documentation may provide access to or information about content, products, and
services from third parties. Oracle Corporation and its affiliates are not responsible for and expressly disclaim all
warranties of any kind with respect to third-party content, products, and services unless otherwise set forth in an
applicable agreement between you and Oracle. Oracle Corporation and its affiliates will not be responsible for any
loss, costs, or damages incurred due to your access to or use of third-party content, products, or services, except as set
forth in an applicable agreement between you and Oracle.
For information about Oracle's commitment to accessibility, visit the Oracle Accessibility Program website at
http://www.oracle.com/pls/topic/lookup?ctx=acc&id=docacc.
Oracle customers that have purchased support have access to electronic support through My Oracle Support. For
information, visit http://www.oracle.com/pls/topic/lookup?ctx=acc&id=info or visit
http://www.oracle.com/pls/topic/lookup?ctx=acc&id=trs if you are hearing impaired.



Table of contents

Chapter 1 About Oracle Cloud Infrastructure....................................................22


Prefer Online Help?............................................................................................................................................24
Need API Documentation?.................................................................................................................................24

Chapter 2 Welcome to Oracle Cloud Infrastructure.......................................... 26


Getting Started.................................................................................................................................................... 26
About the Services..................................................................................................................................26
Accessing Oracle Cloud Infrastructure.................................................................................................. 29
How Do I Get Started?.......................................................................................................................... 29
Key Concepts and Terminology.............................................................................................................30
Request and Manage Free Oracle Cloud Promotions............................................................................ 32
Buy an Oracle Cloud Subscription........................................................................................................ 36
Request and Manage the Oracle Startup Program................................................................................. 38
Understanding the Sign-In Options........................................................................................................39
Signing In to the Console...................................................................................................................... 41
Using the Console...................................................................................................................................42
Using the Mobile App............................................................................................................................50
Changing Your Password....................................................................................................................... 53
Checking Your Expenses and Usage..................................................................................................... 56
Adding Users.......................................................................................................................................... 58
Oracle Cloud Infrastructure Tutorials.................................................................................................... 61
Tutorial - Launching Your First Linux Instance....................................................................................62
Tutorial - Launching Your First Windows Instance..............................................................................74
Putting Data into Object Storage........................................................................................................... 83
Getting Started with the Command Line Interface................................................................................84
Getting Started...................................................................................................................................... 103
Getting Started with Load Balancing...................................................................................................104
Getting Started with Audit................................................................................................................... 117
Getting Started with Oracle Platform Services.................................................................................... 120
Getting Started with Oracle Applications............................................................................................ 122
Setting Up Your Tenancy.....................................................................................................................123
Getting Help and Contacting Support..................................................................................................126
Task Mapping from My Services........................................................................................................ 133
Frequently Asked Questions.................................................................................................................136

Chapter 3 Oracle Cloud's Free Tier................................................................... 142


Free Trial...........................................................................................................................................................142
Always Free Resources.................................................................................................................................... 142
To provision your Always Free resources using Terraform and Resource Manager.....................143
Upgrading to a Paid Account...........................................................................................................................143
Additional Information..................................................................................................................................... 144
Details of the Always Free Resources............................................................................................................. 144
Compute................................................................................................................................................ 144
Database................................................................................................................................................ 144
Load Balancing..................................................................................................................................... 145
Block Volume....................................................................................................................................... 145
Object Storage.......................................................................................................................................146

Resource Manager................................................................ 146
Service Connector Hub.........................................................................................................................147
Vault...................................................................................................................................................... 147
Frequently Asked Questions: Oracle Cloud Infrastructure Free Tier.............................................................. 147
How do I change which resources I want to designate as Always Free?............................................ 147
What happens when my Free Trial expires or my credits are used up?.............................................. 147
If I upgrade, do I keep my Free Trial credit balance?.........................................................................148
After I upgrade my account, can I downgrade?.................................................................................. 148
My resources no longer appear. How can I restore them?.................................................................. 148
I get an "out of host capacity" error when I try to create an Always Free Compute instance. What can I do?..........................................................................148
Is it possible to extend my Free Trial?................................................................................................ 148
Is my Free Tier account eligible for support?..................................................................................... 148

Chapter 4 Oracle Cloud Infrastructure Government Cloud............................150


Oracle Cloud Infrastructure US Government Cloud........................................................................................150
For All US Government Cloud Customers..........................................................................................150
Oracle Cloud Infrastructure US Government Cloud with FedRAMP Authorization...........................160
Oracle Cloud Infrastructure US Federal Cloud with DISA Impact Level 5 Authorization................. 165
Oracle Cloud Infrastructure United Kingdom Government Cloud..................................................................174
Regions..................................................................................................................................................175
Console Sign-in URLs..........................................................................................................................175
API Reference and Endpoints.............................................................................................................. 175
Services Not Supported in Oracle Cloud Infrastructure United Kingdom Government Cloud............179
SMTP Authentication and Connection Endpoints............................................................................... 179
SPF Record Syntax...............................................................................................................................179

Chapter 5 Service Essentials................................................................................ 180


Security Credentials.......................................................................................................................................... 181
Console Password................................................................................................................................. 181
API Signing Key...................................................................................................................................181
Instance SSH Key.................................................................................................................................181
Auth Token........................................................................................................................................... 182
Regions and Availability Domains.................................................................................................................. 182
About Regions and Availability Domains........................................................................................... 182
Fault Domains.......................................................................................................................................184
Subscribed Region Limits.................................................................................................................... 185
Service Availability Across Regions....................................................................................................185
Resource Availability............................................................................................................................186
Dedicated Regions................................................................................................................................ 188
IP Address Ranges............................................................................................................................................195
Public IP Addresses for VCNs and the Oracle Services Network.......................................................195
Public IP Addresses for the Oracle YUM Repos................................................................................ 198
Resource Identifiers.......................................................................................................................................... 199
Oracle Cloud IDs (OCIDs)...................................................................................................................199
Where to Find Your Tenancy's OCID................................................................................................. 200
Name and Description (IAM Only)..................................................................................................... 200
Display Name........................................................................................................................................200
Resource Monitoring........................................................................................................................................ 201
Prerequisites.......................................................................................................................................... 201
Working with Resource Monitoring.....................................................................................................201
Using the API....................................................................................................................................... 213
Resource Tags...................................................................................................................................................213
Working with Resource Tags...............................................................................................................214

Using the API....................................................................... 217
Service Limits................................................................................................................................................... 217
About Service Limits and Usage......................................................................................................... 217
Compartment Quotas............................................................................................................................ 217
Viewing Your Service Limits, Quotas, and Usage..............................................................................218
When You Reach a Service Limit....................................................................................................... 219
Requesting a Service Limit Increase....................................................................................................219
Limits by Service..................................................................................................................................220
Service Logs..................................................................................................................................................... 238
Working with Service Logs................................................................................................................. 238
Using the API....................................................................................................................................... 238
Tenancy Explorer..............................................................................................................................................238
Tenancy Explorer Highlights................................................................................................................238
Work Requests...................................................................................................................................... 239
Resources Supported by the Tenancy Explorer................................................................................... 239
Required IAM Policy to Work with Resources in the Tenancy Explorer........................................... 244
Navigating to the Tenancy Explorer and Viewing Resources............................................................. 245
Filtering Displayed Resources..............................................................................................................245
Opening the Resource Details Page..................................................................................................... 245
Moving Resources to a Different Compartment.................................................................................. 245
Deleting Resources............................................................................................................................... 246
Using the API....................................................................................................................................... 246
Compartment Quotas........................................................................................................................................ 246
About Compartment Quotas.................................................................................................................246
Using the Console.................................................................................................................................248
Available Quotas by Service................................................................................................................ 249
Work Requests.................................................................................................................................................. 262
Required IAM Policy........................................................................................................................... 263
Work Request States.............................................................................................................................263
Using the Console to View Work Requests........................................................................................ 264
Using the API....................................................................................................................................... 264
Console Announcements.................................................................................................................................. 264
Types of Announcements..................................................................................................................... 264
Required IAM Policy........................................................................................................................... 265
Email Delivery...................................................................................................................................... 265
Viewing Announcements......................................................................................................................265
Using the Command Line Interface (CLI)...........................................................................................267
Using the API....................................................................................................................................... 268
Prerequisites for Oracle Platform Services on Oracle Cloud Infrastructure.................................................... 269
Accessing Oracle Cloud Infrastructure................................................................................................ 269
Required Identity and Access Management (IAM) Policy.................................................................. 269
Resources Created in Your Tenancy by Oracle...................................................................................269
Prerequisites for Oracle Platform Services.......................................................................................... 270
Setting Up the Prerequisites................................................................................................................. 270
Information About Supported Platform Services.................................................................................274
Renaming a Cloud Account............................................................................................................................. 275
Billing and Payments........................................................................................................................................276
Budgets Overview.................................................................................................................................277
Cost and Usage Reports Overview...................................................................................................... 281
Cost Analysis Overview....................................................................................................................... 286
Unified Billing Overview..................................................................................................................... 300
My Services Use Cases.................................................................................................................................... 304
Service Discovery Use Case................................................................................................................ 304
Exadata Use Cases................................................................................................................................307
Managing Exadata Instances................................................................................................................ 318
Using Access Token Authorization with My Services API.................................................................332

Chapter 6 API Gateway....................................................................................... 338


API Gateway.....................................................................................................................................................338
Ways to Access Oracle Cloud Infrastructure.......................................................................................338
Resource Identifiers.............................................................................................................................. 338
Authentication and Authorization........................................................................................................ 339
API Gateway Capabilities and Limits..................................................................................................339
Required IAM Service Policy.............................................................................................................. 339
API Gateway Concepts.........................................................................................................................339
Preparing for API Gateway.................................................................................................................. 342
Creating an API Gateway.....................................................................................................................359
Creating an API Resource with an API Description........................................................................... 362
Creating an API Deployment Specification......................................................................................... 365
Deploying an API on an API Gateway by Creating an API Deployment........................................... 367
Setting Up Custom Domains and TLS Certificates............................................................................. 375
Adding Path Parameters and Wildcards to Route Paths...................................................................... 381
Adding Context Variables to Policies and HTTP Back End Definitions............................................ 382
Calling an API Deployed on an API Gateway.................................................................................... 388
Listing API Gateways and API Deployments..................................................................................... 390
Updating API Gateways and API Deployments.................................................................................. 391
Moving API Gateways and API Deployments Between Compartments............................................. 395
Deleting API Gateways and API Deployments................................................................................... 398
Adding Logging to API Deployments................................................................................................. 401
Adding an HTTP or HTTPS URL as an API Gateway Back End...................................................... 410
Adding a Function in Oracle Functions as an API Gateway Back End.............................................. 414
Adding Stock Responses as an API Gateway Back End.....................................................................417
Adding Request Policies and Response Policies to API Deployment Specifications.......................... 421
Limiting the Number of Requests to API Gateway Back Ends...........................................................425
Adding CORS support to API Deployments....................................................................................... 428
Adding Authentication and Authorization to API Deployments......................................................... 435
Transforming Incoming Requests and Outgoing Responses............................................................... 457
Troubleshooting API Gateway............................................................................................................. 478
API Gateway Internal Limits............................................................................................................... 480
API Gateway Metrics........................................................................................................................... 482

Chapter 7 Archive Storage...................................................................................488


Archive Storage................................................................................................................................................ 488
Using Archive Storage......................................................................................................................... 488
Ways to Access Archive Storage......................................................................................................... 489
Authentication and Authorization........................................................................................................ 490
WORM Compliance............................................................................................................................. 490
Limits on Archive Storage Resources..................................................................................................490

Chapter 8 Audit.....................................................................................................492
Audit..................................................................................................................................................................492
Version 2 Audit Log Schema...............................................................................................................492
Ways to Access Oracle Cloud Infrastructure.......................................................................................492
Authentication and Authorization........................................................................................................ 493
Contents of an Audit Log Event.......................................................................................................... 493
Viewing Audit Log Events...................................................................................................................498
Bulk Export of Audit Log Events........................................................................................................501

Chapter 9 Block Volume...................................................................................... 504


Block Volume................................................................................................................................................... 504
Typical Block Volume Scenarios.........................................................................................................504
Volume Attachment Types...................................................................................................................505
Volume Access Types.......................................................................................................................... 506
Device Paths......................................................................................................................................... 506
Regions and Availability Domains...................................................................................................... 506
Resource Identifiers.............................................................................................................................. 507
Ways to Access Oracle Cloud Infrastructure.......................................................................................507
Authentication and Authorization........................................................................................................ 507
Monitoring Resources...........................................................................................................................507
Moving Resources................................................................................................................................ 507
Tagging Resources................................................................................................................................507
Creating Automation with Events........................................................................................................ 508
Block Volume Encryption.................................................................................................................... 508
Block Volume Data Eradication.......................................................................................................... 508
Block Volume Performance................................................................................................................. 508
Block Volume Durability..................................................................................................................... 509
Block Volume Capabilities and Limits................................................................................................ 509
iSCSI Information.................................................................................................................................509
Volume Groups.....................................................................................................................................510
Creating a Volume................................................................................................................................519
Attaching a Volume..............................................................................................................................521
Attaching a Volume to Multiple Instances.......................................................................................... 524
Connecting To a Volume..................................................................................................................... 528
Listing Volumes....................................................................................................................................533
Listing Volume Attachments................................................................................................................534
Renaming a Volume............................................................................................................................. 534
Editing a Volume's Settings................................................................................................................. 535
Resizing a Volume............................................................................................................................... 536
Overview of Block Volume Backups.................................................................................................. 544
Cloning a Volume.................................................................................................................................564
Disconnecting From a Volume............................................................................................................ 566
Detaching a Volume............................................................................................................................. 567
Deleting a Volume................................................................................................................................567
Move Block Volume Resources Between Compartments................................................................... 568
Block Volume Performance................................................................................................................. 571
Block Volume Metrics......................................................................................................................... 589

Chapter 10 Compliance Documents....................................................................592


Compliance Documents.................................................................................................................................... 592
Types of Compliance Documents........................................................................................................ 592
Types of Environments.........................................................................................................................592
Regions and Availability Domains...................................................................................................... 592
Ways to Access Oracle Cloud Infrastructure.......................................................................................593
Viewing and Downloading Compliance Documents........................................................................... 593

Chapter 11 Compute.............................................................................................594
Compute............................................................................................................................................................ 594
Instance Types...................................................................................................................................... 594
Components for Launching Instances.................................................................................................. 595
Creating Automation with Events........................................................................................................ 596

Resource Identifiers.............................................................. 596
Work Requests...................................................................................................................................... 596
Ways to Access Oracle Cloud Infrastructure.......................................................................................596
Authentication and Authorization........................................................................................................ 596
Storage for Compute Instances............................................................................................................ 597
Limits on Compute Resources............................................................................................................. 597
Best Practices for Your Compute Instance.......................................................................................... 597
Protecting Data on NVMe Devices......................................................................................................604
Boot Volumes....................................................................................................................................... 613
Oracle-Provided Images........................................................................................................................633
Compute Shapes....................................................................................................................................659
Installing and Running Oracle Ksplice................................................................................................ 669
Managing Custom Images.................................................................................................................... 670
Image Import/Export.............................................................................................................................675
Bring Your Own Image (BYOI).......................................................................................................... 680
Configuring Image Capabilities for Custom Images........................................................................... 692
OS Kernel Updates............................................................................................................................... 696
Managing Key Pairs on Linux Instances............................................................................................. 698
Creating an Instance............................................................................................................................. 700
Managing Compute Instances.............................................................................................................. 713
Autoscaling............................................................................................................................................724
Managing Cluster Networks.................................................................................................................732
Dedicated Virtual Machine Hosts........................................................................................................ 735
Connecting to an Instance.................................................................................................................... 739
Adding Users on an Instance............................................................................................................... 743
Displaying the Console for an Instance............................................................................................... 745
Managing Plugins with Oracle Cloud Agent....................................................................................... 746
Running Commands on an Instance.....................................................................................................758
Getting Instance Metadata.................................................................................................................... 763
Updating Instance Metadata................................................................................................................. 770
Editing an Instance............................................................................................................................... 771
Moving a Compute Instance to a New Host........................................................................................781
Moving Compute Resources to a Different Compartment.................................................................. 783
Stopping and Starting an Instance........................................................................................................785
Terminating an Instance....................................................................................................................... 789
Enabling Monitoring for Compute Instances....................................................................................... 790
Compute Metrics...................................................................................................................................794
Compute NVMe Performance.............................................................................................................. 805
Compute Health Monitoring for Bare Metal Instances........................................................................807
Microsoft Licensing on Oracle Cloud Infrastructure........................................................................... 809
Troubleshooting Compute Instances.................................................................................................... 819
Updating the Linux iSCSI Service to Restart Automatically.............................................................. 835

Chapter 12 Container Engine.............................................................................. 840


Container Engine.............................................................................................................................................. 840
Ways to Access Oracle Cloud Infrastructure.......................................................................................840
Creating Automation with Events........................................................................................................ 840
Resource Identifiers.............................................................................................................................. 841
Authentication and Authorization........................................................................................................ 841
Container Engine for Kubernetes Capabilities and Limits.................................................................. 841
Required IAM Service Policy.............................................................................................................. 841
Container Engine and Kubernetes Concepts........................................................................................841
Preparing for Container Engine for Kubernetes.................................................................................. 843
Creating a Kubernetes Cluster..............................................................................................................869
Setting Up Cluster Access....................................................................................................................875

Modifying Kubernetes Cluster Properties............................................................ 881
Modifying Node Pool and Worker Node Properties........................................................................... 882
Updating Worker Nodes by Creating a New Node Pool.....................................................................884
Deleting a Kubernetes Cluster..............................................................................................................885
Monitoring Clusters.............................................................................................................................. 885
Viewing Work Requests and Kubernetes API Server Audit Logs...................................................... 887
Viewing Application Logs on Worker Nodes..................................................................................... 888
Accessing a Cluster Using Kubectl......................................................................................................889
Accessing a Cluster Using the Kubernetes Dashboard........................................................................890
Adding a Service Account Authentication Token to a Kubeconfig File............................................. 893
Deploying a Sample Nginx App on a Cluster Using Kubectl............................................................. 896
Pulling Images from Registry during Deployment.............................................................................. 896
Supported Labels for Different Use Cases.............................................898
Encrypting Kubernetes Secrets at Rest in Etcd................................................................................... 900
Connecting to Worker Nodes Using SSH............................................................................................903
Using Pod Security Policies with Container Engine for Kubernetes....................................904
Autoscaling Kubernetes Clusters..........................................................................................................909
About Access Control and Container Engine for Kubernetes............................................................. 919
Supported Images (Including Custom Images) and Shapes for Worker Nodes...................................922
Supported Admission Controllers........................................................................................................ 924
Kubernetes Versions and Container Engine for Kubernetes................................................................925
Supported Versions of Kubernetes.......................................................................................................926
Upgrading Clusters to Newer Kubernetes Versions............................................................................ 928
Configuring DNS Servers for Kubernetes Clusters............................................................................. 932
Creating Load Balancers to Distribute Traffic Between Cluster Nodes.............................................. 937
Creating a Persistent Volume Claim....................................................................................................947
Adding OCI Service Broker for Kubernetes to Clusters..................................................................... 952
Example: Setting Up an Ingress Controller on a Cluster.....................................................................953
Example: Installing Calico and Setting Up Network Policies............................................................. 960
Frequently Asked Questions About Container Engine for Kubernetes............................................... 962
Container Engine for Kubernetes Metrics............................................................................................962

Chapter 13 Data Transfer.................................................................................... 968


Data Transfer.................................................................................................................................................... 968
Supported Regions................................................................................................................................ 968
Limits on Data Transfer Service Resources.........................................................................................969
Tagging Resources................................................................................................................................969
Automation for Objects Using the Events Service.............................................................................. 969
Notifications.......................................................................................................................................... 969
Data Encryption.................................................................................................................................... 969
Inputting Text into Data Transfer........................................................................................................ 970
What's Next...........................................................................................................................................970
Data Import - Disk............................................................................................................................... 970
Data Import - Appliance.....................................................................................................................1015
Data Export......................................................................................................................................... 1082
Troubleshooting...................................................................................................................................1122
Help Sheets......................................................................................................................................... 1127

Chapter 14 Database...........................................................................................1140
Database.......................................................................................................................................................... 1140
License Types and Bring Your Own License (BYOL) Availability..................................................1140
Always Free Database Resources.......................................................................................................1141
Moving Database Resources to a Different Compartment................................................................ 1141
Monitoring Resources.........................................................................................................................1142

Creating Automation with Events...................................................... 1142
Resource Identifiers............................................................................................................................ 1142
Ways to Access Oracle Cloud Infrastructure.....................................................................................1142
Authentication and Authorization.......................................................................................................1142
Security Zone Integration................................................................................................................... 1143
Limits on the Database Service..........................................................................................................1143
Work Requests Integration................................................................................................................. 1143
Getting Oracle Support Help for Your Database Resources............................................................. 1149
Autonomous Databases.......................................................................................................................1149
Exadata Cloud Service....................................................................................................................... 1222
Bare Metal and Virtual Machine DB Systems.................................................................................. 1353
External Database Service.................................................................................................................. 1559
Oracle Database Software Images......................................................................................................1568
Oracle Maximum Availability Architecture in Oracle Cloud Infrastructure..................................... 1570
Security Zone Integration................................................................................................................... 1576
DB System Time Zone.......................................................................................................................1576
Database Metrics.................................................................................................................................1579
Using the Oracle Database Service Overview to Manage Resources................................................1596
Using Performance Hub to Analyze Database Performance............................................................. 1599
Migrating Databases to the Cloud......................................................................................................1610
Troubleshooting...................................................................................................................................1647
Deprecated Database Service APIs.................................................................................................... 1674

Chapter 15 DNS and Traffic Management...................................................... 1676


DNS and Traffic Management....................................................................................................................... 1676
DNS..................................................................................................................................................... 1676
Traffic Management............................................................................................................................1707

Chapter 16 Email Delivery.................................................................................1740


Email Delivery................................................................................................................................................ 1740
Email Delivery Service Components................................................................................................. 1740
Regions and Availability Domains.....................................................................................................1741
Ways to Access Oracle Cloud Infrastructure.....................................................................................1741
Authentication and Authorization.......................................................................................................1741
SMTP Authentication and Connection Endpoints............................................................................. 1742
Monitoring Resources.........................................................................................................................1743
Email Delivery Service Capabilities and Limits................................................................................1743
Required IAM Service Policy............................................................................................................ 1744
Dedicated IP Addresses...................................................................................................................... 1744
Tagging Resources..............................................................................................................................1745
Integration with Oracle Cloud Infrastructure Services...................................................................... 1745
Getting Started with Email Delivery..................................................................................................1745
Generate SMTP Credentials for a User............................................................................................. 1752
Managing Approved Senders............................................................................................................. 1753
Configure SPF.....................................................................................................................................1754
Configure SMTP Connection............................................................................................................. 1756
Managing the Suppression List.......................................................................................................... 1757
Email Delivery Metrics...................................................................................................................... 1758
Integrating Oracle Application Express with Email Delivery........................................................... 1760
Integrating Postfix with Email Delivery............................................................................................ 1761
Integrating Oracle Enterprise Manager with Email Delivery............................................................ 1763
Integrating Mailx with Email Delivery.............................................................................................. 1763
Integrating Swaks with Email Delivery............................................................................. 1765
Integrating JavaMail with Email Delivery......................................................................... 1766
Integrating Sendmail with Email Delivery........................................................................................ 1770
Integrating PeopleSoft with Email Delivery...................................................................................... 1772
Integrating Python with Email Delivery............................................................................................ 1776
Troubleshooting Email Delivery........................................................................................................ 1778
Email Deliverability............................................................................................................................1782

Chapter 17 Events............................................................................................... 1788


Events.............................................................................................................................................................. 1788
How Events Works.............................................................................................................................1788
Events Concepts..................................................................................................................................1789
Region Availability.............................................................................................................................1790
Ways to Access Oracle Cloud Infrastructure.....................................................................................1790
Authentication and Authorization.......................................................................................................1790
Limits on Events Resources............................................................................................................... 1790
Service Gateway and Events.............................................................................................................. 1790
Getting Started with Events................................................................................................................1790
Matching Events with Filters............................................................................................................. 1800
Events and IAM Policies....................................................................................................................1805
Managing Rules for Events................................................................................................................ 1805
Contents of an Event Message...........................................................................................................1818
Services that Produce Events............................................................................................................. 1821
Events Metrics.................................................................................................................................... 1925

Chapter 18 File Storage......................................................................................1928


File Storage..................................................................................................................................................... 1928
File Storage Concepts.........................................................................................................................1929
Encryption........................................................................................................................................... 1930
Data Transfers.....................................................................................................................................1931
File Storage Space Allocation............................................................................................................ 1931
How File Storage Permissions Work................................................................................................. 1931
Regions and Availability Domains.....................................................................................................1931
Creating Automation with Events...................................................................................................... 1932
Resource Identifiers............................................................................................................................ 1932
Ways to Access Oracle Cloud Infrastructure.....................................................................................1932
Authentication and Authorization.......................................................................................................1932
Limits on Your File Storage Components......................................................................................... 1932
Additional Documentation Resources................................................................................................ 1932
About Security.................................................................................................................................... 1933
Creating File Systems.........................................................................................................................1956
Mounting File Systems.......................................................................................................................1962
Managing File Systems...................................................................................................................... 1978
Managing Mount Targets................................................................................................................... 1988
Managing Snapshots........................................................................................................................... 1998
Using File Storage Parallel Tools...................................................................................................... 2002
File System Metrics............................................................................................................................ 2005
Paths in File Systems......................................................................................................................... 2014
File System Usage and Metering....................................................................................................... 2015
Troubleshooting Your File System.................................................................................................... 2018

Chapter 19 Functions..........................................................................................2040
Functions......................................................................................................................................................... 2040
Ways to Access Oracle Cloud Infrastructure.....................................................................2040
Creating Automation with Events...................................................................................... 2041
Resource Identifiers............................................................................................................................ 2041
Authentication and Authorization.......................................................................................................2041
Oracle Functions Capabilities and Limits.......................................................................................... 2041
Required IAM Service Policy............................................................................................................ 2041
Oracle Functions Concepts.................................................................................................................2042
How Oracle Functions Works............................................................................................................ 2043
Oracle Functions Resiliency, Availability, Concurrency, and Scalability......................................... 2045
Preparing for Oracle Functions.......................................................................................................... 2046
Creating, Deploying, and Invoking a Helloworld Function...............................................................2067
Creating Applications......................................................................................................................... 2069
Creating and Deploying Functions.....................................................................................................2071
Using Custom Dockerfiles..................................................................................................................2073
Creating Functions from Existing Docker Images.............................................................................2074
Sample Functions, Solution Playbooks, Reference Architectures, and Developer Tutorials............. 2076
Viewing Functions and Applications................................................................................................. 2077
Invoking Functions............................................................................................................................. 2079
Controlling Access to Invoke and Manage Functions....................................................................... 2084
Storing and Viewing Function Logs.................................................................................................. 2086
Updating Functions.............................................................................................................................2089
Deleting Applications and Functions................................................................................................. 2091
Passing Custom Configuration Parameters to Functions................................................................... 2093
Accessing File Systems from Running Functions............................................................................. 2096
Integrating Oracle Functions with Other Oracle Products.................................................................2096
Permissions Granted to Containers Running Functions.....................................................................2102
Oracle Functions Support for Private Network Access..................................................................... 2102
Invoking Oracle Functions from Other Oracle Cloud Infrastructure Services...................................2103
Changing Oracle Functions Default Behavior................................................................................... 2103
Differences between Oracle Functions and Fn Project......................................................................2104
Troubleshooting Oracle Functions..................................................................................................... 2105
Function Metrics................................................................................................................................. 2115

Chapter 20 Health Checks................................................................................. 2120


Health Checks................................................................................................................................................. 2120
Health Checks Service Components.................................................................................................. 2120
Ways to Access the Health Checks Service...................................................................................... 2120
Authentication and Authorization.......................................................................................................2121
Health Checks Service Capabilities and Limits.................................................................................2121
Required IAM Service Policy............................................................................................................ 2121
Moving Health Checks to a Different Compartment.........................................................................2121
Tagging Resources..............................................................................................................................2121
Getting Started With the Health Checks API.................................................................................... 2122
Managing Health Checks....................................................................................................................2126
Health Checks Metrics....................................................................................................................... 2129

Chapter 21 IAM.................................................................................................. 2132


IAM................................................................................................................................................................. 2132
Components of IAM...........................................................................................................................2132
Services You Can Control Access To................................................................................................2133
The Administrators Group and Policy............................................................................................... 2133
Example Scenario............................................................................................................................... 2134
Viewing Resources by Compartment in the Console........................................................................ 2142
The Scope of IAM Resources............................................................................................................ 2142
Creating Automation with Events...................................................................................... 2142
Resource Identifiers............................................................................................................ 2142
Ways to Access Oracle Cloud Infrastructure.....................................................................................2142
Limits on IAM Resources.................................................................................................................. 2143
Getting Started with Policies..............................................................................................................2143
How Policies Work.............................................................................................................................2144
Common Policies................................................................................................................................ 2150
Advanced Policy Features.................................................................................................................. 2169
Policy Syntax...................................................................................................................................... 2172
Policy Reference................................................................................................................................. 2176
User Credentials..................................................................................................................................2379
Federating with Identity Providers..................................................................................................... 2381
User Provisioning for Federated Users.............................................................................................. 2423
Managing User Capabilities for Federated Users.............................................................................. 2427
Calling Services from an Instance..................................................................................................... 2429
Managing Users.................................................................................................................................. 2433
Managing Groups................................................................................................................................2438
Managing Dynamic Groups................................................................................................................2441
Managing Network Sources............................................................................................................... 2446
Managing Compartments....................................................................................................................2450
Managing Regions.............................................................................................................................. 2464
Managing Platform Services Regions................................................................................................ 2466
Managing the Tenancy....................................................................................................................... 2468
Managing Policies...............................................................................................................................2469
Managing User Credentials................................................................................................................ 2475
Managing Authentication Settings..................................................................................................... 2487
Managing Multi-Factor Authentication..............................................................................................2489
Policies for Managing Resources Used with Resource Manager...................................................... 2495
Deprecated IAM Service APIs........................................................................................................... 2496

Chapter 22 Load Balancing............................................................................... 2498


Load Balancing............................................................................................................................................... 2498
How Load Balancing Works.............................................................................................................. 2498
Load Balancing Concepts...................................................................................................................2501
Resource Identifiers............................................................................................................................ 2505
Ways to Access Oracle Cloud Infrastructure.....................................................................................2505
Monitoring Resources.........................................................................................................................2505
Authentication and Authorization.......................................................................................................2506
Limits on Load Balancing Resources................................................................................................ 2506
How Load Balancing Policies Work..................................................................................................2506
Connection Management.................................................................................................................... 2507
HTTP "X-" Headers............................................................................................................................2508
Session Persistence............................................................................................................................. 2509
Managing Load Balancers.................................................................................................................. 2512
Managing Backend Sets..................................................................................................................... 2528
Managing Backend Servers................................................................................................................ 2533
Managing Listeners.............................................................................................................................2540
Managing Cipher Suites..................................................................................................................... 2543
Managing Request Routing................................................................................................................ 2552
Managing Rule Sets............................................................................................................................2570
Managing SSL Certificates.................................................................................................................2580
Editing Health Check Policies............................................................................................................2587
Viewing the State of a Work Request............................................................................................... 2594
Load Balancing Metrics......................................................................................................................2596

Chapter 23 Logging.............................................................................................2602
Logging........................................................................................................................................................... 2602
How Logging Works.......................................................................................................................... 2602
Logging Concepts............................................................................................................................... 2603
Limits on Logging.............................................................................................................................. 2604
Resource Identifiers............................................................................................................................ 2604
Ways to Access Oracle Cloud Infrastructure.....................................................................................2604
Authentication and Authorization.......................................................................................................2604
Managing Logs and Log Groups....................................................................................................... 2605
Service Logs........................................................................................................................................2619
Audit Logs.......................................................................................................................................... 2645
Custom Logs....................................................................................................................................... 2647
Searching Logs....................................................................................................................................2662

Chapter 24 Marketplace.....................................................................................2676
Marketplace..................................................................................................................................................... 2676
Resource Identifiers............................................................................................................................ 2676
Ways to Access Oracle Cloud Infrastructure.....................................................................................2676
Authentication and Authorization.......................................................................................................2676
Working with Listings........................................................................................................................ 2677
Publishing Listings..............................................................................................................................2683
Viewing Accepted Terms of Use Agreements...................................................................................2683

Chapter 25 Monitoring....................................................................................... 2686


Monitoring.......................................................................................................................................................2686
How Monitoring Works......................................................................................................................2686
Monitoring Concepts.......................................................................................................................... 2691
Availability..........................................................................................................................................2695
Supported Services..............................................................................................................................2695
Resource Identifiers............................................................................................................................ 2696
Ways to Access Monitoring............................................................................................................... 2696
Moving Alarms to a Different Compartment.....................................................................................2696
Authentication and Authorization.......................................................................................................2697
Limits on Monitoring......................................................................................................................... 2697
Viewing Default Metric Charts.......................................................................................................... 2697
Building Metric Queries..................................................................................................................... 2726
Publishing Custom Metrics................................................................................................................ 2743
Managing Alarms................................................................................................................................2745
Best Practices for Your Alarms......................................................................................................... 2765
Monitoring Query Language (MQL) Reference................................................................................ 2767

Chapter 26 Networking...................................................................................... 2772


Networking......................................................................................................................................................2772
Features............................................................................................................................................... 2772
Networking Overview.........................................................................................................................2774
Networking Scenarios.........................................................................................................................2782
Virtual Networking Quickstart........................................................................................................... 2846
VCNs and Subnets..............................................................................................................................2847
Access and Security............................................................................................................................2856
Virtual Network Interface Cards (VNICs)......................................................................................... 2881
IP Addresses and DNS in Your VCN................................................................................2890
DHCP Options.................................................................................................................... 2943
Route Tables....................................................................................................................................... 2947
Dynamic Routing Gateways (DRGs)................................................................................................. 2953
VPN Connect...................................................................................................................................... 2958
FastConnect......................................................................................................................................... 3200
Access to the Internet......................................................................................................................... 3271
Access to Your On-Premises Network.............................................................................................. 3280
Private Access.....................................................................................................................................3281
Access to Other VCNs: Peering.........................................................................................................3293
Access to Oracle Cloud Infrastructure Classic.................................................................................. 3317
Access to Microsoft Azure.................................................................................................................3327
Access to Other Clouds with Libreswan............................................................................................3338
Network Performance......................................................................................................................... 3347
Networking Metrics............................................................................................................................ 3348
Troubleshooting...................................................................................................................................3356

Chapter 27 Notifications.....................................................................................3378
Notifications.................................................................................................................................................... 3378
How Notifications Works...................................................................................................................3378
Notifications Concepts........................................................................................................................3378
Flow of Message Publication............................................................................................................. 3381
Availability..........................................................................................................................................3382
Service Comparison for Sending Email Messages............................................................................ 3382
Resource Identifiers............................................................................................................................ 3383
Moving Topics and Subscriptions to a Different Compartment........................................................ 3383
Ways to Access Notifications............................................................................................................ 3383
Authentication and Authorization.......................................................................................................3383
Limits on Notifications.......................................................................................................................3384
Best Practices for Your Subscriptions and Topics.............................................................................3384
Managing Topics and Subscriptions.................................................................................................. 3384
Publishing Messages...........................................................................................................................3393
Scenarios............................................................................................................................................. 3394
Troubleshooting Notifications............................................................................................................ 3413
Notifications Metrics.......................................................................................................................... 3417

Chapter 28 Object Storage.................................................................................3420


Object Storage.................................................................................................................................................3420
Object Storage Resources...................................................................................................................3420
Object Storage Features......................................................................................................................3421
Ways to Access Object Storage......................................................................................................... 3421
Using Object Storage..........................................................................................................................3422
Authentication and Authorization.......................................................................................................3422
Blocking Access to Object Storage Resources from Unauthorized IP Addresses............................. 3423
Object Storage IP Addresses.............................................................................................................. 3423
Limits on Object Storage Resources..................................................................................................3423
Understanding Object Storage Namespaces.......................................................................................3423
Understanding Storage Tiers.............................................................................................................. 3424
Managing Buckets.............................................................................................................................. 3426
Managing Objects............................................................................................................................... 3447
Using Replication................................................................................................................................3468
Using Object Versioning.................................................................................................................... 3474
Using Retention Rules to Preserve Data............................................................................................3486
Using Object Lifecycle Management.................................................................................................3494
Using Multipart Uploads.................................................................................................... 3506
Using Pre-Authenticated Requests..................................................................................... 3510
Copying Objects..................................................................................................................................3517
Using Your Own Keys for Server-Side Encryption.......................................................................... 3521
Amazon S3 Compatibility API.......................................................................................................... 3523
Designating Compartments for the Amazon S3 Compatibility and Swift APIs................................ 3530
Object Storage Metrics....................................................................................................................... 3532
Accessing Object Storage Resources Across Tenancies....................................................................3536
Hadoop Support.................................................................................................................................. 3538

Chapter 29 Registry............................................................................................ 3540


Registry........................................................................................................................................................... 3540
Ways to Access Oracle Cloud Infrastructure.....................................................................................3540
Resource Identifiers............................................................................................................................ 3541
Authentication and Authorization.......................................................................................................3541
Registry Capabilities and Limits........................................................................................................ 3541
Required IAM Service Policy............................................................................................................ 3541
Preparing for Registry........................................................................................................................ 3541
Registry Concepts............................................................................................................................... 3543
Creating a Repository......................................................................................................................... 3546
Pushing Images Using the Docker CLI............................................................................................. 3547
Pulling Images Using the Docker CLI...............................................................................................3550
Pulling Images from Registry during Kubernetes Deployment......................................................... 3552
Viewing Images and Image Details................................................................................................... 3553
Deleting and Undeleting an Image.....................................................................................................3554
Untagging Images............................................................................................................................... 3555
Retaining and Deleting Images Using Retention Policies................................................................. 3556
Deleting a Repository......................................................................................................................... 3559
Moving Repositories Between Compartments................................................................................... 3560
Getting an Auth Token.......................................................................................................................3561
Policies to Control Repository Access...............................................................................................3561

Chapter 30 Resource Manager.......................................................................... 3566


Resource Manager.......................................................................................................................................... 3566
Features............................................................................................................................................... 3566
Overview of Resource Manager.........................................................................................................3568
Getting Started.................................................................................................................................... 3575
Managing Stacks and Jobs................................................................................................................. 3591
Managing Private Templates.............................................................................................................. 3613
Managing Configuration Source Providers........................................................................................ 3616
Using Remote Exec............................................................................................................................ 3621
Templates............................................................................................................................................ 3623
Using the Deploy to Oracle Cloud Button.........................................................................................3627
Terraform Configurations for Resource Manager..............................................................................3629
GitHub and GitLab Connection Issues.............................................................................................. 3659

Chapter 31 Search...............................................................................................3660
Search.............................................................................................................................................................. 3660
Search Categories and Ways to Search Them................................................................................... 3660
Supported Resources...........................................................................................................................3660
Required IAM Permissions................................................................................................................ 3666
Free Text Search.................................................................................................................................3666
Search Language Syntax.................................................................................................................... 3667
Sample Queries................................................................................................................... 3673
Querying Resources, Services, and Documentation.......................................................... 3676
Troubleshooting Search...................................................................................................................... 3681

Chapter 32 Security Zones.................................................................................3684


Security Zones................................................................................................................................................ 3684
Managing Security Zones...................................................................................................................3685
Managing Recipes...............................................................................................................................3686
Security Zone Policies........................................................................................................................ 3687
Security Zone IAM Policies...............................................................................................................3691

Chapter 33 Security............................................................................................ 3692


Oracle Cloud Infrastructure Security Guide.................................................................................................. 3692
Security Overview.............................................................................................................................. 3693
Security Services and Features...........................................................................................................3699
Oracle Cloud Testing Policies............................................................................................................3707
Security Best Practices....................................................................................................................... 3710
Addressing Basic Configuration Issues..............................................................................................3742
Oracle Cloud Security Response to Intel L1TF Vulnerabilities.................................................................... 3762
Oracle Cloud Infrastructure................................................................................................................ 3763
Oracle Cloud Infrastructure Classic and Oracle Platform Service on Oracle Cloud Infrastructure Classic............3763
Oracle Cloud Infrastructure Customer Advisory for L1TF Impact on the Compute Service.............3763
Oracle Cloud Infrastructure Customer Advisory for L1TF Impact on the Database Service.............3767
Oracle Cloud Security Response to Intel Microarchitectural Data Sampling (MDS) Vulnerabilities........... 3768
Oracle Cloud Infrastructure................................................................................................................ 3768
Oracle Cloud Infrastructure Classic and Oracle Platform Service on Oracle Cloud Infrastructure Classic............3768
Oracle Cloud Infrastructure Customer Advisory for MDS Impact on the Compute Service............. 3769
Oracle Cloud Infrastructure Customer Advisory for MDS Impact on the Database Service............. 3772

Chapter 34 Security Advisor..............................................................................3774


Security Advisor............................................................................................................................................3774
Authentication and Authorization.......................................................................................................3774
Regions and Availability Domains.....................................................................................................3775
Limits on Resources........................................................................................................................... 3775
Creating a Secure Bucket................................................................................................................... 3775
Creating a Secure File System........................................................................................................... 3777
Creating a Secure Virtual Machine Instance..................................................................................... 3780
Creating a Secure Block Volume.......................................................................................................3783

Chapter 35 Service Connector Hub.................................................................. 3786


Service Connector Hub...................................................................................................................................3786
How Service Connector Hub Works..................................................................................................3786
Service Connector Hub Concepts.......................................................................................................3786
Flow of Data....................................................................................................................................... 3786
Availability..........................................................................................................................................3787
Resource Identifiers............................................................................................................................ 3787
Ways to Access Service Connector Hub........................................................................................... 3787
Authentication and Authorization.......................................................................................................3787
Limits on Service Connector Hub......................................................................................................3790
Managing Service Connectors............................................................................................................3790
Service Connector Hub Scenarios......................................................................................3798
Troubleshooting Service Connectors..................................................................................3811
Viewing the State of a Work Request............................................................................................... 3813
Example Messages..............................................................................................................................3814
Service Connector Hub Metrics......................................................................................................... 3816
Query Reference for Service Connector Hub.................................................................................... 3824

Chapter 36 Storage Gateway............................................................................. 3826


Storage Gateway............................................................................................................................................. 3826
Availability..........................................................................................................................................3826
Storage Gateway and Oracle Cloud Infrastructure Concepts............................................................ 3826
How Storage Gateway Works............................................................................................................ 3828
Recommended Uses and Workloads..................................................................................................3828
Uses and Workloads Not Supported.................................................................................................. 3828
Security Considerations...................................................................................................................... 3829
Limits on Storage Gateway Resources.............................................................................................. 3830
Storage Gateway Release Notes.........................................................................................................3831
Features of Storage Gateway..............................................................................................................3831
Getting Started With Storage Gateway.............................................................................................. 3833
Configuring the Cache for File Systems............................................................................................3833
Understanding Storage Gateway Performance...................................................................................3839
Interacting With Object Storage.........................................................................................................3841
Installing Storage Gateway.................................................................................................................3843
Logging In to the Storage Gateway Management Console............................................................... 3851
Creating Your First File System........................................................................................................ 3852
Managing File Systems...................................................................................................................... 3856
Managing Storage Gateway................................................................................................................3864
Using Storage Gateway File Management Operations...................................................................... 3867
Monitoring Storage Gateway..............................................................................................................3871
Using Storage Gateway Cloud Sync..................................................................................................3873
Best Practices for Using Storage Gateway........................................................................................ 3878
Troubleshooting Storage Gateway..................................................................................................... 3878
Upgrading Storage Gateway...............................................................................................................3881
Uninstalling Storage Gateway............................................................................................................ 3884
Getting Help with Storage Gateway.................................................................................................. 3885

Chapter 37 Streaming.........................................................................................3888
Streaming.........................................................................................................................................................3888
About Streaming................................................................................................................................. 3888
Streaming Concepts............................................................................................................................ 3889
Streaming Features..............................................................................................................................3890
Ways to Access Streaming.................................................................................................................3891
Using Streaming..................................................................................................................................3891
Authentication and Authorization.......................................................................................................3891
Limits on Streaming Resources..........................................................................................................3892
Managing Streams.............................................................................................................................. 3892
Managing Stream Pools......................................................................................................................3900
Publishing Messages...........................................................................................................................3906
Consuming Messages..........................................................................................................................3909
Using Streaming with Apache Kafka.................................................................................................3916
Using Oracle Cloud Infrastructure SDKs with Streaming.................................................................3925
Accessing Streaming Resources Across Tenancies........................................................................... 3931
Streaming Metrics...............................................................................................................................3932
Troubleshooting Streaming.................................................................................................................3937


Chapter 38 Tagging.............................................................................................3942
Tagging............................................................................................................................................................3942
How Tagging Works.......................................................................................................................... 3942
Tagging Concepts............................................................................................................................... 3943
Authentication and Authorization.......................................................................................................3944
Region Availability.............................................................................................................................3944
Ways to Access Oracle Cloud Infrastructure.....................................................................................3944
Limits on Tags....................................................................................................................................3944
Resources That Can Be Tagged......................................................................................................... 3945
Managing Tags and Tag Namespaces................................................................................................3949
Using Cost-Tracking Tags..................................................................................................................3959
Using Predefined Values.................................................................................................................... 3960
Using Tag Variables........................................................................................................................... 3962
Managing Tag Defaults...................................................................................................................... 3963
Using Tags to Manage Access...........................................................................................................3968
Frequently Asked Questions About Tagging..................................................................................... 3986

Chapter 39 Vault................................................................................................. 3988


Vault...............................................................................................................................................................3988
Key and Secret Management Concepts..............................................................................................3989
Regions and Availability Domains.....................................................................................................3991
Private Access to Vault...................................................................................................................... 3992
Resource Identifiers............................................................................................................................ 3992
Ways to Access Oracle Cloud Infrastructure.....................................................................................3992
Authentication and Authorization.......................................................................................................3992
Limits on Vault Resources................................................................................................................. 3992
Managing Vaults.................................................................................................................................3993
Managing Keys................................................................................................................................... 3998
Assigning Keys................................................................................................................................... 4007
Importing Keys and Key Versions.....................................................................................................4022
Exporting Keys and Key Versions.....................................................................................................4030
Using Keys..........................................................................................................................................4034
Backing Up Vaults and Keys.............................................................................................................4039
Managing Secrets................................................................................................................................4043
Secret Versions and Rotation States.................................................................................................. 4054
Rules for Secrets.................................................................................................................................4055
Vault Metrics...................................................................................................................................... 4056
Troubleshooting the Vault Service.....................................................................................................4059

Chapter 40 VMware Solution............................................................................ 4062


VMware Solution............................................................................................................................................4062
Solution Highlights............................................................................................................................. 4062
SDDC Details......................................................................................................................................4062
Supported Shapes................................................................................................................................4062
Oracle Cloud VMware Solution Architecture....................................................................................4063
About the VMware Software............................................................................................................. 4063
Working with SDDCs.........................................................................................................................4064
Additional Documentation Resources................................................................................................ 4064
Setting Up an Oracle Cloud VMware Solution SDDC......................................................................4065
Configuring Networking Connectivity for an SDDC........................................................................ 4068
Managing Oracle Cloud VMware Solution SDDCs.......................................................................... 4071
Managing Layer 2 Networking Resources for an SDDC.................................................................. 4074


Security Rules for Oracle Cloud VMware Solution SDDCs............................................................. 4079

Chapter 41 Web Application Firewall..............................................................4088


Web Application Firewall.............................................................................................................................. 4088
Features............................................................................................................................................... 4088
Overview of Web Application Firewall.............................................................................................4089
Getting Started with WAF................................................................................................................. 4091
Managing WAF Policies.................................................................................................................... 4098
Origin Management............................................................................................................................ 4101
Bot Management.................................................................................................................................4108
WAF Protection Rules........................................................................................................................4114
Access Control.................................................................................................................................... 4186
Caching Rules..................................................................................................................................... 4192
Threat Intelligence.............................................................................................................................. 4195
Certificates...........................................................................................................................................4197
Logs..................................................................................................................................................... 4201
WAF Metrics...................................................................................................................................... 4207
Layer 7 DDoS Mitigation...................................................................................................................4210
HTTP WAF Headers.......................................................................................................................... 4212

Chapter 42 Developer Tools...............................................................................4214


Developer Resources...................................................................................................................................... 4214
Developer Guide................................................................................................................................. 4214
Glossary...................................................................................................................................... 4440
Release Notes..............................................................................................................................4452


Chapter 1
About Oracle Cloud Infrastructure
Oracle Cloud Infrastructure provides bare metal cloud infrastructure that lets you create networking, compute, and
storage resources for your enterprise workloads.
If you're new to Oracle Cloud Infrastructure and would like to learn some key concepts and take a quick tutorial, see
the Oracle Cloud Infrastructure Getting Started Guide.
If you're ready to create cloud resources such as users, access controls, cloud networks, instances, and storage
volumes, this guide is right for you. It provides the following information about using Oracle Cloud Infrastructure:

Each entry names the service, summarizes what is covered, and points to the chapter in this guide or to the online
documentation.
• Application Migration: Migrating applications from Oracle Cloud Infrastructure Classic to Oracle Cloud
Infrastructure. See the online documentation: Overview of Application Migration.
• Application Performance Monitoring: Monitoring applications and diagnosing performance issues. See the
online documentation: Application Performance Monitoring.
• Archive Storage: Preserving cold data. See Archive Storage on page 488.
• Audit: Logging activity in your cloud. See Audit on page 492.
• Big Data: Provides enterprise-grade Hadoop as a service. See the online documentation: Big Data.
• Block Volume: Adding storage capacity to instances. See Block Volume on page 504.
• Blockchain Platform: Creating permissioned-blockchain networks for trusted transactions. See the online
documentation: Blockchain Platform.
• Cloud Guard: Monitoring, identifying, achieving, and maintaining a strong security posture on Oracle Cloud.
See the online documentation: Cloud Guard.
• Compute: Launching compute instances and connecting to them by using an SSH key pair. See Compute on
page 594.
• Container Engine for Kubernetes: Defining and creating Kubernetes clusters to enable the deployment, scaling,
and management of containerized applications. See Container Engine for Kubernetes on page 840.
• Data Catalog: Find, govern, and track Oracle Cloud data assets. See the online documentation: Data Catalog.
• Data Flow: Creating, managing, and running Spark applications in a serverless environment. See the online
documentation: Data Flow.
• Data Integration: Design data flows using a visually rich interface to extract, transform, and load data from data
sources to targets. See the online documentation: Data Integration.
• Data Science: Build, train, deploy, and manage machine learning models on Oracle Cloud. See the online
documentation: Data Science Overview.
• Data Transfer: Migrating large volumes of data. See Data Transfer on page 968.
• Database: Creating and managing database systems and Oracle Databases. See Database on page 1140.
• Database Management: Comprehensive database performance diagnostics and management capabilities to
monitor and manage Oracle databases. See the online documentation: Database Management.
• Edge Services: Encompasses several services that allow you to manage, secure, and maintain your domains and
endpoints. See Edge Services.
• Email Delivery: Sending large volume email. See Email Delivery on page 1740.
• Events: Creating automation in your tenancy. See Events on page 1788.
• File Storage: Managing shared file systems, mount targets, and snapshots. See File Storage on page 1928.
• Functions: Building and deploying applications and functions. See Functions on page 2040.
• GoldenGate: Moving data in real-time from one or more source data management systems to target databases on
Oracle Cloud. See the online documentation: GoldenGate.
• IAM: Setting up administrators, users, and groups and specifying their permissions to access cloud resources.
See IAM on page 2132.
• Vault: Creating and managing encryption keys and key vaults to control the encryption of your data. See Vault
on page 3988.
• Load Balancing: Setting up load balancers, listeners, backend sets, certificate bundles, and managing health
check policies. See Load Balancing on page 2498.
• Logging Analytics: Rich analysis capabilities based on extensive parsing and enrichment of on-premises or
cloud resource logs. See the online documentation: Logging Analytics.
• Management Agent: Providing low latency interactive communication and data collection between Oracle
Cloud Infrastructure and any other targets. See the online documentation: Management Agent Overview.
• Monitoring: Querying metrics and managing alarms to monitor the health, capacity, and performance of your
cloud resources. See Monitoring on page 2686.
• MySQL Database: Creating and managing DB Systems for Oracle MySQL. See the online documentation:
MySQL Database.
• Networking: Setting up cloud networks, subnets, gateways, route tables, and security lists. See Networking on
page 2772.
• NoSQL Database Cloud: Provisioning for JSON, Table, and Key-Value datatypes on an on-demand throughput
and storage basis. See the online documentation: NoSQL Database.
• Notifications: Setting up topics and subscriptions, and publishing messages. See Notifications on page 3378.
• Object Storage: Creating and managing buckets to store objects, and uploading and accessing data files. See
Object Storage on page 3420.
• Operations Insights: Providing 360-degree insight into the resource utilization and capacity of Oracle
Autonomous Databases. See the online documentation: Operations Insights.
• Registry: Storing, sharing, and managing development artifacts like Docker images in an Oracle-managed
registry. See Registry on page 3540.
• Resource Manager: Using Terraform to install, configure, and manage Oracle Cloud Infrastructure resources
through the "infrastructure-as-code" model. See Resource Manager on page 3566.
• Search: Searching for Oracle Cloud Infrastructure resources using free text search or advanced queries. See
Search on page 3660.
• Service Connector Hub: Describing, executing, and monitoring interactions when moving data between Oracle
Cloud Infrastructure services. See Service Connector Hub on page 3786.
• Tagging: Adding metadata tags to your resources. See Tagging on page 3942.

For a description of the terminology used throughout this guide, see the Glossary.

Prefer Online Help?


The information in this guide and the Getting Started Guide is also available in the online help at
https://docs.cloud.oracle.com/iaas/Content/home.htm.

Need API Documentation?


For general information, see REST APIs on page 4409. For links to the detailed service API documentation, see the
online help at https://docs.cloud.oracle.com/iaas/Content/home.htm.


Chapter 2
Welcome to Oracle Cloud Infrastructure
This chapter provides brief descriptions of Oracle Cloud Infrastructure features and resources.

Introduction
Oracle Cloud Infrastructure is a set of complementary cloud services that enable you to build and run a wide range
of applications and services in a highly available hosted environment. Oracle Cloud Infrastructure offers high-
performance compute capabilities (as physical hardware instances) and storage capacity in a flexible overlay virtual
network that is securely accessible from your on-premises network.

About the Services


Analytics Cloud empowers business analysts and consumers with modern, AI-powered, self-service analytics
capabilities for data preparation, visualization, enterprise reporting, augmented analysis, and natural language
processing.
API Gateway enables you to create governed HTTP/S interfaces for other services, including Oracle Functions,
Oracle Cloud Infrastructure Container Engine for Kubernetes, and Oracle Cloud Infrastructure Registry. API Gateway
also provides policy enforcement such as authentication and rate-limiting to HTTP/S endpoints.
Application Migration simplifies the migration of applications from Oracle Cloud Infrastructure Classic to Oracle
Cloud Infrastructure.
Application Performance Monitoring provides a comprehensive set of features to monitor applications and
diagnose performance issues.
Archive Storage lets you preserve cold data in a cost-efficient manner.
Audit provides visibility into activities related to your Oracle Cloud Infrastructure resources and tenancy. Audit log
events can be used for security audits, to track usage of and changes to Oracle Cloud Infrastructure resources, and to
help ensure compliance with standards or regulations.
Big Data provides enterprise-grade Hadoop as a service, with end-to-end security, high performance, and ease of
management and upgradeability.
Block Volume provides high-performance network storage capacity that supports a broad range of I/O intensive
workloads. You can use block volumes to expand the storage capacity of your compute instances, to provide durable
and persistent data storage that can be migrated across compute instances, and to host large databases.
Blockchain Platform Cloud enables creation of managed, permissioned-blockchain networks for secure, real-time
data sharing and trusted transactions among business partners.
Cloud Advisor finds potential inefficiencies in your tenancy and offers guided solutions that explain how to address
them. The recommendations help you maximize cost savings and improve the security of your tenancy.
Cloud Guard is a cloud-native service that helps customers monitor, identify, achieve, and maintain a strong
security posture on Oracle Cloud. Use the service to examine your Oracle Cloud Infrastructure resources for security
weakness related to configuration, and your Oracle Cloud Infrastructure operators and users for risky activities. Upon
detection, Cloud Guard can suggest, assist, or take corrective actions, based on your configuration.

Use Compute to provision and manage compute instances. You can launch an Oracle bare metal compute resource
in minutes. Provision instances as needed to deploy and run your applications, just as you would in your on-premises
data center. Managed virtual machine (VM) instances are also available for workloads that don't require dedicated
physical servers or the high-performance of bare metal instances.
Container Engine for Kubernetes helps you define and create Kubernetes clusters to enable the deployment,
scaling, and management of containerized applications.
Content and Experience is a cloud-based content hub to drive omni-channel content management and accelerate
experience delivery. It offers powerful collaboration and workflow management capabilities to streamline the creation
and delivery of content and improve customer and employee engagement.
Data Catalog is a collaborative metadata management solution that lets you be more insightful about the data you
have in Oracle Cloud and beyond. With Data Catalog, data consumers can easily find, understand, govern, and track
Oracle Cloud data assets.
Data Flow is a fully managed service with a rich user interface to allow developers and data scientists to create,
edit, and run Apache Spark applications at any scale without the need for clusters, an operations team, or highly
specialized Spark knowledge. As a fully managed service, there is no infrastructure to deploy or manage. It is entirely
driven by REST APIs, giving easy integration with applications or workflows.
Data Integration is a fully managed service that helps data engineers and ETL developers with common extract,
load, and transform (ETL) tasks such as ingesting data from a variety of data assets, cleansing, transforming, and
reshaping that data, and then efficiently loading it to target data assets.
Data Safe is a fully-integrated Cloud service focused on the security of your data. It provides a complete and
integrated set of features for protecting sensitive and regulated data in Oracle Cloud databases. Features include
Security Assessment, User Assessment, Data Discovery, Data Masking, and Activity Auditing.
Data Science is a platform for data scientists to build, train, and manage machine learning models on Oracle Cloud
Infrastructure, using Python and open source machine learning libraries. Teams of data scientists can organize their
work and access data and computing resources in this collaborative environment.
Data Transfer lets you migrate large volumes of data to Oracle Cloud Infrastructure.
Database lets you easily build, scale, and secure Oracle databases with license-included pricing in your Oracle
Cloud Infrastructure cloud. You create databases on DB Systems, which are bare metal servers with local NVMe
flash storage. You launch a DB System the same way you launch a bare metal instance; you just add some additional
configuration parameters. You can then use your existing tools, Recovery Manager (RMAN), and the database CLI
to manage your databases in the cloud the same way you manage them on-premises. To get started with the Database,
see "Overview of the Database Service," in the Oracle Cloud Infrastructure User Guide.
Database Management provides comprehensive database performance diagnostics and management capabilities to
monitor and manage Oracle databases.
Digital Assistant is a platform that allows you to create and deploy digital assistants, which are AI-driven interfaces
that help users accomplish a variety of tasks in natural language conversations.
Edge Services encompasses several services that allow you to manage, secure, and maintain your domains and
endpoints.
Email Delivery is an email sending service that provides a fast and reliable managed solution for sending high-
volume emails that need to reach your recipients. Email Delivery provides the tools necessary to send application-
generated email for mission-critical communications such as receipts, fraud detection alerts, multi-factor identity
verification, and password resets.
The Events service helps to create automation in your tenancy.
File Storage allows you to create a scalable, distributed, enterprise-grade network file system. File Storage supports
NFSv3 with NLM for full POSIX semantics, snapshots capabilities, and data at-rest encryption.
The Functions service helps you build and deploy applications and functions.
Fusion Analytics Warehouse empowers you with industry-leading, AI-powered, self-service analytics capabilities
for data preparation, visualization, enterprise reporting, augmented analysis, and natural language processing.

GoldenGate is a fully managed service that helps data engineers move data in real-time, at scale, from one or more
data management systems to Oracle Cloud databases. Design, run, orchestrate, and monitor data replication tasks in a
single interface without having to allocate or manage any compute environments.
You can control access to Oracle Cloud Infrastructure using the IAM service. Create and manage compartments, users,
groups, and the policies that define permissions on resources.
Oracle Integration is a fully managed, preconfigured environment where you can integrate your applications,
automate processes, gain insight into your business processes, create visual applications, and support B2B
integrations. Use integrations to design, monitor, and manage connections between your applications, selecting from
our portfolio of over 60 adapters to connect with Oracle and third-party applications.
Load Balancing allows you to create a highly available load balancer within your virtual cloud network (VCN) so
that you can distribute internet traffic to your compute instances within the VCN.
Logging Analytics is a unified, integrated cloud solution that enables users to monitor, aggregate, index, analyze,
search, explore, and correlate all log data from their applications and system infrastructure.
Management Agent is a service that provides low latency interactive communication and data collection between
Oracle Cloud Infrastructure and any other targets.
Use Monitoring to query metrics and manage alarms. Metrics and alarms help monitor the health, capacity, and
performance of your cloud resources.
MySQL Database is a fully managed database service that enables organizations to deploy cloud-native applications
using the world’s most popular open source database. It is 100% compatible with On-Premises MySQL for a
seamless transition to public or hybrid cloud. Leverage your existing Oracle investments and easily integrate MySQL
Database Service with Oracle technologies.
Use Networking to create and manage the network components for your cloud resources. You can configure your
virtual cloud network (VCN) with access rules and gateways to support routing of public and private internet traffic.
NoSQL Database Cloud is a high performance data store which is distributed, sharded for horizontal scalability, and
highly available. It is optimized for applications requiring predictable low latency (such as fraud detection, gaming,
and personalized user experience), very high throughput, or extreme ingestion rates (such as event processing, IoT,
and sensor data).
Use Notifications to set up topics and subscriptions for broadcasting messages. Topics are used with alarms.
Operations Insights provides 360-degree insight into the resource utilization and capacity of Oracle Autonomous
Databases. You can easily analyze CPU and storage resources, forecast capacity issues, and proactively identify SQL
performance issues across a fleet of Autonomous Databases.
OS Management helps you keep operating platforms in your Compute instances secure and up to date with the latest
patches and updates from the respective vendor.
Registry helps you store, share, and manage development artifacts like Docker images in an Oracle-managed
registry.
Resource Manager helps you install, configure, and manage resources using the "infrastructure-as-code" model.
Roving Edge Infrastructure is a cloud-integrated service that puts fundamental Oracle Cloud Infrastructure services
where data is generated and consumed regardless of network connectivity.
Search lets you find resources in your tenancy without requiring you to navigate through different services and
compartments.
Security Zones let you be confident that your resources comply with Oracle security principles. If any resource
operation violates a security zone policy, then the operation is denied.
Service Connector Hub is a cloud message bus platform that offers a single pane of glass for describing, executing,
and monitoring interactions when moving data between Oracle Cloud Infrastructure services.
Storage Gateway is a cloud storage gateway that lets you connect your on-premises applications with Oracle Cloud
Infrastructure. Applications that can write data to an NFS target can also write data to Oracle Cloud Infrastructure
Object Storage, without requiring application modification to adopt the REST APIs.

Use Streaming to ingest, consume, and process high-volume data streams in real-time.
The Support Management service allows you to create, view, and manage support tickets.
The Tagging service lets you use metadata tags to organize and manage the resources in your tenancy.
The Vault service helps you centrally manage the encryption keys that protect your data and the secret credentials
that you use for access to resources.
Use Oracle Cloud VMware Solution to create and manage VMware enabled software-defined data centers (SDDCs)
in Oracle Cloud Infrastructure. To get started with the VMware solution, see "Oracle Cloud VMware Solution," in the
Oracle Cloud Infrastructure User Guide.
WAF helps you make your endpoints more secure by monitoring and filtering out potentially malicious traffic.

Accessing Oracle Cloud Infrastructure


You can create and manage resources in the following ways:
• Oracle Cloud Infrastructure Console: The Console is an intuitive, graphical interface that lets you create and
manage your instances, cloud networks, and storage volumes, as well as your users and permissions. See Using
the Console on page 42.
• Oracle Cloud Infrastructure APIs: The Oracle Cloud Infrastructure APIs are typical REST APIs that use HTTPS
requests and responses. See "Using the API" in the Oracle Cloud Infrastructure User Guide.
• SDKs: Several Software Development Kits are available for easy integration with the Oracle Cloud Infrastructure
APIs, including SDKs for Java, Ruby, and Python (a Python example follows this list). For more information, see
"Developer Tools" in the Oracle Cloud Infrastructure User Guide.
• Command Line Interface (CLI): You can use a command line interface with some services. For more
information, see "Developer Tools" in the Oracle Cloud Infrastructure User Guide.
• Terraform: Oracle supports Terraform. Terraform is "infrastructure-as-code" software that allows you to define
your infrastructure resources in files that you can persist, version, and share. For more information, see Getting
Started on page 103.
• Ansible: Oracle supports the use of Ansible for cloud infrastructure provisioning, orchestration, and configuration
management. Ansible allows you to automate configuring and provisioning your cloud infrastructure, deploying
and updating software assets, and orchestrating your complex operational processes. For more information, see
Getting Started on page 4372.
• Resource Manager: Resource Manager is an Oracle Cloud Infrastructure service that allows you to automate the
process of provisioning your Oracle Cloud Infrastructure resources. Using Terraform, Resource Manager helps
you install, configure, and manage resources through the "infrastructure-as-code" model. For more information,
see Overview of Resource Manager on page 3568.
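For example, with the Python SDK installed (the oci package) and an API signing key configured in a
~/.oci/config file, a short script can authenticate and call the IAM service. This is only a minimal sketch under
those assumptions, not the documented setup procedure; see "Developer Tools" in the Oracle Cloud Infrastructure
User Guide for the official instructions.

    # Minimal sketch using the OCI Python SDK ("pip install oci").
    # Assumes an API signing key pair and a ~/.oci/config file already exist,
    # as described under "Security Credentials".
    import oci

    # Load the DEFAULT profile (tenancy, user, key file, fingerprint, region).
    config = oci.config.from_file()

    # The IdentityClient wraps the IAM REST API over HTTPS.
    identity = oci.identity.IdentityClient(config)

    # List all regions offered by Oracle Cloud Infrastructure.
    for region in identity.list_regions().data:
        print(region.key, region.name)

    # List the availability domains visible to your tenancy
    # (the tenancy OCID acts as the root compartment).
    for ad in identity.list_availability_domains(config["tenancy"]).data:
        print(ad.name)

The same operations are available through the Console, the CLI, the other SDKs, and the raw REST APIs; only the
authentication setup differs.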
For new capabilities, Oracle targets the release of relevant APIs, as well as CLI, SDKs, and Console updates, at the
time of general availability (GA). We also target the release of an updated Terraform provider within 30 days of GA.

How Do I Get Started?


• Sign up for Oracle Cloud Infrastructure
• Understand Oracle Cloud Infrastructure concepts and terminology
• Follow guided tutorials to:
• Launch your first instance (Linux or Windows)
• Add users
• Put data into object storage
• Create a load balancer
• Get started with APIs, see "Using the API" in the Oracle Cloud Infrastructure User Guide
• FAQs

Key Concepts and Terminology


Understand the following concepts and terminology to help you get started with Oracle Cloud Infrastructure.
BARE METAL HOST
Oracle Cloud Infrastructure provides you control of the physical host (“bare metal”) machine. Bare metal
compute instances run directly on bare metal servers without a hypervisor. When you provision a bare metal
compute instance, you maintain sole control of the physical CPU, memory, and network interface card
(NIC). You can configure and utilize the full capabilities of each physical machine as if it were hardware
running in your own data center. You do not share the physical machine with any other tenants.
REGIONS AND AVAILABILITY DOMAINS
Oracle Cloud Infrastructure is physically hosted in regions and availability domains. A region is a localized
geographic area, and an availability domain is one or more data centers located within a region. A region
is composed of one or more availability domains. Oracle Cloud Infrastructure resources are either region-
specific, such as a virtual cloud network, or availability domain-specific, such as a compute instance.
Availability domains are isolated from each other, fault tolerant, and very unlikely to fail simultaneously
or be impacted by the failure of another availability domain. When you configure your cloud services, use
multiple availability domains to ensure high availability and to protect against resource failure. Be aware
that some resources must be created within the same availability domain, such as an instance and the storage
volume attached to it.
For more details, see "Regions and Availability Domains" in the Oracle Cloud Infrastructure User Guide.
REALM
A realm is a logical collection of regions. Realms are isolated from each other and do not share any data.
Your tenancy exists in a single realm and has access to the regions that belong to that realm. Oracle Cloud
Infrastructure currently offers a realm for commercial regions and two realms for government cloud regions:
FedRAMP authorized and IL5 authorized.
CONSOLE
The simple and intuitive web-based user interface you can use to access and manage Oracle Cloud
Infrastructure.
TENANCY
When you sign up for Oracle Cloud Infrastructure, Oracle creates a tenancy for your company, which is
a secure and isolated partition within Oracle Cloud Infrastructure where you can create, organize, and
administer your cloud resources.
COMPARTMENTS
Compartments allow you to organize and control access to your cloud resources. A compartment is a
collection of related resources (such as instances, virtual cloud networks, block volumes) that can be
accessed only by certain groups that have been given permission by an administrator. A compartment should
be thought of as a logical group and not a physical container. When you begin working with resources in the
Console, the compartment acts as a filter for what you are viewing.
When you sign up for Oracle Cloud Infrastructure, Oracle creates your tenancy, which is the root
compartment that holds all your cloud resources. You then create additional compartments within
the tenancy (root compartment) and corresponding policies to control access to the resources in each
compartment. When you create a cloud resource such as an instance, block volume, or cloud network, you
must specify to which compartment you want the resource to belong.
Ultimately, the goal is to ensure that each person has access to only the resources they need.
SECURITY ZONES
A security zone is associated with a compartment. When you create and update cloud resources in a security
zone, Oracle Cloud Infrastructure validates these operations against security zone policies. If any policy is
violated, then the operation is denied. Security zones let you be confident that your resources comply with
Oracle security principles.
VIRTUAL CLOUD NETWORK (VCN)
A virtual cloud network is a virtual version of a traditional network—including subnets, route tables, and
gateways—on which your instances run. A cloud network resides within a single region but includes all
the region's availability domains. Each subnet you define in the cloud network can either be in a single
availability domain or span all the availability domains in the region (recommended). You need to set up
at least one cloud network before you can launch instances. You can configure the cloud network with an
optional internet gateway to handle public traffic, and an optional IPSec VPN connection or FastConnect to
securely extend your on-premises network.
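To make these networking concepts concrete, the following sketch uses the OCI Python SDK to create a VCN and
a regional subnet. The compartment OCID, CIDR blocks, and display names are placeholder assumptions, not values
taken from this guide.

    # Hypothetical sketch: creating a VCN and a regional subnet with the
    # OCI Python SDK. The compartment OCID, CIDR blocks, and names are
    # placeholders.
    import oci

    config = oci.config.from_file()
    network = oci.core.VirtualNetworkClient(config)

    vcn = network.create_vcn(
        oci.core.models.CreateVcnDetails(
            compartment_id="ocid1.compartment.oc1..exampleuniqueID",
            cidr_block="10.0.0.0/16",
            display_name="example-vcn",
        )
    ).data

    # Omitting availability_domain makes the subnet regional, so it spans
    # all availability domains in the region (the recommended approach).
    subnet = network.create_subnet(
        oci.core.models.CreateSubnetDetails(
            compartment_id=vcn.compartment_id,
            vcn_id=vcn.id,
            cidr_block="10.0.1.0/24",
            display_name="example-subnet",
        )
    ).data
    print(vcn.id, subnet.id)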
INSTANCE
An instance is a compute host running in the cloud. An Oracle Cloud Infrastructure compute instance allows
you to utilize hosted physical hardware, as opposed to the traditional software-based virtual machines,
ensuring a high level of security and performance.
IMAGE
The image is a template of a virtual hard drive that defines the operating system and other software for
an instance, for example, Oracle Linux. When you launch an instance, you can define its characteristics
by choosing its image. Oracle provides a set of images you can use. You can also save an image from an
instance that you have already configured to use as a template to launch more instances with the same
software and customizations.
SHAPE
In Compute, the shape specifies the number of CPUs and amount of memory allocated to the instance. Oracle
Cloud Infrastructure offers shapes to fit various computing requirements. See "Compute Shapes" in the
Oracle Cloud Infrastructure User Guide.
In Load Balancing, the shape determines the load balancer's total pre-provisioned maximum capacity
(bandwidth) for ingress plus egress traffic. Available shapes include 100 Mbps, 400 Mbps, and 8000 Mbps.
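The following sketch shows how instance, image, and shape fit together in a launch request made with the OCI
Python SDK. Every OCID, the availability domain, the shape name, and the public key path are placeholder
assumptions.

    # Hypothetical sketch: launching a compute instance with the OCI Python SDK.
    # All OCIDs, the shape name, and the SSH key path are placeholders.
    import oci

    config = oci.config.from_file()
    compute = oci.core.ComputeClient(config)

    with open("/path/to/id_rsa.pub") as f:
        ssh_public_key = f.read()

    details = oci.core.models.LaunchInstanceDetails(
        availability_domain="Uocm:PHX-AD-1",
        compartment_id="ocid1.compartment.oc1..exampleuniqueID",
        shape="VM.Standard2.1",  # the shape: CPUs and memory for the instance
        display_name="example-instance",
        # The image defines the operating system and software on the boot volume.
        source_details=oci.core.models.InstanceSourceViaImageDetails(
            image_id="ocid1.image.oc1.phx.exampleuniqueID"
        ),
        # The VNIC attaches the instance to a subnet in your VCN.
        create_vnic_details=oci.core.models.CreateVnicDetails(
            subnet_id="ocid1.subnet.oc1.phx.exampleuniqueID"
        ),
        # The public key is written to the instance's authorized keys file.
        metadata={"ssh_authorized_keys": ssh_public_key},
    )
    instance = compute.launch_instance(details).data
    print(instance.id, instance.lifecycle_state)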
KEY PAIR
A key pair is an authentication mechanism used by Oracle Cloud Infrastructure. A key pair consists of a
private key file and a public key file. You upload your public key to Oracle Cloud Infrastructure. You keep
the private key securely on your computer. The private key is private to you, like a password.
Key pairs can be generated according to different specifications. Oracle Cloud Infrastructure uses two types
of key pairs for specific purposes:
• Instance SSH Key pair: This key pair is used to establish secure shell (SSH) connection to an instance.
When you provision an instance, you provide the public key, which is saved to the instance's authorized
key file. To log on to the instance, you provide your private key, which is verified with the public key.
• API signing key pair: This key pair is in PEM format and is used to authenticate you when submitting
API requests. Only users who will be accessing Oracle Cloud Infrastructure via the API need this key
pair.
For details about generating and uploading key pairs, see "Security Credentials" in the Oracle Cloud Infrastructure
User Guide.
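Both key pair types are ordinary public/private key pairs. As a rough illustration only (the documented procedures
use ssh-keygen, OpenSSL, or the Console), the following sketch generates an RSA key pair in PEM format with the
third-party cryptography package.

    # Illustration only: generating an RSA key pair in PEM format with the
    # third-party "cryptography" package. This is not the documented OCI
    # procedure; it simply shows what a PEM key pair is.
    from cryptography.hazmat.backends import default_backend
    from cryptography.hazmat.primitives import serialization
    from cryptography.hazmat.primitives.asymmetric import rsa

    private_key = rsa.generate_private_key(
        public_exponent=65537, key_size=2048, backend=default_backend()
    )

    # The private key stays on your computer, like a password.
    private_pem = private_key.private_bytes(
        encoding=serialization.Encoding.PEM,
        format=serialization.PrivateFormat.PKCS8,
        encryption_algorithm=serialization.NoEncryption(),
    )

    # The public key is the part you upload, for example as an API signing
    # key for your user.
    public_pem = private_key.public_key().public_bytes(
        encoding=serialization.Encoding.PEM,
        format=serialization.PublicFormat.SubjectPublicKeyInfo,
    )

    print(public_pem.decode())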
BLOCK VOLUME
A block volume is a virtual disk that provides persistent block storage space for Oracle Cloud Infrastructure
instances. Use a block volume just as you would a physical hard drive on your computer, for example, to
store data and applications. You can detach a volume from one instance and attach it to another instance
without loss of data.
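A sketch of this lifecycle with the OCI Python SDK follows. The OCIDs, availability domain, and size are placeholder
assumptions, and the attachment type shown (paravirtualized) is one of the supported options.

    # Hypothetical sketch: creating a block volume and attaching it to an
    # existing instance with the OCI Python SDK. All OCIDs are placeholders.
    import oci

    config = oci.config.from_file()
    blockstorage = oci.core.BlockstorageClient(config)
    compute = oci.core.ComputeClient(config)

    volume = blockstorage.create_volume(
        oci.core.models.CreateVolumeDetails(
            compartment_id="ocid1.compartment.oc1..exampleuniqueID",
            availability_domain="Uocm:PHX-AD-1",  # must match the instance's AD
            display_name="example-volume",
            size_in_gbs=100,
        )
    ).data

    # Wait for the volume to become AVAILABLE before attaching it.
    oci.wait_until(blockstorage, blockstorage.get_volume(volume.id),
                   "lifecycle_state", "AVAILABLE")

    attachment = compute.attach_volume(
        oci.core.models.AttachParavirtualizedVolumeDetails(
            instance_id="ocid1.instance.oc1.phx.exampleuniqueID",
            volume_id=volume.id,
        )
    ).data
    print(attachment.lifecycle_state)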

OBJECT STORAGE
Object Storage is a storage architecture that allows you to store and manage data as objects. Data files can
be of any type and up to 50 GB in size. Once you upload data to Object Storage, it can be accessed from
anywhere. Use Object Storage when you want to store a very large amount of data that does not change
very frequently. Some typical use cases for Object Storage include data backup, file sharing, and storing
unstructured data like logs and sensor-generated data.
BUCKET
A bucket is a logical container used by Object Storage for storing your data and files. A bucket can contain
an unlimited number of objects.
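As a hypothetical sketch of both concepts with the OCI Python SDK (the compartment OCID, bucket name, and file
names are placeholders):

    # Hypothetical sketch: creating a bucket and uploading an object with the
    # OCI Python SDK. The compartment OCID and names are placeholders.
    import oci

    config = oci.config.from_file()
    object_storage = oci.object_storage.ObjectStorageClient(config)

    # Every tenancy has a single Object Storage namespace.
    namespace = object_storage.get_namespace().data

    object_storage.create_bucket(
        namespace,
        oci.object_storage.models.CreateBucketDetails(
            name="example-bucket",
            compartment_id="ocid1.compartment.oc1..exampleuniqueID",
        ),
    )

    # Upload a file as an object; the body can be bytes or a file-like object.
    with open("backup.tar.gz", "rb") as f:
        object_storage.put_object(namespace, "example-bucket",
                                  "backups/backup.tar.gz", f)

    # Download the object again.
    response = object_storage.get_object(namespace, "example-bucket",
                                         "backups/backup.tar.gz")
    data = response.data.content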
ORACLE CLOUD IDENTIFIER (OCID)
Every Oracle Cloud Infrastructure resource has an Oracle-assigned unique ID called an Oracle Cloud
Identifier (OCID). This ID is included as part of the resource's information in both the Console and API.
For details about the syntax of an OCID, see "Resource Identifiers" in the Oracle Cloud Infrastructure User
Guide.

Request and Manage Free Oracle Cloud Promotions


You can sign up for a 30-day Oracle Cloud promotion and receive free credits. This promotion applies to eligible
Oracle Cloud Infrastructure services.
Estimate Your Monthly Cost on page 32
Sign Up for the Free Oracle Cloud Promotion on page 34
Monitor the Credit Balance for Your Free Oracle Cloud Promotion on page 36
What Happens When the Promotion Expires on page 36
Upgrade Your Free Oracle Cloud Promotion on page 36

Estimate Your Monthly Cost


Oracle provides you with a cost estimator to help you figure out your monthly usage and costs for Oracle’s
Infrastructure and Platform Cloud (Oracle IaaS/PaaS) services before you commit to an amount.
The cost estimate is automatically calculated based on your choice of the Oracle Cloud service category, its service
configurations, and the usage of each resource in the configuration.
You can start using Oracle Cloud with no up-front cost. Oracle will bill you for the services and resources you use.
For the purpose of planning, use the results from the Cost Estimator to estimate how much you are likely to be
charged for usage each month.
To use the Cost Estimator:
1. Go to the Cost Estimator page on the Oracle Cloud website.
2. Select a category of cloud services, such as Infrastructure or Data Management, from the list on the left side of the
page.
The cost estimator displays a set of packages, which represent the services and resources that are typically
required to support the selected service category. To see all the packages of the selected service category, scroll to
the right.
3. Select one of the packages, as a starting point for your estimate. The estimator begins calculating the cost for the
selected service and package.
4. In the Configuration Options section, expand each service, use the sliders, or select from the drop-down lists to
adjust the values to match your project’s or organization’s needs.
As soon as you adjust the amount of resources, the cost estimate changes.
If you have existing software licenses for services such as Oracle Database or Oracle Middleware, you can use
them to estimate your cost for cloud services. Simply select the BYOL (Bring Your Own License) option from the
service packages or under the Configuration Options section. For example, if you have an existing license for
Autonomous Data Warehouse, then select the Autonomous Data Warehouse Cloud - BYOL package from the
service packages set. If you have an Oracle Database Enterprise Edition license, then select the Enterprise Edition
Extreme Performance BYOL option from the Edition list under Configuration Options. The cost immediately
reflects the BYOL pricing, which is typically lower than the normal cloud service costs.
You can experiment with different configuration options until you balance the cost with your organization’s
needs.
5. Review your estimates, and then click Start for Free to sign up for the Oracle Cloud Free Tier and get free
credits. You can upgrade your free promotion to a paid account at any time during the promotion period.
Example: Estimating Your Monthly Cost for Oracle Database Cloud Service
In this example, see how you can estimate your monthly cost for Oracle Database Cloud Service based on your
requirements.
To estimate your costs:
1. In the Cost Estimator page, select the Data Management category from the list on the left side of the screen.
2. From the list of configurations displayed, select Oracle Database Cloud Service and click Add.
The cost estimator displays a set of packages, which represent the services and resources that are typically
required to support the selected service category. To see all the packages of the selected service category, scroll to
the right.
3. In the Configuration Options section of the page, expand Database.
4. Expand each of the resources under Database, such as Number of Instances, Average Days Usage per Month, or
Average Hours Usage per Day. You’ll see some default values as you expand each item.
5. Increase the number of instances to 2: One for development and one for testing.
6. Use the slider to adjust values for Average Days Usage per Month or Average Hours Usage per Day, as
needed. By default, they are set to 31 days (in a month) and 24 hours (per day) of usage. If you intend to use the
Database service for a shorter period, then adjust the values accordingly.
7. Select Enterprise Edition High Performance - General Purpose from the Edition drop-down menu to see how
this affects the monthly estimate.
8. You can also remove certain sections by clicking the trash icon next to them. For example, if you don’t need
Database Backup service, you can remove it by clicking the trash icon.
9. When you have estimated all your requirements, select the payment plan, and click Buy Now.
You can also add other configurations in the Data Management category such as Oracle Database Exadata Cloud
Service or Oracle Big Data Cloud Service to estimate your total cost. Or, you can add other service categories, such as
Infrastructure or Integration and their configurations, as needed, to get your total usage cost estimate.
Save and Share Your Cost Estimator Results
When you are satisfied with your monthly usage estimates, you can save them either by downloading them as a PDF
file or exporting them to an .oce file. The .oce file is only used to export and import your saved estimates in the
Cost Estimator. This is useful when you want to share and review the quotes with your management, finance, or other
departments to get their approval.

Save Your Cost Estimates


To save your cost estimate:

• In the Cost Estimator page, select from the following options:


• Load/Save: Click this button to save your service configurations in your browser. Provide a name for your
configuration and click Save. Note that this action is browser specific. You can’t use a configuration that you
saved on Google Chrome in Firefox, or vice versa.
• Save as PDF: Click this button to save the estimates as a PDF file. This is useful for presenting the estimates
to others. The PDF is read-only.
• Export: Click this button to export the estimates to an .oce file. This is useful if you need to share the
estimates with reviewers or might need to make changes to them later. The reviewers can then import the
.oce file to their own Cost Estimator pages and make changes as needed.

Import or Load Your Saved Estimates


If you want to make changes to your saved estimates, or if you’re reviewing them, you can import them to the Cost
Estimator. You can also load previously saved service configurations on your browser to continue with your estimate.
To import or load your saved cost estimate, use any of the following options:
• Load/Save: Click this button to load your saved service configurations. Note that this action is browser specific.
1. Click Select Saved Configuration.
2. Select a saved configuration and then click Load.
• Import: Click this button to import any previously exported estimates. Ensure that you have exported the
estimates to an .oce file.
• Browse for the .oce file and click Open.
The saved estimates appear in the Cost Estimator page. You can then make changes as required.

Sign Up for the Free Oracle Cloud Promotion


Signing up for Oracle Cloud Free Tier is easy.
1. Visit https://signup.oraclecloud.com.

2. Provide information for your Oracle Cloud account.


• Select your country. For some countries, such as Russia, you must manually accept the Terms of Use by
selecting the check boxes when prompted.
• Enter your name and optionally enter the name of your company.
• Provide a valid email address.
You'll use this email ID later to sign in to your Oracle Cloud account. Instructions about signing in to your
new cloud account are sent to this address.
Your email ID is also used to check if you are eligible for any special offers. If you are, then you'll be
prompted to select a special offer from a list of applicable offers.
Oracle permits one cloud account to be created per email address. If your email address is already associated
with a cloud account, then you can click the link to get all your accounts associated with your email address.
• Enter a password based on the password policy specified on the web page. You will use this password later to
sign in to your Oracle Cloud account.
Password must contain a minimum of 8 characters, 1 lowercase, 1 uppercase, 1 numeric, and 1 special
character. Password can't exceed 40 characters, and can't contain your first name, last name, email address, spaces,
or ` ~ < > \ characters.
• Re-enter the password to confirm it.
• Create a cloud account name, which is used to identify your cloud account.
• Select a Home Region, where your services will be hosted.
Note:

Your home region is the geographic location where your account and
identity resources will be created. You can't change this after signing up.
If you are not sure which region to select as your home region, contact
your sales representative before you create your account.
• Click Continue.
3. Enter your address, and then click Continue.
Provide additional information, such as a PO box number, if you’re asked for it. For Brazil, enter your CPF
number for tax purposes in the format xxxxxxxxx-xx. For example, 655156112-18.
If you have selected a special offer, then you'll be prompted to enter a phone number. You don't have to provide
verification code for certain offers, so in these cases skip to step 8.
4. Enter a valid mobile number, along with country code, and then click Text me a code. A verification code will be
sent as a text message to your mobile phone. VOIP or internet-only mobile numbers are not accepted as we may
need to speak to you if there are questions about your account.
5. Enter the verification code that you received on your phone as a text message, and then click Verify my code. If
you already have a verification code, then follow the on-screen instructions to verify your phone number.
You can also click Request another code if you don't receive a verification code soon or if your verification code
expires.
6. Click Add payment verification method, and then click Credit Card.
7. Enter your credit card information, and then click Finish. You may see a small, temporary charge on your
payment method. This is a verification hold that will be removed automatically. Note that your credit card won't
be charged unless you elect to upgrade your cloud account.
8. Accept the terms and conditions, and then click Start my free trial to submit your request for a new Oracle Cloud
account.
After the services are provisioned in your tenancy, you'll be redirected to the Oracle Cloud Infrastructure Console.
Use the Oracle Cloud Infrastructure Console to create instances of your services.
You'll also receive a welcome (Get Started) email with more information about your account.
For some countries, you may not be able to request a free promotion from the Oracle Cloud website. In these cases,
contact Oracle Sales to request a free promotion.

Monitor the Credit Balance for Your Free Oracle Cloud Promotion
After you get free credits, you can monitor and manage your service usage and your credit balance.
In the Console, you can monitor your usage costs from the Account Management page. See Checking Your
Expenses and Usage on page 56 for more information.
Oracle sends you a notice when you get close to your credit limit.

What Happens When the Promotion Expires


If you don’t upgrade your free credit promotion to a paid subscription, then it’s important to understand what happens
to your cloud account.
All Oracle Cloud Infrastructure accounts (whether free or paid) have a set of resources that are available free of
charge for the life of the account. These resources are called Always Free resources. If you have subscribed to a free
credit promotion, your account continues to be available to you after the trial period ends (or after you use all of your
credits). You can continue to use the Always Free resources in your account for as long as your account remains
active. Free accounts remain active and available to you as long as the account has been used within the past 60 days.
If you have a paid account, you will not be billed for any Always Free resources you are using. See Oracle Cloud
Infrastructure Free Tier on page 142 for more information.
Your Free Credit Promotion expires:
• Thirty (30) days from the day you signed up.
OR
• When you use up the free credits available in your promotion offer.
In both cases, Oracle Cloud sends you warning messages that you are nearing the end of your promotion period or
getting close to your free credit limit. Another email will let you know when the promotion actually expires. You
will have a grace period of 30 days. You can continue to use paid resources during the grace period. However, you
can't create new paid resources during the grace period unless you upgrade your account. If you don’t upgrade your
account during this period, then your paid resources will be reclaimed. Your Always Free resources will continue to
be available.

Upgrade Your Free Oracle Cloud Promotion


You can choose to upgrade your free promotion to a paid account at any time during the promotion period or within
30 days of the promotion expiration.
If you are using the Oracle Cloud Infrastructure Console, then you can upgrade your promotion to a paid account
from the Account Management page. For more information, see Changing Your Payment Method on page 57.

Buy an Oracle Cloud Subscription


Use the Oracle Cloud website to estimate your cloud usage and costs for Oracle Cloud Infrastructure services and
to sign up for an Oracle Cloud account. You can also contact an Oracle Sales representative to order Oracle Cloud
services on your behalf.
To purchase a subscription to Oracle Cloud Applications (SaaS), see Order Oracle Cloud Applications.

About Bring Your Own License Subscriptions


If you already have Oracle software licenses for services such as Oracle Database, Oracle Middleware, or Oracle
Analytics, you can reuse them when subscribing to Oracle Platform Cloud Services (Oracle PaaS). This is called
Bring Your Own License (BYOL).
With BYOL, you can leverage your existing software licenses for Oracle PaaS at a lower cost. For example, if you
previously purchased a perpetual license for Oracle Database Standard Edition, then you can use the same license
when you buy the Database Standard Package with BYOL pricing. This gives you a discounted price for your
services. Oracle BYOL to PaaS includes Compute and Compute support along with automation.


You continue to get the same license support (that you had for your existing licenses) and contract when you buy
Oracle PaaS with BYOL pricing. This flexible licensing allows you to move between your on-premises and cloud
services with ease.

How do You Use Your BYOL for Oracle PaaS?


When you have an existing Oracle software license and you want to use it on Oracle Cloud, you can do so in the
following ways:
• Select specific Oracle BYOL options in the Cost Estimator to get your BYOL pricing.
• Apply your BYOL pricing to individual cloud service instances when creating a new instance of your PaaS
service. BYOL is the default licensing option during instance creation for all services that support it. For example,
when creating a new instance of Oracle Database Cloud Service using the QuickStarts wizard, the BYOL option is
automatically applied.
For a list of cloud services that support BYOL, search for BYOL in the Universal Credits Service Descriptions
Document.
For more information, see BYOL Overview video and Frequently Asked Questions.

About Universal Credits


Oracle Cloud provides a flexible buying and usage model for Oracle Cloud Services, called Universal Credits.
When you sign up for an Oracle Cloud Account, you have unlimited access to all eligible IaaS and PaaS services.
You can sign up for a Pay-As-You-Go subscription to pay in arrears based on your actual usage at the end of your
monthly billing cycle.
After you sign up, you can start using any of the IaaS or PaaS services at any time. Not all services are available in
all the data regions. You can only use services in the data regions that your subscription is enabled in. However, you
can always extend your subscription to other data regions to access services available there. See Extending Your
Subscription to Another Data Region.
When new eligible services become available as part of the Universal Credits program, you'll receive an email with
the details of the newly added services if they are available in one of your enabled data regions.
For new services added to data regions where your subscription is not enabled, see the Service Availability Matrix.


Activate Your Order from Your Welcome Email


If you ordered Oracle Infrastructure as a Service (Oracle IaaS) and Oracle Platform as a Service (Oracle PaaS) cloud
services with Universal Credits through Oracle Sales, then you must activate your services before you start using
them.
When an Oracle Sales representative orders Oracle Cloud services on your behalf, you’ll receive a welcome email
and you’ll be designated as an activator of the services. To activate your services, you must provide your details
and set up your account with Oracle. Review the instructions in the email to create an account and start using your
services.
1. Open the email you received from Oracle Cloud.
2. Review the information about your service in the email.
3. Click Activate My Services.
4. Complete the form to sign up for your new Oracle Cloud account.
You will be asked to:
• Provide a new account name, which will be used to identify your Cloud account.
• Provide your email address. You must provide the same email address at which you received your welcome
email. Instructions for signing in to your new Oracle Cloud account will be sent to this address. You’ll be
prompted for the email ID only if you don’t already have an Oracle Cloud account.
• If prompted, select a Home Region. If you need more information, click the Regions link below the field.
Note:

Your home region contains your account information and identity resources. It is not changeable after your tenancy
is provisioned. If you
are unsure which region to select as your home region, contact your sales
representative before you create your account.
• Provide Oracle Cloud account administrator details. The person you specify here will be a Cloud Account
Administrator and a Service Administrator and can create other users in your account. This person will manage
and monitor services in the specified Oracle Cloud account.
• After you enter all the required information, click Create Account to submit your request for an Oracle Cloud
account.
After successful activation, you'll receive another email with your sign-in credentials. Use these credentials to sign in
to your account, and change your password after your initial sign-in.

Verify That Your Services Are Ready


When you sign up for a Free Oracle Cloud Promotion or a paid account, your Oracle Cloud account is created soon after
sign up, but the service provisioning takes some time. You’ll receive a Welcome email soon after you sign up.
The email contains information required to access your account and sign in to Infrastructure Classic Console:
• Your user name and temporary password (sign-in credentials)
• The name of your Cloud Account
Sign in to Infrastructure Classic Console to see how many services are provisioned. A message at the top of the
Infrastructure Classic Console indicates how many services are active.
1. Click Get Started with Oracle Cloud from your welcome email.
2. Change your password when prompted.
3. Scan the dashboard to check the current status of your services. When the services are provisioned, they might
not immediately be displayed on the Infrastructure Classic Console. Services with instances are displayed
automatically.
4. Click the gear icon next to Dashboard. The Customize Dashboard dialog box appears. By default, all service tiles
are hidden unless a service has at least one instance; set the services that you want to display to Show.
When all the services in your order are provisioned, you’ll get a message on the Infrastructure Classic Console. You
can then add users, view service details, monitor account usage, and access the service consoles.
Some services in your order may require additional sign-in credentials, which you can find in the Manage Account,
My Admin Accounts page. For more information about these services, see Access Traditional Cloud Account
Services.

Request and Manage the Oracle Startup Program


You can sign up for the Oracle Startup Program and receive free credits. This promotion applies to eligible Oracle
Infrastructure as a Service (Oracle IaaS) and Platform as a Service (Oracle PaaS) services.
After you consume your free credits, you’ll be charged for the services and resources you use. For information about
monitoring the usage of your free credits, see Monitor the Credit Balance for Your Free Oracle Cloud Promotion on
page 36.


Sign Up for the Oracle Startup Program


Signing up for the Oracle Startup Program is easy. You create an Oracle Cloud account, and then you get a welcome
email with the details that you need to sign in.
1. Go to the Oracle for Startups website, and then click Join Oracle for Startups.
2. Fill out the Oracle for Startups form. You are asked to:
Create Account:
• Select your country. For some countries such as Russia, you must manually accept the Terms of Use by
selecting the check boxes when prompted.
• Provide a valid email address, and then click Next. Instructions for signing in to your new cloud account are
sent to this address. You can sign up for only one Oracle Startup Program even if you have an existing Oracle
Cloud account. If your email address is already associated with the Oracle Startup Program, then you'll be
provided information to access your existing account.
Enter Account Details:
• Create a cloud account name, which is used to identify your cloud account.
• Select a Home Region, where your services will be hosted. See Data Regions for Platform and Infrastructure
Services for service availability in each region.
Note:

Your home region contains your account information and identity resources. It is not changeable after your tenancy
is provisioned. If you
are unsure which region to select as your home region, contact your sales
representative before you create your account.
• Provide your name, company name, and address.
• Provide additional information, such as a PO box number, if you’re asked for it. For Brazil, enter your CNPJ
number for tax purposes in the format xxxxxxxx/xxxx-xx. For example, 12345678/0001-18.
• Enter a valid mobile number, so that Oracle can text you a verification code, and then click Next:Verify
Mobile Number. VOIP or internet-only mobile numbers are not accepted.
• Your address is validated and displayed with corrections, if any. Confirm your address if prompted.
Verify Your Mobile Number:
• Enter the SMS code you received on your phone and click Verify Code. If you already have a verification
code, then follow the on-screen instructions to verify your phone number.
• You can also request another code if you don't receive a verification code soon.
Payment Information:
• Click Add Credit Card Details. Enter your credit card information. During sign up, you may see an
authorization of $100 USD (or local currency equivalent) on your payment card account. Authorizations do
not represent charges nor money owed to Oracle. This is a temporary hold on available credit that will be
removed automatically.
• Click Finish.
3. Accept the terms and conditions, and then click Complete Sign-Up to submit your request for a new Oracle
Cloud account.
Your account is created. After the services in your tenancy are provisioned, you'll be redirected to the sign-in page.
You'll also receive a welcome (Get Started) email with your sign-in credentials.

Understanding the Sign-In Options


This topic describes the sign-in options available to you when you sign up for an Oracle Cloud account.


About the Sign In Options


When you sign up for Oracle Cloud, Oracle creates a user for you in two different identity systems, giving you two
options to sign in to Oracle Cloud Infrastructure.

When you want to use Oracle Cloud Infrastructure, you can choose which identity provider to sign in through:

Oracle Identity Cloud Service


Many Oracle Cloud services, including Oracle Cloud Infrastructure, are integrated with Oracle Identity Cloud
Service. When you sign up for an Oracle Cloud account, a user is created for you in Oracle Identity Cloud Service
with the username and password you selected at sign up. You can use this single sign-on option to sign in to Oracle
Cloud Infrastructure and then navigate to other Oracle Cloud services without reauthenticating. This user has
administrator privileges for all the Oracle Cloud services included with your account.

Oracle Cloud Infrastructure


Oracle Cloud Infrastructure includes its own identity service, called the Identity and Access Management service,
or IAM, for short. When you sign up for an Oracle Cloud account, this service is included. A second, separate user
is created for you in the IAM service with the username and password you selected at sign up. You are granted
administrator privileges in Oracle Cloud Infrastructure so you can get started right away with all Oracle Cloud
Infrastructure services.
Important:

Although the credentials are identical in both systems when your account
is created, the users are in separate identity management systems, and you
manage them separately. If you change your password in the Oracle Cloud
Infrastructure IAM, your password in Oracle Identity Cloud Service is not
changed, and vice versa.

When to Use Each Sign-In Option


If you plan to use Oracle Cloud Infrastructure services exclusively, it makes sense for you to use your direct sign-in
credentials to the IAM service.
If you want to use other Oracle Cloud services that are managed through Oracle Identity Cloud Service, then sign in
with your single sign-on credentials.

More Information About Managing Users in Oracle Cloud Identity Providers


Managing Users in the IAM Service
Managing Oracle Identity Cloud Service Users and Groups in the Oracle Cloud Infrastructure Console on page 2391
Adding Users on page 58

Signing In to the Console


This topic describes how to sign in to the Oracle Cloud Infrastructure Console.

Supported Browsers
Oracle Cloud Infrastructure supports the following browsers and versions:
• Google Chrome 69 or later
• Safari 12.1 or later
• Firefox 62 or later


Signing In for the First Time


To sign in to Oracle Cloud at https://cloud.oracle.com, you need:
• Your cloud account name (also sometimes referred to as your tenancy name)
• User name and password
When your tenancy is provisioned, Oracle sends an email to the default administrator at your company with the sign-
in credentials and URL. This administrator can then create a user for each person who needs access to Oracle Cloud
Infrastructure. Check your email or contact your administrator for your credentials and account name.
1. Open a supported browser and go to https://cloud.oracle.com.
2. Enter your Cloud Account Name and click Next.
3. On the Single Sign-On (SSO) panel, click Continue.
4. Enter your user name and temporary password from your welcome email. You will be prompted to change your
temporary password.
After you sign in, the Console Home page is displayed.
Note:

When you're logged in to the Console for one of the commercial realm
regions, the browser times out after 60 minutes of inactivity, and you need to
sign in again to use the Console.

Next Steps
Get to know the Console. See Using the Console on page 42.
Follow guided tutorials to launch your first instance, add users, or put data into object storage.
Begin setting up your tenancy for other users. See Setting Up Your Tenancy on page 123.

Using the Console


This topic provides basic information about the Oracle Cloud Console. To access the Console, you must use a
supported browser.

About the Console Home Page


When you sign in to the Console, you see the home page.


The home page offers features for both new and experienced users. For new users, the Get Started tab provides
features to help you start learning about and working with Oracle Cloud Infrastructure. The Dashboard tab supports
widgets to help you quickly access and monitor your resources and billing usage.
Features of the Get Started Tab
The Get Started tab includes features that are particularly helpful for new users or users who want to jump in and
quickly create common resources.
Quick Actions
Use the Quick Actions tiles to navigate directly to common tasks, like creating a VM instance, setting up a network
with a wizard, and setting up a load balancer. Use these links to set up your environment.
Start Exploring
The Start Exploring section provides links to tutorials, developer tools, and blogs that demonstrate how to use Oracle
Cloud Infrastructure to build solutions.
• In the Get Started category, find introductory materials that you can use to learn more about basics, such as
information about virtual training classes, key concepts, and introductory demos.
• In the Deploy Websites & Apps category, find tutorials that leverage both basic and more advanced features
available to build solutions.
• In the Explore Developer Tools category, explore the developer kits, tools, and plug-ins that you can use to
facilitate the development of apps and to simplify the management of infrastructure.
• In the Manage Bills category, learn about the billing and payment tools that help you manage your service
costs.
Features of the Dashboard Tab
The Dashboard tab includes widgets you can use to explore your resources and billing usage.


Get an Overview of All Your Resources with the Resource Explorer


Use the resource explorer to get an overview of the number and types of resources that exist in a selected
compartment and region.
To use the resource explorer to find resources:
1. On the Console home page, click the Dashboard tab.
2. The resource explorer displays the list of services and count of resources in the selected compartment and region.
By default, the root compartment is selected. To view another compartment, select it from the compartment
picker.
3. Expand the entry for a service to see the count for each resource-type within the service.
4. To see more information about a resource-type in the list, click the resource-type to open the detailed list. To
navigate directly to a specific resource in the list, click the Display Name.

Monitor Your Usage with the Billing Widget


Administrators and users with appropriate permissions can view the billing widget. The billing widget lets you
quickly view your current charges or usage and the days elapsed in your billing cycle. Your view depends on your
account type.
• Pay-as-you-go customers see the current charges and the number of days elapsed in the current billing cycle.
• Universal credit customers see the total credits used and number of days elapsed in the credit period.
• Trial customers see the total credits used and number of days elapsed in the trial period.
To get a more detailed view of your spending, click the Analyze Costs link to go to the Cost Analysis tool where you
can generate charts and reports of aggregated cost data for your Oracle Cloud Infrastructure consumption. If your
account is a free tier or promotional trial account, you'll see an option to Upgrade your account. If you have a paid
account, you'll see the option to Manage payment method to view or change your payment method.


Navigating to Oracle Cloud Infrastructure Services


Open the navigation menu in the upper left to work with services and resources. Services and resources are organized
by functional group. For example, to work with Compute service instances: Open the navigation menu. Under Core
Infrastructure, go to Compute and click Instances.

Navigating to More Oracle Cloud Services from the Console


The Oracle Cloud Console provides navigation to other Oracle Cloud services in addition to Oracle Cloud
Infrastructure services. For more details about accessing other Oracle Cloud offerings, see Navigate to Your Cloud
Services.
If your account also has Oracle Cloud Platform services, Oracle Cloud Infrastructure Classic services, or Oracle
Cloud Applications, then you can navigate to these services from the Oracle Cloud Console:

Navigating to Platform Services


Open the navigation menu. Under More Oracle Cloud Services, go to Platform Services, and then click the service
you want to access.

Navigating to Classic Data Management Services


Open the navigation menu. Under More Oracle Cloud Services, go to Classic Data Management Services, and
then click the service you want to access.

Navigating to Classic Infrastructure Services


Open the navigation menu. Under More Oracle Cloud Services, go to Classic Infrastructure Services, and then
click the service you want to access.


Navigating to the Applications Console


If your Cloud account also has Cloud Applications services provisioned, then you have access to the Applications
Console.
In the Console header, click Applications to switch to the Applications Console.
For more information about the consoles for these Oracle Cloud services, see About the Consoles.

Switching Regions
Your current region is displayed at the top of the Console. If your tenancy is subscribed to multiple regions, you can
switch regions by selecting a different region from the Region menu.

Working Across Regions


When working within a service, the Console displays resources that are in the currently selected region. So if
your tenancy has instances in CompartmentA in US West (Phoenix), and instances in CompartmentA in US East
(Ashburn), you can only view the instances in one region at a time, even though the instances are in the same
compartment.
Using the following figure as an example, if you select US West (Phoenix) and then select CompartmentA, you see
instances 1 and 2 listed. To see instances 3 and 4 in the Console, you must switch to US East (Ashburn) (and then you
no longer see instances 1 and 2).


To view resources across regions that are in a specific compartment, you can use the tenancy explorer.
IAM resources (compartments, users, groups, policies, tags, and federation providers) are global, so you can see those
resources no matter which region you have selected in the Console.

Switching Languages
The Console automatically detects the language setting in your browser. However, if you want to view the Console in
a different language, you can change it by using the language selector in the Console.

The language selector supports the following languages:


Supported Languages
• Chinese (Simplified)
• Chinese (Traditional)
• Croatian
• Czech
• Danish
• Dutch
• English
• Finnish
• French (Canada)
• French (Europe)
• German
• Greek
• Hungarian
• Italian
• Japanese
• Korean
• Norwegian
• Polish
• Portuguese (Brazil)
• Portuguese (Portugal)
• Romanian
• Russian
• Serbian
• Slovak
• Slovenian
• Spanish
• Swedish
• Thai
• Turkish
The language you choose persists between sessions. However, the language setting is specific to the browser. If
you change to a different browser, the Console displays text in the language last selected in the language selector. If
it's the first time you're viewing the Console in a particular browser, the Console displays content according to the
browser's language setting.

Contact Support
In the Console, you can create a support request or start a live online chat with an Oracle Support or Sales
representative. For more information, see Getting Help and Contacting Support on page 126.

Understanding Compartments
After you select a service from the navigation menu, the menu on the left includes the compartments list.
Compartments help you organize resources to make it easier to control access to them. Your root compartment is
created for you by Oracle when your tenancy is provisioned. An administrator can create more compartments in the
root compartment and then add the access rules to control which users can see and take action in them. To manage
compartments, see Managing Compartments on page 2450.
The list of compartments is filtered to show you only the compartments that you have permission to access.
Compartments can be nested, and you might have access to a compartment but not its parent. The names of parent
compartments that you don't have permission to access are dimmed, but you can traverse the hierarchy down to the
compartment that you do have access to.


After you select a compartment, the Console displays only the resources that you have permission to view in the
compartment for the region that you are in. The compartment selection filters the view of your resources. To see
resources in another compartment, you must switch to that compartment. To see resources in another region, you
must switch to that region, or use the tenancy explorer.
For more details about compartments, see Setting Up Your Tenancy on page 123 and Managing Compartments on
page 2450.
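
If you prefer working from a terminal, you can list the compartments you're allowed to see with the OCI CLI. The
following is only a sketch: it assumes the CLI is already installed and configured, and <tenancy-OCID> is a
placeholder for your tenancy's OCID.

# List the compartments in the tenancy tree that you can access (sketch; placeholder OCID).
oci iam compartment list \
    --compartment-id <tenancy-OCID> \
    --compartment-id-in-subtree true \
    --access-level ACCESSIBLE \
    --all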

Filtering the Displayed List of Resources


To help you locate a resource, some resources let you filter the list that is displayed.
Filters include:
State: You can display only the resources that are in the state you select. Valid values for state can vary by resource.
Examples are:
• Any state - includes all lifecycle states for the resource
• Available
• Provisioning
• Terminating
• Terminated
Availability Domain: For resources that reside in a single availability domain, you can limit the list to the resources
that reside in the availability domains you select. For a list of availability domain-specific resources, see Resource
Availability on page 186.
Tags: Resources that support tagging let you filter the list by tags.

To filter a list of resources by a defined tag


1. Next to Tag Filters, click add.
2. In the Apply a Tag Filter dialog, enter the following:


a. Namespace: Select the tag namespace.
b. Key: Select a specific key.
c. Value: Select from the following:
• Match Any Value - returns all resources tagged with the selected namespace and key, regardless of the tag
value.
• Match Any of the Following - returns resources with the tag value you enter in the text box. Enter a
single value in the text box. To specify multiple values for the same namespace and key, click + to display
another text box. Enter one value per text box.
d. Click Apply Filter.

To filter a list of resources by a free-form tag


1. Next to Tag Filters, click add.
2. In the Apply a Tag Filter dialog, enter the following:
a. Key: Enter the tag key.
b. Value: Select from the following:
• Match Any Value - returns all resources tagged with the selected free-form tag key, regardless of the tag
value.
• Match Any of the Following - returns resources with the tag value you enter in the text box. Enter a single
value in the text box. To specify multiple values for the same key, click + to display another text box. Enter
one value per text box.
c. Click Apply Filter.
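
Outside the Console, a comparable tag-based filter can be expressed with the Search service's structured query
language through the OCI CLI. The command below is only a sketch: the namespace, key, and value are hypothetical,
and the exact query syntax should be confirmed against the Search service documentation.

# Sketch: find instances carrying a hypothetical defined tag (Operations.Environment = 'Dev').
oci search resource structured-search \
    --query-text "query instance resources where (definedTags.namespace = 'Operations' && definedTags.key = 'Environment' && definedTags.value = 'Dev')"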

Signing Out

To sign out of the Console, open the Profile menu ( ) and then click Sign Out.

Using the Mobile App


The Oracle Cloud Infrastructure Mobile app lets you review alerts, notifications, and limits on the go. Quickly access
information about infrastructure resources, billing, and usage data from your mobile device. Read more to learn about
installing and using the app.

Download Now
To download and install the app, click the Google Play Store or Apple App Store badge below and follow the
instructions at the link.

Alternatively, in the Google Play Store or Apple App Store, search for Oracle Cloud Infrastructure, select the app,
and follow the installation steps.
This app is supported on the following operating systems:
• Android 8 and later versions
• iOS 11 and later versions


Signing In
To sign in to the Oracle Cloud Infrastructure Mobile app, use the same credentials and steps that you use to sign in to
the Console. For more information, see Understanding the Sign-In Options.
The first time you sign in, you must read and accept the End User License Agreement to access the app.
Enabling Automatic Sign-in
For faster sign-in to the mobile app, you can enable automatic sign-in. Automatic sign-in uses an API key to
authenticate you when you access the app, keeping you signed in until you sign out. The private key and the
generated fingerprint are encrypted and stored in either the Android Keystore or the iCloud Keychain, depending on
your device operating system. This encryption and storage ensures that your information is only accessible through
the Oracle Cloud Infrastructure Mobile app. For more information about API keys, see Working with Console
Passwords and API Keys on page 2475.
To enable automatic sign-in:
1. In the app, open the Profile menu ( ) and then tap Settings.
2. Under Login Security, for Enable automatic sign-in, toggle the Enabled or Disabled switch. When enabled,
automatic sign-in will be used next time you open the app.
Generating the API signing key can take a few minutes. Automatic sign-in is available after the key is generated.
After you enable automatic sign-in, if you want to find the API key used by the app:
1. In the app, open the Profile menu ( ) and then tap Settings.
2. Under Login Security, the API key fingerprint value is the API signing key that the mobile app is using.
Each user has a limit of three API keys. If your account has reached this limit, you can't use this feature in the mobile
app until you delete one of the existing API keys. You can use the Console to delete API signing keys.
To delete an API signing key
The following procedure works for a regular user or an administrator. Administrators can delete an API key for either
another user or themselves.
1. View the user's details:
• If you're deleting an API key for yourself: Open the Profile menu ( ) and click User Settings.
• If you're an administrator deleting an API key for another user: Open the navigation menu. Under
Governance and Administration, go to Identity and click Users. Locate the user in the list, and then click
the user's name to view the details.
2. For the API key you want to delete, click Delete.
3. Confirm when prompted.
The API key is no longer valid for sending API requests.
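
The same cleanup can be scripted with the OCI CLI if you prefer. This is a sketch; <user-OCID> and
<key-fingerprint> are placeholders that you would look up first (for example, on the user's details page).

# List the API signing keys uploaded for a user (placeholder values).
oci iam user api-key list --user-id <user-OCID>
# Delete the key that the mobile app was using, identified by its fingerprint.
oci iam user api-key delete --user-id <user-OCID> --fingerprint <key-fingerprint>
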
Securing Your Account if Your Device is Compromised
If you have automatic sign-in enabled in the Oracle Cloud Infrastructure Mobile app and your device is stolen, you
need to secure your account. To secure your account, in the Console, delete your API signing keys.

Switching Regions
Your current region is displayed at the top of the mobile app. If your tenancy is subscribed to multiple regions, you
can switch regions by selecting a different region from the Region picker.

Switching Time Zones


You can set the mobile app to use UTC time or local time. To switch the time zone:
1. In the app, open the Profile menu ( ) and then tap Settings.
2. In the Time zone menu, for Set time zone, select Local or UTC.

Navigating in the Mobile App


When you sign in to the app, you see the Home tab.

Home
• The Alarms section displays information about alarms fired within the last 24 hours. For more details, tap an
alarm in the list, or navigate to the Alarms tab.
• Tap the tiles in the Resources menu to see details about that type of resource.
• For trial users, the Billing section displays current information about costs associated with resource usage.
In addition to the Home tab, the app has Alarms, Resources, and Limits tabs.
Alarms
The Alarms tab displays details about alarms fired within the last 24 hours. At the top of the tab, use the
Compartment picker to select your compartment. To see details about a specific alarm, tap that alarm in the list. For
more information about alarms, see Monitoring Overview.
Resources
The Resources tab displays details about a selection of resources. At the top of the tab, use the Compartment picker
to select your compartment.
Currently, you can view details about the following types of resources in the mobile app.
• Compute instances
• Block volumes
• Object Storage
• Load balancers
• Autonomous Transaction Processing
• Autonomous Data Warehouse
Tap a section to see a list of resources of that type. At the top of the tab, use the Compartment picker to select your
compartment. Use the Filter resources text box to search for a resource using a free-text search based on keywords.
For more information, see Search Overview. The indicator next to the resource name lets you know the status of the
resource.
To see resource details, tap the resource name in the list. This action takes you to a view that displays information
about that resource, including:
• Resource status
• Visualizations with metrics that let you monitor the health, capacity, and performance of your resources
• Metadata for the resource
Limits
The Limits tab displays details about your current service limits and usage. The service limit is the quota or
allowance set on a resource. For example, your tenancy is allowed a maximum number of compute instances per
availability domain. These limits are usually established with your Oracle sales representative when you purchase
Oracle Cloud Infrastructure. For more information about service limits, see Service Limits.
To view limits, at the top of the Limits tab:
1. Filter the list to the limits you want to see:
• Use the Compartment picker to select your compartment.
• Use the Resource picker to select a service.
2. After making your selections, tap Search limits to see the list of limits and current usage.


Each item in the resulting list shows a description of the service limit, the current usage for that service, and the total
limit available.

Signing Out

To sign out of the Oracle Cloud Infrastructure Mobile app, open the Profile menu ( ) and then tap Sign Out.

Contacting Support
To open a support request for the Oracle Cloud Infrastructure Mobile app, sign in to the Console on a computer and
then follow the steps to create a support request. When you create the request, in the issue summary, include the prefix
OCI Mobile to specify that the support request is for the mobile app. For more information, see Getting Help and
Contacting Support on page 126.
To create a support ticket
1. Open the Help menu ( ), go to Support, and click Create support request.
2. Enter the following:
• Issue Summary: Enter a title that summarizes your issue. Avoid entering confidential information.
• Describe Your Issue: Provide a brief overview of your issue.
• Include all the information that support needs to route and respond to your request. For example, "I am
unable to connect to my Compute instance."
• Include troubleshooting steps taken and any available test results.
• Select the severity level for this request.
3. Click Create Request.

Changing Your Password


This topic describes how users can change their own passwords.


To Change Your Password


Procedure for Oracle Identity Cloud Service users
1. Open the Profile menu ( ) and click User Settings.

Your Oracle Cloud Infrastructure IAM service User Details page is displayed. Notice that your username is
prefixed with the name of your IDCS federation, for example: oracleidentitycloudservice/User
2. The information banner at the top of the page tells you that your account is managed in Oracle Identity Cloud
Service. Click the here link.
3. Your Identity Cloud Service User Details page is displayed. Notice that on this page, your username is displayed
without the prefix.

4. Click Change Password.


5. Follow the instructions in the dialog to create a new password.
Procedure for local Oracle Cloud Infrastructure users
Use this procedure if you sign in directly through Oracle Cloud Infrastructure (not through the Oracle Identity Cloud
Service single sign-on option).

1. Sign in to the Console using the Oracle Cloud Infrastructure Username and Password.
2. After you sign in, go to the top-right corner of the Console, open the Profile menu ( ) and then click Change Password.

3. Enter the current password.


4. Follow the prompts to enter the new password, and then click Save New Password.

Checking Your Expenses and Usage


This topic describes how to analyze the Oracle Cloud Infrastructure costs associated with your account.
You can use the following cost-related tools in the Console, which can help you gain insight into your costs and
attribution of Oracle Cloud Infrastructure resources:
• Budgets
• Cost Analysis
• Cost and Usage Reports
See Working with Costs Analysis Tools on page 56, and Billing and Payment Tools Overview on page 276 for
more information.

Required IAM Policy


To enable users to monitor the costs associated with this account, you will have to grant them access by writing a
policy. If you're new to policies, see Getting Started with Policies on page 2143 and Common Policies on page
2150.
See Required IAM Policy on page 284 for more information on the required policy statements. Also see Cost and
Usage Reports Overview on page 281 for more information on cost and usage reports.
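
As an illustration only, policy statements along the following lines are typically used to give a group access to cost
and usage reports. The group name is hypothetical, the usage-report tenancy OCID is a placeholder, and the exact
statements for your scenario should be taken from the Required IAM Policy reference above rather than from this
sketch.

define tenancy usage-report as <Oracle usage-report tenancy OCID>
endorse group CostViewers to read objects in tenancy usage-report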

Working with Costs Analysis Tools


Cost Analysis is a visualization tool that helps you track and optimize your Oracle Cloud Infrastructure spending. It
lets you generate charts and download accurate, reliable tabular reports of aggregated cost data on your Oracle Cloud
Infrastructure consumption. Use the tool for spot checks of spending trends and for generating reports.
To filter costs by dates
1. Open the navigation menu. Under Governance and Administration, go to Account Management and click
Cost Analysis.
2. In Start Date, select a date.
3. In End Date, select a date (within six months of the start date).
4. Click Apply Filters.
To filter costs by tags
1. Open the navigation menu. Under Governance and Administration, go to Account Management and click
Cost Analysis.
2. From Tag Key, select a tag.
3. Click Apply Filters.
To filter costs by compartments
1. Open the navigation menu. Under Governance and Administration, go to Account Management and click
Cost Analysis.
2. From Compartment, select a compartment.
3. Click Apply Filters.
To remove a compartment or tag filter
• When you filter costs, a label appears with the name of the tag or compartment filter. To clear that filter, click the
x.
For more information on Cost Analysis, see Cost Analysis Overview on page 286.

Changing Your Payment Method


This topic describes how to upgrade to a paid account, or change your payment method. This topic also describes how
to terminate your paid subscription.
Required IAM Policy
To upgrade to a paid account or change your credit card, you must be a member of the Administrators group. See The
Administrators Group and Policy on page 2133.
Upgrade Your Free Account
Most new customers in the United States who create new accounts after January 28, 2019 can use these tools.
Note:

If you created your account prior to January 28, 2019 or from outside the
United States, use the following links:
• To upgrade to a paid account, see Upgrade Your Free Oracle Cloud
Promotion on page 36.
• To change your credit card, see Updating Your Billing Details.
To upgrade to Pay-as-You-Go
1. Open the navigation menu. Under Governance and Administration, go to Account Management and click
Payment Method.
2. Under Account Type, select Pay-as-You-Go.
3. Take one of the following actions:
• Click Edit to review the current credit card
• Click Add a Credit Card
4. Type or review your information and click Finish.
5. Read the terms and conditions and select the check box to indicate your agreement.
6. Click Start Paid Account.


To request a sales call


1. Open the navigation menu. Under Governance and Administration, go to Account Management and click
Payment Method.
2. Under Account Type, select Request a Sales Call.
3. Type a phone number, an email address, or both.
4. Click Submit.
To change your payment method
You cannot change the payment method for promotional accounts.
1. Open the navigation menu. Under Governance and Administration, go to Account Management and click
Payment Method.
2. Click Edit Card.
3. Type your information and click Finish.
Terminating Your Account
You can terminate your account at any time through a support request. From the time that your request has been duly
processed, billing is stopped (even if you have running instances), and any running resources are terminated.

Adding Users
This chapter provides a quick hands-on tutorial for adding users and groups and creating simple policies to grant them
permissions to work with Oracle Cloud Infrastructure resources.
Use these instructions to quickly add some users to try out features. To fully understand the features of IAM and how
to manage access to your cloud resources, see "Overview of IAM" in the Oracle Cloud Infrastructure User Guide.
For an overview of user management for all Oracle Cloud services, see Managing Users, User Accounts, and Roles.

About Users, Groups, and Policies


A user's permissions to access Oracle Cloud Infrastructure services come from the groups to which they belong. The
permissions for a group are defined by policies. Policies define what actions members of a group can perform, and in
which compartments. Users can then access services and perform operations based on the policies set for the groups
they are members of.
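
For example, a single policy statement that lets a group manage networking resources in one compartment looks like
the following; the group and compartment names here are purely illustrative.

Allow group NetworkAdmins to manage virtual-network-family in compartment Networking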

About Oracle Identity Cloud Service Federated Users


When you sign up for Oracle Cloud Infrastructure, your tenancy is federated with Oracle Identity Cloud Service
(IDCS) as the identity provider. You can create users and groups in IDCS that you can use with your Oracle Cloud
products. To give these users permissions in Oracle Cloud Infrastructure, you need to perform some steps in IDCS
and some steps in Oracle Cloud Infrastructure.
You can create your IDCS users and groups directly in the Console. The examples in the following sections include
examples of creating IDCS users who can use Oracle Cloud Infrastructure services.
For more details on managing federated users, see Managing Oracle Identity Cloud Service Users and Groups in the
Oracle Cloud Infrastructure Console on page 2391.
You can also choose to use Oracle Cloud Infrastructure's IAM service as your identity provider to manage users and
groups exclusively in the IAM service. These users can have permissions to use Oracle Cloud Infrastructure services
only. If you want to manage users in the IAM service, see Managing Users on page 2433.

Sample Users and Groups


To help you understand how to set up users with the access permissions they need, perform the following tasks to set
up these two basic types of users:
• An IDCS federated user with full administrator permissions (Cloud Administrator)
• An IDCS federated user with permissions to use one compartment only


Add a User with Oracle Cloud Administrator Permissions


The user you create in this task will have the full administrator permissions of the default administrator. This means that
the user has full access to all compartments and can create and manage all resources in Oracle Cloud Infrastructure
as well as other services managed through Oracle Identity Cloud Service. You must have Cloud Administrator
permissions to complete this task.
Create a Cloud Administrator user
1. Open the navigation menu. Under Governance and Administration, go to Identity and click Federation.
2. Click your Oracle Identity Cloud Service federation. For most tenancies, the federation is named
OracleIdentityCloudService. The identity provider details page is displayed.
3. Click Create IDCS User.
4. In the Create IDCS User dialog enter the following:
• Username: Enter a unique name or email address for the new user. The value will be the user's login to the
Console and must be unique across all other users in your tenancy.
• Email: Enter an email address for this user. The initial sign-in credentials will be sent to this email address.
• First Name: Enter the user's first name.
• Last Name: Enter the user's last name.
• Phone Number: Optionally, enter a phone number.
• Groups: You can skip this step. You will be granting this user full administrator privileges.
5. Click Create.
The user is created in Oracle Identity Cloud Service. This user can't access their account until they complete the
password reset steps.
6. Click Email Password Instructions to send the password link and instructions to the user. If your email app does
not launch, copy the reset instructions and manually email them to the user.
The password link is good for 24 hours. If the user does not reset their password in time, you can generate a new
password link by clicking Reset Password for the user.
7. Click Close to close the dialog. You are returned to the Users list on the Identity Provider Details page.
8. Click the name of the user you just created. The User Details page is displayed.
9. Click Manage Roles.
10. Select the check box next to Add Cloud Account Administrator Role.
11. Click Apply Role Settings.
12. A dialog confirms the entitlements granted to the user. To notify the user of these updates, click Send Email to
User. Click Close to close the dialog.

Create a Compartment and Add a User with Access to It


In this example, create a compartment called "Sandbox" and then create a user with access to only that compartment.
Procedure Overview: To provide access to the Sandbox compartment and all the resources in it, you create a group
(SandboxGroup), and then create a policy (SandboxPolicy) to define the access rule.
To enable access for users created in Identity Cloud Service, create a group in IDCS (IDCS_SandboxGroup), and map
it to the SandboxGroup.
Finally, create an IDCS user and add them to the IDCS_SandboxGroup.
Create a sandbox compartment
1. Open the navigation menu. Under Governance and Administration, go to Identity and click Compartments.
2. Click Create Compartment.
3. Enter the following:
• Name: Enter Sandbox.
• Description: Enter a description (required), for example: Sandbox compartment for users to try out OCI.
4. Click Create Compartment.


Your compartment is displayed in the list.
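
If you script your setup, the same compartment can be created with the OCI CLI. This is a sketch; <tenancy-OCID>
is a placeholder for your tenancy's OCID (the parent of the new compartment).

# Create the Sandbox compartment under the root compartment (placeholder OCID).
oci iam compartment create \
    --compartment-id <tenancy-OCID> \
    --name Sandbox \
    --description "Sandbox compartment for users to try out OCI"
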
Create an Oracle Cloud Infrastructure group
Next, create the "SandboxGroup" that you will create the policy for.
1. Open the navigation menu. Under Governance and Administration, go to Identity and click Groups.
2. Click Create Group.
3. In the Create Group dialog:
• Name: Enter a unique name for your group, for example, SandboxGroup.
Note that the name cannot contain spaces.
• Description: Enter a description (required).
4. Click Create.
Create a policy
Create the policy to give the SandboxGroup permissions in the Sandbox compartment.
1. Open the navigation menu. Under Governance and Administration, go to Identity and click Policies.
2. Under List Scope, ensure that you are in your root compartment.
3. Click Create Policy.
4. Enter a unique Name for your policy, for example, SandboxPolicy.
Note that the name cannot contain spaces.
5. Enter a Description (required), for example, Grants users full permissions on the Sandbox compartment.
6. Enter the following Statement:

Allow group SandboxGroup to manage all-resources in compartment Sandbox

This statement grants members of the SandboxGroup group full access to the Sandbox compartment.
7. Click Create.
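
The group and policy can also be created with the OCI CLI. This is a sketch; <tenancy-OCID> is a placeholder, and
the policy statement matches the one entered in the previous step.

# Create the group in the tenancy (root compartment).
oci iam group create \
    --compartment-id <tenancy-OCID> \
    --name SandboxGroup \
    --description "Users who can work in the Sandbox compartment"
# Create the policy in the root compartment.
oci iam policy create \
    --compartment-id <tenancy-OCID> \
    --name SandboxPolicy \
    --description "Grants users full permissions on the Sandbox compartment" \
    --statements '["Allow group SandboxGroup to manage all-resources in compartment Sandbox"]'
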
Create an Oracle Identity Cloud Service group
Next, create the "IDCS_SandboxGroup" in Oracle Identity Cloud Service.
1. Open the navigation menu. Under Governance and Administration, go to Identity and click Federation.
2. Click your Oracle Identity Cloud Service federation. For most tenancies, the federation is named
OracleIdentityCloudService. The identity provider details page is displayed.
3. Under Resources, click Groups.
4. Click Create IDCS Group.
5. In the Create IDCS Group dialog enter the following:
• Name: Enter a unique name for your group, for example, IDCS_SandboxGroup.
Note that the name cannot contain spaces.
• Description: Enter a description (required).
6. Click Create.
The group is created and it is displayed in the identity provider details page. Next, map the group.
Map the Oracle Identity Cloud Service Group to the Oracle Cloud Infrastructure group
Next, you need to map the Oracle Identity Cloud Service group to the Oracle Cloud Infrastructure group you created.
The mapping gives the members of the IDCS group the permissions you granted to the OCI group.
1. On the identity provider details page, click Group Mapping. The group mappings are displayed.
2. Click Edit Mapping.
3. Click + Add Mapping.
4. From the Identity Provider Group menu list, choose the IDCS_SandboxGroup.
5. From the OCI Group menu list, select the SandboxGroup.
6. Click Submit.
Users that are members of the Oracle Identity Cloud Service groups mapped to the Oracle Cloud Infrastructure groups
are now listed in the Console on the Users page. See Managing User Capabilities for Federated Users on page 2427
for more information on assigning these users additional credentials.
Create a user
1. Open the navigation menu. Under Governance and Administration, go to Identity and click Federation.
2. Click your Oracle Identity Cloud Service federation. For most tenancies, the federation is named
OracleIdentityCloudService. The identity provider details page is displayed.
3. Click Create IDCS User.
4. In the Create IDCS User dialog enter the following:
• Username: Enter a unique name or email address for the new user. The value will be the user's login to the
Console and must be unique across all other users in your tenancy.
• Email: Enter an email address for this user. The initial sign-in credentials will be sent to this email address.
• First Name: Enter the user's first name.
• Last Name: Enter the user's last name.
• Phone Number: Optionally, enter a phone number.
• Groups: Select the group you created in the previous step, for example, IDCS_SandboxGroup.
5. Click Create.
The user is created in Oracle Identity Cloud Service. This user can't access their account until they complete the
password reset steps.
6. Click Email Password Instructions to send the password link and instructions to the user.
The password link is good for 24 hours. If the user does not reset their password in time, you can generate a new
password link by clicking Reset Password for the user.
When this user signs in, they can see the compartments they have access to, and they can view, create, and manage
resources only in the Sandbox compartment. This user cannot create other users or groups.

Oracle Cloud Infrastructure Tutorials

After you create your Free Trial account, use these tutorials to get started.

• Autonomous Database Quickstart: Create an instance in just a few clicks. Then load data into your database from
Object Storage and query it.
• Use APEX and Autonomous Database: Oracle Application Express (APEX) is a low-code development framework
that enables you to rapidly build modern, data-driven apps right from your browser - no additional tools required.
• Migrating To Oracle Cloud Infrastructure: Migrate MySQL on Amazon RDS to Always Free Autonomous
Database. Launch an Always Free Linux instance and transfer your other application files to Object Storage.
• Analyze data with Autonomous Database: Use Oracle Analytics Desktop (OAD) to visualize data in Autonomous
Database. Use Oracle Machine Learning (OML) to try your hand at predictive analytics.
• Launch a Linux or Windows VM: Create your first virtual cloud network and launch an instance. Optionally,
attach block storage to your instance.
• Launch a VM with the CLI: Use the command line interface to launch a Linux instance or a Windows instance.
This tutorial includes working with a compartment and creating a virtual cloud network.
• Get started with Object Storage: Create your first bucket and upload some objects.
• Launch and test a load balancer: Create a VCN and load balancer, then test it out.
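
As a taste of the CLI tutorial, launching an instance from the command line looks roughly like the following sketch.
Every OCID, the availability domain name, and the key file path are placeholders; the shape and image you reference
must be available in your region.

# Launch a Linux instance (sketch; replace all placeholder values).
oci compute instance launch \
    --availability-domain "<availability-domain-name>" \
    --compartment-id <compartment-OCID> \
    --shape VM.Standard2.1 \
    --image-id <image-OCID> \
    --subnet-id <subnet-OCID> \
    --display-name my-first-instance \
    --ssh-authorized-keys-file ~/.ssh/mykey.pub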

Tutorial - Launching Your First Linux Instance


In this tutorial you'll learn the basic features of Oracle Cloud Infrastructure by performing some guided steps to
launch and connect to an instance. After your instance is up and running, you can optionally create and attach a block
volume to your instance.
In this tutorial you will:
• Create a cloud network and subnet that enables internet access
• Launch an instance
• Connect to the instance
• Add and attach a block volume


The following figure depicts the components you create in the tutorial.

Task Flow to Launch an Instance


Linux instances use an SSH key pair instead of a password to authenticate a remote user. If you do not already have
a key pair, your first task is to create one using common third-party tools (if you have OpenSSH, you can instead
use a key pair that is generated by Oracle Cloud Infrastructure). Next, prepare for your instance by launching a cloud
network with subnets. You will then launch your instance into one of the subnets and connect to it. If you want to
attach some storage, continue with the tutorial to add a cloud block storage volume. When finished with the tutorial,
be sure to terminate the resources that you created.
Prepare:
• Create a key pair.
• Choose a compartment for your resources.
• Create a cloud network.
Launch and connect:
• Launch an instance.
• Connect to your instance.
Add storage and clean up:
• Add a block volume (optional).
• Clean up your resources.


Creating a Key Pair


Linux instances use an SSH key pair instead of a password to authenticate a remote user. A key pair consists of
a private key and a public key. You keep the private key on your computer and provide the public key when you
create an instance. When you connect to the instance using SSH, you provide the path to the private key in the
SSH command.
Caution:

Anyone who has access to the private key can connect to the instance. Store
the private key in a secure location.
If you're connecting to your instance from a computer that has OpenSSH installed, you can use a key pair that is
generated by Oracle Cloud Infrastructure instead of creating your own key pair.
Before You Begin
• If you will connect to your instance from a Windows system using OpenSSH or from a UNIX-based system, you
can use a key pair that is generated by Oracle Cloud Infrastructure and skip this step. OpenSSH is included by
default on Windows 10 and Windows Server 2019. Proceed to Choosing a Compartment on page 64.
• If you already have an SSH-2 RSA key pair, you can use your existing key pair and skip this step. Proceed to
Choosing a Compartment on page 64.
• If you will connect to your instance from a Windows system that does not have OpenSSH, download and install
the PuTTY Key Generator from http://www.putty.org.
Creating an SSH Key Pair on Windows Using PuTTY Key Generator
1. Find puttygen.exe in the PuTTY folder on your computer, for example, C:\Program Files
(x86)\PuTTY. Double-click puttygen.exe to open it.
2. Specify a key type of SSH-2 RSA and a key size of 2048 bits:
• In the Key menu, confirm that the default value of SSH-2 RSA key is selected.
• For the Type of key to generate, accept the default key type of RSA.
• Set the Number of bits in a generated key to 2048 if it is not already set.
3. Click Generate.
4. Move your mouse around the blank area in the PuTTY window to generate random data in the key.
When the key is generated, it appears under Public key for pasting into OpenSSH authorized_keys file.
5. A Key comment is generated for you, including the date and time stamp. You can keep the default comment or
replace it with your own more descriptive comment.
6. Leave the Key passphrase field blank.
7. Click Save private key, and then click Yes in the prompt about saving the key without a passphrase.
The key pair is saved in the PuTTY Private Key (PPK) format, which is a proprietary format that works only with
the PuTTY tool set.
You can name the key anything you want, but use the ppk file extension. For example, mykey.ppk.
8. Select all of the generated key that appears under Public key for pasting into OpenSSH authorized_keys file,
copy it using Ctrl + C, paste it into a text file, and then save the file in the same location as the private key.
(Do not use Save public key because it does not save the key in the OpenSSH format.)
You can name the key anything you want, but for consistency, use the same name as the private key and a file
extension of pub. For example, mykey.pub.
9. Write down the names and location of your public and private key files. You will need the public key when
launching an instance. You will need the private key to access the instance via SSH.
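If you are generating the key pair on a system that already has OpenSSH (for example, Linux, macOS, or a current
Windows 10 or Windows Server 2019 system), you do not need PuTTY Key Generator. The following is a minimal
sketch of the equivalent ssh-keygen command; the file name mykey is only an example:

# Create an SSH-2 RSA key pair; the private key is saved as mykey and the public key as mykey.pub
ssh-keygen -t rsa -b 2048 -f mykey

The public key file (mykey.pub) is the value you provide when you launch an instance.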

Choosing a Compartment
Compartments help you organize and control access to your resources. A compartment is a collection of related
resources (such as cloud networks, compute instances, or block volumes) that can be accessed only by those groups
that have been given permission by an administrator in your organization. For example, one compartment could
contain all the servers and storage volumes that make up the production version of your company's Human Resources
system. Only users with permission to that compartment can manage those servers and volumes.
In this tutorial you use one compartment for all your resources. When you are ready to create a production
environment you will most likely separate these resources in different compartments.
Before You Begin
Sign in to the Console.
Choosing a Compartment
To begin working with a service, you must first select a service, and then select a compartment that you have
permissions in.
1. In this tutorial, the first resource you create is the cloud network. Open the navigation menu. Under Core
Infrastructure, go to Networking and click Virtual Cloud Networks.
2. Select the Sandbox compartment (or the compartment designated by your administrator) from the list on the left,
as shown in the image. If the Sandbox compartment does not exist, you can create it as described in Creating a
Compartment.

Creating a Compartment
1. Open the navigation menu. Under Governance and Administration, go to Identity and click Compartments.
2. Click Create Compartment.
3. Enter the following:
• Name: Enter "Sandbox".
• Description: Enter a description (required), for example: "Sandbox compartment for the getting started
tutorial".
• Parent Compartment: Select the compartment you want this compartment to reside in. Defaults to the root
compartment (or tenancy).
4. Click Create Compartment.
Your compartment is displayed in the list.
5. Return to Choosing a Compartment.
When you select the Sandbox compartment, you will only see resources that are in the Sandbox. When you create
new resources you will be prompted to choose the compartment to create them in, but your current compartment will
be the default. If you change compartments, you must come back to the Sandbox compartment to see the resources
that were created there.


Creating a Virtual Cloud Network


Before you can launch an instance, you need to have a virtual cloud network (VCN) and subnet to launch it into. A
subnet is a subdivision of your VCN. The subnet directs traffic according to a route table. For this tutorial, you'll
access the instance over the internet using its public IP address, so your route table will direct traffic to an internet
gateway. The subnet also uses a security list to control traffic in and out of the instance.
For information about VCN features, see "Overview of Networking" in the Oracle Cloud Infrastructure User Guide .
Before You Begin
• You or an administrator has created a compartment for your network. See Choosing a Compartment on page
64.
Create a Cloud Network Plus Related Resources
Tip:

The Console offers two choices when you create a VCN: to create only the
VCN, or to create the VCN with several related resources that are necessary
if you want to immediately launch an instance. To help you get started
quickly, the following procedure creates the VCN plus the related resources.
1. Open the navigation menu. Under Core Infrastructure, go to Networking and click Virtual Cloud Networks.
Ensure that the Sandbox compartment (or the compartment designated for you) is selected in the Compartment
list on the left.
2. Click Networking Quickstart.
3. Select VCN with Internet Connectivity, and then click Start Workflow.
4. Enter the following:
• VCN Name: Enter a name for your cloud network. The name is incorporated into the names of all the related
resources that are automatically created. Avoid entering confidential information.
• Compartment: This field defaults to your current compartment. Select the compartment you want to create
the VCN and related resources in, if not already selected.
• VCN CIDR Block: Enter a valid CIDR block for the VCN. For example 10.0.0.0/16.
• Public Subnet CIDR Block: Enter a valid CIDR block for the subnet. The value must be within the VCN's
CIDR block. For example: 10.0.0.0/24.
• Private Subnet CIDR Block: Enter a valid CIDR block for the subnet. The value must be within the VCN's
CIDR block and not overlap with the public subnet's CIDR block. For example: 10.0.1.0/24.
• Accept the defaults for any other fields.
5. Click Next.
6. Review the list of resources that the workflow will create for you. Notice that the workflow will set up security list
rules and route table rules to enable basic access for the VCN.
7. Click Create to start the short workflow.
8. After the workflow completes, click View Virtual Cloud Network.
The cloud network has the following resources and characteristics:
• Internet gateway.
• NAT gateway.
• Service gateway with access to the Oracle Services Network.
• A regional public subnet with access to the internet gateway. This subnet uses the VCN's default security list and
default route table. Instances in this subnet may optionally have public IP addresses.
• A regional private subnet with access to the NAT gateway and service gateway. This subnet uses a custom
security list and custom route table that the workflow created. Instances in this subnet cannot have public
IP addresses.
• Use of the Internet and VCN Resolver for DNS.


Important:

This simple cloud network is designed to make it easy to launch an instance


when trying out Oracle Cloud Infrastructure. When you create your
production instances, ensure that you create appropriate security lists and
route table rules to restrict network traffic to your instances.
What's Next
Now you can launch an instance. See Launching a Linux Instance on page 67.

Launching a Linux Instance


Now you will launch an instance with the Oracle Linux image and basic shape. More advanced options are available;
see "Managing Instances" in the Oracle Cloud Infrastructure User Guide for more information.
Before You Begin
• You have created a virtual cloud network (VCN) and public subnet. See Creating a Virtual Cloud Network on
page 66.
• If you will connect to your instance from a Windows system that does not have OpenSSH, you have created an
SSH key pair. See Creating a Key Pair on page 64.
Launching an Instance
1. Open the navigation menu. Under Core Infrastructure, go to Compute and click Instances.
2. Click Create Instance.
3. Enter a name for the instance, for example: <your initials>_Instance. Avoid entering confidential information.
4. In the Placement and hardware section, make the following selections:
a. Accept the default Availability domain.
b. In the Image section, accept the default, Oracle Linux 7.x.
c. In the Shape section, click Change Shape. Then, do the following:
1. For Instance type, accept the default, Virtual Machine.
2. For Shape series, select Intel Skylake, and then choose the VM.Standard2.1 shape (1 OCPU, 15 GB
RAM).
Tip:

To create an instance using the Always Free-eligible
VM.Standard.E2.1.Micro shape, select Specialty and Previous
Generation, and then choose the VM.Standard.E2.1.Micro shape. If
the Micro shape is disabled, cancel out of the shape selection page,
select a different availability domain, and then try again.
The shape defines the number of CPUs and amount of memory allocated to the instance.
3. Click Select Shape.
5. In the Networking section, configure the network details for the instance:
• For Network, leave Select existing virtual cloud network selected.
• Virtual cloud network in <compartment_name>: Select the cloud network that you created. If necessary,
click Change compartment to switch to the compartment containing the cloud network that you created.
• For Subnet, leave Select existing subnet selected.
• Subnet in <compartment_name>: Select the public subnet that was created with your cloud network. If
necessary, click Change compartment to switch to the compartment containing the correct subnet.
• Select the Assign a public IPv4 address option. This creates a public IP address for the instance, which you
need to access the instance. If you have trouble selecting this option, confirm that you selected the public
subnet that was created with your VCN, not a private subnet.
6. In the Add SSH keys section, select one of the following options:
• Generate SSH keys: If you will connect to the instance using OpenSSH, select this option. Click Save
Private Key and then save the private key on your computer. Optionally, click Save Public Key and then save
the public key.
Caution:

Anyone who has access to the private key can connect to the instance.
Store the private key in a secure location.
• Choose SSH key files: If you will connect to the instance using PuTTY, select this option. To upload the
public key portion of the key pair that you want to use for SSH access to the instance, browse to the key file
that you want to upload, or drag and drop the file into the box.
7. In the Boot volume section, leave all the options cleared.
8. Click Create.
The instance is displayed in the Console in a provisioning state. Expect provisioning to take several minutes before
the state updates to running. Do not refresh the page. After the instance is running, allow another few minutes for the
operating system to boot before you attempt to connect.
Getting the Instance Public IP Address
To connect to the instance in the next step, you'll need its public IP address.
To get the instance public IP address:
1. Click the instance name to see its details.


2. The Public IP Address and Username are displayed on the details page under Instance Access, as shown in the
following image:

3. Make a note of the Public IP Address before you continue.

Connecting to Your Instance


You connect to a running Linux instance using a Secure Shell (SSH) connection. Most Linux and UNIX-like
operating systems include an SSH client by default. Windows 10 and Windows Server 2019 systems should include
the OpenSSH client, which you'll need if you created your instance using the SSH keys generated by Oracle
Cloud Infrastructure. For other Windows versions, you can download a free SSH client called PuTTY from http://
www.putty.org.
Before You Begin
• You know the public IP address of your instance. See Launching a Linux Instance on page 67.
• You know the path to the private key file.
Connecting to Your Linux Instance Using SSH
Log in to the instance using SSH.
To connect to a Linux instance from a Unix-style system


1. Use the following command to set the file permissions so that only you can read the file:

chmod 400 <private_key_file>

<private_key_file> is the full path and name of the file that contains the private key associated with the instance
you want to access.
2. Use the following SSH command to access the instance.

ssh -i <private_key_file> <username>@<public-ip-address>

<private_key_file> is the full path and name of the file that contains the private key associated with the instance
you want to access.
<username> is the default username for the instance. For Oracle Linux and CentOS images, the default username
is opc. For Ubuntu images, the default username is ubuntu.
<public-ip-address> is your instance IP address that you retrieved from the Console.
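For example, assuming the private key is saved as ~/.ssh/mykey, the instance uses the Oracle Linux default user opc,
and the public IP address shown in the Console is 203.0.113.10 (an example address only), the two commands would
look like this:

chmod 400 ~/.ssh/mykey
ssh -i ~/.ssh/mykey opc@203.0.113.10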
To connect to a Linux instance from a Windows system using OpenSSH
If the instance uses a key pair that was generated by Oracle Cloud Infrastructure, use the following procedure.
1. If this is the first time you are using this key pair, you must set the file permissions so that only you can read the
file (a command-line alternative is sketched after this procedure). Do the following:
a. In Windows Explorer, navigate to the private key file, right-click the file, and then click Properties.
b. On the Security tab, click Advanced.
c. Ensure that the Owner is your user account.
d. Click Disable Inheritance, and then select Convert inherited permissions into explicit permissions on this
object.
e. Select each permission entry that is not your user account and click Remove.
f. Ensure that the access permission for your user account is Full control.
g. Save your changes.
2. To connect to the instance, open Windows PowerShell and run the following command:

ssh -i <private_key_file> <username>@<public-ip-address>

<private_key_file> is the full path and name of the file that contains the private key associated with the instance
you want to access.
<username> is the default username for the instance. For Oracle Linux and CentOS images, the default username
is opc. For Ubuntu images, the default username is ubuntu.
<public-ip-address> is your instance IP address that you retrieved from the Console.
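As an alternative to the Windows Explorer steps in step 1, you can restrict the private key file's permissions from a
Command Prompt with the built-in icacls tool. This is only a sketch; the file path is a placeholder, and you should
confirm the resulting permissions before connecting:

# Remove inherited permissions, then grant read access to your user account only
icacls <path_to_private_key> /inheritance:r
icacls <path_to_private_key> /grant:r "%USERNAME%:R"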
To connect to a Linux instance from a Windows system using PuTTY
SSH private key files generated by Oracle Cloud Infrastructure are not compatible with PuTTY. If you are using a
private key file generated during the instance creation process you need to convert the file to a .ppk file before you
can use it with PuTTY to connect to the instance.
Convert a generated .key private key file:
1. Open PuTTYgen.
2. Click Load, and select the private key generated when you created the instance. The extension for the key file is
.key.
3. Click Save private key.
4. Specify a name for the key. The extension for new private key is .ppk.
5. Click Save.
Connect to the Linux instance using a .ppk private key file:


If the instance uses a key pair that you created using PuTTY Key Generator, use the following procedure.
1. Open PuTTY.
2. In the Category pane, select Session and enter the following:
• Host Name (or IP address):
<username>@<public-ip-address>
<username> is the default username for the instance. For Oracle Linux and CentOS images, the default
username is opc. For Ubuntu images, the default username is ubuntu.
<public-ip-address> is your instance public IP address that you retrieved from the Console
• Port: 22
• Connection type: SSH
3. In the Category pane, expand Window, and then select Translation.
4. In the Remote character set drop-down list, select UTF-8. The default locale setting on Linux-based instances is
UTF-8, and this configures PuTTY to use the same locale.
5. In the Category pane, expand Connection, expand SSH, and then click Auth.
6. Click Browse, and then select your .ppk private key file.
7. Click Open to start the session.
If this is your first time connecting to the instance, you might see a message that the server's host key is not cached
in the registry. Click Yes to continue the connection.
Tip:

If the connection fails, you may need to update your PuTTY proxy
configuration.
Running Administrative Tasks on the Instance
When you’re logged in as the default user, opc, you can use the sudo command to run administrative tasks.
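For example, to apply operating system updates on an Oracle Linux 7 instance (shown here only as an illustration of
running a command with administrative privileges):

sudo yum update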
What's Next
Now that you've got an instance and have successfully connected to it, consider the following next steps:
• Install software on the instance.
• Add a block volume. See Adding a Block Volume on page 71.
• Add more users to work with Oracle Cloud Infrastructure. See Adding Users on page 58.
• Allow additional users to connect to your instance. See Adding Users on an Instance on page 743.
• Or, if you are finished with your instance, delete the resources that you created in the tutorial. See Cleaning Up
Resources from the Tutorial on page 73.
If you're having trouble connecting, see Troubleshooting the SSH Connection on page 743.

Adding a Block Volume


Block Volume provides network storage to use with your Oracle Cloud Infrastructure instances. After you create,
attach, and mount a volume to your instance, you can use it just as you would a physical hard drive on your computer.
A volume can be attached to a single instance at a time, but you can detach it from one instance and attach to another
instance, keeping your data intact.
This task shows you how to create a volume, attach it to an instance, and then connect the volume to the instance.
For complete details on Block Volume, see "Overview of Block Volume" in the Oracle Cloud Infrastructure User
Guide .
Creating a Volume
1. Open the navigation menu. Under Core Infrastructure, go to Block Storage and click Block Volumes.
2. Click Create Block Volume.


3. In the Create Block Volume dialog, enter the following:
• Create in Compartment: This field defaults to your current compartment. Select the compartment you want
to create the volume in, if not already selected.
• Name: Enter a user-friendly name. Avoid entering confidential information.
• Availability Domain: Select the same availability domain that you selected for your instance. If you followed
the tutorial instructions when launching your instance, this is the first availability domain in the list. The
volume and the instance must be in the same availability domain.
• Size: Enter 50 to create a 50 GB block volume.
• Backup Policy: Do not select a backup policy.
• Tags: Leave the tagging fields blank.
4. Click Create Block Volume.
A 50 GB block volume is displayed in the provisioning state. When the volume is no longer in the provisioning state,
you can attach it to your instance.
Attaching the Volume to an Instance
Next you attach the volume via an iSCSI network connection to your instance:
1. Find your instance: Open the navigation menu. Under Core Infrastructure, go to Compute and click Instances.
2. Click your instance name to view its details.
3. In the Resources section, click Attached Block Volumes.
4. Click Attach Block Volume.
5. Enter the following:
• Select ISCSI.
• Block Volume Compartment: Select the compartment where you created the block volume.
• Select Volume: Select this option.
• Block Volume: Select the block volume from the list.
• Device Path: If the instance supports consistent device paths, you will see a list of device paths. Select one
from the list.
• Require CHAP Credentials: Leave cleared.
Tip:

CHAP is a security protocol. You can leave this box cleared for the
purposes of the tutorial. When you set up your production environment,
Oracle recommends requiring CHAP credentials.
• Access: Select Read/Write.
6. Click Attach.

Connecting to the Volume


After your volume is attached, you can configure the iSCSI connection. You connect to the volume using the
iscsiadm command-line tool. The commands you need to configure, authenticate, and log on are provided by the
Console so you can easily copy and paste them into your instance session window. After the connection is configured,
you can mount the volume on your instance and use it just as you would a physical hard drive.
To connect to your volume:
1. Log on to your instance as described in Connecting to Your Instance on page 69.
2. Open the navigation menu. Under Core Infrastructure, go to Compute and click Instances.
3. Click your instance name to view its details.
4. In the Resources section, click Attached Block Volumes.


5. Click the Actions icon (three dots) next to the volume you just attached and then click iSCSI Commands and
Information.
The iSCSI Commands and Information dialog is displayed. Notice that the dialog displays specific identifying
information about your volume (such as IP address and port) as well as the iSCSI commands you'll need to use.
The commands are ready to use with the appropriate information already included in each command.
6. The Attach Commands configure the iSCSI connection and log on to iSCSI. Copy and paste each command from
the Attach Commands list into the instance session window.
Be sure to paste and run each command individually. There are three attach commands, and each begins with
sudo iscsiadm. (A sketch of what these commands typically look like appears after this procedure.)
7. After entering the final command to log on to iSCSI, you are ready to format (if needed) and mount the volume.
To get a list of mountable iSCSI devices on the instance, run the following command:

sudo fdisk -l

If your disk attached successfully, you'll see it in the returned list as follows:

Disk /dev/sdb: 50.0 GB, 50010783744 bytes, 97677312 sectors


Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 4096 bytes / 1048576 bytes
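The three attach commands that the Console provides in step 6 typically follow the pattern sketched below. The IQN,
IP address, and port shown here are placeholders; always copy the exact commands from the iSCSI Commands and
Information dialog for your volume:

# Register the volume's iSCSI target, set it to attach automatically, and log in
sudo iscsiadm -m node -o new -T <volume_IQN> -p <volume_IP>:3260
sudo iscsiadm -m node -o update -T <volume_IQN> -n node.startup -v automatic
sudo iscsiadm -m node -T <volume_IQN> -p <volume_IP>:3260 -l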

Important:

Connecting to Volumes on Linux Instances


When connecting to volumes on Linux instances, if you want to
automatically mount these volumes on instance boot, you need to use
some specific options in the /etc/fstab file, or the instance may fail to
launch. See Traditional fstab Options on page 532 and fstab Options for
Block Volumes Using Consistent Device Paths on page 531 for more
information.
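The following is a minimal sketch of formatting and mounting the disk interactively, assuming the volume appeared
as /dev/sdb in the fdisk output above and using /mnt/vol1 as an example mount point. Formatting erases any existing
data on the volume, so only format a new, empty volume:

sudo mkfs -t ext4 /dev/sdb        # format the new volume (destroys any existing data)
sudo mkdir -p /mnt/vol1           # create an example mount point
sudo mount /dev/sdb /mnt/vol1     # mount the volume

As noted above, add an entry to /etc/fstab with the recommended options if you want the volume mounted
automatically at boot.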
What's Next
Now that you've got an instance running and attached some storage, consider the following next steps:
• Install your own software on the instance.
• Add more users to work with Oracle Cloud Infrastructure. See Adding Users on page 58.
• Or, if you are finished with your instance, delete the resources that you created in the tutorial. See Cleaning Up
Resources from the Tutorial on page 73.

Cleaning Up Resources from the Tutorial


After you've finished with the resources you created for this tutorial, clean up by terminating the instance and deleting
the resources you don't intend to continue working with.

Detach and Delete the Block Volume


1. Open the navigation menu. Under Core Infrastructure, go to Compute and click Instances.
2. Find your instance in the Instances list and click its name to display its details.
3. In the Resources section on the Instance Details page, click Attached Block Volumes.
4. Find your volume, click the Actions icon (three dots), and then click Detach.
5. Click Continue Detachment and then click OK.
6. When the Console shows the volume status as Detached, you can delete the volume. Open the navigation menu.
Under Core Infrastructure, go to Block Storage and click Block Volumes.
7. Find your volume, click the Actions icon (three dots), and then click Terminate. Confirm when prompted.


Terminate the Instance


1. Open the navigation menu. Under Core Infrastructure, go to Compute and click Instances.
2. In the list of instances, find the instance you created in the tutorial.
3. Click the Actions icon (three dots), and then click Terminate.
4. Select the Permanently delete the attached boot volume check box, and then click Terminate Instance.

Delete the Virtual Cloud Network


1. Open the navigation menu. Under Core Infrastructure, go to Networking and click Virtual Cloud Networks.
2. In the list of VCNs, find the one you created in the tutorial.
3. Click the Actions icon (three dots), and then click Terminate.
4. Click Terminate All to delete all the underlying resources of your VCN.
When all the resources are successfully deleted you can close the dialog.

Tutorial - Launching Your First Windows Instance


In this tutorial you'll learn the basic features of Oracle Cloud Infrastructure by performing some guided steps to
launch and connect to a Windows instance. After your instance is up and running, you can optionally create and
attach a block volume to your instance.
In this tutorial you will:
• Create a cloud network and subnet that enables internet access
• Launch an instance
• Connect to the instance
• Add and attach a block volume
The following figure depicts the components you create in the tutorial.


Task Flow to Launch a Windows Instance


You will connect to your instance using Remote Desktop Connection and a one-time password that is created when
you launch the instance. Before you can launch the instance, you must create a virtual cloud network (VCN) with
subnets. You will then launch your instance into one of the subnets of your VCN and connect to it. If you want to
attach some storage, continue with the tutorial to add a cloud block storage volume. When finished with the tutorial,
be sure to terminate the resources that you created.
Prepare:
• Choose a compartment for your resources.
• Create a cloud network.
Launch and connect:
• Launch a Windows instance.
• Connect to your Windows instance.
Add storage and clean up:
• Add a block volume (optional).
• Clean up your resources.

Choosing a Compartment
Compartments help you organize and control access to your resources. A compartment is a collection of related
resources (such as cloud networks, compute instances, or block volumes) that can be accessed only by those groups
that have been given permission by an administrator in your organization. For example, one compartment could
contain all the servers and storage volumes that make up the production version of your company's Human Resources
system. Only users with permission to that compartment can manage those servers and volumes.


In this tutorial you use one compartment for all your resources. When you are ready to create a production
environment you will most likely separate these resources in different compartments.
Before You Begin
Sign in to the Console.
Choosing a Compartment
To begin working with a service, you must first select a service, and then select a compartment that you have
permissions in.
1. In this tutorial, the first resource you create is the cloud network. Open the navigation menu. Under Core
Infrastructure, go to Networking and click Virtual Cloud Networks.
2. Select the Sandbox compartment (or the compartment designated by your administrator) from the list on the left,
as shown in the image. If the Sandbox compartment does not exist, you can create it as described in Creating a
Compartment.

Creating a Compartment
1. Open the navigation menu. Under Governance and Administration, go to Identity and click Compartments.
2. Click Create Compartment.
3. Enter the following:
• Name: Enter "Sandbox".
• Description: Enter a description (required), for example: "Sandbox compartment for the getting started
tutorial".
• Parent Compartment: Select the compartment you want this compartment to reside in. Defaults to the root
compartment (or tenancy).
4. Click Create Compartment.
Your compartment is displayed in the list.
5. Return to Choosing a Compartment.
When you select the Sandbox compartment, you will only see resources that are in the Sandbox. When you create
new resources you will be prompted to choose the compartment to create them in, but your current compartment will
be the default. If you change compartments, you must come back to the Sandbox compartment to see the resources
that were created there.

Creating a Virtual Cloud Network


Before you can launch an instance, you need to have a virtual cloud network (VCN) and subnet to launch it into. A
subnet is a subdivision of your VCN. The subnet directs traffic according to a route table. For this tutorial, you'll
access the instance over the internet using its public IP address, so your route table will direct traffic to an internet
gateway. The subnet also uses a security list to control traffic in and out of the instance.


For information about VCN features, see "Overview of Networking" in the Oracle Cloud Infrastructure User Guide.
Before You Begin
• You or an administrator has created a compartment for your network. See Choosing a Compartment on page
75.
Create a Cloud Network Plus Related Resources
Tip:

The Console offers two choices when you create a VCN: to create only the
VCN, or to create the VCN with several related resources that are necessary
if you want to immediately launch an instance. To help you get started
quickly, the following procedure creates the VCN plus the related resources.
1. Open the navigation menu. Under Core Infrastructure, go to Networking and click Virtual Cloud Networks.
Ensure that the Sandbox compartment (or the compartment designated for you) is selected in the Compartment
list on the left.
2. Click Networking Quickstart.
3. Select VCN with Internet Connectivity, and then click Start Workflow.
4. Enter the following:
• VCN Name: Enter a name for your cloud network. The name is incorporated into the names of all the related
resources that are automatically created. Avoid entering confidential information.
• Compartment: This field defaults to your current compartment. Select the compartment you want to create
the VCN and related resources in, if not already selected.
• VCN CIDR Block: Enter a valid CIDR block for the VCN. For example 10.0.0.0/16.
• Public Subnet CIDR Block: Enter a valid CIDR block for the subnet. The value must be within the VCN's
CIDR block. For example: 10.0.0.0/24.
• Private Subnet CIDR Block: Enter a valid CIDR block for the subnet. The value must be within the VCN's
CIDR block and not overlap with the public subnet's CIDR block. For example: 10.0.1.0/24.
• Accept the defaults for any other fields.
5. Click Next.
6. Review the list of resources that the workflow will create for you. Notice that the workflow will set up security list
rules and route table rules to enable basic access for the VCN.
7. Click Create to start the short workflow.
8. After the workflow completes, click View Virtual Cloud Network.
The cloud network has the following resources and characteristics:
• Internet gateway.
• NAT gateway.
• Service gateway with access to the Oracle Services Network.
• A regional public subnet with access to the internet gateway. This subnet uses the VCN's default security list and
default route table. Instances in this subnet may optionally have public IP addresses.
• A regional private subnet with access to the NAT gateway and service gateway. This subnet uses a custom
security list and custom route table that the workflow created. Instances in this subnet cannot have public
IP addresses.
• Use of the Internet and VCN Resolver for DNS.
Important:

This simple cloud network is designed to make it easy to launch an instance


when trying out Oracle Cloud Infrastructure. When you create your
production instances, ensure that you create appropriate security lists and
route table rules to restrict network traffic to your instances.


Edit the Default Security List to Allow Traffic to Your Windows Instance
To enable network traffic to reach your Windows instance, you need to add a security list rule to enable Remote
Desktop Protocol (RDP) access. Specifically, for the default security list (which is used by the public subnet), you
need a stateful ingress rule for TCP traffic on destination port 3389 from source 0.0.0.0/0 and any source port.
To edit the VCN's security list:
1. Click the name of the VCN that you just created. Its details are displayed.
2. Under Resources, click Security Lists.
3. Click the default security list for your VCN.
Its details are displayed.
4. Click Add Ingress Rules.
5. Enter the following for your new rule:
a. Source Type: CIDR
b. Source CIDR: 0.0.0.0/0
c. IP Protocol: RDP (TCP/3389)
d. Source Port Range: All
e. Destination Port Range: 3389
6. When done, click Add Ingress Rules.
What's Next
Now you can launch an instance. See Launching a Windows Instance on page 78.

Launching a Windows Instance


Now you will launch an instance with a Windows Server image and basic shape. More advanced options are
available; see "Managing Instances" in the Oracle Cloud Infrastructure User Guide for more information.
Before You Begin
• You have created a virtual cloud network (VCN) and public subnet. See Creating a Virtual Cloud Network on
page 76.
Launching an Instance
1. Open the navigation menu. Under Core Infrastructure, go to Compute and click Instances.
2. Click Create Instance.


3. Enter a name for the instance, for example: <your initials>_Instance. Avoid entering confidential information.
Important:

Use only these ASCII characters in the instance name: uppercase letters
(A-Z), lowercase letters (a-z), numbers (0-9), and hyphens (-). See this
known issue for more information.
4. In the Placement and hardware section, make the following selections:
a. Accept the default Availability domain.
b. In the Image section, click Change Image. Then, do the following:
1. In the Image source list, select Platform images.
2. Select Windows. Then, in the OS version list, select Server 2019 Standard.
3. Review and accept the terms of use, and then click Select Image.
c. In the Shape section, click Change Shape. Then, do the following:
1. For Instance type, accept the default, Virtual Machine.
2. For Shape series, select Intel Skylake, and then choose the VM.Standard2.1 shape (1 OCPU, 15 GB
RAM).
The shape defines the number of CPUs and amount of memory allocated to the instance.
3. Click Select Shape.
5. In the Networking section, configure the network details for the instance. Do not accept the defaults.
• For Network, leave Select existing virtual cloud network selected.
• Virtual cloud network in <compartment_name>: Select the cloud network that you created. If necessary,
click Change compartment to switch to the compartment containing the cloud network that you created.
• For Subnet, leave Select existing subnet selected.
• Subnet in <compartment_name>: Select the public subnet that was created with your cloud network. If
necessary, click Change compartment to switch to the compartment containing the correct subnet.
• Select the Assign a public IPv4 address option. This creates a public IP address for the instance, which you
need to access the instance. If you have trouble selecting this option, confirm that you selected the public
subnet that was created with your VCN, not a private subnet.
6. In the Boot volume section, leave all the options cleared.
7. Click Create.
The instance is displayed in the Console in a provisioning state. Expect provisioning to take several minutes before
the state updates to running. Do not refresh the page. After the instance is running, allow another few minutes for the
operating system to boot before you attempt to connect.
Getting the Instance Public IP Address and Initial Windows Password
To connect to the instance in the next step, you'll need its public IP address and initial password.
To get the instance public IP address and initial password:
1. Click the instance name to see its details.


2. The Public IP Address, Username, and Initial Password are displayed on the details page, as shown in the
following image:

3. To view the Initial Password, click Show. Although the Console offers a copy option, the paste option is
typically not available when you are prompted to enter the password, so be prepared to enter it manually.
4. When you are ready to connect to the instance, make a note of both the public IP address and the initial password.

Connecting to Your Windows Instance


You connect to a running Windows instance using Remote Desktop.
Before You Begin
• You know the public IP address and initial password of your instance, see Launching a Windows Instance on page
78.
• You have Remote Desktop installed.
Connecting to Your Windows Instance from a Remote Desktop Client
1. Open the Remote Desktop client.
2. In the Computer field, enter the public IP address that you retrieved from the Console.
3. The User name is opc. Depending on the Remote Desktop client you are using, you might have to connect to the
instance before you can enter this credential.
4. Click Connect to start the session.
5. Accept the certificate if you are prompted to do so.
6. Enter the initial password that you retrieved from the Console. You will be prompted to change the password as
soon as you log in.
Your new password must be at least 12 characters long and must comply with Microsoft's password policy.
7. Press Enter.
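If you are connecting from a Windows computer, you can also start the Remote Desktop client from a command
prompt and pass the public IP address directly (the address below is an example only):

mstsc /v:203.0.113.25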
Running Administrative Tasks on the Instance
The default user, opc, has administrative privileges.
What's Next
Now that you've got an instance and have successfully connected to it, consider the following next steps:


• Install software on the instance.


• Add a block volume. See Adding a Block Volume to a Windows Instance on page 81.
• Add more users to work with Oracle Cloud Infrastructure. See Adding Users on page 58.
• Allow additional users to connect to your instance. See Adding Users on an Instance on page 743.
• Or, if you are finished with your instance, delete the resources that you created in the tutorial. See Cleaning Up
Resources from the Tutorial on page 82.
If you're having trouble connecting, see Troubleshooting the SSH Connection on page 743.

Adding a Block Volume to a Windows Instance


Block Volume provides network storage to use with your Oracle Cloud Infrastructure instances. After you create,
attach, and mount a volume to your instance, you can use it just as you would a physical hard drive on your computer.
A volume can be attached to a single instance at a time, but you can detach it from one instance and attach to another
instance, keeping your data intact.
This task shows you how to create a volume, attach it to an instance, and then connect the volume to the instance.
For complete details on Block Volume, see "Managing Volumes" in the Oracle Cloud Infrastructure User Guide.
Creating a Volume
1. Open the navigation menu. Under Core Infrastructure, go to Block Storage and click Block Volumes.
2. Click Create Block Volume.
3. In the Create Block Volume dialog box, enter the following:
• Create in Compartment: This field defaults to your current compartment. Select the compartment you want
to create the volume in, if not already selected.
• Name: Enter a user-friendly name. Avoid entering confidential information.
• Availability Domain: Select the same availability domain that you selected for your instance. If you followed
the tutorial instructions when launching your instance, this will be the first availability domain in the list. The
volume and the instance must be in the same availability domain.
• Size: Enter 256 to create a 256 GB block volume.
4. Click Create Block Volume.
A 256 GB block volume is displayed in the list in the provisioning state. When the volume is no longer in the
provisioning state, you can attach it to your instance.
Attaching the Volume to an Instance
Next you attach the volume via an iSCSI network connection to your instance:
1. Find your instance: Open the navigation menu. Under Core Infrastructure, go to Compute and click Instances.
2. Click your instance name to view its details.
3. In the Resources section, click Attached Block Volumes.
4. Click Attach Block Volume.
5. Enter the following:
a. Block Volume Compartment: Select the compartment where you created the block volume.
b. Block Volume: Select the block volume from the list.
c. Require CHAP Credentials: Leave cleared.
Tip:

CHAP is a security protocol. You can leave this box cleared for the
purposes of the tutorial. When you set up your production environment,
Oracle recommends requiring CHAP credentials.
6. Click Attach.


Connecting to the Volume


After your volume is attached, you can configure the iSCSI connection. After the connection is configured, you can
mount the volume on your instance and use it just as you would a physical hard drive.
To connect to your volume:
1. Log on to your instance as described in Connecting to Your Windows Instance on page 80.
2. Open the navigation menu. Under Core Infrastructure, go to Compute and click Instances.
3. Click your instance name to view the instance details.
4. In the Resources section, click Attached Block Volumes.
5. Click the Actions icon (three dots) next to the volume you just attached and then click iSCSI Commands and
Information.
The iSCSI Commands and Information dialog box opens. Notice that the dialog box displays specific
identifying information about your volume (such as IP address and port) as well as the iSCSI commands that you
can use.
6. On your Windows instance, open the iSCSI Initiator.
For example: Open Server Manager, click Tools, and select iSCSI Initiator.
7. In the iSCSI Initiator Properties dialog box, click the Discovery tab.
8. Click Discover Portal.
9. Enter the block volume IP address and port. Click OK.
10. Click the Targets tab.
11. In the Discovered Targets region, select the volume IQN.
12. Click Connect and then click OK to close the dialog.
13. You are now ready to format (if needed) and mount the volume. To get a list of mountable iSCSI devices on the
instance, in Server Manager, click File and Storage Services and then click Disks.
The 256 GB disk is displayed in the list.
What's Next
Now that you've got an instance running and attached some storage, consider the following next steps:
• Install your own software on the instance.
• Add more users to work with Oracle Cloud Infrastructure. See Adding Users on page 58.
• Or, if you are finished with your instance, delete the resources that you created in the tutorial. See Cleaning Up
Resources from the Tutorial on page 82.

Cleaning Up Resources from the Tutorial


After you've finished with the resources you created for this tutorial, clean up by terminating the instance and deleting
the resources you don't intend to continue working with.

Detach and Delete the Block Volume


1. Open the navigation menu. Under Core Infrastructure, go to Compute and click Instances.
2. Find your instance in the Instances list and click its name to display its details.
3. In the Resources section on the Instance Details page, click Attached Block Volumes.
4. Find your volume, click the Actions icon (three dots), and then click Detach.
5. Click Continue Detachment and then click OK.
6. When the Console shows the volume status as Detached, you can delete the volume. Open the navigation menu.
Under Core Infrastructure, go to Block Storage and click Block Volumes.
7. Find your volume, click the Actions icon (three dots), and then click Terminate. Confirm when prompted.

Terminate the Instance


1. Open the navigation menu. Under Core Infrastructure, go to Compute and click Instances.


2. In the list of instances, find the instance you created in the tutorial.
3. Click the Actions icon (three dots), and then click Terminate.
4. Select the Permanently delete the attached boot volume check box, and then click Terminate Instance.

Delete the Virtual Cloud Network


1. Open the navigation menu. Under Core Infrastructure, go to Networking and click Virtual Cloud Networks.
2. In the list of VCNs, find the one you created in the tutorial.
3. Click the Actions icon (three dots), and then click Terminate.
4. Click Terminate All to delete all the underlying resources of your VCN.
When all the resources are successfully deleted you can close the dialog.

Putting Data into Object Storage


Object Storage provides reliable, secure, and scalable object storage. Object storage is a storage architecture that
stores and manages data as objects. Some typical use cases include data backup, file sharing, and storing unstructured
data like logs and sensor-generated data.
Object Storage uses buckets to organize your files. To use Object Storage, first create a bucket and then begin adding
data files.
Use this procedure to quickly get started. For more details, see "Overview of Object Storage" in the Oracle Cloud
Infrastructure User Guide.

Creating a Bucket
To create a bucket to store objects:
1. Open the navigation menu. Under Core Infrastructure, click Object Storage.
A list of the buckets in the compartment you're viewing is displayed.
2. Select a compartment from the Compartment list on the left side of the page.
A list of existing buckets is displayed.
3. Click Create Bucket.
4. In the Create Bucket dialog box, specify the attributes of the bucket:
• Bucket Name: The system generates a default bucket name that reflects the current year, month, day, and
time, for example bucket-20190306-1359. If you change this default to any other bucket name, use letters,
numbers, dashes, underscores, and periods. Avoid entering confidential information.
• Default Storage Tier: Select the default tier in which you want to store your data. Once set, you cannot
change the default storage tier of a bucket. When you upload objects, this tier will be selected by default. You
can, however, select a different tier. Available default tiers include:
• Standard is the primary, default storage tier used for Object Storage service data. Use the Standard tier
for storing frequently accessed data that requires fast and immediate access.
• Archive is the default storage tier used for Archive Storage service data. Use the Archive tier for
storing rarely accessed data that requires long retention periods. Access to data in the Archive tier is not
immediate. Archived data must be restored before the data is accessible.
• Object Events: Select Emit Object Events if you want to enable the bucket to emit events for object state
changes. For more information about events, see Overview of Events on page 1788.
• Encryption: Buckets are encrypted with keys managed by Oracle by default, but you can optionally encrypt
the data in this bucket using your own Vault encryption key. To use Vault for your encryption needs, select
Encrypt Using Customer-Managed Keys. Then, select the Vault Compartment and Vault that contain the
master encryption key you want to use. Also select the Master Encryption Key Compartment and Master
Encryption Key. For more information about encryption, see Overview of Vault on page 3988. For details
on how to create a vault, see Managing Vaults on page 3993.
• Tags: If you have permissions to create a resource, then you also have permissions to apply free-form tags
to that resource. To apply a defined tag, you must have permissions to use the tag namespace. For more
information about tagging, see Resource Tags on page 213. If you are not sure whether to apply tags, then
skip this option (you can apply tags later) or ask your administrator.
5. Click Create Bucket.
The bucket is created immediately and you can add objects to it. Objects added to archive buckets are immediately
archived and must be restored before they are available for download.
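You can also create a bucket with the CLI. A minimal sketch, assuming the CLI is configured and <compartment_id>
and <bucket_name> are replaced with your own values:

oci os bucket create -c <compartment_id> --name <bucket_name>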

Uploading Files to a Bucket


Object Storage supports uploading individual files up to 10 TiB. Because memory capacity and browser capability
can impact uploading objects using the Console, use the CLI, SDK, or API for larger files. See "Developer Tools" in
the Oracle Cloud Infrastructure User Guide.
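For example, a minimal CLI sketch for uploading a single file (the bucket name and file path are placeholders):

oci os object put --bucket-name <bucket_name> --file <path_to_local_file>

For very large files, see the CLI help (oci os object put -h) for the available upload options.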
To upload files to your bucket using the Console:
1. From the Object Storage Buckets screen, click the bucket name to view its details.
2. Click Upload.
3. In the Object Name Prefix field, optionally specify a file name prefix for the files that you plan to upload.
4. If the Storage Tier field displays Standard, you can optionally change the storage tier that the objects are uploaded to.
5. Select the object or objects to upload in one of two ways:
• Drag files from your computer into the Drop files here ... section.
• Click the select files link to display a file selection dialog box.
As you select files to upload, they are displayed in a scrolling list. If you decide that you do not want to upload a
file that you have selected, click the X icon to the right of the file name.
If any files you select for upload have the same name as files already stored in the bucket, messages warning you of
an overwrite are displayed.
6. Click Upload.
The selected objects are uploaded. Click Close to return to the bucket.
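To verify the upload or retrieve an object later without the Console, you can use the CLI. A minimal sketch with
placeholder names:

oci os object list --bucket-name <bucket_name>
oci os object get --bucket-name <bucket_name> --name <object_name> --file <local_file_name>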

What's Next
For information on managing and accessing your object files, see "Overview of Object Storage" in the Oracle Cloud
Infrastructure User Guide.

Getting Started with the Command Line Interface


This topic provides a walk-through of the commands required to launch a Linux instance and a Windows instance.
This tutorial includes working with a compartment, creating a virtual cloud network, and launching instances.

About the Command Line Interface (CLI)


The CLI is a tool that lets you work with most of the available services in Oracle Cloud Infrastructure. The CLI
provides the same core functionality as the Console, plus additional commands. The CLI's functionality and
command help are based on the service's API.


Getting Help with Commands


You can get inline help using the --help, -h, or -? keywords. For example:

oci --help

oci bv volume -h

oci os bucket create -?

You can also view all the CLI help in your browser.
About the CLI Examples
The examples in this document are grouped as a command and a response, where:
• You are told what the command does, and given the command to use
• The result of the command is shown in a response block that follows the command
The next example shows a command and response group.
To get the namespace for your tenancy, run the following command.

oci os ns get

Response
Note:

Understanding Response Output


This response to the oci os ns get command shows the standard output,
which is returned in JSON format. JSON objects are written as key/value
pairs, with the key and value separated by a colon. For example:

{
"data": "docs",
"id": "ocid1.compartment.oc1..aaaaaaaal3gzijdhqol2pglie6astxxeyqdqeyg35nz5zxil2...",
"is-stateless": null
}

A key like "id" isn't very informative on its own. To understand what the JSON object refers to, you have to read the
key's value.

{
"data": "docs"
}

Most of the command and response groups in this guide aren't as simple as the preceding example. However, as you
work through the tasks, they become easier to read and work with.

Before You Begin


Before you start using the command line interface, verify that you meet all the requirements described in "Command
Line Interface (CLI)" in the Oracle Cloud Infrastructure User Guide .
As a best practice, complete the tasks in this tutorial in a test environment. This approach ensures that your
configurations do not affect other environments in the tenancy. At the end of the tutorial, you can safely delete the test
resources.
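For example, you can confirm that the CLI is installed and set up a configuration file by running the following
commands (oci setup config interactively creates ~/.oci/config if you have not set one up yet):

oci --version
oci setup config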


Working in a Compartment
In this tutorial, you use one compartment for all your resources. When you are ready to create a production
environment, you will most likely separate these resources in different compartments.
You can either use an existing compartment (recommended), or create a new compartment.
Choose a Compartment
Help: oci iam compartment list -h
To list the compartments in your tenancy, run the following command.

oci iam compartment list -c <tenancy_id>

Command Example and Response

oci iam compartment list -c ocid1.tenancy.oc1..aaaaaaaal1fvgn0h9njji5u6ldrwb4l6aay2x87qatw2wte30f714lal9oom

{
"data": [
{
"compartment-id":
"ocid1.tenancy.oc1..aaaaaaaal1fvgn0h9njji5u6ldrwb4l6aay2x87qatw2wte30f714lal9oom",
"description": "For testing CLI features",
"id":
"ocid1.tenancy.oc1..aaaaaaaal1fvgn0h9njji5u6ldrwb4l6aay2x87qatw2wte30f714lal9oom",
"inactive-status": null,
"lifecycle-state": "ACTIVE",
"name": "CLIsandbox",
"time-created": "2017-06-27T18:52:52.214000+00:00"
},
{
"compartment-id":
"ocid1.tenancy.oc1..aaaaaaaal1fvgn0h9njji5u6ldrwb4l6aay2x87qatw2wte30f714lal9oom",
"description": "for testing",
"id":
"ocid1.compartment.oc1..aaaaaaaasqn3hj6e5tq6slj4rpdqqja7qsyuqipmu4sv5ucmyp3rkmrhuv2q",
"inactive-status": null,
"lifecycle-state": "ACTIVE",
"name": "CLISandbox",
"time-created": "2017-05-12T21:31:27.709000+00:00"
}
],
"opc-next-page": "
AAAAAAAAAAGLB28zJTjPUeNvgmLxg9QuJdAAZrl10FfKymIMh4ylXItQkO_Xk6RXbGxCn8hgkYm_pRpf1v6hVox
PTB53F2TZ31dReLsbzxBa3ljbwqQgwzQsUPYROLXA40EIJFdr2oYp67AzozSW8jt8MWFC8y19PsHEEEBW1jw8TT7
VgP3ZFu6Y-Rab-
gPNtjsT4pLh91BkDKWzbyHr0OmH4W1rhTJ5HfZ8YGpA0Ntm7_rOyNBd06qeBU496AQHk24-
U_l9p4NvAvHuJ_fR-
Z6ahgvWPlZQc1iCTRlJ6leM7ED3JNehIV0onOVQvGquJpF2WeEWFPcioQaqf4iScqHEchV--3Mn2k1yP_-
b4AsVtSPRFYG8UuiRACPzg6ENVFjyeGOk3rrHjLR3j7s61pdgqtMOKZ1WtbOV8AcNON8ac1xJPN7O2YmjO3D0H4J
iT"
}
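Because the compartment OCID is needed in most later commands, it can be convenient to capture it in a variable.
The following is a minimal sketch for a Bash shell, using the CLI's --query (JMESPath) and --raw-output options and
assuming a compartment named CLIsandbox exists in your tenancy, as in the example response above:

export COMPARTMENT_ID=$(oci iam compartment list -c <tenancy_id> --query "data[?name=='CLIsandbox'].id | [0]" --raw-output)
echo $COMPARTMENT_ID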

Create a Compartment
Help: oci iam compartment create -h
Before you create a compartment, review "Working with Compartments" in the Oracle Cloud Infrastructure User
Guide to understand compartment design, resource management, and compartment constraints.


To create a compartment, run the following command.

oci iam compartment create --name <compartment_name> -c <root_compartment_id> --description "<friendly_description>"

Command Example and Response

oci iam compartment create --name CLIsandbox -c ocid1.tenancy.oc1..aaaaaaaal1fvgn0h9njji5u6ldrwb4l6aay2x87qatw2wte30f714lal9oom --description "For testing CLI features"

{
"data": {
"compartment-id":
"ocid1.tenancy.oc1..aaaaaaaawuu4tdkysd2ups5fsclgm5ksfjwmx6mwem5sbjyw5ob5ojq2vkxa",
"description": "For testing CLI features",
"id":
"ocid1.compartment.oc1..aaaaaaaalkqnr7pfd92rdrwo5fm6fcoufoih1vd4ls4j9jjpge16vfyxrc1l",
"inactive-status": null,
"lifecycle-state": "ACTIVE",
"name": "CLIsandbox",
"time-created": "2017-06-27T18:52:52.214000+00:00"
},
"etag": "24a4737ede9d34eae934c93e9549ee684a15efc8"
}

Tip:

Keep track of the information that's returned when you run commands. In
several cases, you need this information as you work through this document.
For example, the preceding command returns the OCID for the tenancy,
which is also the root compartment.

"compartment-id":
"ocid1.tenancy.oc1..aaaaaaaawuu4tdkysd2ups5fsclgm5ksfjwmx6mwem5sbjyw5ob5o

Creating a Virtual Cloud Network


Before you can launch any instances, you have to create a virtual cloud network (VCN) and related resources. The
following tasks are used to prepare the network environment:
1. Create the Virtual Cloud Network
Help: oci network vcn create -h
Create the VCN, specifying a DNS name and a CIDR block range.
To create the VCN, run the following command.

oci network vcn create --compartment-id <compartment_id> --display-name


"<friendly_name>" --dns-label <dns_name> --cidr-block "<0.0.0.0/0>"

Command Example and Response

oci network vcn create --compartment-id


ocid1.compartment.oc1..aaaaaaaalkqnr7pfd92rdrwo5fm6fcoufoih1vd4ls4j9jjpge16vfyxrc1l
--display-name "cli_vcn" --dns-label sandboxvcn1 --cidr-block
"10.0.0.0/16"


"data": {
"cidr-block": "10.0.0.0/16",
"compartment-id":
"ocid1.compartment.oc1..aaaaaaaalkqnr7pfd92rdrwo5fm6fcoufoih1vd4ls4j9jjpge16vfyxrc1l"
"default-dhcp-options-id":
"ocid1.dhcpoptions.oc1.phx.aaaaaaaaexnsdwsjmxnmmt4tpzkcbyengrnfpgnqzlkzz7qfx6faeqfbtc
"default-route-table-id":
"ocid1.routetable.oc1.phx.aaaaaaaagdjre4rmk5dq6qqkftjtzyn7vctemqga3i6qrxvf23stedpujo2
"default-security-list-id":
"ocid1.securitylist.oc1.phx.aaaaaaaaxa3cr5zqshmed7zf64bxcrxb2zerinxhc52zrqe5w27hrau75
"display-name": "cli_vcn",
"dns-label": "sandboxvcn1",
"id":
"ocid1.vcn.oc1.phx.aaaaaaaa6va8fxr1m4hvzjk3nzo8x290qymdrwiblxw5qpzlm64rdd74vchr",
"lifecycle-state": "AVAILABLE",
"time-created": "2017-06-27T22:14:15.683000+00:00","vcn-domain-name":
"sandboxvcn1.oraclevcn.com"
},
"etag": "9037efc5"
}

You can get information about any of your configurations by sending queries to your tenancy.
For example, to get network information, run the following command.

oci network vcn list -c <compartment_id>

Command Example and Response

oci network vcn list -c


ocid1.compartment.oc1..aaaaaaaalkqnr7pfd92rdrwo5fm6fcoufoih1vd4ls4j9jjpge16vfyxrc1l

{
"data": [
{
"cidr-block": "10.0.0.0/16",
"compartment-id":
"ocid1.compartment.oc1..aaaaaaaalkqnr7pfd92rdrwo5fm6fcoufoih1vd4ls4j9jjpge16vfyxrc1l"
"default-dhcp-options-id":
"ocid1.dhcpoptions.oc1.phx.aaaaaaaaexnsdjmxncbydrwrnfwspgnqzlkzz7qfmmt4tpzkx6faeqfbtc
"default-route-table-id":
"ocid1.routetable.oc1.phx.aaaaaaaagdjre4jtzyn7vctmqga3i6qrxvf2rmk5dqdrwqkft3stedpujo2
"default-security-list-id":
"ocid1.securitylist.oc1.phx.aaaaaaaaxa3cr5zqsdrwxb2zerinxhc52zrqe5wmed74bxczf27hrau75
"display-name": "cli_vcn",
"dns-label": "sandboxvcn1",
"id":
"ocid1.vcn.oc1.phx.aaaaaaaa6va8fxr1m4hvzjk3nzo8x290qymdrwiblxw5qpzlm64rdd74vchr",
"lifecycle-state": "AVAILABLE",
"time-created": "2017-06-27T22:14:15.683000+00:00",
"vcn-domain-name": "sandboxvcn1.oraclevcn.com"
}
]
}
2. Configure a Security List Ingress Rule
Help: oci network security-list create -h
When you create a VCN, a default security list is created for you. However, the Windows instance also requires
inbound traffic enabled for port 3389. The preferred approach is to create a second list that addresses the Windows
port requirement. You use the --security-list-ids option to associate both security lists with the subnet
when you create it.
Note:

Passing JSON Strings in the CLI


The next command passes complex input as a JSON text string. For help
with formatting JSON input, especially when working in a Windows
environment, see "Passing Complex Input" in the Oracle Cloud
Infrastructure User Guide.
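If escaping the quotation marks in your shell becomes awkward, an alternative sketch is to save each JSON array to a file and pass it with the file:// prefix. The file names below are illustrative; each file would contain the same JSON array that appears inline in the following command.

# egress-rules.json and ingress-rules.json are hypothetical files holding the rule arrays.
oci network security-list create -c <compartment_id> --vcn-id <vcn_id> --display-name <rule_name> --egress-security-rules file://egress-rules.json --ingress-security-rules file://ingress-rules.json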
To create a new security list and configure the ingress rule for port 3389, run the following command.

oci network security-list create -c <compartment_id>
--egress-security-rules "[{"destination": "<0.0.0.0/0>", "protocol": "<6>",
"isStateless": <true>, "tcpOptions": {"destinationPortRange": <null>,
"sourcePortRange": <null>}}]" --ingress-security-rules "[{"source":
"<0.0.0.0/0>", "protocol": "<6>", "isStateless": <false>,
"tcpOptions": {"destinationPortRange": {"max": <3389>, "min": <3389>},
"sourcePortRange": <null>}}]" --vcn-id <vcn_id> --display-name <rule_name>

Command Example and Response

oci network security-list create -c


ocid1.compartment.oc1..aaaaaaaalkqnr7pfd92rdrwo5fm6fcoufoih1vd4ls4j9jjpge16vfyxrc1l
--egress-security-rules "[{"destination": "0.0.0.0/0", "protocol":
"6", "isStateless": true, "tcpOptions": {"destinationPortRange":
null, "sourcePortRange": null}}]" --ingress-security-rules
"[{"source": "0.0.0.0/0", "protocol": "6", "isStateless":
false, "tcpOptions": {"destinationPortRange": {"max":
3389, "min": 3389}, "sourcePortRange": null}}]" --vcn-id
ocid1.vcn.oc1.phx.aaaaaaaa6va8fxr1m4hvzjk3nzo8x290qymdrwiblxw5qpzlm64rdd74vchr
--display-name port3389rule

{
"data": {
"compartment-id":
"ocid1.compartment.oc1..aaaaaaaalkqnr7pfd92rdrwo5fm6fcoufoih1vd4ls4j9jjpge16vfyxrc1l"
"display-name": "port3389rule",
"egress-security-rules": [
{
"destination": "0.0.0.0/0",
"icmp-options": null,
"is-stateless": true,
"protocol": "6",
"tcp-options": {
"destination-port-range": {
"max": null,
"min": null
},
"source-port-range": null
},
"udp-options": null
}
],
"id":
"ocid1.securitylist.oc1.phx.aaaaaaaa7snx4jjfons6o2h33drwdh5hev6elir55hnrhi2ywqfnd5rcq
"ingress-security-rules": [
{
"icmp-options": null,


"is-stateless": false,
"protocol": "6",
"source": "0.0.0.0/0",
"tcp-options": {
"destination-port-range": {
"max": 3389,
"min": 3389
},
"source-port-range": null
},
"udp-options": null
}
],
"lifecycle-state": "AVAILABLE",
"time-created": "2017-08-23T19:50:58.104000+00:00",
"vcn-id":
"ocid1.vcn.oc1.phx.aaaaaaaa6va8fxr1m4hvzjk3nzo8x290qymdrwiblxw5qpzlm64rdd74vchr"
},
"etag": "d063779e"
}
3. Create a Subnet
Help: oci iam availability-domain list -h, oci network subnet create -h
In this next step, you have to provide the OCIDs for the default security list and the new security list. If you didn't
record these OCIDs, use the oci network security-list list command to get a list of the security
lists in the virtual cloud network.
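For example, a command of the following form returns the security lists in your VCN (the parameters mirror the other list commands in this tutorial):

# Returns both the default security list and the port3389rule list created in the previous step.
oci network security-list list -c <compartment_id> --vcn-id <vcn_id>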
Before you create a subnet, you have to find out which availability domains are available to create the subnet in.
To get the availability domain list for your compartment, run the following command.

oci iam availability-domain list -c <compartment_id>

Command Example and Response

oci iam availability-domain list -c


ocid1.compartment.oc1..aaaaaaaalkqnr7pfd92rdrwo5fm6fcoufoih1vd4ls4j9jjpge16vfyxrc1l

{
"data": [
{
"compartment-id":
"ocid1.compartment.oc1..aaaaaaaalkqnr7pfd92rdrwo5fm6fcoufoih1vd4ls4j9jjpge16vfyxrc1l"
"name": "EMIr:PHX-AD-1"
},
{
"compartment-id":
"ocid1.compartment.oc1..aaaaaaaalkqnr7pfd92rdrwo5fm6fcoufoih1vd4ls4j9jjpge16vfyxrc1l"
"name": "EMIr:PHX-AD-2"
},
{
"compartment-id":
"ocid1.compartment.oc1..aaaaaaaalkqnr7pfd92rdrwo5fm6fcoufoih1vd4ls4j9jjpge16vfyxrc1l"
"name": "EMIr:PHX-AD-3"
}
]
}


To create a subnet in AD-1, run the following command.

oci network subnet create --vcn-id <vcn_id> -c <compartment_id>


--availability-domain "<availability_domain_name>"
--display-name <display_name> --dns-label "<dns_label>"
--cidr-block "<10.0.0.0/16>" --security-list-ids
"["<default_security_list_id>","<new_security_list_id>"]"

Command Example and Response

oci network subnet create --vcn-id


ocid1.vcn.oc1.phx.aaaaaaaah2ast7desae6ok3amu64wozj3kskox75awryr5j2nd7tkocplajq
-c
ocid1.compartment.oc1..aaaaaaaalkqnr7pfd92rdrwo5fm6fcoufoih1vd4ls4j9jjpge16vfyxrc1l
--availability-domain "EMIr:PHX-AD-1" --display-name CLISUB --dns-
label "vminstances" --cidr-block "10.0.0.0/16" --security-list-ids
"["ocid1.securitylist.oc1.phx.aaaaaaaaw7c62ybv4f5drwv2mup3f75aiquhbkbh4s676muq5t7j5tj

{
"data": {
"availability-domain": "EMIr:PHX-AD-1",
"cidr-block": "10.0.0.0/16",
"compartment-id":
"ocid1.compartment.oc1..aaaaaaaalkqnr7pfd92rdrwo5fm6fcoufoih1vd4ls4j9jjpge16vfyxrc1l"
"dhcp-options-id":
"ocid1.dhcpoptions.oc1.phx.aaaaaaaaexnsdjmxnmmt4tpzkengrnfwspgnqzldrw7qfx6cbyfaeqfbtc
"display-name": "CLISUB",
"dns-label": "vminstances",
"id":
"ocid1.subnet.oc1.phx.aaaaaaaahvx05fhw7p320cxmdrwo5wlf50egig9cmdzs1plb1xl6c5wvb5s2",
"lifecycle-state": "PROVISIONING",
"prohibit-public-ip-on-vnic": false,
"route-table-id":
"ocid1.routetable.oc1.phx.aaaaaaaagdjqga3i6qrxvf23stedpre4rmkdrw6qeqkftjtzyn7vctmujo2
"security-list-ids": [

"ocid1.securitylist.oc1.phx.aaaaaaaaw7c62ybv4f5drwv2mup3f75aiquhbkbh4s676muq5t7j5tjck

"ocid1.securitylist.oc1.phx.aaaaaaaa7snx4jjfons6o2h33drwdh5hev6elir55hnrhi2ywqfnd5rcq
],
"subnet-domain-name": vminstances.sandboxvcn1.oraclevcn.com,
"time-created": "2017-08-24T00:51:30.462000+00:00",
"vcn-id":
"ocid1.vcn.oc1.phx.aaaaaaaa6va8fxr1m4hvzjk3nzo8x290qymdrwiblxw5qpzlm64rdd74vchr",
"virtual-router-ip": "10.0.0.1",
"virtual-router-mac": "00:00:17:7F:8A:D7"
},
"etag": "92d20c35"
}


4. Create an Internet Gateway


Help: oci network internet-gateway create -h
To create an Internet Gateway, run the following command.

oci network internet-gateway create -c <compartment_id> --is-enabled <true> --vcn-id <vcn_id> --display-name <gateway_display_name>

Command Example and Response

oci network internet-gateway create -c


ocid1.compartment.oc1..aaaaaaaalkqnr7pfd92rdrwo5fm6fcoufoih1vd4ls4j9jjpge16vfyxrc1l
--is-enabled true --vcn-id
ocid1.vcn.oc1.phx.aaaaaaaa6va8fxr1m4hvzjk3nzo8x290qymdrwiblxw5qpzlm64rdd74vchr
--display-name sbgateway

{
"data": {
"compartment-id":
"ocid1.compartment.oc1..aaaaaaaalkqnr7pfd92rdrwo5fm6fcoufoih1vd4ls4j9jjpge16vfyxrc1l"
"display-name": "sbgateway",
"id":
"ocid1.internetgateway.oc1.phx.aaaaaaaa3vcd7gmqqh4po6wnsjhcdkxlddeqinmnbanzz2wsh5gdrw
"is-enabled": true,
"lifecycle-state": "AVAILABLE",
"time-created": "2017-08-25T20:03:48.482000+00:00",
"vcn-id":
"ocid1.vcn.oc1.phx.aaaaaaaa6va8fxr1m4hvzjk3nzo8x290qymdrwiblxw5qpzlm64rdd74vchr"
},
"etag": "d13fb7e3"
}
5. Add a Rule to the Route Table
Help: oci network route-table list -h, oci network route-table update -h
When you create a VCN, a route table is created automatically. Before you add a rule to the route table, you need
the OCID for the table.
To get the route table OCID, run the following command.

oci network route-table list -c <compartment_id> --vcn-id <vcn_id>

Command Example and Response

oci network route-table list -c


ocid1.compartment.oc1..aaaaaaaalkqnr7pfd92rdrwo5fm6fcoufoih1vd4ls4j9jjpge16vfyxrc1l
--vcn-id
ocid1.vcn.oc1.phx.aaaaaaaa6va8fxr1m4hvzjk3nzo8x290qymdrwiblxw5qpzlm64rdd74vchr

{
"data": [
{
"compartment-id":
"ocid1.compartment.oc1..aaaaaaaalkqnr7pfd92rdrwo5fm6fcoufoih1vd4ls4j9jjpge16vfyxrc1l"
"display-name": "Default Route Table for cli_vcn",
"id":
"ocid1.routetable.oc1.phx.aaaaaaaagdjqga3i6qrxvf23stedpre4rmkdrw6qeqkftjtzyn7vctmujo2
"lifecycle-state": "AVAILABLE",
"route-rules": [],
"time-created": "2017-08-25T21:46:04.324000+00:00",


"vcn-id":
"ocid1.vcn.oc1.phx.aaaaaaaa6va8fxr1m4hvzjk3nzo8x290qymdrwiblxw5qpzlm64rdd74vchr"
}
]
}

The information in the previous response shows that there is a route table without any rules: "route-rules":
[]. Because the table exists, you create a rule by updating the table. When you run the next command, you get a
warning about updates to route rules. Any update to the route rules replaces all the existing rules. If you want to
continue and process the update, enter "y".
To update the route rules, run the following command.

oci network route-table update --rt-id <route_table_id> --route-rules


"[{"cidrBlock":"<0.0.0.0/0>","networkEntityId":"<internet_gateway_id>"}]
WARNING: Updates to route-rules will replace any existing values. Are you
sure you want to continue? [y/N]: y

Command Example and Response

oci network route-table update --rt-id


ocid1.routetable.oc1.phx.aaaaaaaagdjqga3i6qrxvf23stedpre4rmkdrw6qeqkftjtzyn7vctmujo2q
--route-rules
"[{"cidrBlock":"0.0.0.0/0","networkEntityId":"ocid1.internetgateway.oc1.phx.aaaaaaaa3
WARNING: Updates to route-rules will replace any existing values. Are you
sure you want to continue? [y/N]: y

{
"data": {
"compartment-id":
"ocid1.compartment.oc1..aaaaaaaalkqnr7pfd92rdrwo5fm6fcoufoih1vd4ls4j9jjpge16vfyxrc1l"
"display-name": "Default Route Table for cli_vcn",
"id":
"ocid1.routetable.oc1.phx.aaaaaaaa4kujevzdsnd7bh6aetvrhvzdrwcxmmblspmyj3pqwckchajvz6f
"lifecycle-state": "AVAILABLE",
"route-rules": [
{
"cidr-block": "0.0.0.0/0",
"network-entity-id":
"ocid1.internetgateway.oc1.phx.aaaaaaaa3vcd7gmqqh4po6wnsjhcdkxlddeqinmnbanzz2wsh5gdrw
}
],
"time-created": "2017-08-25T23:46:04.324000+00:00","vcn-id":
"ocid1.vcn.oc1.phx.aaaaaaaa6va8fxr1m4hvzjk3nzo8x290qymdrwiblxw5qpzlm64rdd74vchr"
},
"etag": "3fc998d8"
}

Preparing to Launch an Instance


When you launch an instance you have to provide the following information, some of which you've already obtained:
• compartment-id
• availability-domain
• subnet-id
• image-id
• shape


1. Get Information About the Available Images


Help: oci compute image list -h
The image-id identifies the operating system that you want to install. For more information, see "Oracle-
Provided Images" in the Oracle Cloud Infrastructure User Guide .
To get a list of images, run the following command.

oci compute image list -c <compartment_id>

Command Example and Response


Images are available for: Oracle Linux, CentOS, Ubuntu, and Windows Server. This response example only
shows the information for Oracle Linux 7.3.

oci compute image list -c


ocid1.compartment.oc1..aaaaaaaalkqnr7pfd92rdrwo5fm6fcoufoih1vd4ls4j9jjpge16vfyxrc1l

{
"base-image-id": null,
"compartment-id": null,
"create-image-allowed": true,
"display-name": "Oracle-Linux-7.3-2017.03.03-0",
"id":
"ocid1.image.oc1.phx.aaaaaaaaevkccomzepja4yhahz6rguhqbuomuto7gdrw5hjimqsig6syeqda",
"lifecycle-state": "AVAILABLE",
"operating-system": "Oracle Linux",
"operating-system-version": "7.3",
"time-created": "2017-03-03T19:04:30.824000+00:00"
}
2. Get Information About the Available Shapes
Help: oci compute shape list -h
The shape identifies the configuration of the virtual machine or bare metal host that you want to use. "Overview of
the Compute Service" in the Oracle Cloud Infrastructure User Guide contains up-to-date information about the
available shapes.
For the purposes of this walk-through, use this virtual machine shape for testing: --shape
"VM.Standard1.1". This shape is configured with 1 CPU and 7 GB of memory.
Note:

Shape and Block Volume Sizing


Sizing for compute instance shapes and block volumes is not part of this
walk-through. The examples use the minimum sizes that are available.
To get a list of all the available bare metal and virtual machine shapes, run the following command.

oci compute shape list -c <compartment_id> --availability-domain


"<availability_domain_name>"

Command Example and Response

oci compute shape list -c


ocid1.compartment.oc1..aaaaaaaalkqnr7pfd92rdrwo5fm6fcoufoih1vd4ls4j9jjpge16vfyxrc1l
--availability-domain "EMIr:PHX-AD-1"

{
"data": [


{
"shape": "BM.Standard1.36"
},
{
"shape": "VM.Standard1.1"
},
{
"shape": "VM.Standard1.2"
},
{
"shape": "VM.Standard1.4"
},
{
"shape": "VM.Standard1.8"
},
{
"shape": "VM.Standard1.16"
},
{
"shape": "VM.DenseIO1.4"
}
]
}

Launching a Linux Instance


Now you're ready to launch a Linux instance based on the configurations you prepared.
1. Use a Public/Private Key Pair to Connect to the Instance
When you launch an instance using the CLI, you need an existing key pair to access the instance. (This key pair is
not the same as an API signing key.)
2. Launch the Instance
Help: oci compute instance launch -h
Caution:

In this example, the --ssh-authorized-keys-file parameter


references a file that contains the public key required to access the compute
instance. If you don't provide this key when you launch the instance you
can't connect to the instance after it's launched.
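If you don't already have a key pair, you can generate one first. The following sketch assumes OpenSSH is installed; the key path is illustrative, and the public key file (the .pub file) is what you pass to --ssh-authorized-keys-file.

# Generates a private key (linux_key) and a public key (linux_key.pub).
ssh-keygen -t rsa -b 2048 -f ~/.oci/linux_key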
To launch the Linux instance, run the following command.

oci compute instance launch --availability-domain


"<availability_domain_name>" -c <compartment_id> --shape "<shape_name>"
--display-name "<instance_display_name>" --image-id <image_id> --
ssh-authorized-keys-file "<path_to_authorized_keys_file>" --subnet-id
<subnet_id>

Command Example and Response

oci compute instance launch --availability-domain "EMIr:PHX-AD-1" -c


ocid1.compartment.oc1..aaaaaaaalkqnr7pfd92rdrwo5fm6fcoufoih1vd4ls4j9jjpge16vfyxrc1l
--shape "VM.Standard1.1" --display-name "Linux Instance" --image-id
ocid1.image.oc1.phx.aaaaaaaa5yu6pw3riqtuhxzov7fdngi4tsteganmao54nq3pyxu3hxcuzmoa
--ssh-authorized-keys-file "C:\Users\testuser\.oci\linux_key.pem"
--subnet-id
ocid1.subnet.oc1.phx.aaaaaaaahvx05fhw7p320cxmdrwo5wlf50egig9cmdzs1plb1xl6c5wvb5s2


"data": {
"availability-domain": "EMIr:PHX-AD-1",
"compartment-id":
"ocid1.compartment.oc1..aaaaaaaalkqnr7pfd92rdrwo5fm6fcoufoih1vd4ls4j9jjpge16vfyxrc1l"
"display-name": "Linux Instance",
"extended-metadata": {},
"id":
"ocid1.instance.oc1.phx.abyhqljrtv7hhenrra6hsdrwjqvszcr2gs7c76tuuzc33iyl6bz5mfnbzw7q"
"image-id":
"ocid1.image.oc1.phx.aaaaaaaa5yu6pw3riqtuhxzov7fdngi4tsteganmao54nq3pyxu3hxcuzmoa",
"ipxe-script": null,
"lifecycle-state": "RUNNING",
"metadata": {
"ssh_authorized_keys": "ssh-rsa AAAAB3NzaABJQAAAQC1yc2EAAAEAtaT/
s9HZ24VeLUxcBNT//nPygk75BWpA
+kuQotpH4yP1tpqJvOBZoTKwoYa0BuoVcY4VP1GkuCEUrpojZ5F6LybbVeO
+ixpuxcPTRNZcVPZJfUVZqg7u8CCjih2T9qH9ZrOcXBJCyKrxEE2kkP4RunnS38MvuDnySYus/04V8l7sEudqW
+Sc4vljbZIaOqNrlAJV5xfQHISL2Ejq8Q1JKaO2Mc6D4Ku/6qEwe0ihtPGoi0zFmPoWstfgc1UqTdiRsYECzza
lgBfOsv/Dcg19ND7/qKnmJ4/9iKuacI2bm+HF2oR0gY4C2MvL3Q== rsa-key-20817080\n"
},
"region": "phx",
"shape": "VM.Standard1.1",
"time-created": "2017-08-26T20:39:03.340000+00:00"
},
"etag":
"2df9d1f14856a2e9a0cc239417f1ee829288b8badeb7ac6fb6d5b3553cbd148c--gzip"
}
3. Get VNIC Information for the Instance
Help: oci compute instance list-vnics -h
You need the public IP address of the instance in order to connect to the instance. The VNIC for the instance has
this information.
To get a list of VNICs for the instance, run the following command.

oci compute instance list-vnics --instance-id <instance_id>

Command Example and Response

oci compute instance list-vnics --instance-id


ocid1.instance.oc1.phx.abcdefgh6kykdowc8ozzvr4421kwp7apdrwk6wrjl7su82d60c6sp4nap88d

{
"data": [
{
"availability-domain": "EMIr:PHX-AD-1",
"compartment-id":
"ocid1.compartment.oc1..aaaaaaaalkqnr7pfd92rdrwo5fm6fcoufoih1vd4ls4j9jjpge16vfyxrc1l"
"display-name": "Linux Instance",
"hostname-label": null,
"id":
"ocid1.vnic.oc1.phx.abyhqljrxqrdrwuhj4nly7dp7ctr7xvclvejc7pu5rq77e37vlsq2al5y74a",
"lifecycle-state": "AVAILABLE",
"private-ip": "10.0.0.2",
"public-ip": "129.145.32.236",
"subnet-id":
"ocid1.subnet.oc1.phx.aaaaaaaahvx05fhw7p320cxmdrwo5wlf50egig9cmdzs1plb1xl6c5wvb5s2",
"time-created": "2017-08-24T00:51:30.462000+00:00"
}
]
}


4. Create a Block Volume for the Instance


Help: oci bv volume create -h
Create a block volume, using the minimum available size.
Caution:

Block volume sizes are expressed as increments of 1024 MB. The next
command example uses the minimum size, --size-in-mbs 51200, or
50 GB.
To create a block volume, run the following command.

oci bv volume create --availability-domain "<availability_domain_name>"


-c <compartment_id> --size-in-mbs <51200> --display-name <volume_display_name>

Command Example and Response

oci bv volume create --availability-domain "EMIr:PHX-AD-1" -c


ocid1.compartment.oc1..aaaaaaaalkqnr7pfd92rdrwo5fm6fcoufoih1vd4ls4j9jjpge16vfyxrc1l
--size-in-mbs 51200 --display-name LinuxVol

{
"data": {
"availability-domain": "EMIr:PHX-AD-1",
"compartment-id":
"ocid1.compartment.oc1..aaaaaaaalkqnr7pfd92rdrwo5fm6fcoufoih1vd4ls4j9jjpge16vfyxrc1l"
"display-name": "LinuxVol",
"id":
"ocid1.volume.oc1.phx.abyhqlsktp2ec7pdazl4y324drw5lxruh5nxjrgbgqq7znsj5oo4t25nvcta",
"lifecycle-state": "PROVISIONING",
"size-in-mbs": 51200,
"time-created": "2017-08-26T00:51:30.462000+00:00"
},
"etag": "720652578"
}

After the lifecycle state changes from "PROVISIONING" to "AVAILABLE" you can attach the volume to the
Linux instance.
Tip:

Finding out the Lifecycle State


You can find out the lifecycle state for the block volume using the oci
bv volume get command for the volume you created. You can also
query other resources such as compute instances and VNICs, to find out
their lifecycle state.
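For example, a command of the following form returns just the lifecycle state of the volume; the --query expression is an illustrative JMESPath filter:

# Prints only the lifecycle-state value, for example "PROVISIONING" or "AVAILABLE".
oci bv volume get --volume-id <volume_id> --query 'data."lifecycle-state"'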
5. Attach the Block Volume to the Instance
Help: oci compute volume-attachment attach -h
To attach the block volume to the Linux instance, run the following command.

oci compute volume-attachment attach --instance-id <instance_id> --type <iscsi> --volume-id <volume_id>

Command Example and Response

oci compute volume-attachment attach --instance-id


ocid1.instance.oc1.phx.abcdefgh6kykdowc8ozzvr4421kwp7apdrwk6wrjl7su82d60c6sp4nap88d


--type iscsi --volume-id


ocid1.volume.oc1.phx.abyhqljrgbktp2ec7pdazl4y324drw5lxruh5nxt25gqq7znsj5oo4snvcta

{
"data": {
"attachment-type": "iscsi",
"availability-domain": "EMIr:PHX-AD-1",
"chap-secret": null,
"chap-username": null,
"compartment-id":
"ocid1.compartment.oc1..aaaaaaaalkqnr7pfd92rdrwo5fm6fcoufoih1vd4ls4j9jjpge16vfyxrc1l"
"display-name": null,
"id":
"ocid1.volumeattachment.oc1.phx.abyhqlytoivg6eaybdrwb7mqqms6utjrefofrplyip7filf3vtpk5
"instance-id":
"ocid1.instance.oc1.phx.abcdefgh6kykdowc8ozzvr4421kwp7apdrwk6wrjl7su82d60c6sp4nap88d"
"ipv4": null,
"iqn": null,
"lifecycle-state": "ATTACHING",
"port": null,
"time-created": "2017-08-26T00:55:30.462000+00:00",
"volume-id":
"ocid1.volume.oc1.phx.fewtr0p6pm9lj7h7rpf8w3drwlf4x9tadrw1sbs7n5qkx7dcu7bk"
},
"etag":
"0c0afdb14a0a10ffc15283366798ac82f623433e6f5619eb2d4469612b32a332"
}

Launching a Windows Instance


Launching a Windows instance follows the same pattern and requires the same information as launching a Linux
instance. The only significant differences are the operating system and shape, as shown in the following commands.
1. Launch the Instance
Help: oci compute instance launch -h
To launch the Windows instance, run the following command.

oci compute instance launch --availability-domain


"<availability_domain_name>" -c <compartment_id> --shape "<shape_name>"
--display-name "<instance_display_name>" --image-id <image_id> --subnet-
id <subnet_id>

Command Example and Response

oci compute instance launch --availability-domain "EMIr:PHX-AD-1" -c


ocid1.compartment.oc1..aaaaaaaalkqnr7pfd92rdrwo5fm6fcoufoih1vd4ls4j9jjpge16vfyxrc1l
--shape "VM.Standard1.2" --display-name "Windows Instance" --image-id
ocid1.image.oc1.phx.aaaaaaaa53cliasgvqmueus5byytfldrwafbro2y4ywjebci5szc42e2b7ua
--subnet-id
ocid1.subnet.oc1.phx.aaaaaaaaypsr25bzjmj3drwiha6lodzus3yn6xwgkcrgxdgafscirbhj5bpa

{
"data": {
"availability-domain": "EMIr:PHX-AD-1",
"compartment-id":
"ocid1.compartment.oc1..aaaaaaaalkqnr7pfd92rdrwo5fm6fcoufoih1vd4ls4j9jjpge16vfyxrc1l"
"display-name": "Windows Instance",
"extended-metadata": {},
"id":
"ocid1.instance.oc1.phx.zsutzirph7cbrbx6rzu91stavdrw58puq3isknlr07zfcd6rq6p9",


"image-id":
"ocid1.image.oc1.phx.aaaaaaaa53cliaskafbro2y4drwebci5szc4eus5bygvqmutflwqy2e2b7ua",
"ipxe-script": null,
"lifecycle-state": "PROVISIONING",
"metadata": {},
"region": "phx",
"shape": "VM.Standard1.2",
"time-created": "2017-08-26T00:51:30.462000+00:00"
},
"etag":
"4ec3da1e2415c49f55ed705c4d81edb2739da62946d36d73f816e8241e705b3b"
}
2. Get VNIC Information for the Instance
To get the VNIC information, run the following command.

oci compute instance list-vnics --instance-id <instance_id>


3. Create a Block Volume for the Instance
To create a block volume, run the following command.

oci bv volume create --availability-domain "<availability_domain_name>" -c <compartment_id> --size-in-mbs <51200> --display-name <display_name>
4. Attach the Block Volume to the Instance
To attach the Block Volume to the Windows instance, run the following command.

oci compute volume-attachment attach --instance-id <instance_id> --type <iscsi> --volume-id <volume_id>

Connecting to Your Instances


Although the public IP address is required for connecting to both Linux and Windows instances, that is the only thing
the two have in common. The differences include authentication, port configuration, and desktop client
programs.
1. Connect to Your Linux Instance
Connecting to Your Instance describes how to connect to a Linux instance from a Unix-style or Windows-style
system.
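From a Unix-style system, the connection typically looks like the following sketch. The key path is illustrative, and opc is the default user for Oracle-provided Linux images; the public IP address comes from the VNIC response in the previous step.

# Connect using the private key that matches the public key supplied at launch.
ssh -i <path_to_private_key> opc@<instance_public_ip>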
2. Connect to Your Windows Instance
Help: oci compute instance list-vnics -h and oci compute instance get-windows-
initial-creds -h
To connect to the instance using Remote Desktop Client (RDC), you need:
• The public IP address for the instance
• The initial Windows credentials
To get the public IP address of the Windows instance, run the following command.

oci compute instance list-vnics --instance-id <instance_id>

Command Example and Response

oci compute instance list-vnics --instance-id


ocid1.instance.oc1.phx.zsutzirph7cbrbx6rzu91stavdrw58puq3isknlr07zfcd6rq6p9


"data": [
{
"availability-domain": "EMIr:PHX-AD-1",
"compartment-id":
"ocid1.compartment.oc1..aaaaaaaalkqnr7pfd92rdrwo5fm6fcoufoih1vd4ls4j9jjpge16vfyxrc1l"
"display-name": "Windows Instance",
"hostname-label": null,
"id":
"ocid1.vnic.oc1.phx.abyhqljr5m5mmra3ecxasw6vdrwq5ft23dqn4dlrl45hdggz6rgfdwpp4ija",
"lifecycle-state": "AVAILABLE",
"private-ip": "10.10.0.3",
"public-ip": "129.142.0.212",
"subnet-id":
"ocid1.subnet.oc1.phx.aaaaaaaahvx05fhw7p320cxmdrwo5wlf50egig9cmdzs1plb1xl6c5wvb5s2",
"time-created": "2017-08-26T00:51:30.462000+00:00"
}
]
}

To get the initial Windows credentials, run the following command.

oci compute instance get-windows-initial-creds --instance-id


<instance_id>

Command Example and Response

oci compute instance get-windows-initial-creds --instance-id


ocid1.instance.oc1.phx.zsutzirph7cbrbx6rzu91stavdrw58puq3isknlr07zfcd6rq6p9

{
"data": {
"password": "Cz{73~~vf@dnK7A",
"username": "opc"
}
}

Connecting to Your Windows Instance describes how to connect to your instance using RDC.

Cleaning Up the Test Environment


When you've finished with the test environment described in this tutorial, clean it up by
removing the resources you aren't using.

Detach and Delete the Block Volumes


Help: oci compute volume-attachment list -h , oci compute volume-attachment detach
-h and oci bv volume delete -h
Removing a block volume from an instance is a 3-step process. Use the following steps to detach and delete the block
volume for the Linux instance.


1. Get the volume-attachment-id


The volume attachment ID is created when you attach a block volume to an instance.
To get the volume attachment ID, run the following command.

oci compute volume-attachment list -c <compartment_id>

Command Example and Response

oci compute volume-attachment list -c


ocid1.compartment.oc1..aaaaaaaalkqnr7pfd92rdrwo5fm6fcoufoih1vd4ls4j9jjpge16vfyxrc1l

{
"data": [
{
"attachment-type": "iscsi",
"availability-domain": "EMIr:PHX-AD-1",
"chap-secret": null,
"chap-username": null,
"compartment-id":
"ocid1.compartment.oc1..aaaaaaaalkqnr7pfd92rdrwo5fm6fcoufoih1vd4ls4j9jjpge16vfyxrc1l"
"display-name": null,
"id":
"ocid1.volumeattachment.oc1.phx.abyhqlytoivg6eaybdrwb7mqqms6utjrefofrplyip7filf3vtpk5
"instance-id":
"ocid1.instance.oc1.phx.abcdefgh6kykdowc8ozzvr4421kwp7apdrwk6wrjl7su82d60c6sp4nap88d"
"ipv4": "169.254.2.2",
"iqn": "iqn.2015-12.com.oracleiaas:e3fd73db-b164-4d76-bc3f-
f58b093989d0",
"lifecycle-state": "ATTACHED",
"port": 3260,
"time-created": "2017-08-26T00:51:30.462000+00:00",
"volume-id":
"ocid1.volume.oc1.phx.abyhqpa3ati7ggfjvba7y6dcg7imdrwskq4bdljroo2cbwchrebuprxddvca"
}
]
}
2. Detach the volume-attachment-id
To detach the volume attachment-id, run the following command.

oci compute volume-attachment detach --volume-attachment-id <volume_attachment_id>

Command Example and Response

oci compute volume-attachment detach --volume-attachment-id


ocid1.volumeattachment.oc1.phx.abyhqlytoivg6eaybdrwb7mqqms6utjrefofrplyip7filf3vtpk55

Are you sure you want to delete this resource? [y/N]:

All destructive actions, such as detaching and deleting resources, allow you to use the --force parameter,
which removes the resource without requiring confirmation. As a best practice, use the y/N prompt instead of
--force.
Confirm the deletion. No response is returned after the resource is deleted.


3. Delete the Block Volume


To delete the block volume, run the following command.

oci bv volume delete --volume-id <volume_id> --force

Command Example and Response


oci bv volume delete --volume-id
ocid1.volume.oc1.phx.abyhqljroo2cbwchrpa3ati7ggfjvba7y6dcg7imnleskq4bdebuprxddvca
--force

There is no response to this action. To verify that the block volume was deleted, run the following command.

oci bv volume list -c <compartment_id>

The response to this query returns "lifecycle-state": "TERMINATED", showing that the volume
has been deleted.
To delete the block volume attached to the Windows instance, use the preceding steps (1-3) as a guide.
Terminate the Instances
Help: oci compute instance terminate -h
To delete the Linux instance, run the following command.

oci compute instance terminate --instance-id <instance_id>

Command Example and Response

oci compute instance terminate --instance-id


ocid1.instance.oc1.phx.abcdefgh6kykdowc8ozzvr4421kwp7apdrwk6wrjl7su82d60c6sp4nap88d

Are you sure you want to delete this resource? [y/N]:

Confirm the deletion. No response is returned after the instance is deleted.


To delete the Windows instance, run the following command.

oci compute instance terminate --instance-id <instance_id>

Command Example and Response

oci compute instance terminate --instance-id


ocid1.instance.oc1.phx.zsutzirph7cbrbx6rzu91stavdrw58puq3isknlr07zfcd6rq6p9

Are you sure you want to delete this resource? [y/N]:

Confirm the deletion. No response is returned after the instance is deleted.


Delete the Virtual Cloud Network
Help: oci network subnet delete -h, oci network vcn delete -h
It takes the following 2 steps to delete the VCN.


1. Delete the subnet


To delete the subnet, run the following command.

oci network subnet delete --subnet-id <subnet_id> --force

Command Example and Response

oci network subnet delete --subnet-id


ocid1.subnet.oc1.phx.aaaaaaaahvx05fhw7p320cxmdrwo5wlf50egig9cmdzs1plb1xl6c5wvb5s2
--force

None
2. Delete the virtual cloud network
To delete the VCN, run the following command.

oci network vcn delete --vcn-id <vcn_id> --force

Command Example and Response

oci network vcn delete --vcn-id


ocid1.vcn.oc1.phx.aaaaaaaa6va8fxr1m4hvzjk3nzo8x290qymdrwiblxw5qpzlm64rdd74vchr
--force

None

Getting Started
Terraform is "infrastructure-as-code" software that allows you to define your Oracle Cloud Infrastructure (OCI)
resources in files that you can persist, version, and share. These files describe the steps required to provision your
infrastructure and maintain its desired state:
• Resources can create OCI infrastructure objects such as virtual cloud networks or compute instances. Your first
application of the configuration creates the objects, and subsequent applications can update or delete them.
• Data sources represent read-only views of your existing OCI infrastructure.
• Variables represent parameters for Terraform.
Caution:

Terraform state files contain all resource attributes that are specified as part
of configuration files. If you manage any sensitive data with Terraform, like
database or user passwords or instance private keys, you should treat the state
file itself as sensitive data. See Storing Sensitive Data on page 4343 for
more information.

Installing the Provider


To use the Oracle Cloud Infrastructure (OCI) Terraform provider, you must install both Terraform and the OCI
Terraform provider. You can install both Terraform and the OCI Terraform provider with yum, or directly download
them from HashiCorp.
Government Cloud customers should follow the installation and configuration steps in Enabling FIPS Compatibility
on page 4323.
Tip:

You can use Resource Manager to preinstall the Oracle Cloud Development
Kit on a Compute instance in your compartment. The Oracle Cloud
Development Kit includes Terraform and the OCI Terraform provider, and
preconfigures the required authorization.
After downloading and installing, you must configure the Terraform provider so that Terraform can interact with OCI
resources.
Prerequisites for Installing and Using the Provider
• An Oracle Cloud Infrastructure (OCI) account that has user credentials sufficient to execute a Terraform plan.
• A user in that account.
• Required keys and OCI IDs (OCIDs). For guidance, see "Required Keys and OCIDs" in the Oracle Cloud
Infrastructure User Guide.
• The correct Terraform binary file for your operating system. We recommend using Terraform version 0.12.20 or
greater.
Installing from HashiCorp
Terraform and the OCI Terraform provider can be downloaded directly from HashiCorp.
Download and Install Terraform
Terraform is available for direct download from the HashiCorp download page. Ensure that you download the correct
binary file for your system.
Download and Install the Provider
To use the latest version of the OCI Terraform provider, run terraform init from the directory that contains
a configuration file with a provider "oci" { ... } configuration block. The provider is automatically
downloaded. Terraform configurations also allow you to specify a particular version of the OCI Terraform provider.
You can also download the Terraform provider directly to a location of your choice.
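As an illustrative sketch (the directory name and placeholder values are not part of this guide), a minimal configuration containing the provider "oci" block can be prepared and initialized like this:

mkdir ~/oci-provider-test && cd ~/oci-provider-test

# Write a minimal configuration file; these are the standard API-key authentication settings.
cat > provider.tf <<'EOF'
provider "oci" {
  tenancy_ocid     = "<tenancy_ocid>"
  user_ocid        = "<user_ocid>"
  fingerprint      = "<api_key_fingerprint>"
  private_key_path = "<path_to_api_private_key>"
  region           = "<region>"
}
EOF

# terraform init downloads the OCI Terraform provider declared above.
terraform init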
Installing with Yum
If you're running Oracle Linux 7, you can use yum to install Terraform and the OCI Terraform provider.
To use yum to install Terraform:

sudo yum-config-manager --enable ol7_developer


sudo yum install terraform

To use yum to install the Terraform provider:

sudo yum-config-manager --enable ol7_developer


sudo yum install terraform-provider-oci

Test the Terraform Installation


Open a terminal window and run the following command to test your installation:

terraform -v

Getting Started with Load Balancing


This chapter provides a hands-on tutorial to introduce you to the components of Load Balancing.
The Load Balancing service allows you to create highly available load balancers within your VCN. All load balancers
come with provisioned bandwidth. You can choose to create a load balancer with either a public or a private
IP address. Load balancers support SSL handling for both incoming traffic and traffic to your application servers.
When you create a load balancer with a public IP address you specify two subnets, each in a different availability
domain, on which the load balancer can run. The two subnets ensure the high availability of the load balancer. A
private load balancer requires only one subnet.


This tutorial is an introduction to Load Balancing. You can follow the steps here to create a public load balancer and
verify it with a basic web server application. For complete details about the service and its components, see Overview
of Load Balancing in the Oracle Cloud Infrastructure User Guide.

Before You Begin


To try out the Load Balancing service for this tutorial, you must have these things set up first:
• A virtual cloud network (VCN) with two subnets (each in a different availability domain) and an internet gateway
• Two instances running (in different subnets)
• A web application (such as Apache HTTP Server) running on each instance
If you don't have these items set up yet, you can follow the steps shown here.
Tip:

If you need an introduction to VCNs and instances, try the Tutorial -


Launching Your First Linux Instance on page 62 first.
VCN and Instance Setup
The following diagram shows the prerequisite VCN and instances:


Create a VCN
1. Open the navigation menu. Under Core Infrastructure, go to Networking and click Virtual Cloud Networks.
Ensure that the Sandbox compartment (or the compartment designated for you) is selected in the Compartment
list on the left.
2. Click Networking Quickstart.
3. Select VCN with Internet Connectivity, and then click Start Workflow.
4. Enter the following:
• VCN Name: Enter a name for your cloud network. The name is incorporated into the names of all the related
resources that are automatically created. Avoid entering confidential information.
• Compartment: This field defaults to your current compartment. Select the compartment you want to create
the VCN and related resources in, if not already selected.
• VCN CIDR Block: Enter a valid CIDR block for the VCN. For example 10.0.0.0/16.
• Public Subnet CIDR Block: Enter a valid CIDR block for the subnet. The value must be within the VCN's
CIDR block. For example: 10.0.0.0/24.
• Private Subnet CIDR Block: Enter a valid CIDR block for the subnet. The value must be within the VCN's
CIDR block and not overlap with the public subnet's CIDR block. For example: 10.0.1.0/24.
• Accept the defaults for any other fields.
5. Click Next.
6. Review the list of resources that the workflow will create for you. Notice that the workflow will set up security list
rules and route table rules to enable basic access for the VCN.
7. Click Create to start the short workflow.
8. After the workflow completes, click View Virtual Cloud Network.
The cloud network has the following resources and characteristics:
• Internet gateway.
• NAT gateway.
• Service gateway with access to the Oracle Services Network.
• A regional public subnet with access to the internet gateway. This subnet uses the VCN's default security list and
default route table. Instances in this subnet may optionally have public IP addresses.
• A regional private subnet with access to the NAT gateway and service gateway. This subnet uses a custom
security list and custom route table that the workflow created. Instances in this subnet cannot have public
IP addresses.
• Use of the Internet and VCN Resolver for DNS.
Launch two instances
This example uses the VM.Standard2.1 shape. If you prefer, you can choose a larger shape.
1. Open the navigation menu. Under Core Infrastructure, go to Compute and click Instances.
2. Click Create Instance.
3. On the Create Compute Instance page, for Name, enter a name, for example: Webserver1. Avoid entering
confidential information.
4. In the Placement and hardware section, enter the following:
a. Availability domain: Select the first availability domain in the list (AD-1).
b. Image: Select the Oracle Linux 7.x image.
c. Shape: Click Change Shape, and then make the following selections:
1. For Instance type, select Virtual Machine.
2. For Shape series, select Intel Skylake, and then select the VM Standard2.1 shape (1 OCPU, 15 GB
RAM).
3. Click Select Shape.

Oracle Cloud Infrastructure User Guide 106


Welcome to Oracle Cloud Infrastructure

5. In the Networking section, configure the network details for the instance. Do not accept the defaults.
a. For Network, leave Select existing virtual cloud network selected
b. Virtual cloud network in <compartment_name>: Select the cloud network that you created. If necessary,
click Change compartment to switch to the compartment containing the cloud network that you created.
c. For Subnet, leave Select existing subnet selected.
d. Subnet in <compartment_name>: Select the public subnet in availability domain 1. If necessary, click
Change compartment to switch to the compartment that contains the correct subnet.
e. Select the Assign a public IPv4 address option. This creates a public IP address for the instance, which you
need to access the instance. If you have trouble selecting this option, confirm that you selected the public
subnet that was created with your VCN, not a private subnet.
f. Click Show advanced options. Ensure that the Hostname field is blank.
6. In the Add SSH keys section, upload the public key portion of the key pair that you want to use for SSH access to
the instance. Browse to the key file that you want to upload, or drag and drop the file into the box.
If you do not have an SSH key pair, see Creating a Key Pair on page 64.
7. In the Boot volume section, leave all the options cleared.
8. Click Create.
9. Repeat the previous steps. This time, enter the name Webserver2 and select the subnet in availability domain 2.
Start a web application on each instance
This example uses Apache HTTP Server.
1. Connect to your instance. If you need help, see Connecting to Your Instance on page 69.
2. Run yum update:

sudo yum -y update


3. Install the Apache HTTP Server:

sudo yum -y install httpd


4. Allow Apache (HTTP and HTTPS) through the firewall:

sudo firewall-cmd --permanent --add-port=80/tcp

sudo firewall-cmd --permanent --add-port=443/tcp

Note:

Open the Firewall


If you choose to run a different application than Apache, ensure that you
run the preceding command to open the firewall for your application's port.
5. Reload the firewall:

sudo firewall-cmd --reload


6. Start the web server:

sudo systemctl start httpd


7. Add an index.html file on each server that indicates which server it is, for example:
a. On Webserver 1:

sudo su

echo 'WebServer1' >/var/www/html/index.html


b. On Webserver 2:

sudo su

echo 'WebServer2' >/var/www/html/index.html

Tutorial Overview
In this tutorial, you create a public load balancer and verify it. A load balancer requires configuration of several
components to be functional, and this tutorial walks you through each step to help you understand these components.
To create and test the load balancer, complete the following steps:
1. Add two subnets to your VCN to host your load balancer.
2. Create a load balancer.
3. Create a backend set with health check.
4. Add backend servers to your backend set.
5. Create a listener.
6. Update the load balancer subnet security list and allow internet traffic to the listener.
7. Verify your load balancer.
8. Update rules to protect your backend servers.
9. Delete your load balancer.

Add Two Subnets to Your VCN to Host Your Load Balancer


Your load balancer must reside in different subnets from your application instances. This configuration allows you to
keep your application instances secured in subnets with stricter access rules, while allowing public internet traffic to
the load balancer in the public subnets.
To add the public subnets to your VCN:
Add a Security List
1. Open the navigation menu. Under Core Infrastructure, go to Networking and click Virtual Cloud Networks.
The list of VCNs in the current compartment is displayed.
2. Click the name of the VCN that includes your application instances.
3. Under Resources, click Security Lists.
4. Click Create Security List.
a. Create in Compartment: This field defaults to your current compartment. Select the compartment you want
to create the security list in, if not already selected.
b. Enter a Name, for example, "LB Security List". Avoid entering confidential information.
c. Delete the entry for the ingress rule and the entry for the egress rule. The security list must have no rules. The
correct rules are added automatically during the load balancer workflow.
d. Tags: Leave as is (you can add tags later if you like).
e. Click Create Security List.
f. Return to your Virtual Cloud Network Details page.


Add a Route Table


1. Under Resources, click Route Tables.
2. Click Create Route Table. Enter the following:
a. Create in Compartment: This field defaults to your current compartment. Select the compartment you want
to create the route table in, if not already selected.
b. Name: Enter a name, for example, "LB Route Table". Avoid entering confidential information.
c. Target Type: Select Internet Gateway.
d. Destination CIDR Block: Enter 0.0.0.0/0.
e. Compartment: Select the compartment that contains your VCN's internet gateway.
f. Target: Select your VCN's internet gateway.
g. Tags: Leave as is (you can add tags later if you like).
h. Click Create Route Table.
Create the first subnet
1. Under Resources, click Subnets.
2. Click Create Subnet.
3. Enter or select the following:
a. Name: Enter a name, for example, "LB Subnet 1". Avoid entering confidential information.
b. Availability domain: Choose the first availability domain (AD-1).
c. CIDR Block: Enter 10.0.4.0/24.
d. Route Table: Select the LB Route Table you created.
e. Subnet Access: Select Public Subnet.
f. DNS Resolution: Select Use DNS Hostnames in this Subnet.
g. DHCP Options: Select Default DHCP Options for LB_Network.
h. Security Lists: Select the LB Security List you created.
i. Tags: Leave as is (you can add tags later if you like).
4. Click Create.
Create the second subnet
Create a second load balancer subnet in a different availability domain.
1. In the details page of your VCN, click Create Subnet.
2. Enter the following:
a. Name: Enter a name, for example, "LB Subnet 2". Avoid entering confidential information.
b. Availability domain: Choose the second availability domain (AD-2).
c. CIDR Block: Enter 10.0.5.0/24.
d. Route Table: Select the LB Route Table you created.
e. Subnet Access: Select Public Subnet.
f. DNS Resolution: Select Use DNS Hostnames in this Subnet.
g. DHCP Options: Select Default DHCP Options for LB_Network.
h. Security Lists: Select the LB Security List you created.
i. Tags: Leave as is (you can add tags later if you like).
3. Click Create.
The following figure shows the new components added to the VCN:


Create the Load Balancer


When you create a public load balancer, you choose its shape (size) and you select two subnets, each in a different
availability domain. This configuration ensures that the load balancer is highly available. It is active in only one
subnet at a time. This load balancer comes with a public IP address and provisioned bandwidth corresponding to the
shape you chose.
Tip:

Although the load balancer resides in a subnet, it can direct traffic to backend
sets that reside in any of the subnets within the VCN.
1. Open the navigation menu. Under the Core Infrastructure group, go to Networking and click Load Balancers.
Ensure that the Sandbox compartment (or the compartment designated for you) is selected on the left.
2. Click Create Load Balancer.
3. Enter the following:
• Name: Enter a name for your load balancer. Avoid entering confidential information.
• Shape: Select 100 Mbps. The shape specifies the bandwidth of the load balancer. For the tutorial, use the
smallest shape. The shape cannot be changed later.
• Virtual Cloud Network: Select the virtual cloud network for your load balancer.
• Visibility: Choose Create Public Load Balancer.
• Subnet (1 of 2): Select LB Subnet 1.
• Subnet (2 of 2): Select LB Subnet 2. The second subnet must be in a different availability domain than the
first subnet you chose.
4. Click Create.


When the load balancer is created, you get a public IP address. You route all your incoming traffic to this IP address.
The IP address is available from both subnets that you specified, but it is active in only one subnet at a time.

Create a Backend Set


A backend set is a collection of backend servers to which your load balancer directs traffic. A list of backend servers,
a load balancing policy, and a health check script define each backend set. A load balancer can have multiple backend
sets, but for this tutorial, you create only one backend set that includes both of your web servers.
In this step, you define the backend set policy and health check. You add your servers in a separate step.
To create the backend set:
1. Click the name of your load balancer and view its details.
2. Click Create Backend Set.
3. In the dialog box, enter:
a. Name: Give your load balancer backend set a name. The name cannot contain spaces. Avoid entering
confidential information.
b. Policy: Choose Weighted Round Robin.
4. Enter the Health Check details.
Load Balancing automatically checks the health of the instances for your load balancer. If it detects an unhealthy
instance, it stops sending traffic to the instance and reroutes traffic to healthy instances. In this step, you provide
the information required to check the health of servers in the backend set and ensure that they can receive data
traffic.
• Protocol: Select HTTP.
• Port: Enter 80
• URL Path (URI): Enter /
The rest of the fields are optional and can be left blank for this tutorial.


5. Click Create.
When the Backend Set is created, the Work Request shows a status of Succeeded. Close the Work Request dialog
box.
What is a policy?
The policy determines how traffic is distributed to your backend servers.
• Round Robin - This policy distributes incoming traffic sequentially to each server in a backend set list. When each
server has received a connection, the load balancer repeats the list in the same order.
• IP Hash - This policy uses an incoming request's source IP address as a hashing key to route non-sticky traffic to
the same backend server. The load balancer routes requests from the same client to the same backend server as
long as that server is available.
• Least Connections - This policy routes incoming non-sticky request traffic to the backend server with the fewest
active connections.

Add Backends (Servers) to Your Backend Set


After the backend set is created, you can add compute instances (backend servers) to it. To add a backend server, you
can enter the OCID for each instance and your application port. The OCID enables the Console to create the security
list rules required to enable traffic between the load balancer subnets and the instance subnets.
Tip:

Security lists are virtual firewall rules for your VCN that provide ingress
and egress rules to specify the types of traffic allowed in and out of a subnet.
Update your VCN's security list rules to allow traffic flow between the load
balancer subnets and the backend server subnets. In this step, you can have
the security lists automatically updated by providing the instance OCIDs.
To add a server to your backend set:
1. On the details page of your load balancer, click Backend Sets. The backend set you just created is displayed.
2. Click the name of the backend set and view its details.
3. Click Edit Backends.
In the dialog:
1. Ensure that Help me create proper security list rules is checked.
2. OCID: Paste the OCID of the first instance (Webserver1).
3. Port: Enter 80.
4. Weight: Leave blank to weight the servers evenly.
5. Repeat Steps 2 through 4, pasting in the OCID for the second instance (Webserver2).
6. Click Create Rules.
The following figure shows the components created in this task:


What rules are added to my security lists?


The system updates the security list used by your load balancer subnets to allow egress traffic from the load balancer
to each backend server's subnet:
• Updates to the security list for your load balancer subnets:
• Allow egress traffic to the backend server 1 subnet (for example, Public-Subnet-AD1)
• Allow egress traffic to the backend server 2 subnet (for example, Public-Subnet-AD2)

The system updates the security list used by your backend server subnets to allow ingress traffic from the load
balancer subnets:


• Updates to the security list for your backend server subnets:


• Allow ingress traffic from load balancer subnet 1
• Allow ingress traffic from load balancer subnet 2

How do I get the OCID of an instance?


The OCID (Oracle Cloud Identifier) is displayed when you view the instance, on the instance details page.
1. In the dialog, right-click View Instances and select a browser option to open the link in a new tab.

A new Console browser tab launches, displaying the instances in the current compartment.
2. In the tab that just opened, if your instances are not in the current compartment, select the compartment to which
the instance belongs. (Select from the list on the left side of the page.)
A shortened version of the OCID is displayed next to each instance.
3. Click the instance that you're interested in.
A shortened version of the OCID is displayed on the instance details page.
4. Click Copy to copy the OCID. You can then paste it into the Instance ID field.


Create the Listener for Your Load Balancer


A listener is an entity that checks for connection requests. The load balancer listener listens for ingress client traffic
using the port you specify within the listener and the load balancer's public IP.
In this tutorial, you define a listener that accepts HTTP requests on port 80.
Note:

Listening on Multiple Ports


A listener can listen on one port. To listen on more ports (such as 443 for
SSL), create another listener. For information on enabling SSL for your load
balancer, see "Managing SSL Certificates" in the Oracle Cloud Infrastructure
User Guide.
To create a listener:
1. On your Load Balancer Details page, click Listeners.
2. Click Create Listener.
3. Enter the following:
• Name: Enter a friendly name. Avoid entering confidential information.
• Protocol: Select HTTP.
• Port: Enter 80 as the port on which to listen for incoming traffic.
• Backend Set: Select the backend set you created.
4. Click Create.

Update Load Balancer Security Lists and Allow Internet Traffic to the Listener
When you create a listener, you must also update your VCN's security list to allow traffic to that listener.

Allow the Listener to Accept Traffic


The subnets where the load balancer resides must allow the listener to accept traffic. To enable the traffic to get to the
listener, update the load balancer subnet's security list.
To update the security list to allow the listener to accept traffic:
1. Open the navigation menu. Under Core Infrastructure, go to Networking and click Virtual Cloud Networks.
The list of VCNs in the current compartment is displayed.
2. Click Security Lists. A list of the security lists in the cloud network is displayed.
3. Click the LB Security List. The details are displayed.
4. Click Edit All Rules.
5. Under Allow Rules for Ingress, click Add Rule.
6. Enter the following ingress rule:
• Source Type: Select CIDR
• Source CIDR: Enter 0.0.0.0/0
• IP Protocol: Select TCP
• Destination Port Range: Enter 80 (the listener port).
7. Click Save Security List Rules.
If you created other listeners, add an ingress rule for each listener port to allow traffic to the listener. For example, if
you created a listener on port 443, repeat the previous steps using Destination Port Range: 443.

Verify Your Load Balancer


To test your load balancer's functionality, you can open a web browser and navigate to its public IP address (listed on
the load balancer's detail page). If the load balancer is properly configured, you can see the name of one of the web
server instances:
1. Open a web browser.
2. Enter the load balancer public IP address.
The index.htm page of one of your web servers appears.

3. Refresh the web page.


The index.htm page of the other web server now appears.


Because you configured the load balancer backend set policy as Round Robin, refreshing the page alternates between
the two web servers.
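You can also confirm the Round Robin behavior from the command line. The short Python sketch below uses a placeholder IP address (203.0.113.10); replace it with your load balancer's public IP. Each request should return the index page of one backend server, alternating between the two.

import urllib.request

LB_IP = "203.0.113.10"  # replace with your load balancer's public IP address

for i in range(4):
    with urllib.request.urlopen(f"http://{LB_IP}/", timeout=10) as response:
        body = response.read().decode()
    print(f"Request {i + 1}: {body.strip()}")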

Update Rules to Limit Traffic to Backend Servers


Update the default security list and the default route table to limit traffic to your backend servers. If you used the
Create Virtual Cloud Network Plus Related Resources option to create your VCN and you are not going to
terminate this load balancer immediately, these actions are important.
To delete the default route table rule:
1. Open the navigation menu. Under Core Infrastructure, go to Networking and click Virtual Cloud Networks.
2. Click the name of your VCN and review its details.
3. Under Resources, click Route Tables.
4. Click the Default Route Table for the VCN.
5. Click Edit Route Rules.
6. Click the X next to the route rule, and then click Save.
There are now no Route Rules for the default route table.
To edit the default security list rules:
1. Go to your Virtual Cloud Network Details page.
2. Under Resources, click Security Lists.
3. Click the Default Security List for the VCN.
4. Click Edit All Rules.
5. Under Allow Rules for Ingress, delete the following rules:

Action Source CIDR IP Protocol Destination Port Range


Delete 0.0.0.0/0 TCP 22
Delete 0.0.0.0/0 ICMP 3,4
Delete 10.0.0.0/16 ICMP 3
6. Under Allow Rules for Egress, delete the rule so that no egress rules remain.
Now your instances can receive data traffic from, and direct traffic to, only the load balancer subnets. You no longer
can connect directly to your instance's public IP address.

Delete Your Load Balancer


When your load balancer becomes available, you are billed for each hour that you keep it running. Once you no
longer need a load balancer, you can delete it. When the load balancer is deleted, you stop incurring charges for it.
Deleting a load balancer does not affect the backend servers or subnets used by the load balancer.
To delete your load balancer:
1. Open the navigation menu. Under the Core Infrastructure group, go to Networking and click Load Balancers.
2. Choose the Compartment that contains your load balancer.
3. Next to your load balancer, click the Actions icon (three dots), and then click Terminate.
4. Confirm when prompted.
If you want to delete the instances and VCN you created for this tutorial, follow the instructions in Cleaning Up
Resources from the Tutorial on page 73.
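For scripted cleanup, the following sketch (assuming the OCI Python SDK and a placeholder load balancer OCID) performs the same termination. Deletion is asynchronous, so the load balancer may remain visible briefly while it is being removed.

import oci

config = oci.config.from_file()
lb_client = oci.load_balancer.LoadBalancerClient(config)

# Placeholder OCID; deleting the load balancer does not affect the backend
# servers or subnets it used.
lb_client.delete_load_balancer("ocid1.loadbalancer.oc1..exampleuniqueID")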

Getting Started with Audit


This chapter provides a hands-on tutorial to introduce you to the components of the Oracle Cloud Infrastructure Audit
service.


The Oracle Cloud Infrastructure Audit service is included with your Oracle Cloud Infrastructure tenancy. The
Audit service automatically records calls to the public application programming interface (API) endpoints for your
Oracle Cloud Infrastructure tenancy. The service records events relating to the actions taken on the Oracle Cloud
Infrastructure resources. Events recorded in the log can be viewed, retrieved, stored, and analyzed. These log events
include information such as:
• ID of the caller
• Target resource
• Time of the recorded event
• Request parameters
• Response parameters
This task helps you get started with the Audit service by showing you how to find and view a specific event.
For complete details on Audit, see "Overview of Audit" in the Oracle Cloud Infrastructure User Guide.

Prerequisite
To create an event to view, create and delete a VCN in the Networking service.
Create and Delete a VCN
1. Select the compartment (from the list on the left) in which you want to create the VCN.
2. Open the navigation menu. Under Core Infrastructure, go to Networking and click Virtual Cloud Networks.
3. Click Create Virtual Cloud Network.
4. Enter the following:
a. Name: Enter "Audit_Test".
b. CIDR Block: Enter "10.0.0.0/16".
c. Leave all other fields with their default settings. Click Create Virtual Cloud Network.
The VCN is displayed in the list.
5. Next to your VCN name, click the OCID: Copy link. You will use the OCID to help you find the event.
6. Terminate the VCN: Click the Actions icon (three dots), and then click Terminate. Confirm when prompted.
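If you would rather generate the audit events from a script, the sketch below (assuming the OCI Python SDK; the compartment OCID is a placeholder) performs the same create and delete calls.

import oci

config = oci.config.from_file()
vcn_client = oci.core.VirtualNetworkClient(config)
compartment_id = "ocid1.compartment.oc1..exampleuniqueID"  # placeholder

vcn = vcn_client.create_vcn(
    oci.core.models.CreateVcnDetails(
        cidr_block="10.0.0.0/16",
        display_name="Audit_Test",
        compartment_id=compartment_id,
    )
).data
print("Created VCN:", vcn.id)   # keep this OCID; you can filter audit events with it

vcn_client.delete_vcn(vcn.id)   # this DELETE call is the event you will look for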

Using Audit to View Events


In this task, you will use Audit to find the delete VCN event.
Tip:

Audit time stamps events according to Greenwich Mean Time (GMT).


Before you get started, be aware of your local time zone offset.
1. Open the navigation menu. Under Governance and Administration, go to Governance and click Audit.
The list of events that occurred in the current compartment is displayed. Audit logs are organized by compartment,
so if you are looking for a particular event, you must know which compartment the event occurred in.
2. From the Compartments list, select the compartment in which you created the VCN.
The list of events for the compartment is displayed.


3. To find the delete VCN event, you can try the following filters:
Filter by time
a. Click in the Start Date box to display the date and time editor.
b. Select the current date from the calendar. Type or select values for hour and minute to approximate the
preceding hour. Enter the time as Greenwich Mean Time (GMT) using 24-hour clock notation.
c. Repeat the above steps to enter an end date for the current date and time, so that you filter results for the
preceding hour.
Example
If you are located in the America/Los Angeles time zone and you are looking for an event that occurred between 1:15 PM and 2:15 PM local time on October 25, enter 21:15 and 22:15 to account for the GMT offset.

d. Click Search.
Filter events by keywords
You can further filter the results list to display only log entries that include a specific text string. Try the following
entries to help you find the delete VCN event:
Tip:

When you filter by keywords, use quotes to avoid results that have a similar string embedded in a longer string. For example, the quotes around the responseStatus "204" prevent matches of 204 embedded in a longer string somewhere else in the audit event.
• Filter by the responseStatus value
In the Keywords box, type "204" and click Search to display only events that returned the 204 response status (returned when a resource is deleted).
• Filter by requestResource value
In the Keywords box, paste the VCN OCID that you copied to your clipboard in the prerequisite step and click
Search.
Review the events to find the DELETE event.
Filter events by request action types
• Filter by the request action types
In Request Action Types, select "DELETE" and click Search.
The list filters to show only DELETE events. Scan the list to find your VCN termination event.
4. View the details of your event:
• To see only the top-level details, click the down arrow to the right of an event.
• To see lower-level details, click { . . . } to the right of the collapsed parameter.
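The same search can be performed with the Audit API. The sketch below assumes the OCI Python SDK and a placeholder compartment OCID; it lists the events recorded in the compartment during the preceding hour, using GMT/UTC timestamps just as the Console does.

from datetime import datetime, timedelta, timezone
import oci

config = oci.config.from_file()
audit_client = oci.audit.AuditClient(config)
compartment_id = "ocid1.compartment.oc1..exampleuniqueID"  # placeholder

end_time = datetime.now(timezone.utc)          # Audit timestamps are GMT/UTC
start_time = end_time - timedelta(hours=1)     # the preceding hour

events = audit_client.list_events(
    compartment_id=compartment_id,
    start_time=start_time,
    end_time=end_time,
).data

for event in events:
    print(event.event_type, event.event_time)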

Getting Started with Oracle Platform Services


This chapter helps you get started with Oracle Platform Services on Oracle Cloud Infrastructure.
Note:

Oracle Platform Services are not available in Oracle Cloud Infrastructure


Government Cloud tenancies.

Supported Platform Services


The following platform services are supported on Oracle Cloud Infrastructure:
• Analytics Cloud
• API Platform Cloud Service
• Autonomous Data Warehouse
• Integration
• Autonomous Mobile Cloud Enterprise
• NoSQL Database Cloud Service
• Oracle Visual Builder
• Content and Experience Cloud
• Data Hub Cloud Service
• Data Integration Platform Cloud
• Database Cloud Service
• Developer Cloud Service
• Event Hub Cloud Service
• Java Cloud Service
• Oracle SOA Cloud Service
For services that are supported on both Oracle Cloud Infrastructure and Oracle Cloud Infrastructure Classic, you can
choose Oracle Cloud Infrastructure during instance creation by selecting an appropriate region.


Understand the Infrastructure Prerequisites


Before creating instances of your service on Oracle Cloud Infrastructure, you must create certain resources in Oracle
Cloud Infrastructure for use by your platform service instances.
See "Prerequisites for Oracle Platform Services on Oracle Cloud Infrastructure" in the Oracle Cloud Infrastructure
User Guide .

Learn About Service-Specific Differences and Workflows


Broadly, the service features are the same regardless of the infrastructure you choose (Oracle Cloud Infrastructure or Oracle Cloud Infrastructure Classic), but differences may exist in some services, and the workflows for creating instances on Oracle Cloud Infrastructure may vary across services.
See the following documentation:

• Data Hub Cloud Service: About Oracle Data Hub Cloud Service Clusters in Oracle Cloud Infrastructure
• Database Cloud Service: About Database Deployments in Oracle Cloud Infrastructure
• Event Hub Cloud Service: About Instances in Oracle Cloud Infrastructure
• Java Cloud Service: About Java Cloud Service Instances in Oracle Cloud Infrastructure
• Oracle SOA Cloud Service: About SOA Cloud Service Instances in Oracle Cloud Infrastructure Classic and Oracle Cloud Infrastructure

REST API Endpoints for Platform Services


You can use the following URL structure to access the REST API endpoints for a Platform Service:

https://<rest_server>/<endpoint_path>

where:
• <endpoint_path> is the relative path that defines the REST resource. For a list of available paths, refer to the
REST API documentation for the specific service.
• <rest_server> is the REST server. Choose the REST server based on the region in which your platform service was created, as listed below. (A short example of assembling a full endpoint URL follows the list.)

The REST servers and the regions they serve:

psm.us.oraclecloud.com
• US East (Ashburn)
• US West (Phoenix)
• US West (San Jose)
• Canada Southeast (Montreal)
• Canada Southeast (Toronto)

psm.europe.oraclecloud.com
• Germany Central (Frankfurt)
• Netherlands Northwest (Amsterdam)
• Saudi Arabia West (Jeddah)
• Switzerland North (Zurich)
• UAE East (Dubai)
• UK South (London)
• UK West (Newport)

psm.aucom.oraclecloud.com
• Australia East (Sydney)
• Australia Southeast (Melbourne)
• India West (Mumbai)
• India South (Hyderabad)
• Japan Central (Osaka)
• Japan East (Tokyo)
• South Korea Central (Seoul)
• South Korea North (Chuncheon)

psm.brcom-central-1.oraclecloud.com
• Brazil East (Sao Paulo)
• Chile (Santiago)

psm-<account_name>.console.oraclecloud.com
• All regions (<account_name> is your tenant name or cloud account name)

psm-cacct-<account_id>.console.oraclecloud.com
• All regions (<account_id> is the alphanumeric ID of your tenant name or cloud account)

You can find <account_name> or <account_id> in either:


• The welcome email sent to your cloud account administrator
• The URL used to access the console for the Platform Service
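The following sketch simply illustrates assembling the https://<rest_server>/<endpoint_path> form shown above. The region-group keys and the example path are illustrative placeholders, not an official mapping; real paths come from each service's REST API documentation.

# Plain Python; no SDK required.
REST_SERVERS = {
    "us": "psm.us.oraclecloud.com",
    "europe": "psm.europe.oraclecloud.com",
    "aucom": "psm.aucom.oraclecloud.com",
    "brcom": "psm.brcom-central-1.oraclecloud.com",
}

def endpoint_url(region_group: str, endpoint_path: str) -> str:
    """Return https://<rest_server>/<endpoint_path> for the chosen region group."""
    return f"https://{REST_SERVERS[region_group]}/{endpoint_path.lstrip('/')}"

# Hypothetical path, for illustration only:
print(endpoint_url("us", "<endpoint_path_from_service_docs>"))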

Getting Started with Oracle Applications


This chapter helps you get started with Oracle Applications on Oracle Cloud Infrastructure.

Support for Oracle Applications


Oracle Cloud Infrastructure is an ideal place to host your Oracle Applications. You can deploy and manage
Oracle applications on Oracle Cloud Infrastructure using the standard procedures found in the application product
documentation, or using Oracle-provided automation solutions (available for some applications).
Oracle applications that meet the following criteria are supported:
• The application version is under Premier, Extended, or Sustained support.
• You plan to run the application on an operating system and database version that is supported on Oracle Cloud
Infrastructure and certified for the application.
Oracle offers solutions and documentation to make deploying applications on Oracle Cloud Infrastructure easier.
Solutions are available for the following applications:
• Oracle E-Business Suite


• Oracle JD Edwards EnterpriseOne


• Oracle PeopleSoft

Setting Up Your Tenancy


After Oracle creates your tenancy in Oracle Cloud Infrastructure, an administrator at your company will need to perform some setup tasks and establish an organization plan for your cloud resources and users. Use the information in this topic to help you get started.
Tip:

To quickly get some users up and running while you are still in the planning
phase, see Adding Users on page 58.

Create a Plan
Before adding users and resources, you should create a plan for your tenancy. Fundamental to creating your plan is understanding the components of the Oracle Cloud Infrastructure Identity and Access Management (IAM) service. Ensure that you read and understand the features of IAM. See "Overview of IAM" in the Oracle Cloud Infrastructure User Guide.
Your plan should include the compartment hierarchy for organizing your resources and the definitions of the user
groups that will need access to the resources. These two things will impact how you write policies to manage access
and so should be considered together.
Use the following primer topics to help you get started with your plan:
• Understanding Compartments on page 123
• Consider Who Should Have Access to Which Resources on page 124

Understanding Compartments
Compartments are the primary building blocks you use to organize your cloud resources. You use compartments to
organize and isolate your resources to make it easier to manage and secure access to them.
When your tenancy is provisioned, a root compartment is created for you. Your root compartment holds all of your
cloud resources. You can think of the root compartment like a root folder in a file system.
The first time you sign in to the Console and select a service, you will see only your root compartment.


You can create compartments under your root compartment to organize your cloud resources in a way that aligns with
your resource management goals. As you create compartments, you control access to them by creating policies that
specify what actions groups of users can take on the resources in those compartments.
Keep in mind the following when you start working with compartments:
• At the time you create a resource (for example, instance, block storage volume, VCN, subnet), you must decide in
which compartment to put it.
• Compartments are logical, not physical, so related resource components can be placed in different compartments.
For example, your cloud network subnets with access to an internet gateway can be secured in a separate
compartment from other subnets in the same cloud network.
• You can create a hierarchy of compartments up to six compartments deep under the tenancy (root compartment).
• When you write a policy rule to grant a group of users access to a resource, you always specify the compartment
to apply the access rule to. So if you choose to distribute resources across compartments, remember that you
will need to provide the appropriate permissions for each compartment for users that will need access to those
resources.
• In the Console, compartments behave like a filter for viewing resources. When you select a compartment, you
only see resources that are in the compartment selected. To view resources in another compartment, you must first
select that compartment. You can use the Search feature to get a list of resources across multiple compartments.
See Overview of Search on page 3660.
• You can use the tenancy explorer to get a complete view of all the resources (across regions) that reside in a
specific compartment. See Viewing All Resources in a Compartment on page 238.
• If you want to delete a compartment, you must delete all resources in the compartment first.
• Finally, when planning for compartments you should consider how you want usage and auditing data aggregated.

Consider Who Should Have Access to Which Resources


Another primary consideration when planning the setup of your tenancy is who should have access to which
resources. Defining how different groups of users will need to access the resources will help you plan how to organize
your resources most efficiently, making it easier to write and maintain your access policies.
For example, you might have users who need to:
• View the Console, but not be allowed to edit or create resources


• Create and update specific resources across several compartments (for example, network administrators who need
to manage your cloud networks and subnets)
• Launch and manage instances and block volumes, but not have access to your cloud network
• Have full permissions on all resources, but only in a specific compartment
• Manage other users' permissions and credentials
To see some sample policies, see "Common Policies" in the Oracle Cloud Infrastructure User Guide.

Sample Approaches to Setting Up Compartments

Put all your resources in the tenancy (root compartment)


If your organization is small, or if you are still in the proof-of-concept stage of evaluating Oracle Cloud
Infrastructure, you might consider placing all of your resources in the root compartment (tenancy). This approach
makes it easy for you to quickly view and manage all your resources. You can still write policies and create groups to
restrict permissions on specific resources to only the users who need access.
High-level tasks to set up the single compartment approach:
1. (Best practice) Create a sandbox compartment. Even though your plan is to maintain your resources in the root
compartment, Oracle recommends setting up a sandbox compartment so that you can give users a dedicated space
to try out features. In the sandbox compartment you can grant users permissions to create and manage resources,
while maintaining stricter permissions on the resources in your tenancy (root) compartment. See Create a Sandbox
Compartment.
2. Create groups and policies. See "Common Policies" in the Oracle Cloud Infrastructure User Guide.
3. Add users. See "Managing Users" in the Oracle Cloud Infrastructure User Guide.
Create compartments to align with your company projects
Consider this approach if your company has multiple departments that you want to manage separately or if your
company has several distinct projects that would be easier to manage separately.
In this approach, you can add a dedicated administrators group for each compartment (project) who can set the access
policies for just that project. (Users and groups still must be added at the tenancy level.) You can give one group
control over all their resources, while not allowing them administrator rights to the root compartment or any other
projects. In this way, you can enable different groups at your company to set up their own "sub-clouds" for their own
resources and administer them independently.
High-level tasks to set up the multiple project approach:
1. Create a sandbox compartment. Oracle recommends setting up a sandbox compartment so you can give users a
dedicated space to try out features. In the sandbox compartment you can grant users permissions to create and
manage resources, while maintaining stricter permissions on the resources in your tenancy (root) compartment.
2. Create a compartment for each project, for example, ProjectA, ProjectB.
3. Create an administrators group for each project, for example, ProjectA_Admins.
4. Create a policy for each administrators group.
Example:

Allow group ProjectA_Admins to manage all-resources in compartment ProjectA
5. Add users. See "Managing Users" in the Oracle Cloud Infrastructure User Guide.
6. Let the administrators for ProjectA and ProjectB create subcompartments within their designated compartment to
manage resources.
7. Let the administrators for ProjectA and ProjectB create the policies to manage the access to their compartments.
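The compartment, group, and policy in steps 2 through 4 can also be created with the IAM API. The following is a hedged sketch assuming the OCI Python SDK; the tenancy OCID is read from your local configuration, and the names simply mirror the ProjectA example above.

import oci

config = oci.config.from_file()
identity = oci.identity.IdentityClient(config)
tenancy_id = config["tenancy"]

compartment = identity.create_compartment(
    oci.identity.models.CreateCompartmentDetails(
        compartment_id=tenancy_id,          # parent: the root compartment
        name="ProjectA",
        description="Resources for Project A",
    )
).data

group = identity.create_group(
    oci.identity.models.CreateGroupDetails(
        compartment_id=tenancy_id,          # groups live at the tenancy level
        name="ProjectA_Admins",
        description="Administrators for Project A",
    )
).data

identity.create_policy(
    oci.identity.models.CreatePolicyDetails(
        compartment_id=tenancy_id,
        name="ProjectA-admin-policy",
        description="Let ProjectA_Admins manage ProjectA",
        statements=["Allow group ProjectA_Admins to manage all-resources in compartment ProjectA"],
    )
)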
Create compartments to align with your security requirements
Consider this approach if your company has projects or applications that require different levels of security.


A security zone is associated with a compartment and a security zone recipe. When you create and update resources
in a security zone, Oracle Cloud Infrastructure validates these operations against the policies in the security zone
recipe. If any security zone policy is violated, then the operation is denied.
In this approach, you create security zone compartments for projects that must comply with our maximum security
architecture and best practices. You create standard compartments for projects that don't require this level of security
compliance.
Security zone policies align with Oracle security principles, including:
• Data in a security zone can't be copied to a standard compartment because it might be less secure.
• Resources in a security zone must not be accessible from the public internet.
• Resources in a security zone must use only configurations and templates approved by Oracle.
Caution:

To ensure the integrity of your data, you can't move certain resources from a
security zone compartment to a standard compartment.
Similar to the previous approach, you can add a dedicated administrator group for each compartment who can then set
the access policies for that single project.
• Access (IAM) policies grant users the ability to manage certain resources in a compartment.
• Security zone policies ensure that management operations in a security zone compartment comply with Oracle
security best practices.
To learn more, see Security Zones.

Getting Help and Contacting Support


When using Oracle Cloud Infrastructure, sometimes you need to get help from the community or to talk to someone
in Oracle support. This topic provides more information about accessing these tools.
Tip:

Console announcements appear at the top of the Console to communicate


timely, important information about service status. For more information, see
Console Announcements.

1. Use a search engine


For common issues, someone else has likely asked this question in the past. You can use scoped search to look for
answers in our documentation and our forum platforms – Cloud Customer Connect and Stack Overflow. To perform
a scoped search, go to your favorite search engine and specify the site URLs along with your specific search terms, as
follows:

<Your Search Terms> (site:docs.cloud.oracle.com/iaas OR site:cloudcustomerconnect.oracle.com OR site:stackoverflow.com)

2. Use Live Chat in the Console


Use Live Chat in the Console to get immediate help with common issues. To start a live online chat with an Oracle Support or Sales representative, at the top of the Console, click the Live Chat button. A chat window opens that connects you to Oracle Support.

3. Post a question to our forums


If you can't find an answer to your question through search, submit a new question to one of the forums we support.
This option is available to all customers.


Cloud Customer Connect


For any issue related to Oracle Cloud Infrastructure, including provisioning of new resources, Console issues,
identity, networking, documentation, storage, database, Edge services, or other solutions, you can post a question to
Cloud Customer Connect at:
https://cloudcustomerconnect.oracle.com/resources/9c8fa8f96f/summary
If you are using only Always Free resources or using a Free Tier account, use Cloud Customer Connect for support
queries.

Stack Overflow
If you are creating an application that integrates with Oracle Cloud Infrastructure APIs, endpoints, or services, you
can also use Stack Overflow forums for development-related questions. Tag your questions with oracle-cloud-
infrastructure, as follows:
https://stackoverflow.com/questions/tagged/oracle-cloud-infrastructure

4. Open a support service request


This option is only available to paid accounts.
The first time you open a support request, you're automatically taken through a series of steps to provision your
support account. If you want to make changes or if you run into problems, see Configuring Your Oracle Support
Account on page 131.
Note:

Customers using only Always Free resources and customers using Free Tier
accounts are not eligible for Oracle Support. You must upgrade to a paid
account to access Oracle Support. If you need support, post a question to
Cloud Customer Connect.
If the preceding options did not resolve your issue and you need to talk to someone, you can create a support request.
In addition to support for technical issues, you can open support requests if you need to:
• Reset the password or unlock the account for the tenancy administrator
• Add or change a tenancy administrator
• Ask a question about billing and payments
• Request a service limit increase
• Request a root cause analysis (RCA)
Creating a Service Request Using the Console
To create a support ticket
1. Open the Help menu, go to Support, and click Create support request.
2. Enter the following:
• Issue Summary: Enter a title that summarizes your issue. Avoid entering confidential information.
• Describe Your Issue: Provide a brief overview of your issue.
• Include all the information that support needs to route and respond to your request. For example, "I am unable to connect to my Compute instance."
• Include troubleshooting steps taken and any available test results.
• Select the severity level for this request.
3. Click Create Request.
To request a root cause analysis (RCA)
To request a root cause analysis for an outage, create a support request and include Root Cause Analysis
(RCA) Request in the Issue Summary field.


Tip:

Use the Oracle Cloud Infrastructure Status page to view the current status of
services or to sign up for emails that notify you about outages.
To request a service limit increase
1. Open the Help menu, go to Support, and click Request service limit increase.
2. Enter the following:
• Primary Contact Details: Enter the name and email address of the person making the request. Enter one
email address only. A confirmation will be sent to this address.
• Service Category: Select the appropriate category for your request.
• Resource: Select the appropriate resource.
Depending on your selection for resource, additional fields might display for more specific information.
• Reason for Request: Enter a reason for your request. If your request is urgent or unusual, please provide details here.
3. Click Create Request.
After you submit the request, it is processed. A response can take anywhere from a few minutes to a few days. If your
request is granted, a confirmation email is sent to the address provided in the primary contact details.
If we need additional information about your request, a follow-up email is sent to the address provided in the primary
contact details.
To view support tickets

Open the Help menu, go to Support, and click View support requests.
To add a comment to a support ticket
1. Open the Help menu, go to Support, and click View support requests.
A list of technical support requests appears.
2. Click the name of the support request on which you want to comment.
3. Under Comments, click Add Comment.
The Add Comment dialog appears.
4. Type your comment, and then click Add Comment.
To close a support ticket
1. Open the Help menu, go to Support, and click View support requests.
A list of technical support requests appears.
2. Click the name of the support request you want to close.
3. Click Close Request.
The Request to close dialog appears.
4. Enter the reason for closing the ticket, and then click Close Request.
Creating a Service Request Using the API
For information about using the API and signing requests, see REST APIs on page 4409 and Security Credentials
on page 181. For information about SDKs, see Software Development Kits and Command Line Interface on page
4262.
To manage support requests with the API, use the Support Management API.


Locating Oracle Cloud Infrastructure IDs


Use the following tips to help you locate identifiers you might be asked to provide.
Finding Your Customer Support Identifier (CSI)
The Customer Support Identifier (CSI) number is generated after you purchase Oracle Cloud services. This number
can be found in several places, including in your contract document and also on your tenancy details page. You’ll
need the CSI number to register and log support requests in My Oracle Support (MOS).
To find your CSI number:
1. Open the Profile menu and click Tenancy: <your_tenancy_name>.
2. The CSI number is shown under Tenancy Information.

Finding Your Tenancy OCID (Oracle Cloud Identifier)


Get the tenancy OCID from the Oracle Cloud Infrastructure Console on the Tenancy Details page:
1. Open the Profile menu and click Tenancy: <your_tenancy_name>.
2. The tenancy OCID is shown under Tenancy Information. Click Copy to copy it to your clipboard.
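If you already use the CLI or an SDK, the tenancy OCID is also stored in your local configuration file (by default ~/.oci/config), so you can read it without opening the Console. A one-line sketch with the OCI Python SDK, assuming the SDK is installed and configured:

import oci

# The "tenancy" entry of the configuration file is the tenancy OCID.
print(oci.config.from_file()["tenancy"])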

Finding the OCID of a Compartment


The OCID (Oracle Cloud Identifier) of a resource is displayed when you view the resource in the Console, on the
resource details page.


For example, to get the OCID of a compartment:


1. Open the navigation menu. Under Governance and Administration, go to Identity and click Compartments.
A list of the compartments in your tenancy is displayed.
A shortened version of the OCID is displayed next to each compartment.

2. Click the shortened OCID string to view the entire value in a pop-up. Click Copy to copy the OCID to your
clipboard. You can then paste it into the service request form field.
Finding the OCID of a Resource
The OCID (Oracle Cloud Identifier) of a resource is displayed when you view the resource in the Console, both in the
list view and on the details page.
For example, to get the OCID for a compute instance:
1. Open the Console.
2. Select the Compartment to which the instance belongs from the list on the left side of the page.
Note that you must have appropriate permissions in a compartment to view resources.
3. Open the navigation menu. Under Core Infrastructure, go to Compute and click Instances. A list of instances in
the selected compartment is displayed.
4. Click the instance that you're interested in.
A shortened version of the OCID is displayed on the instance details page.
5. Click Copy to copy the OCID to your clipboard. You can then paste it into the service request form field.
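You can also list instance OCIDs programmatically. The sketch below assumes the OCI Python SDK and a placeholder compartment OCID; it prints the display name and full OCID of each Compute instance in the compartment.

import oci

config = oci.config.from_file()
compute = oci.core.ComputeClient(config)
compartment_id = "ocid1.compartment.oc1..exampleuniqueID"  # placeholder

for instance in compute.list_instances(compartment_id=compartment_id).data:
    print(instance.display_name, instance.id)   # instance.id is the full OCID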
Finding Your opc-request-id in the Console
To locate the opc-request-id value when you are using the Oracle Cloud Infrastructure Console, you must first access the developer tools in the browser in which you are running the Console. Depending on your browser, this is called either Developer Tools or Web Console and can be opened by pressing F12. In Safari on a Mac, it's called the Web Inspector.
1. Open your browser's developer tools by pressing F12 (in Safari on a Mac, press Option + Cmd + I).
2. Select the Network tab, then filter on XHR.
Note:

Different browsers present filtering options in different ways. Firefox presents an XHR filtering button on the Network tab UI. Internet Explorer and Edge provide a filter icon with the label Content type, which you click to expose an XHR filter.
3. Select results that return a 500 error to view the request details.
4. In the request details pane, click the Headers tab.
5. Locate and copy both the opc-request-id and date values and include them in your support ticket.


Configuring Your Oracle Support Account


The first time you use Oracle Support in the Console, you're automatically taken through a series of steps to provision
your support account. If you want to make changes or if you run into problems, this topic explains how to manually
update your support account settings. If you're a tenancy administrator, the first time you use Oracle Support in the
Console, you might need to approve yourself as a user in MOS.
The following steps are automatically completed for you during the provisioning process. These sections explain how
to manually change your account settings.
• Approve pending users (administrators)
• Add an email to your IAM user account
• Create an Oracle Single Sign On (SSO) account
• Change your linked account
To use an identity provider other than IAM, IDCS, or Okta, follow the steps to link the identity provider account to
your MOS account.
Approving Pending Users (Administrators)
If you're an administrator, you need to accept the MOS terms of service and approve pending users. Follow these
steps to approve users in MOS:
1. Go to https://support.oracle.com and sign in.
2. Accept the terms and conditions, and then click Next.
3. Navigate to the My Account page: Go to your user name at the top of the page, open the menu, and then click My
Account.
4. In the menu bar at the top of the page, click the Message Center icon, and then click Approve Pending User
Request.
5. Approve the user.
Adding an Email to Your IAM User Account
To create support requests in the Console, your user account must have an associated email address. The first time
you create a support request in the Console, the provisioning process adds this email for you. If you want to add the
email manually, you can follow these steps.
If your user account already has an email address or you aren't an IAM user, this section does not apply.
1. Open the Profile menu and click User Settings. Your Oracle Cloud Infrastructure IAM service User Details page is displayed.
2. Click Edit User.
3. In the Email field, enter your email, and then click Save Changes.
Creating an Oracle Single Sign On (SSO) Account
To create service requests with My Oracle Support, you need to have an Oracle Single Sign On account and register
your Customer Support Identifier with My Oracle Support. The first time you create a support request in the Console,
the provisioning process creates an Oracle Single Sign On account for you if needed. If you want to create the account
manually, you can follow these steps.
• If you already have an Oracle SSO account with a registered CSI, use your existing account.
• If you have an Oracle SSO account but it doesn't have an associated CSI, see Registering Your CSI for Oracle
Cloud Infrastructure on page 132.
Tip:

Before you begin this procedure, have your CSI number available. Not sure
what that number is or how to locate it? See Finding Your Customer Support
Identifier (CSI) on page 133.
To request an SSO account and register with My Oracle Support:


1. To create your Oracle Single Sign On account, go to the My Oracle Support Create Your Oracle Account page.
2. Enter your company email address in the Email address field, complete the rest of the form, and then click
Create Account. A verification email is generated.
Important:

If you use an identity provider other than IAM, IDCS, or Okta, this email
address must match the user name that you use with your identity provider.
3. Check your email account for an email from Oracle asking you to verify your email address.
4. Open the email and click Verify Email Address.
5. Sign in with the credentials you just set up.
6. At sign-in, you are prompted to enter a Note to the Approver and the Support Identifier (your CSI).
7. Click Request Access.
8. Enter the first five characters of the name of the organization that owns the Customer Support Identifier (listed in
the Welcome letter), and then click Validate. The support identifier appears in the table.
9. Click Next.
10. Enter your contact information and click Next.
11. Accept the terms and click Next.
If you are the first person requesting this support identifier, the status of the request is pending until you receive
approval from the Customer User Administrator (CUA) or from Oracle Support.
Changing Your Linked Account
If you want to change the Console user account linked to your MOS account, follow these steps. You might want to
change your account if you have an Oracle support account that doesn't use your Oracle Cloud Infrastructure profile
email. For information about IAM user accounts, see Signing In to the Console on page 41.
1. Open the Profile menu and click User Settings. Your Oracle Cloud Infrastructure IAM service User Details page is displayed.
2. Click More Actions > Link Support Account. The Oracle account sign-in page prompts you to enter your
Oracle credentials.
3. Enter the User name and Password of the Oracle support account that you want to link to this user, and then click
Sign in. The IAM user account is linked to the Oracle support account. The email address associated with the
support account is displayed in the user details in the field My Oracle Support account.
Using an Identity Provider Other than IAM, IDCS, or Okta
If you use an identity provider other than IAM, IDCS, or Okta, to access support in the Console, your MOS email
address must match the user name that you use with your identity provider.
If your identity provider user name and MOS email address do not match and you would like to access support in the
Console, change your MOS email address to a value that matches your identity provider user name.
Registering Your CSI for Oracle Cloud Infrastructure
To submit support requests, your MOS account must be associated with your tenancy CSI number. If you already
added the CSI for Oracle Cloud Infrastructure, these steps don't apply to you. If you previously registered for My
Oracle Support but need to add the CSI for Oracle Cloud Infrastructure, the provisioning process registers your CSI
for you, but if you want to register your CSI manually, you can follow these steps.
1. Go to https://support.oracle.com and sign in.
2. Navigate to the My Account page: Go to your user name at the top of the page, open the menu, and then click My
Account.
3. The Support Identifiers region displays the accounts that your user name is associated with.
4. Click Request Access.
5. Enter a Note to the Approver and then enter the Support Identifier (your CSI).
6. Enter the first five characters of the name of the organization that owns the Customer Support Identifier (listed in
the Welcome letter), and then click Validate. The support identifier appears in the table.


7. Click Next.
8. Enter your contact information and accept the terms. Click Next.
The status of the request is pending until you receive approval from the Customer User Administrator (CUA).
For more information about signing in and using My Oracle Support, see Registration, Sign In, and Accessibility
Options in My Oracle Support Help.
Finding Your Customer Support Identifier (CSI)
The following steps explain how to locate your CSI number.
The Customer Support Identifier (CSI) number is generated after you purchase Oracle Cloud services. This number
can be found in several places, including in your contract document and also on your tenancy details page. You’ll
need the CSI number to register and log support requests in My Oracle Support (MOS).
To find your CSI number:
1. Open the Profile menu and click Tenancy: <your_tenancy_name>.
2. The CSI number is shown under Tenancy Information.

Task Mapping from My Services


This topic summarizes how to perform tasks that you previously performed in My Services before the Oracle Cloud Console updates.

Navigation and Task Flow Changes


The updates recently introduced to unify the Console experience enhance and simplify managing your cloud. To
achieve the overall reduction in interfaces, some navigation paths and workflows have changed. The following tables
summarize the changes to expect:
• Navigation and General Feature Changes on page 134
• Account Management Changes on page 134
• User Management and Identity Changes on page 135
• Platform Service Region Management Changes on page 135


Navigation and General Feature Changes


• Landing page: After signing in, you now land on the Oracle Cloud Console home page and access all services from this page. The navigation menu includes services that you previously navigated to through My Services. For general information about the Console home page, see Using the Console on page 42.
• Navigate to Services:
  • Platform Services: Open the navigation menu. Under More Oracle Cloud Services, go to Platform Services, and then click the service you want to access.
  • Classic Data Management Services: Open the navigation menu. Under More Oracle Cloud Services, go to Classic Data Management Services, and then click the service you want to access.
  • Classic Infrastructure Services: Open the navigation menu. Under More Oracle Cloud Services, go to Classic Infrastructure Services, and then click the service you want to access.
  • Applications Console: If your Cloud account also has Cloud Applications services provisioned, then you'll have access to the Applications Console. In the Console header, click Applications to switch to the Applications Console.
• View Platform or Classic Announcements: Announcements for Platform and Classic services display in a banner above the header bar in the Console.
• Navigate to My Home: From the Oracle Cloud Console, open the Profile menu, and then click Service User Console.
• Chat Support: See Use Live Chat in the Console.

Account Management Changes


• View Invoices: Open the navigation menu. Under Governance and Administration, go to Account Management and click Invoices. Your list of invoices is displayed. You can access your invoices in PDF format.
• Platform Services Cost Breakdown: View the breakdown of your Platform Services on the Cost Analysis page: Open the navigation menu. Under Governance and Administration, go to Account Management and click Cost Analysis.

User Management and Identity Changes


• Clone Cloud Administrator permissions: See Add a User with Oracle Cloud Administrator Permissions on page 59.
• Add or revoke a Platform Service role (sometimes called a service entitlement) for a user: See Managing Oracle Identity Cloud Service Roles for Users on page 2398.
• Add or revoke access to an instance: See Managing Instance Roles in the Console on page 2399.
• Add roles to a group: See Managing Oracle Identity Cloud Service Roles for Groups on page 2400.
• Edit an IDCS group name: See To edit details for an Oracle Identity Cloud Service group on page 2394.
• Add users to an IDCS group: See To add users to a group on page 2395.
• Change password: See Changing Your Password on page 53.
• Open the Identity Cloud Service (IDCS) console: Open the navigation menu. Under Governance and Administration, go to Identity and click Federation. A list of the identity providers in your tenancy is displayed. Click the Oracle Identity Cloud Service Console link to open the Identity Cloud Service console.

Platform Service Region Management Changes

To view and subscribe to Platform Services regions


1. Open the Console, open the Region menu, and then click Manage Regions.


2. On the Manage Regions page, click Platform Services Regions.


The list of geographical regions is displayed. Regions that you have not subscribed to provide a button to create a
subscription.

3. To subscribe to a region, locate the region in the list and click Subscribe.
It might take several minutes to activate your tenancy in the new region.

Frequently Asked Questions


I'm not seeing Platform Services or Classic Services on my navigation menu. What happened?
Make sure that you are signing in with your Oracle Identity Cloud Service login credentials. To ensure that you sign
in through IDCS:
1. Start at: https://www.oracle.com/cloud/sign-in.html.
2. Enter your tenancy name and click Next. The IDCS sign in page is displayed.
3. Enter your username and password and click Sign In.
4. On the Console, open the navigation menu. If your account has access to Platform or Classic services, you'll see
them displayed on the menu under More Oracle Cloud Services. See also Navigating to More Oracle Cloud
Services from the Console on page 45.

How do I get to my IDCS console?


To access the IDCS console:
1. Open the navigation menu. Under Governance and Administration, go to Identity and click Federation. The
list of identity providers is displayed. OracleIdentityCloudService is displayed in the list of identity providers,
with details about the federation.


2. Click the link for Oracle Identity Cloud Service Console.

The Oracle Identity Cloud service console opens in a new window.

My team needs reports that were only available from My Services. How can I get
back to the old dashboard?
You can access the My Services dashboard by using this URL:

http://myservices-<tenancyname>.console.oraclecloud.com/mycloud/cloudportal/dashboard

where you replace <tenancyname> with your company's tenancy name.

Where can I find more information about the changes to other task workflows and
navigation?
See Task Mapping from My Services on page 133.

Where do I sign in to the Oracle Cloud Infrastructure Console?


Go to https://cloud.oracle.com.
You are prompted to enter your cloud tenant, your user name, and your password. Once authenticated, you are
directed to a region your tenancy is subscribed to. You can switch to other regions you are subscribed to by using the
region selector at the top of the Console.
If you need more help signing in, see Signing In to the Console on page 41.

How do I find my tenancy home region?


To find out what your home region is:

Open the Profile menu and click Tenancy: <your_tenancy_name>.


The Tenancy details page shows your Home Region.

What are my Oracle Cloud Infrastructure account service limits (or resource
quotas) and can I request more?
You can view your tenancy's service limits in the Console and request an increase. For more information and the
default tenancy limits, see Service Limits on page 217.

Where do I find information about what APIs are available?


Oracle Cloud Infrastructure provides a set of APIs for the core services (network, compute, block volumes) as well as
for the IAM and the Object Storage services.
See "API Requests" in the Oracle Cloud Infrastructure User Guide.


What browsers can I use with the Console?


Oracle Cloud Infrastructure supports the following browsers and versions:
• Google Chrome 69 or later
• Safari 12.1 or later
• Firefox 62 or later

Why can't I sign in using Firefox?


If you are having trouble signing in to the Console using the Firefox browser, it might be due to one of the following
conditions:
• You are in Private Browsing mode. The Console does not support Private Browsing mode. Open a new session of
Firefox with Private Browsing turned off.
• You are not on the latest version of Firefox. Upgrade to the latest version. To check to see if you are on the latest
version, follow these instructions: https://support.mozilla.org/en-US/kb/find-what-version-firefox-you-are-using
When checking the version, note whether you are using Firefox or Firefox ESR.
• Your Firefox user profile is corrupted. To fix this issue:
1. Upgrade to the latest version of Firefox.
2. Create a new user profile and open Firefox with the new profile. See Mozilla Support for instructions on how
to create a new user profile: https://support.mozilla.org/en-US/kb/profile-manager-create-and-remove-firefox-
profiles
If none of the above resolves your issue, contact Oracle Support. In your problem description, make sure you specify
whether you are using Firefox or Firefox ESR.
How do I know if I am in Private Browsing mode?
When you are in Private Browsing mode, a mask icon is displayed in the upper right corner of your Firefox window.

How do I change my password?


Note:

For Federated Users


If your company uses an identity provider (other than Oracle Identity Cloud
Service) to manage user logins and passwords, you can't use the Console to
update your password. You do that with your identity provider.
1. Sign in to the Console using the Oracle Cloud Infrastructure Username and Password.


2. After you sign in, go to the top-right corner of the Console, open the Profile menu, and then click Change Password.

3. Enter the current password.


4. Follow the prompts to enter the new password, and then click Save New Password.

How do I reset my password if I forget it?


If you added an email address to your user settings, you can use the Forgot Password link on the sign-in page to have a temporary password sent to you. If you don't have an email address included with your user details, contact your administrator to reset your password for you. Your administrator will give you a temporary password that is good for 7 days. If you do not use it within 7 days, the password expires and your administrator must create a new one-time password for you.
If you are the default or tenant administrator for your site and you forgot your password, contact Oracle Support. For
tips on filing a service request, see Getting Help and Contacting Support on page 126.

How do I get support?


See Getting Help and Contacting Support on page 126.

Where do I find my Tenancy OCID?


Get the tenancy OCID from the Oracle Cloud Infrastructure Console on the Tenancy Details page:
1. Open the Profile menu and click Tenancy: <your_tenancy_name>.


2. The tenancy OCID is shown under Tenancy Information. Click Copy to copy it to your clipboard.

Chapter 3
Oracle Cloud Infrastructure Free Tier
Oracle Cloud Infrastructure's Free Tier includes a free time-limited promotional trial that allows you to explore a
wide range of Oracle Cloud Infrastructure products, and a set of Always Free offers that never expire.

Free Trial
The Free Trial provides you with $300 of cloud credits that are valid for up to 30 days. You may spend these credits
on any eligible Oracle Cloud Infrastructure service.

Getting Started
Start for Free
For more information, and to see a complete list of services available to you during the trial, visit the Free Trial
website.
Note:

During sign up, choose your home region carefully. You can provision
Always Free Autonomous Databases only in your home region.
For security purposes, most users will need a mobile phone number and a credit card to create an account. Your credit
card will not be charged unless you upgrade your account.

When Your Trial Period Ends


After your trial ends, your account remains active. There is no interruption to the availability of the Always Free
Resources you have provisioned. You can terminate and re-provision Always Free resources as needed.
Paid resources that were provisioned with your credits during your free trial are reclaimed by Oracle unless you
upgrade your account.
Pay as You Go accounts are available with no commitment, or contact an Oracle sales representative in your location to learn about monthly and annual flex accounts that offer discounted pricing. For more information, see Oracle Cloud Infrastructure Pricing.

Always Free Resources


All Oracle Cloud Infrastructure accounts (whether free or paid) have a set of resources that are free of charge for the
life of the account. These resources display the Always Free label in the Console.
Using the Always Free resources, you can provision a virtual machine (VM) instance, an Oracle Autonomous Database, and the networking, load balancing, and storage resources needed to support the applications that you want to build. With these resources, you can do things like run small-scale applications or perform proof-of-concept testing.
The following list summarizes the Oracle Cloud Always Free-eligible resources that you can provision in your
tenancy:


• Compute (up to two instances)


• Autonomous Database (up to two database instances)
• Load Balancing (one load balancer)
• Block Volume (up to 100 GB total storage)
• Object Storage (up to 20 GiB)
• Vault (up to 20 keys and up to 150 secrets)
For detailed information about the Always Free resources, see Details of the Always Free Resources on page 144.
You can find your tenancy’s limits for Always Free resources in the Console. To check these limits: Open the
navigation menu. Under Governance and Administration, go to Limits, Quotas and Usage.

Quickly Launch Your Always Free Resources Using Resource Manager


Oracle offers you the ability to automatically create a full set of Always Free resources in a few minutes using the
Resource Manager service's templates feature. Templates are pre-built Terraform configurations that help you easily
create sets of resources used in common scenarios using a single, simple workflow. When you provision your Always
Free resources using the provided template, your resources are created with the settings and configuration you need to
start creating applications in the cloud. You don't need to have experience with Terraform to use the template. See To provision your Always Free resources using Terraform and Resource Manager for step-by-step instructions.

To provision your Always Free resources using Terraform and Resource Manager


Tip:

Note that Terraform refers to the set of resources being provisioned as a


"stack". For a general introduction to Terraform and the "infrastructure-as-
code" model, see Terraform: Write, Plan, and Create Infrastructure as Code.
1. Log into your Oracle Cloud Infrastructure account.
2. Open the navigation menu. Under Solutions and Platform, go to Resource Manager and click Stacks.
3. Click the Create Stack button to open the Create Stack dialog.
4. In the Create Stack dialog, click Sample Solution.
5. Click Select Solution to browse available solutions.
6. Select the check box for Sample E-Commerce Application.
7. Click Select Solution.
8. Optionally, provide a name for the new stack. If you don't provide a name, a default name is provided on the
server. Avoid entering confidential information.
9. Optionally, provide a description for the stack.
10. Optionally, select a different compartment from your current compartment in which to create the stack. To do so,
select a compartment from the Create In Compartment drop-down.
11. Click Next to proceed to the Configure Variables panel.
12. The variables displayed in the Configure Variables panel are auto-populated from the Terraform configuration in
the template you selected. You don't need to change these variables if you are provisioning your Always Free
resources using the template provided by Oracle.
13. Click Next to proceed to the Review panel.
14. Verify your stack configuration, then click Create to create your stack.
Your set of Always Free resources should take no more than a few minutes to provision.
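
If you prefer to work from a terminal, a similar stack-based workflow can be driven with the OCI CLI. The following is only a minimal sketch under stated assumptions: it assumes you have a Terraform configuration saved locally as stack.zip (a placeholder file name), and the OCIDs shown are placeholders to replace with your own values.

# Create a Resource Manager stack from a local Terraform configuration (placeholder file name and OCID)
oci resource-manager stack create \
  --compartment-id ocid1.compartment.oc1..example \
  --config-source stack.zip \
  --display-name "always-free-stack"

# Run an apply job against the new stack (use the stack OCID returned by the previous command)
oci resource-manager job create-apply-job \
  --stack-id ocid1.ormstack.oc1..example \
  --execution-plan-strategy AUTO_APPROVED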

Upgrading to a Paid Account


You can upgrade to a paid account at any time through the Oracle Cloud Infrastructure Console. To do so, click the
Upgrade link in the banner at the top of the Console web page. If you don't see an Upgrade link on the page you
are viewing, you can click the Oracle Cloud logo at the top of the Console and then look for the Upgrade link in the
sidebar on the right side of the Console home page. You will continue to have access to all of your cloud resources
after upgrading your account.

Additional Information
See Frequently Asked Questions: Oracle Cloud Infrastructure Free Tier on page 147 for answers to your questions
about Free Tier accounts and resources.

Details of the Always Free Resources


This topic provides reference information about Oracle Cloud Infrastructure's Always Free resources.

Compute
All tenancies get two Always Free Compute virtual machine (VM) instances.
You must create the Always Free Compute instances in your home region.

Details of the Always Free Compute instance


• Shape: VM.Standard.E2.1.Micro
• Processor: 1/8th of an OCPU with the ability to use additional CPU resources
• Memory: 1 GB
• Networking: Includes one VNIC with one public IP address and up to 50 Mbps network bandwidth via the
internet. Traffic to private IPs, on-premises endpoints via a Dynamic Routing Gateway, or to endpoints within the
same Oracle Cloud region is up to 480 Mbps.
• Operating System: Your choice of one of the following Always Free-eligible operating systems:
• Oracle Linux (including Oracle Autonomous Linux)
• Canonical Ubuntu Linux
• CentOS Linux
Tip:

The Linux operating systems labeled "Always Free Eligible" in the Console
are compatible with Always Free Compute instances and incur no licensing
fees. These operating systems are also compatible with paid resources and
are available to users of paid accounts. To provision a Compute instance with
an operating system that is not Always Free-eligible, you must have a paid
account or a Free Trial account with available credits.
See Oracle-Provided Images on page 633 for more information about the available operating systems. For steps to
create an Always Free-eligible Compute instance, see "Tutorial - Launching Your First Linux Instance" in the Oracle
Cloud Infrastructure Getting Started Guide.
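
If you prefer the command line to the Console tutorial, the following is a minimal sketch of launching an Always Free-eligible instance with the OCI CLI. The availability domain name, OCIDs, SSH key path, and display name are placeholders, not values from this guide; replace them with your own.

# Launch an Always Free-eligible VM.Standard.E2.1.Micro instance (all values below are placeholders)
oci compute instance launch \
  --availability-domain "Uocm:PHX-AD-1" \
  --compartment-id ocid1.compartment.oc1..example \
  --shape VM.Standard.E2.1.Micro \
  --image-id ocid1.image.oc1..example \
  --subnet-id ocid1.subnet.oc1..example \
  --assign-public-ip true \
  --ssh-authorized-keys-file ~/.ssh/id_rsa.pub \
  --display-name always-free-vm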

Database
All tenancies get two Always Free Oracle Autonomous Databases. The Autonomous Databases use shared Exadata
infrastructure (meaning Oracle handles the database infrastructure provisioning and maintenance). For current
regional availability, see the "Always Free Cloud Services" section of Data Regions for Platform and Infrastructure
Services.

Details of the Always Free Oracle Autonomous Database instance


• Processor: 1 Oracle CPU processor (cannot be scaled)
• Memory: 8 GB RAM

• Database Storage: 20 GB storage (cannot be scaled)


• Workload Type: Your choice of either the transaction processing or data warehouse workload type
• Maximum Simultaneous Database Sessions: 20
Tip:
Always Free Autonomous Databases can be upgraded to paid instances after provisioning if you need features like more storage or CPU scaling.
Note:
• Before creating an Always Free Autonomous Database, check your home region for Always Free Autonomous Database support. See Data Regions for Platform and Infrastructure Services.
• You cannot create an Always Free Autonomous Database in a home region where Always Free Autonomous Databases are not supported.
• Not all regions support the same database version. The supported version may be 19c-only or 21c-only, depending on the region.
See Overview of the Always Free Autonomous Database on page 1221 for additional product details. See To create
an Always Free Autonomous Database on page 1159 for steps to create an Always Free Autonomous Database.
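
As a rough CLI illustration only (the documented procedure is the one referenced above), an Always Free Autonomous Database can also be created with the OCI CLI. The flag names, in particular --is-free-tier, are assumptions based on the corresponding API fields, so verify them with oci db autonomous-database create --help; the OCID and password below are placeholders.

# Create an Always Free Autonomous Database with the transaction processing workload (placeholder values)
oci db autonomous-database create \
  --compartment-id ocid1.compartment.oc1..example \
  --db-name alwaysfreedb \
  --display-name always-free-adb \
  --admin-password 'ExamplePassword#123' \
  --cpu-core-count 1 \
  --data-storage-size-in-tbs 1 \
  --db-workload OLTP \
  --is-free-tier true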

Load Balancing
All tenancies get one Always Free 10 Mbps load balancer.

Details of the Always Free load balancer


• Shape: Micro (10 Mbps)
• Listeners: 10
• Virtual Hostnames: 10
• Backend Sets: 10
• Backend Servers: 128
For information about provisioning an Always Free load balancer, see Getting Started with Load Balancing on page
104.

Block Volume
All tenancies receive a total of 100 GB of Always Free Block Volume storage, and five volume backups. These
amounts apply to both boot volumes and block volumes combined. When you provision a Compute instance, the
instance automatically receives a 50 GB boot volume for storage. You can also create and attach block volumes to
expand the storage capacity of a Compute instance. For more information, see Creating a Volume on page 519 and
Attaching a Volume on page 521.

Details of the Always Free Block Volume resources


• 100 GB total of combined boot volume and block volume Always Free Block Volume storage.
• Five total volume backups (boot volume and block volume combined).
When you create a Compute instance, the default boot volume size for the instance is 50 GB, which counts towards
your allotment of 100 GB. You can customize the instance's boot volume size up to 100 GB; however, this will use
up your full allotment of storage for Always Free Block Volume resources. Also, because the minimum boot volume
size allowed for Compute instances is 50 GB, launching two instances will use all your Always Free Block Volume
resources. Alternatively, you can launch one instance with the default boot volume size of 50 GB, and then create
and attach a 50 GB block volume to expand the storage capacity of the instance. For more information, see Creating
a Volume on page 519 and Attaching a Volume on page 521. Although it is possible to mix paid and Always
Free resources, Oracle does not recommend this. If you have used up your allotment of Always Free Block Volume
resources, you can free up block storage resources by terminating an Always Free instance and deleting the boot
volume, or by deleting an Always Free block volume.
You can have a maximum of five Always Free volume backups at any time. This applies to both boot volume and
block volume backups. For example, you could have three boot volume backups for your Always Free instance and
two block volume backups for your Always Free block volumes. In this example, if you try to create new backups,
the operation will fail with an error until you delete existing Always Free volume backups. For more information
about volume backups, see Overview of Block Volume Backups on page 544 and Overview of Boot Volume
Backups on page 619.
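
For reference, the 50 GB boot volume plus 50 GB block volume approach described above can be scripted with the OCI CLI. This is a sketch only; the OCIDs and availability domain are placeholders, and paravirtualized is just one possible attachment type.

# Create a 50 GB block volume in the same availability domain as the instance (placeholder values)
oci bv volume create \
  --compartment-id ocid1.compartment.oc1..example \
  --availability-domain "Uocm:PHX-AD-1" \
  --size-in-gbs 50 \
  --display-name always-free-volume

# Attach the volume to an existing Always Free instance
oci compute volume-attachment attach \
  --instance-id ocid1.instance.oc1.phx.example \
  --volume-id ocid1.volume.oc1.phx.example \
  --type paravirtualized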

Object Storage
All tenancies get a total of 20 GiB (gibibytes) of Always Free Object Storage.

Details of the Always Free Object Storage resources


If you have a free account (including trial accounts), Always Free includes the following:
• 20 GiB of combined Standard tier, Infrequent Access tier, and Archive tier data
• 50,000 Object Storage API requests per month
If you have a paid account, Always Free includes the following:
• 10 GiB of Standard tier data
• 10 GiB of Infrequent Access tier data
• 10 GiB of Archive tier data
• 50,000 Object Storage API requests per month
Important:

If you are participating in an Oracle Cloud Free Trial, you can store unlimited
data and can use 20 GiB for free (your usage of the first 20 GiB incurs no
deduction of your initial $300 trial credit balance). Upgrade to a paid account
to continue access to unlimited storage. If you do not upgrade before your
trial ends, your free account will be limited to 20 GiB of combined Standard
tier, Infrequent Access tier, and Archive tier data. If you are using more
than the 20-GiB limit when your Free Trial ends, all of your objects will be
deleted. You can then upload objects until you reach your Always Free usage
limits.
See Putting Data into Object Storage on page 83 for instructions on using your Always Free storage resources.
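
As a quick illustration, creating a bucket and uploading an object with the OCI CLI counts against the storage and request allotments described above. The bucket name, compartment OCID, and file name are placeholders.

# Create a bucket and upload a file into it (placeholder names and OCID)
oci os bucket create --name my-always-free-bucket --compartment-id ocid1.compartment.oc1..example
oci os object put --bucket-name my-always-free-bucket --file ./example-data.csv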

Resource Manager
All tenancies get the following Always Free resources for Resource Manager.

Resource (per tenant) Always Free resources

Configuration source providers 100

Jobs (concurrent) 2
Job duration: 24 hours

Private templates 10

Stacks 10
Variables per stack: 250
Size per variable: 8192 bytes
Zip file per stack: 11 MB

Service Connector Hub


All tenancies get 2 Always Free service connectors.

Details of the Always Free Service Connector Hub resources


If you have a free account (including trial accounts), Always Free Service Connector Hub includes 2 Always Free
service connectors.
If you have a paid account, see Service Connector Hub Limits on page 234.

Vault
All master encryption keys protected by software are free. All tenancies get 20 key versions of master encryption keys
protected by a hardware security module (HSM) and 150 Always Free Vault secrets. You can spread these keys or
secrets across any number of vaults in the tenancy, although virtual private vaults are not included in the Always Free
resources.

Details of the Always Free Vault resources


• all key versions of a master encryption key protected by software (across any number of keys or vaults)
• 20 total key versions of a master encryption key protected by an HSM (across any number of keys or vaults)
• 150 total Always Free secrets (across any number of vaults).
• 40 secret versions of any given secret (including up to 20 in some form of active use and 20 pending deletion).
If you have used up your allotment of Always Free secrets, you can release resources by scheduling a secret or secret
version for deletion. At minimum, you must wait one day before the secret or secret version is deleted.
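
If you want to review or free up that allotment from the command line, the Vault secret commands in the OCI CLI can help. This is a sketch under stated assumptions: the schedule-secret-deletion subcommand name is assumed from the Secret Management API operation of the same name (verify with --help), and the OCIDs are placeholders.

# List secrets in a compartment to review your Always Free usage (placeholder OCID)
oci vault secret list --compartment-id ocid1.compartment.oc1..example

# Schedule a secret for deletion so that it is released from the allotment (assumed subcommand name)
oci vault secret schedule-secret-deletion --secret-id ocid1.vaultsecret.oc1..example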

Frequently Asked Questions: Oracle Cloud Infrastructure Free Tier


I just signed up and I cannot access specific services. What can I do?
Registering your account with all services and regions can take a few minutes. Check again after a few minutes have
passed.

How do I change which resources I want to designate as Always Free?


In short, you cannot. Eligible resources are designated Always Free when they are created. After you provision an
Always Free resource, the Always Free status is not transferable to another existing resource. However, you can
delete an existing Always Free resource in order to create a new Always Free resource in its place.

What happens when my Free Trial expires or my credits are used up?
When you've reached the end of your 30-day trial, or used all of your Free Trial credits (whichever comes first),
you will no longer be able to create new paid resources. However, your account will remain active. Your existing
resources will continue to run for a few days, allowing you to upgrade your account and keep your resources before
they're reclaimed by Oracle. (Note that reclaimed resources cannot be recovered—they are permanently deleted.)

Resources identified as Always Free will not be reclaimed. After your Free Trial expires, you'll continue to be able
to use and manage your existing Always Free resources, and create new Always Free resources according to tenancy
limits.

If I upgrade, do I keep my Free Trial credit balance?


Yes, if you upgrade during the Free Trial period, you will not be billed until you've reached the end of your 30-day
trial, or used all of your Free Trial credits (whichever comes first). You will be notified by email when billing begins.

After I upgrade my account, can I downgrade?


There is no option to downgrade your account. However, with a paid account, you’ll continue to have access to
Always Free resources, and you’ll only pay for the standard resources you use. No minimums and no prepayment are
required for your paid account.

My resources no longer appear. How can I restore them?


If you have a Free Tier account and your resources no longer appear, it is likely that your Free Trial has expired and
your paid resources have been reclaimed (terminated). You can verify whether this is the case by doing the following:
1. Log in to the Console
2. Check for a banner at the top of the Console with the following text: "You are using a Free Tier account. To
access all services and resources, upgrade to a paid account.”
If you see this message, your resources have been reclaimed and cannot be restored.

I get an "out of host capacity" error when I try to create an Always Free Compute
instance. What can I do?
An "out of host capacity" error indicates a temporary lack of Always Free shapes in your home region. Oracle is
working to provide more capacity, though it might take several days before additional capacity is available in your
home region. Wait a while, and then try to launch the instance again.

Is it possible to extend my Free Trial?


If you need additional credits or time, you can schedule a call with an Oracle sales representative using the Upgrade
page in the Console. Sales representatives have the authority to extend trials or issue additional credits if appropriate.
If you don't see an Upgrade link on the Console page you are viewing, you can click the Oracle Cloud logo at the
top of the Console and then look for the Upgrade link in the sidebar on the right side of the page.

Is my Free Tier account eligible for support?


Community support through our forums is available to all customers. Customers using only Always Free resources
are not eligible for Oracle Support. Limited support is available to Free Tier accounts with Free Trial credits. After
you use all of your credits or after your trial period ends (whichever comes first), you must upgrade to a paid account
to access Oracle Support. If you choose not to upgrade and continue to use Always Free Services, you will not be
eligible to raise a service request in My Oracle Support. See Getting Help and Contacting Support on page 126.


Chapter 4
Oracle Cloud Infrastructure Government Cloud
Oracle Cloud Infrastructure includes the following government cloud services:
• Oracle Cloud Infrastructure US Government Cloud on page 150
• Oracle Cloud Infrastructure United Kingdom Government Cloud on page 174

Oracle Cloud Infrastructure US Government Cloud


Oracle Cloud Infrastructure US Government Cloud provides cloud services for two levels of government operators:
• Oracle Cloud Infrastructure US Government Cloud with FedRAMP Authorization on page 160
• Oracle Cloud Infrastructure US Federal Cloud with DISA Impact Level 5 Authorization on page 165

For All US Government Cloud Customers


This topic contains information common to both the US Government Cloud with FedRAMP High Joint Authorization
Board authorization and to the US Federal Cloud with DISA Impact Level 5 authorization.

Shared Responsibilities
Oracle Cloud Infrastructure for government offers best-in-class security technology and operational processes to
secure its enterprise cloud services. However, for you to securely run your workloads, you must be aware of your
security and compliance responsibilities. By design, Oracle provides security of cloud infrastructure and operations
(cloud operator access controls, infrastructure security patching, and so on), and you are responsible for securely
configuring your cloud resources. Security in the cloud is a shared responsibility between you and Oracle.
For more information about shared responsibilities in the Oracle Cloud, see the following white papers:
• Making Sense of the Shared Responsibility Model
• Oracle Cloud Infrastructure Security

Setting Up an Identity Provider for Your Tenancy


As a Government Cloud customer, you must bring your own identity provider that meets your agency's compliance
requirements and supports common access card/personal identity verification card (CAC/PIV) authentication. You
can federate Oracle Cloud Infrastructure with SAML 2.0 compliant identity providers that also support CAC/PIV
authentication. For instructions on setting up a federation, see Federating with Identity Providers on page 2381.

Remove the Oracle Cloud Infrastructure Default Administrator User and Any Other Non-Federated
Users
When your organization signs up for an Oracle account and Identity Domain, Oracle sets up a default administrator
for the account. This person will be the first IAM user for your company and will have full administrator access to
your tenancy. This user can set up your federation.
After you have successfully set up the federation with your chosen identity provider, you can delete the default
administrator user and any other IAM service local users you might have added to assist with setting up your tenancy.

Deleting the local, non-federated users ensures that only users in your chosen identity provider can access Oracle
Cloud Infrastructure.
To delete the default administrator:
1. Sign in to the Console through your identity provider.
More details
a. Open a supported browser and go to the Government Cloud Console URL.
b. Enter your Cloud Tenant and click Continue.
c. On the Single Sign-On pane, select your identity provider and click Continue. You will be redirected to your
identity provider to sign in.
d. Enter your user name and password.
2. Open the navigation menu. Under Governance and Administration, go to Identity and click Users. The list of
users is displayed.
3. On the User Type filter, select only Local Users.
4. For each local user, go to the Actions icon (three dots) and click Delete.
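
If you prefer to script this cleanup once federation is working, the IAM commands in the OCI CLI can list and delete local users. This is a sketch only: run it against your home region, replace the placeholder OCIDs, and confirm that each user you delete is a local (non-federated) user.

# List all users in the tenancy so you can identify local, non-federated users (placeholder tenancy OCID)
oci iam user list --compartment-id ocid1.tenancy.oc2..example --all

# Delete a specific local user by OCID
oci iam user delete --user-id ocid1.user.oc2..example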

Using a Common Access Card/Personal Identity Verification Card to Sign in to the Console
After you set up CAC/PIV authentication with your identity provider and successfully federate with Oracle Cloud
Infrastructure, you can use your CAC/PIV credentials to sign in to the Oracle Cloud Infrastructure Console. See your
identity provider's documentation for the specific details for your implementation.
In general, the sign in steps are:
1. Insert your CAC/PIV card into your card reader.
2. Navigate to the Oracle Cloud Infrastructure Console sign in page.
3. If prompted, enter your Cloud Tenant name and click Continue.
4. Select the Single Sign-On provider and click Continue.
5. On your identity provider's sign on page, select the appropriate card, for example, PIV Card.
6. If presented with a certificate picker, choose the appropriate certificate or other attributes set up by your
organization.
7. When prompted, enter the PIN.

IPv6 Support for Virtual Cloud Networks


US Government Cloud customers have the option to enable IPv6 addressing for their VCNs. For more information,
see IPv6 Addresses on page 2915.

Setting Up Secure Access for Compute Hosts


You can set up CAC/PIV authentication using third-party tools to enable multi-factor authentication for securely
connecting to your compute hosts. Example tools include PuTTY-CAC for Windows and OpenSC for macOS. For
more information, see the U.S. Government website, PIV Usage Guidelines.

Enabling FIPS Mode for Your Operating System


Government Cloud customers are responsible for enabling FIPS mode for the operating systems on their Compute
hosts. To make your operating system compliant with Federal Information Processing Standard (FIPS) Publication
140-2, follow the guidelines for your operating system:

Oracle Linux
Follow the guidance provided at Enabling FIPS Mode on Oracle Linux.

Ubuntu
Follow the guidance provided at Ubuntu Security Certifications.

Windows Server 2012


Follow the guidance provided at Data Encryption for Web console and Reporting server Connections.

Windows Server 2016 and Windows Server 2019


First, follow the guidance provided at How to Use FIPS Compliant Algorithms.
Next, go to the Microsoft document, FIPS 140 Validation and navigate to the topic Information for System
Integrators. Follow the instructions under "Step 2 – Setting FIPS Local/Group Security Policy Flag" to complete the
FIPS enablement.

CentOS
The following guidance is for enabling FIPS on CentOS 7.5. These procedures are valid for both VM and bare metal
instances, and only in NATIVE mode. These procedures can be modified for both Emulated and PV modes as needed.
Note that this procedure provides an instance that contains the exact FIPS cryptographic modules EXCEPT the kernel.
However, the kernel module is the same major/minor version but at a later revision, so it can be considered
compliant under most FIPS compliance models.
After you complete this procedure, Oracle strongly recommends that you do NOT run system-wide yum updates. The
system-wide update will remove the FIPS modules contained herein.

Verify that the versions of the kernel, FIPS modules, and FIPS software are at the minimum required versions:
1. Validate the current version of the kernel package meets the requirement:
a. Current version: kernel-3.10.0-693.el7
b. Execute rpm -qa | grep kernel-3
2. Execute the following and validate that the major/minor version is the same as the requirements:
a. Run

yum list <package_name>


b. Verify that the major/minor version matches the required ones.
Required packages and versions are:
• fipscheck - fipscheck-1.4.1-6.el7
• hmaccalc - hmaccalc-0.9.13-4.el7
• dracut-fips - dracut-fips-033-502.el7
• dracut-fips-aesni - dracut-fips-aesni-033-502.el7
c. For each required package that is not installed, run

yum install <package_name>

3. Download and install the following packages:


a. Packages already installed as part of the image:
1. Create a directory called preinstall.
2. Download the following packages into this directory:
openssl, openssl-libs – 1.0.2k-8.el7
nss, nss-tools, nss-sysinit – 3.28.4-15.el7_4
nss-util – 3.28.4-3.el7
nss-softokn, nss-softokn-freebl – 3.28.3-8.el7_4
openssh, openssh-clients, openssh-server – 7.4p1-11.el7
3. In the preinstall directory, run

yum --nogpgcheck downgrade *.rpm


b. Packages to be added to the image:
1. Create a directory called newpackages.
2. Download the following packages into this directory:
libreswan – 3.20-3.el7
libgcrypt – 1.5.3-14.el7
gnutls – 3.3.26-9.el7
gmp – 6.0.0-15.el7
nettle – 2.7.1-8.el7
3. In the newpackages directory, run

yum --nogpgcheck localinstall *.rpm

The URLs for the packages used for this installation are:
Preinstall:
http://linuxsoft.cern.ch/cern/centos/7/updates/x86_64/Packages/nss-3.28.4-15.el7_4.x86_64.rpm
http://linuxsoft.cern.ch/cern/centos/7/updates/x86_64/Packages/nss-util-3.28.4-3.el7.x86_64.rpm
http://linuxsoft.cern.ch/cern/centos/7/updates/x86_64/Packages/nss-tools-3.28.4-15.el7_4.x86_64.rpm
http://linuxsoft.cern.ch/cern/centos/7/updates/x86_64/Packages/nss-sysinit-3.28.4-15.el7_4.x86_64.rpm
http://linuxsoft.cern.ch/cern/centos/7/updates/x86_64/Packages/nss-softokn-freebl-3.28.3-8.el7_4.x86_64.rpm
http://linuxsoft.cern.ch/cern/centos/7/updates/x86_64/Packages/nss-softokn-3.28.3-8.el7_4.x86_64.rpm
http://linuxsoft.cern.ch/cern/centos/7/updates/x86_64/Packages/openssl-1.0.2k-8.el7.x86_64.rpm
http://linuxsoft.cern.ch/cern/centos/7/updates/x86_64/Packages/openssl-libs-1.0.2k-8.el7.x86_64.rpm
http://linuxsoft.cern.ch/cern/centos/7/updates/x86_64/Packages/openssh-7.4p1-11.el7.x86_64.rpm
http://linuxsoft.cern.ch/cern/centos/7/updates/x86_64/Packages/openssh-clients-7.4p1-11.el7.x86_64.rpm
http://linuxsoft.cern.ch/cern/centos/7/updates/x86_64/Packages/openssh-server-7.4p1-11.el7.x86_64.rpm
Newpackages:
http://linuxsoft.cern.ch/cern/centos/7/updates/x86_64/Packages/libreswan-3.20-3.el7.x86_64.rpm
http://linuxsoft.cern.ch/cern/centos/7/updates/x86_64/Packages/libgcrypt-1.5.3-14.el7.x86_64.rpm
http://linuxsoft.cern.ch/cern/centos/7/updates/x86_64/Packages/gnutls-3.3.26-9.el7.x86_64.rpm

http://linuxsoft.cern.ch/cern/centos/7/updates/x86_64/Packages/gmp-6.0.0-15.el7.x86_64.rpm
http://linuxsoft.cern.ch/cern/centos/7/updates/x86_64/Packages/nettle-2.7.1-8.el7.x86_64.rpm

Kernel FIPS module and initramfs installation and validation.


Perform this procedure as root:
1. Regenerate dracut:

dracut -f -v
2. Add the fips argument to the end of the default kernel boot command line:
a. Edit /etc/default/grub
b. At the end of the line starting with “GRUB_CMDLINE_LINUX”, add

fips=1

inside the double quotes of the command.


c. Save the result.
3. Generate a new grub.cfg:

grub2-mkconfig -o /etc/grub2-efi.cfg

Configure SSH to limit the encryption algorithms.


1. Sudo to root.
2. Edit /etc/ssh/sshd_config.
3. Add the following lines to the bottom of the file:

Protocol 2
Ciphers aes128-ctr,aes192-ctr,aes256-ctr,aes128-cbc,3des-cbc,aes192-cbc,aes256-cbc
MACs hmac-sha1
4. Reboot the instance.
5. After the instance has rebooted, validate that FIPS mode has been enabled in the kernel:
a. Sudo to root.
b. Run the following command:

cat /proc/sys/crypto/fips_enabled

The result should be '1'.


To further secure CentOS 7/RHEL 7.x systems as required by individual agency guidance, follow the checklist contained in the OpenSCAP guide at https://static.open-scap.org/ssg-guides/ssg-centos7-guide-index.html.
The STIG for evaluating compliance under multiple profiles can be found at https://iase.disa.mil/stigs/os/unix-linux/Pages/index.aspx. Use the Red Hat Linux 7.x STIG for CentOS 7.5 releases.
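
One way to run that checklist is with the oscap tool from the openscap-scanner and scap-security-guide packages. The data stream path and profile ID below are examples that vary by SSG version, so list the available profiles first and substitute the one your agency requires.

# List the profiles available in the installed SCAP Security Guide content (path varies by version)
oscap info /usr/share/xml/scap/ssg/content/ssg-centos7-ds.xml

# Evaluate the system against a chosen profile and write an HTML report (profile ID is an example)
oscap xccdf eval --profile xccdf_org.ssgproject.content_profile_stig \
  --report /root/openscap-report.html /usr/share/xml/scap/ssg/content/ssg-centos7-ds.xml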

Required VPN Connect Parameters for Government Cloud


If you use VPN Connect with the Government Cloud, you must configure the IPSec connection with the following
FIPS-compliant IPSec parameters.
For some parameters, Oracle supports multiple values; the recommended value is marked as recommended.

Oracle supports the following parameters for IKEv1 or IKEv2. Check the documentation for your particular CPE to
confirm which parameters the CPE supports for IKEv1 or IKEv2.

Phase 1 (ISAKMP)

• ISAKMP protocol: Version 1
• Exchange type: Main mode
• Authentication method: Pre-shared keys *
• Encryption algorithm: AES-256-cbc (recommended), AES-192-cbc, AES-128-cbc
• Authentication algorithm: SHA-2 384 (recommended), SHA-2 256, SHA-1 (also called SHA or SHA1-96)
• Diffie-Hellman group: group 14 (MODP 2048), group 19 (ECP 256), group 20 (ECP 384) (recommended)
• IKE session key lifetime: 28800 seconds (8 hours)

* Only numbers, letters, and spaces are allowed characters in pre-shared keys.

Phase 2 (IPSec)

• IPSec protocol: ESP, tunnel mode
• Encryption algorithm: AES-256-gcm (recommended), AES-192-gcm, AES-128-gcm, AES-256-cbc, AES-192-cbc, AES-128-cbc
• Authentication algorithm: If using GCM (Galois/Counter Mode), no authentication algorithm is required because authentication is included with GCM encryption. If not using GCM, use HMAC-SHA-256-128.
• IPSec session key lifetime: 3600 seconds (1 hour)

• Perfect Forward Secrecy (PFS): enabled, group 14

Oracle's BGP ASN


This section is for network engineers who configure an edge device for FastConnect or VPN Connect.
Oracle's BGP ASN for the Government Cloud depends on the authorization level:
• US Government Cloud: 6142
• US Federal Cloud (Impact Level 5 authorization): 20054

FIPS Compatible Terraform Provider


To use Terraform in US Government Cloud regions, refer to Enabling FIPS Compatibility on page 4323 for
installation and configuration information.

Container Engine for Kubernetes


The components installed by Container Engine for Kubernetes are FIPS compliant. When using Container
Engine for Kubernetes in US Government Cloud regions, you should also ensure that the underlying hosts are
FIPS compliant.

TLS Certificates for API Gateway


If you use API Gateway in US Government Cloud regions, you must:
• Obtain a custom TLS certificate from an approved Certificate Authority.
• Record the mapping between an API gateway's custom domain name and its public IP address with an approved
DNS provider.
For installation and configuration information, see Setting Up Custom Domains and TLS Certificates on page 375.

Requesting a Service Limit Increase for Government Cloud Tenancies


If you need to request a service limit increase, use the following instructions to create a service request in My Oracle
Support.
Important:

• Before you can create a service request, you must have an oracle.com
account and you must register your Oracle Cloud Infrastructure CSI
with My Oracle Support. See Requesting a Service Limit Increase for
Government Cloud Tenancies on page 156 for details.
• Be aware that the support engineer that reviews the information in the
service limit request might not be a U.S. citizen.

Creating a Service Request


To create a service request for Oracle Government Cloud:
1. Go to My Oracle Support and log in.
If you are not signed in to Oracle Cloud Support, click Switch to Cloud Support at the top of the page.
2. At the top of the page, click Service Requests.
3. Click Create Technical SR.

4. Select the following from the displayed menus:


• Service Type: Select Oracle Cloud Infrastructure from the list.
• Service Name: Select the appropriate option for your organization.
• Problem Type: Select Account Provisioning, Billing and Termination, and then select Limit Increase from the submenu.
5. Enter your contact information.
6. Enter a Description, and then enter the required fields specific to your issue. If a field does not apply, you can
enter n/a.
For help with any of the general fields in the service request or for information on managing your service requests,
click Help at the top of the Oracle Cloud Support page.

Locating Oracle Cloud Infrastructure IDs


Use the following tips to help you locate identifiers you might be asked to provide:
Finding Your Tenancy OCID (Oracle Cloud Identifier)
Get the tenancy OCID from the Oracle Cloud Infrastructure Console on the Tenancy Details page:
1. Open the Profile menu and click Tenancy: <your_tenancy_name>.
2. The tenancy OCID is shown under Tenancy Information. Click Copy to copy it to your clipboard.

Finding the OCID of a Compartment

To find the OCID (Oracle Cloud Identifier) of a compartment:


1. Open the navigation menu. Under Governance and Administration, go to Identity and click Compartments.
A list of the compartments in your tenancy is displayed.
A shortened version of the OCID is displayed next to each compartment.

2. Click Copy to copy the OCID to your clipboard. You can then paste it into the service request form field.
Finding the OCID of a Resource
The OCID (Oracle Cloud Identifier) of a resource is displayed when you view the resource in the Console, both in the
list view and on the details page.
For example, to get the OCID for a compute instance:
1. Open the Console.
2. Select the Compartment to which the instance belongs from the list on the left side of the page.
Note that you must have appropriate permissions in a compartment to view resources.
3. Open the navigation menu. Under Core Infrastructure, go to Compute and click Instances. A list of instances in
the selected compartment is displayed.
4. A shortened version of the OCID is displayed on the instance details page.
5. Click Copy to copy the OCID to your clipboard. You can then paste it into the service request form field.
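
The same identifiers can also be retrieved with the OCI CLI. For example, the following sketch lists the display names and full OCIDs of the instances in a compartment; the compartment OCID is a placeholder and the --query expression is optional.

# List instance display names and full OCIDs in a compartment (placeholder compartment OCID)
oci compute instance list \
  --compartment-id ocid1.compartment.oc2..example \
  --query 'data[].{name:"display-name", ocid:id}' \
  --output table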
Finding Your Customer Support Identifier (CSI)
The Customer Support Identifier (CSI) number is generated after you purchase Oracle Cloud services. This number
can be found in several places, including in your contract document and also on your tenancy details page. You’ll
need the CSI number to register and log support requests in My Oracle Support (MOS).
Note:

The CSI number is not available for OCI Government Cloud regions.
To find your CSI number:
1. Open the Profile menu and click Tenancy: <your_tenancy_name>.

2. The CSI number is shown under Tenancy Information.

Using My Oracle Support for the First Time


Before you can create service requests with My Oracle Support, you need to have an Oracle Single Sign On (SSO)
account and you need to register your Customer Support Identifier (CSI) with My Oracle Support.
Tip:

Before you begin this procedure, have your CSI handy (see Requesting a
Service Limit Increase for Government Cloud Tenancies on page 156).
To request an SSO account and register with My Oracle Support
1. Go to https://support.oracle.com.
2. Click New user? Register here to create your Oracle Single Sign On (SSO) account.
3. Enter your company e-mail address in the Email address field, complete the rest of the form, and then click
Create Account. A verification email is generated.
4. Check your email account for an email from Oracle asking you to verify your email address.
5. Open the email and click Verify Email Address.
6. Sign in with the credentials you just set up.
7. At sign in, you are prompted to enter a Note to the Approver and the Support Identifier (your CSI).
8. Click Request Access.
9. Enter the first five characters of the name of the organization that owns the Customer Support Identifier (listed in
the Welcome letter and on My Services), and then click Validate. The support identifier appears in the table.
10. Click Next.
11. Enter your contact information and click Next.
12. Accept the terms and click Next.
The status of the request is pending until you receive approval from the Customer User Administrator (CUA) or from
Oracle Support if you are the first person requesting this support identifier.
If you have previously registered, but need to add the CSI for Oracle Cloud Infrastructure
1. Go to https://support.oracle.com and log in.
2. Navigate to the My Account page: Go to your user name at the top of the page, open the menu, and then click My
Account.
3. The Support Identifiers region displays the accounts that your user name is currently associated with.
4. Click Request Access.
5. Enter a Note to the Approver and then enter the Support Identifier (your CSI).
6. Click Request Access.

7. Enter the first five characters of the name of the organization that owns the Customer Support Identifier (listed in
the Welcome letter and on My Services), and then click Validate. The support identifier appears in the table.
8. Click Validate.
9. The entry is validated. Close the dialog.
The status of the request is pending until you receive approval from the Customer User Administrator (CUA).
For more information about signing in and using My Oracle Support, see Registration, Sign In, and Accessibility
Options in My Oracle Support Help.

Oracle Cloud Infrastructure US Government Cloud with FedRAMP Authorization


This topic contains information specific to Oracle Cloud Infrastructure US Government Cloud with FedRAMP High
Joint Authorization Board.

Authorizations
Oracle Cloud Infrastructure US Government Cloud has obtained the following authorizations:
• FedRAMP High
• DISA Impact Level 4
For information about the US Government Cloud, see For All US Government Cloud Customers on page 150.

Regions
The region names and identifiers for the US Government Cloud with FedRAMP High Joint Authorization Board are
shown in the following table:

Region Name Region Identifier Region Location Region Key Realm Key Availability Domains
US Gov East (Ashburn) us-langley-1 Ashburn, VA LFI OC2 1
US Gov West (Phoenix) us-luke-1 Phoenix, AZ LUF OC2 1

After your tenancy is created in one of these regions, you can subscribe to the other region. Tenancies in the
FedRAMP-authorized regions cannot subscribe to the commercial regions, or to the US Federal Cloud regions. For
information about subscribing to a region, see Managing Regions on page 2464.

Console Sign-in URLs


To sign in to the FedRAMP-authorized US Government Cloud, enter one of the following URLs in a supported
browser:
• https://console.us-langley-1.oraclegovcloud.com/
• https://console.us-luke-1.oraclegovcloud.com/
Note:

When you're logged in to the Console for one of the US Government Cloud
regions, the browser times out after 15 minutes of inactivity, and you need to
sign in again to use the Console.

US Government Cloud with FedRAMP Authorization API Reference and Endpoints


US Government Cloud with FedRAMP High Joint Authorization Board has these APIs and corresponding regional
endpoints:

Announcements API
API reference
• https://announcements.us-langley-1.oraclegovcloud.com
• https://announcements.us-luke-1.oraclegovcloud.com
API Gateway API
API reference
• https://apigateway.us-langley-1.oci.oraclegovcloud.com
• https://apigateway.us-luke-1.oci.oraclegovcloud.com
Autoscaling API
API reference
• https://autoscaling.us-langley-1.oci.oraclegovcloud.com
• https://autoscaling.us-luke-1.oci.oraclegovcloud.com
Core Services (covering Networking, Compute, and Block Volume)
The Networking, Compute, and Block Volume services are accessible with the following API:
Core Services API
API reference
• https://iaas.us-langley-1.oraclegovcloud.com
• https://iaas.us-luke-1.oraclegovcloud.com
Container Engine for Kubernetes API
API reference
• https://containerengine.us-langley-1.oci.oraclegovcloud.com
• https://containerengine.us-luke-1.oci.oraclegovcloud.com
Database API
API reference
• https://database.us-langley-1.oraclegovcloud.com
• https://database.us-luke-1.oraclegovcloud.com
You can track the progress of long-running Database operations with the Work Requests API.
Digital Assistant API
API reference
• https://digitalassistant.us-langley-1.oci.oraclegovcloud.com
• https://digitalassistant.us-luke-1.oci.oraclegovcloud.com
Email Delivery API
API reference
• https://ctrl.email.us-langley-1.oci.oraclegovcloud.com
• https://ctrl.email.us-luke-1.oci.oraclegovcloud.com
Events API
API reference
• https://events.us-langley-1.oci.oraclegovcloud.com
• https://events.us-luke-1.oci.oraclegovcloud.com

File Storage API


API reference
• https://filestorage.us-langley-1.oraclegovcloud.com
• https://filestorage.us-luke-1.oraclegovcloud.com
Functions API
API reference
• https://functions.us-langley-1.oci.oraclegovcloud.com
• https://functions.us-luke-1.oci.oraclegovcloud.com
IAM API
API reference
• https://identity.us-langley-1.oraclegovcloud.com
• https://identity.us-luke-1.oraclegovcloud.com
Note:

Use the Endpoint of Your Home Region for All IAM API Calls
When you sign up for Oracle Cloud Infrastructure, Oracle creates a tenancy
for you in one region. This is your home region. Your home region is where
your IAM resources are defined. When you subscribe to a new region,
your IAM resources are replicated in the new region, however, the master
definitions reside in your home region and can only be changed there.
Make all IAM API calls against your home region endpoint. The changes
automatically replicate to all regions. If you try to make an IAM API call
against a region that is not your home region, you will receive an error.
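
When calling these endpoints from the OCI CLI or an SDK, point the client at the Government Cloud regional endpoint explicitly (or configure the matching region). The following sketch uses the CLI's generic --endpoint override against the IAM endpoint listed above; it assumes your CLI configuration already holds valid credentials for this tenancy.

# Call the IAM API through the home region's Government Cloud endpoint
oci iam region list --endpoint https://identity.us-langley-1.oraclegovcloud.com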
Key Management API (for the Vault service)
API reference
• https://kms.us-langley-1.oraclegovcloud.com
• https://kms.us-luke-1.oraclegovcloud.com
In addition to these endpoints, each vault has a unique endpoint for create, update, and list operations for keys. This
endpoint is referred to as the control plane URL or management endpoint. Each vault also has a unique endpoint for
cryptographic operations. This endpoint is known as the data plane URL or the cryptographic endpoint.
Events API
API reference
• https://events.us-langley-1.oraclegovcloud.com
• https://events.us-luke-1.oraclegovcloud.com
Marketplace Service API
API reference
• https://marketplace.us-langley-1.oci.oraclegovcloud.com
• https://marketplace.us-luke-1.oci.oraclegovcloud.com
Monitoring API
API reference
• https://telemetry-ingestion.us-langley-1.oraclegovcloud.com
• https://telemetry-ingestion.us-luke-1.oraclegovcloud.com
• https://telemetry.us-langley-1.oraclegovcloud.com
• https://telemetry.us-luke-1.oraclegovcloud.com

Notifications API
API reference
• https://notification.us-langley-1.oraclegovcloud.com
• https://notification.us-luke-1.oraclegovcloud.com
The source service must be available in US Government Cloud regions for messages to be successfully sent through
the Notifications service. If the source service is not available in these regions, then the message is not sent. For a list
of unavailable services, see Services Not Supported in US Government Cloud with FedRAMP Authorization on page
165.
Object Storage and Archive Storage APIs
Both Object Storage and Archive Storage are accessible with the following APIs:
Object Storage API
API reference
• https://objectstorage.us-langley-1.oraclegovcloud.com
• https://objectstorage.us-luke-1.oraclegovcloud.com
Amazon S3 Compatibility API
API reference
• https://<object_storage_namespace>.compat.objectstorage.us-langley-1.oraclegovcloud.com
• https://<object_storage_namespace>.compat.objectstorage.us-luke-1.oraclegovcloud.com
Tip:
See Understanding Object Storage Namespaces on page 3423 for information regarding how to find your Object Storage namespace.
Swift API (for use with Oracle RMAN)
• https://swiftobjectstorage.us-langley-1.oraclegovcloud.com
• https://swiftobjectstorage.us-luke-1.oraclegovcloud.com
Oracle Cloud VMware Solution API
API reference
• https://ocvps.us-langley-1.oci.oraclegovcloud.com
• https://ocvps.us-luke-1.oci.oraclegovcloud.com
Registry
Registry
• US Gov East (Ashburn)
• ocir.us-langley-1.oci.oraclegovcloud.com
• US Gov West (Phoenix)
• ocir.us-luke-1.oci.oraclegovcloud.com
Resource Manager API
API reference
• https://resourcemanager.us-langley-1.oci.oraclegovcloud.com
• https://resourcemanager.us-luke-1.oci.oraclegovcloud.com
Streaming API
API reference

• https://streaming.us-langley-1.oraclegovcloud.com
• https://streaming.us-luke-1.oraclegovcloud.com
Vault Service Key Management API
API reference
• https://kms.us-langley-1.oraclegovcloud.com
• https://kms.us-luke-1.oraclegovcloud.com
Vault Service Secret Management API
API reference
• https://vaults.us-langley-1.oraclegovcloud.com
• https://vaults.us-luke-1.oraclegovcloud.com
Vault Service Secret Retrieval API
API reference
• https://secrets.us-langley-1.oraclegovcloud.com
• https://secrets.us-luke-1.oraclegovcloud.com
Work Requests API (for Compute and Database work requests)
API reference
• https://iaas.us-langley-1.oraclegovcloud.com
• https://iaas.us-luke-1.oraclegovcloud.com

Oracle YUM Repo Endpoints


The Oracle YUM repo regional endpoints for US Government Cloud with FedRAMP High Joint Authorization Board are shown in the following table:

Region YUM Server Endpoint
US Gov East (Ashburn):
• https://yum.us-langley-1.oci.oraclegovcloud.com
• https://yum-us-langley-1.oracle.com
US Gov West (Phoenix):
• https://yum.us-luke-1.oci.oraclegovcloud.com
• https://yum-us-luke-1.oracle.com

SMTP Authentication and Connection Endpoints


Email Delivery only supports the AUTH PLAIN command when using SMTP authentication. If the sending
application is not flexible with the AUTH command, an SMTP proxy/relay can be used. For more information about
the AUTH command, see AUTH Command and its Mechanisms.

Region SMTP Connection Endpoint


US Gov East (Ashburn) smtp.email.us-langley-1.oci.oraclegovcloud.com
US Gov West (Phoenix) smtp.email.us-luke-1.oci.oraclegovcloud.com
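
To check basic reachability and STARTTLS support of an SMTP connection endpoint from a client host, a tool such as openssl s_client can be used. The port shown (587) is the usual SMTP submission port and is an assumption here; confirm the port documented for Email Delivery before relying on it.

# Open a STARTTLS SMTP session to the US Gov East endpoint (port 587 assumed)
openssl s_client -starttls smtp -crlf -connect smtp.email.us-langley-1.oci.oraclegovcloud.com:587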

SPF Record Syntax


An SPF record is a TXT record on your sending domain that authorizes Email Delivery IP addresses to send on your
behalf. SPF is required for subdomains of oraclegovcloud.com and recommended in other cases. The SPF
record syntax for each sending region is shown in the following table:

Realm Key SPF Record
OC2 v=spf1 include:rp.email.oci.oraclegovcloud.com ~all

The Realm Key is applicable for any sending regions in that realm.
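
After publishing the TXT record on your sending domain, you can confirm that it is visible in DNS with a lookup tool such as dig; mail.example.gov below is a placeholder for your approved sending domain.

# Verify the published SPF TXT record for a sending domain (placeholder domain)
dig +short TXT mail.example.gov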

Services Not Supported in US Government Cloud with FedRAMP Authorization


The following services are currently not available or not supported for tenancies in the US Government Cloud with
FedRAMP High Joint Authorization Board.
Database services not available:
• Autonomous Data Warehouse
• Autonomous Transaction Processing
• Data Safe
Solutions and Platform services not available:
• Analytics Cloud
• Fusion Analytics Warehouse
• Application Migration
• Compliance Documents
• Content and Experience
• DNS Zone Management
• Health Checks
• Integration
• Traffic Management Steering Policies
Governance and Administration features not supported:
• Auto-federation with Oracle Identity Cloud Service
• WAF service
Integration with Oracle SaaS and PaaS services, including those listed here: Getting Started with Oracle Platform
Services on page 120

Additional Information for US Government Cloud with FedRAMP Authorization Customers
• Shared Responsibilities on page 150
• Setting Up an Identity Provider for Your Tenancy on page 150
• Using a Common Access Card/Personal Identity Verification Card to Sign in to the Console on page 151
• IPv6 Support for Virtual Cloud Networks on page 151
• Setting Up Secure Access for Compute Hosts on page 151
• Enabling FIPS Mode for Your Operating System on page 151
• Required VPN Connect Parameters for Government Cloud on page 154
• Oracle's BGP ASN on page 156
• Requesting a Service Limit Increase for Government Cloud Tenancies on page 156

Oracle Cloud Infrastructure US Federal Cloud with DISA Impact Level 5 Authorization
This topic contains information specific to Oracle Cloud Infrastructure US Federal Cloud with DISA Impact Level 5
authorization.
Note:

DISA Impact Level 5 authorization is in process for the following services:

• API Gateway
• Cloud Shell
• Data Transfer service
• Email Delivery service
• Functions
• Oracle Cloud VMware Solution
• Vault
FedRAMP High Joint Authorization Board accreditation for these services is
complete.

Compliance with Defense Cloud Security Requirements


US Federal Cloud with DISA Impact Level 5 authorization supports applications that require Impact Level 5 (IL5)
data, as defined in the Department of Defense Cloud Computing Security Requirements Guide (SRG).

US Federal Cloud with DISA Impact Level 5 Authorization Regions


The region names and identifiers for the US Federal Cloud with DISA Impact Level 5 authorization regions are
shown in the following table:

Region Name Region Identifier Region Key Realm Key Availability Domains
US DoD East (Ashburn) us-gov-ashburn-1 ric OC3 1
US DoD North (Chicago) us-gov-chicago-1 pia OC3 1
US DoD West (Phoenix) us-gov-phoenix-1 tus OC3 1

After your tenancy is created in one of the US Federal Cloud with DISA Impact Level 5 authorization regions, you
can subscribe to the other regions in the US Federal Cloud with DISA Impact Level 5 authorization. These tenancies
cannot subscribe to any Oracle Cloud Infrastructure regions not belonging to the OC3 realm. For information about
subscribing to a region, see Managing Regions on page 2464.

US Federal Cloud with DISA Impact Level 5 Authorization Console Sign-in URLs
To sign in to the US Federal Cloud with DISA Impact Level 5 authorization, enter one of the following URLs in a
supported browser:
• https://console.us-gov-ashburn-1.oraclegovcloud.com/
• https://console.us-gov-chicago-1.oraclegovcloud.com/
• https://console.us-gov-phoenix-1.oraclegovcloud.com/
Note:

When you're logged in to the Console for one of the US Federal Government
Cloud regions, the browser times out after 15 minutes of inactivity, and you
need to sign in again to use the Console.

US Federal Cloud with DISA Impact Level 5 Authorization API Reference and
Endpoints
This section includes the APIs and corresponding regional endpoints with US Federal Cloud DISA Impact Level 5
authorization.

Announcements API
API reference
• https://announcements.us-gov-ashburn-1.oraclegovcloud.com
• https://announcements.us-gov-chicago-1.oraclegovcloud.com
• https://announcements.us-gov-phoenix-1.oraclegovcloud.com
API Gateway API
API reference
• https://apigateway.us-gov-ashburn-1.oci.oraclegovcloud.com
• https://apigateway.us-gov-chicago-1.oci.oraclegovcloud.com
• https://apigateway.us-gov-phoenix-1.oci.oraclegovcloud.com
Autoscaling API
API reference
• https://autoscaling.us-gov-ashburn-1.oci.oraclegovcloud.com
• https://autoscaling.us-gov-chicago-1.oci.oraclegovcloud.com
• https://autoscaling.us-gov-phoenix-1.oci.oraclegovcloud.com
Core Services (covering Networking, Compute, and Block Volume)
The Networking, Compute, and Block Volume services are accessible with the following API:
Core Services API
API reference
• https://iaas.us-gov-ashburn-1.oraclegovcloud.com
• https://iaas.us-gov-chicago-1.oraclegovcloud.com
• https://iaas.us-gov-phoenix-1.oraclegovcloud.com
Container Engine for Kubernetes API
API reference
• https://containerengine.us-gov-ashburn-1.oci.oraclegovcloud.com
• https://containerengine.us-gov-chicago-1.oci.oraclegovcloud.com
• https://containerengine.us-gov-phoenix-1.oci.oraclegovcloud.com
Database API
API reference
• https://database.us-gov-ashburn-1.oraclegovcloud.com
• https://database.us-gov-chicago-1.oraclegovcloud.com
• https://database.us-gov-phoenix-1.oraclegovcloud.com
You can track the progress of long-running Database operations with the Work Requests API.
Digital Assistant API
API reference
• https://digitalassistant.us-gov-ashburn-1.oci.oraclegovcloud.com
• https://digitalassistant.us-gov-chicago-1.oci.oraclegovcloud.com
• https://digitalassistant.us-gov-phoenix-1.oci.oraclegovcloud.com
Email Delivery API
API reference
• https://ctrl.email.us-gov-ashburn-1.oci.oraclegovcloud.com

• https://ctrl.email.us-gov-chicago-1.oci.oraclegovcloud.com
• https://ctrl.email.us-gov-phoenix-1.oci.oraclegovcloud.com
Events API
API reference
• https://events.us-gov-ashburn-1.oci.oraclegovcloud.com
• https://events.us-gov-chicago-1.oci.oraclegovcloud.com
• https://events.us-gov-phoenix-1.oci.oraclegovcloud.com
File Storage API
API reference
• https://filestorage.us-gov-ashburn-1.oraclegovcloud.com
• https://filestorage.us-gov-chicago-1.oraclegovcloud.com
• https://filestorage.us-gov-phoenix-1.oraclegovcloud.com
Functions API
API reference
• https://functions.us-gov-ashburn-1.oci.oraclegovcloud.com
• https://functions.us-gov-chicago-1.oci.oraclegovcloud.com
• https://functions.us-gov-phoenix-1.oci.oraclegovcloud.com
IAM API
API reference
• https://identity.us-gov-ashburn-1.oraclegovcloud.com
• https://identity.us-gov-chicago-1.oraclegovcloud.com
• https://identity.us-gov-phoenix-1.oraclegovcloud.com
Note:

Use the Endpoint of Your Home Region for All IAM API Calls
When you sign up for Oracle Cloud Infrastructure, Oracle creates a tenancy
for you in one region. This is your home region. Your home region is where
your IAM resources are defined. When you subscribe to a new region,
your IAM resources are replicated in the new region, however, the master
definitions reside in your home region and can only be changed there.
Make all IAM API calls against your home region endpoint. The changes
automatically replicate to all regions. If you try to make an IAM API call
against a region that is not your home region, you will receive an error.
Key Management API (for the Vault service)
API reference
• https://kms.us-gov-ashburn-1.oraclegovcloud.com
• https://kms.us-gov-chicago-1.oraclegovcloud.com
• https://kms.us-gov-phoenix-1.oraclegovcloud.com
In addition to these endpoints, each vault has a unique endpoint for create, update, and list operations for keys. This
endpoint is referred to as the control plane URL or management endpoint. Each vault also has a unique endpoint for
cryptographic operations. This endpoint is known as the data plane URL or the cryptographic endpoint.
Marketplace Service API
API reference
• https://marketplace.us-gov-ashburn-1.oci.oraclegovcloud.com

• https://marketplace.us-gov-chicago-1.oci.oraclegovcloud.com
• https://marketplace.us-gov-phoenix-1.oci.oraclegovcloud.com
Monitoring API
API reference
• https://telemetry-ingestion.us-gov-ashburn-1.oraclegovcloud.com
• https://telemetry-ingestion.us-gov-chicago-1.oraclegovcloud.com
• https://telemetry-ingestion.us-gov-phoenix-1.oraclegovcloud.com
• https://telemetry.us-gov-ashburn-1.oraclegovcloud.com
• https://telemetry.us-gov-chicago-1.oraclegovcloud.com
• https://telemetry.us-gov-phoenix-1.oraclegovcloud.com
Notifications API
API reference
• https://notification.us-gov-ashburn-1.oraclegovcloud.com
• https://notification.us-gov-chicago-1.oraclegovcloud.com
• https://notification.us-gov-phoenix-1.oraclegovcloud.com
The source service must be available in US Government Cloud regions for messages to be successfully sent through
the Notifications service. If the source service is not available in these regions, then the message is not sent. For a list
of unavailable services, see Services Not Supported in US Federal Cloud with DISA Impact Level 5 Authorization on
page 171.
Object Storage and Archive Storage APIs
Both Object Storage and Archive Storage are accessible with the following APIs:
Object Storage API
API reference
• https://objectstorage.us-gov-ashburn-1.oraclegovcloud.com
• https://objectstorage.us-gov-chicago-1.oraclegovcloud.com
• https://objectstorage.us-gov-phoenix-1.oraclegovcloud.com
Amazon S3 Compatibility API
API reference
• https://<object_storage_namespace>.compat.objectstorage.us-gov-ashburn-1.oraclegovcloud.com
• https://<object_storage_namespace>.compat.objectstorage.us-gov-chicago-1.oraclegovcloud.com
• https://<object_storage_namespace>.compat.objectstorage.us-gov-phoenix-1.oraclegovcloud.com
Tip:
See Understanding Object Storage Namespaces on page 3423 for information regarding how to find your Object Storage namespace.
Swift API (for use with Oracle RMAN)
• https://swiftobjectstorage.us-gov-ashburn-1.oraclegovcloud.com
• https://swiftobjectstorage.us-gov-chicago-1.oraclegovcloud.com
• https://swiftobjectstorage.us-gov-phoenix-1.oraclegovcloud.com
Oracle Cloud VMware Solution API
API reference
• https://ocvps.us-gov-ashburn-1.oci.oraclegovcloud.com
• https://ocvps.us-gov-chicago-1.oci.oraclegovcloud.com
• https://ocvps.us-gov-phoenix-1.oci.oraclegovcloud.com

Registry
Registry
• US DoD East (Ashburn)
• ocir.us-gov-ashburn-1.oci.oraclegovcloud.com
• US DoD North (Chicago)
• ocir.us-gov-chicago-1.oci.oraclegovcloud.com
• US DoD West (Phoenix)
• ocir.us-gov-phoenix-1.oci.oraclegovcloud.com
Resource Manager API
API reference
• https://resourcemanager.us-gov-ashburn-1.oci.oraclegovcloud.com
• https://resourcemanager.us-gov-chicago-1.oci.oraclegovcloud.com
• https://resourcemanager.us-gov-phoenix-1.oci.oraclegovcloud.com
Streaming API
API reference
• https://streaming.us-gov-ashburn-1.oraclegovcloud.com
• https://streaming.us-gov-chicago-1.oraclegovcloud.com
• https://streaming.us-gov-phoenix-1.oraclegovcloud.com
Vault Service Key Management API
API reference
• https://kms.us-gov-ashburn-1.oraclegovcloud.com
• https://kms.us-gov-chicago-1.oraclegovcloud.com
• https://kms.us-gov-phoenix-1.oraclegovcloud.com
Vault Service Secret Management API
API reference
• https://vaults.us-gov-ashburn-1.oraclegovcloud.com
• https://vaults.us-gov-chicago-1.oraclegovcloud.com
• https://vaults.us-gov-phoenix-1.oraclegovcloud.com
Vault Service Secret Retrieval API
API reference
• https://secrets.us-gov-ashburn-1.oraclegovcloud.com
• https://secrets.us-gov-chicago-1.oraclegovcloud.com
• https://secrets.us-gov-phoenix-1.oraclegovcloud.com
Work Requests API (for Compute and Database work requests)
API reference
• https://iaas.us-gov-ashburn-1.oraclegovcloud.com
• https://iaas.us-gov-chicago-1.oraclegovcloud.com
• https://iaas.us-gov-phoenix-1.oraclegovcloud.com

Oracle YUM Repo Endpoints


The Oracle YUM repo regional endpoints for US Federal Cloud with DISA Impact Level 5 authorization are shown in the following table:

Region                     YUM Server Endpoint

US DoD East (Ashburn)      • https://yum.us-gov-ashburn-1.oci.oraclegovcloud.com
                           • https://yum-us-gov-ashburn-1.oracle.com

US DoD North (Chicago)     • https://yum.us-gov-chicago-1.oci.oraclegovcloud.com
                           • https://yum-us-gov-chicago-1.oracle.com

US DoD West (Phoenix)      • https://yum.us-gov-phoenix-1.oci.oraclegovcloud.com
                           • https://yum-us-gov-phoenix-1.oracle.com

SMTP Authentication and Connection Endpoints


Email Delivery supports only the AUTH PLAIN command for SMTP authentication. If the sending application cannot use the AUTH PLAIN command, you can route mail through an SMTP proxy or relay. For more information about the AUTH command, see AUTH Command and its Mechanisms.

Region SMTP Connection Endpoint


US DoD East (Ashburn) smtp.email.us-gov-ashburn-1.oci.oraclegovcloud.com
US DoD North (Chicago) smtp.email.us-gov-chicago-1.oci.oraclegovcloud.com
US DoD West (Phoenix) smtp.email.us-gov-phoenix-1.oci.oraclegovcloud.com
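As an illustration of the AUTH PLAIN constraint, the following minimal sketch uses Python's standard smtplib against the Ashburn endpoint from the table above. The credentials, sender, recipient, and the use of port 587 with STARTTLS are assumptions for the example; substitute the values issued for your tenancy.

import smtplib
import ssl

host = "smtp.email.us-gov-ashburn-1.oci.oraclegovcloud.com"
user = "<smtp_credential_username>"      # placeholder (assumption)
password = "<smtp_credential_password>"  # placeholder (assumption)

with smtplib.SMTP(host, 587) as server:  # port 587 with STARTTLS is an assumption
    server.starttls(context=ssl.create_default_context())
    server.login(user, password)         # smtplib negotiates AUTH PLAIN when the server offers it
    server.sendmail(
        "approved.sender@example.com",   # placeholder approved sender
        ["recipient@example.com"],
        "Subject: Test\r\n\r\nSent through Email Delivery.",
    )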

SPF Record Syntax


An SPF record is a TXT record on your sending domain that authorizes Email Delivery IP addresses to send on your
behalf. SPF is required for subdomains of oraclegovcloud.com and recommended in other cases. The SPF
record syntax for each sending region is shown in the following table:

Realm Key    SPF Record

OC3          v=spf1 include:rp.email.oci.oraclegovcloud.com ~all

The Realm Key is applicable for any sending regions in that realm.
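For example, a sending domain in this realm might publish a TXT record like the following; the domain name is a placeholder, and the record value is the OC3 entry from the table above.

<your_sending_domain>.    IN    TXT    "v=spf1 include:rp.email.oci.oraclegovcloud.com ~all"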

Services Not Supported in US Federal Cloud with DISA Impact Level 5 Authorization
Currently, the following services are not available or not supported for tenancies in the US Federal Cloud with DISA
Impact Level 5 authorization.
Core Infrastructure services and features not available:
• FastConnect with a provider (FastConnect in a colocation model is supported)
• Data Transfer service
Database services not available:
• Autonomous Data Warehouse
• Autonomous Transaction Processing
• Data Safe
Solutions and Platform services not available:

• Analytics Cloud
• Fusion Analytics Warehouse
• Application Migration
• Compliance Documents
• Content and Experience
• DNS Zone Management
• Email Delivery
• Health Checks
• Integration
• Traffic Management Steering Policies
Governance and Administration features not supported:
• Auto-federation with Oracle Identity Cloud Service
• WAF service
Integration with Oracle SaaS and PaaS services, including those listed here: Getting Started with Oracle Platform
Services on page 120.

Access to Multiple US Federal Cloud with DISA Impact Level 5 Authorization Regions
This section shows how to give the on-premises resources that are part of NIPRNet access to multiple US Federal
Cloud regions over a single FastConnect connection. This is important if one of the regions does not have a direct
connection to the NIPRNet's border cloud access point (BCAP). The BCAP is also referred to as the meet me point.

Overview
Some US Federal Cloud regions have a direct connection to a NIPRNet BCAP, but others do not. You can use the
Networking service to give on-premises resources that are part of NIPRNet access to a US Federal Cloud region that
is not directly connected to the NIPRNet's BCAP. You might do this to extend your on-premises workloads into a
particular US Federal Cloud region that you're interested in, or to use that region for disaster recovery (DR).
This scenario is illustrated in the following diagram.

In the diagram, US Federal Government Cloud region 1 has a direct connection to the NIPRNet's BCAP, but
US Federal Government Cloud region 2 does not. Imagine that on-premises resources in NIPRNet (in subnet
172.16.1.0/24) need access to your virtual cloud network (VCN) in region 2 (with CIDR 10.0.3.0/24).

Optionally, there could also be a VCN with cloud resources in region 1 (with CIDR 10.0.1.0/24), but a VCN in
region 1 is not required for this scenario. The intent of this scenario is for the on-premises resources to get access to
resources in region 2.
In general, you set up two types of connections:
• FastConnect between the NIPRNet BCAP and region 1.
• Remote peering connection between region 1 and region 2.
Here are some details about the connections:
• The FastConnect connection has at least one physical connection (a cross-connect). You set up a private virtual circuit that runs on the FastConnect connection. The private virtual circuit enables communication that uses private IP addresses between the on-premises resources and the cloud resources.
• The remote peering connection is between a dynamic routing gateway (DRG) in region 1, and a DRG in region
2. A DRG is a virtual router that you typically attach to a VCN to give that VCN access to resources outside its
Oracle region.
• You can control which on-premises subnets are advertised to the VCNs by configuring your BCAP edge router
accordingly.
• The subnets in both VCN-1 and VCN-2 are advertised to your BCAP edge router over the FastConnect
connection.
• You can optionally configure VCN security rules and other firewalls that you maintain to allow only certain types
of traffic (such as SSH or SQL*NET) between the on-premises resources and VCNs.
Here are some basic requirements:
• The VCNs and DRGs in region 1 and region 2 must belong to the same tenancy, but they can be in different
compartments within the tenancy.
• For accurate routing, the CIDR blocks of the on-premises subnets of interest and the VCNs must not overlap.
• To enable traffic to flow from a VCN to the on-premises subnets of interest, you must add a route rule to the VCN subnet route tables for each of the on-premises subnets. The preceding diagram shows the route rule for 172.16.1.0/24 in each VCN's route table. A sketch of adding such a rule programmatically follows this list.
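The following is a minimal sketch of adding such a route rule with the OCI Python SDK, assuming you already have the OCIDs of the subnet's route table and of the DRG; the placeholder OCIDs and the use of the default configuration file are assumptions.

import oci

config = oci.config.from_file()                      # default ~/.oci/config profile
vcn_client = oci.core.VirtualNetworkClient(config)

route_table_id = "<route_table_ocid>"                # placeholder (assumption)
drg_id = "<drg_ocid>"                                # placeholder (assumption)

# Keep the existing rules and append one for the on-premises subnet of interest.
route_table = vcn_client.get_route_table(route_table_id).data
rules = list(route_table.route_rules)
rules.append(oci.core.models.RouteRule(
    destination="172.16.1.0/24",
    destination_type="CIDR_BLOCK",
    network_entity_id=drg_id))

vcn_client.update_route_table(
    route_table_id,
    oci.core.models.UpdateRouteTableDetails(route_rules=rules))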

General Setup Process


Task 1: Set up FastConnect to region 1
Summary: In this task, you set up the FastConnect between the NIPRNet BCAP and region 1. FastConnect has three
connectivity models, and you generally follow the colocate with Oracle model. In this case, colocation occurs in the
BCAP (the meet me point). The connection consists of both a physical connection (at least one cross-connect) and
logical connection (private virtual circuit).
For instructions, follow the flow chart and tasks listed in Getting Started with FastConnect on page 3239, and notice
these specific variations:
• In task 2, the instructions assume that you have a VCN (in region 1), but it is optional.
• In task 8, create a private virtual circuit (not a public one).
Task 2: Set up a VCN and DRG in region 2
Summary: If you don't yet have a VCN in region 2 (VCN-2 in the preceding diagram), you set it up in this task. You
also create a DRG in region 2 and attach it to the VCN. Then, for each VCN-2 subnet that needs to communicate with
the on-premises network, you update that subnet's route table to include a route rule for the on-premises subnet of
interest. If there are multiple on-premises subnets that you want to route to, set up a route rule for each one.
For instructions, see these procedures:
1. To create a VCN on page 2851
2. To create a DRG on page 2955
3. To attach a DRG to a VCN on page 2956

4. To route a subnet's traffic to a DRG on page 2956


Important:

In step 4 in the preceding list, add a route rule with the following settings:
• Destination CIDR = the on-premises subnet of interest
• Target = the VCN's DRG
In the preceding diagram, it's the rule with 172.16.1.0/24 as the destination
CIDR, and target as DRG-2. The second rule in the diagram (for
10.0.1.0/24 and DRG-2) is necessary only if resources in VCN-2 need to
communicate with resources in VCN-1.
Task 3: Set up remote peering between region 1 and region 2
Summary: In this task, you set up a remote peering to enable private traffic between DRG-1 and DRG-2. The term
remote peering typically means that resources in one VCN can communicate privately with resources in a VCN in
a different region. In this case, the remote peering also enables private communication between the on-premises
network and VCN-2.
For instructions, see Setting Up a Remote Peering on page 3312, and notice these important details:
• Optional region 1 VCN: The instructions assume that each region has a VCN, but in this situation, it is optional
for region 1.
• Single VCN administrator: The instructions assume that there are two different VCN administrators: one for
the VCN in region 1 and another for the VCN in region 2. In this situation, there might be only a single VCN
administrator (you) who handles both regions and configures the remote peering connection.
• Unnecessary IAM policies: The instructions include a task for each VCN administrator to set up particular IAM
policies to enable the remote peering connection. One policy is for the VCN administrator who is designated
as the requestor, and one is for the VCN administrator who is designated as the acceptor. Those terms are
further defined in Important Remote Peering Concepts on page 3310. However, if there's only a single VCN
administrator with comprehensive networking permissions across the tenancy, those IAM policies are not
necessary. For more information, read the tip that appears at the end of the task.
• RPC anchor points and connection: The remote peering actually consists of multiple components that you
must set up. There's an anchor point on each DRG (shown as RPC-1 and RPC-2 in the preceding diagram), plus a
connection between those two RPC anchor points. The instructions include steps for creating those RPCs and the
connection between them. Ensure that you create all the components.

Additional Information for US Federal Cloud with DISA Impact Level 5 Authorization Customers
• Shared Responsibilities on page 150
• Setting Up an Identity Provider for Your Tenancy on page 150
• Using a Common Access Card/Personal Identity Verification Card to Sign in to the Console on page 151
• IPv6 Support for Virtual Cloud Networks on page 151
• Setting Up Secure Access for Compute Hosts on page 151
• Required VPN Connect Parameters for Government Cloud on page 154
• Oracle's BGP ASN on page 156
• Requesting a Service Limit Increase for Government Cloud Tenancies on page 156

Oracle Cloud Infrastructure United Kingdom Government Cloud


This topic contains information specific to Oracle Cloud Infrastructure United Kingdom Government Cloud.

Regions
The region names and identifiers for the United Kingdom Government Cloud are shown in the following table:

Region Name              Region Identifier    Region Location            Region Key   Realm Key   Availability Domains

UK Gov South (London)    uk-gov-london-1      London, United Kingdom     LTN          OC4         1
UK Gov West (Newport)    uk-gov-cardiff-1     Newport, United Kingdom    BRS          OC4         1

Oracle's BGP ASN


This section is for network engineers who configure an edge device for FastConnect or VPN Connect.
Oracle's BGP autonomous system number (ASN) for the UK Government Cloud is 1218.

Console Sign-in URLs


To sign in to the United Kingdom Government Cloud, enter one of the following URLs in a supported browser:
• https://console.uk-gov-london-1.oraclegovcloud.uk/
• https://console.uk-gov-cardiff-1.oraclegovcloud.uk/

API Reference and Endpoints


Oracle Cloud Infrastructure United Kingdom Government Cloud has these APIs and corresponding regional
endpoints:

API Gateway API


API reference
• https://apigateway.uk-gov-london-1.oci.oraclegovcloud.uk
• https://apigateway.uk-gov-cardiff-1.oci.oraclegovcloud.uk

Analytics API
API reference
• https://analytics.uk-gov-london-1.oci.oraclegovcloud.uk
• https://analytics.uk-gov-cardiff-1.oci.oraclegovcloud.uk

Core Services (covering Networking, Compute, and Block Volume)


The Networking, Compute, and Block Volume services are accessible with the following API:
Core Services API
API reference
• https://iaas.uk-gov-london-1.oraclegovcloud.uk
• https://iaas.uk-gov-cardiff-1.oraclegovcloud.uk

Container Engine for Kubernetes API


API reference
• https://containerengine.uk-gov-london-1.oci.oraclegovcloud.uk
• https://containerengine.uk-gov-cardiff-1.oci.oraclegovcloud.uk

Database API
API reference
• https://database.uk-gov-london-1.oraclegovcloud.uk
• https://database.uk-gov-cardiff-1.oraclegovcloud.uk
You can track the progress of long-running Database operations with the Work Requests API.

Digital Assistant Service Instance API


API reference
• https://digitalassistant-api.uk-gov-london-1.oci.oraclegovcloud.uk
• https://digitalassistant-api.uk-gov-cardiff-1.oci.oraclegovcloud.uk

Email Delivery API


API reference
• https://ctrl.email.uk-gov-london-1.oci.oraclegovcloud.uk
• https://ctrl.email.uk-gov-cardiff-1.oci.oraclegovcloud.uk

Events API
API reference
• https://events.uk-gov-london-1.oci.oraclegovcloud.uk
• https://events.uk-gov-cardiff-1.oci.oraclegovcloud.uk

File Storage API


API reference
• https://filestorage.uk-gov-london-1.oraclegovcloud.uk
• https://filestorage.uk-gov-cardiff-1.oraclegovcloud.uk

Functions Service API


API reference
• https://functions.uk-gov-london-1.oci.oraclegovcloud.uk
• https://functions.uk-gov-cardiff-1.oci.oraclegovcloud.uk

IAM API
API reference
• https://identity.uk-gov-london-1.oraclegovcloud.uk
• https://identity.uk-gov-cardiff-1.oraclegovcloud.uk
Note:

Use the Endpoint of Your Home Region for All IAM API Calls
When you sign up for Oracle Cloud Infrastructure, Oracle creates a tenancy
for you in one region. This is your home region. Your home region is where
your IAM resources are defined. When you subscribe to a new region,
your IAM resources are replicated in the new region; however, the master
definitions reside in your home region and can only be changed there.
Make all IAM API calls against your home region endpoint. The changes
automatically replicate to all regions. If you try to make an IAM API call
against a region that is not your home region, you will receive an error.

Key Management API (for the Vault service)


API reference
• https://kms.uk-gov-london-1.oraclegovcloud.uk
• https://kms.uk-gov-cardiff-1.oraclegovcloud.uk
In addition to these endpoints, each vault has a unique endpoint for create, update, and list operations for keys. This
endpoint is referred to as the control plane URL or management endpoint. Each vault also has a unique endpoint for
cryptographic operations. This endpoint is known as the data plane URL or the cryptographic endpoint.
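For example, with the OCI Python SDK you can read a vault's two unique endpoints and create clients against them. This is a minimal sketch assuming the default configuration file; the vault OCID is a placeholder.

import oci

config = oci.config.from_file()
vault_id = "<vault_ocid>"                      # placeholder (assumption)

vault = oci.key_management.KmsVaultClient(config).get_vault(vault_id).data

management_client = oci.key_management.KmsManagementClient(
    config, service_endpoint=vault.management_endpoint)   # control plane URL
crypto_client = oci.key_management.KmsCryptoClient(
    config, service_endpoint=vault.crypto_endpoint)       # data plane URL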

Load Balancing API


API reference
• https://iaas.uk-gov-london-1.oraclegovcloud.uk
• https://iaas.uk-gov-cardiff-1.oraclegovcloud.uk

Marketplace Service API


API reference
• https://marketplace.uk-gov-london-1.oraclegovcloud.uk
• https://marketplace.uk-gov-cardiff-1.oraclegovcloud.uk

Monitoring API
API reference
• https://telemetry-ingestion.uk-gov-london-1.oraclegovcloud.uk
• https://telemetry.uk-gov-london-1.oraclegovcloud.uk

Notifications API
API reference
• https://cp.notification.uk-gov-london-1.oraclegovcloud.uk
• https://cp.notification.uk-gov-cardiff-1.oraclegovcloud.uk

Object Storage and Archive Storage APIs


Both Object Storage and Archive Storage are accessible with the following APIs:
Object Storage API
API reference
• https://objectstorage.uk-gov-london-1.oraclegovcloud.uk
• https://objectstorage.uk-gov-cardiff-1.oraclegovcloud.uk
Amazon S3 Compatibility API
API reference
• https://<object_storage_namespace>.compat.objectstorage.uk-gov-london-1.oraclegovcloud.uk
• https://<object_storage_namespace>.compat.objectstorage.uk-gov-cardiff-1.oraclegovcloud.uk
Tip: See Understanding Object Storage Namespaces on page 3423 for information regarding how to find your Object Storage namespace.

Swift API (for use with Oracle RMAN)


• https://swiftobjectstorage.uk-gov-london-1.oraclegovcloud.uk
• https://swiftobjectstorage.uk-gov-cardiff-1.oraclegovcloud.uk

OS Management API
API reference
• https://osms.uk-gov-london-1.oci.oraclegovcloud.uk
• https://osms.uk-gov-cardiff-1.oci.oraclegovcloud.uk

Registry
Registry

Region Name              Available Endpoints

UK Gov South (London)    • ocir.uk-gov-london-1.oci.oraclegovcloud.uk
UK Gov West (Newport)    • ocir.uk-gov-cardiff-1.oci.oraclegovcloud.uk

Resource Manager API


API reference
• https://resourcemanager.uk-gov-london-1.oraclegovcloud.uk
• https://resourcemanager.uk-gov-cardiff-1.oraclegovcloud.uk

Search API
API reference
• https://query.uk-gov-london-1.oraclegovcloud.uk
• https://query.uk-gov-cardiff-1.oraclegovcloud.uk

Secret Management API (for the Vault service)


API reference
• https://vaults.uk-gov-london-1.oraclegovcloud.uk
• https://vaults.uk-gov-cardiff-1.oraclegovcloud.uk

Secret Retrieval API (for the Vault service)


API reference
• https://secrets.vaults.uk-gov-london-1.oraclegovcloud.uk
• https://secrets.vaults.uk-gov-cardiff-1.oraclegovcloud.uk

Streaming API
API reference
• https://streaming.uk-gov-london-1.oraclegovcloud.uk
• https://streaming.uk-gov-cardiff-1.oraclegovcloud.uk


Web Application Acceleration and Security API


API reference
• https://waas.uk-gov-london-1.oraclegovcloud.uk
• https://waas.uk-gov-cardiff-1.oraclegovcloud.uk

Work Requests API (for Compute and Database work requests)


API reference
• https://iaas.uk-gov-london-1.oraclegovcloud.uk
• https://iaas.uk-gov-cardiff-1.oraclegovcloud.uk

Services Not Supported in Oracle Cloud Infrastructure United Kingdom Government Cloud
The following services are currently not available for tenancies in the United Kingdom Government Cloud:
Solutions and Platform services not available:
• Announcements
• Application Migration
• Compliance Documents
• Big Data
• Blockchain Platform
• Health Checks

SMTP Authentication and Connection Endpoints


Email Delivery supports only the AUTH PLAIN command for SMTP authentication. If the sending application cannot use the AUTH PLAIN command, you can route mail through an SMTP proxy or relay. For more information about the AUTH command, see AUTH Command and its Mechanisms.

Region SMTP Connection Endpoint


UK Gov South (London) smtp.email.uk-gov-london-1.oci.oraclegovcloud.uk
UK Gov West (Newport) smtp.email.uk-gov-cardiff-1.oci.oraclegovcloud.uk

SPF Record Syntax


An SPF record is a TXT record on your sending domain that authorizes Email Delivery IP addresses to send on your
behalf. SPF is required for subdomains of oraclegovcloud.com and recommended in other cases. The SPF
record syntax for the United Kingdom Government Cloud sending region is shown in the following table:

Realm Key SPF Record


OC4 v=spf1 include:rp.oraclegovemaildelivery.uk ~all

The Realm Key is applicable for any sending regions in that realm.


Chapter 5: Service Essentials
The following topics provide essential information that applies across Oracle Cloud Infrastructure.

Security Credentials on page 181


The types of credentials you'll use when working with Oracle Cloud Infrastructure.

Regions and Availability Domains on page 182


An introduction to the concepts of regions and availability domains.

Resource Identifiers on page 199


A description of the different ways your Oracle Cloud Infrastructure resources are identified.

Resource Monitoring on page 201


Information about how to monitor your resources.

Resource Tags on page 213


Information about Oracle Cloud Infrastructure tags and how to apply them to your resources.

Compartment Quotas on page 246


Information about how to control resource consumption within compartments using quotas.

Tenancy Explorer
View all resources in a selected compartment, across regions.

Service Limits on page 217


A list of the default limits applied to your cloud resources and how to request an increase.

Console Announcements on page 264


Information about the announcements that occasionally appear in the Oracle Cloud Infrastructure Console.

Prerequisites for Oracle Platform Services on Oracle Cloud Infrastructure on page 269
Instructions for setting up the resources required when running an Oracle Platform Service on Oracle Cloud
Infrastructure.

Renaming a Cloud Account on page 275


Instructions for renaming an Oracle cloud account.


Billing and Payment Tools Overview on page 276


Information about billing and payment tools that you can use to analyze your service usage and manage your costs.

My Services Use Cases on page 304


Use cases for the Oracle Cloud My Services API, to help you interact programmatically with My Services.

Cloud Shell
A free-to-use browser-based terminal accessible from the Oracle Cloud Console that provides access to a Linux shell
with pre-authenticated Oracle Cloud Infrastructure CLI and other useful tools.

Security Credentials
This section describes the types of credentials you'll use when working with Oracle Cloud Infrastructure.

Console Password
• What it's for: Using the Console.
• Format: Typical password text string.
• How to get one: An administrator will provide you with a one-time password.
• How to use it: Sign in to the Console the first time with the one-time password, and then change it when
prompted. Requirements for the password are displayed there. The one-time password expires in seven days. If
you want to change the password later, see To change your Console password on page 2478. Also, you or an
administrator can reset the password in the Console or with the API (see To create or reset another user's Console
password on page 2479). Resetting the password creates a new one-time password that you'll be prompted to
change the next time you sign in to the Console. If you're blocked from signing in to the Console because you've
tried 10 times in a row unsuccessfully, contact your administrator.
• Note for Federated Users: Federated users do not use a Console password. Instead, they sign in to the Console
through their identity provider.

API Signing Key


• What it's for: Using the API (see Software Development Kits and Command Line Interface on page 4262 and
Request Signatures on page 4426).
• Format: RSA key pair in PEM format (minimum 2048 bits required).
• How to get one: You can use the Console to generate the private/public key pair for you, or you can generate your
own. See Required Keys and OCIDs on page 4215.
• How to use it: Use the private key with the SDK or with your own client to sign your API requests. Note that
after you've added your first API key in the Console, you can use the API to upload any additional ones you want
to use. If you provide the wrong kind of key (for example, your instance SSH key, or a key that isn't at least 2048
bits), you'll get an InvalidKey error.
• Example: The PEM public key looks something like this:

-----BEGIN PUBLIC KEY-----

MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAoTFqF...
...

-----END PUBLIC KEY-----
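If you generate your own key pair, the sketch below shows one way to produce a 2048-bit RSA pair in PEM format with the Python cryptography package; the output file names are arbitrary examples.

from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric import rsa

# 2048 bits is the documented minimum for an API signing key.
key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

private_pem = key.private_bytes(
    serialization.Encoding.PEM,
    serialization.PrivateFormat.TraditionalOpenSSL,
    serialization.NoEncryption())
public_pem = key.public_key().public_bytes(
    serialization.Encoding.PEM,
    serialization.PublicFormat.SubjectPublicKeyInfo)

# Example file names; keep the private key somewhere safe.
with open("oci_api_key.pem", "wb") as f:
    f.write(private_pem)
with open("oci_api_key_public.pem", "wb") as f:
    f.write(public_pem)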

Instance SSH Key


• What it's for: Accessing a compute instance.


• Format: For Oracle-provided images, these SSH key types are supported: RSA, DSA, DSS, ECDSA, and
Ed25519. If you bring your own image, you're responsible for managing the SSH key types that are supported.
For RSA, DSS, and DSA keys, a minimum of 2048 bits is recommended. For ECDSA keys, a minimum of 256
bits is recommended.
• How to get one: See Managing Key Pairs on Linux Instances on page 698. Optionally, you can use a key pair
that is generated by Oracle Cloud Infrastructure when you create an instance in the Console.
• How to use it: When you launch an instance, provide the public key from the key pair.
• Example:
A public key has the following format:

<key_type> <public_key> <optional_comment>

For example, an RSA public key looks like this:

ssh-rsa AAAAB3BzaC1yc2EAAAADAQABAAABAQD9BRwrUiLDki6P0+jZhwsjS2muM...

...yXDus/5DQ== rsa-key-20201202

Auth Token
• What it's for: Authenticating with third-party APIs that do not support Oracle Cloud Infrastructure's signature-
based authentication. For example, use an auth token as your password with Swift clients.
• Format: Typical password text string.
• How to get one: See Working with Console Passwords and API Keys on page 2475.
• How to use it: Usage depends on the service you are authenticating with. Typically, you authenticate with third-party APIs by providing your Oracle Cloud Infrastructure Console login, your auth token provided by Oracle, and your organization's Oracle tenant name.

Regions and Availability Domains


This topic describes the physical and logical organization of Oracle Cloud Infrastructure resources.

About Regions and Availability Domains


Oracle Cloud Infrastructure is hosted in regions and availability domains. A region is a localized geographic area,
and an availability domain is one or more data centers located within a region. A region is composed of one or more
availability domains. Most Oracle Cloud Infrastructure resources are either region-specific, such as a virtual cloud
network, or availability domain-specific, such as a compute instance. Traffic between availability domains and
between regions is encrypted. Availability domains are isolated from each other, fault tolerant, and very unlikely to
fail simultaneously. Because availability domains do not share infrastructure such as power or cooling, or the internal
availability domain network, a failure at one availability domain within a region is unlikely to impact the availability
of the others within the same region.
The availability domains within the same region are connected to each other by a low latency, high bandwidth
network, which makes it possible for you to provide high-availability connectivity to the internet and on-premises,
and to build replicated systems in multiple availability domains for both high-availability and disaster recovery.
Oracle is adding multiple cloud regions around the world to provide local access to cloud resources for our customers.
To accomplish this quickly, we’ve chosen to launch regions in new geographies with one availability domain.
As regions require expansion, we have the option to add capacity to existing availability domains, to add additional
availability domains to an existing region, or to build a new region. The expansion approach in a particular scenario is
based on customer requirements as well as considerations of regional demand patterns and resource availability.


For any region with one availability domain, a second availability domain or region in the same country or geo-
political area will be made available within a year to enable further options for disaster recovery that support
customer requirements for data residency where they exist.
Regions are independent of other regions and can be separated by vast distances—across countries or even continents.
Generally, you would deploy an application in the region where it is most heavily used, because using nearby
resources is faster than using distant resources. However, you can also deploy applications in different regions for
these reasons:
• To mitigate the risk of region-wide events such as large weather systems or earthquakes.
• To meet varying requirements for legal jurisdictions, tax domains, and other business or social criteria.
Regions are grouped into realms. Your tenancy exists in a single realm and can access all regions that belong to that
realm. You cannot access regions that are not in your realm. Currently, Oracle Cloud Infrastructure has multiple
realms. There is one commercial realm. There are multiple realms for Government Cloud: US Government Cloud
FedRAMP authorized and IL5 authorized, and United Kingdom Government Cloud.
The following table lists the regions in the Oracle Cloud Infrastructure commercial realm:

Region Name                        Region Identifier   Region Location           Region Key   Realm Key   Availability Domains

Australia East (Sydney)            ap-sydney-1         Sydney, Australia         SYD          OC1         1
Australia Southeast (Melbourne)    ap-melbourne-1      Melbourne, Australia      MEL          OC1         1
Brazil East (Sao Paulo)            sa-saopaulo-1       Sao Paulo, Brazil         GRU          OC1         1
Canada Southeast (Montreal)        ca-montreal-1       Montreal, Canada          YUL          OC1         1
Canada Southeast (Toronto)         ca-toronto-1        Toronto, Canada           YYZ          OC1         1
Chile (Santiago)                   sa-santiago-1       Santiago, Chile           SCL          OC1         1
Germany Central (Frankfurt)        eu-frankfurt-1      Frankfurt, Germany        FRA          OC1         3
India South (Hyderabad)            ap-hyderabad-1      Hyderabad, India          HYD          OC1         1
India West (Mumbai)                ap-mumbai-1         Mumbai, India             BOM          OC1         1
Japan Central (Osaka)              ap-osaka-1          Osaka, Japan              KIX          OC1         1
Japan East (Tokyo)                 ap-tokyo-1          Tokyo, Japan              NRT          OC1         1
Netherlands Northwest (Amsterdam)  eu-amsterdam-1      Amsterdam, Netherlands    AMS          OC1         1
Saudi Arabia West (Jeddah)         me-jeddah-1         Jeddah, Saudi Arabia      JED          OC1         1
South Korea Central (Seoul)        ap-seoul-1          Seoul, South Korea        ICN          OC1         1
South Korea North (Chuncheon)      ap-chuncheon-1      Chuncheon, South Korea    YNY          OC1         1
Switzerland North (Zurich)         eu-zurich-1         Zurich, Switzerland       ZRH          OC1         1
UAE East (Dubai)                   me-dubai-1          Dubai, UAE                DXB          OC1         1
UK South (London)                  uk-london-1         London, United Kingdom    LHR          OC1         3
UK West (Newport)                  uk-cardiff-1        Newport, United Kingdom   CWL          OC1         1
US East (Ashburn)                  us-ashburn-1        Ashburn, VA               IAD          OC1         3
US West (Phoenix)                  us-phoenix-1        Phoenix, AZ               PHX          OC1         3
US West (San Jose)                 us-sanjose-1        San Jose, CA              SJC          OC1         1

To subscribe to a region, see Managing Regions on page 2464.


For a list of regions in the Oracle Government Cloud realms, see the following topics:
• Oracle Cloud Infrastructure US Government Cloud with FedRAMP Authorization on page 160
• Oracle Cloud Infrastructure US Federal Cloud with DISA Impact Level 5 Authorization on page 165
• Oracle Cloud Infrastructure United Kingdom Government Cloud on page 174
Note:

Your Tenancy's Availability Domain Names


Oracle Cloud Infrastructure randomizes the availability domains by tenancy
to help balance capacity in the data centers. For example, the availability
domain labeled PHX-AD-1 for tenancyA may be a different data center than
the one labeled PHX-AD-1 for tenancyB. To keep track of which availability
domain corresponds to which data center for each tenancy, Oracle Cloud
Infrastructure uses tenancy-specific prefixes for the availability domain
names. For example: the availability domains for your tenancy are something
like Uocm:PHX-AD-1, Uocm:PHX-AD-2, and so on.
To get the specific names of your tenancy's availability domains, use the
ListAvailabilityDomains operation, which is available in the IAM API. You
can also see the names when you use the Console to launch an instance and
choose which availability domain to launch the instance in.
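A minimal sketch of that call with the OCI Python SDK, assuming a standard configuration file whose tenancy OCID is used as the compartment:

import oci

config = oci.config.from_file()
identity = oci.identity.IdentityClient(config)

# ListAvailabilityDomains returns the tenancy-specific names, for example Uocm:PHX-AD-1.
for ad in identity.list_availability_domains(compartment_id=config["tenancy"]).data:
    print(ad.name)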

Fault Domains
A fault domain is a grouping of hardware and infrastructure within an availability domain. Each availability domain
contains three fault domains. Fault domains provide anti-affinity: they let you distribute your instances so that the
instances are not on the same physical hardware within a single availability domain. A hardware failure or Compute
hardware maintenance event that affects one fault domain does not affect instances in other fault domains. In addition,
the physical hardware in a fault domain has independent and redundant power supplies, which prevents a failure in
the power supply hardware within one fault domain from affecting other fault domains.


To control the placement of your compute instances, bare metal DB system instances, or virtual machine DB system
instances, you can optionally specify the fault domain for a new instance or instance pool at launch time. If you
don't specify the fault domain, the system selects one for you. Oracle Cloud Infrastructure makes a best-effort anti-
affinity placement across different fault domains, while optimizing for available capacity in the availability domain.
To change the fault domain for a compute instance, edit the fault domain. To change the fault domain for a bare metal
or virtual machine DB system instance, terminate it and launch a new instance in the preferred fault domain.
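For example, a compute launch request can name the fault domain explicitly. This sketch uses the OCI Python SDK; every angle-bracket value, the shape name, and the chosen fault domain are placeholders rather than recommendations.

import oci

config = oci.config.from_file()
compute = oci.core.ComputeClient(config)

details = oci.core.models.LaunchInstanceDetails(
    availability_domain="<availability_domain_name>",   # placeholder (assumption)
    compartment_id="<compartment_ocid>",
    shape="VM.Standard2.1",                              # example shape
    fault_domain="FAULT-DOMAIN-2",                       # omit to let the service pick one
    source_details=oci.core.models.InstanceSourceViaImageDetails(
        image_id="<image_ocid>"),
    create_vnic_details=oci.core.models.CreateVnicDetails(
        subnet_id="<subnet_ocid>"))

instance = compute.launch_instance(details).data
print(instance.id, instance.fault_domain)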
Use fault domains to do the following things:
• Protect against unexpected hardware failures or power supply failures.
• Protect against planned outages because of Compute hardware maintenance.
For more information:
• For recommendations about how to use fault domains when provisioning application and database servers, see
Fault Domains on page 599 in Best Practices for Your Compute Instance on page 597.
• For more information about using fault domains when provisioning Oracle bare metal and virtual machine
DB systems, see Virtual Machine DB Systems on page 1356 and Availability Domain and Fault Domain
Considerations for Oracle Data Guard on page 1464.

Subscribed Region Limits


Trial, free tier, and pay-as-you-go tenancies are limited to one subscribed region. You can request an increase to the limit for pay-as-you-go tenancies; see To request a subscribed region limit increase on page 185 for more information.
Universal monthly credit tenancies can subscribe to all publicly released commercial regions.

Requesting a Limit Increase to the Subscribed Region Count


You can submit a request to increase the subscribed region count for your tenancies from within the Console. If you
try to subscribe to a region beyond the limit for your tenancy, you'll be prompted to submit a limit increase request.
Additionally, you can launch the request from the service limits page or at any time by clicking the link under the Help menu.

To request a subscribed region limit increase


1. Open the Help menu, go to Support, and click Request service limit increase.
2. Enter the following:
• Primary Contact Details: Enter the name and email address of the person making the request. Enter one
email address only. A confirmation will be sent to this address.
• Service Category: Select Regions.
• Resource: Select Subscribed region count.
• Tenancy Limit: Specify the limit number.
• Reason for Request: Enter a reason for your request. If your request is urgent or unusual, please provide
details here.
3. Click Submit Request.
After you submit the request, it is processed. A response can take anywhere from a few minutes to a few days. If your
request is granted, a confirmation email is sent to the address provided in the primary contact details.
If we need additional information about your request, a follow-up email is sent to the address provided in the primary
contact details.

Service Availability Across Regions


All Oracle Cloud Infrastructure regions offer core infrastructure services, including the following:


• Compute: Compute (Intel-based bare metal & VM, DenseIO & Standard), Container Engine for Kubernetes,
Registry
• Storage: Block Volume, File Storage, Object Storage, Archive Storage
• Networking: Virtual Cloud Network, Load Balancing, FastConnect (specific partners as available and requested)
• Database: Database, Exadata Cloud Service, Autonomous Data Warehouse, Autonomous Transaction Processing
• Edge: DNS
• Platform: Audit, Identity and Access Management, Monitoring, Notifications, Tagging, Work Requests
• Security: Vault
Generally available cloud services beyond those in the previous list are made available based on regional customer
demand. Any service can be made available within a maximum of three months, with many services deploying more
quickly. New cloud services are made available in regions as quickly as possible based on a variety of considerations,
including regional customer demand, ability to achieve regulatory compliance where applicable, resource availability,
and other factors. Because of Oracle Cloud Infrastructure's low latency interconnect backbone, you can use cloud
services in other geographic regions with effective results when those services are not available in your home region,
as long as data residency requirements do not prevent you from doing so. We regularly work with customers to help
ensure effective access to required services.

Resource Availability
The following sections list the resource types based on their availability: across regions, within a single region, or
within a single availability domain.
Tip:
In general: IAM resources are cross-region. DB Systems, instances, and volumes are specific to an availability domain. Everything else is regional.
Exception: Subnets were originally designed to be specific to an availability domain. Now, you can create regional subnets, which are what Oracle recommends.

Cross-Region Resources
• API signing keys
• compartments
• detectors (Cloud Guard; regional to reporting region)
• dynamic groups
• federation resources
• groups
• managed lists (Cloud Guard)
• network sources
• policies
• responders (Cloud Guard; regional to reporting region)
• tag namespaces
• tag keys
• targets (Cloud Guard; regional to reporting region)
• users

Regional Resources
• alarms
• apm-domains (Application Performance Monitoring)
• applications (Data Flow service)
• applications (Functions service)
• blockchain platforms (Blockchain Platform service)


• buckets: Although buckets are regional resources, they can be accessed from any location if you use the correct
region-specific Object Storage URL for the API calls.
• clusters (Big Data service)
• clusters (Container Engine for Kubernetes service)
• cloudevents-rules
• config work requests (Logging Analytics)
• configuration source providers (Resource Manager)
• content and experience
• customer-premises equipment (CPE)
• dashboards (Management Dashboard)
• data catalogs
• database insights (Operations Insights)
• DB Systems (MySQL Database service)
• deployments (GoldenGate)
• DHCP options sets
• dynamic routing gateways (DRGs)
• encryption keys
• entities (Logging Analytics)
• functions
• images
• internet gateways
• jobs (Database Management)
• jobs (Resource Manager)
• load balancers
• local peering gateways (LPGs)
• log groups (Logging Analytics)
• management agent install keys
• management agents
• managed database groups (Database Management)
• managed databases (Database Management)
• metrics
• models
• NAT gateways
• network security groups
• node pools
• notebook sessions
• object collection rules (Logging Analytics)
• private templates (Resource Manager)
• problems (Cloud Guard; regional to reporting region)
• projects
• queryjob work requests (Logging Analytics)
• registered databases (GoldenGate)
• repositories
• reserved public IPs
• route tables
• runs
• saved searches (Management Dashboard)
• scheduled tasks (Logging Analytics)
• secrets
• security lists


• service connectors
• service gateways
• stacks (Resource Manager)
• storage work requests (Logging Analytics)
• subnets: When you create a subnet, you choose whether it's regional or specific to an availability domain. Oracle
recommends using regional subnets.
• subscriptions
• tables
• tickets (Support Management service)
• topics
• vaults
• virtual cloud networks (VCNs)
• volume backups: They can be restored as new volumes to any availability domain within the same region in which
they are stored.
• workspaces

Availability Domain-Specific Resources


• DB systems (Oracle Database service)
• ephemeral public IPs
• instances: They can be attached only to volumes in the same availability domain.
• subnets: When you create a subnet, you choose whether it is regional or specific to an availability domain. Oracle
recommends using regional subnets.
• volumes: They can be attached only to an instance in the same availability domain.

Dedicated Regions
Dedicated regions are public regions assigned to a single organization. Region-specific details, such as the region ID and region key, are not available in public documentation; check with your Oracle contact for this information for your dedicated region.
This topic provides general information for dedicated regions. For information about Oracle Cloud Infrastructure
services, see Oracle Cloud Infrastructure.

Console URL
The Console URL you use to sign in to your region is constructed as follows:

https://console.<region_identifier>.oraclecloud8.com

To sign in to your region, enter the Console URL in a supported browser.

Service Endpoint Patterns


This section describes the pattern you use to construct endpoints for each service available in Oracle Cloud Infrastructure. Not all services listed here may be available in your specific region; confirm with your Oracle contact which services are available in your region.
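As an illustration of how the patterns compose, the short sketch below substitutes a region identifier into a few of the documented patterns; the selection of services and the function name are arbitrary, and the region identifier is the placeholder you obtain from your Oracle contact.

# A few of the endpoint URL patterns documented below, keyed by service.
ENDPOINT_PATTERNS = {
    "core": "https://iaas.{region}.oraclecloud8.com",
    "object_storage": "https://objectstorage.{region}.oraclecloud8.com",
    "identity": "https://identity.{region}.oraclecloud8.com",
}

def endpoint(service: str, region_identifier: str) -> str:
    return ENDPOINT_PATTERNS[service].format(region=region_identifier)

print(endpoint("core", "<region_identifier>"))  # placeholder region identifier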
Analytics API
API reference
Endpoint URL pattern:
• https://analytics.<region_identifier>.ocp.oraclecloud8.com
Announcements Service API
API reference


Endpoint URL pattern:


• https://announcements.<region_identifier>.oraclecloud8.com
API Gateway API
API reference
Endpoint URL pattern:
• https://apigateway.<region_identifier>.oci.oraclecloud8.com
Application Migration API
API reference
Endpoint URL pattern:
• https://applicationmigration.<region_identifier>.oraclecloud8.com
Audit API
API reference
Endpoint URL pattern:
• https://audit.<region_identifier>.oraclecloud8.com
Autoscaling API
API reference
Endpoint URL pattern:
• https://autoscaling.<region_identifier>.oci.oraclecloud8.com
Big Data Service API
API reference
Endpoint URL pattern:
• https://bigdataservice.<region_identifier>.oci.oraclecloud8.com
Blockchain Platform Control Plane API
API reference
Endpoint URL pattern:
• https://blockchain.<region_identifier>.oci.oraclecloud8.com
Budgets API
API reference
Endpoint URL pattern:
• https://usage.<region_identifier>.oci.oraclecloud8.com
Cloud Advisor API
API reference
Endpoint URL pattern:
• https://advisor.<region_identifier>.oraclecloud8.com
Cloud Guard API
API reference
Endpoint URL pattern:
• https://cloudguard-cp-api.<region_identifier>.oci.oraclecloud8.com


Container Engine for Kubernetes API


API reference
Endpoint URL pattern:
• https://containerengine.<region_identifier>.oci.oraclecloud8.com
Core Services (covering Networking, Compute, and Block Volume)
The Networking, Compute, and Block Volume services are accessible with the following API:
API reference
Endpoint URL pattern:
• https://iaas.<region_identifier>.oraclecloud8.com
Data Catalog API
API reference
Endpoint URL pattern:
• https://datacatalog.<region_identifier>.oci.oraclecloud8.com
Data Flow API
API reference
Endpoint URL pattern:
• https://dataflow.<region_identifier>.oci.oraclecloud8.com
Data Integration API
API reference
Endpoint URL pattern:
• https://dataintegration.<region_identifier>.oci.oraclecloud8.com
Data Safe API
API reference
Endpoint URL pattern:
• https://datasafe.<region_identifier>.oci.oraclecloud8.com
Data Science API
API reference
Endpoint URL pattern:
• https://datascience.<region_identifier>.oci.oraclecloud8.com
Database API
API reference
Endpoint URL pattern:
• https://database.<region_identifier>.oraclecloud8.com
You can track the progress of long-running Database operations with the Work Requests API.
Digital Assistant Service Instance API
API reference
Endpoint URL pattern:
• https://digitalassistant.<region_identifier>.oci.oraclecloud8.com


DNS API
API reference
Endpoint URL pattern:
• https://dns.<region_identifier>.oci.oraclecloud8.com
Email Delivery API
API reference
Endpoint URL pattern:
• https://ctrl.email.<region_identifier>.oci.oraclecloud8.com
Events API
API reference
Endpoint URL pattern:
• https://events.<region_identifier>.oci.oraclecloud8.com
File Storage API
API reference
Endpoint URL pattern:
• https://filestorage.<region_identifier>.oraclecloud8.com
Functions Service API
API reference
Endpoint URL pattern:
• https://functions.<region_identifier>.oci.oraclecloud8.com
Health Checks API
API reference
Endpoint URL pattern:
• https://healthchecks.<region_identifier>.oraclecloud8.com
IAM API
API reference
Endpoint URL pattern:
• https://identity.<region_identifier>.oraclecloud8.com
Load Balancing API
API reference
Endpoint URL pattern:
• https://iaas.<region_identifier>.oraclecloud8.com
LogAnalytics API
API reference
Endpoint URL pattern:
• https://loganalytics.<region_identifier>.oraclecloud8.com
Logging Ingestion API
API reference


Endpoint URL pattern:


• https://ingestion.logging.<region_identifier>.oraclecloud8.com
Logging Management API
API reference
Endpoint URL pattern:
• https://logging.<region_identifier>.oraclecloud8.com
Logging Search API
API reference
Endpoint URL pattern:
• https://logging.<region_identifier>.oraclecloud8.com
Management Agent API
API reference
Endpoint URL pattern:
• https://management-agent.<region_identifier>.oraclecloud8.com
ManagementDashboard API
API reference
Endpoint URL pattern:
• https://managementdashboard.<region_identifier>.oraclecloud8.com
Marketplace Service API
API reference
Endpoint URL pattern:
• https://marketplace.<region_identifier>.oraclecloud8.com
Monitoring API
API reference
Endpoint URL pattern:
• https://telemetry-ingestion.<region_identifier>.oraclecloud8.com
• https://telemetry.<region_identifier>.oraclecloud8.com
MySQL Database Service API
API reference
Endpoint URL pattern:
• https://mysql.<region_identifier>.ocp.oraclecloud8.com
NoSQL Database API
API reference
Endpoint URL pattern:
• https://nosql.<region_identifier>.oci.oraclecloud8.com
Notifications API
API reference
Endpoint URL pattern:


• https://notification.<region_identifier>.oraclecloud8.com
Object Storage and Archive Storage APIs
Both Object Storage and Archive Storage are accessible with the following APIs:
Object Storage API
API reference
Endpoint URL pattern:
• https://objectstorage.<region_identifier>.oraclecloud8.com
Amazon S3 Compatibility API
API reference
Endpoint URL pattern:
• https://<object_storage_namespace>.compat.objectstorage.<region_identifier>.oraclecloud8.com
Tip: See Understanding Object Storage Namespaces on page 3423 for information regarding how to find your Object Storage namespace.
Swift API (for use with Oracle RMAN)
Endpoint URL pattern:
• https://swiftobjectstorage.<region_identifier>.oraclecloud8.com
Operations Insights API
API reference
Endpoint URL pattern:
• https://operationsinsights.<region_identifier>.oci.oraclecloud8.com
Oracle Cloud Agent API
API reference
Endpoint URL pattern:
• https://iaas.<region_identifier>.oraclecloud8.com
Oracle Cloud My Services API
API reference
Endpoint URL:
• https://itra.oraclecloud.com
Oracle Cloud VMware Solution API
API reference
Endpoint URL pattern:
• https://ocvps.<region_identifier>.oci.oraclecloud8.com
Oracle Content and Experience API
API reference
Endpoint URL pattern:
• https://cp.oce.<region_identifier>.ocp.oraclecloud8.com


Oracle Integration API


API reference
Endpoint URL pattern:
• https://integration.<region_identifier>.ocp.oraclecloud8.com
Organizations API
API reference
Endpoint URL pattern:
• https://organizations.<region_identifier>.oci.oraclecloud8.com
OS Management API
API reference
Endpoint URL pattern:
• https://osms.<region_identifier>.oci.oraclecloud8.com
Registry
Registry
Endpoint URL pattern:
• https://ocir.<region_identifier>.oci.oraclecloud8.com
Resource Manager API
API reference
Endpoint URL pattern:
• https://resourcemanager.<region_identifier>.oraclecloud8.com
Search API
API reference
Endpoint URL pattern:
• https://query.<region_identifier>.oraclecloud8.com
Service Connector Hub API
API reference
Endpoint URL pattern:
• https://service-connector-hub.<region_identifier>.oci.oraclecloud8.com
Service Limits API
API reference
Endpoint URL pattern:
• https://limits.<region_identifier>.oci.oraclecloud8.com
Streaming API
API reference
Endpoint URL pattern:
• https://streaming.<region_identifier>.oraclecloud8.com
Support Managements API
API reference


Endpoint URLs:
• https://incidentmanagement.us-ashburn-1.oraclecloud.com
• https://incidentmanagement.us-phoenix-1.oraclecloud.com
Usage API
API reference
Endpoint URL pattern:
• https://usageapi.<region_identifier>.oci.oraclecloud8.com
Vault Service Key Management API
API reference
Endpoint URL pattern:
• https://kms.<region_identifier>.oraclecloud8.com
In addition to these endpoints, each vault has a unique endpoint for create, update, and list operations for keys. This
endpoint is referred to as the control plane URL or management endpoint. Each vault also has a unique endpoint for
cryptographic operations. This endpoint is known as the data plane URL or the cryptographic endpoint.
Vault Service Secret Management API
API reference
Endpoint URL pattern:
• https://vaults.<region_identifier>.oci.oraclecloud8.com
Vault Service Secret Retrieval API
API reference
Endpoint URL pattern:
• https://secrets.vaults.<region_identifier>.oci.oraclecloud8.com
Web Application Acceleration and Security API
API reference
Endpoint URL pattern:
• https://waas.<region_identifier>.oraclecloud8.com
Work Requests API (for Compute and Database work requests)
API reference
Endpoint URL pattern:
• https://iaas.<region_identifier>.oraclecloud8.com

IP Address Ranges
This topic provides information about public IP address ranges for services that are deployed in Oracle Cloud
Infrastructure. Allow traffic to these CIDR blocks to ensure access to the services.
Endpoints for Oracle YUM repos are listed on this page. You can use DNS lookup to determine the public IP address
for each endpoint.

Public IP Addresses for VCNs and the Oracle Services Network


Public IP address ranges for VCNs and the Oracle Services Network are published to a JSON file which you can
download and view manually or consume programmatically.


The Oracle Services Network is a conceptual network in Oracle Cloud Infrastructure that is reserved for Oracle
services. A service gateway offers private access to the Oracle Services Network from workloads in your VCN
and your on-premises network. The published addresses correspond to the service CIDR label called All <region>
Services in Oracle Services Network. For a list of the services available with a service gateway, see Service
Gateway: Supported Cloud Services in Oracle Services Network.

Downloading the JSON File


Use this link to download the current list of public IP ranges.
You can poll the published file to check for new IP address ranges as frequently as every 24 hours. We recommend
that you poll the published file at least weekly.
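A minimal polling sketch in Python, assuming the requests package is available; the URL constant is a placeholder for the download link on this page, and comparing against a previously saved timestamp is only one way to detect changes.

import requests

PUBLIC_IP_RANGES_URL = "<public_ip_ranges_json_url>"   # placeholder for the link above

doc = requests.get(PUBLIC_IP_RANGES_URL, timeout=30).json()
print("Ranges last updated:", doc["last_updated_timestamp"])
# Compare this timestamp with the value from your previous poll to decide
# whether any IP address ranges have been added or changed.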

JSON File Contents and Syntax


IP addresses are published in the public_ip_ranges.json file with the fields in the following table.

Example of the public_ip_ranges.json file


{
  "last_updated_timestamp": "2019-11-18T19:55:47.204985",
  "regions": [
    {
      "region": "us-phoenix-1",
      "cidrs": [
        {
          "cidr": "129.146.0.0/21",
          "tags": [
            "OCI"
          ]
        },
        {
          "cidr": "134.70.8.0/21",
          "tags": [
            "OSN",
            "OBJECT_STORAGE"
          ]
        }
      ]
    },
    {
      "region": "us-ashburn-1",
      "cidrs": [
        {
          "cidr": "129.213.8.0/21",
          "tags": [
            "OCI"
          ]
        },
        {
          "cidr": "134.70.24.0/21",
          "tags": [
            "OSN",
            "OBJECT_STORAGE"
          ]
        }
      ]
    }
  ]
}


Field Name: last_updated_timestamp
Type: string
Definition: File creation time in ISO 8601 format, expressed as <date>T<time>.
Example: "last_updated_timestamp": "2019-11-18T19:55:47.204985"

Field Name: regions
Type: array
Definition: IP CIDR ranges grouped by region.
Example: See the preceding example of the public_ip_ranges.json file.

Field Name: region
Type: string
Definition: The region of the IP CIDR ranges. Valid values: any region in the Oracle Cloud Infrastructure commercial realm. For a complete list of regions, see Regions and Availability Domains on page 182.
Example: "region": "us-phoenix-1"

Field Name: cidrs
Type: array
Definition: A group of IP address CIDR ranges.
Example: See the preceding example of the public_ip_ranges.json file.

Field Name: cidr
Type: string
Definition: One or more IPv4 IP addresses expressed in CIDR notation.
Example: "cidr": "147.154.0.0/18"

Field Name: tags
Type: array of string values
Definition: The services associated with the IP address CIDR range. Valid values: OCI (the VCN CIDR blocks), OSN (the CIDR block ranges for the Oracle Services Network), and OBJECT_STORAGE (the CIDR block ranges used by the Object Storage service; for more information, see Overview of Object Storage on page 3420).
Example: "tags": [ "OCI" ]

Filtering the JSON file contents


After you download the JSON file, you can use a command line tool such as jq to filter the contents.
Download jq
Here are some examples of how you can use the tool to find and filter the information you need:
Find the creation date of the JSON file:

jq .last_updated_timestamp < public_ip_ranges.json

Get all IPv4 addresses for a specific region:

jq -r '.regions[] | select (.region=="us-phoenix-1") | .cidrs[] | select (.cidr | contains(".")) | .cidr' < public_ip_ranges.json
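If you prefer to filter the file programmatically, the following sketch reproduces the two jq queries above in plain Python; it assumes the file has already been downloaded as public_ip_ranges.json.

import json

with open("public_ip_ranges.json") as f:
    doc = json.load(f)

# Creation date of the JSON file.
print(doc["last_updated_timestamp"])

# All IPv4 CIDR blocks for a specific region.
phoenix = next(r for r in doc["regions"] if r["region"] == "us-phoenix-1")
print([c["cidr"] for c in phoenix["cidrs"] if "." in c["cidr"]])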

Public IP Addresses for the Oracle YUM Repos


The Oracle YUM repos have the following regional public endpoints.

Region YUM Server Endpoint


Netherlands Northwest (Amsterdam) https://yum-eu-amsterdam-1.oracle.com
Australia East (Sydney) https://yum-ap-sydney-1.oracle.com
Canada Southeast (Toronto) https://yum-ca-toronto-1.oracle.com
Germany Central (Frankfurt) https://yum-eu-frankfurt-1.oracle.com
India West (Mumbai) https://yum-ap-mumbai-1.oracle.com

Japan Central (Osaka) https://yum-ap-osaka-1.oracle.com
Japan East (Tokyo) https://yum-ap-tokyo-1.oracle.com
Saudi Arabia West (Jeddah) https://yum-me-jeddah-1.oracle.com
Australia Southeast (Melbourne) https://yum-ap-melbourne-1.oracle.com
South Korea Central (Seoul) https://yum-ap-seoul-1.oracle.com
UK South (London) https://yum-uk-london-1.oracle.com
US East (Ashburn) https://yum-us-ashburn-1.oracle.com
US West (Phoenix) https://yum-us-phoenix-1.oracle.com

You can use DNS lookup to determine the public IP address for each endpoint.

Resource Identifiers
This chapter describes the different ways your Oracle Cloud Infrastructure resources are identified.

Oracle Cloud IDs (OCIDs)


Most types of Oracle Cloud Infrastructure resources have an Oracle-assigned unique ID called an Oracle Cloud
Identifier (OCID). It's included as part of the resource's information in both the Console and API.
Important:

To use the API, you need the OCID for your tenancy. For information about
where to find it, see the next section.
OCIDs use this syntax:

ocid1.<RESOURCE TYPE>.<REALM>.[REGION][.FUTURE USE].<UNIQUE ID>

• ocid1: The literal string indicating the version of the OCID.


• resource type: The type of resource (for example, instance, volume, vcn, subnet, user, group, and so
on).
• realm: The realm the resource is in. A realm is a set of regions that share entities. Possible values are oc1 for
the commercial realm, oc2 for the Government Cloud realm, or oc3 for the Federal Government Cloud realm.
The regions in the commercial realm (OC1) belong to the domain oraclecloud.com. The regions in the
Government Cloud (OC2) belong to the domain oraclegovcloud.com.
• region: The region the resource is in (for example, phx, iad, eu-frankfurt-1). With the introduction of
the Frankfurt region, the format switched from a three-character code to a longer string. This part is present in the
OCID only for regional resources or those specific to a single availability domain. If the region is not applicable to
the resource, this part might be blank (see the example tenancy ID below).
• future use: Reserved for future use. Currently blank.
• unique ID: The unique portion of the ID. The format may vary depending on the type of resource or service.

Example OCIDs
Tenancy:

ocid1.tenancy.oc1..aaaaaaaaba3pv6wkcr4jqae5f44n2b2m2yt2j6rx32uzr4h25vqstifsfdsq

Instance:

ocid1.instance.oc1.phx.abuw4ljrlsfiqw6vzzxb43vyypt4pkodawglp3wqxjqofakrwvou52gb6s5a
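Because the parts are separated by periods, you can pull an OCID apart with ordinary shell tools. The one-liner below is only illustrative, not an Oracle utility; run against a tenancy OCID, the region field prints as empty:

echo "ocid1.instance.oc1.phx.abuw4ljrlsfiqw6vzzxb43vyypt4pkodawglp3wqxjqofakrwvou52gb6s5a" \
  | awk -F. '{ print "resource type: " $2; print "realm: " $3; print "region: " $4 }'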

Where to Find Your Tenancy's OCID


If you use the Oracle Cloud Infrastructure API, you need your tenancy's OCID in order to sign the API requests. You
also use the tenancy ID in some of the IAM API operations.
Get the tenancy OCID from the Oracle Cloud Infrastructure Console on the Tenancy Details page:
1. Open the Profile menu and click Tenancy: <your_tenancy_name>.
2. The tenancy OCID is shown under Tenancy Information. Click Copy to copy it to your clipboard.

The tenancy OCID looks something like this (notice the word "tenancy" in it):

ocid1.tenancy.oc1..<unique_ID>

Name and Description (IAM Only)


The IAM service requires you to assign a unique, unchangeable name to each of your IAM resources (users, groups,
dynamic groups, federations, and policies). The name must be unique within the scope of the type of resource (for
example, you can only have one user with the name BobSmith). Notice that this requirement is specific to IAM and
not the other services.
The name you assign to a user at creation is their login for the Console.
You can use these names instead of the OCID when writing a policy (for example, Allow group <GROUP
NAME> to manage all-resources in compartment <COMPARTMENT NAME>).
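For example, with a hypothetical group named NetworkAdmins and a hypothetical compartment named SandboxProjects, that policy statement would read:

Allow group NetworkAdmins to manage all-resources in compartment SandboxProjects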
In addition to the name, you must also assign a description to each of your IAM resources (although it can be an
empty string). It can be a friendly description or other information that helps you easily identify the resource. The
description does not have to be unique, and you can change it whenever you like. For example, you might want to
use the description to store the user's email address if you're not already using the email address as the user's unique
name.

Display Name
For most of the Oracle Cloud Infrastructure resources you create (other than those in IAM), you can optionally assign
a display name. It can be a friendly description or other information that helps you easily identify the resource. The
display name does not have to be unique, and you can change it whenever you like. The Console shows the resource's
display name along with its OCID.

Caution:

Avoid entering confidential information when assigning descriptions,


tags, or friendly names to your cloud resources through the Oracle Cloud
Infrastructure Console, API, or CLI.

Resource Monitoring
You can monitor the health, capacity, and performance of your Oracle Cloud Infrastructure resources when needed
using queries or on a passive basis using alarms. Queries and alarms rely on metrics emitted by your resource to the
Monitoring service.

Prerequisites
• IAM policies: To monitor resources, you must be given the required type of access in a policy written by an
administrator, whether you're using the Console or the REST API with an SDK, CLI, or other tool. The policy
must give you access to the monitoring services as well as the resources being monitored. If you try to perform
an action and get a message that you don’t have permission or are unauthorized, confirm with your administrator
the type of access you've been granted and which compartment you should work in. For more information on user
authorizations for monitoring, see the Authentication and Authorization section for the related service: Monitoring
or Notifications. Sample policy statements are shown after this list.
• Metrics exist in Monitoring: The resources that you want to monitor must emit metrics to the Monitoring service.
• Compute instances: To emit metrics, the Compute Instance Monitoring plugin must be enabled on the instance,
and plugins must be running. The instance must also have either a service gateway or a public IP address to send
metrics to the Monitoring service. For more information, see Enabling Monitoring for Compute Instances on page
790.
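As an illustration only (the group name is hypothetical, and the metrics and alarms resource-types are assumed from the Monitoring service's policy reference; confirm the exact statements your environment needs in the references above), policies along these lines grant read access to metrics and management of alarms:

Allow group MonitoringUsers to read metrics in tenancy
Allow group MonitoringUsers to manage alarms in tenancy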

Working with Resource Monitoring


Not all resources support monitoring. See Supported Services on page 2695 for the list of resources that support the
Monitoring service, which is required for queries and alarms used in monitoring.
The Monitoring service works with the Notifications service to notify you when metrics breach. For more information
about these services, see Monitoring Overview on page 2686 and Notifications Overview on page 3378.

To view default metric charts for a resource


On the page for the resource of interest, under Resources, click Metrics.
For example, to view metric data for a Compute instance:
1. Open the navigation menu. Under Core Infrastructure, go to Compute and click Instances.
2. Click the instance you're interested in.
3. On the instance details page, under Resources, click Metrics.
A chart is shown for each metric. For a list of metrics related to Compute instances, see Compute Instance Metrics
on page 794.
The Console displays the last hour of metric data for the selected resource. A chart is shown for each metric emitted
by the selected resource.
For a list of metrics emitted by your resource, see Supported Services on page 2695.

To view default metric charts for a set of resources


1. Open the navigation menu. Under Solutions and Platform, go to Monitoring and click Service Metrics.
2. Choose a compartment you have permission to work in (on the left side of the page). The page updates to display
only the resources in that compartment. If you're not sure which compartment to use, contact an administrator.

3. Choose the Metric Namespace for the resource types of interest in the selected compartment.
For example, choose oci_lbaas to see metrics for load balancers.
Default charts are displayed for all resources in the selected Metric Namespace and Compartment. Very small or
large values are indicated by International System of Units (SI units), such as M for mega (10 to the sixth power).
Don't see all expected resources or metrics?
• Try a different time range.
• Make sure the correct Compartment is selected.
Metric namespaces are shown only when associated resources exist in the selected compartment. For example, the
oci_autonomous_database namespace is shown only when Autonomous Databases exist in the selected
compartment.
• Confirm that the missing resources are emitting metrics. See Enabling Monitoring for Compute Instances on page
790.
• Review limits information. Limits information for returned data includes the 100,000 data point maximum and
time range maximums (determined by resolution, which relates to interval). See MetricData Reference.

To create a query
1. Open the navigation menu. Under Solutions and Platform, go to Monitoring and click Metrics Explorer.
The Metrics Explorer page displays an empty chart with fields to build a query.
2. Fill in the fields for a new query.
• Compartment: The compartment containing the resources that you want to monitor. By default, the first
accessible compartment is selected.
• Metric Namespace: The service or application emitting metrics for the resources that you want to monitor.
• Resource Group (optional): The group that the metric belongs to. A resource group is a custom string
provided with a custom metric. Not applicable to service metrics.
• Metric Name: The name of the metric. Only one metric can be specified. Metric selections depend on the
selected compartment and metric namespace. Example: CpuUtilization
• Interval: The aggregation window.
Interval values
Supported values for interval depend on the specified time range in the metric query (not applicable to alarm
queries). More interval values are supported for smaller time ranges. For example, if you select one hour for
the time range, then all interval values are supported. If you select 90 days for the time range, then only the 1h
or 1d interval values are supported.
• 1m - 1 minute
• 5m - 5 minutes
• 1h - 1 hour
• 1d - 1 day
Note:

For metric queries, the interval you select drives the default resolution of
the request, which determines the maximum time range of data returned.
For more information about the resolution parameter as used in metric
queries, see SummarizeMetricsData.
Maximum time range returned for a query
The maximum time range returned for a metric query depends on the
resolution. By default, for metric queries, the resolution is the same as
the query interval.

The maximum time range is calculated using the current time, regardless
of any specified end time. Following are the maximum time ranges
returned for each interval selection available in the Console (Basic
Mode). To specify an interval value that is not available in Basic Mode
in the Console, such as 12 hours, switch to Advanced Mode.

Interval | Default resolution (metric queries) | Maximum time range returned
1d | 1 day | 90 days
1h | 1 hour | 90 days
5m | 5 minutes | 30 days
1m | 1 minute | 7 days

To specify a non-default resolution that differs from the interval, use the
SummarizeMetricsData operation.
See examples of returned data
Example 1: One-minute interval and resolution up to the current time,
sent at 10:00 on January 8th. No resolution or end time is specified,
so the resolution defaults to the interval value of 1m, and the end time
defaults to the current time (2019-01-08T10:00:00.789Z). This
request returns a maximum of 7 days of metric data points. The earliest
data point possible within this seven-day period would be 10:00 on
January 1st (2019-01-01T10:00:00.789Z).
Example 2: Five-minute interval with one-minute resolution up to
two days ago, sent at 10:00 on January 8th. Because the resolution
drives the maximum time range, a maximum of 7 days of metric data
points is returned. While the end time specified was 10:00 on January
6th (2019-01-06T10:00:00.789Z), the earliest data point
possible within this seven-day period would be 10:00 on January 1st (2019-01-01T10:00:00.789Z). Therefore, only 5 days of metric data points can be returned in this example.
• Statistic: The aggregation function.
Statistic values
• COUNT - The number of observations received in the specified time period.
• MAX - The highest value observed during the specified time period.
• MEAN - The value of Sum divided by Count during the specified time period.
• MIN - The lowest value observed during the specified time period.
• P50 - The value of the 50th percentile.
• P90 - The value of the 90th percentile.
• P95 - The value of the 95th percentile.
• P99 - The value of the 99th percentile.
• P99.5 - The value of the 99.5th percentile.
• RATE - The per-interval average rate of change.
• SUM - All values added together.
• Metric dimensions: Optional filters to narrow the metric data evaluated.
Dimension fields
• Dimension Name: A qualifier specified in the metric definition. For example, the dimension
resourceId is specified in the metric definition for CpuUtilization.
Note:

Long lists of dimensions are trimmed.


• To view dimensions by name, type one or more characters in the
box. A refreshed (trimmed) list shows matching dimension names.
• To retrieve all dimensions for a given metric, use the following
API operation: ListMetrics
• Dimension Value: The value you want to use for the specified dimension. For example, the resource
identifier for your instance of interest.
• + Additional dimension: Adds another name-value pair for a dimension.
• Aggregate Metric Streams: Aggregates all results to plot a single aggregated average for all metric streams.
This average is plotted as a single line on the metric chart. This operation is helpful when you want to plot a
metric as one line for all resources.
3. Click Update Chart.
The chart shows the results of your new query. Very small or large values are indicated by International System
of Units (SI units), such as M for mega (10 to the sixth power). Units correspond to the selected metric and do not
change by statistic.
Troubleshooting Errors and Query Limits
If you see an error that the query has exceeded the maximum number of metric streams, then update the query
to evaluate a number of metric streams that is within the limit. For example, you can reduce the metric streams by specifying dimensions. You can continue to evaluate all metric streams that were in the original query by
spreading the metric streams across multiple queries (or alarms).
Limits information for returned data includes the 100,000 data point maximum and time range maximums
(determined by resolution, which relates to interval). See MetricData Reference.
4. To customize the y-axis label or range, type the label you want into Y-Axis Label or type the minimum and
maximum values you want into Y-Axis Min Value and Y-Axis Max Value.
Only numeric characters are allowed for custom ranges. Custom labels and ranges are not persisted in shared
queries (MQL).
5. To view the query as a Monitoring Query Language (MQL) expression, click Advanced Mode.
Advanced Mode is located on the right, under the chart.
Use Advanced Mode to edit your query using MQL syntax to aggregate results by group. The MQL syntax also
supports additional parameter values. For more information about query parameters in Basic Mode and Advanced
Mode, see Monitoring Query Language (MQL) Reference on page 2767. A sample expression is shown after these steps.
6. To create another query, click Add Query below the chart.
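For reference, a metric query assembled from the fields above (the CpuUtilization metric, a 1m interval, the mean statistic, and a resourceId dimension) would look something like the following MQL expression; the OCID is a placeholder, and the exact expression the Console generates may differ:

CpuUtilization[1m]{resourceId = "<instance_OCID>"}.mean()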

To create an alarm
1. Open the navigation menu. Under Solutions and Platform, go to Monitoring and click Alarm Definitions.
2. Click Create alarm.
Note:

You can also create an alarm from a predefined query on the Service
Metrics page. Expand Options and click Create an Alarm on this
Query. For more information about service metrics, see Viewing Default
Metric Charts on page 2697.
3. On the Create Alarm page, under Define alarm, fill in or update the alarm settings:
Note:

To toggle between Basic Mode and Advanced Mode, click Switch to


Advanced Mode or Switch to Basic Mode (to the right of Define Alarm).
Basic Mode (default)
By default, this page uses Basic Mode, which separates the metric from its dimensions and its trigger rule.
• Alarm Name:
User-friendly name for the new alarm. This name is sent as the title for notifications related to this alarm.
Avoid entering confidential information.
Rendering of the title by protocol

Protocol Rendering of the title


Email Subject line of the email message.
HTTPS (Custom URL) Not rendered.
PagerDuty Title field of the published message.
Slack Not rendered.

SMS Not rendered.
• Alarm Severity: The perceived type of response required when the alarm is in the firing state.
• Alarm Body: The human-readable content of the notification delivered. Oracle recommends providing
guidance to operators for resolving the alarm condition. Consider adding links to standard runbook practices.
Example: "High CPU usage alert. Follow runbook instructions for resolution."
• Tags (optional): If you have permissions to create a resource, then you also have permissions to apply free-
form tags to that resource. To apply a defined tag, you must have permissions to use the tag namespace. For
more information about tagging, see Resource Tags on page 213. If you are not sure whether to apply tags,
then skip this option (you can apply tags later) or ask your administrator.
• Metric description: The metric to evaluate for the alarm condition.
• Compartment: The compartment containing the resources that emit the metrics evaluated by the alarm.
The selected compartment is also the storage location of the alarm. By default, the first accessible
compartment is selected.
• Metric Namespace: The service or application emitting metrics for the resources that you want to monitor.
• Resource Group (optional): The group that the metric belongs to. A resource group is a custom string
provided with a custom metric. Not applicable to service metrics.
• Metric Name: The name of the metric. Only one metric can be specified. Example: CpuUtilization
• Interval: The aggregation window, or the frequency at which data points are aggregated.
Interval values
Note:

Valid alarm intervals depend on the frequency at which the metric is


emitted. For example, a metric emitted every five minutes requires
a 5-minute alarm interval or higher. Most metrics are emitted every
minute, which means most metrics support any alarm interval. To
determine valid alarm intervals for a given metric, check the relevant
service's metric reference.
• 1m - 1 minute
• 5m - 5 minutes
• 1h - 1 hour
• 1d - 1 day
Note:

For alarm queries, the specified interval has no effect on the resolution
of the request. The only valid value of the resolution for an alarm

query request is 1m. For more information about the resolution parameter as used in alarm queries, see Alarm.
• Statistic: The aggregation function.
Statistic values
• COUNT - The number of observations received in the specified time period.
• MAX - The highest value observed during the specified time period.
• MEAN - The value of Sum divided by Count during the specified time period.
• MIN - The lowest value observed during the specified time period.
• P50 - The value of the 50th percentile.
• P90 - The value of the 90th percentile.
• P95 - The value of the 95th percentile.
• P99 - The value of the 99th percentile.
• P99.5 - The value of the 99.5th percentile.
• RATE - The per-interval average rate of change.
• SUM - All values added together.
• Metric dimensions: Optional filters to narrow the metric data evaluated.
Dimension fields
• Dimension Name: A qualifier specified in the metric definition. For example, the dimension
resourceId is specified in the metric definition for CpuUtilization.
Note:

Long lists of dimensions are trimmed.


• To view dimensions by name, type one or more characters in the
box. A refreshed (trimmed) list shows matching dimension names.

• To retrieve all dimensions for a given metric, use the following API operation: ListMetrics
• Dimension Value: The value you want to use for the specified dimension. For example, the resource
identifier for your instance of interest.
• + Additional dimension: Adds another name-value pair for a dimension.
• Trigger rule: The condition that must be satisfied for the alarm to be in the firing state. The condition can
specify a threshold, such as 90% for CPU Utilization, or an absence.
• Operator: The operator used in the condition threshold.
Operator values
• greater than
• greater than or equal to
• equal to
• less than
• less than or equal to
• between (inclusive of specified values)
• outside (inclusive of specified values)
• absent
• Value: The value to use for the condition threshold.
• Trigger Delay Minutes: The number of minutes that the condition must be maintained before the alarm is
in firing state.

Advanced Mode
Click Advanced Mode or Switch to Advanced Mode to view the alarm query as a Monitoring Query Language
(MQL) expression. Edit your query using MQL syntax to aggregate results by group or for additional parameter
values. See Monitoring Query Language (MQL) Reference on page 2767.
• Alarm Name:
User-friendly name for the new alarm. This name is sent as the title for notifications related to this alarm.
Avoid entering confidential information.
Rendering of the title by protocol

Protocol Rendering of the title


Email Subject line of the email message.
HTTPS (Custom URL) Not rendered.
PagerDuty Title field of the published message.
Slack Not rendered.
SMS Not rendered.
• Alarm Severity: The perceived type of response required when the alarm is in the firing state.
• Alarm Body: The human-readable content of the notification delivered. Oracle recommends providing
guidance to operators for resolving the alarm condition. Consider adding links to standard runbook practices.
Example: "High CPU usage alert. Follow runbook instructions for resolution."
• Tags (optional): If you have permissions to create a resource, then you also have permissions to apply free-
form tags to that resource. To apply a defined tag, you must have permissions to use the tag namespace. For

more information about tagging, see Resource Tags on page 213. If you are not sure whether to apply tags,
then skip this option (you can apply tags later) or ask your administrator.
• Metric description, dimensions, and trigger rule: The metric to evaluate for the alarm condition, including
dimensions and the trigger rule.
• Compartment: The compartment containing the resources that emit the metrics evaluated by the alarm.
The selected compartment is also the storage location of the alarm. By default, the first accessible
compartment is selected.
• Metric Namespace: The service or application emitting metrics for the resources that you want to monitor.
• Resource Group (optional): The group that the metric belongs to. A resource group is a custom string
provided with a custom metric. Not applicable to service metrics.
• Query Code Editor box: The alarm query as a Monitoring Query Language (MQL) expression.
Note:

Valid alarm intervals depend on the frequency at which the metric is


emitted. For example, a metric emitted every five minutes requires
a 5-minute alarm interval or higher. Most metrics are emitted every
minute, which means most metrics support any alarm interval. To
determine valid alarm intervals for a given metric, check the relevant
service's metric reference.
Example alarm query:

CpuUtilization[1m]
{availabilityDomain=AD1}.groupBy(poolId).percentile(0.9) > 85

For query syntax and examples, see Working with Metric Queries on page 2726.
• Trigger Delay Minutes: The number of minutes that the condition must be maintained before the alarm is
in firing state.
The chart below the Define alarm section dynamically displays the last six hours of emitted metrics according to
currently selected fields for the query. Very small or large values are indicated by International System of Units
(SI units), such as M for mega (10 to the sixth power).

4. Set up notifications: Under Notifications, fill in the fields.


• Destinations
• Destination Service: The provider of the destination to use for notifications.
Available options:
• Notifications Service.
• Compartment: The compartment storing the topic to be used for notifications. Can be a different
compartment from the alarm and metric. By default, the first accessible compartment is selected.
• Topic: The topic to use for notifications. Each topic supports a subscription protocol, such as PagerDuty.
• Create a topic: Sets up a topic and subscription protocol in the selected compartment, using the specified
destination service.
• Topic Name: User-friendly name for the new topic. Example: "Operations Team " for a topic used to
notify operations staff of firing alarms. Avoid entering confidential information.
• Topic Description: Description of the new topic.
• Subscription Protocol: Medium of communication to use for the new topic. Configure your
subscription for the protocol you want:
Email subscription
Sends an email message when you publish a message to the subscription's parent topic.
Message contents and appearance vary by message type. See alarm messages, event messages, and
service connector messages.
Some message types allow friendly formatting.
• Subscription Protocol: Select Email.
• Subscription Email: Type an email address.
HTTPS (Custom URL) subscription
Sends specified information when you publish a message to the subscription's parent topic.
Endpoint format (URL using HTTPS protocol):

https://<anyvalidURL>

Basic access authentication is supported, allowing you to specify a username and password in the
URL, as in https://<user>:<password>@<domain> or https://<user>@<domain>. The

username and password are encrypted over the SSL connection established when using HTTPS. For
more information about Basic Access Authentication, see RFC-2617.
Query parameters are not allowed in URLs.
• Subscription Protocol: Select HTTPS (Custom URL).
• Subscription URL: Type (or copy and paste) the URL you want to use as the endpoint.
PagerDuty subscription
Creates a PagerDuty incident by default when you publish a message to the subscription's parent topic.
Endpoint format (URL):

https://events.pagerduty.com/integration/<integrationkey>/enqueue

Query parameters are not allowed in URLs.


To create an endpoint for a PagerDuty subscription (set up and retrieve an integration key), see the
PagerDuty documentation.
• Subscription Protocol: Select PagerDuty.
• Subscription URL: Type (or copy and paste) the integration key portion of the URL for your
PagerDuty subscription. (The other portions of the URL are hard-coded.)
Slack subscription
Sends a message to the specified Slack channel by default when you publish a message to the
subscription's parent topic.
Message contents and appearance vary by message type. See alarm messages, event messages, and
service connector messages.
Endpoint format (URL):

https://hooks.slack.com/services/<webhook-token>

The <webhook-token> portion of the URL contains two slashes (/).


Query parameters are not allowed in URLs.
To create an endpoint for a Slack subscription (using a webhook for your Slack channel), see the Slack
documentation.
• Subscription Protocol: Select Slack.
• Subscription URL: Type (or copy and paste) the Slack endpoint, including your webhook token.
SMS subscription
Sends a text message using Short Message Service (SMS) to the specified phone number when you
publish a message to the subscription's parent topic. Supported endpoint formats: E.164 format.
Note:

SMS subscriptions are enabled only for messages sent by the following Oracle Cloud Infrastructure services: Monitoring, Service Connector Hub. SMS messages sent by unsupported services are dropped. Troubleshoot dropped messages.
Message contents and appearance vary by message type. See alarm messages, event messages, and
service connector messages.
Available Countries and Regions
You can use Notifications to send SMS messages to the following countries and regions:

Country or region ISO code


Australia AU
Brazil BR
Canada CA
Chile CL
China CN
Costa Rica CR
Croatia HR
Czechia CZ
France FR
Germany DE
Hungary HU
India IN
Ireland IE
Israel IL
Japan JP
Lithuania LT
Mexico MX
Netherlands NL
New Zealand NZ
Norway NO
Philippines PH
Poland PL
Portugal PT
Romania RO
Saudi Arabia SA
Singapore SG
South Africa ZA
South Korea KR
Spain ES

Sweden SE
Switzerland CH
Ukraine UA
United Arab Emirates AE
United Kingdom GB
United States US
• + Additional destination service: Adds another destination service and topic to use for notifications.
Note:

Each alarm is limited to one destination per supported destination


service.
• Repeat Notification?: While the alarm is in the firing state, resends notifications at the specified interval.
• Notification Interval: The period of time to wait before resending the notification.
• Suppress Notifications: Sets up a suppression time window during which to suspend evaluations and
notifications. Useful for avoiding alarm notifications during system maintenance periods.
• Suppression Description
• Start Time
• End Time
5. If you want to disable the new alarm, clear Enable This Alarm?.
6. Click Save alarm.
The new alarm is listed on the Alarm Definitions page.

Using the API


For information about using the API and signing requests, see REST APIs on page 4409 and Security Credentials
on page 181. For information about SDKs, see Software Development Kits and Command Line Interface on page
4262.
To create a query, use the SummarizeMetricsData operation.
To create an alarm, use the CreateAlarm operation.

Resource Tags
When you have many resources (for example, instances, VCNs, load balancers, and block volumes) across multiple
compartments in your tenancy, it can become difficult to track resources used for specific purposes, or to aggregate
them, report on them, or take bulk actions on them. Tagging allows you to define keys and values and associate them
with resources. You can then use the tags to help you organize and list resources based on your business needs.
There are two types of tags:
Defined tags are set up in your tenancy by an administrator. Only users granted permission to work with the defined
tags can apply them to resources.
Free-form tags can be applied by any user with permissions on the resource.
For more detailed information about tags and their features, see Tagging Overview on page 3942.
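For example, in API requests and responses free-form tags travel as a simple key-value map on the resource (the freeformTags field); the tag key and value below are purely illustrative:

"freeformTags": { "Project": "Alpha", "CostCenter": "42" }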
Tip:

Watch a video to introduce you to the concepts and features of tagging:


Introduction to Tagging.

Caution:

Avoid entering confidential information when assigning descriptions,


tags, or friendly names to your cloud resources through the Oracle Cloud
Infrastructure Console, API, or CLI.

Working with Resource Tags

To bulk add defined tags


How to add multiple defined tags to existing resources. To apply defined tags, you must have permission to use the
namespace.
1. Open the navigation menu. Under Governance and Administration, go to Governance and click Tenancy
Explorer.
2. Select the resources to which you want to add tags. Optionally, use the Filter by resource type drop-down menu
to narrow the list of resources.
3. In the Actions menu, click Manage Tags.
The Manage Tags page opens. The first table displays the tags currently applied to the selected resources. The
second table displays the selected resources.
4. Under the list of existing tags, click + Add New.
a. Select the Tag Namespace.
b. Select the Tag Key.
c. For Value, enter a value.
5. To apply another tag, repeat the previous step. To remove a row, click the Remove (x) button.
6. When you have added all the desired tags, click Next.
A confirmation page opens that lists the actions to take and the resources that the actions apply to.
7. Click Submit.
The Work Request page launches to show you the status of the work request to add tags to the resources.

To add defined tags to one existing resource


To apply a defined tag, you must have permission to use the namespace.
1. Open the Console, go to the details page of the resource you want to tag.
For example, to tag a compute instance: Open the navigation menu. Under Core Infrastructure, go to Compute
and click Instances. A list of the instances in your current compartment is displayed. Find the instance that you
want to tag, and click its name to view its details page.
2. Click Apply Tags. Depending on the resource, this option might appear in the More Actions menu.
3. In the Apply Tags to the Resource dialog:
a. Select the Tag Namespace.
b. Select the Tag Key.
c. In Value, either enter a value or select one from the list.
d. To apply another tag, click + Additional Tag.
e. When finished adding tags, click Apply Tag(s).

To add a free-form tag to an existing resource


1. Open the Console, go to the details page of the resource you want to tag.
For example, to tag a compute instance: Open the navigation menu. Under Core Infrastructure, go to Compute
and click Instances. A list of the instances in your current compartment is displayed. Find the instance that you
want to tag, and click its name to view its details page.
2. Click Apply Tags. Depending on the resource, this option might appear in the More Actions menu.

3. In the Apply Tags to the Resource dialog:


a. Select None (apply a free-form tag).
b. Enter the Tag Key.
c. Enter a Value.
d. To apply another tag, click + Additional Tag.
e. When finished adding tags, click Apply Tag(s).

To add a tag during resource creation


You can apply tags during resource creation. The location of the Apply Tag(s) option in the dialog varies by
resource. The general steps are:
1. In the resource Create dialog, click Apply Tags.
On some resources, you have to click Show Advanced Options to apply a tag.
2. In the Apply Tags to the Resource dialog:
a. Select the Tag Namespace, or select None to apply a free-form tag.
b. Select or enter the Tag Key.
c. In Value, either enter a value or select one from the list.
d. To apply another tag, click + Additional Tag.
e. Click Apply Tag(s).

To filter a list of resources by a tag


Open the Console, click the service name and then click the resource you want to view. The left side of the page
shows all the filters currently applied to the list.
For example, to view compute instances: Click Compute and then click Instances, to see the list of instances in your
current compartment.

To filter a list of resources by a defined tag


1. Next to Tag Filters, click add.
2. In the Apply a Tag Filter dialog, enter the following:
a. Namespace: Select the tag namespace.
b. Key: Select a specific key.
c. Value: Select from the following:
• Match Any Value - returns all resources tagged with the selected namespace and key, regardless of the tag
value.
• Match Any of the Following - returns resources with the tag value you enter in the text box. Enter a
single value in the text box. To specify multiple values for the same namespace and key, click + to display
another text box. Enter one value per text box.
d. Click Apply Filter.

To filter a list of resources by a free-form tag


1. Next to Tag Filters, click add.

2. In the Apply a Tag Filter dialog, enter the following:


a. Key: Enter the tag key.
b. Value: Select from the following:
• Match Any Value - returns all resources tagged with the selected free-form tag key, regardless of the tag
value.
• Match Any of the Following - returns resources with the tag value you enter in the text box. Enter a single
value in the text box. To specify multiple values for the same key, click + to display another text box. Enter
one value per text box.
c. Click Apply Filter.

To bulk update defined tags


How to update defined tags applied to one or more resources.
1. Open the navigation menu. Under Governance and Administration, go to Governance and click Tenancy
Explorer.
2. Select the resources whose tags you want to update. Optionally, use the Filter by resource type drop-down menu
to narrow the list of resources.
3. In the Actions menu, click Manage Tags.
The Manage Tags page opens. The first table displays the tags currently applied to the selected resources. The
second table displays the selected resources.
4. In the list of tags, find the tag that you want to update and enter a new value. To revert the change, click the Undo
button.
5. For Action, select Apply tag to all selected resources.
6. If desired, update more tag values. Then, click Next.
A confirmation page opens that lists the actions to take and the resources that the actions apply to.
7. Click Submit.
The Work Request page launches to show you the status of the work request to update the tags on the resources.

To update a tag applied to a single resource


1. Open the Console, click the service name, and then click the resource you want to view.
For example, to view compute instances: Open the navigation menu. Under Core Infrastructure, go to Compute
and click Instances. A list of the instances in your current compartment is displayed. Find the instance that you
want to update, and click its name to view its details page.
2. Click Tags.
The list of tags applied to the resource is displayed.
3. Find the tag you want to update and click the pencil icon next to it.
4. Enter or select a new value.
5. Click Save.

To bulk remove defined tags


How to remove multiple defined tags from resources.
1. Open the navigation menu. Under Governance and Administration, go to Governance and click Tenancy
Explorer.
2. Select the resources from which you want to remove tags. Optionally, use the Filter by resource type drop-down
menu to narrow the list of resources.
3. In the Actions menu, click Manage Tags.
The Manage Tags page opens. The first table displays the tags currently applied to the selected resources. The
second table displays the selected resources.

4. In the list of tags, find the tag that you want to remove. For Action, select Remove tag from all selected
resources.
5. To remove another tag, repeat the previous step.
6. Click Next.
A confirmation page opens that lists the actions to take and the resources that the actions apply to.
7. Click Submit.
The Work Request page launches to show you the status of the work request to remove tags from the resources.

To remove a tag from a single resource


1. Open the Console, click the service name and then click the resource you want to view.
For example, to view a compute instance: Open the navigation menu. Under Core Infrastructure, go to
Compute and click Instances. A list of the instances in your current compartment is displayed. Find the instance
that you want to remove the tag from, and click its name to view its details page.
2. Click Tags.
The list of tags applied to the resource is displayed.
3. Find the tag you want to remove and click the pencil icon next to it.
4. Click Remove Tag.

Using the API


For information about using the API and signing requests, see REST APIs on page 4409 and Security Credentials
on page 181. For information about SDKs, see Software Development Kits and Command Line Interface on page
4262.
• To apply a tag to an individual resource using the API, use the appropriate resource's create or update
operation.
• BulkEditTags - adds, updates, and removes multiple tag key definitions on the selected resources
• ListBulkEditTagsResourceTypes - lists the resource types that support bulk tag editing

Service Limits
This topic describes the service limits for Oracle Cloud Infrastructure and the process for requesting a service limit
increase.

About Service Limits and Usage


When you sign up for Oracle Cloud Infrastructure, a set of service limits is configured for your tenancy. The
service limit is the quota or allowance set on a resource. For example, your tenancy is allowed a maximum number
of compute instances per availability domain. These limits are generally established with your Oracle sales
representative when you purchase Oracle Cloud Infrastructure. If you did not establish limits with your Oracle sales
representative, or, if you signed up through the Oracle Store, default or trial limits are set for your tenancy. These
limits may be increased for you automatically based on your Oracle Cloud Infrastructure resource usage and account
standing. You can also request a service limit increase.

Compartment Quotas
Compartment quotas are similar to service limits; the biggest difference is that service limits are set by Oracle, and
compartment quotas are set by administrators, using policies that allow them to allocate resources with a high level of
flexibility. Compartment quotas are set using policy statements written in a simple declarative language that is similar
to the IAM policy language.
To learn more, see Compartment Quotas on page 246.

Viewing Your Service Limits, Quotas, and Usage


You can view your tenancy's limits, quotas, and usage in the Console. Be aware that:
• The Console might not yet display limits and usage information for all of the Oracle Cloud Infrastructure services
or resources.
• The usage level listed for a given resource type could be greater than the limit if the limit was reduced after the
resources were created.
• If all the resource limits are listed as 0, this means your account has been suspended. For help, contact Oracle
Support.
If you don't yet have a tenancy or a user login for the Console, or if you don't find a particular limit listed in the
Console, see Limits by Service on page 220 for the default tenancy limits.

Service Limits API Policy


For the resource availability API (usage) the policy can be at the tenant or compartment level:

Allow group LimitsAndUsageViewers to read resource-availability in tenancy


Allow group LimitsAndUsageViewers to read resource-availability in compartment A

For limit definitions, services, and values APIs (only at the tenant level):

Allow group LimitsAndUsageViewers to inspect resource-availability in tenancy

For limit values APIs (does not include definitions or services), the following policy is also supported:

Allow group LimitsAndUsageViewers to inspect limits in tenancy

To view your tenancy's limits and usage (by region)


Note:

Required Permission
If you're in the Administrators group, you have permission to
view the limits and usage. If you're not, here's an example IAM
policy that grants the required permission to users in a group called
LimitsAndUsageViewers:

Allow group LimitsAndUsageViewers to inspect resource-availability in tenancy

READ resource-availability is required to obtain the resource availability.


There are four APIs:
• listServices
• listLimitDefinitions
• listLimitValues
• getResourceAvailability
listServices, listLimitDefinitions, and listLimitValues all require INSPECT at the tenancy level, while getResourceAvailability requires READ at the compartment level to be able to read the data.
Note:

The Console may not display limits and usage information yet for all Oracle
Cloud Infrastructure services or resources.
1. Open the Console. Open the navigation menu. Under Governance and Administration, go to Governance and
click Limits, Quotas and Usage.
Your resource limits, quotas, and usage for the specific region are displayed, broken out by service. You can use the
filter drop-down lists at the top of the list to filter by service, scope, resource, and compartment.

When You Reach a Service Limit


When you reach the service limit for a resource, you receive an error when you try to create a new resource of that
type. You are then prompted to submit a request to increase your limit. You cannot create a new resource until you
are granted an increase to your service limit or you terminate an existing resource. Note that service limits apply to
a specific scope, and when the service limit in one scope is reached you may still have resources available to you in
other scopes (for example, other availability domains).

Requesting a Service Limit Increase


Note:

Government Cloud customers can't use the procedure here to request a


service limit increase. Instead, see Requesting a Service Limit Increase for
Government Cloud Tenancies on page 156.
You can submit a request to increase your service limits from within the Console. If you try to create a resource for
which the limit has been met, you'll be prompted to submit a limit increase request. Additionally, you can launch the request from the service limits page or at any time by clicking the link under the Help menu.
This procedure applies to requests for service limit increases. For details about the subscribed region limit and how to
request an increase to that limit, see Subscribed Region Limits on page 185.

To request a service limit increase


1. Open the Help menu, go to Support and click Request service limit increase.
2. Enter the following:
• Primary Contact Details: Enter the name and email address of the person making the request. Enter one
email address only. A confirmation will be sent to this address.
• Service Category: Select the appropriate category for your request.
• Resource: Select the appropriate resource.
Depending on your selection for resource, additional fields might display for more specific information.
• Reason for Request: Enter a reason for your request. If your request is urgent or unusual, please provide
details here.
3. Click Create Request.
After you submit the request, it is processed. A response can take anywhere from a few minutes to a few days. If your
request is granted, a confirmation email is sent to the address provided in the primary contact details.
If we need additional information about your request, a follow-up email is sent to the address provided in the primary
contact details.

Limits by Service
The following tables list the default limits for each service. Note the scope that each limit applies to (for example, per
availability domain, per region, per tenant, etc.).
Note:

Some services have additional limits. For more information, see the overview
of each service.

Analytics Cloud Limits


For Analytics Cloud limits, see Service Limits.

API Gateway Limits


Limits apply to each tenancy.

Resource | Monthly or Annual Universal Credits | Pay-as-You-Go or Promo
API gateways per region | 10 | 5
API resources per region | 100 | 100
API description length | 1 MB | 1 MB
Certificates per region | 10 | 10

Application Migration Limits


For Application Migration limits, see Service Limits.

Application Performance Monitoring Limits


Application Performance Monitoring limits are regional.

Resource | Monthly or Annual Universal Credits | Pay-as-You-Go or Promo
Maximum number of Always Free APM domains | 1 | 1
Maximum number of paid APM domains | 3 (contact us to request an exception to increase the paid APM domain limit) | 3 (contact us to request an exception to increase the paid APM domain limit)

Big Data Limits


For Big Data limits, see Service Limits.

Block Volume Limits


Volume limits apply to each availability domain. Volume backup limits apply to each region.

Resource | Monthly or Annual Universal Credits | Pay-as-You-Go or Promo
Block Volumes aggregated size | 100 TB | 30 TB
Backups | 100,000 | 100,000

Blockchain Platform Limits


For Blockchain Platform limits, see Service Limits.

Cloud Guard Limits


Cloud Guard limits are regional.

Resource | Monthly or Annual Universal Credits | Pay-as-You-Go or Promo
Detector Recipe Count | 25 | Contact Us
Responder Recipe Count | 15 | Contact Us
Target Count | 50 | Contact Us
Managed List Count | 20 | Contact Us

Cloud Shell Limits


Limits apply to each region.

Resource | Monthly or Annual Universal Credits | Pay-as-You-Go or Promo
Active User Count | 75 | 50
Usage Hours Count | 400 | 240

Compute Limits
Compute Instances
Limits apply to each availability domain.

Resource | Monthly or Annual Universal Credits | Pay-as-You-Go or Promo
Total OCPUs (cores) for shapes in the VM.Standard2 and BM.Standard2 series | 3,460 | 6
Total OCPUs (cores) for shapes in the VM.Standard.E2.1.Micro series | 2 | 2
Total OCPUs (cores) for shapes in the VM.Standard.E2 and BM.Standard.E2 series | 730 | 6
Total OCPUs (cores) for shapes in the VM.Standard.E3 and BM.Standard.E3 series | 350 (commercial realm and US Government Cloud); 3,460 (United Kingdom Government Cloud) | 6 (commercial realm and US Government Cloud); 24 (United Kingdom Government Cloud)
Total memory for shapes in the VM.Standard.E3 and BM.Standard.E3 series | 5,600 GB (commercial realm and US Government Cloud); 55,360 GB (United Kingdom Government Cloud) | 96 GB (commercial realm and US Government Cloud); 384 GB (United Kingdom Government Cloud)
Total OCPUs (cores) for shapes in the VM.Standard.E4 and BM.Standard.E4 series | 6,000 (US East (Ashburn), US West (Phoenix)); 1,500 (Australia Southeast (Melbourne), Brazil East (Sao Paulo), Canada Southeast (Montreal), Canada Southeast (Toronto), India West (Mumbai), Switzerland North (Zurich)) | 6 (Australia Southeast (Melbourne), Brazil East (Sao Paulo), Canada Southeast (Montreal), Canada Southeast (Toronto), India West (Mumbai), Switzerland North (Zurich), US East (Ashburn), US West (Phoenix))
Total memory for shapes in the VM.Standard.E4 and BM.Standard.E4 series | 96,000 GB (US East (Ashburn), US West (Phoenix)); 24,000 GB (Australia Southeast (Melbourne), Brazil East (Sao Paulo), Canada Southeast (Montreal), Canada Southeast (Toronto), India West (Mumbai), Switzerland North (Zurich)) | 96 GB (Australia Southeast (Melbourne), Brazil East (Sao Paulo), Canada Southeast (Montreal), Canada Southeast (Toronto), India West (Mumbai), Switzerland North (Zurich), US East (Ashburn), US West (Phoenix))
Total OCPUs (cores) for shapes in the VM.DenseIO2 and BM.DenseIO2 series | 344 | Contact Us
Total GPUs for shapes in the VM.GPU3 and BM.GPU3 series | Contact Us | Contact Us
Total GPUs for shapes in the BM.GPU4 series | Contact Us | Contact Us
Total OCPUs (cores) for shapes in the BM.HPC2 series | 180 | Contact Us
Total OCPUs (cores) for DVH.Standard2.52 shapes | 52 | Contact Us

Note:

Compute limits used to apply at the individual shape level. These limits have
been deprecated. Although you can continue to use the deprecated shape-
based limits, the limits are converted to the equivalent OCPU-based (core-
based) values.
Other Compute Resources
Limits apply to different scopes, depending on the resource.

Resource | Scope | Monthly or Annual Universal Credits | Pay-as-You-Go or Promo
Autoscaling configurations | Region | 200 | 200
Custom images | Region | 100 | 25
Cluster networks | Tenancy | 15 | Contact Us
Instance configurations | Region | 200 | 200
Instance pools | Region | 50 | 50
Instances per instance pool | Region | 500 | 500

Container Engine for Kubernetes Limits


Container Engine for Kubernetes limits are regional.

Resource | Monthly or Annual Universal Credits | Pay-as-You-Go or Promo
Clusters | 3 clusters per OCI region | 1 cluster per OCI region
Nodes | 1000 nodes per cluster | 1000 nodes per cluster
Pods | 110 pods per node | 110 pods per node

Content and Experience Limits


For Content and Experience limits, see Service Limits.

Data Catalog Limits


Resource | Scope | Description
Data Catalog | Regional | 2 data catalog instances per region

Data Flow Limits


Limits apply to each tenancy.

Resource | Monthly or Annual Universal Credits | Pay-as-You-Go or Promo
VM.Standard2.1 | 24 | 3
VM.Standard2.2 | 24 | 3
VM.Standard2.4 | 6 | 0
VM.Standard2.8 | 3 | 0
VM.Standard2.16 | 0 # Contact Us | 0
VM.Standard2.24 | 0 # Contact Us | 0
VM.Total | 30 | 30

Data Integration Limits


Resource | Scope | Limit
Workspace | Regional | 5 workspaces per region

Data Safe Limits


For Data Safe limits, see Service Limits.
Note:

To register an Oracle Database with Data Safe, you must be using a paid
account.

Data Science Limits


Resource | Monthly or Annual Universal Credits | Pay-as-You-Go or Promo
Block Volumes | 150 | 30
Block Volume Size in GB | 102400 | 10240
GPUs for VM.GPU2 | 3 | Contact Us
GPUs for VM.GPU3 | Contact Us | Contact Us
Models | 10000 | 1000
Notebook Sessions | 10000 | 1000
Projects | 10000 | 1000
VM.Standard2 Core | 800 | 60
VM.Standard E2 Core | 200 | 40
Model Deployment Count | 30 | 30
Model Deployment Bandwidth | 1000 | 1000

Data Transfer Limits


Data Transfer limits are regional.
Disk-Based Data Import

Resource | Monthly or Annual Universal Credits | Pay-As-You-Go
Transfer package | 4 | 4

File a service request at My Oracle Support to increase the service limits for Disk-Based Data Import. See Requesting
a Service Limit Increase on page 219 for details.
Appliance-Based Data Import and Data Export

Resource | Monthly or Annual Universal Credits | Pay-As-You-Go
Transfer appliances | 2, Request Entitlement | Contact Your CSM

To place an order for the Oracle-provided Data Transfer Appliance used for appliance-based data transfer and data
export jobs, request the required entitlement for your tenancy through the Console or CLI. See Requesting Appliance
Entitlement on page 1024 for instructions.
The buyer of your tenancy will be required to e-sign a Terms and Conditions document. After Oracle receives the
signed document you will have the entitlement to request and use the Data Transfer Appliance. Appliance-Based
Data Import and Data Export each come with a service limit of 2. File a service request if you need to increase that
number.

Database Limits
Database limits are per availability domain.
See Data Safe Limits on page 224 for information on Data Safe. See MySQL Database Limits on page 229 for
information on MySQL Database.

Resources | Monthly Flex | Pay-as-You-Go or Promo
Autonomous Database on shared Exadata infrastructure - Total OCPUs | 128 cores | 8 cores
Autonomous Database on dedicated Exadata infrastructure - Total OCPUs (total OCPUs determined by the Exadata hardware shape: quarter rack, half rack, or full rack) | Contact Us | Contact Us
Autonomous Database on shared Exadata infrastructure - Block Storage | 128 TB | 2 TB
Always Free Autonomous Database | 2 instances | 2 instances
Always Free Autonomous Database - Total OCPUs | 1 core | 1 core
Always Free Autonomous Database - Total Block Storage | 20 GB | 20 GB
VM.Standard1 - Total OCPUs | 10 cores | 2 cores
VM.Standard2 - Total OCPUs | 100 cores (US West (Phoenix), US East (Ashburn)); 50 cores (Germany Central (Frankfurt), UK South (London)) | 2 cores
Total VM DB Block Storage (see note) | 10 TB | 2 TB
BM.DenseIO1.36 (see availability note) | 1 instance | 1 instance
BM.DenseIO2.52 | 1 instance | 1 instance
Exadata.Base.48 | Contact Us | Contact Us
Exadata.Quarter1.84 - X6 | Contact Us | Not available
Exadata.Half1.168 - X6 | Contact Us | Not available
Exadata.Full1.336 - X6 | Contact Us | Not available
Exadata.Quarter2.92 - X7 | Contact Us | Contact Us
Exadata.Half2.184 - X7 | Contact Us | Contact Us
Exadata.Full2.368 - X7 | Contact Us | Contact Us
Exadata.Quarter3.100 - X8 | Contact Us | Contact Us
Exadata.Half3.200 - X8 | Contact Us | Contact Us
Exadata.Full3.300 - X8 | Contact Us | Contact Us

Note:

• Autonomous Exadata Infrastructure: Exadata X8


(Exadata.Quarter3.100, Exadata.Half3.200, and Exadata.Full3.300)
and Exadata X7 limits (Exadata.Quarter2.92, Exadata.Half2.184, and
Exadata.Full2.368) include the corresponding Autonomous Exadata
Infrastructure shapes.
• Total VM DB Block Storage: Includes block storage for all
VM.Standard1 and VM.Standard2 virtual machine databases.
• BM.DenseIO1.36: This DB system shape is available only to monthly
universal credit customers with tenancies existing on or before November
9th, 2018, in the US West (Phoenix), US East (Ashburn), and Germany
Central (Frankfurt) regions.
• Always Free Autonomous Database: Each of the two Always Free
Autonomous Databases available in your tenancy can be provisioned with
your choice of Autonomous Transaction Processing or Autonomous Data
Warehouse workload types.

Digital Assistant Limits


For Digital Assistant limits, see Service Limits.

DNS Limits
DNS limits are global.

Resources | Monthly or Annual Universal Credits | Pay-as-You-Go or Promo
Zones | 1,000 zones | 1,000 zones
Records | 25,000 per zone | 25,000 per zone
Zone File Size | 1 MB | 1 MB

Email Delivery Limits


Default limits apply to each tenant or availability domain, as specified below. Approved limit increases apply to a
specific region.

Resource | Monthly or Annual Universal Credits | Pay-As-You-Go or Promo
Email volume | 50,000 unique recipients among all emails sent per day | 200 unique recipients among all emails sent per day
Maximum approved senders | 10,000 | 2,000
SMTP credentials | 2 per user | 2 per user
Sending rate | 18,000 emails per minute | 10 emails per minute

Note:

The email volume limit applies to unique recipients among all emails sent.
For example, a single email sent to 100 recipients would count the same as
100 individual emails each sent to a single recipient.
Note:

Email Delivery supports messages up to 2 MB, inclusive of message headers, body, and attachments. Customers can request an increase to this limit based on their requirements; the maximum size that can be requested is 60 MB.

Events Limits
Events limits are regional.

Resource | Monthly or Annual Universal Credits | Pay-as-You-Go or Promo
Rules | 50 | 50

File Storage Limits


Limits apply to each tenant or availability domain, as specified.

Resource | Pre-Paid | Pay-As-You-Go
File systems | 1000 per tenant per availability domain | 1000 per tenant per availability domain
Mount targets | 2 per tenant per availability domain | 2 per tenant per availability domain
Maximum file system size | 8 exabytes | 8 exabytes

Functions Limits
Resource | Scope | Monthly or Annual Universal Credits | Pay-as-You-Go or Promo
Applications | Region | 20 | 10
Functions | Region | 500 | 50
Total memory for concurrent function execution | Availability domain | 60 GB | 60 GB


GoldenGate Limits
GoldenGate limits are regional.

Resource | Monthly or Annual Universal Credits | Pay-as-You-Go or Promo
Deployments | 20 deployments per region | 20 deployments per region
Registered Databases | 100 registered databases per region | 100 registered databases per region

Health Checks Limits


Health Checks limits are global.

Resource | Monthly or Annual Universal Credits | Pay-as-You-Go or Promo
Endpoint tests | 1000 per account | 1000 per account

IAM Limits
IAM limits are global.

Resource | Monthly or Annual Universal Credits | Pay-as-You-Go, Promo, & Always Free
Users in a tenancy | 2000 | 2000
Groups in a tenancy | 250 | 250
Dynamic groups in a tenancy | 50 | 50
Network source groups in a tenancy | 10 | 10
Compartments in a tenancy | 50 | 50
Policies in a tenancy | 100 | 100
Statements in a policy | 50 | 50
Users per group in a tenancy | 2000 | 2000
Groups per user in a tenancy | 250 | 250
Identity providers in a tenancy | 3 | 3
Group mappings for an identity provider | 250 | 250

Integration Limits
For Integration limits, see Service Limits.

Load Balancing Limits


Load Balancing limits are regional.


Resource | Monthly or Annual Universal Credits | Pay-as-You-Go or Promo
LB-Capacity-10Mbps | 1 Load Balancer | 1 Load Balancer
LB-Capacity-100Mbps | 3 Load Balancers | 1 Load Balancer
LB-Capacity-400Mbps | 3 Load Balancers | 1 Load Balancer
LB-Capacity-8000Mbps | Contact Us | Contact Us

Logging Analytics Limits


Logging Analytics limits are regional.

Resource | Monthly or Annual Universal Credits | Pay-as-You-Go or Promo
loganalytics-log-group | 50 | 50
loganalytics-entity | 10,000 | 10,000

Logging Limits
Logging limits are regional.

Resource | Scope | Monthly or Annual Universal Credits | Pay-as-You-Go or Promo
LogGroups | Regional | 100 | 100
LogObjects | Regional | 500 | 500
UnifiedAgentConfigurations | Regional | 100 | 100
MaximumQueriesPerMinute | Regional | 100 | 100
MaximumConcurrentQueries | Regional | 5 | 5

Management Agent Limits


Resource | Scope | Limit
Management agents | Tenant | 5000
Management agent install keys | Tenant | 300

MySQL Database Limits


MySQL Database limits are per availability domain unless explicitly specified.

Resource | Monthly Flex | Pay-as-You-Go or Promo
MySQL Database Block Storage | Contact Us | Contact Us
MySQL Database Manual Backup Count (regional) | Contact Us | Contact Us
MySQL Database VM Standard E2.1 instances | Contact Us | Contact Us
MySQL Database VM Standard E2.2 instances | Contact Us | Contact Us
MySQL Database VM Standard E2.4 instances | Contact Us | Contact Us
MySQL Database BM Standard E2.64 instances | Contact Us | Contact Us
MySQL Database VM Standard E2.8 instances | Contact Us | Contact Us
MySQL Database VM Standard E3.1 instances | Contact Us | Contact Us
MySQL Database VM Standard E3.16 instances | Contact Us | Contact Us
MySQL Database VM Standard E3.2 instances | Contact Us | Contact Us
MySQL Database VM Standard E3.24 instances | Contact Us | Contact Us
MySQL Database VM Standard E3.32 instances | Contact Us | Contact Us
MySQL Database VM Standard E3.36 instances | Contact Us | Contact Us
MySQL Database VM Standard E3.4 instances | Contact Us | Contact Us
MySQL Database VM Standard E3.64 instances | Contact Us | Contact Us
MySQL Database VM Standard E3.8 instances | Contact Us | Contact Us

Monitoring Limits
Monitoring limits are regional.

Resource | Monthly or Annual Universal Credits | Pay-as-You-Go or Promo
Alarms | 50 | 50
Metrics (posted by services) | Unlimited | Unlimited

Networking Limits
Networking service limits apply to different scopes, depending on the resource.
BYOIP Limits

Resource | Scope | Monthly or Annual Universal Credits | Pay-as-You-Go or Promo
BYOIP | Region | 10 | 0


Public IP Pool Limits

Resource | Scope | Monthly or Annual Universal Credits | Pay-as-You-Go or Promo
Public IP pool | Region | 10 | 0

VCN and Subnet Limits

Resource | Scope | Monthly or Annual Universal Credits | Pay-as-You-Go or Promo
VCN | Region | 50 | 10
Subnets | VCN | 300 | 300

Gateway Limits

Resource | Scope | Monthly or Annual Universal Credits | Pay-as-You-Go or Promo
Dynamic routing gateways (DRGs) | Region | 5 | 5
Internet gateways | VCN | 1* | 1*
Local peering gateways (LPGs) | VCN | 10 | 10
NAT gateways | VCN | 1 | 1
Service gateways | VCN | 1 | 1
* Limit for this resource cannot be increased

IP Address Limits

Resource | Scope | Monthly or Annual Universal Credits | Pay-as-You-Go or Promo
Reserved public IPs | Region | 50 | 50
Ephemeral public IPs | Instance | 2 per VM instance; 16 per bare metal instance | 2 per VM instance; 16 per bare metal instance

DHCP Option Limits

Resource | Scope | Monthly or Annual Universal Credits | Pay-as-You-Go or Promo
DHCP options | VCN | 300 | 300

Route Table Limits

Resource | Scope | Monthly or Annual Universal Credits | Pay-as-You-Go or Promo
Route tables | VCN | 300 | 300
Route rules | Route table | 100* | 100*
* Limit for this resource cannot be increased

Network Security Group Limits

Resource | Scope | Monthly or Annual Universal Credits | Pay-as-You-Go or Promo
Network security groups | VCN | 1000 | 1000
VNICs | Network security group | Both pricing models: a given network security group can have as many VNICs as are in the VCN, and a given VNIC can belong to a maximum of 5 network security groups.*
Security rules | Network security group | 120 (total ingress plus egress) | 120 (total ingress plus egress)
* Limit for this resource cannot be increased

Security List Limits

Resource | Scope | Monthly or Annual Universal Credits | Pay-as-You-Go or Promo
Security lists | VCN | 300 | 300
Security lists | Subnet | 5* | 5*
Security rules | Security list | 200 ingress rules* and 200 egress rules* | 200 ingress rules* and 200 egress rules*
* Limit for this resource cannot be increased

IPSec VPN Connection Limits

Resource | Scope | Monthly or Annual Universal Credits | Pay-as-You-Go or Promo
IPSec VPN connections | Region | 4 | 4
Customer-premises equipment objects (CPEs) | Region | 10 | 10
Dynamic routing gateways (DRGs) | Region | See Gateway Limits on page 231 | See Gateway Limits on page 231


FastConnect Limits

Resource | Scope | Monthly or Annual Universal Credits | Pay-as-You-Go or Promo
Cross-connects | Region | Contact Us | Contact Us
Virtual circuits | Region | 10 | 10
Dynamic routing gateways (DRGs) | Region | See Gateway Limits on page 231 | See Gateway Limits on page 231

VLAN Limits

Resource | Scope | Monthly or Annual Universal Credits | Pay-as-You-Go or Promo
VLANs | VCN | 100 | 100

NoSQL Database Cloud Limits


For Oracle NoSQL Database Cloud limits, see Service Limits.

Notifications Limits
Notifications limits are regional.

Resource | Monthly or Annual Universal Credits | Pay-as-You-Go or Promo
Topics | 50 (Active or Creating*) per tenancy | Contact Us
Subscriptions | 10 (Active or Pending*) per topic; 100 (Pending*) per tenancy | Contact Us
* A lifecycle state. See NotificationTopic Reference and Subscription Reference.

Object Storage and Archive Storage Limits


Object Storage and Archive Storage limits are regional.

Resource | Monthly or Annual Universal Credits | Pay-as-You-Go or Promo
Buckets | 10,000 per tenancy | 10,000 per tenancy
Objects per bucket | Unlimited | Unlimited

Operations Insights Limits


Operations Insights limits are regional.

Resource | Monthly or Annual Universal Credits | Pay-as-You-Go or Promo
Database insights (ADBs) - Total OCPUs | 800 cores | 800 cores


Registry Limits
Registry limits are regional.

Resource | Monthly or Annual Universal Credits | Pay-as-You-Go or Promo
Repositories | 500 repositories per OCI region | 500 repositories per OCI region
Images | 100,000 images per repository | 100,000 images per repository
Registry Storage | 500 GB per OCI region or Contact Us | 500 GB per OCI region or Contact Us

Resource Manager Limits


Resource Manager limits are regional.

Resource (per tenant) | Monthly or Annual Universal Credits | Pay-as-You-Go or Promo | Always Free resources
Configuration source providers | 100 | 100 | 100
Jobs (concurrent) | 10 | 5 | 2
(Job duration for all pricing models: 24 hours)
Private templates | 1000 | 100 | 10
Stacks | 1000 | 100 | 10
(For all pricing models: variables per stack: 250; size per variable: 8192 bytes; zip file per stack: 11 MB)

Service Connector Hub Limits


Service Connector Hub limits are regional.

Resource | Monthly or Annual Universal Credits | Pay-as-You-Go or Promo
Service connectors | 5 | 5

Streaming Limits
Resource | Scope | Monthly or Annual Universal Credits | Pay-as-You-Go or Promo
Streams | Tenancy | Unlimited | Unlimited
Stream pools | Tenancy | Unlimited | Unlimited
Kafka connectors | Tenancy | Unlimited | Unlimited
Partitions | Tenancy | 5 or Contact Us | Contact Us
Maximum message retention period | Tenancy | 7 days | 7 days
Maximum message size | Tenancy | 1 MB | 1 MB
Maximum write rate | Partition | 1 MB per second | 1 MB per second
Maximum read rate | Partition | 2 MB per second | 2 MB per second

Traffic Management Steering Policies Limits


Traffic Management Steering Policies limits are global.

Resource | Monthly or Annual Universal Credits | Pay-as-You-Go or Promo
Policies | 100 per tenant | 100 per tenant
Attachments | 1,000 per tenant | 1,000 per tenant

Vault Limits
Vault service limits apply to different scopes, depending on the resource.

Resources | Monthly or Annual Universal Credits | Always Free
Vaults in a tenancy | 10 or Contact Us | 10 or Contact Us
Virtual private vaults in a tenancy | Contact Us | None
Keys in a vault (see note below) | 1000 or Contact Us | 100 (software-protected), 20 (hardware-protected)
Keys in a virtual private vault (see note below) | 1000 or Contact Us | None
Secrets in a tenancy | 5000 or Contact Us | 150
(Secret versions, regardless of rotation state, count against your limits. All secret versions can be in one vault or spread across the allowable number of vaults.)
Secret versions in a secret | 50 or Contact Us (you can have up to 30 secret versions in active use and 20 secret versions pending deletion) | 40 (you can have up to 20 secret versions in active use and 20 secret versions pending deletion)

Note (applies to Keys in a vault and Keys in a virtual private vault):

• Key versions can exist across a varying combination of keys or vaults.
• Key versions, whether enabled or disabled, count against your limits.
• When calculating usage for asymmetric keys, each key version increments the count by two when calculating usage against service limits, in order to account for both the public key and the private key.

VMware Solution Limits


Resource | Scope | Monthly or Annual Universal Credits | Pay-as-You-Go or Promo
SDDCs | Region | Not applicable | Not available
ESXi hosts | Region | 52 cores per host | Not available

WAF Limits
WAF limits are global.


Resource | Monthly or Annual Universal Credits | Pay-as-You-Go or Promo
Policies | 50 per tenant | 50 per tenant

Service Logs
You can enable service logs for some resources. Service logs provide diagnostic information about the resources in
your tenancy. When you enable logging on resources, you receive information about the resource in a log file. This
information allows you to analyze, optimize, and troubleshoot your resources.

Working with Service Logs


Not all resources support Logging. See Supported Services for the list of services with resources that produce logs.
To enable logs on a resource, you must have permission to update the resource and permission to create the log in the
log group. See Required Permissions for Working with Logs and Log Groups, and Enabling Logging for a Resource
on page 2619.
For more information about Logging, see Overview of the Logging Service.

Using the API


For information about using the API and signing requests, see REST APIs on page 4409 and Security Credentials
on page 181. For information about SDKs, see Software Development Kits and Command Line Interface on page
4262.
To enable logging for a resource using the API, use the appropriate resource's create or update operation.
• Logging Management API
• Logging Ingestion API
• Logging Search API

Viewing All Resources in a Compartment


This topic describes how you can use the tenancy explorer to get a cross-region view of all resources in a
compartment.

Tenancy Explorer Highlights


• The tenancy explorer lets you view all your resources in a compartment, across all regions in your tenancy.
• You can choose to view just the resources that reside in the selected compartment, or you can choose to view all
the resources in all the subcompartments as well, to get a full view of the compartment tree.
• You can take actions on resources from the tenancy explorer. You can delete or move a single resource or multiple resources at a time, which makes the tenancy explorer a convenient option when you need to perform bulk delete or move actions on multiple resources.

When using the tenancy explorer, be aware of the following:


• If you recently created a resource, it might not show up in the tenancy explorer immediately. Similarly, if you
recently updated a resource, your changes might not immediately appear.
• You must be in the same region as the resource to navigate to its details page. The tenancy explorer displays the
resource's region. Use the region selector at the top of the Console to change to the same region as the resource to
enable these actions.
• When taking bulk actions, you can monitor progress on the Work Requests page.

Work Requests
Tenancy explorer is one of the Oracle Cloud Infrastructure features that is integrated with the Work Requests API.
For general information on using work requests in Oracle Cloud Infrastructure, see Work Requests in the user guide,
and the Work Requests API.

Resources Supported by the Tenancy Explorer


The tenancy explorer is powered by the Search service and supports the same resource types. Most resources are
supported.

Supported resources
Service Resource Type Attributes
Application Migration amsmigration See Migration Reference
Application Migration amssource See Source Reference.
Analytics Cloud analyticsinstance See AnalyticsInstance Reference.
API Gateway apideployment See Deployment Reference.
API Gateway apigateway See Gateway Reference.
API Gateway apigatewayapi See Api Reference.
API Gateway apigatewaycertificate See Certificate Reference.


Service Resource Type Attributes


Big Data bigdataservice See BdsInstance Reference.
Block Volume bootvolume See BootVolume Reference.
Block Volume bootvolumebackup See BootVolumeBackup Reference.
Block Volume volume See Volume Reference.
Block Volume volumebackup See VolumeBackup Reference.
Note: Queries for the sourceType
attribute are not supported.

Block Volume volumebackuppolicy See VolumeBackupPolicy Reference.


Block Volume volumegroup See VolumeGroup Reference.
Block Volume volumegroupbackup See VolumeGroupBackup Reference.
Blockchain Platform blockchainplatforms See BlockchainPlatform Reference.
Budgets budget See Budget Reference.

Cloud Guard cloudguarddetectorrecipe See DetectorRecipe Reference.

Cloud Guard cloudguardmanagedlist See ManagedList Reference.

Cloud Guard cloudguardresponderrecipe See ResponderRecipe Reference.

Cloud Guard cloudguardtarget See Target Reference.

Compute autoscalingconfiguration See AutoScalingConfiguration


Reference.
Note: Queries for the policies
attribute are not supported.

Compute clusternetwork See ClusterNetwork Reference.


Note: Queries for the
primarySubnetId,
secondaryVnicSubnets, and
timeUpdated attributes are not
supported.

Compute consolehistory See ConsoleHistory Reference.


Compute dedicatedvmhost See DedicatedVmHost Reference.

Compute image See Image Reference.


Compute instance See Instance Reference.
Note: Queries for the privateIp
or publicIp attribute of a vnic
will include the related instance,
if one exists, and is running, in the
query results.


Service Resource Type Attributes


Compute instanceconfiguration See InstanceConfiguration
Reference.
Compute instancepool See InstancePool Reference.
Note: Queries for the
primarySubnetId,
faultDomains,
secondaryVnicSubnets, and
loadBalancers attributes are not
supported.

Content and Experience oceinstance See OceInstance Reference.


Data Catalog datacatalog See Catalog Reference.
Data Catalog datacatalogprivateendpoint See CatalogPrivateEndpoint
Reference.
Data Flow application See Application Reference.
Data Flow run See Run Reference.
Data Integration disworkspace See Workspace Reference.
Data Safe datasafeprivateendpoint See DataSafePrivateEndpoint
Reference.
Data Science datasciencemodel See Model Reference.
Data Science datasciencenotebooksession See NotebookSession Reference.
Data Science datascienceproject See Project Reference.
Database autonomouscontainerdatabase See AutonomousContainerDatabase Reference.
Database autonomousdatabase See AutonomousDatabase Reference.
Database autonomousexadatainfrastructure See AutonomousExadataInfrastructure Reference.
Database autonomousvmcluster See AutonomousVmCluster
Reference.
Database backupdestination See BackupDestination Reference.
Database cloudexadatainfrastructure See CloudExadataInfrastructure
Reference.
Database cloudvmcluster See CloudVmCluster Reference.
Database database See Database Reference.
Database dbhome See DbHome Reference.
Database dbsystem See DbSystem Reference.
Database exadatainfrastructure See ExadataInfrastructure Reference.
Database externalcontainerdatabase See ExternalContainerDatabase
Reference.


Service Resource Type Attributes


Database externaldatabaseconnector See ExternalDatabaseConnector
Reference.
Database externalnoncontainerdatabase See ExternalNonContainerDatabase Reference.
Database externalpluggabledatabase See ExternalPluggableDatabase
Reference.
Database vmcluster See VmCluster Reference.
Database vmclusternetwork See VmClusterNetwork Reference.
Digital Assistant odainstance See OdaInstance Reference.
Email Delivery emailsender See Sender Reference.
Events eventrule See Rule Reference.
File Storage filesystem See FileSystem Reference.
File Storage mounttarget See MountTarget Reference.
Functions functionsapplication See Application Reference.
Functions functionsfunction See Function Reference.
GoldenGate deployment See Deployment Reference
GoldenGate databaseregistration See DatabaseRegistration Reference
IAM compartment See Compartment Reference.
IAM group See Group Reference.
IAM identityprovider See IdentityProvider Reference.
IAM policy See Policy Reference.
IAM tagdefault See TagDefault Reference.
IAM tagnamespace See TagNamespace Reference.
IAM user See User Reference.
Integration Cloud integrationinstance See IntegrationInstance Reference.
Load Balancing loadbalancer See LoadBalancer Reference.
Management Agent management-agents See ManagementAgent Reference.
Management Agent management-agent-install-keys See ManagementAgentInstallKey Reference.
Monitoring alarm See Search-Supported Attributes for
Alarms on page 2690.
MySQL Database dbsystem See DBSystem Reference.
Networking byoiprange See ByoipRange Reference.
Networking cpe See Cpe Reference.
Networking crossconnect See CrossConnect Reference.
Networking crossconnectgroup See CrossConnectGroup Reference.


Service Resource Type Attributes


Networking dhcpoptions See DhcpOptions Reference.
Networking drg See Drg Reference.
Networking internetgateway See InternetGateway Reference.
Networking ipsecconnection See IPSecConnection Reference.
Networking ipv6 See IPv6 Reference.
Networking localpeeringgateway See LocalPeeringGateway
Reference.
Networking natgateway See NatGateway Reference.
Networking networksecuritygroup See NetworkSecurityGroup
Reference.
Networking publicip See PublicIp Reference.
Networking publicippool See PublicIpPool Reference.
Networking privateip See PrivateIp Reference.
Networking remotepeeringconnection See RemotePeeringConnection
Reference.
Networking routetable See RouteTable Reference.
Networking securitylist See SecurityList Reference.
Networking servicegateway See ServiceGateway Reference.
Networking subnet See Subnet Reference.
Networking vcn See Vcn Reference.
Networking virtualcircuit See VirtualCircuit Reference.
Networking vlan See Vlan Reference.
Networking vnic See Vnic Reference.
Note: Queries for the privateIp
or publicIp attribute of a vnic
will include the related instance,
if one exists and is running, in the
query results.

NoSQL Database Cloud nosqltable See Table Reference.


Notifications onssubscription See Subscription Reference.
Note: Queries for the endpoint
attribute are not supported.

Notifications onstopic See NotificationTopic Reference.


Object Storage bucket See Bucket Reference.
OS Management osmsmanagedinstancegroup See ManagedInstanceGroup
Reference.
OS Management osmsscheduledjob See ScheduledJob Reference.


Service Resource Type Attributes


OS Management osmssoftwaresource See SoftwareSource Reference.
Registry containerimage See ContainerImage Reference.
Registry containerrepository See ContainerRepository Reference.
Resource Manager ormconfigsourceprovider See ConfigurationSourceProvider
Reference.
Resource Manager ormjob See Job Reference.
Resource Manager ormstack See Stack Reference.
Resource Manager ormtemplate See Template Reference.
Service Connector Hub serviceconnector See ServiceConnector Reference.
Service Limits quota See Quota Reference.
Streaming connectharness See ConnectHarness Reference.
Streaming stream See Stream Reference.
Vault key See Key Reference.
Vault vault See Vault Reference.
Vault vaultsecret See Secret Reference.
VMware solution vmwareesxihost See EsxiHost Reference.
VMware solution vmwaresddc See Sddc Reference.
WAF httpredirect See HttpRedirect Reference.
WAF waasaddresslist See AddressList Reference.
WAF waascertificate See Certificate Reference.
WAF waascustomprotectionrule See CustomProtectionRule
Reference.
WAF waaspolicy See WaasPolicy Reference.

Required IAM Policy to Work with Resources in the Tenancy Explorer


The resources that you see in the tenancy explorer depend on the permissions you have in place for the resource type.
You do not necessarily see results for everything in the compartment. For example, if your user account is not
associated with a policy that grants you the ability to, at a minimum, inspect the instance resource type, then
you can't view instances in the tenancy explorer. For more information about policies, see How Policies Work on
page 2144. For information about the permissions required for the list API operation for a specific resource type, see
the Policy Reference on page 2176 for the appropriate service.
Required Permissions to View Work Requests
Work requests inherit the permissions of the operation that spawns the work request. So if you have the permissions
to move or delete a resource, you also have permission to see the work requests associated with this action.
To enable users to list all work requests in a tenancy, use a policy like the following:

Allow group <My_Group> to inspect work-requests in tenancy


Navigating to the Tenancy Explorer and Viewing Resources


Open the navigation menu. Under Governance and Administration, go to Governance and click Tenancy
Explorer.
The tenancy explorer opens with a view of the root compartment. Select the compartment you want to explore
from the compartment picker on the left side of the Console. After you select a compartment, the resources that
you have permission to view are displayed. The Name and Description of the compartment you are viewing are
displayed at the top of the page. To also list all resources in the subcompartments of the selected compartment, select
Show resources in subcompartments. When viewing resources in all subcompartments, it is helpful to use the
Compartment column in the results list to see the compartment hierarchy where the resource resides.

Filtering Displayed Resources


To view only specific resource types, select the resource types you are interested in from the Filter by resource type
menu. You can select multiple resources to include in the filtered list. You can also filter the list by tags.

Opening the Resource Details Page


Detail page navigation is not supported for all resource types. If detail page navigation is not supported, the resource
name does not display as a link and the option is grayed out on the Actions menu.
To open the details page for a resource:
1. Locate the resource in the list.
2. Verify that you are in the same region as the resource. The resource's region is listed in the tenancy explorer
results. If it is not the same as the region you are currently in (shown at the top of the Console), then select the
appropriate region from the Regions menu.
3. To open the details page, you can either:
• Click the name.
• Click the Actions icon (three dots) and select View Details.

Moving Resources to a Different Compartment


Not all resource-types can be moved to a different compartment. If the resource cannot be moved, the option is not
selectable on the Actions menu. You must have the appropriate permissions for the resources you want to move in
both the original and destination compartments.
Important:

Ensure that you understand the impact of moving a resource before you
perform this action. See the resource's service documentation for details.

To move a single resource to a different compartment


1. Locate the resource in the list.
2. Click the Actions icon (three dots) and select Move Resource.
3. In the dialog, choose the destination compartment from the list.
4. Click Move Resource.

To move multiple resources to a different compartment


To move multiple resources, the resources must be in the same compartment.
1. Locate and select the resources in the list.
2. Click Move Selected.
3. In the dialog, choose the destination compartment from the list.
4. Click Move Resource.


The Work Request page launches to show you the status of the work request to move the resources.

Deleting Resources
Not all resource-types can be deleted using the tenancy explorer. If delete is not supported, the option is not selectable
on the Actions menu.
Also, if a resource is in use by another resource, you can't delete it. For example, to delete a VCN, it must first be
empty and have no related resources or attached gateways.

To delete a single resource


1. Locate the resource in the list.
2. Click the Actions icon (three dots) and select Delete.
3. In the confirmation dialog, click Delete.
4. You are taken to the details page for the deleted resource.

To delete multiple resources


To delete multiple resources, the resources must be in the same compartment.
1. Locate and select the resources in the list.
2. Click Delete Selected.
3. In the confirmation dialog, click Delete.
The Work Request page launches to show you the status of the work request to delete the resources.

Using the API


For information about using the API and signing requests, see REST APIs on page 4409 and Security Credentials
on page 181. For information about SDKs, see Software Development Kits and Command Line Interface on page
4262.
Use these API operations to move or delete multiple resources at once:
• ListBulkActionResourceTypes - use this API to help you provide the correct resource-type information to the
BulkDeleteResources and BulkMoveResources operations. The returned list of resource-types provides the
appropriate resource-type name to use as input and the required identifying information for each resource-type.
Most resource-types require only the OCID to identify a specific resource, but some resources, such as buckets,
require you to provide other identifying information.
• BulkDeleteResources
• BulkMoveResources

Compartment Quotas
This topic describes compartment quotas for Oracle Cloud Infrastructure.
Compartment quotas give tenant and compartment administrators better control over how resources are consumed in
Oracle Cloud Infrastructure, enabling administrators to easily allocate resources to compartments using the Console.
Along with compartment budgets, compartment quotas create a powerful toolset to manage your spending in Oracle
Cloud Infrastructure tenancies.
You can start using compartment quotas from any compartment detail page in the Console.

About Compartment Quotas


Compartment quotas are similar to Service Limits on page 217. The biggest difference is that service limits are set
by Oracle, and compartment quotas are set by administrators, using policies that allow them to allocate resources with
a high level of flexibility.


Compartment quotas are set using policy statements written in a simple declarative language that is similar to the
IAM policy language.
There are three types of quota policy statements:
• set - sets the maximum number of a cloud resource that can be used for a compartment
• unset - resets quotas back to the default service limits
• zero - removes access to a cloud resource for a compartment
The quota policy statements look like this:
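As a sketch of the three statement types, the following reuses the compute-core family, the standard2-core-count quota, and the illustrative compartment name MyCompartment that appear in the Usage Examples later in this topic:

set compute-core quota standard2-core-count to 240 in compartment MyCompartment
unset compute-core quota standard2-core-count in compartment MyCompartment
zero compute-core quotas in compartment MyCompartment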

The language components for a quota policy statement are:


• The action keyword, which corresponds to the type of quota being defined. This can be set, unset, or zero.
• The name of the service family; for example: compute-core.
• The quota or quotas keyword.
• The name of the quota, which varies by service family. For example, a valid quota in the compute-core family
is standard2-core-count.
• You can also use wildcards to specify a range of names. For example, "/standard*/" matches all
Compute quotas that start with the phrase "standard."
• For set statements, the value of the quota.
• The compartment that the quota covers.
• An optional condition. For example where request.region = 'us-phoenix-1'. Currently supported
conditionals are request.region and request.ad.
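Putting these components together, the following statement (drawn from the Usage Examples later in this topic) uses the set action on the compute-core family to cap the standard2-core-count quota at 240 in the compartment MyCompartment, with an optional condition restricting it to the US West (Phoenix) region:

set compute-core quota standard2-core-count to 240 in compartment MyCompartment where request.region = us-phoenix-1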
Authentication and Authorization
Each service in Oracle Cloud Infrastructure integrates with IAM for authentication and authorization, for all
interfaces (the Console, SDK or CLI, and REST API).
An administrator in your organization needs to set up groups, compartments, and policies that control which users
can access which services, which resources, and the type of access. For example, the policies control who can create
new users, create and manage the cloud network, launch instances, create buckets, download objects, etc. For more
information, see Getting Started with Policies on page 2143. For specific details about writing policies for each of
the different services, see Policy Reference on page 2176.
If you’re a regular user (not an administrator) who needs to use the Oracle Cloud Infrastructure resources that
your company owns, contact your administrator to set up a user ID for you. The administrator can confirm which
compartment or compartments you should be using.
For common policies used to authorize users, see Common Policies on page 2150. To manage quotas in a
compartment, you must belong to a group that has the correct permissions. For example:

allow group QuotaAdmins to { QUOTA_READ, QUOTA_CREATE, QUOTA_DELETE, QUOTA_UPDATE, QUOTA_INSPECT } in tenancy


For in-depth information on granting users permissions for the Quotas service, see Details for the Quotas Service in
the IAM policy reference.
Permissions and Nesting
Compartment quotas can be set on the root compartment. An administrator (who must be able to manage quotas on
the root compartment) can set quotas on their own compartments and any child compartments. Quotas set on a parent
compartment override quotas set on child compartments. This way, an administrator of a parent compartment can
create a quota on a child compartment that cannot be overridden by the child.
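For example, a parent-compartment administrator could allocate a shared database quota across child compartments; the compartment names below are illustrative and follow the nested-quota example later in this topic:

set database quota adw-ocpu-count to 3 in compartment Compartment1
set database quota adw-ocpu-count to 1 in compartment Compartment1.1
set database quota adw-ocpu-count to 2 in compartment Compartment1.2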
Scope
Quotas can have different scopes, and work at the availability domain, the region, or globally. There are a few
important things to understand about scope when working with compartment quotas:
• When setting a quota at the availability domain (AD) level, the quota is allocated to each AD. So, for example,
setting a quota of 120 X7 OCPUs on a compartment actually sets a limit of 120 OCPUs per AD. To target a
specific AD, use the request.ad parameter in the where clause.
• Regional quotas apply to each region. For example, if a quota of 10 functions is set on a compartment, 10
functions will be allocated per region. To target a specific region, use the request.region parameter in the
where clause.
• Usage for sub-compartments counts towards usage for the main compartment.
For more information, see Regions and Availability Domains on page 182.
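For example, the following statements (quota names and the compartment name MyCompartment are reused from the Compute quota examples later in this topic) target a specific availability domain and a specific region:

set compute quota vm-dense-io2-8-count to 10 in compartment MyCompartment where request.ad = 'us-phoenix-1-ad-2'
set compute-core quota standard2-core-count to 240 in compartment MyCompartment where request.region = us-phoenix-1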
Quota Evaluation and Precedence
The following rules apply when quota statements are evaluated:
• Within a policy, quota statements are evaluated in order, and later statements supersede previous statements that
target the same resource.
• In cases where more than one policy is set for the same resource, the most restrictive policy is applied.
• Service limits always take precedence over quotas. Although it is possible to specify a quota for a resource that
exceeds the service limit for that resource, the service limit will still be enforced.
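For example, if a single quota policy contains both of the following statements for the same quota, the second statement supersedes the first, so the effective limit in MyCompartment is 10 OCPUs (the values here are purely illustrative):

set compute-core quota standard2-core-count to 20 in compartment MyCompartment
set compute-core quota standard2-core-count to 10 in compartment MyCompartment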
Usage Examples
The following example sets the quota for the VM.Standard2 and BM.Standard2 compute series to 240 OCPUs (cores) in each AD on compartment MyCompartment in the US West (Phoenix) region:

set compute-core quota standard2-core-count to 240 in compartment MyCompartment where request.region = us-phoenix-1

The next example shows how to make an allowlist, setting every quota in a family to zero and then explicitly allocating resources:

zero compute-core quotas in tenancy
set compute-core quota standard2-core-count to 240 in tenancy

This example shows how to limit creating dense I/O compute resources to only one region:

zero compute-core quotas /*dense-io*/ in tenancy
set compute-core quota /*dense-io*/ to 48 in tenancy where request.region = us-phoenix-1

You can clear quotas by using an unset statement, which removes the quota for a resource; any limits on this resource will now be enforced by the service limits:

zero compute-core quotas in tenancy
unset compute-core quota standard2-core-count in tenancy

Using the Console

To create a quota
1. Open the navigation menu. Under Governance and Administration, go to Governance and click Quota
Policies.
2. On the Quota Policies screen, click Create Quota.


3. Enter the following:


• Enter a name for your quota in the Name field. Avoid entering confidential information.
• Enter a description for your quota in the Description field.
• Enter a quota policy string in the Quota Policy field.
4. Click Create Quota Policy.
Note:

New policies can take up to 10 minutes to start working.

To edit a quota
1. On the Quota Policies screen, click the quota you want to edit to display the quota policy details page, then click
the Edit Quota button.
2. Edit the quota.
3. Click Save Changes.

To delete a quota
1. There are two ways to delete a quota from the console:
• On the main Quota Policies page, click the context menu to the right of the quota you want to delete, then
select Delete.
• Click the quota you want to delete, then from the quota policy details page click Delete.
2. From the Confirm Delete dialog, click Delete or Cancel.

Available Quotas by Service

Analytics Cloud
For Analytics Cloud quotas and examples, see Service Quotas.

Big Data
For Big Data quotas and examples, see Service Quotas.

Block Volume Quotas


Family name: block-storage

Name | Scope | Description
backup-count | Regional | Total number of block and boot volume backups
total-storage-gb | Availability domain | Maximum storage space of block and boot volumes, in GB
volume-count | Availability domain | Total number of block and boot volumes

Example

set block-storage quota volume-count to 10 in compartment MyCompartment

Blockchain Platform Quotas


For Blockchain Platform quotas and examples, see Service Quotas.


Compute Quotas
Compute Instances
Quotas for Compute instances are available per core (OCPU) and per shape.

Core-Based Quotas
Family name: compute-core

Name | Scope | Description
standard1-core-count | Availability domain | Total number of OCPUs for shapes in the VM.Standard1 and BM.Standard1 series
standard-b1-core-count | Availability domain | Total number of OCPUs for shapes in the VM.Standard.B1 and BM.Standard.B1 series
standard2-core-count | Availability domain | Total number of OCPUs for shapes in the VM.Standard2 and BM.Standard2 series
standard-e2-micro-core-count | Availability domain | Total number of OCPUs for shapes in the VM.Standard.E2.1.Micro series
standard-e2-core-count | Availability domain | Total number of OCPUs for shapes in the VM.Standard.E2 and BM.Standard.E2 series
standard-e3-core-ad-count | Availability domain | Total number of OCPUs for shapes in the VM.Standard.E3 and BM.Standard.E3 series
standard-e3-memory-count | Availability domain | Total amount of memory for shapes in the VM.Standard.E3 and BM.Standard.E3 series, in GB
standard-e4-core-ad-count | Availability domain | Total number of OCPUs for shapes in the VM.Standard.E4 and BM.Standard.E4 series
standard-e4-memory-count | Availability domain | Total amount of memory for shapes in the VM.Standard.E4 and BM.Standard.E4 series, in GB
dense-io1-core-count | Availability domain | Total number of OCPUs for shapes in the VM.DenseIO1 and BM.DenseIO1 series
dense-io2-core-count | Availability domain | Total number of OCPUs for shapes in the VM.DenseIO2 and BM.DenseIO2 series
gpu2-count | Availability domain | Total number of GPUs for shapes in the VM.GPU2 and BM.GPU2 series
gpu3-count | Availability domain | Total number of GPUs for shapes in the VM.GPU3 and BM.GPU3 series
gpu4-count | Availability domain | Total number of GPUs for shapes in the BM.GPU4 series
hpc2-core-count | Availability domain | Total number of OCPUs for shapes in the BM.HPC2 series
dvh-standard2-core-count | Availability domain | Total number of OCPUs for DVH.Standard2.52 shapes


Example

set compute-core quota standard2-core-count to 480 in compartment MyCompartment

Shape-Based Quotas
Family name: compute

Name | Scope | Description
bm-standard1-36-count | Availability domain | Number of BM.Standard1.36 instances
bm-standard-b1-44-count | Availability domain | Number of BM.Standard.B1.44 instances
bm-standard2-52-count | Availability domain | Number of BM.Standard2.52 instances
bm-standard-e2-64-count | Availability domain | Number of BM.Standard.E2.64 instances
bm-dense-io1-36-count | Availability domain | Number of BM.DenseIO1.36 instances
bm-dense-io2-52-count | Availability domain | Number of BM.DenseIO2.52 instances
bm-gpu2-2-count | Availability domain | Number of BM.GPU2.2 instances
bm-gpu3-8-count | Availability domain | Number of BM.GPU3.8 instances
bm-hpc2-36-count | Availability domain | Number of BM.HPC2.36 instances
vm-standard1-1-count | Availability domain | Number of VM.Standard1.1 instances
vm-standard1-2-count | Availability domain | Number of VM.Standard1.2 instances
vm-standard1-4-count | Availability domain | Number of VM.Standard1.4 instances
vm-standard1-8-count | Availability domain | Number of VM.Standard1.8 instances
vm-standard1-16-count | Availability domain | Number of VM.Standard1.16 instances
vm-standard2-1-count | Availability domain | Number of VM.Standard2.1 instances
vm-standard2-2-count | Availability domain | Number of VM.Standard2.2 instances
vm-standard2-4-count | Availability domain | Number of VM.Standard2.4 instances
vm-standard2-8-count | Availability domain | Number of VM.Standard2.8 instances
vm-standard2-16-count | Availability domain | Number of VM.Standard2.16 instances
vm-standard2-24-count | Availability domain | Number of VM.Standard2.24 instances
vm-standard-e2-1-micro-count | Availability domain | Number of VM.Standard.E2.1.Micro instances
vm-standard-e2-1-count | Availability domain | Number of VM.Standard.E2.1 instances
vm-standard-e2-2-count | Availability domain | Number of VM.Standard.E2.2 instances
vm-standard-e2-4-count | Availability domain | Number of VM.Standard.E2.4 instances
vm-standard-e2-8-count | Availability domain | Number of VM.Standard.E2.8 instances
standard-e3-core-ad-count | Availability domain | Total number of OCPUs for shapes in the VM.Standard.E3 and BM.Standard.E3 series
vm-dense-io1-4-count | Availability domain | Number of VM.DenseIO1.4 instances
vm-dense-io1-8-count | Availability domain | Number of VM.DenseIO1.8 instances
vm-dense-io1-16-count | Availability domain | Number of VM.DenseIO1.16 instances
vm-dense-io2-8-count | Availability domain | Number of VM.DenseIO2.8 instances
vm-dense-io2-16-count | Availability domain | Number of VM.DenseIO2.16 instances
vm-dense-io2-24-count | Availability domain | Number of VM.DenseIO2.24 instances
vm-gpu2-1-count | Availability domain | Number of VM.GPU2.1 instances
vm-gpu3-1-count | Availability domain | Number of VM.GPU3.1 instances
vm-gpu3-2-count | Availability domain | Number of VM.GPU3.2 instances
vm-gpu3-4-count | Availability domain | Number of VM.GPU3.4 instances
dvh-standard2-52-count | Availability domain | Number of DVH.Standard2.52 instances


Example

set compute quota vm-dense-io2-8-count to 10 in compartment MyCompartment where request.ad = 'us-phoenix-1-ad-2'

Custom Images
Family name: compute

Name Scope Description

custom-image-count Regional Number of custom images

Example

set compute quota custom-image-count to 15 in compartment MyCompartment

Instance Configurations, Instance Pools, and Cluster Networks


Family name: compute-management

Name | Scope | Description
cluster-network-count | Regional | Number of cluster networks
config-count | Regional | Number of instance configurations
pool-count | Regional | Number of instance pools

Example

set compute-management quota config-count to 10 in compartment MyCompartment

Autoscaling
Family name: auto-scaling

Name Scope Description

config-count Regional Number of autoscaling configurations

Example

Set auto-scaling quota config-count to 10 in compartment MyCompartment

Content and Experience Quotas


For Content and Experience quotas and examples, see Service Quotas.

Data Catalog Quotas


Family name: data-catalog

Name Scope Description

catalog-count Regional Number of data catalogs


Example

set data-catalog quota catalog-count to 1 in compartment <MyCompartment>

Data Integration Quotas


Family name: dataintegration

Name Scope Description

workspace-count Regional Number of workspaces

Example

set dataintegration quota workspace-count to 10 in compartment <compartment_name>

Data Science Quotas


Family name: data-science

Name | Scope | Description
ds-block-volume-count | Regional | Number of block volumes
ds-block-volume-gb | Regional | Block Volume size in GB
ds-gpu2-count | Regional | GPUs for VM.GPU2
ds-gpu3-count | Regional | GPUs for VM.GPU3
ds-standard2-core-regional-count | Regional | Number of VM.Standard2 cores
ds-standard-e2-core-regional-count | Regional | Number of VM.Standard E2 cores
model-count | Regional | Number of models
notebook-session-count | Regional | Number of notebook sessions
project-count | Regional | Number of projects

Example
The following example shows how to limit the number of data science projects in a specified compartment:

set data-science quota project-count to 10 in compartment MyCompartment

Data Transfer Quotas


Family name: data-transfer

Name | Scope | Description
active-appliance-count | Regional | Number of approved transfer appliances
appliance-count | Regional | Number of transfer appliances
job-count | Regional | Number of transfer jobs

Example

zero data-transfer quota job-count in tenancy


set data-transfer quota job-count to 1 in compartment Finance
set data-transfer quota appliance-count to 3 in compartment Finance

Database Quotas
Family name: database

Name | Scope | Description
adb-free-count | Regional | Number of Always Free Autonomous Databases. Tenancies can have a total of two Always Free Autonomous Databases, and these resources must be provisioned in the home region. For each database, you can choose the workload type (Autonomous Transaction Processing or Autonomous Data Warehouse).
adw-dedicated-ocpu-count | Availability domain | Number of Autonomous Data Warehouse OCPUs for databases using dedicated Exadata infrastructure. (See note following this table about "n/a" values on the Limits, Quotas and Usage page of the Console.)
adw-dedicated-total-storage-tb | Availability domain | Amount of storage (in TB) for Autonomous Data Warehouse databases using dedicated Exadata infrastructure. (See note following this table about "n/a" values on the Limits, Quotas and Usage page of the Console.)
adw-ocpu-count | Regional | Number of Autonomous Data Warehouse OCPUs for databases using shared Exadata infrastructure.
adw-total-storage-tb | Regional | Amount of storage (in TB) for Autonomous Data Warehouse databases using shared Exadata infrastructure.
atp-dedicated-ocpu-count | Availability domain | Number of Autonomous Transaction Processing OCPUs for databases using dedicated Exadata infrastructure. (See note following this table about "n/a" values on the Limits, Quotas and Usage page of the Console.)
atp-dedicated-total-storage-tb | Availability domain | Amount of storage (in TB) for Autonomous Transaction Processing databases using dedicated Exadata infrastructure. (See note following this table about "n/a" values on the Limits, Quotas and Usage page of the Console.)
atp-ocpu-count | Regional | Number of Autonomous Transaction Processing OCPUs for databases using shared Exadata infrastructure.
atp-total-storage-tb | Regional | Amount of storage (in TB) for Autonomous Transaction Processing databases using shared Exadata infrastructure.
bm-dense-io1-36-count | Availability domain | Number of BM.DenseIO1.36 DB systems
bm-dense-io2-52-count | Availability domain | Number of BM.DenseIO2.52 DB systems
exadata-base-48-count | Availability domain | Number of Exadata.Base.48 DB systems
exadata-full1-336-x6-count | Availability domain | Number of Exadata.Full1.336 - X6 DB systems
exadata-full2-368-x7-count | Availability domain | Number of Exadata.Full2.368 - X7 DB systems and Autonomous Exadata Infrastructure
exadata-half1-168-x6-count | Availability domain | Number of Exadata.Half1.168 - X6 DB systems
exadata-half2-184-x7-count | Availability domain | Number of Exadata.Half2.184 - X7 DB systems and Autonomous Exadata Infrastructure
exadata-quarter1-84-x6-count | Availability domain | Number of Exadata.Quarter1.84 - X6 DB systems
exadata-quarter2-92-x7-count | Availability domain | Number of Exadata.Quarter2.92 - X7 DB systems and Autonomous Exadata Infrastructure
vm-block-storage-gb | Availability domain | Total size of block storage attachments across all virtual machine DB systems, in GB
vm-standard1-ocpu-count | Availability domain | Number of VM.Standard1.x OCPUs
vm-standard2-ocpu-count | Availability domain | Number of VM.Standard2.x OCPUs

Note:

When viewing the Limits, Quotas and Usage page of the Console, you
will see the value "n/a" in the Service Limit column for storage and OCPU
resources related to Autonomous Transaction Processing and Autonomous
Data Warehouse with dedicated Exadata infrastructure. You might also
see this value in the Available column for these resources. This is because
limits for these resources are based on the capacity of your provisioned
Exadata hardware, and are not service limits controlled by Oracle Cloud
Infrastructure. If you define compartment quota policies for either of these
resources, the Available column will display a value for the amount that
is available to be allocated, based on your existing usage in the Exadata
hardware.
For information about shapes that are not listed, including non-metered shapes, contact Oracle Support.

Examples
The following example shows how to limit the number of Autonomous Data Warehouse resources in a compartment:

#Limits the Autonomous Data Warehouse CPU core count to 2 in the MyCompartment compartment
set database quota adw-ocpu-count to 2 in compartment MyCompartment


This example shows how to set a quota for OCPU cores in an Autonomous Data Warehouse with dedicated Exadata
infrastructure:

#Limits the number of Autonomous Data Warehouse dedicated Exadata infrastructure OCPUs to 20 in the MyCompartment compartment
set database quota adw-dedicated-ocpu-count to 20 in compartment MyCompartment

This example shows how to set a quota for Autonomous Exadata Infrastructure quarter rack resources in a
compartment:

#Limits the usage of Exadata.Quarter2.92 X7 shapes to 1 in the MyCompartment compartment
set database quota exadata-quarter2-92-x7-count to 1 in compartment MyCompartment

To limit the number of virtual machine DB systems in a compartment, you must set a quota for the number of CPU
cores and a separate quota for the block storage:

#Sets a quota for virtual machine Standard Edition OCPUs to 2 in the MyCompartment compartment
set database quota vm-standard1-ocpu-count to 2 in compartment MyCompartment

#Sets the virtual machine DB system block storage quota to 1024 GB in the same compartment
set database quota vm-block-storage-gb to 1024 in compartment MyCompartment

The following example shows how to prevent the usage of all database resources in the tenancy except for two
Exadata full rack X7 resources in a specified compartment:

zero database quotas in tenancy
set database quota exadata-full2-368-x7-count to 2 in compartment MyCompartment

This example of nested quotas shows how to distribute limits for a resource type in a compartment among its
subcompartments:

#Allows usage of 3 Autonomous Data Warehouse OCPUs in parent compartment Compartment1
set database quota adw-ocpu-count to 3 in compartment Compartment1

#Allows usage of 1 Autonomous Data Warehouse OCPU in child compartment Compartment1.1
set database quota adw-ocpu-count to 1 in compartment Compartment1.1

#Allows usage of 2 Autonomous Data Warehouse OCPUs in child compartment Compartment1.2
set database quota adw-ocpu-count to 2 in compartment Compartment1.2

Digital Assistant Quotas


For Digital Assistant quotas and examples, see Service Quotas.

DNS Quotas
Family name: dns


Name | Scope | Description
global-zone-count | Global | Number of public DNS zones
steering-policy-count | Global | Number of traffic management steering policies
steering-policy-attachment-count | Global | Number of traffic management steering policy attachments

Example

zero dns quotas in compartment MyCompartment


zero dns quota global-zone-count in compartment MyCompartment
zero dns quota steering-policy-count in compartment MyCompartment
zero dns quota steering-policy-attachment-count in compartment MyCompartment

Events Quotas
Family name: events

Name Scope Description

rule-count Regional Number of rules

Example

Set events quota rule-count to 10 in compartment MyCompartment


Zero events quota rule-count in compartment MyCompartment

Email Delivery Quotas


Family name: email-delivery

Name | Scope | Description
approved-sender-count | Regional | Number of approved senders

Example

zero email-delivery quota approved-sender-count in compartment MyCompartment

File Storage Quotas


Family name: filesystem

Name | Scope | Description
mount-target-count | Availability domain | Number of mount targets
file-system-count | Availability domain | Number of file systems


Example

Set filesystem quota file-system-count to 5 in compartment MyCompartment


Zero filesystem quota file-system-count in compartment MyCompartment
Set filesystem quota mount-target-count to 1 in compartment MyCompartment
Zero filesystem quota mount-target-count in compartment MyCompartment

GoldenGate Quotas
Family name: goldengate

Name | Scope | Description
deployment-count | Regional | Number of deployments
database-registration-count | Regional | Number of registered databases

Examples

set goldengate quota deployment-count to 5 in compartment MyCompartment
set goldengate quota database-registration-count to 5 in compartment MyCompartment

Management Agent Quotas


Family name: management-agent

Name | Scope | Description
management-agent-count | Regional | Number of management agents
management-agent-install-key-count | Regional | Number of management agent install keys

Examples
The following example limits the number of management agents that users can install in MyCompartment to 200.

set management-agent quota management-agent-count to 200 in compartment MyCompartment

The following example limits the number of management agent install keys that users can create in MyCompartment
to 10.

set management-agent quota management-agent-install-key-count to 10 in compartment MyCompartment

Networking Quotas
VCN Quotas
Family name: vcn

Name | Scope | Description
vcn-count | Regional | Number of virtual cloud networks
reserved-public-ip-count | Regional | Number of reserved regional public IP addresses

Example

Set vcn quota vcn-count to 10 in compartment MyCompartment

NoSQL Database Cloud Quotas


For Oracle NoSQL Database Cloud quotas and examples, see Service Quotas.

Notifications Quotas
Family name: notifications

Name Scope Description

topic-count Regional Number of topics

Example

set notifications quota topic-count to 10 in compartment MyCompartment

Object Storage Quotas


Family name: object-storage

Name Scope Description

storage-bytes Regional Total storage size in bytes

Examples

Set object-storage quota storage-bytes to 10000000000 in tenancy

Set object-storage quota storage-bytes to 5000000000 in compartment MyCompartment

Zero object-storage quota storage-bytes in compartment AnotherCompartment

Unset object-storage quota storage-bytes in tenancy

Resource Manager Quotas


Family name: resource-manager

Name                                  Scope      Description

concurrent-job-count                  Regional   Number of concurrent jobs per compartment
configuration-source-provider-count   Regional   Number of configuration source providers per compartment
stack-count                           Regional   Number of stacks per compartment
template-count                        Regional   Number of private templates per compartment

Example

set resource-manager quota concurrent-job-count to 1 in compartment MyCompartment
zero resource-manager quota stack-count in compartment MyCompartment
set resource-manager quota configuration-source-provider-count to 5 in compartment MyCompartment
set resource-manager quota template-count to 3 in compartment MyCompartment

Service Connector Hub Quotas


Family name: service-connector-hub

Name                      Scope      Description

service-connector-count   Regional   Number of service connectors

Example

set service-connector-hub quota service-connector-count to 10 in compartment preview

Streaming Quotas
Family name: streaming

Name Scope Description

partition-count Regional Number of partitions

Example

set streaming quota partition-count to 10 in compartment MyCompartment

Vault Quotas
Family name: kms

Name                          Scope      Description

virtual-private-vault-count   Regional   Number of virtual private vaults


Example

set kms quota virtual-private-vault-count to 1 in compartment MyCompartment

WAF Quotas
Family name: waas

Name Scope Description

waas-policy-count Regional Number of WAF policies

Example

zero waas quota waas-policy-count in compartment MyCompartment

Work Requests
This topic describes the work requests feature documented in the Work Requests API. The following Oracle Cloud
Infrastructure services are integrated with this API:
• Compute
• Database
• Tenancy Explorer
Note:

Some Oracle Cloud Infrastructure services offer work requests supported by the service API rather than the Work Requests API discussed in this topic. For information about work requests in these services, see the following topics:
• Application Migration: View the State of a Work Request
• Application Performance Monitoring: Work Request API
• Blockchain Platform: Integration: Work Requests
• Cloud Advisor: Work Request API
• Content and Experience: Work Request API
• Data Catalog: Integration: Work Requests
• Data Integration: Integration: Work Requests
• Data Science: Creating Notebook Sessions and Deleting Projects
• GoldenGate: WorkRequest Reference
• IAM: To delete a compartment on page 2463 and Deleting Tag Key
Definitions and Namespaces on page 3953
• Load Balancing: Viewing the State of a Work Request on page 2594
• Logging Analytics: WorkRequest API
• Management Agent: WorkRequest API
• Object Storage: Copy Object Work Requests on page 3518
• Service Connector Hub: Viewing the State of a Work Request on page
3813
Work requests allow you to monitor long-running operations such as Database backups or the provisioning of
Compute instances. When you launch such an operation, the service spawns a work request. A work request is an
activity log that enables you to track each step in the operation's progress. Each work request has an OCID that allows
you to interact with it programmatically and use it for automation.


If an operation fails, a work request can help you determine which step of the process had an error.
Some operations affect multiple resources. For example, creating an instance pool also affects instances and instance
configurations. A work request provides a list of the resources that an operation affects.
For workflows that require sequential operations, you can monitor each operation’s work request and confirm that the
operation has completed before proceeding to the next operation. For example, say that you want to create an instance
pool with autoscaling enabled. To do this, you must first create the instance pool, and then configure autoscaling. You
can monitor the work request for creating the instance pool to determine when that workflow is complete, and then
configure autoscaling after it is done.
Work requests are retained for 12 hours.

Required IAM Policy


To use Oracle Cloud Infrastructure, you must be granted security access in a policy by an administrator. This access
is required whether you're using the Console or the REST API with an SDK, CLI, or other tool. If you get a message
that you don’t have permission or are unauthorized, verify with your administrator what type of access you have and
which compartment you should work in.
For administrators: Work requests inherit the permissions of the operation that spawns the work request. To enable
users to view the work requests, logs, and error messages for an operation, write a policy that grants users permission
to do the operation. For example, to let users see the work requests associated with launching instances, write a policy
that enables users to launch instances.
To enable users to list all work requests in a tenancy, use the following policy:

Allow group SupportTeam to inspect work-requests in tenancy

If you're new to policies, see Getting Started with Policies on page 2143 and Common Policies on page 2150.

Work Request States


Note: Work requests for some services or operations may support only a subset of the following statuses.
ACCEPTED
The request is in the work request queue to be processed.
IN_PROGRESS
A work request record exists for the specified request, but there is no associated WORK_COMPLETED
record.
SUCCEEDED
A work request record exists for this request and an associated WORK_COMPLETED record has the state
SUCCEEDED.
FAILED
A work request record exists for this request and an associated WORK_COMPLETED record has the state
FAILED.
CANCELING
The work request is in the process of canceling.
CANCELED
The work request has been canceled.


Using the Console to View Work Requests


The steps to view a work request are similar for Oracle Cloud Infrastructure services that support work requests.
1. Navigate to the resource whose work requests you want to see.
For example, to see the work requests for a Compute instance: Open the navigation menu. Under Core
Infrastructure, go to Compute and click Instances.
2. If the resource is displayed in a list view, click the resource name to view the resource details.
3. Under Resources, click Work Requests. The status of all work requests appears on the page.
4. To see the log messages, error messages, and resources that are associated with a specific work request, click the
operation name. Then, select an option in the More information section.
For associated resources, you can click the Actions icon (three dots) next to a resource to copy the resource's OCID.

Using the API


For information about using the API and signing requests, see REST APIs on page 4409 and Security Credentials
on page 181. For information about SDKs, see Software Development Kits and Command Line Interface on page
4262.
Use these API operations to monitor the state of work requests:
• ListWorkRequests
• GetWorkRequest
• ListWorkRequestErrors
• ListWorkRequestLogs
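For example, a work request can be polled until it reaches a terminal state. The following is a minimal sketch that uses the OCI Python SDK, assuming its WorkRequestClient wrapper for the Work Requests API; the work request OCID shown is a placeholder.

# Minimal sketch: poll a work request until it reaches a terminal state,
# then list its errors if it failed. The OCID below is a placeholder.
import time
import oci

config = oci.config.from_file()  # reads ~/.oci/config, DEFAULT profile
client = oci.work_requests.WorkRequestClient(config)

work_request_id = "ocid1.workrequest.oc1..exampleuniqueID"  # placeholder

while True:
    wr = client.get_work_request(work_request_id).data
    print(wr.status, wr.percent_complete)
    if wr.status in ("SUCCEEDED", "FAILED", "CANCELED"):
        break
    time.sleep(30)

# If the operation failed, the error list helps identify which step had a problem.
if wr.status == "FAILED":
    for err in client.list_work_request_errors(work_request_id).data:
        print(err.timestamp, err.message)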

Console Announcements
This topic describes the announcements that Oracle Cloud Infrastructure displays in the Console. Console
announcements appear at the top of the page to communicate timely, important information about service status. You
can also view a list of past and ongoing announcements.
Note:

• If you use Oracle Platform Cloud Services or Oracle Cloud Applications and you have announcements about those service entitlements, the Console displays a banner with a link that you can use to access those announcements. For more information about these announcements, including how to set notification preferences, see Monitoring Notifications.

Types of Announcements
Announcements belong to different categories. An announcement's prefix helps you understand, at a glance, the
type and relative severity of the information and whether there's anything you can or must do. Announcement types
currently include the following, in order of most important to least:
• Required action. You must take specific action within your environment.
• Emergency change. There is a time period during which an unplanned, but urgent, change associated with your
environment will take place.
• Recommended action. You have specific action to take within your environment, but the action is not required.
• Planned change. There is a time period during which a planned change associated with your environment will
take place.
• Planned change extended. The scheduled change period has extended beyond what was previously
communicated.


• Planned change rescheduled. The planned change to your environment has been postponed to a later time or
date.
• Production event. An impactful change to your environment either recently occurred or is actively occurring.
• Planned change completed. The planned change to your environment has been completed and regular operations
have resumed.
• Information. There is information that you might find useful, but is not urgent and does not require action on
your part.
For announcements that require action and affect Oracle Cloud Infrastructure Compute instances, you will get 30 days
of advance notice. If you need to delay the actions described in the announcement, contact support to request one of
the alternate dates listed in the announcement. Critical vulnerabilities might not be eligible for delay.

Required IAM Policy


Each service in Oracle Cloud Infrastructure integrates with IAM for authentication and authorization, for all
interfaces (the Console, SDK or CLI, and REST API).
To use Oracle Cloud Infrastructure, you must be granted security access in a policy by an administrator. This access
is required whether you're using the Console or the REST API with an SDK, CLI, or other tool. If you get a message
that you don’t have permission or are unauthorized, verify with your administrator what type of access you have and
which compartment you should work in.
Depending on whether you have access, you might not see any announcements. With access to announcements, you
can either see only the summary version of any given announcement or you can also view announcement details.
For administrators: for typical policies that give users access to announcements, see Restrict user access to view only
summary announcements and Let users view details of announcements. For more information, see Details for the
Announcements Service.
If you're new to policies, see Getting Started with Policies on page 2143 and Common Policies on page 2150.

Email Delivery
As part of your service agreement, Oracle Cloud Infrastructure also contacts you with service status announcements
through email. These emails help alert you to upcoming changes that will impact your tenancy, such as those
involving data centers or instances you use, or about required action on your part. Whenever possible, we try to
provide advance notice of impactful events. Oracle sends these announcements to the default tenancy administrator
email address on record. You can opt out of emails with operational information that is not urgent and does not
require administrator action. (In the Console, these announcements are marked with the type, "Information." For more
information about announcement types, see Types of Announcements on page 264.) If you want to change the
default tenancy administrator email address on record, contact Oracle Support. For more information, see Contacting
Support.

To manage email preferences for announcements


1. Click the Announcements icon ( ).
2. Click Manage Email Preferences.
3. Do one of the following:
• If you want Oracle to email a copy of all announcements, click Opt in.
• If you want Oracle to withhold email copies of informational announcements that don't require action on your
part, click Opt out.

Viewing Announcements
This section describes how to view announcements. The Console displays announcements as banners that span the width of the top of your browser window. As long as an announcement remains in effect and you have access to view announcements, the banner announcement displays each time you sign in to the Console until you mark it as read. You can also view all past announcements. The Announcements icon displays a green dot if you have any unread announcements.

To dismiss a banner announcement


• To close a banner announcement until the next time you sign in to the Console, click the X at the far right edge of
the banner. If you want to stop seeing an announcement as a banner altogether, you must mark it as read. For more
information, see To mark an announcement as read on page 267.

To view the details of an announcement


1. Do one of the following:
• If you are viewing a banner, click the Show details link near the far right edge of the banner.
• If you are viewing a list of announcements, under the Summary column, click the announcement summary.
2. On the Announcement Details page, you can view the following information:
• Description. This describes the issue or event in greater detail than the summary text of the announcement.
• OCID. This is the announcement's unique, Oracle-assigned identifier.
• Reference Ticket Number. You can use this number to refer to the issue when talking to Support.
• Type. This is one of several predefined categories that helps to set expectations about the nature and severity
of the issue described.
• Affected Service. This indicates the Oracle Cloud Infrastructure services affected by the issue or event.
• Region. This tells you what Oracle Cloud Infrastructure regions are impacted.
• Start Time. This is when the issue or event was first detected.
• End Time. This is when the issue or event was resolved.
• Required After. This is the date after which you must address any required actions described in the
announcement.
• Created. This is when the announcement was created.
• Updated. This is when the announcement was updated.
• Additional Information. This includes information such as workarounds or background material.
• Impacted Resources. This shows the resources that were affected in some way by the event that prompted the
announcement.
3. Optionally, if you want to refer to the list of impacted resources later, click Download Impacted Resources List.

To view a list of all announcements


1. Click the Announcements icon ( ).
2. The Announcements page displays all announcements. From this page, you can do the following:
• Filter. You can filter announcements by type or by start or end date.
• Sort. You can sort announcements by summary, type, event start time, or publish time (which indicates when
the announcement was last updated).
• Mark as read. You can mark announcements as read if you want to stop seeing them as banners in the Console in subsequent sessions.
• View announcement details. You can view the details of an announcement.

To filter a list of announcements


1. Click the Announcements icon ( ).
2. To filter the list, under Filters, do one of the following:
• Click Type, and then click a type from the list.
• Click Start Date, and then choose a date to see only events that started on that date.
• Click End Date, and then choose a date to see only events that ended on that date.
3. To clear a filter on a date, click the X next to the date.


To sort a list of announcements


1. Click the Announcements icon ( ).
2. By default, the list displays announcements according to the event start time, from most recent to oldest. To sort the list another way, do one of the following:
• Click Summary. The list sorts alphabetically, according to the summary of the announcement.
• Click Type. The list sorts according to the importance of the announcement.
• Click Start Time. The list sorts according to the start time of the event described in the announcement. If you
begin by viewing the default sort order, the sort order will change to show the oldest announcement at the
beginning of the list.
• Click Publish Time. The list sorts according to the time that an announcement was last updated. You might
find it helpful to sort by this column if you want to track an ongoing issue or if an announcement requires
action on your part.
3. To sort the list again, repeat the previous step.

To mark an announcement as read


1. Click the Announcements icon ( ).
2. Find the announcement that you want to mark as read, click the Actions icon (three dots), and then click Mark As
Read.

Using the Command Line Interface (CLI)


For information about using the CLI, see Command Line Interface (CLI). For a complete list of flags and options
available for CLI commands, see the Command Line Reference.

To view the details of an announcement


Open a command prompt and run oci announce announcements get to view detailed information about an
announcement:

oci announce announcements get --announcement-id <announcement_OCID>

For example:

oci announce announcements get --announcement-id ocid1.announcement.region1..examplear73oue4jdywjjvietoc6im3cvb6xae4falm3faux5us3iwra3t6

To view a list of all announcements


Open a command prompt and run oci announce announcements list to view a list of all announcements:

oci announce announcements list --compartment-id <compartment_OCID>

For example:

oci announce announcements list --compartment-id ocid1.tenancy.oc1..exampleati4wjo6cvbxq4iusld5ltpneskcfy7lr4a6wfauxuwrwed5bsdea

To filter a list of announcements


Open a command prompt and run oci announce announcements list to filter a list of announcements.


To filter a list of announcements by announcement type:

oci announce announcements list --compartment-id <compartment_OCID> --announcement-type <announcement_type>

For example:

oci announce announcements list --compartment-id ocid1.tenancy.oc1..exampleati4wjo6cvbxq4iusld5ltpneskcfy7lr4a6wfauxuwrwed5bsdea --announcement-type ACTION_REQUIRED

To sort a list of announcements


Open a command prompt and run oci announce announcements list to sort a list of announcements.
To sort a list of announcements in ascending order of time created, from oldest to newest:

oci announce announcements list --compartment-id <compartment_OCID> --sort-order ASC

For example:

oci announce announcements list --compartment-id ocid1.tenancy.oc1..exampleati4wjo6cvbxq4iusld5ltpneskcfy7lr4a6wfauxuwrwed5bsdea --sort-order ASC

To mark an announcement as read


Open a command prompt and run oci announce user-status update to mark an announcement as read:

oci announce user-status update --announcement-id <announcement_OCID> --user-status-announcement-id <announcement_OCID> --user-id <user_OCID> --time-acknowledged <date_and_time>

For example:

oci announce user-status update --announcement-id ocid1.announcement.region1..examplear73oue4jdywjjvietoc6im3cvb6xae4falm3faux5us3iwra3t6 --user-status-announcement-id ocid1.announcement.region1..examplear73oue4jdywjjvietoc6im3cvb6xae4falm3faux5us3iwra3t6 --user-id ocid1.user.region1..exampleaorxz3psplonigcvbzy5oaiwiubh7k7ip6zgklfauxic67kksu4oq --time-acknowledged 2019-01-06T20:14:00+00:00

Using the API


For information about using the API and signing requests, see REST APIs on page 4409 and Security Credentials
on page 181. For information about SDKs, see Software Development Kits and Command Line Interface on page
4262.
Use the following operations to manage announcements:
• GetAnnouncement
• GetAnnouncementUserStatus
• ListAnnouncements
• UpdateAnnouncementUserStatus
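For example, announcements can be listed through an SDK as well. The following is a minimal sketch that uses the OCI Python SDK, assuming its AnnouncementClient wrapper for the Announcements API; announcements are scoped to the tenancy (root compartment).

# Minimal sketch: list announcements for the tenancy with the OCI Python SDK.
import oci

config = oci.config.from_file()
client = oci.announcements_service.AnnouncementClient(config)

# Announcements are scoped to the tenancy (the root compartment).
collection = client.list_announcements(compartment_id=config["tenancy"]).data
for item in collection.items:
    print(item.id, item.summary)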


Prerequisites for Oracle Platform Services on Oracle Cloud Infrastructure


This topic describes procedures that are required by some Oracle Platform Services before you can launch them on
Oracle Cloud Infrastructure. The information in this topic applies only to the following services:
• Oracle Database Cloud Service
• Oracle Data Hub Cloud Service
• Oracle Event Hub Cloud Service
• Oracle Java Cloud Service
• Oracle SOA Cloud Service
For a list of all services supported on Oracle Cloud Infrastructure, see Information About Supported Platform
Services on page 274.

Accessing Oracle Cloud Infrastructure


Oracle Cloud Infrastructure has a different interface and credential set than your Oracle Platform Services. You can
access Oracle Cloud Infrastructure using the Console (a browser-based interface) or the REST API. Instructions
for the Console and API are included in topics throughout this guide. For a list of available SDKs, see Software
Development Kits and Command Line Interface on page 4262.
To access the Console, you must use a supported browser. Oracle Cloud Infrastructure supports the following browsers and versions:
• Google Chrome 69 or later
• Safari 12.1 or later
• Firefox 62 or later

Required Identity and Access Management (IAM) Policy


To use Oracle Cloud Infrastructure, you must be granted security access in a policy by an administrator. This access
is required whether you're using the Console or the REST API with an SDK, CLI, or other tool. If you get a message
that you don’t have permission or are unauthorized, verify with your administrator what type of access you have and
which compartment you should work in.
See Common Policies on page 2150 for more information and examples.

Resources Created in Your Tenancy by Oracle


Oracle creates a compartment in your tenancy for Oracle Platform Services. This compartment is specially configured
by Oracle for the Oracle Cloud Infrastructure resources that you create through the Platform Services. You can't
choose another compartment for Oracle to use.
Along with this compartment, Oracle creates the IAM policies to allow Oracle Platform Services access to the
resources.
The compartment that Oracle creates for Oracle Platform Services is named: ManagedCompartmentForPaaS.
The policies that Oracle creates for Oracle Platform Services are:
• PSM-root-policy
This policy is attached to the root compartment of your tenancy.
• PSM-mgd-comp-policy
This policy is attached to the ManagedCompartmentForPaaS compartment.
Caution:

Do not make any changes to these resources. Editing or renaming the policies
or the compartment can result in loss of functionality.


Prerequisites for Oracle Platform Services


Before you can create instances of an Oracle Platform Service on Oracle Cloud Infrastructure, you need to have the
following resources in your Oracle Cloud Infrastructure tenancy:
• A compartment for your resources
• A virtual cloud network (VCN) with at least one public subnet
• IAM policies to allow Oracle Platform Services to access the VCN
• An Object Storage bucket
• Credentials to use with Object Storage
Some of the Platform Services automatically create some of these resources for you. See details about your service in
the following sections.

Setting Up the Prerequisites


Note:

To use Autonomous Data Warehouse Cloud, you don't need to set up any of the resources listed in this prerequisites section. However, if you optionally choose to use Oracle Cloud Infrastructure Object Storage for data loading, you need to perform these two tasks:
• Create a bucket
• Create an auth token
Following are two scenarios with procedure sets. If you need to set up all the required resources, follow Scenario
1. If you already have a VCN in your Oracle Cloud Infrastructure tenancy that you want to use for Oracle Platform
Services, follow Scenario 2.
To follow a tutorial on how to set up the prerequisites for Scenario 1, see Creating the Infrastructure Resources
Required for Oracle Platform Services.

Scenario 1: I need to create all the prerequisite resources

Create a compartment
Important:

You cannot use the ManagedCompartmentForPaaS for your VCN and bucket.
1. Open the navigation menu. Under Governance and Administration, go to Identity and click Compartments.
2. A list of the existing compartments in your tenancy is displayed.
3. Click Create Compartment.
4. Enter the following:
• Name: For example, PaaSResources. Restrictions for compartment names are: Maximum 100 characters,
including letters, numbers, periods, hyphens, and underscores. The name must be unique across all the
compartments in your tenancy. Avoid entering confidential information.
• Description: A friendly description.
5. Click Create Compartment.


Set up your virtual cloud network


This procedure creates a VCN with these characteristics:
• A VCN with the CIDR of your choice (example: 10.0.0.0/16).
• A regional public subnet with access to the VCN's internet gateway. You can choose the subnet's
CIDR (example: 10.0.0.0/24).
• A regional private subnet with access to the VCN's NAT gateway and service gateway (and therefore the Oracle
Services Network). You can choose the subnet's CIDR (example: 10.0.1.0/24).
• Use of the Internet and VCN Resolver for DNS, so your instances can use their hostnames instead of their private
IP addresses to communicate with each other.
Tip:

The following VCN quickstart procedure is useful for getting started and
trying out Oracle Platform Services on Oracle Cloud Infrastructure. For
production, use the procedure in VCNs and Subnets on page 2847. That
topic explains features such as how to specify the CIDR ranges for your VCN
and subnets, and how to secure your network. When you use the advanced
procedure in that topic, remember that the VCN that you create must have a
public subnet for Oracle Platform Services to use.
1. Open the Region menu and select the region in which you want to create the Oracle PaaS service instance.
Select a region that's within the default data region of your account. For example, if your default data region is
EMEA, then select Germany Central (Frankfurt) or UK South (London).
2. From the Compartment list, select the compartment you created.
3. Open the navigation menu. Under Core Infrastructure, go to Networking and click Virtual Cloud Networks.
4. Click Networking Quickstart.
5. Select VCN with Internet Connectivity, and then click Start Workflow.
6. Enter the following:
• VCN Name: Enter a name for your cloud network, for example, <your_initials>_Network. The name
is incorporated into the names of all the related resources that are automatically created. Avoid entering
confidential information.
• Compartment: Leave the default value (the compartment you're currently working in). All the resources will
be created in this compartment.
• VCN CIDR Block: Enter a valid CIDR block for the VCN. For example 10.0.0.0/16.
• Public Subnet CIDR Block: Enter a valid CIDR block for the subnet. The value must be within the VCN's
CIDR block. For example: 10.0.0.0/24.
• Private Subnet CIDR Block: Enter a valid CIDR block for the subnet. The value must be within the VCN's
CIDR block and not overlap with the public subnet's CIDR block. For example: 10.0.1.0/24.
• Accept the defaults for any other fields.
7. Click Next.
8. Review the list of resources that the workflow will create for you. Notice that the workflow will set up security list
rules and route table rules to enable basic access for the VCN.
9. Click Create to start the short workflow.
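The quickstart workflow creates the gateways, route tables, and security lists for you. If you prefer to script the network instead, the following rough sketch uses the OCI Python SDK to create only a VCN and one public subnet; the compartment OCID, names, and CIDRs are placeholders, and the sketch does not reproduce the full quickstart topology.

# Rough sketch: create a VCN and one regional public subnet with the OCI Python SDK.
# The Console quickstart additionally creates an internet gateway, NAT gateway,
# service gateway, route tables, and security lists, which this sketch omits.
import oci

config = oci.config.from_file()
network = oci.core.VirtualNetworkClient(config)

compartment_id = "ocid1.compartment.oc1..exampleuniqueID"  # placeholder

vcn = network.create_vcn(
    oci.core.models.CreateVcnDetails(
        compartment_id=compartment_id,
        cidr_block="10.0.0.0/16",
        display_name="PaaS_Network",
    )
).data

subnet = network.create_subnet(
    oci.core.models.CreateSubnetDetails(
        compartment_id=compartment_id,
        vcn_id=vcn.id,
        cidr_block="10.0.0.0/24",
        display_name="Public Subnet",
    )
).data

print(vcn.id, subnet.id)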

Permit Oracle Platform Services to access resources


1. In the Console, navigate to the root compartment of your tenancy by clicking your tenancy name in the
Compartment list.
2. Open the navigation menu. Under Governance and Administration, go to Identity and click Policies.
3. Click Create Policy.


4. Enter the following:


• Name: A unique name for the policy. The name must be unique across all policies in your tenancy. You
cannot change this later.
• Description: A friendly description. You can change this later if you want to.
• Statement: To allow Oracle Platform Services access to use the network in your compartment, enter the
following policy statements. Replace <compartment_name> with your compartment name. Click + after each
statement to add another.

Allow service PSM to inspect vcns in compartment <compartment_name>

Allow service PSM to use subnets in compartment <compartment_name>

Allow service PSM to use vnics in compartment <compartment_name>

Allow service PSM to manage security-lists in compartment <compartment_name>

For more information about policies, see Policy Basics on page 2145 and also Policy Syntax on page 2172.
5. (Optional) If you want to enable the use of an Autonomous Transaction Processing or Oracle Cloud Infrastructure
Database instance in your compartment as the infrastructure schema database for your Oracle Java Cloud Service
instance, then add the following statements:

Allow service PSM to inspect autonomous-database in compartment <compartment_name>

Allow service PSM to inspect database-family in compartment <compartment_name>
6. Click Create.

Create a bucket
1. Open the Region menu and select the region in which you want to create the Oracle PaaS service instance.
Select a region that's within the default data region of your account. For example, if your default data region is
EMEA, then select Germany Central (Frankfurt) or UK South (London).
2. Open the navigation menu. Under Core Infrastructure, click Object Storage.
3. Choose the compartment you created.
4. Click Create Bucket.
5. In the Create Bucket dialog, enter a bucket name, for example: PaasBucket.
Make a note of the name you enter. You will need it when you create an instance for your Oracle Platform Service
later.
6. Click Create Bucket.

Set up credentials to use with Object Storage


For Big Data Cloud, set up an API signing key:
Set up an API signing key
Follow the instructions in this topic: Required Keys and OCIDs on page 4215.
For all other services, create an auth token. Note that your service might refer to this credential as a Swift password.
Use the auth token wherever you are asked to provide a Swift password.


Create an auth token


1. View the user's details:
• If you're creating an auth token for yourself:

Open the Profile menu ( ) and click User Settings.


• If you're an administrator creating an auth token for another user: In the Console, click Identity, and then click
Users. Locate the user in the list, and then click the user's name to view the details.
2. On the left side of the page, click Auth tokens.
3. Click Generate Token.
4. Enter a friendly description for the token and click Generate Token.
The new token is displayed.
5. Copy the token immediately, because you can't retrieve it again after closing the dialog box. Also, make sure you
have this token available when you create your Oracle Platform Services instance.

Scenario 2: I have an existing VCN in Oracle Cloud Infrastructure that I want to use for my Oracle Platform Services instance

You can use an existing VCN. The VCN must have at least one public subnet. Perform these tasks to complete the prerequisites:

Permit Oracle Platform Services to access resources


1. In the Console, navigate to the root compartment of your tenancy by clicking your tenancy name in the
Compartment list.
2. Open the navigation menu. Under Governance and Administration, go to Identity and click Policies.
3. Click Create Policy.
4. Enter the following:
• Name: A unique name for the policy. The name must be unique across all policies in your tenancy. You
cannot change this later. Avoid entering confidential information.
• Description: A friendly description. You can change this later if you want to.
• Statement: To allow Oracle Platform Services access to use the network, enter the following policy. Click +
after each statement to add another. In each statement, replace <compartment_name> with the name of the
compartment where your VCN resides.

Allow service PSM to inspect vcns in compartment <compartment_name>

Allow service PSM to use subnets in compartment <compartment_name>

Allow service PSM to use vnics in compartment <compartment_name>

Allow service PSM to manage security-lists in compartment <compartment_name>

For more information about policies, see Policy Basics on page 2145 and also Policy Syntax on page 2172.
5. (Optional) If you want to enable the use of an Autonomous Transaction Processing or Oracle Cloud Infrastructure
Database instance in your compartment as the infrastructure schema database for your Oracle Java Cloud Service
instance, then add the following statements:

Allow service PSM to inspect autonomous-database in compartment <compartment_name>

Allow service PSM to inspect database-family in compartment <compartment_name>
6. Click Create.


Create a bucket
1. Open the Region menu and select the region in which you want to create the Oracle PaaS service instance.
Select a region that's within the default data region of your account. For example, if your default data region is
EMEA, then select Germany Central (Frankfurt) or UK South (London).
2. Open the navigation menu. Under Core Infrastructure, click Object Storage.
3. Choose the compartment you want to create the bucket in.
4. Click Create Bucket.
5. In the Create Bucket dialog, enter a bucket name, for example: PaasBucket. Make a note of the name you enter.
You will need it when you create an instance for your Oracle Platform Service later. Avoid entering confidential
information.
6. Click Create Bucket.

Set up credentials to use with Object Storage


For Big Data Cloud, set up an API signing key:
Set up an API signing key
Follow the instructions in this topic: Required Keys and OCIDs on page 4215.
For all other services, create an auth token. Note that your service might refer to this credential as a Swift password.
Use the auth token wherever you are asked to provide a Swift password.
Create an auth token
1. View the user's details:
• If you're creating an auth token for yourself:

Open the Profile menu ( ) and click User Settings.


• If you're an administrator creating an auth token for another user: In the Console, click Identity, and then click
Users. Locate the user in the list, and then click the user's name to view the details.
2. On the left side of the page, click Auth Tokens.
3. Click Generate Token.
4. Enter a friendly description for the token and click Generate Token.
The new token is displayed.
5. Copy the auth token immediately, because you can't retrieve it again after closing the dialog box. Also, make sure
you have this token available when you create your Oracle Platform Services instance.

Information About Supported Platform Services


The following table lists the services supported on Oracle Cloud Infrastructure and links to more information about
using those services on Oracle Cloud Infrastructure:

Service                              More Information

Analytics Cloud                      Getting Started with Oracle Analytics Cloud
API Platform Cloud Service           Get Started with Oracle API Platform Cloud Service
Autonomous Data Warehouse            Getting Started with Autonomous Data Warehouse
Integration                          Oracle Integration
Autonomous Mobile Cloud Enterprise   About Oracle Autonomous Mobile Cloud Enterprise
NoSQL Database Cloud Service         Oracle NoSQL Database Cloud Service
Oracle Visual Builder                Administering Oracle Visual Builder
Data Hub Cloud Service               About Oracle Data Hub Cloud Service Clusters in Oracle Cloud Infrastructure
Data Integration Platform Cloud      What is Oracle Data Integration Platform Cloud
Database Cloud Service               About Database Deployments in Oracle Cloud Infrastructure
Developer Cloud Service              Using Oracle Developer Cloud Service
Event Hub Cloud Service              About Instances in Oracle Cloud Infrastructure
Java Cloud Service                   About Java Cloud Service Instances in Oracle Cloud Infrastructure
Oracle SOA Cloud Service             About SOA Cloud Service Instances in Oracle Cloud Infrastructure Classic and Oracle Cloud Infrastructure

Renaming a Cloud Account


This topic describes the process of changing your cloud account name. When you sign up for Oracle Cloud, you get
a cloud account and an Oracle Cloud Infrastructure tenancy. Both the cloud account and tenancy have an ID and a
name. Oracle assigns the same name to the cloud account and the tenancy, but they each have a unique ID. You need
to specify the tenancy name when you sign in to Oracle Cloud Infrastructure Console, so that you arrive at the right
account. Any programmatic access uses the tenant ID or cloud account ID, not its name. For example:
• Sample account and tenancy name: "OracleCustomer1"
• Sample cloud account ID: "cacct-7a26a4exampleuniqueID"
• Sample tenancy ID: "ocid1.tenancy.oc1..exampleuniqueID"
You can view your cloud account name in the Tenancy Details page. You might need to rename your cloud account
if the name that it was initially given is no longer relevant or correct. For example:
• You created a trial account called MyTrial, and then it became the main account for your company.
• You had an acquisition that is forcing name changes.
Follow these guidelines for a successful rename:
• Plan ahead and inform others that you plan on changing the name.
• Change the name during off-hours to reduce impact on users in your tenancy.
• Notify personnel who use Oracle Cloud when the rename is complete.
Note:

Only an Oracle Identity Cloud Service (IDCS) cloud account administrator can rename the cloud account. After the account name changes, you can’t use the old name to sign in to the Console. Existing sessions keep working, but new sessions need to use the new name. You can’t change an account name back to its old name.
You can change your cloud account name in the My Oracle Services dashboard, which is accessible from the Profile
menu's Service User Console option.


Important:

Two methods for signing in are available: by using the oracle.com URL,
which always uses IDCS, and by using the Console URL. If you use the
Console URL to navigate directly to the Console and sign in, the Service
User Console menu option isn't available in the Profile menu. Choose the
"oraclecloudidentityservice" option when signing in if you're unsure. For
more information, see Signing In to the Console on page 41.
To change a cloud account name
1. Click the Profile icon and select Service User Console. The My Oracle Services dashboard opens in another
browser window.
2. From the main menu, select Account.
3. On the Account page, select the Account Management tab.
4. Next to the Account Name field, click the Rename Account button. This button is only available if you are an
IDCS cloud account administrator.
5. In the Rename Account dialog box, enter the new account name, and click OK.
After you submit the change, it takes about 15 minutes to rename the account. When the rename is complete, the
account admin receives an email, which states that you need to use your existing credentials to sign in.
When you sign in to the Console, you are required to use the new account name (<new_account_name>) when
prompted for a cloud account or tenant name.
When you rename an account, all references to the cloud account name and the tenancy name are updated, including
the following names:
• IDCS instance name
• Amazon S3 Compatibility API Designated Compartment
• SWIFT API Designated Compartment
The API Designated Compartment names are listed on the Tenancy Information tab of the Tenancy Details page,
under Object Storage Settings. The Object Storage Namespace, also shown in the Object Storage Settings area, is not updated; for older accounts, the namespace was set to the original account name, while newer accounts have a short random string as the namespace.

Billing and Payment Tools Overview


Oracle Cloud Infrastructure provides various billing and payment tools that make it easy to manage your service
costs.

Budgets
Budgets can be used to set thresholds for your Oracle Cloud Infrastructure spending. You can set alerts on your
budget to let you know when you might exceed your budget, and you can view all of your budgets and spending
from one single place in the Oracle Cloud Infrastructure console. See Budgets Overview on page 277 for more
information.

Cost Analysis
Cost Analysis provides easy-to-use visualization tools to help you track and optimize your Oracle Cloud
Infrastructure spending. For more information, see Checking Your Expenses and Usage on page 56.

Cost and Usage Reports


A cost report is a comma-separated value (CSV) file that is similar to a usage report, but also includes cost columns.
The report can be used to obtain a breakdown of your invoice line items at resource-level granularity. As a result, you
can optimize your Oracle Cloud Infrastructure spending, and make more informed cloud spending decisions.


A usage report is a comma-separated value (CSV) file that can be used to get a detailed breakdown of resources in
Oracle Cloud Infrastructure for audit or invoice reconciliation.
For more information, see Cost and Usage Reports Overview on page 281.

Unified Billing
You can unify billing across multiple tenancies by sharing your subscription between tenancies. For more
information, see Unified Billing Overview on page 300.

Invoices
You can view and download invoices for your Oracle Cloud Infrastructure usage. For more information, see Viewing
Your Subscription Invoice.

Payment Methods
The Payment Method section of the Oracle Cloud Infrastructure Console allows you to easily manage how you pay
for your Oracle Cloud Infrastructure usage. For more information, see Changing Your Payment Method on page 57.

Budgets Overview
A budget can be used to set soft limits on your Oracle Cloud Infrastructure spending. You can set alerts on your
budget to let you know when you might exceed your budget, and you can view all of your budgets and spending from
one single place in the Oracle Cloud Infrastructure console.

How Budgets Work


Budgets are set on cost-tracking tags or on compartments (including the root compartment) to track all spending in
that cost-tracking tag or for that compartment and its children.
All budget alerts are evaluated every hour in most regions, and every four hours in IAD. To see the last time a budget was evaluated, open the details for a budget. You will see fields that show the current spend, the forecast, and the "Spent in period" field, which shows you the time period over which the budget was evaluated. When a budget alert fires, the email recipients configured in the budget alert receive an email.

Budget Concepts
The following concepts are essential to working with budgets:
BUDGET
A monthly threshold you define for your Oracle Cloud Infrastructure spending. Budgets are set on cost-
tracking tags or compartments and track all spending in the cost-tracking tag or compartment and any child
compartments.
Note:

The budget tracks spending in the specified target compartment, but you need to have permissions to manage budgets in the root compartment of the tenancy to create and use budgets.
ALERT
You can define email alerts that get sent out for your budget. You can send a customized email message body with these alerts. Alerts are evaluated every hour in most regions (every four hours in IAD), and can be triggered when your actual or your forecasted spending hits either a percentage of your budget or a specified set amount.

Required IAM Policy


To use Oracle Cloud Infrastructure, you must be granted security access in a policy by an administrator. This access
is required whether you're using the Console or the REST API with an SDK, CLI, or other tool. If you get a message
that you don’t have permission or are unauthorized, verify with your administrator what type of access you have and
which compartment you should work in.
If you're new to policies, see Getting Started with Policies on page 2143 and Common Policies on page 2150.
To use budgets, you must be in a group that can use "usage-budgets" in the tenancy (which is the root compartment)
or be able to use all resources in the tenancy. All budgets are created in the root compartment, regardless of the
compartment they are targeting, so IAM policies that grant budget permissions outside of the root will not be
meaningful.

IAM Policy                                                     Description

Allow group accountants to inspect usage-budgets in tenancy    Accountants can inspect budgets, including spend.
Allow group accountants to read usage-budgets in tenancy       Accountants can read budgets, including spend (same as list).
Allow group accountants to use usage-budgets in tenancy        Accountants can create and edit budgets and alert rules.
Allow group accountants to manage usage-budgets in tenancy     Accountants can create, edit, and delete budgets and alert rules.

Tagging Resources
You can apply tags to your resources to help you organize them according to your business needs. You can apply tags
at the time you create a resource, or you can update the resource later with the wanted tags. For general information
about applying tags, see Resource Tags on page 213.

Authentication and Authorization


Each service in Oracle Cloud Infrastructure integrates with IAM for authentication and authorization, for all
interfaces (the Console, SDK or CLI, and REST API).
An administrator in your organization needs to set up groups, compartments, and policies that control which users
can access which services, which resources, and the type of access. For example, the policies control who can create
new users, create and manage the cloud network, launch instances, create buckets, download objects, etc. For more
information, see Getting Started with Policies on page 2143. For specific details about writing policies for each of
the different services, see Policy Reference on page 2176.
If you’re a regular user (not an administrator) who needs to use the Oracle Cloud Infrastructure resources that
your company owns, contact your administrator to set up a user ID for you. The administrator can confirm which
compartment or compartments you should be using.

Creating Automation for Budgets Using the Events Service


You can create automation based on state changes for your Oracle Cloud Infrastructure resources by using event
types, rules, and actions. For more information, see Overview of Events on page 1788.

Managing Budgets
This topic discusses how to view and manage your budgets.


Using the Console


To create a budget
1. Open the navigation menu. Under Governance and Administration, go to Account Management and click
Budgets.
2. Click Create Budget at the top of the budgets list. The Create Budget dialog is displayed.
3. Select either Compartment or Cost-Tracking Tag to select the type of target for your budget.
4. Enter a name for your budget in the Name text field. The name can only contain alphanumeric characters, dashes,
and the underscore character, and can’t begin with a number. Avoid entering confidential information.
5. Enter a description for the budget. Avoid entering confidential information.
6. Select the target for your budget:
• For budgets targeting a compartment:
• Select a target compartment for your budget from the Target Compartment drop-down list. Note that while the budget tracks spending in the specified target compartment, you need to have permissions to manage budgets in the root compartment of the tenancy to create and use budgets.
• For budgets targeting a cost-tracking tag:
• Select a tag namespace.
• Select a target cost-tracking tag key.
• Enter a value for the cost-tracking tag.
7. Enter a monthly amount for your budget in the Monthly Budget Amount field. The minimum allowed value for
your monthly budget is 1; the maximum allowed value is 999,999,999,999.
8. From Day of the Month to Begin Budget Processing, select the day of the month that you want budget
processing to periodically begin on each month. Setting this value allows you to create a budget that aligns
with your billing cycle date, and to receive more meaningful budget alerts. Below this field, Current Budget
Processing Period Based on Selection reflects the budget processing period, according to the day of the month
you chose. When viewing or editing a budget on its details page, the Budget Processing Period field also
displays this information.
Note:

If you select the 29th, 30th, or 31st as the day of the month, budget
processing begins on the last day of the month, for months that have fewer
than the respective days you have chosen (whether 29, 30, or 31).
9. You can optionally create an alert for your budget by creating a budget alert rule. In Budget Alert Rule on the
Create Budget dialog, configure your alert rule:
a. Select a threshold for your alert from the Threshold Metric drop-down list. There are two possible values:
Actual Spend watches the actual amount you spend in your compartment per month;
Forecast Spend watches your resource usage and alerts you when it appears that you'll exceed your budget.
The forecast algorithm is linear extrapolation and requires at least three days of consumption to trigger.
b. Select a threshold type from the Threshold Type drop-down list. You can select either a percentage of your
monthly budget (which must be greater than 0 and no greater than 10,000) or a fixed amount.
c. The label of the next text field changes depending on what type of threshold you selected. Enter either a
Threshold % or a Threshold Amount.
d. In the Email Recipients field, enter one or more email addresses to receive the alerts. Multiple addresses can
be separated using a comma, semicolon, space, tab, or new line.
e. Enter the body of your email alert in the Email Message field. The text of the email message cannot exceed
1000 characters. This message will be included with metadata about your budget, including the budget
name, the compartment, and the amount of your monthly budget. You can use this message for things like
providing instructions to the recipient that explain how to request a budget increase or reminding users about
corporate policies.
10. Advanced Options (optional): Click the Show advanced options link to add Tags to your budget. If you have permissions to create a resource, then you also have permissions to apply free-form tags to that resource. To apply a defined tag, you must have permissions to use the tag namespace. For more information about tagging, see Resource Tags on page 213. If you are not sure whether to apply tags, then skip this option (you can apply tags later) or ask your administrator.
11. Click the Create button to create your budget.
To view or edit a budget
1. Open the navigation menu. Under Governance and Administration, go to Account Management and click
Budgets.
2. From the list of budgets, click on the budget you want to edit. The budget detail screen will appear.
3. Click the Edit button. The Edit Budget dialog will appear.
4. You can edit the name of your budget or the budget amount. Avoid entering confidential information.
5. When you are finished, click Save Changes.
To delete a budget
1. From the list of budgets, select Delete from the context menu, or click the Delete button at the top of the budget detail screen. The Confirm Delete dialog will appear.
2. Click the Confirm button to delete the budget, or cancel by clicking Cancel.
To manage tags for a budget
1. Open the navigation menu. Under Governance and Administration, go to Account Management and click
Budgets.
2. From the list of budgets, click on the budget you want to tag. The budget detail screen will appear.
3. Click the Add tag(s) button to add a tag.
4. Click the Tags tab and then click on the pencil icon next to a tag you want to edit or remove.
Using the API
For information about using the API and signing requests, see REST APIs on page 4409 and Security Credentials
on page 181. For information about SDKs, see Software Development Kits and Command Line Interface on page
4262.
Use the following operations to manage budgets:
• ListBudgets
• GetBudget
• CreateBudget
• DeleteBudget
• UpdateBudget
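For example, the following is a minimal sketch that lists budgets with the OCI Python SDK, assuming its BudgetClient wrapper for the Budgets API. Because all budgets are created in the root compartment, the tenancy OCID from the configuration file is used as the compartment ID.

# Minimal sketch: list budgets with the OCI Python SDK.
# All budgets are created in the root compartment, so the tenancy OCID is used.
import oci

config = oci.config.from_file()
client = oci.budget.BudgetClient(config)

for budget in client.list_budgets(compartment_id=config["tenancy"]).data:
    print(budget.display_name, budget.amount, budget.actual_spend, budget.forecasted_spend)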

Managing Budget Alert Rules


You can set email alerts on your budgets. You can set alerts that are based on a percentage of your budget or an
absolute amount, and on your actual spending or your forecast spending.
This topic covers how to view and manage your budget alert rules.
Required IAM Policy
To use Oracle Cloud Infrastructure, you must be granted security access in a policy by an administrator. This access
is required whether you're using the Console or the REST API with an SDK, CLI, or other tool. If you get a message
that you don’t have permission or are unauthorized, verify with your administrator what type of access you have and
which compartment you should work in.
If you're new to policies, see Getting Started with Policies on page 2143 and Common Policies on page 2150.
Using the Console
To create a budget alert rule
1. Click the budget that you want to create an alert for from the budgets list.
2. In the Budget Alert Rules panel at the bottom of the screen, click the Create Budget Alert Rule button.


3. Configure your alert rule:


a. Select a threshold for your alert from the Threshold Metric drop-down list. There are two possible values:
Actual Spend will watch the actual amount you spend in your compartment per month;
Forecast Spend will watch your resource usage and alert you when it appears that you'll exceed your budget.
The forecast algorithm is linear extrapolation and requires at least 3 days of consumption to trigger.
b. Select a threshold type from the Threshold Type drop-down list. You can select either a percentage of your
monthly budget (which must be greater than 0 and no greater than 10,000) or a fixed amount.
c. The label of the next text field changes depending on what type of threshold you selected. Enter either a
Threshold % or a Threshold Amount.
d. In the Email Recipients field, enter one or more email addresses to receive the alerts. Multiple addresses can
be separated using a comma, semicolon, space, tab, or new line.
e. Enter the body of your email alert in the Email Message field. The text of the email message cannot exceed
1000 characters. This message will be included with metadata about your budget, including the budget name,
the compartment, and the amount of your monthly budget. You can use this message for things like providing
instructions to the recipient that explain how to request a budget increase or reminding users about corporate
policies.
4. Click the Create button to create your alert.
To view or edit a budget alert rule
1. In the list of budget alert rules, click the menu icon at the right side of the list and select View/Edit from the
context menu.
2. Edit your alert rule.
3. Confirm your changes by clicking Save Changes, or dismiss the dialog without saving by clicking the Cancel
button.
To delete a budget alert rule
1. In the list of budget alert rules, click the menu icon at the right side of the list and select Delete from the context
menu.
2. Confirm or cancel the delete operation in the Confirm Delete dialog by clicking either the Confirm or Cancel
button.
Using the API
For information about using the API and signing requests, see REST APIs on page 4409 and Security Credentials
on page 181. For information about SDKs, see Software Development Kits and Command Line Interface on page
4262.
Use the following operations to manage budget alert rules:
• ListAlertRules
• GetAlertRule
• CreateAlertRule
• DeleteAlertRule
• UpdateAlertRule
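As a companion to the operations above, the following is a minimal sketch that lists the alert rules attached to one budget with the OCI Python SDK, assuming the same BudgetClient; the budget OCID is a placeholder.

# Minimal sketch: list the alert rules attached to a single budget.
import oci

config = oci.config.from_file()
client = oci.budget.BudgetClient(config)

budget_id = "ocid1.budget.oc1..exampleuniqueID"  # placeholder
for rule in client.list_alert_rules(budget_id=budget_id).data:
    print(rule.display_name, rule.type, rule.threshold, rule.threshold_type)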

Cost and Usage Reports Overview


A cost report is a comma-separated value (CSV) file that is similar to a usage report, but also includes cost columns.
The report can be used to obtain a breakdown of your invoice line items at resource-level granularity. As a result, you
can optimize your Oracle Cloud Infrastructure spending, and make more informed cloud spending decisions.
A usage report is a comma-separated value (CSV) file that can be used to get a detailed breakdown of resources in
Oracle Cloud Infrastructure for audit or invoice reconciliation.
In summary, usage reports indicate the quantity of what is consumed, while cost reports indicate the cost of resource
consumption.


Note:

Cost and usage reports do not apply to non-metered tenancies.

How Cost Reports Work


The cost report is automatically generated daily and is stored in an Oracle-owned Object Storage bucket. It contains one row for each Oracle Cloud Infrastructure resource (such as an instance, an Object Storage bucket, or a VNIC) per hour, along with consumption information (usage, price, cost), metadata, and tags. Cost reports generally contain 24 hours of usage data, although occasionally a cost report may contain late-arriving data that is older than 24 hours.
Cost reports may contain corrections. Corrections are added as new rows to the report, with the lineItem/isCorrection column set and the referenceNo value of the corrected line populated in the lineItem/backReference column.
Cost reports are retained for one year.
The file name for each cost report is appended with an automatically incrementing numerical value.

Cost Report Schema


The following table shows the cost report schema.

Field Name - Description

lineItem/referenceNo - Line identifier. Used for debugging and corrections.
lineItem/TenantId - The identifier (OCID) for the Oracle Cloud Infrastructure tenant.
lineItem/intervalUsageStart - The start time of the usage interval for the resource.
lineItem/intervalUsageEnd - The end time of the usage interval for the resource.
product/service - The service that the resource is in.
product/compartmentId - The ID of the compartment that contains the resource.
product/compartmentName - The name of the compartment that contains the resource.
product/region - The region that contains the resource.
product/availabilityDomain - The availability domain that contains the resource.
product/resourceId - The identifier for the resource.
usage/billedQuantity - The quantity of the resource that has been billed. (For overage amounts, see usage/billedQuantityOverage.)
cost/billingUnitReadable - The unit measure associated with the usage/billedQuantity, in the form <count> <GiB/MiB/TiB/PiB> <HOURS/MONTH>. Example: ONE GiB MONTH DATA_TRANSFER.
cost/subscriptionId - A unique identifier associated with your commitment.
cost/productSku - The Part Number for the resource in the line.
product/description - The product description for the resource in the line.
cost/unitPrice - The cost billed to you for each unit of the resource. (For overage amounts, see cost/unitPriceOverage.)
cost/myCost - The cost charged for this line of usage, derived from the billed quantity and cost/unitPrice. (For overage amounts, see cost/myCostOverage.)
cost/currencyCode - The currency code for your tenancy.
usage/billedQuantityOverage - The usage quantity for which you were billed.
cost/unitPriceOverage - The cost per unit of usage for overage usage of a resource.
cost/myCostOverage - The cost billed for overage usage of a resource.
lineItem/backReference - Data amendments and corrections reference. If a correction is made, a new row is added with the corrected values and a reference to the corrected line; see the lineItem/isCorrection field.
lineItem/isCorrection - Used if the current line is a correction. See the lineItem/backReference field for the reference to the corrected line item.
tags/ - The report contains one column per tag definition.

How Usage Reports Work


The usage report is automatically generated daily and is stored in an Oracle-owned Object Storage bucket. It contains one row for each Oracle Cloud Infrastructure resource (such as an instance, an Object Storage bucket, or a VNIC) per hour, along with consumption information, metadata, and tags. Usage reports generally contain 24 hours of usage data, although occasionally a usage report may contain late-arriving data that is older than 24 hours.
Note:

If you change any cost tracking tags during a particular hour time slot, the
last cost tracking tag that is chosen is what gets applied to that hour. For
example, if you changed a tag from "AAA" to "BBB" at 10:40, the usage
for 10:00-11:00 would reflect "BBB" for the tag. In addition, tags cannot be
applied retroactively.
The report may contain corrections. Corrections are added as new rows to the report, with the lineItem/isCorrection column set and the referenceNo value of the corrected line populated in the lineItem/backReference column.
Usage reports are retained for one year.
The file name for each usage report is appended with an automatically incrementing numerical value.

Usage Report Schema


The following table shows the usage report schema.


Field Name - Description

lineItem/referenceNo - Line identifier. Used for debugging and corrections.
lineItem/TenantId - The identifier (OCID) for the Oracle Cloud Infrastructure tenant.
lineItem/intervalUsageStart - The start time of the usage interval for the resource.
lineItem/intervalUsageEnd - The end time of the usage interval for the resource.
product/service - The service that the resource is in.
product/resource - The resource name used by the metering system.
product/compartmentId - The ID of the compartment that contains the resource.
product/compartmentName - The name of the compartment that contains the resource.
product/region - The region that contains the resource.
product/availabilityDomain - The availability domain that contains the resource.
product/resourceId - The identifier for the resource.
usage/consumedQuantity - The quantity of the resource that has been consumed.
usage/billedQuantity - The quantity of the resource that has been billed.
usage/consumedQuantityUnits - The unit for the consumed quantity and billed quantity.
usage/consumedQuantityMeasure - The measure for the consumed quantity and billed quantity.
lineItem/backReference - Data amendments and corrections reference. If a correction is made, a new row is added with the corrected values and a reference to the corrected line; see the lineItem/isCorrection field.
lineItem/isCorrection - Used if the current line is a correction. See the lineItem/backReference field for the reference to the corrected line item.
tags/ - The report contains one column per tag definition.

Accessing Cost and Usage Reports


Cost and usage reports are comma-separated value (CSV) files that are generated daily and stored in an Object
Storage bucket. This topic describes how to access these reports.
Required IAM Policy
To use Oracle Cloud Infrastructure, you must be granted security access in a policy by an administrator. This access
is required whether you're using the Console or the REST API with an SDK, CLI, or other tool. If you get a message
that you don’t have permission or are unauthorized, verify with your administrator what type of access you have and
which compartment you should work in.
If you're new to policies, see Getting Started with Policies on page 2143 and Common Policies on page 2150.
To use cost and usage reports, the following policy statement is required:

define tenancy usage-report as ocid1.tenancy.oc1..aaaaaaaaned4fkpkisbwjlr56u7cj63lf3wffbilvqknstgtvzub7vhqkggq
endorse group <group> to read objects in tenancy usage-report

Authentication and Authorization


Each service in Oracle Cloud Infrastructure integrates with IAM for authentication and authorization, for all
interfaces (the Console, SDK or CLI, and REST API).


An administrator in your organization needs to set up groups, compartments, and policies that control which users
can access which services, which resources, and the type of access. For example, the policies control who can create
new users, create and manage the cloud network, launch instances, create buckets, download objects, etc. For more
information, see Getting Started with Policies on page 2143. For specific details about writing policies for each of
the different services, see Policy Reference on page 2176.
If you’re a regular user (not an administrator) who needs to use the Oracle Cloud Infrastructure resources that
your company owns, contact your administrator to set up a user ID for you. The administrator can confirm which
compartment or compartments you should be using.
Using the Console
To download a cost or usage report:
1. Open the navigation menu. Under Governance and Administration, go to Account Management and select
Cost and Usage Reports.
2. Click the report you want to download from the list, and follow your browser's instructions for downloading.
Using the API
For information about using the API and signing requests, see REST APIs on page 4409 and Security Credentials
on page 181. For information about SDKs, see Software Development Kits and Command Line Interface on page
4262.
To download a cost or usage report, use the Object Storage APIs. The reports are stored in the tenancy's home region.
The Object Storage namespace used for the reports is bling; the bucket name is the tenancy OCID.
The following example shows how to download a cost report, usage report (or both) using a Python script:

import oci
import os

# This script downloads all of the cost reports, usage reports, or both for a
# tenancy (specified in the config file).
#
# Prerequisite: Create an IAM policy to endorse users in your tenancy to
# read cost reports from the OCI tenancy.
#
# Example policy:
# define tenancy reporting as ocid1.tenancy.oc1..aaaaaaaaned4fkpkisbwjlr56u7cj63lf3wffbilvqknstgtvzub7vhqkggq
# endorse group group_name to read objects in tenancy reporting
#
# Note: The only value you need to change is the group name. Do not change
# the OCID in the first statement.

reporting_namespace = 'bling'

# Download all usage and cost files. You can comment out based on the
# specific need:
prefix_file = ""                        # For cost and usage files
# prefix_file = "reports/cost-csv"      # For cost
# prefix_file = "reports/usage-csv"     # For usage

# Update this value
destination_path = 'downloaded_reports'

# Make a directory to receive reports
if not os.path.exists(destination_path):
    os.mkdir(destination_path)

# Get the list of reports
config = oci.config.from_file(oci.config.DEFAULT_LOCATION, oci.config.DEFAULT_PROFILE)
reporting_bucket = config['tenancy']
object_storage = oci.object_storage.ObjectStorageClient(config)
report_bucket_objects = object_storage.list_objects(reporting_namespace, reporting_bucket, prefix=prefix_file)

for o in report_bucket_objects.data.objects:
    print('Found file ' + o.name)
    object_details = object_storage.get_object(reporting_namespace, reporting_bucket, o.name)
    filename = o.name.rsplit('/', 1)[-1]

    with open(destination_path + '/' + filename, 'wb') as f:
        for chunk in object_details.data.raw.stream(1024 * 1024, decode_content=False):
            f.write(chunk)

    print('----> File ' + o.name + ' Downloaded')

Cost Analysis Overview


Cost Analysis is an easy-to-use visualization tool that helps you track and optimize your Oracle Cloud Infrastructure spending. It allows you to generate charts and to download accurate, reliable tabular reports of aggregated cost data on your Oracle Cloud Infrastructure consumption. Use the tool for spot checks of spending trends and for generating reports. Common scenarios you might be interested in include:
• Show monthly costs for compartment X and its children, grouped by service or by tag.
• Show daily costs for tag key A and tag key B, values X, Y and Z, grouped by service and product description
(SKU).
• Show hourly costs for service = compute or database, grouped by compartment name.
You can choose the dates you're interested in, filter to the specific tags, compartments, or services you want, and pick how you want the data grouped. A chart and corresponding data table are generated, and the results can also be downloaded as a data table.
If you want to re-create the breakdown provided by the former Classic Version of the Cost Analysis tool, apply the SKU (Part Number) grouping dimension in the current version of Cost Analysis. To explore your costs in new ways, we recommend viewing your costs based on Service, or Service and Product Description. If you are doing cost tracking, we recommend grouping by Compartment or Tag.
Note:

All tags, not only cost tracking tags, are supported.


Note:

Costs for tags are based on the date at which the tag was associated with a
resource. It does not work retroactively for resources on which these tags are
applied.

Required IAM Policy


To use Oracle Cloud Infrastructure, you must be granted security access in a policy by an administrator. This access
is required whether you're using the Console or the REST API with an SDK, CLI, or other tool. If you get a message
that you don’t have permission or are unauthorized, verify with your administrator what type of access you have and
which compartment you should work in.
If you're new to policies, see Getting Started with Policies on page 2143 and Common Policies on page 2150.


To use Cost Analysis, the following policy statement is required:

Allow group <group_name> to read usage-report in tenancy

Authentication and Authorization


Each service in Oracle Cloud Infrastructure integrates with IAM for authentication and authorization, for all
interfaces (the Console, SDK or CLI, and REST API).
An administrator in your organization needs to set up groups, compartments, and policies that control which users
can access which services, which resources, and the type of access. For example, the policies control who can create
new users, create and manage the cloud network, launch instances, create buckets, download objects, etc. For more
information, see Getting Started with Policies on page 2143. For specific details about writing policies for each of
the different services, see Policy Reference on page 2176.
If you’re a regular user (not an administrator) who needs to use the Oracle Cloud Infrastructure resources that
your company owns, contact your administrator to set up a user ID for you. The administrator can confirm which
compartment or compartments you should be using.

Cost Analysis Query Fields


The following table describes the Cost Analysis query fields.

Field Name Description


Time Period (UTC) Allows you to query predefined time ranges for data
available in the usage store. Other time ranges can be
queried using the Classic Version.
The available options in the drop-down (for example, This
Month, Last Month) are according to the UTC time zone,
and are based on the calendar year. The actual time ranges
are indicated in parentheses. When changing the time
period, it is also indicated just above the chart.
Granularity (hourly, daily, monthly) is based on the
requested date range size. The logic is the following:
• Hourly: 48 hours or less
• Daily: > 48 hours, <= 2 months
• Monthly: > 2 months
Note: Historical data is currently being back-filled for tenancies and may not appear immediately. As that process completes, up to twelve months of past consumption data will become available.



Show Allows you to view the report in terms of Cost (the default) or Usage.
To view Usage, you must apply a filter for a unit by selecting one from the Filters field, via the Show Usage Info dialog box. Oracle Cloud Infrastructure services have unit measures that span from GBs/month to CPU/hours to API requests.
When you select Usage, a dialog appears that prompts you to add a filter for a unit to view usage data.
Note: For Usage, unit filtering is the only possible selection, and you can only choose one value for the unit.

Cumulative Select this option to modify the values so that they're cumulative for the selected time period. For example, consider 10 days of data where the value for each day is $5. Selecting Cumulative displays values of 5, 10, 15, 20, 25, 30, 35, 40, 45, and 50 across the 10 days, respectively. In a non-cumulative chart, the values display as 5, 5, 5, 5, 5, 5, 5, 5, 5, 5.



Filters Allows filtering on the following:
• Availability domain
• Compartment
Note: Filtering by compartment displays usage and costs attributed to all resources in the selected compartments and their child compartments.
• By OCID
• By Name
• By Path (for example, root/compartmentname/compartmentname)
• Platform (Gen-1 are services which are not OCI native; Gen-2 includes all OCI native services)
• Tag
• By Tag Namespace
• By TagKey + Value
• Region
• Service
• Product description (the corresponding human-readable name)
• SKU - Part Number (for example, B91444)
• Unit
See Filters for more information on adding, editing, and removing filters, and on filter logic.



Grouping Dimensions Allows visualizing the data in terms of the particular
grouping. A grouping dimension by Service is displayed
by default. You can view only one grouping dimension at a
time.
• Availability domain
• Compartment. When you choose to group by
compartment, you can pick the display name
value, and a compartment depth. The compartment
depth corresponds to the lowest level you want the
compartments to be grouped by. All levels above that
grouping level return just what is directly in those
compartments. The grouping level returns values for all
resources in those compartments, plus all resources in
compartments below it.
• Display As
• Display as Compartment Name
• Display as Compartment OCID
• Display as Compartment Path
Note:

If Compartment OCID or
Compartment Name are chosen,
you cannot view Compartment
Level.
• Compartment Level
• All (the default): Every compartment is
displayed. Values would display usage/spend
associated only with the resources in that
specific compartment.
• Level 1 (root only): Only 1 column is returned
(root), and values for resources contained in root
and every child compartment are displayed.
• Level 2 (root/<value>): Displays root, with values for root equaling only those resources in root. All compartments that are direct children of root are also returned. The values for each of those compartments are the sum of all resources therein, or within any children of those compartments.
• Level 3 (root/<value>/<value>): Returns root, with values for root equaling only those resources in root. All Level 2 compartments are also returned, but with values only equal to the resources contained in each of those specific compartments. The first child level of the Level 2 compartments is also returned. The values for the third level of compartments (root/child1/child2) would be equal to the resources in those compartments, plus all the resources in all the children of those compartments.
• Level 4 (root/<value>/<value>/<value>)
• Level 5 (root/<value>/<value>/<value>/
<value>)
• Platform (Gen-1 are services which are not Oracle Cloud Infrastructure native, while Gen-2 includes all Oracle Cloud Infrastructure native services)

Viewing and Working with the Chart Data


When the Cost Analysis page first loads, the default view is to show a grouping of services for the This Month time
period, grouped by Service. The Cost Analysis chart is organized in terms of time (UTC) on the X-axis, and the cost
amount on the Y-axis. When viewing a chart, you can hover the mouse over a data point in the chart to see more
information about it. The tooltip shows the cost value summary for the particular Y-axis item at a particular time,
whether you are viewing the chart as either a Bars (the default), Lines, or Stacked Lines chart.
To the right of the chart, the Legend box shows all the data by default, and each item is color-coded. You can click
the eye icon next to any of the Legend items to toggle the chart data on or off. For example, when viewing a chart
with various services and their costs, the Legend box includes all the impacted services related to the query. Toggling
one or more of the services shows or hides them dynamically from the chart output. Toggling the Legend data,
however, does not change the data shown in the table view, or what is downloaded.
When viewing a chart, you can also add filters or grouping dimensions (or both), to view the cost data according to
one or more filters, or in terms of both filters and a single grouping dimension.
A tabular view of the chart is also provided under the chart, which is updated as you apply different time period,
filtering, and grouping dimension options. When viewing the table data, you can click the column header to sort in
ascending or descending order.
Note:

Data can take up to 48 hours to appear in Cost Analysis.

Filters
See the following for instructions on how to add, edit, and remove filters, as well as filter logic.
To add filters
1. Open the navigation menu. Under Governance and Administration, go to Account Management and click Cost
Analysis.
2. From Time Period (UTC), select a time period.
3. From Show, select whether you want to view Cost or Usage.
4. Choose from three chart types, whether Bars (the default), Lines, or Stacked Lines.
5. From Filters, select a filter. A dialog specific to the chosen filter is displayed. For example, if you chose Service,
select a service from the drop-down menu. You can add multiple services if preferred, or click the X icon to
remove service filters. Click Select when you are finished selecting filtering criteria.
6. Click Apply to apply the changes and reload the chart and table with the selected filters.
To edit a filter
1. To edit the filter after it has already been applied to the chart, click the filter. The filter's dialog box is displayed.
2. From the filter dialog drop-down menu, select one or more filters, and click Select.
3. Click Apply to apply the changes and reload the chart and table with the selected filters.
To remove a filter
1. To remove the filter after it has been applied to the chart, click Clear All Filters, or click the filter's X icon under
Filters.
2. Click Apply to apply the changes and reload the chart and table without the selected filters.

Filter Logic
Filters are ORed within each specific filter, and ANDed between filters. For example, a filter for Service = Compute,
Block Storage, Object Storage, Database, and Tag = Tag Key "MyKey" displays data that is for (Compute OR Block
Storage OR Object Storage OR Database) AND Tag Key "MyKey".
The Tag filter, however, is a unique case. You can add multiple Tag filters, which function as a joined OR.
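As a rough illustration only (not Console code), the Service + Tag example above behaves like the following row-matching sketch; the row structure and field names here are hypothetical:

# Hypothetical row-matching sketch: OR within the Service filter, AND between filters.
selected_services = {"Compute", "Block Storage", "Object Storage", "Database"}

def row_matches(row):
    service_ok = row["service"] in selected_services   # OR across the selected services
    tag_ok = "MyKey" in row["tag_keys"]                 # Tag filter on Tag Key "MyKey"
    return service_ok and tag_ok                        # AND between the two filters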


Note:

Only ten Tag Key values are retrieved and shown in the drop-down when
you attempt to select a possible Tag Key value. Alternatively, you can
manually type in the Tag Key value you want to filter on.
Using Multiple Filters to View Costs
You can start by filtering the Cost Analysis chart data based on a single filter, and then add additional filters. For
example:
1. Set your Grouping Dimensions on page 292 to Service, to view your costs by service.
2. From Filters, add a filter by Tag.
3. Select a Tag Namespace (this example uses "Financial" as the selected namespace).
4. Select a Tag Key (this example uses "Owner" as the selected key).
5. Specify whether to Match any value (AND condition), or Match any of the following (OR condition).
For example, assuming the value "alpha" is the value, and if Match any of the following is chosen, it means show
all services that have "alpha" as the owner. Conversely, assuming multiple values "alpha" and "beta" are chosen,
and if Match any of the following is selected, this corresponds to an OR condition (meaning, filter to show the
costs from all the services from the "Financial" namespace, with the Tag Key "Owner", that matches either the
"alpha" or "beta" values).
6. Click Select, then Apply to reload the Cost Analysis chart with the filtered information.
You can also add another filter by tag to break the data down further. For example:
1. From Filters, add a filter by Tag.
2. Select a Tag Namespace (for example, the "Cost Center" namespace).
3. Select Match any of the following, and for example, filter for any "Cost Center" values of "1234" or "5678".
After clicking Select, then Apply, this filter shows the costs from all the services with the previous tag filter,
plus this second tag filter ("Financial" namespace, Tag Key "Owner", "alpha" or "beta" values + "Cost Center"
namespace with the values of "1234" or "5678"). The two tag filters together amount to an AND with the previous
filter (the two filters are shown adjacent to the Add Filter drop-down list).
Alternatively, instead of this second tag filter ("Cost Center" namespace with the values of "1234" or "5678"), you
could add a service filter (NETWORK), and that would show the costs from all the services from the "Financial"
namespace, with the Tag Key "Owner", that matches both the "alpha" or "beta" values, and is filtered by the
NETWORK service type.

Grouping Dimensions
See the following for instructions on how to view and change grouping dimensions. Grouping dimensions change the way data is aggregated, but do not change the sum. If a resource does not have a value for a particular field, a "no
value" column is displayed, which reflects the sum of those resources. Specifically, products which are Gen-1 often
do not have an Availability Domain, Compartment, or Resource ID.
To view grouping dimensions
1. Open the navigation menu. Under Governance and Administration, go to Account Management and click Cost
Analysis.
2. From Time Period (UTC), select a time period.
3. From Show, select whether you want to view Cost or Usage.
4. Choose from three chart types, whether Bars (the default), Lines, or Stacked Lines.
5. From Grouping Dimensions, select the preferred grouping dimension. If Compartment or Tag is chosen,
additional Compartment or Tag selection fields appear.
6. Click Apply to apply the changes and reload the chart and table with the selected grouping dimension.
To change a grouping dimension
1. To change the grouping dimension, select it from Grouping Dimensions.


2. Click Apply to apply the changes and reload the chart and table with the new grouping dimension.
Identifying a Resource that is Consuming Costs
If you notice a large amount of, for example, Database service usage in a chart and you want to identify which resource is responsible, you can group by Resource OCID from the Grouping Dimensions drop-down list. Next, from Filters, add a filter for the service type (Database in this example), and click Apply to reload the chart.
The chart reloads to show which resource OCIDs were driving the Database cost, both when hovering over the data
point in the chart, and in the Legend box. These resource OCIDs are also displayed in the data table below the chart.
If desired, you can save this information by clicking Download, and then selecting Download Table as CSV.
Tip:

The Legend box is set to a default size when the chart first loads, but you can click and drag the box to more easily read lengthy items within it (such as OCIDs).
To find more information about the resource, copy the OCID and enter it in the Console's Search box, to pinpoint
which resource is driving costs.
Grouping by Service and SKU, or Service and Product Description to Identify Costs
In some instances you may see multiple SKU numbers for a service. When grouping by Service and SKU (Part
Number), multiple SKU numbers for the same service appear in the Legend box. For example, if you had multiple
Compute entries in the Legend box but they have different SKUs, it means there is one resource using multiple
underlying infrastructure components. This case is actually most common for Block Storage (where multiple Block
Storage entries appear with different SKUs). Specifically, for the Block Storage case, you are charged for storage
itself, but you are also charged for any data transfers out of Block Storage. As a result, you will see multiple "Block
Storage" items listed with different SKU numbers in the Legend box. For example:

Block Storage / <SKU number 1>


Block Storage / <SKU number 2>
Block Storage / <SKU number 3>
Block Storage / <SKU number 4>
Another way of looking at this same type of data is to use the Service and Product Description grouping dimension.
The Legend box is sorted in the same manner, but presents the data differently. That is, according to the actual
product descriptions, versus the SKUs that these are associated with. For example:

Block Storage / Block Volume - Backup


Block Storage / Block Volume - Free
Block Storage / Block Volume - Performance Units
Block Storage / Block Volume - Storage
Tip:

By default, these longer descriptions may not be visible, and so you should
resize the Legend box to view them.
The Database service is also a useful example. There could be different instances of a database and you could also be
charged for Block Storage within the Database service. For example, this entry could appear in the Legend for such a
case:

Database / DBaaS - Attached Block Storage Volume - Standard Performance


You could also be charged in the Database service for network transfer. For example:

Database / Oracle Autonomous Data Warehouse - Exadata Storage


Charges for licensing of the application version can also occur:


Database / Database Cloud Service - Enterprise Edition High Performance


Cost and Usage reports are a good way to further slice such information you have noticed in the Cost Analysis charts.
For example, in a cost report you might notice a SKU number that's associated with Compute, and also see the same
SKU number that's associated with Block Storage. As a result, you may wonder if you were double-billed (though
this is actually not the case). To investigate the actual cost, you can first filter the cost CSV spreadsheet by the SKU.
Once you have applied a filter to the CSV using a particular SKU, you can see which services are consuming from the
product/Description column. For example, a SKU could be using a lot of "Block Volume - Performance Units", but
you also notice that "DBaaS - Attached Block Storage Volume" appears in this column.
To further segment the data, copy the resource ID from the product/resourceId column of, say, the "DBaaS - Attached Block Storage Volume" entry you noticed among all the "Block Volume - Performance Units" rows, and remove the SKU filter you applied previously. Next, filter the spreadsheet based on the resource ID instead. This then shows all the components (indicated in the product/Description column) that the particular resource ID is consuming.
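
The following is a minimal sketch of that workflow using pandas on a downloaded cost report. The file name is a placeholder, and the column names are taken from the cost report schema and the discussion above; verify them against the header row of your own report:

import pandas as pd

# Load a previously downloaded cost report (the file name is a placeholder).
df = pd.read_csv("cost_report.csv")

# Step 1: filter by a single SKU and see which product descriptions it covers.
sku_rows = df[df["cost/productSku"] == "B91444"]
print(sku_rows["product/Description"].value_counts())

# Step 2: take a resource OCID from one of those rows, drop the SKU filter,
# and list everything that resource is consuming.
resource_id = sku_rows["product/resourceId"].iloc[0]
resource_rows = df[df["product/resourceId"] == resource_id]
print(resource_rows.groupby("product/Description")["cost/myCost"].sum())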
Note:

When viewing a cost report, the cost/productSku and product/description


columns map to one another, and are adjacent columns in the CSV. For more
information on these fields and other fields that appear in the reports, see
Cost and Usage Reports Overview on page 281.
Similarly, you can use Cost Analysis to do a Resource OCID grouping dimension, with a Product description
filter, to show all the resources that are using a particular product. For example, if you choose Block Volume
- Performance Units as the filter, the Cost Analysis chart shows which resources are using "Block Volume -
Performance Units". See Identifying a Resource that is Consuming Costs on page 293 for more information on
identifying the particular resource(s).

Download Your Data


Click the Download button to download a CSV file of the data, or a PNG file of the chart. Downloading generates a
file that corresponds to the chart or table on the Cost Analysis page, inclusive of applied filters, sorting, and grouping
dimensions.

Using the Usage API


For information about using the API and signing requests, see REST APIs on page 4409 and Security Credentials
on page 181. For information about SDKs, see Software Development Kits and Command Line Interface on page
4262.
Use the following operations to manage usage:
• UsageSummary
• RequestSummarizedUsages
The Usage API allows retrieval of usage and cost data. You can:
• Query based on different granularity, for example, MONTHLY or DAILY.
• Specify queryType, for example, COST, USAGE.
• Filter and group by different dimension/tags, functioning like an SQL query.
• Use up to four groupBy parameters.
The following is a sample Usage endpoint URI that conforms to the schema:
• https://usageapi.<region>.oci.oraclecloud.com/20200107/usage
For more information about the API and to view the full list of endpoints, see the Usage API.
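For example, the following is a minimal sketch of calling RequestSummarizedUsages with the OCI Python SDK. The client, model, and attribute names should be verified against your SDK version, and the tenancy is read from the standard config file:

import oci
from datetime import datetime

config = oci.config.from_file()
usage_client = oci.usage_api.UsageapiClient(config)

details = oci.usage_api.models.RequestSummarizedUsagesDetails(
    tenant_id=config["tenancy"],
    time_usage_started=datetime(2020, 4, 1),
    time_usage_ended=datetime(2020, 7, 1),
    granularity="MONTHLY",
    query_type="COST",
    group_by=["service", "compartmentPath"],
    compartment_depth=2
)

response = usage_client.request_summarized_usages(details)
for item in response.data.items:
    print(item.service, item.compartment_path, item.computed_amount, item.currency)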


Using granularity
The Usage API supports MONTHLY, DAILY, and HOURLY granularity. All startTime values are inclusive, and all endTime values are exclusive, the same as a Java substring.
• For HOURLY, only a maximum 36-hour time period is supported, with no more precision than an hour. This
means no minutes or seconds in the input time.
• For MONTHLY, only the first date of the month to another first date of the month is supported. For example,
2020-06-01T00:00:00Z, a maximum 12-month period.
• For DAILY, no more precision than a day is supported, with a maximum 90-day period. You must enter this as
00:00:00. For example, 2020-06-01T00:00:00Z.

Using groupBy
In an API response, a dimension is shown only if it is included in groupBy. For example, if "service" isn't in groupBy, the "service" field in the response will be empty.
Note:

Only four groupBy parameters can be used at a time.


In addition:
• If the groupBy list is empty, "currency" will be added to groupBy.
• If the queryType is "USAGE", "unit" will be added to groupBy.
• If the queryType is "COST" or empty, "currency" will be added to groupBy.
• computedAmount works as expected only when "currency" is in groupBy.
• computedQuantity works as expected only when "unit" is in groupBy.

Using queryType
The API can query USAGE or COST. computedQuantity represents usage and computedAmount represents cost. To get the expected usage, set queryType to USAGE or add "unit" to groupBy, because usage is aggregated and grouped correctly only when grouping by unit.

Using filtering
Nested filtering in API requests is supported. The list of filters is evaluated by the operator. In each filter, all dimensions and tags are evaluated by the operator. Simultaneous evaluation of the filter list and dimensions/tags is not supported, which means dimensions or tags and the filter list can't be non-empty at the same time.
The supported operators are AND and OR. The two filters below are equivalent:

"filter": {
"operator": "AND",
"dimensions": [
{
"key": "service",
"value": "compute"
},
{
"key": "compartmentPath",
"value": "abc/cde"
}
],
"tags": [
{
"namespace": "compute",
"key": "created",
"value": "string"
}

],
"filters": null
}

or

"filter": {
"operator": "AND",
"dimensions": [],
"tags": [],
"filters": [{
"operator": "AND",
"dimensions": [{
"key": "service",
"value": "compute"
}],
"tags": null,
"filters": null
}, {
"operator": "AND",
"dimensions": [{
"key": "compartmentPath",
"value": "abc/cde"
}],
"tags": null,
"filters": null
},
{
"operator": "AND",
"dimensions": null,
"tags": [{
"namespace": "compute",
"key": "created",
"value": "string"
}],
"filters": null
}
]
}

Invalid example because dimensions and filters are non-empty at the same time:

"filter": {
"operator": "AND",
"dimensions": [{
"key": "compartmentPath",
"value": "abc/cde"
}],
"tags": [],
"filters": [{
"operator": "AND",
"dimensions": [{
"key": "service",
"value": "compute"
}],
"tags": null,
"filters": null
},
{
"operator": "AND",
"dimensions": null,
"tags": [{
"namespace": "compute",

"key": "created",
"value": "string"
}],
"filters": null
}
]
}

Querying with tags


As mentioned previously, only fields included in groupBy are shown in the response, so you need to add the tag-related fields to groupBy. For example:

"tagNamespace", "tagKey", "tagValue"

If you add tagKey to groupBy, all items in the response will have a tagKey field, but its value can be empty because some of your resources might not have that tag. We suggest adding all three of these to groupBy, so you can see a complete tag in the response:

"tagNamespace", "tagKey", "tagValue"

If you want to filter by tag, add the tag to the filter object. You can filter by any combination of tag namespace, key, and value.
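
For instance, a filter that restricts results to a single tag could be built with the SDK models as sketched below; the namespace, key, and value are the example values used earlier in this topic, and the model names should be verified against your SDK version:

import oci

# Sketch of a tag filter; "Financial"/"Owner"/"alpha" are example values.
tag_filter = oci.usage_api.models.Filter(
    operator="AND",
    dimensions=[],
    tags=[
        oci.usage_api.models.Tag(namespace="Financial", key="Owner", value="alpha")
    ]
)
# Pass tag_filter as the filter attribute of RequestSummarizedUsagesDetails, and include
# "tagNamespace", "tagKey", and "tagValue" in groupBy to see the complete tag in the response.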

Valid groupBy example

"tagNamespace", "tagKey", "tagValue", "service", "skuName", "skuPartNumber",


"unit",
"compartmentName", "compartmentPath", "compartmentId", "platform", "region",
"logicalAd",
"resourceId", "tenantId", "tenantName"

Note:

Only up to four groupBy parameters can be used in an API call.


"tenantId" and "tenantName" are not currently supported.

Valid filter dimension example

"service", "skuName", "skuPartNumber", "unit", "compartmentName",


"compartmentPath", "compartmentId",
"platform", "region", "logicalAd", "resourceId", "tenantId", "tenantName"

This is case-sensitive. "tenantId" and "tenantName" are not currently supported.

How to groupBy compartment?


groupBy compartment-related keys ("compartmentName", "compartmentPath", "compartmentId")
are different than the other groupBy keys.
To get the expected result, you must include compartmentDepth in the request; compartmentDepth must be >= 1 and <= 6. Grouping by compartment means that the usage or costs of all compartments at a higher depth are aggregated into the compartment at the given depth. For example:

Consider a hierarchy where the root compartment (depth 1) contains compartments B and C (depth 2), B contains compartments D and E (depth 3), and F (depth 4) is a child of one of them.

If the depth is 1, it means all usage or costs are grouped to the root compartment.
If the depth is 2, all compartments at depth 2 will contain the usage or costs of all their children. In the response, the root will contain its own usage, B will aggregate (B, D, E, F), and C will contain only C.

Why are some fields in a response empty?


Fields appear in the response only when they are included in groupBy. Not all fields in the response are currently available; only the fields mentioned in Valid groupBy example on page 297 are supported.

What is nextPageToken?
This field can be set to null. It is not currently supported.

Example request body


The best way to understand how the API works is to check how the Console uses the API. You can find the request body in the web browser's debug mode.

{
"tenantId": "ocid1.tenancy.oc1..<unique_ID>",
"timeUsageStarted": "2020-04-01T00:00:00.000Z",
"timeUsageEnded": "2020-07-01T00:00:00.000Z",
"granularity": "MONTHLY",
"queryType": "COST",
"groupBy": [
"tagNamespace",
"tagKey",
"tagValue",
"service",
"compartmentPath"
],
"compartmentDepth": 2,
"filter": null
}

After you make a request without any filter, you can see what the possible dimension and tag values are. Subsequently, you can make a request with a filter and a correct dimension value.

{
"tenantId": "ocid1.tenancy.oc1..<unique_ID>",
"timeUsageStarted": "2020-04-01T00:00:00.000Z",
"timeUsageEnded": "2020-07-01T00:00:00.000Z",
"granularity": "MONTHLY",
"groupBy": ["tagNamespace","tagKey","tagValue", "service",
"compartmentPath"],
"compartmentDepth": 2,
"filter": {
"operator": "AND",
"dimensions": [],
"tags": [],
"filters": [{
"operator": "AND",
"dimensions": [{
"key": "service",
"value": "compute"
}],
"tags": null,

"filters": null
}, {
"operator": "AND",
"dimensions": [{
"key": "compartmentPath",
"value": "abc/cde"
}],
"tags": null,
"filters": null
},
{
"operator": "AND",
"dimensions": null,
"tags": [{
"namespace": "compute",
"key": "created",
"value": "string"
}],
"filters": null
}
]
}
}

Using customized scripts, CLI, and SDK


If you write a customized script, Oracle does not support or assist with debugging your script. Only the CLI, SDK, and Terraform are supported. See 3. Create and Configure a Copy of oci-curl on page 2058 for more information on using customized scripts, and Command Line Interface (CLI) on page 4228 for more information about the CLI. For example:

oci raw-request --http-method POST \
  --target-uri https://usageapi.us-ashburn-1.oci.oraclecloud.com/20200107/usage \
  --request-body file:///<system_path>SimpleRequestSummarizedUsagesDetails.json \
  --config-file ~/Downloads/clitest.conf

SimpleRequestSummarizedUsagesDetails.json:

{
"tenantId": "ocid1.tenancy.oc1..<unique_ID>",
"timeUsageStarted": "2020-03-19T17:00:00.000000-07:00",
"timeUsageEnded": "2020-03-21T00:00:00Z",
"granularity": "DAILY",
"groupBy": [],
"compartmentDepth": null,
"filter": null,
"nextPageToken": "string"
}

clitest.conf:

[DEFAULT]
user=ocid1.user.oc1..<unique_ID>
fingerprint=<key_fingerprint>
key_file=<system_path>/oci_api_key.pem
#tenancy=ocid1.tenancy.oc1..<unique_ID>
tenancy=ocid1.tenancy.oc1..<unique_ID>
region=us-ashburn-1

Also see the Oracle Cloud Infrastructure SDKs at https://github.com/oracle/oci-java-sdk and https://github.com/oracle/oci-python-sdk.


Unified Billing Overview


This topic describes how you can unify billing across multiple tenancies by sharing your subscription. You should
consider sharing your subscription if you want to have multiple tenancies to isolate your cloud workloads, but
you want to have a single Universal Credits commitment. For example, you have a subscription with a $150,000
commitment, but you want to have three tenancies, because the credits are going to be used by three distinct groups
that require strictly isolated environments.
Two types of tenancies are involved when sharing a subscription in the Console:
• The parent tenancy (the one that is associated with the primary funded subscription).
• Child tenancies (those that are consuming from a subscription that is not their own).
Notable benefits of sharing a subscription include:
• Sharing a single commitment helps to avoid cost overages and allows consolidating your billing.
• Enabling multi-tenancy cost management. You can analyze, report, and monitor across all linked tenancies. The
parent tenancy has the ability to analyze and report across each of your tenancies through Cost Analysis and Cost
and usage reports, and you can receive alerts through Budgets.
• Isolation of data. Customers with strict data isolation requirements can use a multi-tenancy strategy to continue
restricting resources across their tenancies.
The remainder of this topic provides an overview of how to share your subscription between tenancies, and provides
best practices on how to isolate workloads, in order to help you determine if you should use a single-tenancy or multi-
tenancy strategy.

Planning Considerations
Before you get additional tenancies you should evaluate your needs to make sure that a multi-tenancy approach is best
for your workloads. The main reason to have multiple tenancies is for strong isolation. By default, each parent and
child tenancy comes with:
• A distinct set of IAM users (which can be federated to another identity system).
• A distinct set of IAM policies (permissions).
• Its own service limits.
• Isolated Virtual Cloud Networks (VCNs).
• Separate security and governance settings.
The main point to be aware of is that multiple tenancies make it easier to isolate workloads, but at the cost of additional management overhead, so you need to ensure that the isolation is worth it. If you don't require a strong level of isolation, consider using compartments to separate workloads instead.

Required IAM Policy


To use Oracle Cloud Infrastructure, you must be granted security access in a policy by an administrator. This access
is required whether you're using the Console or the REST API with an SDK, CLI, or other tool. If you get a message
that you don’t have permission or are unauthorized, verify with your administrator what type of access you have and
which compartment you should work in.
If you're new to policies, see Getting Started with Policies on page 2143 and Common Policies on page 2150.
To use subscription sharing, the following policy statements are required:

Allow group linkUsers to use organizations-family in tenancy


Allow group linkAdmins to manage organizations-family in tenancy

To accept an invitation but not create one use the following:

allow group linkAccepters to manage organizations-recipient-invitations in tenancy


To view the current linked tenancies but not the invitations:

allow group linkViewers to read organizations-links in tenancy

Subscription Sharing Overview


Depending on whether or not you already have multiple tenancies, there are two approaches:
If you don't already have an additional tenancy
1. Sign up for a new PAYG (pay as you go) subscription to get a new tenancy. You do not incur any charges against
the PAYG subscription, as long as you link it with your existing subscription before creating any billable resource.
There are two ways to get a new PAYG subscription:
a. Self-service: Sign up for a new trial using http://signup.oracle.com, and upgrade the trial to a paid account.
b. Through sales: Work with your Oracle sales team to book a new PAYG order.
2. Take note of the tenancy OCID in the new tenancy. You can find it on the Tenancy Details page in the Console.
This page can be accessed by opening the Profile menu and then selecting Tenancy: <your_tenancy_name>. See
the OCID field on the Tenancy Information tab.
3. From the tenancy that owns the primary subscription, invite the new tenancy to share the subscription (described
in To Share a Subscription).
If you already have multiple tenancies
There are two paths you can take:
• If you have one primary tenancy that has a Universal Credits Flex (commitment) subscription and your subsequent
tenancies have a PAYG subscription, you can follow the steps in If you don't already have an additional tenancy
to get a new tenancy.
• If your tenancies have multiple Universal Credits Flex (commitment), you will need to work with your Oracle
sales team to rebalance your commitment balances.
Note:

Rebalancing subscriptions is a manual process that needs to be


coordinated, and rebalancing will not be available under all circumstances.
Your sales team will work with you to rebalance your subscriptions and set
up the parent-child billing relationship.

To Share a Subscription
Important:

Ensure that you understand the terms of your contract before sharing
your subscription with another tenancy. Your contract specifies how your
subscription can be used, and which end users are allowed to use it.
Note:

Before sharing a subscription, in the child tenancy, export historical cost data
from Cost Analysis if you want to save your history.
1. Sign in to the sender tenancy that will send the invitation, as a user that has permissions to manage subscription
sharing.
2. Open the navigation menu. Under Governance and Administration, go to Account Management and click
Subscription Sharing. The Subscription Sharing page is displayed.
3. Click Invite Tenancy. The Invite Tenancy panel is displayed.
4. Enter the invitation details. You will need to specify the following:
• The invitation name in Invitation Name. For the invitation name, it can be helpful to use notation that signifies the direction and number of sending attempts. For example, entering a1 to b1 v1 can signify that tenancy a1 is sending an invitation to b1, with v1 indicating the first attempt. Such a convention makes the invitations more readable to the Console user, without having to open the Invitation Detail page to view sender and recipient details.
• The child tenancy OCID in Recipient Tenancy OCID.
• Optionally, an email address in Recipient Email, to notify that a sharing invitation has been sent.
Note:

The recipient needs to have the proper permissions to manage


subscription sharing in the child tenancy, in order to accept the
invitation. For more information, see Required IAM Policy on page
300.
• Optionally, enter tagging information.
5. Click Send Invitation. The invitation is sent to the tenancy you are inviting to share the subscription with.
Note:

Parent tenancies and tenancies that are not already in a sharing relationship
can send invitations. Child tenancies cannot send invitations.
6. On the recipient (child) tenancy: Open the navigation menu. Under Governance and Administration, go to
Account Management and click Subscription Sharing. The Subscription Sharing page is displayed.
7. The invitation from the other tenancy is displayed in the list, with the following information:
• Invitation Name: Click this linked name to go to the Invitation Detail page.
• Status: Displays the invitation status. For example, it is Active when the invitation is received but not yet
accepted. From the parent tenancy, this field shows Pending for an invitation that has been sent but not yet
accepted.
The possible status states for a sender and recipient invitation are the following:

• Sender invitation: PENDING, CANCELED, ACCEPTED, EXPIRED, FAILED
• Recipient invitation: PENDING, CANCELED, ACCEPTED, IGNORED, EXPIRED, FAILED
• Type: The invitation type, whether Sent or Received.
• Created: The UTC creation date and time of the invitation.
8. Click the Actions icon (three dots) and select Accept Sharing Invitation. A confirmation message is displayed, indicating that you are about to accept a subscription sharing invitation from tenancy <OCID>.
After clicking Accept, the invitation is processed, and the invitation's Status field changes to Updating. The
tenancy then becomes a child tenancy.
After the sharing invitation is accepted, it takes one to two hours for metering to start flowing to the subscription in the parent tenancy. From that time onwards, all usage in the child tenancy is metered against the parent tenancy's subscription. In addition, after linking tenancies, we recommend waiting a few hours before launching resources if you want to be sure that all spending accrues against the parent tenancy's subscription.
If there is a remaining subscription balance, contact your sales representative to move it to a primary subscription
in the sending tenancy.
Note:

Once the tenancy becomes a child tenancy, it cannot invite another


tenancy to become a child tenancy. The Invite Tenancy button on the
Subscription Sharing page becomes disabled to reflect this state.
9. Open the child tenancy's Linked Tenancies page, where you can view the linking between the child and parent
tenancy. The following information is displayed:
• Tenancy OCID: The tenancy OCID.
• Status: Displays the invitation status.
• Established: The UTC date and time that the subscription sharing began.
• Terminated: The UTC date and time that the subscription sharing ended. This field is empty if the sharing is still active.
Meanwhile on the parent tenancy's Linked Tenancies page, you can view the (child) tenancies that are being
metered against your subscription.

To Revoke a Subscription Sharing Invitation


1. Sign in to the primary tenancy as a user that has permissions to manage subscription sharing.
2. Open the navigation menu. Under Governance and Administration, go to Account Management and click
Subscription Sharing. The Subscription Sharing page is displayed.
3. For the invitation you want to revoke, click the Actions icon (three dots) and select Revoke Invitation. A Revoke
Invitation confirmation is displayed. Click Revoke to cancel the sharing invitation.
4. On the Subscription Sharing page, the invitation's Status changes to Canceled.

Viewing Invitation Details


Invitation details can be viewed from both the parent and child tenancy. To view invitation details:
1. Open the navigation menu. Under Governance and Administration, go to Account Management and click
Subscription Sharing. The Subscription Sharing page is displayed.
2. Click the linked invitation name from the Invitation Name field, or click the Actions icon (three dots) and select
View Invitation Details. The Invitation Detail page is displayed.
3. This page displays the invitation status, along with the following details on the Invitation Information tab:
• Sent from Tenancy OCID
• Type
• Last Status Change
• Sent to Tenancy OCID
• Sent Date
You can also click Add Tags to add tagging information, and view it on the Tags tab. See Resource Tags on page
213 for more information.

Using the Organizations API


For information about using the API and signing requests, see REST APIs on page 4409 and Security Credentials
on page 181. For information about SDKs, see Software Development Kits and Command Line Interface on page
4262.
Use the following operations in the Organizations API to manage subscription sharing:
• Link


• RecipientInvitation
• SenderInvitation
• WorkRequest
• WorkRequestError
• WorkRequestLogEntry

Cost Reporting
Once a subscription is shared, the behavior of cost reporting tools changes. All spending against the subscription (in
the parent and all child tenancies) is included in cost reporting in the parent tenancy, and child tenancies are limited
to seeing spending in their own tenancy. Cost and usage reports are generated only in the parent tenancy, and include
all usage for the parent and all of its children. Budgets are only supported in the parent tenancy. The following table
describes the impact of subscription sharing on cost reporting.

Cost Analysis
• Parent tenancy: Reports on all usage and cost in the parent and all children, with the ability to group and filter by tenancy.
• Child tenancies: Reports on all usage and cost in the child tenancy. Note: If a child tenancy wants to use Cost Analysis from the Console, you must subscribe to the parent's home region.
Cost and usage reports (CSVs)
• Parent tenancy: Includes all usage and costs in the parent and all children.
• Child tenancies: Not available.
Budgets
• Parent tenancy: Budgets can be created against compartments or tags in the primary tenancy but not against child tenancies.
• Child tenancies: Not supported.

Support
Depending on how you created your tenancy, you will either have separate CSI (Customer Support Identifier) numbers and support accounts for each tenancy, or they will be combined. If you want to make sure that you get multiple CSI numbers, work with your account team to create tenancies in a way that creates new CSIs.

My Services Use Cases


Important:

The My Services dashboard and APIs are deprecated.


To interact programmatically with My Services, you can use the Oracle Cloud My Services API. To help you get
started, here are some use cases:
• Service Discovery Use Case on page 304
• Exadata Use Cases on page 307
• Managing Exadata Instances on page 318
• Using Access Token Authorization with My Services API on page 332

Service Discovery Use Case


This use case shows how you can get the list of your service entitlement IDs.


Important:

The My Services dashboard and APIs are deprecated.

Discover Current Service Entitlement IDs


Many of the My Services API operations require you to specify the serviceEntitlementId. To get the list of
all your service entitlement IDs, use the GET ServiceEntitlements operation. This operation returns information that
you can use to make more specific requests using the Oracle Cloud My Services API.
Example:

GET /itas/<domain>/myservices/api/v1/serviceEntitlements

Note:

In the examples, <domain> is the identity domain ID. An identity domain


ID can be either the IDCS GUID that identifies the identity domain for the
users within Identity Cloud Service (IDCS) or the Identity Domain name for a
traditional Cloud Account.
Example payload returned for this request:

{
"items": [
{
"id": "cesi-511202718", // Unique
ServiceEntitlementId
"purchaseEntitlement": { // Purchase
Entitlement is the entity bought by a customer
"subscriptionId": "511203590",
"id": "511203590",
"canonicalLink": "/itas/<domain>/myservices/api/v1/
purchaseEntitlements/511203590"
},
"serviceDefinition": {
"canonicalLink": "/itas/<domain>/myservices/api/v1/
serviceDefinitions/500089778",
"id": "500089778",
"name": "Storage" // The
customer is entitled to use the Storage Service
},
"createdOn": "2017-12-20T16:23:23.326Z",
"createdBy": "[email protected]",
"modifiedOn": "2017-12-20T18:35:40.628Z",
"modifiedBy": "[email protected]",
"identityDomain": { // Identity
Domain to which the Service Entitlement is associated
"id": "511203592",
"name": "myenvironment",
"displayName": "myenvironment"
},
"cloudAccount": { // Cloud
Account to which the Service Entitlement is associated
"id": "cacct-be7475efc2c54995bc842d3379d35812",
"name": "myenvironment",
"canonicalLink": "/itas/<domain>/myservices/api/v1/
cloudAccounts/cacct-be7475efc2c54995bc842d3379d35812"
},
"status": "ACTIVE", // Current
Status
"serviceConfigurations": { // Specific
configuration information such as Exadata configuration
"canonicalLink": "/itas/<domain>/myservices/api/v1/
serviceEntitlements/cesi-511202718/serviceConfigurations"
},
"canonicalLink": "/itas/{domain}/myservices/api/v1/
serviceEntitlements/cesi-511202718"
},
{
"id": "cesi-511202719",
"purchaseEntitlement": {
"subscriptionId": "511203590",
"id": "511203590",
"canonicalLink": "/itas/<domain>/myservices/api/v1/
purchaseEntitlements/511203590"
},
"serviceDefinition": {
"canonicalLink": "/itas/<domain>/myservices/api/v1/
serviceDefinitions/500123193",
"id": "500123193",
"name": "Compute" // The
customer is entitled to use the Compute Service
},
"createdOn": "2017-12-20T16:23:23.326Z",
"createdBy": "[email protected]",
"modifiedOn": "2017-12-20T18:35:40.628Z",
"modifiedBy": "[email protected]",
"identityDomain": {
"id": "511203592",
"name": "myenvironment",
"displayName": "myenvironment"
},
"cloudAccount": {
"id": "cacct-be7475efc2c54995bc842d3379d35812",
"name": "myenvironment",
"canonicalLink": "/itas/<domain>/myservices/api/v1/
cloudAccounts/cacct-be7475efc2c54995bc842d3379d35812"
},
"status": "ACTIVE",
"serviceConfigurations": {
"canonicalLink": "/itas/<domain>/myservices/api/v1/
serviceEntitlements/cesi-511202719/serviceConfigurations"
},
"canonicalLink": "/itas/<domain>/myservices/api/v1/
serviceEntitlements/cesi-511202719"
},
... // More
Service Entitlements could be displayed
],
"canonicalLink": "/itas/<domain>/myservices/api/v1/serviceEntitlements",
"hasMore": false,
"limit": 25,
"offset": 0
}

To obtain the IDCS GUID


Go to the Users page in My Services dashboard and click Identity Console. The URL in the browser address field
displays the IDCS GUID for your identity domain. For example:

https://idcs-105bbbdfe5644611bf7ce04496073adf.identity.oraclecloud.com/ui/
v1/adminconsole/?root=users

In the above URL, idcs-105bbbdfe5644611bf7ce04496073adf is the IDCS GUID for your identity
domain.

Exadata Use Cases


Important:

The My Services dashboard and APIs are deprecated.


The following use case examples can get you started working with the Exadata operations available in the Oracle
Cloud My Services API.
Important:

These procedures are for use with Oracle Database Exadata
Cloud@Customer ONLY. For more information, see Administering Oracle
Database Exadata Cloud at Customer. These procedures DO NOT apply to
the Exadata Cloud Service available in Oracle Cloud Infrastructure.

Exadata Firewall Allowlisting


To enable access to your Exadata Cloud Service instance, you can configure security rules and associate them with
your instance. The security rules define an allowlist of permitted network access points.
The firewall provides a system of rules and groups. By default, the firewall denies network access to the Exadata
Cloud Service instance. When you enable a security rule, you enable access to the Exadata Cloud Service instance. To
enable access, you must:
• Create a security group and create security rules that define specific network access allowances.
• Assign the security group to your Exadata Cloud Service instance.
You can define multiple security groups, and each security group can contain multiple security rules. You can
associate multiple security groups with each Exadata Cloud Service instance, and each security group can be
associated with multiple Exadata Cloud Service instances. You can dynamically enable and disable security rules by
modifying the security groups that are associated with each Exadata Cloud Service instance.
To enable access to an Exadata Cloud Service instance:
Note:

In the following examples, <domain> is the identity domain ID. An identity
domain ID can be either the IDCS GUID that identifies the identity domain
for the users within Identity Cloud Service (IDCS) or the Identity Domain
name for a traditional Cloud Account.
1. Get the service instance IDs.
Operation: GET ServiceInstances
Example
Example request:

GET /itas/<domain>/myservices/api/v1/serviceInstances?
serviceDefinitionNames=Exadata&statuses=ACTIVE

Example payload returned for this request:

{
"items": [
{
"id": "csi-585928949", // Unique ServiceInstanceId
"serviceEntitlement": {

Oracle Cloud Infrastructure User Guide 307


Service Essentials

"id": "cesi-585927251",
"canonicalLink": "/itas/<domain>/myservices/api/v1/
serviceEntitlements/cesi-585927251"
},
"serviceDefinition": {
"canonicalLink": "/itas/<domain>/myservices/api/v1/
serviceDefinitions/502579309",
"id": "502579309",
"name": "Exadata" // The customer is entitled to use
the Exadata Service
},
"cloudAccount": {
"canonicalLink": "/itas/<domain>/myservices/api/v1/cloudAccounts/
cacct-fd7a122448aaaa",
"id": "cacct-fd7a122448aaaa",
"name": "myAccountName"
},
...
"canonicalLink": "/itas/<domain>/myservices/api/v1/serviceInstances/
csi-585928949"
}
... // More Service Instances
could be displayed
],
"canonicalLink": "/itas/<domain>/myservices/api/v1/serviceInstances",
"hasMore": false,
"limit": 25,
"offset": 0
}

This example payload returns the service instance ID csi-585928949, which is part of the service entitlement
ID cesi-585927251.
2. Get the service configuration IDs.
Operation: GET SIServiceConfigurations
Example
Example request, using the service instance ID csi-585928949:

GET /itas/<domain>/myservices/api/v1/serviceInstances/csi-585928949/
serviceConfigurations

Example payload returned for this request:

{
"canonicalLink": "/itas/<domain>/myservices/api/v1/serviceInstances/
csi-585928949/serviceConfigurations",
"items": [
{
"canonicalLink": "/itas/<domain>/myservices/api/v1/serviceInstances/
csi-585928949/serviceConfigurations/Exadata",
"exadata": {
"bursting": {
"canonicalLink": "/itas/<domain>/myservices/api/v1/
serviceInstances/csi-585928949/serviceConfigurations/Exadata/bursting"
},
"id": "Exadata",
"securityGroupAssignments": {
"canonicalLink": "/itas/<domain>/myservices/api/v1/serviceInstances/
csi-585928949/serviceConfigurations/Exadata/securityGroupAssignments"
}
},
"id": "Exadata"
}
]
}

This example payload shows that /itas/<domain>/myservices/api/v1/serviceInstances/csi-585928949/
serviceConfigurations/Exadata/securityGroupAssignments is used for Exadata Firewall.
3. Get the current security groups for the service entitlement.
Operation: GET SEExadataSecurityGroups
Example
Example request, using the service entitlement ID cesi-585927251:

GET /itas/<domain>/myservices/api/v1/serviceEntitlements/cesi-585927251/
serviceConfigurations/Exadata/securityGroups

Example payload returned for this request:

{
"items": [
{
"id": "1",
"customerId": "585927251",
"name": "SecGroup 1",
"description": "My first Security group",
"version": 10,
"rules": [
{
"direction": "ingress",
"proto": "tcp",
"startPort": 1159,
"endPort": 1159,
"ipSubnet": "0.0.0.0/0",
"ruleInterface": "data"
}
],
"canonicalLink": "/itas/<domain>/myservices/api/v1/
serviceEntitlements/585927251/serviceConfigurations/Exadata/
securityGroups/1"
},
{
"id": "2",
"customerId": "585927251",
"name": " SecGroup 2",
"description": "My second Security group",
"version": 3,
"rules": [
{
"direction": "egress",
"proto": "tcp",
"startPort": 8123,
"endPort": 8123,
"ipSubnet": "192.168.1.0/28",
"ruleInterface": "data"
}
],
"canonicalLink": "/itas/<domain>/myservices/api/v1/
serviceEntitlements/585927251/serviceConfigurations/Exadata/
securityGroups/2"
}
],
"canonicalLink": "/itas/<domain>/myservices/api/v1/
serviceEntitlements/585927251/serviceConfigurations/Exadata/
securityGroups"
}

This example payload shows two security groups defined for the specified service entitlement ID.
4. Get the current security group assignments for the service instance.
Operation: GET SIExadataSecurityGroupAssignments
Example
Example request, using the service instance ID csi-585928949:

GET /itas/<domain>/myservices/api/v1/serviceInstances/csi-585928949/
serviceConfigurations/Exadata/securityGroupAssignments

Example payload returned for this request:

{
"items": [
{
"id": "11",
"securityGroup":
{
"id": "1",
"canonicalLink": "/itas/<domain>/myservices/api/v1/
serviceEntitlements/585927251/serviceConfigurations/Exadata/
securityGroups/1"
},
"canonicalLink": "/itas/<domain>/myservices/api/v1/serviceInstances/
csi-585928949/serviceConfigurations/Exadata/securityGroupAssignments/11"
}
],
"canonicalLink": "/itas/<domain>/myservices/api/v1/serviceInstances/
csi-585928949/serviceConfigurations/Exadata/securityGroupAssignments"
}

This example payload shows one security group assigned to the service instance csi-585928949.
5. Create a security group with security rules.
Operation: POST SEExadataSecurityGroups
Example
Example request, using the service entitlement ID cesi-585927251:

POST /itas/<domain>/myservices/api/v1/serviceEntitlements/cesi-585927251/
serviceConfigurations/Exadata/securityGroups
{
"customerId": "585927251",
"name": "SecGroup 1",
"description": "My third Security group",
"version": 1,
"rules": [
{
"direction": "ingress",
"proto": "tcp",
"startPort": 30,
"endPort": 31,
"ipSubnet": "100.100.100.255",
"ruleInterface": "admin"
},
{
"direction": "egress",
"proto": "tcp",
"startPort": 32,
"endPort": 32,
"ipSubnet": "100.100.255.0/16",
"ruleInterface": "admin"
}
]
}

Attributes:

customerId
  Required: Yes
  String
  This must be the same as the <serviceEntitlementId>.

direction
  Required: Yes
  String
  Allowed values: [ingress | egress] for inbound or outbound.

proto
  Required: Yes
  String
  Allowed values: [tcp | udp].

startPort
  Required: Yes
  Integer
  startPort defines the beginning of a range of ports to open/allowlist [0 - 65535].

endPort
  Required: Yes
  Integer
  endPort defines the ending of a range of ports to open/allowlist [0 - 65535].

ipSubnet
  Required: Yes
  String
  Single IP address or range specified in CIDR notation.

ruleInterface
  Required: Yes
  String
  Allowed values: [admin | client | backup] where:
  • admin — specifies that the rule applies to network communications over the administration network interface.
    The administration network is typically used to support administration tasks by using terminal sessions,
    monitoring agents, and so on.
  • client — specifies that the rule applies to network communications over the client access network interface,
    which is typically used by Oracle Net Services connections.
  • backup — specifies that the rule applies to network communications over the backup network interface, which is
    typically used to transport backup information to and from network-based storage that is separate from Exadata
    Cloud Service.

If successful, the POST request will return the unique ID of the newly created security group. For the next step,
we'll assume that the newly created security group ID is 3.
Note:

A security group can also be modified or deleted. See Oracle Cloud My
Services API.
6. Assign the security group to a service instance.
Operation: POST SIExadataSecurityGroupAssignments
Example
Example request, using the service instance csi-585928949 and the security group ID 3:

POST /itas/<domain>/myservices/api/v1/serviceInstances/csi-585928949/
serviceConfigurations/Exadata/securityGroupAssignments

{
"securityGroup": {
"id": "3",
"customerId": "585927251",
"canonicalLink": "/itas/<domain>/myservices/api/v1/
serviceEntitlements/585927251/serviceConfigurations/Exadata/
securityGroups/3"
  }
}

Attributes:

customerId
  Required: Yes
  String
  This must be the same as the serviceEntitlementId.

If successful, the POST request will return the unique ID of the newly created security group assignment.
Note:

A security group assignment can also be deleted. See Oracle Cloud My
Services API.
You can now verify all your security groups and assignments. See:
• Get the current security groups for the service entitlement.
• Get the current security group assignments for the service instance.
To obtain the IDCS GUID
Go to the Users page in My Services dashboard and click Identity Console. The URL in the browser address field
displays the IDCS GUID for your identity domain. For example:

https://idcs-105bbbdfe5644611bf7ce04496073adf.identity.oraclecloud.com/ui/
v1/adminconsole/?root=users

In the above URL, idcs-105bbbdfe5644611bf7ce04496073adf is the IDCS GUID for your identity
domain.

Exadata Scaling with Bursting


You can temporarily modify the capacity of your Exadata environment by configuring bursting. Bursting is a method
you can use to scale Exadata Cloud Service non-metered instances within an Exadata system.
To scale up your non-metered instances, increase the number of CPU cores on each compute node by modifying the
burstOcpu attribute of the host. When you no longer need the additional cores, update the burstOcpu attribute back
to its original setting.
Note:

In the following examples, <domain> is the identity domain ID. An identity
domain ID can be either the IDCS GUID that identifies the identity domain
for the users within Identity Cloud Service (IDCS) or the Identity Domain
name for a traditional Cloud Account.

1. Get the service instance IDs.


Operation: GET ServiceInstances
Example
Example request:

GET /itas/<domain>/myservices/api/v1/serviceInstances?
serviceDefinitionNames=Exadata&statuses=ACTIVE

Example payload returned for this request:

{
"items": [
{
"id": "csi-585928949", // Unique ServiceInstanceId
"serviceEntitlement": {
"id": "cesi-585927251",
"canonicalLink": "/itas/<domain>/myservices/api/v1/
serviceEntitlements/cesi-585927251"
},
"serviceDefinition": {
"canonicalLink": "/itas/<domain>/myservices/api/v1/
serviceDefinitions/502579309",
"id": "502579309",
"name": "Exadata" // The customer is entitled to use
the Exadata Service
},
"cloudAccount": {
"canonicalLink": "/itas/<domain>/myservices/api/v1/cloudAccounts/
cacct-fd7a122448aaaa",
"id": "cacct-fd7a122448aaaa",
"name": "myAccountName"
},
...
"canonicalLink": "/itas/<domain>/myservices/api/v1/serviceInstances/
csi-585928949"
}
... // More Service Instances
could be displayed
],
"canonicalLink": "/itas/<domain>/myservices/api/v1/serviceInstances",
"hasMore": false,
"limit": 25,
"offset": 0
}

This example payload returns the service instance ID csi-585928949.


2. Get the service configuration IDs.
Operation: GET SIServiceConfigurations
Example
Example request, using the service instance ID csi-585928949:

GET /itas/<domain>/myservices/api/v1/serviceInstances/csi-585928949/
serviceConfigurations

Example payload returned for this request:

{
"canonicalLink": "/itas/<domain>/myservices/api/v1/serviceInstances/
csi-585928949/serviceConfigurations",
"items": [
{
"canonicalLink": "/itas/<domain>/myservices/api/v1/serviceInstances/
csi-585928949/serviceConfigurations/Exadata",
"exadata": {
"bursting": {
"canonicalLink": "/itas/<domain>/myservices/api/v1/
serviceInstances/csi-585928949/serviceConfigurations/Exadata/bursting"
},
"id": "Exadata",
"securityGroupAssignments": {
"canonicalLink": "/itas/<domain>/myservices/api/v1/serviceInstances/
csi-585928949/serviceConfigurations/Exadata/securityGroupAssignments"
}
},
"id": "Exadata"
}
]
}

This example payload shows that /itas/<domain>/myservices/api/v1/serviceInstances/csi-585928949/
serviceConfigurations/Exadata/bursting is used for bursting.
3. Get the current compute node configuration.
Operation: GET SIExadataBursting
Example
Example request, using the service instance ID csi-585928949:

GET /itas/<domain>/myservices/api/v1/serviceInstances/csi-585928949/
serviceConfigurations/Exadata/bursting

Example payload returned for this request:

{
"ocpuOpInProgress": false,
"exaunitId": 50,
"ocpuAllocations": [
{
"hostName": "host1.oraclecloud.com",
"subscriptionOcpu": 11,
"meteredOcpu": 0,
"burstOcpu": 0, // Current Burst
value
"minOcpu": 11,
"maxOcpu": 42,
"maxBurstOcpu": 11,
"maxSubOcpu": 38,
"maxMetOcpu": 0
},
{
"hostName": "host2.oraclecloud.com",
"subscriptionOcpu": 11,
"meteredOcpu": 0,
"burstOcpu": 0, // Current Burst
value
"minOcpu": 11,
"maxOcpu": 42,
"maxBurstOcpu": 11,
"maxSubOcpu": 38,

Oracle Cloud Infrastructure User Guide 315


Service Essentials

"maxMetOcpu": 0
}
],
"status": 200,
"op": "exaunit_coreinfo",
"additionalNumOfCores": "0",
"additionalNumOfCoresHourly": "0",
"coreBursting": "Y"
}
4. Modify the values for burstOcpu.
Operation: PUT SIExadataBursting
You can modify burstOcpu to any value up to the value of maxBurstOcpu. This example adds two
burst cores to each host.
Example
Example request, using the service instance csi-585928949:

PUT /itas/<domain>/myservices/api/v1/serviceInstances/csi-585928949/
serviceConfigurations/Exadata/bursting/
{
"ocpuOpInProgress": false,
"exaunitId": 50,
"ocpuAllocations": [
{
"hostName": "host1.oraclecloud.com",
"subscriptionOcpu": 11,
"meteredOcpu": 0,
"burstOcpu": 2,
"minOcpu": 11,
"maxOcpu": 42,
"maxBurstOcpu": 11,
"maxSubOcpu": 38,
"maxMetOcpu": 0
},
{
"hostName": "host2.oraclecloud.com",
"subscriptionOcpu": 11,
"meteredOcpu": 0,
"burstOcpu": 2,
"minOcpu": 11,
"maxOcpu": 42,
"maxBurstOcpu": 11,
"maxSubOcpu": 38,
"maxMetOcpu": 0
}
  ]
}

Attributes:

burstOcpu
  Required: Yes
  Type: Integer, Minimum Value: 0, Maximum Value: maxBurstOcpu
  Number of additional cores.

Note:

This action may take a few minutes to complete.


5. Verify the new compute node configuration.
Operation: GET SIExadataBursting
Example
Example request, using the service instance ID csi-585928949:

GET /itas/<domain>/myservices/api/v1/serviceInstances/csi-585928949/
serviceConfigurations/Exadata/bursting

Example payload returned for this request:

{
"ocpuOpInProgress": false,
"exaunitId": 50,
"ocpuAllocations": [
{
"hostName": "host1.oraclecloud.com",
"subscriptionOcpu": 11,
"meteredOcpu": 0,
"burstOcpu": 2, // New Burst value
"minOcpu": 11,
"maxOcpu": 42,
"maxBurstOcpu": 11,
"maxSubOcpu": 38,
"maxMetOcpu": 0
},
{
"hostName": "host2.oraclecloud.com",
"subscriptionOcpu": 11,
"meteredOcpu": 0,
"burstOcpu": 2, // New Burst value
"minOcpu": 11,
"maxOcpu": 42,
"maxBurstOcpu": 11,
"maxSubOcpu": 38,
"maxMetOcpu": 0
}
],
"status": 200,
"op": "exaunit_coreinfo",
"additionalNumOfCores": "0",
"additionalNumOfCoresHourly": "0",
"coreBursting": "Y"
}

To obtain the IDCS GUID


Go to the Users page in My Services dashboard and click Identity Console. The URL in the browser address field
displays the IDCS GUID for your identity domain. For example:

https://idcs-105bbbdfe5644611bf7ce04496073adf.identity.oraclecloud.com/ui/
v1/adminconsole/?root=users

In the above URL, idcs-105bbbdfe5644611bf7ce04496073adf is the IDCS GUID for your identity
domain.

Managing Exadata Instances


Important:

The My Services dashboard and APIs are deprecated.


The following procedures walk you through creating, modifying, and deleting Exadata instances by using the Oracle
Cloud My Services API.
Important:

These procedures are for use with Oracle Database Exadata
Cloud@Customer ONLY. For more information, see Administering Oracle
Database Exadata Cloud at Customer. These procedures DO NOT apply to
the Exadata Cloud Service available in Oracle Cloud Infrastructure.

Prerequisites
Before you can manage Exadata instances, you need to:
• Subscribe to an Oracle Cloud service
• Obtain account credentials with required roles assigned
• Determine your API endpoint
To subscribe to an Oracle Cloud service
To access Oracle Cloud My Services API, you must request a trial or paid subscription to an Oracle Cloud service.
To obtain account credentials and role assignments
Ask your account administrator for the following items to access Oracle Cloud My Services API:
• Account credentials:
• User name and password
• Identity domain ID
An identity domain ID can be either the IDCS GUID that identifies the identity domain for the users within
Identity Cloud Service (IDCS) or the Identity Domain name for a traditional Cloud Account.
• Required roles assigned to above user name
To determine your API endpoint
Insert the identity domain ID provided by the account administrator (<domain>) between /itas/ and /
myservices/.
Example:

https://itra.oraclecloud.com/itas/<domain>/myservices/api/v1/
serviceEntitlements
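
For illustration only, the following sketch shows a complete request against this endpoint. It assumes HTTP basic
authentication with the account credentials described above; <username> and <password> are placeholders, and you
should confirm the authentication scheme for your account with your account administrator.

curl -i -u '<username>:<password>' \
  'https://itra.oraclecloud.com/itas/<domain>/myservices/api/v1/serviceEntitlements'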

Creating Exadata Instances


This section covers how to create a basic Exadata instance, an instance with custom IP network configuration, and an
instance with multi-VM support.
To create a basic Exadata instance
Post a request with the required payload to create a new instance for a given service entitlement (Exadata in our case).
In the following example, <domain> is the identity domain ID.

POST /itas/<domain>/myservices/api/v1/operations
{
"operationItems": [
{
"attributes": [
{
"name": "requestPayload.name",
"value": "newinstanceName"
},
{
"name": "requestPayload.serviceEntitlementId",
"value": "500073421"
},
{
"name": "requestPayload.size",
"value": "CUSTOM"
},
{
"name": "requestPayload.serviceType",
"value": "Exadata"
},
{
"name": "requestPayload.adminUserName",
"value": "[email protected]"
},
{
"name": "requestPayload.adminEmail",
"value": "[email protected]"
},
{
"name": "requestPayload.adminFirstName",
"value": "John"
},
{
"name": "requestPayload.adminLastName",
"value": "Smith"
},
{
"name": "requestPayload.invokerAdminUserName",
"value": "[email protected]"
},
{
"name": "requestPayload.invokerAdminEmail",
"value": "[email protected]"
},
{
"name": "requestPayload.invokerAdminFirstName",
"value": "John"
},
{
"name": "requestPayload.invokerAdminLastName",
"value": "Smith"
},
{
"name": "requestPayload.customAttributes.ExaUnitName",
"value": "systemname"
},
{
"name": "requestPayload.customAttributes.CreateSparse",
"value": "N"
},
{
"name": "requestPayload.customAttributes.BackupToDisk",
"value": "N"
},
{
"name": "requestPayload.customAttributes.isBYOL",
"value": "N"
},
{
"name": "requestPayload.customAttributes.PickRackSize",
"value": "Quarter Rack"
},
{
"name": "requestPayload.customAttributes.SELECTED_DC_ID",
"value": "US001"
}
],
"operationItemDefinition": {
"id": "CIM-Exadata-CUSTOM-PRODUCTION-CREATE"
}
}
]
}

Attributes

requestPayload.name
  Required: Yes
  Type: String
  Name of the Exadata instance. This name:
  • Must not exceed 25 characters.
  • Must start with a letter.
  • Must contain only lower case letters and numbers.
  • Must not contain spaces or any other special characters.
  • Must be unique within the identity domain.

requestPayload.serviceEntitlementId
  Required: Yes
  Type: String
  Service Entitlement for the Exadata instance. See "Exadata Service Entitlement discovery". Note that any "cesi-"
  or "sub-" prefix should not be included.

requestPayload.customAttributes.ExaUnitName
  Required: Yes
  Type: String
  A name for your Exadata Database Machine environment. This name is also used as the cluster name for the Oracle
  Grid Infrastructure installation.

requestPayload.customAttributes.CreateSparse
  Required: Yes
  Type: String
  "Y" to create a disk group that is based on sparse grid disks, else "N".
  You must select this option to enable Exadata Cloud Service snapshots. Exadata snapshots enable space-efficient
  clones of Oracle databases that can be created and destroyed very quickly and easily.

requestPayload.customAttributes.BackupToDisk
  Required: Yes
  Type: String
  "Y" to use "Database backups on Exadata Storage", else "N".
  This option configures the Exadata storage to enable local database backups on Exadata storage.

requestPayload.customAttributes.isBYOL
  Required: Yes
  Type: String
  "Y" to indicate that the Exadata Cloud Service instance uses Oracle Database licenses that are provided by you
  rather than licenses that are provided as part of the service subscription, else "N".
  This option only affects the billing that is associated with the service instance. It has no effect on the
  technical configuration of the Exadata Cloud Service instance.

requestPayload.customAttributes.PickRackSize
  Required: Yes
  Type: String
  Specify the rack configuration for your service instance. Exact allowed values depend on your purchase. Typical
  values are "Full Rack", "Half Rack", "Quarter Rack", or "Eighth Rack".

requestPayload.customAttributes.SELECTED_DC_ID
  Required: Yes
  Type: String
  Data center that will host your Exadata Cloud Service instance. See "Exadata Service Entitlement discovery" to
  obtain the eligible data center IDs.

To create an Exadata instance with custom IP network configuration


Post a request with the attributes ClientNetwork and BackupNetwork as part of the payload. The following example
includes these optional attributes as well as required attributes.
In the following example, <domain> is the identity domain ID.

POST /itas/<domain>/myservices/api/v1/operations
{
"operationItems": [
{
"attributes": [
{
"name": "requestPayload.name",
"value": "newinstanceName"
},
{
"name": "requestPayload.serviceEntitlementId",
"value": "500073421"
},
{
"name": "requestPayload.size",
"value": "CUSTOM"
},
{
"name": "requestPayload.serviceType",
"value": "Exadata"
},
{
"name": "requestPayload.adminUserName",
"value": "[email protected]"
},
{
"name": "requestPayload.adminEmail",
"value": "[email protected]"
},
{
"name": "requestPayload.adminFirstName",
"value": "John"
},
{
"name": "requestPayload.adminLastName",
"value": "Smith"
},
{
"name": "requestPayload.invokerAdminUserName",
"value": "[email protected]"
},
{
"name": "requestPayload.invokerAdminEmail",
"value": "[email protected]"
},
{
"name": "requestPayload.invokerAdminFirstName",
"value": "John"
},
{
"name": "requestPayload.invokerAdminLastName",
"value": "Smith"
},
{
"name": "requestPayload.customAttributes.ExaUnitName",
"value": "systemname"
},
{
"name": "requestPayload.customAttributes.CreateSparse",
"value": "N"
},
{
"name": "requestPayload.customAttributes.BackupToDisk",
"value": "N"
},
{
"name": "requestPayload.customAttributes.isBYOL",
"value": "N"
},
{
"name": "requestPayload.customAttributes.PickRackSize",
"value": "Quarter Rack"
},
{
"name": "requestPayload.customAttributes.SELECTED_DC_ID",
"value": "US001"
},
{
"name": "requestPayload.customAttributes.ClientNetwork",
"value": "/root/root/1/ipnetwork1"
},
{
"name": "requestPayload.customAttributes.BackupNetwork",
"value": "/root/root/1/ipnetwork2"
}
],
"operationItemDefinition": {
"id": "CIM-Exadata-CUSTOM-PRODUCTION-CREATE"
}
}
]
}

Attributes

requestPayload.customAttributes.ClientNetwork
  Required: Yes
  Type: Url
  IP network definitions for the network that is primarily used for client access to the database servers.
  Applications typically access databases on Exadata Cloud Service through this network using Oracle Net Services
  in conjunction with Single Client Access Name (SCAN) and Oracle RAC Virtual IP (VIP) interfaces.

requestPayload.customAttributes.BackupNetwork
  Required: Yes
  Type: Url
  IP network definitions for the network that is typically used to access the database servers for various purposes,
  including backups and bulk data transfers.

To create an Exadata instance with multi-VM support


If your Exadata system environment is enabled to support multiple virtual machine (VM) clusters, then you can
define up to eight clusters and specify how the overall Exadata system resources are allocated to them.
In a configuration with multiple VM clusters, each VM cluster is allocated a dedicated portion of the overall Exadata
system resources, with no over-provisioning or resource sharing. On the compute nodes, a separate VM is defined
for each VM cluster, and each VM is allocated a dedicated portion of the available compute node CPU, memory, and
local disk resources. Each VM cluster is also allocated a dedicated portion of the overall Exadata storage.
Post a request with the attributes EXAUNIT_ALLOCATIONS and MULTIVM_ENABLED as part of the payload.
The following example includes these optional attributes as well as required attributes.
In the following example, <domain> is the identity domain ID and <base64_encoded_string> is a base64 encoding
of the payload following the example.
Example payload for request:

POST /itas/<domain>/myservices/api/v1/operations
{
"operationItems": [
{
"attributes": [
{
"name": "requestPayload.name",
"value": "newinstanceName"
},
{
"name": "requestPayload.serviceEntitlementId",
"value": "500073421"
},
{
"name": "requestPayload.size",
"value": "CUSTOM"
},
{
"name": "requestPayload.serviceType",
"value": "Exadata"
},
{
"name": "requestPayload.adminUserName",
"value": "[email protected]"
},
{
"name": "requestPayload.adminEmail",
"value": "[email protected]"
},
{
"name": "requestPayload.adminFirstName",
"value": "John"
},
{
"name": "requestPayload.adminLastName",

Oracle Cloud Infrastructure User Guide 324


Service Essentials

"value": "Smith"
},
{
"name": "requestPayload.invokerAdminUserName",
"value": "[email protected]"
},
{
"name": "requestPayload.invokerAdminEmail",
"value": "[email protected]"
},
{
"name": "requestPayload.invokerAdminFirstName",
"value": "John"
},
{
"name": "requestPayload.invokerAdminLastName",
"value": "Smith"
},
{
"name": "requestPayload.customAttributes.ExaUnitName",
"value": "systemname"
},
{
"name": "requestPayload.customAttributes.CreateSparse",
"value": "N"
},
{
"name": "requestPayload.customAttributes.BackupToDisk",
"value": "N"
},
{
"name": "requestPayload.customAttributes.isBYOL",
"value": "N"
},
{
"name": "requestPayload.customAttributes.PickRackSize",
"value": "Quarter Rack"
},
{
"name": "requestPayload.customAttributes.SELECTED_DC_ID",
"value": "US001"
},
{
"name": "requestPayload.customAttributes.EXAUNIT_ALLOCATIONS",
"value": "<base64_encoded_string>"
},
{
"name": "requestPayload.customAttributes.MULTIVM_ENABLED",
"value": "true"
}
],
"operationItemDefinition": {
"id": "CIM-Exadata-CUSTOM-PRODUCTION-CREATE"
}
}
]
}

Payload for <base64_encoded_string>:

{
ExaunitProperties: [
{name:requestId, value:27ac0ee3-0c72-4493-b02b-40038f07d2a0},
{name:Operation, value:AddCluster},
{name:TotalNumOfCoresForCluster, value:4},
{name:TotalMemoryInGb, value:30},
{name:StorageInTb, value:3},
{name:OracleHomeDiskSizeInGb, value:60},
{name:ClientNetwork, value:/root/root/1/ipnetwork1}, // Only if Higgs
is also required
{name:BackupNetwork, value:/root/root/1/ipnetwork2}, // Only if Higgs
is also required
{name:ExaUnitName, value:systemname},
{name:CreateSparse, value:N},
{name:BackupToDisk, value:N}
]
}
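
The <base64_encoded_string> value is simply the payload shown above encoded as base64. As a minimal sketch (the
file name exaunit_payload.json is a placeholder for wherever you save the payload), you could produce the encoded
string in a shell with:

base64 < exaunit_payload.json | tr -d '\n'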

Attributes

requestId
  Required: No (optional)
  Type: String
  Unique UUID

TotalNumOfCoresForCluster
  Required: Yes
  Type: String
  The number of CPU cores that are allocated to the VM cluster. This is the total number of CPU cores that are
  allocated evenly across all of the compute nodes in the VM cluster. Must be a multiple of numComputes as returned
  by a call to ecra/endpoint/clustershapes.

TotalMemoryInGb
  Required: Yes
  Type: String
  The amount of memory (in GB) that is allocated to the VM cluster. This is the total amount of memory that is
  allocated evenly across all of the compute nodes in the VM cluster. Must be a multiple of numComputes as returned
  by a call to ecra/endpoint/clustershapes.

StorageInTb
  Required: Yes
  Type: String
  The total amount of Exadata storage (in TB) that is allocated to the VM cluster. This storage is allocated evenly
  from all of the Exadata Storage Servers.

OracleHomeDiskSizeInGb
  Required: Yes
  Type: String
  The amount of local disk storage (in GB) that is allocated to each database server in the first VM cluster.

Modifying Exadata Instances


This section covers how to add a cluster to an existing instance, reshape a cluster, and delete a cluster.
To add a cluster to an existing instance
Post a request with the operationItemDefinition of CIM-Exadata-CUSTOM-PRODUCTION-UPDATE and a base64
encoding of a payload that includes the Operation value of AddCluster.
In the following example, <domain> is the identity domain ID, <instanceId> and <serviceEntitlementId> are
returned from iTAS serviceInstances, and <base64_encoded_string> is a base64 encoding of the payload following
the example.
Example payload for request:

POST /itas/<domain>/myservices/api/v1/operations HTTP/1.1


{
"operationItems": [
{
"attributes": [
{
"name": "instanceId",
"value": "<instanceId>"
},
{
"name": "requestPayload.serviceEntitlementId",
"value": "<serviceEntitlementId>"
},
{
"name": "requestPayload.size",
"value": "CUSTOM"
},
{
"name": "requestPayload.serviceType",
"value": "Exadata"
},
{
"name": "requestPayload.customAttributes.EXAUNIT_ALLOCATIONS",
"value": "<base64_encoded_string>"
},
{
"name": "requestPayload.customAttributes. MULTIVM_ENABLED",
"value": "true"
}
],
"operationItemDefinition": {
"id": "CIM-Exadata-CUSTOM-PRODUCTION-UPDATE"
}
}
]
}

Payload for <base64_encoded_string>:

{
ExaunitProperties: [
{name:requestId, value:27ac0ee3-0c72-4493-b02b-40038f07d2a0},
{name:Operation, value:AddCluster},
{name:TotalNumOfCoresForCluster, value:4},
{name:TotalMemoryInGb, value:30},
{name:StorageInTb, value:3},
{name:OracleHomeDiskSizeInGb, value:60},
{name:ClientNetwork, value:/root/root/1/ipnetwork1}, // Only if Higgs
is also required
{name:BackupNetwork, value:/root/root/1/ipnetwork2}, // Only if Higgs is also required
{name:ExaUnitName, value:Cluster2},
{name:CreateSparse, value:N},
{name:BackupToDisk, value:N}
]
}

To reshape a cluster
Post a request with the operationItemDefinition of CIM-Exadata-CUSTOM-PRODUCTION-UPDATE and a base64
encoding of a payload that includes the Operation value of ReshapeCluster.
In the following example, <domain> is the identity domain ID and <base64_encoded_string> is a base64 encoding
of the payload following the example.
Example payload for request:

POST /itas/<domain>/myservices/api/v1/operations HTTP/1.1


{
"operationItems": [
{
"attributes": [
{
"name": "instanceId",
"value": "500076173"
},
{
"name": "requestPayload.serviceEntitlementId",
"value": "500073421"
},
{
"name": "requestPayload.size",
"value": "CUSTOM"
},
{
"name": "requestPayload.serviceType",
"value": "Exadata"
},
{
"name": "requestPayload.customAttributes.EXAUNIT_ALLOCATIONS",
"value": "<base64_encoded_string>"
},
{
"name": "requestPayload.customAttributes. MULTIVM_ENABLED",
"value": "true"
}
],
"operationItemDefinition": {
"id": "CIM-Exadata-CUSTOM-PRODUCTION-UPDATE"
}
}
]
}

Payload for <base64_encoded_string>:

{
ExaunitProperties: [
{name:requestId, value:27ac0ee3-0c72-4493-b02b-40038f07d2a0},
{name:ExaunitID, value:1}, // From ecra/endpoint/exaservice/
{serviceInstance}/resourceinfo
{name:Operation, value:ReshapeCluster},
{name:TotalNumOfCoresForCluster, value:10},
{name:TotalMemoryInGb, value:10},
{name:StorageInTb, value:4},
{name:OhomePartitionInGB, value:100},
{name:ClientNetwork, value:/root/root/1/ipnetwork1}, // Only if Higgs
is also required
{name:BackupNetwork, value:/root/root/1/ipnetwork2} // Only if Higgs
is also required
]
}

Important:

• Only one attribute can be modified per Reshape request. The payload
  should contain only the modified attribute. Example:

  {
    ExaunitProperties: [
      {name:Operation, value:ReshapeCluster},
      {name:ExaunitID, value:5},
      {name:TotalNumOfCoresForCluster, value:6}
    ]
  }
• When doing a Reshape with the OracleHomeDiskSizeInGb
attribute, use the name OhomePartitionInGB.
• The value for TotalNumOfCoresForCluster must be a multiple
of numComputes as returned by a call to ecra/endpoint/
clustershapes.
• The value for TotalMemoryInGb must be a multiple of
numComputes as returned by a call to ecra/endpoint/
clustershapes.
To delete a cluster
Post a request with the operationItemDefinition of CIM-Exadata-CUSTOM-PRODUCTION-UPDATE and a base64
encoding of a payload that includes the Operation value of DeleteCluster.
In the following example, <domain> is the identity domain ID and <base64_encoded_string> is a base64 encoding
of the payload following the example.
Example payload for request:

POST /itas/<domain>/myservices/api/v1/operations HTTP/1.1


{
"operationItems": [
{
"attributes": [
{
"name": "instanceId",
"value": "500076173"
},
{
"name": "requestPayload.serviceEntitlementId",
"value": "500073421"
},
{
"name": "requestPayload.size",
"value": "CUSTOM"
},
{
"name": "requestPayload.serviceType",
"value": "Exadata"
},
{
"name": "requestPayload.customAttributes.EXAUNIT_ALLOCATIONS",
"value": "<base64_encoded_string>"
},
{
"name": "requestPayload.customAttributes. MULTIVM_ENABLED",
"value": "true"
}
],
"operationItemDefinition": {
"id": "CIM-Exadata-CUSTOM-PRODUCTION-UPDATE"
}
}
]
}

Payload for <base64_encoded_string>:

{
ExaunitProperties: [
{name:requestId, value:27ac0ee3-0c72-4493-b02b-40038f07d202}, //
Optional
{name:ExaunitID, value:2},
{name:Operation, value:DeleteCluster}
]
}

Deleting Exadata Instances


This section covers how to delete Exadata instances.
Important:

Delete all existing multi-VM clusters before deleting the Exadata instance.
Following this guidance prevents the instance from ending up in an invalid state.
To delete an instance
Post a request with the operationItemDefinition of CIM-Exadata-CUSTOM-PRODUCTION-DELETE.
In the following example, <domain> is the identity domain ID.
Example payload for request:

POST /itas/<domain>/myservices/api/v1/operations HTTP/1.1

{
"operationItems": [
{
"attributes": [
{
"name": "instanceId",
"value": "500076173"
},
{
"name": "requestPayload.serviceEntitlementId",
"value": "500073421"
},
{
"name": "requestPayload.serviceType",
"value": "Exadata"
}
],
"operationItemDefinition": {
"id": "CIM-Exadata-CUSTOM-PRODUCTION-DELETE"
}
}
]
}

Discovering Entitlements and Instances


This section describes how to discover service entitlements and service instances.
To discover service entitlements
Send the following request:

GET /itas/<domain>/myservices/api/v1/serviceEntitlements?
serviceDefinitionNames=Exadata

Example payload returned for this request:

{
"items": [
{
"id": "cesi-585927251", // Unique ServiceEntitlementId
"serviceDefinition": {
"canonicalLink": "/itas/a517289/myservices/api/v1/
serviceDefinitions/502579309",
"id": "502579309",
"name": "Exadata" // The customer is entitled to
use the Exadata Service
},
"status": "ACTIVE",
...
"canonicalLink": "/itas/a517289/myservices/api/v1/serviceInstances/
csi-585928949"
}
... // More Service
Entitlements could be displayed
],
"canonicalLink": "/itas/<domain>/myservices/api/v1/serviceEntitlements",
"hasMore": false,
"limit": 25,
"offset": 0
}

Eligible Data Centers:

Use:

/itas/<domain>/myservices/api/
v1/serviceEntitlements/{ServiceEntitlementId}?expands=serviceInstancesEligibleDataCenter

where {ServiceEntitlementId} is a service entitlement ID such as cesi-500074601. This will provide
additional information such as:

"serviceInstancesEligibleDataCenters": [
{
"id": "US001"
}
],

To discover service instances


Send the following request:

GET /itas/<domain>/myservices/api/v1/serviceInstances?
serviceDefinitionNames=Exadata

Example payload returned for this request:

{
"items": [
{
"id": "csi-585928949", // Unique ServiceInstanceId
"serviceEntitlement": {
"id": "cesi-585927251", // Related ServiceEntitlementId

"canonicalLink": "/itas/a517289/myservices/api/v1/
serviceEntitlements/cesi-585927251"
},
"serviceDefinition": {
"canonicalLink": "/itas/a517289/myservices/api/v1/
serviceDefinitions/502579309",
"id": "502579309",
"name": "Exadata" // The customer is entitled to
use the Exadata Service
},
...
"canonicalLink": "/itas/a517289/myservices/api/v1/serviceInstances/
csi-585928949"
}
... // More Service
Instances could be displayed
],
"canonicalLink": "/itas/<domain>/myservices/api/v1/serviceInstances",
"hasMore": false,
"limit": 25,
"offset": 0
}

Using Access Token Authorization with My Services API


Important:

The My Services dashboard and APIs are deprecated.

This topic explains how to set up and use access token authorization with the Oracle Cloud My Services API. Access
token authorization allows a developer to access programmatic endpoints (APIs) to obtain some information (for
example, entitlements, instances, or metering data) for your cloud account.

About Access Tokens


An access token contains the information required to allow a developer to access information on your cloud account.
A developer presents the token when making API calls. The allowed actions and endpoints depend on the scopes
(permissions) that you select when you generate the token. An access token is valid for about an hour.
A refresh token allows the developer to generate a new access token without having to contact an administrator. A
refresh token is valid for about one year.

Process Overview
Setup steps for the Administrator:
1. Create an Identity Cloud Service client application with the specific privileges you want to grant to developers.
2. Generate an access token that contains the required privileges for the intended developer.
3. Provide the access token and required information to the developer.
4. Configure Identity Cloud Service for access token validation.
Steps for developer to use the token:
1. Issue requests against My Services API endpoints. Include the access token for the authorization parameter.
2. When the access token expires, refresh the access token without administrator intervention until the privilege is
terminated.

Administrator Tasks to Set Up Token Validation


Perform the following tasks to enable developer access with an access token:
Create the IDCS client application
1. Sign in to Identity Cloud Services as an Administrator and go to the administration console. See How to Access
Oracle Identity Cloud Service if you need help signing in.
2. Click the Applications tile. A list of the applications is displayed.
3. Click + Add to create a new application.
4. Click Confidential Application as the type of application.
5. In the App Details section, enter a Name and Description. Avoid entering confidential information.
6. Click Next.
7. In the Client section:
a. Select Configure this application as a client now.
b. Under Authorization, for Allowed Grant Types, select the following options:
• JWT Assertion
• Refresh Token
8. Under Token Issuance Policy, under Resources, click Add Scope.
9. In the Select Scope dialog, select CloudPortalResourceApp and click the arrow to select scopes for the resource.
10. Select the box next to each authorization that you might want to give the developers to whom you will provide an
Access Token. (The permissions are assigned in another step.)
11. Click Add to close the dialog. Your selections are displayed.
12. Click Next.
13. In the Resources section, accept the default and click Next.
14. In the Web Tier Policy section, accept the default and click Next.

15. In the Authorization section, click Finish.


The Application Added notification displays the new Client ID and Client Secret for the application.
Important:

Copy and store the Client ID and Client Secret in a safe place and then
click Close. The Client ID and Client Secret are credentials that are
specific to the application that you just created. You will need these
credentials later.
16. To complete the creation process, click Activate at the top of the page.
Generate an access token
1. Navigate to the IDCS application that you created in the preceding task and select the Details tab.
2. Click Generate Access Token.
3. On the Generate Token dialog, select Customized Scopes, then select Invokes Other APIs.
4. Select the scopes that you want to give to the developer who will receive this access token.
Note:

Oracle recommends that you provide only the minimum required


privileges.
5. Select Include Refresh Token.
6. Click Download Token. Your browser will prompt you to download a token file (.tok). The token file contains an
access token and a refresh token.
7. Provide this file to the developer.
Send the access information to a developer
To call API endpoints, the developer needs:
• A token file that you generated.
• The Client ID and Client Secret for the IDCS application used to generate the token file. The Client ID and Secret
are required for the developer to generate a new access token from the refresh token.
• The endpoints for the APIs.
• Endpoints related to the itas:myservices scopes are: https://itra.oraclecloud.com/
itas/<tenant-IDCS-ID>/myservices/api/v1
• Endpoints related to the itas:metering scopes are: https://itra.oraclecloud.com/metering/
api/v1
Make sure that you send the above information in a secure way. If you think that this information has been
compromised, see Revoking a Developer's Ability to Refresh Access Tokens on page 335.
Configure Identity Cloud Service for access token validation
To allow clients to access the tenant signing certificate without logging in to Oracle Identity Cloud Service:
1. Sign in to the Oracle Identity Cloud Services admin console. See How to Access Oracle Identity Cloud Service if
you need help signing in.
2. Open the navigation menu. Under Settings select Default Settings.
3. Set the Access Signing Certificate toggle button to on.

Using the Access Token


The token file has a .tok extension. The file contains the access token and the refresh token. The content looks like:

{"app_access_token":"eyJ4N...aabb...CpNwA","refresh_token":"AQID...9NCA="}

To use the token with the My Services API:

1. Open the token file.


2. Issue a request to a valid endpoint, inserting the access token for the Authorization parameter.
For example:

curl -X GET \
  https://itra.oraclecloud.com/itas/<tenant-IDCS-ID>/myservices/api/v1/serviceEntitlements \
  -H 'Authorization: Bearer eyJ4N...aabb...CpNwA'

Requesting a New Access Token from a Refresh Token


An access token is valid for about one hour. When the token is no longer valid, you will get a 401 response code and
an error message (“errorMessage”) value containing “Expired”.
You can generate a new short-lived access token from the refresh token. You'll need the Client ID and Client Secret
to generate the new token. You can only generate tokens with the same or lower access (scopes) as your original
token.
Example using the curl command:

curl -i -H 'Authorization: Basic <base64Encoded clientid:secret>' \
  -H 'Content-Type: application/x-www-form-urlencoded;charset=UTF-8' \
  --request POST https://<tenant-IDCS-ID>/oauth2/v1/token \
  -d 'grant_type=refresh_token&refresh_token=<refresh-token>'

Using the sample token file from the previous section, the value for <refresh-token> would be AQID...9NCA=.
Sample response:

{ "access_token": "eyJraWQiO....2nqA", "token_type": "Bearer", "expires_in":


3600, "refresh_token": "AQIDBAUn…VkxNCB7djF9NCA=" }

Note:

When a developer generates a new access token and refresh token, the
previous refresh token becomes invalid.

Revoking a Developer's Ability to Refresh Access Tokens


If you need to revoke a developer's ability to refresh access tokens, you can either invalidate the existing refresh
token by generating a new Client Secret for the token, or you can temporarily revoke access by deactivating the
application.
Important:

Taking either of these actions will terminate or suspend the ability of all
developers using the current Client Secret or application. When generating
tokens for multiple developers, consider creating more than one IDCS
application to isolate developers from each other.
To terminate a developer's ability to refresh their access token
1. Sign in to Identity Cloud Services as an Administrator and go to the administration console. See How to Access
Oracle Identity Cloud Service if you need help signing in.
2. Click the Applications tile. A list of the applications is displayed.
3. Click the application used to generate the token to view its details.
4. Click Configuration.
5. Under General Information, next to Client Secret, click Regenerate to generate a new Client Secret.
To restore the ability for the developer to generate an access token from a refresh token, generate a new access token.
Then provide the token along with the new Client Secret to the developer.

To temporarily suspend a developer's ability to refresh their access token


1. Sign in to Identity Cloud Services as an Administrator and go to the administration console. See How to Access
Oracle Identity Cloud Service if you need help signing in.
2. Click the Applications tile. A list of the applications is displayed.
3. Click the application used to generate the token to view its details.
4. In the upper right corner of the page, click Deactivate.
5. At the prompt, click Deactivate Application.
To re-enable developers to use the same tokens, click Activate.

Chapter 6

API Gateway
This chapter explains how to use the API Gateway service to create protected RESTful API endpoints for Oracle
Functions, Container Engine for Kubernetes, and other services running on Oracle Cloud Infrastructure and beyond.

Overview of API Gateway


The API Gateway service enables you to publish APIs with private endpoints that are accessible from within
your network, and which you can expose with public IP addresses if you want them to accept internet traffic.
The endpoints support API validation, request and response transformation, CORS, authentication and authorization,
and request limiting.
Using the API Gateway service, you create one or more API gateways in a regional subnet to process traffic from API
clients and route it to back-end services. You can use a single API gateway to link multiple back-end services (such as
load balancers, compute instances, and Oracle Functions) into a single consolidated API endpoint.
You can access the API Gateway service to define API gateways and API deployments using the Console and the
REST API.
The API Gateway service is integrated with Oracle Cloud Infrastructure Identity and Access Management (IAM),
which provides easy authentication with native Oracle Cloud Infrastructure identity functionality.
To get set up and running quickly with the API Gateway service, see the Quick Start Guide. A number of related
Developer Tutorials are also available.

Ways to Access Oracle Cloud Infrastructure


You can access Oracle Cloud Infrastructure using the Console (a browser-based interface) or the REST API.
Instructions for the Console and API are included in topics throughout this guide. For a list of available SDKs, see
Software Development Kits and Command Line Interface on page 4262.
To access the Console, you must use a supported browser.
Oracle Cloud Infrastructure supports the following browsers and versions:
• Google Chrome 69 or later
• Safari 12.1 or later
• Firefox 62 or later
For general information about using the API, see REST APIs on page 4409.

Resource Identifiers
Most types of Oracle Cloud Infrastructure resources have a unique, Oracle-assigned identifier called an Oracle
Cloud ID (OCID). For information about the OCID format and other ways to identify your resources, see Resource
Identifiers on page 199.
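For example, an OCID has the general form ocid1.<resource-type>.<realm>.[<region>].<unique-id>; an illustrative
(not real) compartment OCID might look like ocid1.compartment.oc1..aaaaaaaaexampleuniqueid.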

Authentication and Authorization


Each service in Oracle Cloud Infrastructure integrates with IAM for authentication and authorization, for all
interfaces (the Console, SDK or CLI, and REST API).
An administrator in your organization needs to set up groups, compartments, and policies that control which users
can access which services, which resources, and the type of access. For example, the policies control who can create
new users, create and manage the cloud network, launch instances, create buckets, download objects, etc. For more
information, see Getting Started with Policies on page 2143. For specific details about writing policies for each of
the different services, see Policy Reference on page 2176.
If you’re a regular user (not an administrator) who needs to use the Oracle Cloud Infrastructure resources that
your company owns, contact your administrator to set up a user ID for you. The administrator can confirm which
compartment or compartments you should be using.

API Gateway Capabilities and Limits


The number of API gateways, API resources, and certificate resources you can define in a region is controlled by
API Gateway service limits (see API Gateway Limits on page 220). The default service limits vary according to
your payment method. If you need more capacity, you can submit a request to increase the default service limits (see
Requesting a Service Limit Increase on page 219).
Some other API Gateway capabilities and limits are also fixed. However, there are also a number that you can
change. See API Gateway Internal Limits on page 480.

Required IAM Service Policy


To use Oracle Cloud Infrastructure, you must be granted security access in a policy by an administrator. This access
is required whether you're using the Console or the REST API with an SDK, CLI, or other tool. If you get a message
that you don’t have permission or are unauthorized, verify with your administrator what type of access you have and
which compartment you should work in.
If you're new to policies, see Getting Started with Policies and Common Policies.
For more details about policies for the API Gateway service, see:
• Create Policies to Control Access to Network and API Gateway-Related Resources on page 346
• Details for API Gateway on page 2181

API Gateway Concepts


This topic describes key concepts you need to understand when using the API Gateway service.

API Gateways
In the API Gateway service, an API gateway is a virtual network appliance in a regional subnet. Private API gateways
can only be accessed by resources in the same subnet. Public API gateways are publicly accessible, including from
the internet.
An API gateway routes inbound traffic to back-end services including public, private, and partner HTTP APIs,
as well as Oracle Functions. Each API gateway is a private endpoint that you can optionally expose over a public
IP address as a public API gateway.
To ensure high availability, you can only create API gateways in regional subnets (not AD-specific subnets). You
can create private API gateways in private or public subnets, but you can only create public API gateways in public
subnets.
An API gateway is bound to a specific VNIC.
You create an API gateway within a compartment in your tenancy. Each API gateway has a single front end, zero or
more back ends, and has zero or more APIs deployed on it as API deployments.

APIs
In the API Gateway service, an API is a set of back-end resources, and the methods (for example, GET, PUT) that can
be performed on each back-end resource in response to requests sent by an API client.
To enable an API gateway to process API requests, you must deploy the API on the API gateway by creating an
API deployment.
To deploy an API on an API gateway, you have the option to create an API resource in the API Gateway service.
An API resource includes an API description that defines the API resource. Note that creating an API resource is
optional. You can deploy an API on an API gateway without creating an API resource in the API Gateway service.

API Deployments
In the API Gateway service, an API deployment is the means by which you deploy an API on an API gateway. Before
the API gateway can handle requests to the API, you must create an API deployment.
When you create an API deployment, you set properties for the API deployment, including an API deployment
specification. Every API deployment has an API deployment specification.
You can deploy multiple APIs on the same API gateway, so a single API gateway can host multiple
API deployments.

API Deployment Specifications


In the API Gateway service, an API deployment specification defines the back-end resources, routes, methods, and policies that make up an API deployment.
When you create the API deployment, you set properties for the API deployment, including an API deployment
specification. Every API deployment has an API deployment specification. You can create an API deployment
specification:
• by using dialogs in the Console
• by using your preferred JSON editor to create a JSON file
• by using an API description that you've uploaded from an API description file written in a supported language (for
example, OpenAPI Specification version 3.0)
Each API deployment specification describes one or more back-end resources, the route to each back-end resource,
and the methods (for example, GET, PUT) that can be performed on each resource. The API deployment specification
describes how the API gateway integrates with the back end to execute those methods. The API deployment
specification can also include request and response policies.
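
For illustration, a minimal API deployment specification created as a JSON file might look like the following sketch. The field names follow the service's JSON specification format, but the back-end URL is a placeholder and the request policy object is left empty; adapt it to your own back end.

# Write a minimal API deployment specification to a local file.
# The back-end URL is a placeholder.
cat > deployment-spec.json <<'EOF'
{
  "requestPolicies": {},
  "routes": [
    {
      "path": "/hello",
      "methods": ["GET"],
      "backend": {
        "type": "HTTP_BACKEND",
        "url": "https://example.com/hello"
      }
    }
  ]
}
EOF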

API Resources and API Descriptions


In the API Gateway service, you have the option to create an API resource. An API resource is the design-time
representation of an API. You can use an API resource to deploy an API on an API gateway.
An API description defines an API resource, including:
• available endpoints
• available operations on each endpoint
• parameters that can be input and output for each operation
• authentication methods
If you use an API resource to deploy an API on an API gateway, its API description pre-populates some of the
properties of the API deployment specification.
You import the API description from a file (sometimes called an 'API specification', or 'API spec') written in a
supported language. Currently, OpenAPI Specification version 2.0 (formerly Swagger Specification 2.0) and version
3.0 are supported.
Note that creating an API resource in the API Gateway service is optional. You can deploy an API on an API gateway
without creating an API resource in the API Gateway service. Note also that you can create an API resource that
doesn't have an API description initially, and then add an API description later.
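
For illustration, a minimal API description file written in OpenAPI Specification version 3.0 might look like the following sketch; the title, path, and response description are placeholders.

# Write a minimal OpenAPI 3.0 API description that could be uploaded
# when creating an API resource. All values are illustrative.
cat > hello-api.yaml <<'EOF'
openapi: 3.0.0
info:
  title: Hello API
  version: 1.0.0
paths:
  /hello:
    get:
      responses:
        '200':
          description: A greeting
EOF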

Front ends
In the API Gateway service, a front end is the means by which requests flow into an API gateway. An API gateway
can have either a public front end or a private front end:
• A public front end exposes the APIs deployed on an API gateway via a public IP address.
• A private front end exposes the APIs deployed on an API gateway to a VCN via a private endpoint.

Back ends
In the API Gateway service, a back end is the means by which a gateway routes requests to the back-end services that
implement APIs. If you add a private endpoint back end to an API gateway, you give the API gateway access to the
VCN associated with that private endpoint.
You can also grant an API gateway access to other Oracle Cloud Infrastructure services as back ends. For example,
you could grant an API gateway access to Oracle Functions, so you can create and deploy an API that is backed by a
serverless function.
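
For example, a route in an API deployment specification that uses a function in Oracle Functions as its back end might look like the following sketch; the function OCID is a placeholder, and the field names follow the service's JSON specification format.

# Sketch of a route whose back end is a function in Oracle Functions.
# Replace the function OCID with the OCID of your own function.
cat > functions-route.json <<'EOF'
{
  "routes": [
    {
      "path": "/hello",
      "methods": ["GET"],
      "backend": {
        "type": "ORACLE_FUNCTIONS_BACKEND",
        "functionId": "ocid1.fnfunc.oc1..<placeholder>"
      }
    }
  ]
}
EOF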

API Providers, API Consumers, API Clients, and End Users


An API provider is a person or team who designs, implements, delivers, and operates APIs. These users interact
with interfaces such as the Oracle Cloud Infrastructure Console, SDK, CLI, and Terraform provider. They use
API Gateway to deploy, monitor, and operate APIs. Some organizations segment the API provider role further, for
example into:
• API developers, with responsibility for building APIs and deploying them on API gateways
• API Gateway managers, with responsibility for monitoring and managing API gateways, typically in production
An API consumer is a person or team who builds apps and services (API clients) and wants to leverage one or more
APIs offered by an API provider. The API consumer is typically not sharing an Oracle Cloud Infrastructure tenancy
with the API provider. The API consumer is a customer of the API provider.
An API client is an application or device created by an API consumer. The API client invokes the API at runtime by
sending requests to the API gateway on which the API is deployed. API clients typically authenticate with the API
using OAuth, Basic Auth, or mTLS, and might also use another token such as an API key for metering and monetization.
An end user is a user of an API client, and is sometimes referred to as the "resource-owner" in terms of authorization.
The end user only ever interacts with an API using the API client, and is typically unaware of the API itself. The end
user is a customer of the API consumer.

Routes
In the API Gateway service, a route is the mapping between a path, one or more methods, and a back-end service.
Routes are defined in API deployment specifications.

Policies
In the API Gateway service, there are different types of policy:
• a request policy describes actions to be performed on an incoming request from an API client before it is sent to a
back end
• a response policy describes actions to be performed on a response returned from a back end before it is sent to an
API client
• a logging policy describes how to store information about requests and responses going through an API gateway,
and information about processing within an API gateway
You can use request policies and/or response policies to:
• limit the number of requests sent to back-end services
• enable CORS (Cross-Origin Resource Sharing) support
• provide authentication and authorization
• modify incoming requests and outgoing responses


You can add policies to an API deployment specification that apply globally to all routes in the API deployment
specification, as well as policies that apply only to particular routes.
Note that API Gateway policies are different from IAM policies, which control access to Oracle Cloud Infrastructure
resources.
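
For illustration, request policies defined globally in an API deployment specification might look like the following sketch. The property names and values are assumptions based on the service's JSON specification format (a rate limit of 10 requests per second per client IP address, and CORS enabled for a single origin); adjust them to your own requirements.

# Sketch of request policies that apply to all routes in an API deployment.
cat > policies-spec.json <<'EOF'
{
  "requestPolicies": {
    "rateLimiting": {
      "rateInRequestsPerSecond": 10,
      "rateKey": "CLIENT_IP"
    },
    "cors": {
      "allowedOrigins": ["https://example.com"],
      "allowedMethods": ["GET", "POST"]
    }
  },
  "routes": [
    {
      "path": "/hello",
      "methods": ["GET"],
      "backend": { "type": "HTTP_BACKEND", "url": "https://example.com/hello" }
    }
  ]
}
EOF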

Preparing for API Gateway


Before you can use the API Gateway service to create API gateways and deploy APIs on them as API deployments:
• You must have access to an Oracle Cloud Infrastructure tenancy. The tenancy must be subscribed to one or more
of the regions in which API Gateway is available (see Availability by Region on page 342).
• Your tenancy must have sufficient quota on API Gateway-related resources (see Service Limits on page 217).
• Within your tenancy, there must already be a compartment to own the necessary network resources. If such
a compartment does not exist already, you will have to create it. See Create Compartments to Own Network
Resources and API Gateway Resources in the Tenancy, if they don't exist already on page 344.
• The compartment that owns network resources must contain a VCN, a public or private regional subnet, and other
resources (such as an internet gateway, a route table, security lists). To ensure high availability, API gateways can
only be created in regional subnets (not AD-specific subnets). Note that an API gateway must be able to reach the
back ends defined in the API deployment specification. For example, if the back end is on the public internet, the
VCN must have an internet gateway to enable the API gateway to route requests to the back end.
• The VCN must have a set of DHCP options that includes an appropriate DNS resolver to map host names defined
in an API deployment specification to IP addresses. If such a DHCP options set does not exist in the VCN already,
you will have to create it. Select the DHCP options set for the API gateway's subnet as follows:
• If the host name is publicly published on the internet, or if the host name belongs to an instance in the same
VCN, select a DHCP options set that has the Oracle-provided Internet and VCN Resolver as the DNS Type.
This is the default if you do not explicitly select a DHCP options set.
• If the host name is on your own private or internal network (for example, connected to the VCN by
FastConnect), select a DHCP options set that has Custom Resolver as the DNS Type, and has the URL of a
suitable DNS server that can resolve the host name to an IP address.
Note that you can change the DNS server details in the DHCP options set specified for an API gateway's
subnet. The API gateway will be reconfigured to use the updated DNS server details within two hours. For more
information about resolving host names to IP addresses, see DNS in Your Virtual Cloud Network on page 2936
and DHCP Options on page 2943.
• Within your tenancy, there must already be a compartment to own API Gateway-related resources (API gateways,
API deployments). This compartment can be, but need not be, the same compartment that contains the network
resources. See Create Compartments to Own Network Resources and API Gateway Resources in the Tenancy,
if they don't exist already on page 344. Note that the API Gateway-related resources can reside in the root
compartment. However, if you expect multiple teams to create API gateways, best practice is to create a separate
compartment for each team.
• To create API gateways and deploy APIs on them, you must belong to one of the following:
• The tenancy's Administrators group.
• A group to which policies grant the appropriate permissions on network and API Gateway-related resources.
See Create Policies to Control Access to Network and API Gateway-Related Resources on page 346.
• Policies must be defined to give the API gateways you create access to additional resources, if necessary. See
Create a Policy to Give API Gateways Access to Functions on page 349.

Availability by Region
The API Gateway service is available in the Oracle Cloud Infrastructure regions listed at Regions and Availability
Domains on page 182. Refer to that topic to see region identifiers, region keys, and availability domain names.

Configuring Your Tenancy for API Gateway Development


Before you can start using the API Gateway service to create API gateways and deploy APIs on them, you have to set
up your tenancy for API gateway development.
When a tenancy is created, an Administrators group is automatically created for the tenancy. Users that are members
of the Administrators group can perform any operation on resources in the tenancy. API Gateway service users are
typically not members of the Administrators group, and do not have to be. However, a member of the Administrators
group does need to perform a number of administrative tasks to enable users to use the API Gateway service.
To set up your tenancy for API gateway development, you have to complete the following tasks in the order shown in
this checklist (the instructions in the topics below assume that you are a tenancy administrator):

1. Create Groups and Users to Use API Gateway, if these don't exist already on page 343
2. Create Compartments to Own Network Resources and API Gateway Resources in the Tenancy, if they don't exist already on page 344
3. Create a VCN to Use with API Gateway, if one doesn't exist already on page 344
See Example Network Resource Configurations on page 350 for details of typical network configurations.
4. Create Policies to Control Access to Network and API Gateway-Related Resources on page 346, and more specifically:
• Create a Policy to Give API Gateway Users Access to API Gateway-Related Resources on page 346
• Create a Policy to Give API Gateway Users Access to Network Resources on page 347
• Create a Policy to Give API Gateway Users Access to Functions on page 348
• Create a Policy to Give API Gateways Access to Functions on page 349

Click each of the links in turn, and follow the instructions.


Create Groups and Users to Use API Gateway, if these don't exist already
Before users can start using the API Gateway service to create API gateways and deploy APIs on them, as a tenancy
administrator you have to create Oracle Cloud Infrastructure user accounts, along with a group to which the user
accounts belong. Later on, you'll define policies to give the group (and the user accounts that belong to it) access to
API Gateway-related resources. If a suitable group and user accounts already exist, there's no need to create new ones.
To create groups and users to use the API Gateway service:
1. Log in to the Console as a tenancy administrator.

2. If a suitable group for API Gateway users doesn't exist already, create such a group as follows:
a. Open the navigation menu. Under Governance and Administration, go to Identity and click Groups. A list
of the groups in your tenancy is displayed.
b. Click Create Group and create a new group (see To create a group on page 2439). Give the group a
meaningful name (for example, api-gateway-developers) and description. Avoid entering confidential
information.
3. If suitable user accounts for API Gateway users don't exist already, create users as follows:
a. Open the navigation menu. Under Governance and Administration, go to Identity and click Users. A list of
the users in your tenancy is displayed.
b. Click Create User and create one or more new users (see To create a user on page 2436).
4. If they haven't been added already, add users to the group to use the API Gateway service as follows:
a. Open the navigation menu. Under Governance and Administration, go to Identity and click Users. A list of
the users in your tenancy is displayed.
b. Select one or more users and add them to the group authorized to use the API Gateway service (see To add a
user to a group on page 2436).
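
If you prefer to script these steps, the same group and users can be created with the CLI. The following is a sketch: the user name jdoe is hypothetical, and the OCIDs are placeholders taken from the output of the earlier commands.

# Create a group for API Gateway developers.
oci iam group create --name api-gateway-developers --description "Users who create API gateways and API deployments"

# Create a user account (the name is hypothetical).
oci iam user create --name jdoe --description "API Gateway developer"

# Add the user to the group, using the OCIDs returned by the commands above.
oci iam group add-user --group-id <group-ocid> --user-id <user-ocid>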
Create Compartments to Own Network Resources and API Gateway Resources in the Tenancy, if
they don't exist already
Before users can start using the API Gateway service to create API gateways and deploy APIs on them, as a tenancy
administrator you have to create:
• a compartment to own network resources (a VCN, a public or private regional subnet, and other resources such as
an internet gateway, a route table, security lists)
• a compartment to own API Gateway resources (API gateways, API deployments)
Note that the same compartment can own both network resources and API Gateway-related resources. Alternatively,
you can create two separate compartments for network resources and API Gateway-related resources.
If suitable compartments already exist, there's no need to create new ones.
To create a compartment to own network resources and/or API Gateway-related resources in the tenancy:
1. Log in to the Console as a tenancy administrator.
2. Open the navigation menu. Under Governance and Administration, go to Identity and click Compartments. A
list of the compartments in your tenancy is displayed.
3. Click Create Compartment and create a new compartment (see To create a compartment on page 2461). Give
the compartment a meaningful name (for example, acme-network or acme-api-gateway-compartment)
and description. Avoid entering confidential information.
Tip:

Normally, API gateways and API deployments are created in the same
compartment. However, in large development teams with many API
developers, you might find it useful to create separate compartments for
API gateways and for API deployments. Doing so will enable you to give
different groups of users appropriate access to those resources.
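
The same compartments can also be created with the CLI. The following is a sketch; the parent compartment OCID (for example, the tenancy OCID) is a placeholder.

# Create a compartment to own network resources, under a parent compartment.
oci iam compartment create --compartment-id <parent-compartment-ocid> --name acme-network --description "Owns the VCN, subnets, and related network resources"

# Optionally, create a separate compartment to own API Gateway-related resources.
oci iam compartment create --compartment-id <parent-compartment-ocid> --name acme-apigw-compartment --description "Owns API gateways and API deployments"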
Create a VCN to Use with API Gateway, if one doesn't exist already
Before users can start using the API Gateway service to create API gateways and deploy APIs on them, as a tenancy
administrator you have to create one or more VCNs containing a public or private regional subnet in which to create
API gateways.
The VCN can be, but need not be, owned by the same compartment to which other API Gateway-related resources
will belong. To ensure high availability, API gateways can only be created in regional subnets (not AD-specific
subnets). Note that an API gateway must be able to reach the back ends defined in the API deployment specification.
For example, if the back end is on the public internet, the VCN must have an internet gateway to enable the API

Oracle Cloud Infrastructure User Guide 344


API Gateway

gateway to route requests to the back end. The VCN must have a set of DHCP options that includes an appropriate
DNS resolver to map host names defined in an API deployment specification to IP addresses.
The public or private regional subnet in which to create API gateways must have a CIDR block that provides a
minimum of 32 free IP addresses. Note that Oracle strongly recommends the CIDR block provides more than the
minimum.
To support the largest possible number of concurrent connections, Oracle also strongly recommends that the security
lists used by the subnet only have stateless rules.
If a suitable VCN already exists, there's no need to create a new one.
If you do decide to create a new VCN, you have several options, including the following:
• You can create just the VCN initially, and then create the regional subnets and other related resources later (as
described in this topic). In this case, you can choose whether to create a public regional subnet and an internet
gateway (see Internet Gateway on page 3271), or a private regional subnet and a service gateway (see Access
to Oracle Services: Service Gateway on page 3284). For example, if you don't want to expose traffic over the
public internet, create a private regional subnet and a service gateway.
• You can create the new VCN and have related resources created automatically at the same time by selecting the
Start VCN Wizard option. In this case, a public regional subnet and a private regional subnet are created, along
with an internet gateway, a NAT gateway, and a service gateway. Although a default security list is also created,
you have to add a new stateful ingress rule for the regional subnet to allow traffic on port 443. That's because API
Gateway communicates on port 443, and port 443 is not open by default (see the corresponding step in this topic).
See Example Network Resource Configurations on page 350 for details of typical network configurations.
To create a VCN to use with API Gateway:
1. Log in to the Console as a tenancy administrator.
2. Open the navigation menu. Under Core Infrastructure, go to Networking and click Virtual Cloud Networks.
3. Choose the compartment that will own the network resources (on the left side of the page). For example, acme-
network.
The VCN can be, but need not be, owned by the same compartment to which API Gateway-related resources will
belong. The page updates to display only the resources in the compartment you select.
4. Click Create Virtual Cloud Network to create a new VCN.
5. In the Create Virtual Cloud Network dialog box, enter the following:
• Name: A meaningful name for the VCN, such as acme-apigw-vcn. The name doesn't have to be unique,
but you cannot change it later using the Console (although you can change it using the API). Avoid entering
confidential information.
• Other details for the VCN (see To create a VCN on page 2851).
6. Click Create Virtual Cloud Network to create the VCN.
The VCN is created and displayed on the Virtual Cloud Networks page in the compartment you chose.
7. On the Virtual Cloud Networks page, click Create Subnet.
8. In the Create Subnet dialog box, enter the following:
• Name: A meaningful name for the subnet, such as acme-apigw-subnet. The name doesn't have to be
unique, but you cannot change it later using the Console (although you can change it using the API). Avoid
entering confidential information.
• Subnet Type: Select Regional (Recommended). To ensure high availability, API gateways can only be
created in regional subnets (not AD-specific subnets).
• CIDR Block: A CIDR block that provides a minimum of 32 free IP addresses.
• DHCP Options: (Optional) Select a set of DHCP options that includes an appropriate DNS resolver to map
host names defined in an API deployment specification to IP addresses. If you do not explicitly specify a
DHCP options set, the default DHCP options set uses the Oracle-provided Internet and VCN Resolver to
return IP addresses for host names publicly published on the internet, and host names belonging to an instance
in the same VCN.
• Other details for the subnet (see To create a subnet on page 2851).
9. Click Create to create the subnet.
The subnet is created and displayed on the Subnets page in the compartment you chose.
API Gateway communicates on port 443, which is not open by default. You have to add a new stateful ingress rule
for the regional subnet to allow traffic on port 443.
10. Click the name of the regional subnet, then the name of the default security list, and then click Add Ingress Rules
and enter the following:
• Source Type: CIDR
• Source CIDR: 0.0.0.0/0
• IP Protocol: TCP
• Source Port Range: All
• Destination Port Range: 443
11. Click Add Ingress Rules to add the new rule to the default security list.
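
If you prefer to script the network setup, the following CLI sketch creates a VCN and a regional public subnet, and then opens port 443. The OCIDs are placeholders, protocol 6 in the rule JSON denotes TCP, and note that the update command replaces the security list's existing ingress rules with the list you supply; check the CLI help before running it against an existing security list.

# Create the VCN.
oci network vcn create --compartment-id <network-compartment-ocid> --display-name acme-apigw-vcn --cidr-block 10.0.0.0/16

# Create a regional public subnet in the VCN (omitting an availability domain makes it regional).
oci network subnet create --compartment-id <network-compartment-ocid> --vcn-id <vcn-ocid> --display-name acme-apigw-subnet --cidr-block 10.0.0.0/24

# Allow inbound HTTPS (port 443). This replaces the existing ingress rules,
# so include any rules you want to keep in the JSON list.
oci network security-list update --security-list-id <security-list-ocid> --ingress-security-rules '[{"protocol": "6", "source": "0.0.0.0/0", "isStateless": false, "tcpOptions": {"destinationPortRange": {"min": 443, "max": 443}}}]'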
Create Policies to Control Access to Network and API Gateway-Related Resources
Before users can start using the API Gateway service to create API gateways and deploy APIs on them, as a tenancy
administrator you have to create a number of Oracle Cloud Infrastructure policies to grant access to API Gateway-
related and network resources.
To grant access to API Gateway-related and network resources, you have to:
• Grant users access to API Gateway-related resources, network resources, and (optionally) function resources.
More specifically, you have to:
• Create a Policy to Give API Gateway Users Access to API Gateway-Related Resources on page 346
• Create a Policy to Give API Gateway Users Access to Network Resources on page 347
• Create a Policy to Give API Gateway Users Access to Functions on page 348
• Grant API gateways access to functions defined in Oracle Functions, if required. If API Gateway users define a
new API gateway with a serverless function in Oracle Functions as an API back end, the API Gateway service
verifies that the new API gateway will have access to the specified function. To provide access, you have to create
a policy that grants API gateways access to functions defined in Oracle Functions. See Create a Policy to Give
API Gateways Access to Functions on page 349.
See Details for API Gateway on page 2181 for more information about policies.
Create a Policy to Give API Gateway Users Access to API Gateway-Related Resources
When API Gateway users define a new API gateway and new API deployments, they have to specify a compartment
for those API Gateway-related resources. Users can only specify a compartment that the groups to which they belong
have been granted access. To enable users to specify a compartment, you must create an identity policy to grant the
groups access.
To create a policy to give users access to API Gateway-related resources in the compartment that will own those
resources:
1. Log in to the Console as a tenancy administrator.
2. In the Console, open the navigation menu. Under Governance and Administration, go to Identity and click
Policies. A list of the policies in the compartment you're viewing is displayed.
3. Select the compartment that will own API Gateway-related resources from the list on the left.
4. Click Create Policy.

5. Enter the following:


• Name: A meaningful name for the policy (for example, acme-apigw-developers-manage-access).
The name must be unique across all policies in your tenancy. You cannot change this later. Avoid entering
confidential information.
• Description: A meaningful description (for example, Gives api-gateway developers access to
all resources in the acme-apigw-compartment). You can change this later if you want to.
• Statement: As Statement 1, enter the following policy statement to give the group access to all API Gateway-related
resources in the compartment:

Allow group <group-name> to manage api-gateway-family in compartment <compartment-name>

For example:

Allow group acme-apigw-developers to manage api-gateway-family in compartment acme-apigw-compartment
• Tags: If you have permissions to create a resource, then you also have permissions to apply free-form tags
to that resource. To apply a defined tag, you must have permissions to use the tag namespace. For more
information about tagging, see Resource Tags on page 213. If you are not sure whether to apply tags, then skip
this option (you can apply tags later) or ask your administrator.
6. Click Create to create the policy giving API Gateway users access to API Gateway-related resources in the
compartment.
Tip:

Normally, API gateways and API deployments are created in the same
compartment. However, in large development teams with many API
developers, you might find it useful to create separate compartments for
API gateways and for API deployments. Doing so will enable you to give
different groups of users appropriate access to those resources.
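
The same policy can be created with the CLI. The following is a sketch using the example group, compartment, and statement shown above; the compartment OCID is a placeholder.

# Create the policy in the compartment that will own API Gateway-related resources.
oci iam policy create --compartment-id <api-gateway-compartment-ocid> --name acme-apigw-developers-manage-access --description "Gives api-gateway developers access to all resources in the acme-apigw-compartment" --statements '["Allow group acme-apigw-developers to manage api-gateway-family in compartment acme-apigw-compartment"]'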
Create a Policy to Give API Gateway Users Access to Network Resources
When API Gateway users define a new API gateway, they have to specify a VCN and a subnet in which to create
the API gateway. Users can only specify VCNs and subnets that the groups to which they belong have been granted
access. To enable users to specify a VCN and subnet, you must create an identity policy to grant the groups access.
In addition, if you want to enable users to create public API gateways, the identity policy must allow the groups to
manage public IP addresses in the compartment that owns the network resources.
To create a policy to give API Gateway users access to network resources:
1. Log in to the Console as a tenancy administrator.
2. In the Console, open the navigation menu. Under Governance and Administration, go to Identity and click
Policies. A list of the policies in the compartment you're viewing is displayed.
3. Select the compartment that owns the network resources from the list on the left.
4. Click Create Policy.

5. Enter the following:


• Name: A meaningful name for the policy (for example, acme-apigw-developers-network-
access). The name must be unique across all policies in your tenancy. You cannot change this later. Avoid
entering confidential information.
• Description: A meaningful description (for example, Gives api-gateway developers access to
all network resources in the acme-network compartment). You can change this later if
you want to.
• Statement: Enter the following policy statement to give the group access to network resources in the compartment
(including the ability to manage public IP addresses):

Allow group <group-name> to manage virtual-network-family in compartment <compartment-name>

For example:

Allow group acme-apigw-developers to manage virtual-network-family in compartment acme-network
• Tags: If you have permissions to create a resource, then you also have permissions to apply free-form tags
to that resource. To apply a defined tag, you must have permissions to use the tag namespace. For more
information about tagging, see Resource Tags on page 213. If you are not sure whether to apply tags, then skip
this option (you can apply tags later) or ask your administrator.
6. Click Create to create the policy giving API Gateway users access to network resources and public IP addresses
in the compartment.
Create a Policy to Give API Gateway Users Access to Functions
When API Gateway users define a new API gateway, one option is to specify a serverless function defined in Oracle
Functions as the API back end. Users can only specify functions that the groups to which they belong have been
granted access. If you want to enable users to specify functions as API back ends, you must create an identity policy
to grant the groups access. Note that in addition to this policy for the user group, to enable users to specify functions
as API back ends you also have to create a policy to give API gateways access to Oracle Functions (see Create a
Policy to Give API Gateways Access to Functions on page 349).
Another reason to create an identity policy that grants groups access to Oracle Functions is if you want to enable
users to use the Console (rather than a JSON file) to define an authentication request policy and specify an authorizer
function defined in Oracle Functions (see Using Authorizer Functions to Add Authentication and Authorization to
API Deployments on page 435).
To create a policy to give API Gateway users access to functions defined in Oracle Functions:
1. Log in to the Console as a tenancy administrator.
2. In the Console, open the navigation menu. Under Governance and Administration, go to Identity and click
Policies. A list of the policies in the compartment you're viewing is displayed.
3. Select the compartment that owns the functions from the list on the left.
4. Click Create Policy.

5. Enter the following:


• Name: A meaningful name for the policy (for example, acme-apigw-developers-functions-
access). The name must be unique across all policies in your tenancy. You cannot change this later. Avoid
entering confidential information.
• Description: A meaningful description (for example, Gives api-gateway developers access to
all functions in the acme-functions-compartment). You can change this later if you want
to.
• Statement: Enter the following policy statement to give the group access to the functions in the compartment:

Allow group <group-name> to use functions-family in compartment <compartment-name>

For example:

Allow group acme-apigw-developers to use functions-family in compartment acme-functions-compartment
• Tags: If you have permissions to create a resource, then you also have permissions to apply free-form tags
to that resource. To apply a defined tag, you must have permissions to use the tag namespace. For more
information about tagging, see Resource Tags on page 213. If you are not sure whether to apply tags, then skip
this option (you can apply tags later) or ask your administrator.
6. Click Create to create the policy giving API Gateway users access to functions in the compartment.
Create a Policy to Give API Gateways Access to Functions
When API Gateway users define a new API gateway, one option is to specify a serverless function defined in Oracle
Functions as the API back end. Before creating the API gateway, the API Gateway service verifies that the new API
gateway will have access to the specified function through an IAM policy.
Note that in addition to this policy for API gateways, to enable users to specify functions as API back ends you also
have to create a policy to give users access to Oracle Functions (see Create a Policy to Give API Gateway Users
Access to Functions on page 348).
To create a policy to give API gateways access to functions defined in Oracle Functions:
1. Log in to the Console as a tenancy administrator.

2. Create a new policy to give API gateways access to functions defined in Oracle Functions:
a. Open the navigation menu. Under Governance and Administration, go to Identity and click Policies.
b. Select the compartment containing the function-related resources to which you want to grant access. If the
resources are in different compartments, select a common parent compartment (for example, the tenancy's root
compartment).
c. Follow the instructions in To create a policy on page 2473, and give the policy a name (for example, acme-
apigw-gateways-functions-policy).
d. Enter a policy statement to give API gateways access to the compartment containing functions defined in
Oracle Functions:

ALLOW any-user to use functions-family in compartment <functions-compartment-name> where ALL {request.principal.type = 'ApiGateway', request.resource.compartment.id = '<api-gateway-compartment-OCID>'}

where:
• <functions-compartment-name> is the name of the compartment containing the functions you
want to use as back ends for API gateways.
• <api-gateway-compartment-OCID> is the OCID of the compartment containing the API gateways
that you want to have access to the functions.
For example:

ALLOW any-user to use functions-family in compartment acme-functions-compartment where ALL {request.principal.type = 'ApiGateway', request.resource.compartment.id = 'ocid1.compartment.oc1..aaaaaaaa7______ysq'}
e. Click Create to create the policy giving API gateways access to functions defined in Oracle Functions.
Example Network Resource Configurations
Before you can use the API Gateway service to create API gateways and deploy APIs on them as API deployments:
• You must have access to an Oracle Cloud Infrastructure tenancy. The tenancy must be subscribed to one or more
of the regions in which API Gateway is available (see Availability by Region on page 342).
• Your tenancy must have sufficient quota on API Gateway-related resources (see Service Limits on page 217).
• Within your tenancy, there must already be a compartment to own the necessary network resources. If such
a compartment does not exist already, you will have to create it. See Create Compartments to Own Network
Resources and API Gateway Resources in the Tenancy, if they don't exist already on page 344.
• The compartment that owns network resources must contain a VCN, a public or private regional subnet, and other
resources (such as an internet gateway, a route table, security lists). To ensure high availability, API gateways can
only be created in regional subnets (not AD-specific subnets). Note that an API gateway must be able to reach the
back ends defined in the API deployment specification. For example, if the back end is on the public internet, the
VCN must have an internet gateway to enable the API gateway to route requests to the back end.
• The VCN must have a set of DHCP options that includes an appropriate DNS resolver to map host names defined
in an API deployment specification to IP addresses. If such a DHCP options set does not exist in the VCN already,
you will have to create it. Select the DHCP options set for the API gateway's subnet as follows:
• If the host name is publicly published on the internet, or if the host name belongs to an instance in the same
VCN, select a DHCP options set that has the Oracle-provided Internet and VCN Resolver as the DNS Type.
This is the default if you do not explicitly select a DHCP options set.
• If the host name is on your own private or internal network (for example, connected to the VCN by
FastConnect), select a DHCP options set that has Custom Resolver as the DNS Type, and has the URL of a
suitable DNS server that can resolve the host name to an IP address.
Note that you can change the DNS server details in the DHCP options set specified for an API gateway's
subnet. The API gateway will be reconfigured to use the updated DNS server details within two hours. For more
information about resolving host names to IP addresses, see DNS in Your Virtual Cloud Network on page 2936
and DHCP Options on page 2943.
• Within your tenancy, there must already be a compartment to own API Gateway-related resources (API gateways,
API deployments). This compartment can be, but need not be, the same compartment that contains the network
resources. See Create Compartments to Own Network Resources and API Gateway Resources in the Tenancy,
if they don't exist already on page 344. Note that the API Gateway-related resources can reside in the root
compartment. However, if you expect multiple teams to create API gateways, best practice is to create a separate
compartment for each team.
• To create API gateways and deploy APIs on them, you must belong to one of the following:
• The tenancy's Administrators group.
• A group to which policies grant the appropriate permissions on network and API Gateway-related resources.
See Create Policies to Control Access to Network and API Gateway-Related Resources on page 346.
• Policies must be defined to give the API gateways you create access to additional resources, if necessary. See
Create a Policy to Give API Gateways Access to Functions on page 349.
This topic gives examples of how you might configure network resources for API gateways with a serverless function
as a back end:
• for a public API gateway in a public subnet (see Example 1: Example Network Resource Configuration for a
Public API Gateway in a Public Subnet with a Serverless Function as an HTTP Back End on page 351)
• for a private API gateway in a private subnet (see Example 2: Network Resource Configuration for a Private API
Gateway in a Private Subnet with a Serverless Function as an HTTP Back End on page 354)
These examples assume the default helloworld function has been created and deployed in Oracle Functions with the
name helloworld-func and belonging to the helloworld-app application (see Creating, Deploying, and Invoking a
Helloworld Function on page 2067).
Example 1: Example Network Resource Configuration for a Public API Gateway in a Public Subnet with a
Serverless Function as an HTTP Back End
This example assumes you want a public API gateway that can be accessed directly from the internet, with a
serverless function as an HTTP back end.

To achieve this example configuration, you create the following resources in the sequence shown, with the properties
shown in the Example Resource Configuration table below:
1. A VCN named 'acme-vcn1'.
2. An internet gateway named 'acme-internet-gateway'.
3. A route table named 'acme-routetable-public'.
4. A security list named 'acme-security-list-public', with an ingress rule that allows public access to the API gateway
and an egress rule that allows access to Oracle Functions.
5. A public subnet named 'acme-public-subnet'.
6. An API gateway named 'acme-public-gateway', with an API deployment named 'acme-public-deployment'.
Issuing a curl command from the public internet against the API deployment returns the response shown:

[user@machinename ~]$ curl -X GET https://lak...sjd.apigateway.us-phoenix-1.oci.customer-oci.com/marketing/hello

Hello, world!

Example Network Resource Configuration

Resource Example
VCN Created manually, and defined as follows:
• Name: acme-vcn1
• CIDR Block: 10.0.0.0/16
• DNS Resolution: Selected

Internet Gateway Created manually, and defined as follows:


• Name: acme-internet-gateway

Route Table One route table created manually, named, and defined as
follows:
• Name: acme-routetable-public, with a route rule
defined as follows:
• Destination CIDR block: 0.0.0.0/0
• Target Type: Internet Gateway
• Target Internet Gateway: acme-internet-
gateway

DHCP Options Created automatically and defined as follows:


• DNS Type set to Internet and VCN Resolver

Security List One security list created manually (in addition to the
default security list), named, and defined as follows:
• Security List Name: acme-security-list-public, with
an ingress rule that allows public access to the API
gateway, and an egress rule that allows access to
Oracle Functions.
• Ingress Rule 1:
• State: Stateful
• Source Type: CIDR
• Source CIDR: 0.0.0.0/0
• IP Protocol: TCP
• Source Port Range: All
• Destination Port Range: 443
• Egress Rule 1:
• State: Stateful
• Destination Type: CIDR
• Destination CIDR: 0.0.0.0/0
• IP Protocol: All Protocols

Subnet One regional public subnet created manually, named, and defined as follows:
• Name: acme-public-subnet with the following
properties:
• CIDR Block: 10.0.0.0/24
• Route Table: acme-routetable-public
• Subnet access: Public
• DNS Resolution: Selected
• DHCP Options: Default
• Security List: acme-security-list-public

API Gateway One public API gateway created and defined as follows:
• Name: acme-public-gateway
• Type: Public
• VCN: acme-vcn1
• Subnet: acme-public-subnet
• Hostname: (for the purpose of this example,
the hostname is lak...sjd.apigateway.us-
phoenix-1.oci.customer-oci.com)

API Deployment One API deployment created and defined as follows:
• Name: acme-public-deployment
• Path Prefix: /marketing
• API Request Policies: None specified
• API Logging: None specified
• Route:
• Path: /hello
• Methods: GET
• Type: Oracle Functions
• Application: helloworld-app
• Function Name: helloworld-func

Example 2: Network Resource Configuration for a Private API Gateway in a Private Subnet with a
Serverless Function as an HTTP Back End
This example assumes you want a private API gateway that can only be accessed via a bastion host (rather than
accessed directly from the internet), with a serverless function as an HTTP back end.

To achieve this example configuration, you create the following resources in the sequence shown, with the properties
shown in the Example Resource Configuration table below:
1. A VCN named acme-vcn2
2. An internet gateway named acme-internet-gateway

3. A service gateway named acme-service-gateway. (In this example, you only need to create a service gateway,
because the API gateway only has an Oracle Functions back end. However, if the API gateway has both an Oracle
Functions back end and also an HTTP back end on the public internet, you could create a NAT gateway instead to
access both back ends.)
4. A route table named acme-routetable-private
5. A security list named acme-security-list-private, with an ingress rule that allows the bastion host to access the API
gateway and an egress rule that allows access to Oracle Functions.
6. A private subnet named acme-private-subnet
7. An API gateway named acme-private-gateway, with an API deployment named acme-private-deployment
8. A route table named acme-routetable-bastion
9. A security list named acme-security-list-bastion, with an ingress rule that allows public SSH access to the bastion
host and an egress rule that allows the bastion host to access the API gateway.
10. A public subnet named acme-bastion-public-subnet
11. A compute instance with a public IP address to act as the bastion host, called acme-bastion-instance
Having SSH'd into the bastion host, issuing a curl command against the API deployment returns the response shown:

[user@machinename ~]$ ssh opc@198.51.100.254

[opc@acme-bastion-instance ~]$ curl -X GET https://pwa...djt.apigateway.us-phoenix-1.oci.customer-oci.com/marketing-private/hello

Hello, world!

Example Resource Configuration

Resource Example
VCN Created manually, and defined as follows:
• Name: acme-vcn2
• CIDR Block: 10.0.0.0/16
• DNS Resolution: Selected

Internet Gateway Created manually, and defined as follows:


• Name: acme-internet-gateway

Service Gateway Created manually, and defined as follows:


• Name: acme-service-gateway
• Services: All <region> Services in Oracle Services
Network

Route Tables Two route tables created manually, named, and defined
as follows:
• Name: acme-routetable-bastion, with a route rule
defined as follows:
• Destination CIDR block: 0.0.0.0/0
• Target Type: Internet Gateway
• Target Internet Gateway: acme-internet-
gateway
• Name: acme-routetable-private, with a route rule
defined as follows:
• Destination CIDR block: 0.0.0.0/0
• Target Type: Service Gateway
• Destination Service: All <region> Services in
Oracle Services Network
• Target Service Gateway: acme-service-gateway

DHCP Options Created automatically and defined as follows:


• DNS Type set to Internet and VCN Resolver

Security List Two security lists created manually (in addition to the
default security list), named, and defined as follows:
• Security List Name: acme-security-list-bastion, with
an ingress rule that allows public SSH access to the
bastion host and an egress rule that allows the bastion
host to access the API gateway:
• Ingress Rule 1:
• State: Stateful
• Source Type: CIDR
• Source CIDR: 0.0.0.0/0
• IP Protocol: TCP
• Source Port Range: All
• Destination Port Range: 22
• Egress Rule 1:
• State: Stateful
• Destination Type: CIDR
• Destination CIDR: 0.0.0.0/0
• IP Protocol: All Protocols
• Security List Name: acme-security-list-private, with
an ingress rule that allows the bastion host to access
the API gateway and an egress rule that allows access
to Oracle Functions:
• Ingress Rule 1:
• State: Stateful
• Source Type: CIDR
• Source CIDR: 10.0.0.0/16
• IP Protocol: TCP
• Source Port Range: All
• Destination Port Range: 443
• Egress Rule 1:
• State: Stateful
• Destination Type: CIDR
• Destination CIDR: 0.0.0.0/0
• IP Protocol: All Protocols

Subnet Two regional subnets created manually, named, and
defined as follows:
• Name: acme-bastion-public-subnet, with the
following properties:
• CIDR Block: 10.0.1.0/24
• Route Table: acme-routetable-bastion
• Subnet access: Public
• DNS Resolution: Selected
• DHCP Options: Default
• Security List: acme-security-list-bastion
• Name: acme-private-subnet, with the following
properties:
• CIDR Block: 10.0.2.0/24
• Route Table: acme-routetable-private
• Subnet access: Private
• DNS Resolution: Selected
• DHCP Options: Default
• Security List: acme-security-list-private

API Gateway One private API gateway created and defined as follows:
• Name: acme-private-gateway
• Type: Private
• VCN: acme-vcn2
• Subnet: acme-private-subnet
• Hostname: (for the purpose of this example,
the hostname is pwa...djt.apigateway.us-
phoenix-1.oci.customer-oci.com)

API Deployment One API deployment created and defined as follows:


• Name: acme-private-deployment
• Path Prefix: /marketing-private
• API Request Policies: None specified
• API Logging: None specified
• Route:
• Path: /hello
• Methods: GET
• Type: Oracle Functions
• Application: helloworld-app
• Function Name: helloworld-func

Instance One compute instance created and defined as follows:
• Name: acme-bastion-instance
• Availability Domain: AD1
• Instance Type: Virtual Machine
• VCN: acme-vcn2
• Subnet:acme-bastion-public-subnet
• Assign a public IP address: Selected (for the
purpose of this example, the instance is given the
IP address 198.51.100.254)

Configuring Your Client Environment for API Gateway Development


When using the API Gateway service to create API gateways and API deployments, you can perform many
operations using the Console. However, as well as using the Console, you'll typically also want to create and manage
API gateways and API deployments programmatically using the API Gateway service's REST API.
You can use the API Gateway REST API using the Oracle Cloud Infrastructure CLI (for more information and
configuration instructions, see Configuring Your Client Environment to use the CLI for API Gateway Development
on page 359).
Configuring Your Client Environment to use the CLI for API Gateway Development
In addition to using the Console to create API gateways and API deployments, you'll typically also want to create and
manage API gateways and API deployments programmatically using the API Gateway service's REST API.
One way to use the API Gateway REST API is to use the Oracle Cloud Infrastructure CLI.
Before you can start using the Oracle Cloud Infrastructure CLI to create and manage API gateways and API
deployments programmatically using the API Gateway REST API, you have to set up your client environment
appropriately. Note that prior to setting up your client environment, you must already have set up your tenancy (see
Configuring Your Tenancy for API Gateway Development on page 343).
To set up your client environment for API development using the Oracle Cloud Infrastructure CLI, you have to
complete the following tasks in the order shown in this checklist:

1. Installing the CLI on page 4231
2. Setting up the Config File on page 4233

Click each of the links in the checklist in turn, and follow the instructions.
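
Once the CLI is installed and configured, the config file it reads typically looks like the following sketch; every value shown is a placeholder for the details of your own tenancy, user, and API signing key.

# Sketch of the CLI configuration file (all values are placeholders).
mkdir -p ~/.oci
cat > ~/.oci/config <<'EOF'
[DEFAULT]
user=ocid1.user.oc1..<placeholder>
fingerprint=<api-signing-key-fingerprint>
key_file=~/.oci/oci_api_key.pem
tenancy=ocid1.tenancy.oc1..<placeholder>
region=us-phoenix-1
EOF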

Creating an API Gateway


You can create one or more API gateways to process traffic from API clients and route it to back-end services.
Having created an API gateway, you then deploy an API on the API gateway by creating an API deployment.
You can use a single API gateway as the front end for multiple back-end services by:
• Creating a single API deployment on the API gateway, with an API deployment specification that defines multiple
back-end services.
• Creating multiple API deployments on the same API gateway, each with an API deployment specification that
defines one (or more) back-end services.

Having a single API gateway as a front end enables you to present a single cohesive API to API consumers and API
clients, even if the API is actually comprised of smaller microservices written by different software teams using
different programming languages or technologies.

Using the Console


To create an API gateway:
1. In the Console, open the navigation menu. Under Solutions and Platform, go to Developer Services and click
API Gateway.
2. Choose a Compartment you have permission to work in.
3. On the Gateways page, click Create Gateway and specify:
• Name: The name of the new API gateway. Avoid entering confidential information.
• Type: The type of API gateway to create. Select Private if you want the API gateway (and the APIs deployed
on it) to be accessible only from the same subnet in which the API gateway is created. Select Public if you
want the API gateway (and the APIs deployed on it) to be accessible from the internet. In the case of a public
API gateway, an internet gateway must exist to give access to the VCN.
• Compartment: The compartment to which the new API gateway is to belong.
• Custom DNS: Use this option to determine the TLS certificate (and associated domain name) that the API
gateway uses.
Do not select this option if you want the domain name to be generated automatically for you, and you want the
API gateway to use a TLS certificate obtained by the API Gateway service (the default behavior). The auto-
generated domain name will comprise a random string of characters followed by .apigateway.<region-
identifier>.oci.customer-oci.com. For example, laksjd.apigateway.us-
phoenix-1.oci.customer-oci.com.
Do select this option if you want the API gateway to use a custom TLS certificate (and associated custom
domain name). In this case, specify the name of the default API Gateway certificate resource containing details
of the custom TLS certificate (and associated custom domain name) that you want the API gateway to use.
Note that for public or production systems, Oracle recommends using custom TLS certificates. Oracle
recommends only using TLS certificates obtained by the API Gateway service for private or non-production
systems (for example, for development and testing).
See Setting Up Custom Domains and TLS Certificates on page 375.
• VCN in <compartment-name>: The VCN in which to create the API gateway. The VCN can belong to the
same compartment as the new API gateway, but does not have to.
• Subnet in <compartment-name>: The name of a public or private regional subnet in which to create the API
gateway. If you want to create a public API gateway, you must specify a public regional subnet.
• Tags: If you have permissions to create a resource, then you also have permissions to apply free-form tags
to that resource. To apply a defined tag, you must have permissions to use the tag namespace. For more
information about tagging, see Resource Tags on page 213. If you are not sure whether to apply tags, then skip
this option (you can apply tags later) or ask your administrator.
4. Click Create to create the new API gateway.
Note that it can take a few minutes to create the new API gateway. While it is being created, the API gateway
is shown with a state of Creating on the Gateways page. When it has been created successfully, the new
API gateway is shown with a state of Active.
5. If you have waited more than a few minutes for the API gateway to be shown with an Active state (or if the API
gateway creation operation has failed):
a. Click the name of the API gateway, and click Work Requests to see an overview of the API gateway creation
operation.
b. Click the Create Gateway operation to see more information about the operation (including error messages,
log messages, and the status of associated resources).
c. If the API gateway creation operation has failed and you cannot diagnose the cause of the problem from the
work request information, see Troubleshooting API Gateway on page 478.

Having successfully created an API gateway, you can deploy an API on it (see Deploying an API on an API Gateway
by Creating an API Deployment on page 367).

Using the CLI


To create a new API gateway using the CLI:
1. Configure your client environment to use the CLI (Configuring Your Client Environment to use the CLI for API
Gateway Development on page 359).
2. Open a command prompt and run oci api-gateway gateway create to create the API gateway:

oci api-gateway gateway create --display-name "<gateway-name>" --


compartment-id <compartment-ocid> --endpoint-type "<gateway-type>" --
subnet-id <subnet-ocid> --certificate-id <certificate-ocid>

where:
• <gateway-name> is the name of the new API gateway. Avoid entering confidential information.
• <compartment-ocid> is the OCID of the compartment to which the new API gateway will belong.
• <gateway-type> is the type of API gateway to create. Specify PRIVATE if you want the API gateway
(and the APIs deployed on it) to be accessible only from the same subnet in which the API gateway is created.
Specify PUBLIC if you want the API gateway (and the APIs deployed on it) to be accessible from the internet.
• <subnet-ocid> is the OCID of a public or private regional subnet in which to create the API gateway. If
you want to create a public API gateway, you must specify a public regional subnet.
• <certificate-ocid> (optional) is the OCID of the API Gateway certificate resource created for the API
gateway's custom TLS certificate. See Setting Up Custom Domains and TLS Certificates on page 375.
For example:

oci api-gateway gateway create --display-name "Hello World Gateway" --


compartment-id ocid1.compartment.oc1..aaaaaaaa7______ysq --endpoint-type
"PRIVATE" --subnet-id ocid1.subnet.oc1.iad.aaaaaaaaz______rca

The response to the command includes:


• The API gateway's OCID.
• The host name, as the domain name to use when calling an API deployed on the API gateway. If you
didn't specify an API Gateway certificate resource when creating the API gateway, a domain name
is automatically generated in the format <gateway-identifier>.apigateway.<region-
identifier>.oci.customer-oci.com, where:
• <gateway-identifier> is a string of characters that identifies the API gateway. For example,
lak...sjd (abbreviated for readability).
• <region-identifier> is the identifier of the region in which the API gateway has been created. See
Availability by Region on page 342.
For example, lak...sjd.apigateway.us-phoenix-1.oci.customer-oci.com.
• The lifecycle state (for example, ACTIVE, FAILED).
• The id of the work request to create the API gateway (details of work requests are available for seven days
after completion, cancellation, or failure).
If you want the command to wait to return control until the API gateway is active (or the request has failed),
include either or both the following parameters:
• --wait-for-state ACTIVE
• --wait-for-state FAILED
For example:

oci api-gateway gateway create \
  --display-name "Hello World Gateway" \
  --compartment-id ocid1.compartment.oc1..aaaaaaaa7______ysq \
  --endpoint-type "PRIVATE" \
  --subnet-id ocid1.subnet.oc1.iad.aaaaaaaaz______rca \
  --wait-for-state ACTIVE

Note that you cannot use the API gateway until the work request has successfully created it and the API gateway
is active.
3. (Optional) To see the status of the API gateway, enter:

oci api-gateway gateway get --gateway-id <gateway-ocid>


4. (Optional) To see the status of the work request that is creating the API gateway, enter:

oci api-gateway work-request get --work-request-id <work-request-ocid>


5. (Optional) To view the logs of the work request that is creating the API gateway, enter:

oci api-gateway work-request-log list --work-request-id <work-request-ocid>

6. (Optional) If the work request that is creating the API gateway fails and you want to review the error logs, enter:

oci api-gateway work-request-error list --work-request-id <work-request-ocid>

For more information about using the CLI, see Command Line Interface (CLI). For a complete list of flags and
options available for CLI commands, see CLI Help.
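
If you script these steps, the CLI's global --query (JMESPath expression) and --raw-output parameters can reduce the command output to a single value. The following is a hedged sketch that prints just the gateway's lifecycle state; it assumes the CLI's usual JSON output shape, in which fields appear under data with kebab-case keys such as "lifecycle-state":

oci api-gateway gateway get --gateway-id <gateway-ocid> \
  --query 'data."lifecycle-state"' --raw-output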

Using the API


For information about using the API and signing requests, see REST APIs on page 4409 and Security Credentials on
page 181. For information about SDKs, see Software Development Kits and Command Line Interface on page 4262.
Use the CreateGateway operation to create an API gateway.
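
If you want to exercise the REST API directly without an SDK, the CLI's oci raw-request command signs the request for you. The following is a minimal sketch only: the regional endpoint, the 20190501 API version, and the request body fields shown here are assumptions that mirror the CreateGateway operation and should be checked against the REST API reference before use:

oci raw-request --http-method POST \
  --target-uri "https://apigateway.us-phoenix-1.oci.oraclecloud.com/20190501/gateways" \
  --request-body '{
    "displayName": "Hello World Gateway",
    "compartmentId": "<compartment-ocid>",
    "endpointType": "PRIVATE",
    "subnetId": "<subnet-ocid>"
  }'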

Creating an API Resource with an API Description


When using the API Gateway service, you have the option to create an API resource. You can use the API resource to
deploy an API on an API gateway. The API resource has an API description that describes the API.
If you use an API resource to deploy an API on an API gateway, its API description pre-populates some of the
properties of the API deployment specification.
You import the API description from a file (sometimes called an 'API specification', or 'API spec') written in a
supported language. Currently, OpenAPI Specification version 2.0 (formerly Swagger Specification 2.0) and version
3.0 are supported.
Note that creating an API resource in the API Gateway service is optional. You can deploy an API on an API gateway
without creating an API resource in the API Gateway service. Note also that you can create an API resource that
doesn't have an API description initially, and then add an API description later.

Using the Console


To create an API resource, optionally with an API description created from an uploaded API description file, using
the Console:
1. In the Console, open the navigation menu. Under Solutions and Platform, go to Developer Services and click
API Gateway.
2. Choose a Compartment you have permission to work in.


3. On the APIs page, click Create API Resource and specify:


• Name: The name of the new API resource. Avoid entering confidential information.
• Compartment: The compartment to which the new API resource is to belong.
• Upload API Description File: (optional) A file containing an API description (in a supported language) to
upload and from which to create the API description. The file can be up to 1MB in size. The file is parsed to
confirm that it is in a supported language and correctly formatted. Currently, OpenAPI Specification version
2.0 (formerly Swagger Specification 2.0) and version 3.0 files are supported.
• Tags: If you have permissions to create a resource, then you also have permissions to apply free-form tags
to that resource. To apply a defined tag, you must have permissions to use the tag namespace. For more
information about tagging, see Resource Tags on page 213. If you are not sure whether to apply tags, then skip
this option (you can apply tags later) or ask your administrator.
4. Click Create to create the new API resource.
If you uploaded an API description file, an API description is created and validated. Note that it can take a few
minutes to validate the API description. While it is being validated, the API description is shown with a state of
Validating on the Validations page. When the API description has been validated successfully:
• The Validations page shows successful validation.
• The API description page shows the API description created from the API description file.
• The API Deployment Specification page shows any additional information about the default API deployment
specification created from the API description.
5. If you have waited more than a few minutes for the API description to be shown as Valid (or if the API
description validation operation has failed):
a. Click Work Requests to see an overview of the API description validation operation.
b. Click the Validate API operation to see more information about the operation (including error messages, log
messages, and the status of associated resources).
c. If the API description validation operation has failed and you cannot diagnose the cause of the problem from
the work request information, see Troubleshooting API Gateway on page 478.
6. If you don't upload an API description file when you first create an API resource, or if you subsequently want to
upload a different API description file:
a. On the APIs page, select Edit from the Actions menu beside the API resource.
b. Provide details of the API description file from which to create the API description.
Having successfully created an API resource with an API description, you can deploy it on an API gateway (see
Using the Console to Create an API Deployment from an API Resource on page 370).

Using the CLI


To create an API resource, optionally with an API description created from an uploaded API description file, using
the CLI:
1. Configure your client environment to use the CLI (Configuring Your Client Environment to use the CLI for API
Gateway Development on page 359).


2. Open a command prompt and run oci api-gateway api create to create the API resource:

oci api-gateway api create \
  --display-name "<api-name>" \
  --compartment-id <compartment-ocid> \
  --content "<api-description>"

where:
• <api-name> is the name of the new API resource. Avoid entering confidential information.
• <compartment-ocid> is the OCID of the compartment to which the new API resource will belong.
• <api-description> (optional) is an API description (in a supported language). The value you specify for <api-description> can be:
• The entire API description, enclosed within double quotes. Inside the description, each double quote must
be escaped with a backslash (\) character. For example (and abbreviated for readability), --content
"swagger:\"2.0\",title:\"Sample API\",..."
• The name and location of an API description file, enclosed within double quotes and in the format "$(<
<path>/<filename>.yaml)". For example, --content "$(< /users/jdoe/api.yaml)"
The description is parsed to confirm that it is in a supported language and correctly formatted. Currently,
OpenAPI Specification version 2.0 (formerly Swagger Specification 2.0) and version 3.0 files are supported.

For example:

oci api-gateway api create \
  --display-name "Hello World API Resource" \
  --compartment-id ocid1.compartment.oc1..aaaaaaaa7______ysq \
  --content "swagger:\"2.0\",title:\"Sample API\",..."

The response to the command includes:


• The API resource's OCID.
• The lifecycle state (for example, SUCCEEDED, FAILED).
• The id of the work request to create the API resource (details of work requests are available for seven days
after completion, cancellation, or failure).
If you want the command to wait to return control until the API resource has been created (or the request has
failed), include either or both the following parameters:
• --wait-for-state SUCCEEDED
• --wait-for-state FAILED
For example:

oci api-gateway api create \
  --display-name "Hello World API Resource" \
  --compartment-id ocid1.compartment.oc1..aaaaaaaa7______ysq \
  --content "swagger:\"2.0\",title:\"Sample API\",..." \
  --wait-for-state SUCCEEDED

Note that you cannot use the API resource until the work request has successfully created it.
3. (Optional) To see the status of the work request that is creating the API resource, enter:

oci api-gateway work-request get --work-request-id <work-request-ocid>


4. (Optional) To view the logs of the work request that is creating the API resource, enter:

oci api-gateway work-request-log list --work-request-id <work-request-ocid>

5. (Optional) If the work request that is creating the API resource fails and you want to review the error logs, enter:

oci api-gateway work-request-error list --work-request-id <work-request-ocid>


For more information about using the CLI, see Command Line Interface (CLI). For a complete list of flags and
options available for CLI commands, see CLI Help.
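
As an end-to-end illustration of the --content "$(< ... )" form, the following hedged sketch writes a minimal OpenAPI 2.0 description to a local file and passes it to the create command. The file name, title, and compartment OCID are placeholders:

# Write a minimal OpenAPI 2.0 (Swagger) description to a local file.
cat > /tmp/hello-api.yaml <<'EOF'
swagger: "2.0"
info:
  title: Hello API
  version: "1.0"
paths:
  /hello:
    get:
      responses:
        "200":
          description: OK
EOF

# Create the API resource from the file and wait for the work request to complete.
oci api-gateway api create --display-name "Hello API Resource" \
  --compartment-id <compartment-ocid> \
  --content "$(< /tmp/hello-api.yaml)" \
  --wait-for-state SUCCEEDED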

Using the API


For information about using the API and signing requests, see REST APIs on page 4409 and Security Credentials on
page 181. For information about SDKs, see Software Development Kits and Command Line Interface on page 4262.
Use the CreateAPI operation to create an API resource.

Creating an API Deployment Specification


Before you can deploy an API on an API gateway, you have to create an API deployment specification. Every
API deployment has an API deployment specification.
Each API deployment specification describes a set of resources, and the methods (for example, GET, PUT) that can
be performed on each resource.
You can use a single API gateway as the front end for multiple back-end services by:
• Creating a single API deployment on the API gateway, with an API deployment specification that defines multiple
back-end services.
• Creating multiple API deployments on the same API gateway, each with an API deployment specification that
defines one (or more) back-end services.
Typically, back-end services will be in the same VCN as the API gateway on which you deploy an API. However,
they don't have to be. In the API deployment specification, you can describe back-end services that are on a private or
public subnet in your tenancy, as well as services outside your tenancy (including on the public internet). Wherever
they are located, the back-end services must be routable from the subnet containing the API gateway on which
the API is deployed. For example, if the back-end service is on the public internet, the VCN must have an internet
gateway to enable the API gateway to route requests to the back-end service.
You can create an API deployment specification:
• Using dialogs in the Console whilst creating an API deployment.
• Using your preferred JSON editor to create a separate JSON file. You can then specify the JSON file when using
the Console, the CLI, or the API to create an API deployment.
• Using an API description file you upload for an API resource. The API description file provides some initial
values for the API deployment specification, which you can modify and extend when deploying the API resource
on an API gateway. See Creating an API Resource with an API Description on page 362.
The instructions in this topic show a basic API deployment specification with a single backend, only one route, and
no request or response policies. See Example API Deployment Specification with Multiple Back Ends on page 367
for a more typical API deployment specification that includes multiple back ends, each with one or more routes.
In addition, you can add request and response policies that apply to routes in an API deployment specification (see
Adding Request Policies and Response Policies to API Deployment Specifications on page 421).

Using the Console to Create an API Deployment Specification


To create an API deployment specification whilst creating an API deployment using dialogs in the Console, see Using
the Console to Create an API Deployment from Scratch on page 368.

Using a JSON Editor to Create an API Deployment Specification in a Separate JSON File


To create an API deployment specification in a JSON file:
1. Using your preferred JSON editor, create the API deployment specification in a JSON file in the format:

{
  "requestPolicies": {},
  "routes": [
    {
      "path": "<api-route-path>",
      "methods": ["<method-list>"],
      "backend": {
        "type": "<backend-type>",
        "<backend-target>": "<identifier>"
      },
      "requestPolicies": {}
    }
  ]
}

where:
• "requestPolicies" specifies optional policies to control the behavior of an API deployment. If you
want to apply policies to all routes in an API deployment specification, place the policies outside the routes
section. If you want to apply the policies just to a particular route, place the policies inside the routes
section. See Adding Request Policies and Response Policies to API Deployment Specifications on page
421.
• <api-route-path> specifies a path for API calls using the listed methods to the back-end service. Note
that the route path you specify:
• is relative to the deployment path prefix (see Deploying an API on an API Gateway by Creating an API
Deployment on page 367)
• must be preceded by a forward slash ( / ), and can be just that single forward slash
• can contain multiple forward slashes (provided they are not adjacent), and can end with a forward slash
• can include alphanumeric uppercase and lowercase characters
• can include the special characters $ - _ . + ! * ' ( ) , % ; : @ & =
• can include parameters and wildcards (see Adding Path Parameters and Wildcards to Route Paths on page
381)
• <method-list> specifies one or more methods accepted by the back-end service, separated by commas.
For example, "GET, PUT".
• <backend-type> specifies the type of the back-end service. Valid values are
ORACLE_FUNCTIONS_BACKEND, HTTP_BACKEND, and STOCK_RESPONSE_BACKEND.
• <backend-target> and <identifier> specify the back-end service. Valid values for <backend-
target> and <identifier> depend on the value of <backend-type>, as follows:
• If you set <backend-type> to ORACLE_FUNCTIONS_BACKEND, then replace <backend-
target> with functionId, and replace <identifier> with the OCID of the function. For
example, "functionId": "ocid1.fnfunc.oc1.phx.aaaaaaaaab5b...". See Adding a
Function in Oracle Functions as an API Gateway Back End on page 414.
• If you set <backend-type> to HTTP_BACKEND, then replace <backend-target> with url, and
replace <identifier> with the URL of the back-end service. For example, "url": "https://
api.weather.gov". See Adding an HTTP or HTTPS URL as an API Gateway Back End on page
410.
• If you set <backend-type> to STOCK_RESPONSE_BACKEND, then replace <backend-target> and <identifier> with appropriate key-value pairs (a hedged sketch appears after these steps). See Adding Stock Responses as an API Gateway Back End on page 417.
For example, the following basic API deployment specification defines a simple Hello World serverless function
in Oracle Functions as a single back end:

{
  "routes": [
    {
      "path": "/hello",
      "methods": ["GET"],
      "backend": {
        "type": "ORACLE_FUNCTIONS_BACKEND",
        "functionId": "ocid1.fnfunc.oc1.phx.aaaaaaaaab______xmq"
      }
    }
  ]
}

For a more complex example that defines multiple back ends, see Example API Deployment Specification with
Multiple Back Ends on page 367.
2. Save the JSON file containing the API deployment specification.
3. Use the API deployment specification when you create or update an API deployment in the following ways:
• by specifying the JSON file in the Console when you select the Upload an existing API option
• by specifying the JSON file in a request to the API Gateway REST API
For more information, see Deploying an API on an API Gateway by Creating an API Deployment on page 367.
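
The STOCK_RESPONSE_BACKEND type mentioned in the template above is not shown in the examples in this topic. The following hedged sketch writes a specification whose single route returns a fixed response, then checks that the file is well-formed JSON before you upload it. The status, body, and headers field names are assumed to follow Adding Stock Responses as an API Gateway Back End on page 417, and the path and response values are placeholders:

# Write a specification whose only route returns a fixed response.
cat > /tmp/stock-response-deployment.json <<'EOF'
{
  "routes": [
    {
      "path": "/ping",
      "methods": ["GET"],
      "backend": {
        "type": "STOCK_RESPONSE_BACKEND",
        "status": 200,
        "body": "pong",
        "headers": [{"name": "Content-Type", "value": "text/plain"}]
      }
    }
  ]
}
EOF

# Quick well-formedness check before uploading the file.
python3 -m json.tool /tmp/stock-response-deployment.json > /dev/null && echo "JSON OK"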

Using an API Description File to Create an API Deployment Specification


To create an API deployment specification based on an API description file you upload for an API resource, see
Creating an API Resource with an API Description on page 362.
The API description file provides some initial values for the API deployment specification, which you can modify
and extend when deploying the API resource on an API gateway.

Example API Deployment Specification with Multiple Back Ends


You can create a single API deployment on an API gateway, with an API deployment specification that defines
multiple back-end services.
For example, the following API deployment specification defines a simple Hello World serverless function in Oracle
Functions as one back end, and the National Weather Service API as a second back end.

{
  "routes": [
    {
      "path": "/hello",
      "methods": ["GET"],
      "backend": {
        "type": "ORACLE_FUNCTIONS_BACKEND",
        "functionId": "ocid1.fnfunc.oc1.phx.aaaaaaaaab______xmq"
      }
    },
    {
      "path": "/weather",
      "methods": ["GET"],
      "backend": {
        "type": "HTTP_BACKEND",
        "url": "https://api.weather.gov"
      }
    }
  ]
}

Deploying an API on an API Gateway by Creating an API Deployment


Having created an API gateway, you deploy an API on the API gateway by creating an API deployment. When you
create an API deployment, you include an API deployment specification that defines the API. The API Gateway
service inspects the API deployment specification to confirm that it is valid.
You can use a single API gateway as the front end for multiple back-end services by:


• Creating a single API deployment on the API gateway, with an API deployment specification that defines multiple
back-end services.
• Creating multiple API deployments on the same API gateway, each with an API deployment specification that
defines one (or more) back-end services.

Using the Console to Create an API Deployment from Scratch


To use the Console to create an API deployment, entering the API deployment specification in dialogs in the Console
as you go:
1. In the Console, open the navigation menu. Under Solutions and Platform, go to Developer Services and click
API Gateway.
2. Choose a Compartment you have permission to work in.
3. On the Gateways page, click the name of the API gateway on which you want to deploy the API to show the
Gateway Details page.
4. On the Gateway Details page, select Deployments from the Resources list, and then click Create Deployment.
5. Click From Scratch and in the Basic Information section, specify:
• Name: The name of the new API deployment. Avoid entering confidential information.
• Path Prefix: A path on which to deploy all routes contained in the API deployment specification. For
example:
• /v1
• /v2
• /test/20191122
Note that the deployment path prefix you specify:
• must be preceded by a forward slash ( / )
• can contain multiple forward slashes (provided they are not adjacent), but must not end with a forward slash
• can include alphanumeric uppercase and lowercase characters
• can include the special characters $ - _ . + ! * ' ( ) , % ; : @ & =
• cannot include parameters and wildcards
• Compartment: The compartment in which to create the new API deployment.
6. (Optional) In the API Request Policies section, optionally specify request policy details to provide support for:
• Authentication: Click Add and enter details for an authentication request policy (see Adding Authentication
and Authorization to API Deployments on page 435).
• CORS: Click Add and enter details for a CORS request policy (see Adding CORS support to API
Deployments on page 428).
• Rate Limiting: Click Add and enter details for a rate limiting request policy (see Limiting the Number of
Requests to API Gateway Back Ends on page 425).
7. (Optional) In the API Logging Policies section, optionally specify an execution log level to record information
about processing within the API gateway. See Adding Logging to API Deployments on page 401.
8. (Optional) Click Show Advanced Options and optionally specify:
• Tags: If you have permissions to create a resource, then you also have permissions to apply free-form tags
to that resource. To apply a defined tag, you must have permissions to use the tag namespace. For more
information about tagging, see Resource Tags on page 213. If you are not sure whether to apply tags, then skip
this option (you can apply tags later) or ask your administrator.
9. Click Next to enter details of the routes in the API deployment.


10. In the Route 1 section, specify the first route in the API deployment that maps a path and one or more methods to
a back-end service:
• Path: A path for API calls using the listed methods to the back-end service. Note that the route path you
specify:
• is relative to the deployment path prefix
• must be preceded by a forward slash ( / ), and can be just that single forward slash
• can contain multiple forward slashes (provided they are not adjacent), and can end with a forward slash
• can include alphanumeric uppercase and lowercase characters
• can include the special characters $ - _ . + ! * ' ( ) , % ; : @ & =
• can include parameters and wildcards (see Adding Path Parameters and Wildcards to Route Paths on page
381)
• Methods: One or more methods accepted by the back-end service, separated by commas. For example, GET,
PUT.
• Type: The type of the back-end service, as one of:
• HTTP: For an HTTP back end, you also need to specify a URL, timeout details, and whether to disable
SSL verification (see Adding an HTTP or HTTPS URL as an API Gateway Back End on page 410).
• Oracle Functions: For an Oracle Functions back end, you also need to specify the application and function
(see Adding a Function in Oracle Functions as an API Gateway Back End on page 414).
• Stock Response: For a stock response back end, you also need to specify the HTTP status code, the
content in the body of the response, and one or more HTTP header fields (see Adding Stock Responses as
an API Gateway Back End on page 417).
11. (Optional) Click Another Route to enter details of additional routes.
12. Click Next to review the details you entered for the new API deployment.
13. Click Create to create the new API deployment.
Note that it can take a few minutes to create the new API deployment. While it is being created, the API
deployment is shown with a state of Creating on the Gateway Details page. When it has been created
successfully, the new API deployment is shown with a state of Active.
14. If you have waited more than a few minutes for the API deployment to be shown with an Active state (or if the
API deployment creation operation has failed):
a. Click the name of the API deployment, and click Work Requests to see an overview of the API deployment
creation operation.
b. Click the Create Deployment operation to see more information about the operation (including error
messages, log messages, and the status of associated resources).
c. If the API deployment creation operation has failed and you cannot diagnose the cause of the problem from the
work request information, see Troubleshooting API Gateway on page 478.
15. (Optional) Confirm the API has been deployed successfully by calling it (see Calling an API Deployed on an API
Gateway on page 388).

Using the Console to Create an API Deployment from a JSON File


To use the Console to create an API deployment, uploading the API deployment specification from a JSON file:
1. In the Console, open the navigation menu. Under Solutions and Platform, go to Developer Services and click
API Gateway.
2. Choose a Compartment you have permission to work in.
3. On the Gateways page, click the name of the API gateway on which you want to deploy the API to show the
Gateway Details page.
4. On the Gateway Details page, select Deployments from the Resources list, and then click Create Deployment.
5. Click Upload an existing API.


6. In the Upload Information section, specify:


• Name: The name of the new API deployment. Avoid entering confidential information.
• Path Prefix: A path on which to deploy all routes contained in the API deployment specification. For
example:
• /v1
• /v2
• /test/20191122
Note that the deployment path prefix you specify:
• must be preceded by a forward slash ( / )
• can contain multiple forward slashes (provided they are not adjacent), but must not end with a forward
slash
• can include alphanumeric uppercase and lowercase characters
• can include the special characters $ - _ . + ! * ' ( ) , % ; : @ & =
• cannot include parameters and wildcards
• Compartment: The compartment in which to create the new API deployment.
• Specification: The JSON file containing the API deployment specification, either by dragging and dropping
the file, or by clicking select one. See Creating an API Deployment Specification on page 365.
7. (Optional) Click Show Advanced Options and optionally specify:
• Tags: If you have permissions to create a resource, then you also have permissions to apply free-form tags
to that resource. To apply a defined tag, you must have permissions to use the tag namespace. For more
information about tagging, see Resource Tags on page 213. If you are not sure whether to apply tags, then skip
this option (you can apply tags later) or ask your administrator.
8. Click Next to review the details you entered for the new API deployment.
9. Click Create to create the new API deployment.
Note that it can take a few minutes to create the new API deployment. While it is being created, the API
deployment is shown with a state of Creating on the Gateway Details page. When it has been created
successfully, the new API deployment is shown with a state of Active.
10. If you have waited more than a few minutes for the API deployment to be shown with an Active state (or if the
API deployment creation operation has failed):
a. Click the name of the API deployment, and click Work Requests to see an overview of the API deployment
creation operation.
b. Click the Create Deployment operation to see more information about the operation (including error
messages, log messages, and the status of associated resources).
c. If the API deployment creation operation has failed and you cannot diagnose the cause of the problem from the
work request information, see Troubleshooting API Gateway on page 478.
11. (Optional) Confirm the API has been deployed successfully by calling it (see Calling an API Deployed on an API
Gateway on page 388).

Using the Console to Create an API Deployment from an API Resource


You can create an API deployment from an existing API resource, using the API resource's API description. In this
case, the API description is based on an API description file you've uploaded for the API resource (see Creating an
API Resource with an API Description on page 362). The API description file provides some initial values for the
API deployment specification, which you can modify and extend when creating the API deployment. In particular, a
default route is created for each path and associated method in the API description.
To use the Console to create an API deployment from an existing API resource, using an API deployment
specification derived from an API description file:
1. In the Console, open the navigation menu. Under Solutions and Platform, go to Developer Services and click
API Gateway.
2. Choose a Compartment you have permission to work in.


3. On the APIs page, click the name of the API resource that you want to deploy.
4. (Optional) On the API Details page, select API Deployment Specification from the Resources list to confirm
that a valid API deployment specification has been created for the API resource from an uploaded API description
file. If no API deployment specification is available, see Creating an API Resource with an API Description on
page 362.
5. On the API Details page, click Deploy API Gateway to use the Console dialogs for creating an API deployment.
Some of the initial values for the API deployment specification properties shown in the Console dialogs are
derived from the API description file.
The API Information section shows details about the API resource from which to create the API deployment.
6. In the Gateway section, select the API gateway on which to create the API deployment. If a suitable API gateway
does not already exist, click Create Gateway to create one (see Creating an API Gateway on page 359).
7. In the Basic Information section, specify:
• Name: The name of the new API deployment. Avoid entering confidential information.
• Path Prefix: A path on which to deploy all routes contained in the API deployment specification.
For example:
• /v1
• /v2
• /test/20191122
Note that the deployment path prefix you specify:
• must be preceded by a forward slash ( / )
• can contain multiple forward slashes (provided they are not adjacent), but must not end with a forward slash
• can include alphanumeric uppercase and lowercase characters
• can include the special characters $ - _ . + ! * ' ( ) , % ; : @ & =
• cannot include parameters and wildcards
• Compartment: The compartment in which to create the new API deployment.
8. (Optional) In the API Request Policies section, optionally specify request policy details to provide support for:
• Authentication: Click Add and enter details for an authentication request policy (see Adding Authentication
and Authorization to API Deployments on page 435).
• CORS: Click Add and enter details for a CORS request policy (see Adding CORS support to API
Deployments on page 428).
• Rate Limiting: Click Add and enter details for a rate limiting request policy (see Limiting the Number of
Requests to API Gateway Back Ends on page 425).
9. (Optional) In the API Logging Policies section, optionally specify an execution log level to record information
about processing within the API gateway. See Adding Logging to API Deployments on page 401.
10. (Optional) Click Show Advanced Options and optionally specify:
• Tags: If you have permissions to create a resource, then you also have permissions to apply free-form tags
to that resource. To apply a defined tag, you must have permissions to use the tag namespace. For more
information about tagging, see Resource Tags on page 213. If you are not sure whether to apply tags, then skip
this option (you can apply tags later) or ask your administrator.
11. Click Next to review and enter details of the routes in the API deployment.
By default, a route is created for every path and associated method that is present in the API description. Initially,
each of these default routes is created with a stock response back end. The HTTP status code, the content of the
response body, and the headers are obtained from the details in the API description. If the API description does not
include response information for a particular path and associated method, a default stock response back end is
created for that route with 501 as the HTTP status code.


12. Review each default route in turn, modifying its configuration if necessary to meet your requirements, and adding
request, response, and logging policies:
• Path: A path for API calls using the listed methods to the back-end service. Note that the route path you
specify:
• is relative to the deployment path prefix
• must be preceded by a forward slash ( / ), and can be just that single forward slash
• can contain multiple forward slashes (provided they are not adjacent), and can end with a forward slash
• can include alphanumeric uppercase and lowercase characters
• can include the special characters $ - _ . + ! * ' ( ) , % ; : @ & =
• can include parameters and wildcards (see Adding Path Parameters and Wildcards to Route Paths on page
381)
• Methods: One or more methods accepted by the back-end service, separated by commas. For example, GET,
PUT.
• Type: The type of the back-end service, as one of:
• HTTP: For an HTTP back end, you also need to specify a URL, timeout details, and whether to disable
SSL verification (see Adding an HTTP or HTTPS URL as an API Gateway Back End on page 410).
• Oracle Functions: For an Oracle Functions back end, you also need to specify the application and function
(see Adding a Function in Oracle Functions as an API Gateway Back End on page 414).
• Stock Response: For a stock response back end, you also need to specify the HTTP status code, the
content in the body of the response, and one or more HTTP header fields (see Adding Stock Responses as
an API Gateway Back End on page 417).
• Show Route Request Policies and Show Route Response Policies: Review and optionally update the request policies and response policies that apply to the route. See Adding Request Policies and Response Policies to API Deployment Specifications on page 421.
• Show Route Logging Policies: Review and optionally update the logging policy that applies to the route. See Adding Logging to API Deployments on page 401.
13. (Optional) Click Another Route to enter details of more routes, in addition to those created by default from the
API description.
14. Click Next to review the details you entered for the new API deployment.
15. Click Create to create the new API deployment.
Note that it can take a few minutes to create the new API deployment. While it is being created, the API
deployment is shown with a state of Creating on the Gateway Details page. When it has been created
successfully, the new API deployment is shown with a state of Active.
16. If you have waited more than a few minutes for the API deployment to be shown with an Active state (or if the
API deployment creation operation has failed):
a. Click the name of the API deployment, and click Work Requests to see an overview of the API deployment
creation operation.
b. Click the Create Deployment operation to see more information about the operation (including error
messages, log messages, and the status of associated resources).
c. If the API deployment creation operation has failed and you cannot diagnose the cause of the problem from the
work request information, see Troubleshooting API Gateway on page 478.
17. (Optional) Confirm the API has been deployed successfully by calling it (see Calling an API Deployed on an API
Gateway on page 388).

Using the CLI


To create a new API deployment using the CLI:
1. Configure your client environment to use the CLI (Configuring Your Client Environment to use the CLI for API
Gateway Development on page 359).


2. Open a command prompt and run oci api-gateway deployment create to create the deployment:

oci api-gateway deployment create \
  --compartment-id <compartment-ocid> \
  --display-name <api-name> \
  --gateway-id <gateway-ocid> \
  --path-prefix "/<deployment-path-prefix>" \
  --specification file:///<filename>

where:
• <compartment-ocid> is the OCID of the compartment in which to create the new API deployment.
• <api-name> is the name of the new API deployment. Avoid entering confidential information.
• <gateway-ocid> is the OCID of the existing gateway on which to deploy the API. To find out the API
gateway's OCID, see Listing API Gateways and API Deployments on page 390.
• /<deployment-path-prefix> is a path on which to deploy all routes contained in the API deployment
specification.
Note that the deployment path prefix you specify:
• must be preceded by a forward slash ( / ) in the JSON file
• can contain multiple forward slashes (provided they are not adjacent), but must not end with a forward
slash
• can include alphanumeric uppercase and lowercase characters
• can include the special characters $ - _ . + ! * ' ( ) , % ; : @ & =
• cannot include parameters and wildcards
• <filename> is the API deployment specification, including a path, one or more methods, and a back end
definition. See Creating an API Deployment Specification on page 365.
For example:

oci api-gateway deployment create \
  --compartment-id ocid1.compartment.oc1..aaaaaaaa7______ysq \
  --display-name "Marketing Deployment" \
  --gateway-id ocid1.apigateway.oc1..aaaaaaaab______hga \
  --path-prefix "/marketing" \
  --specification file:///Users/jdoe/work/deployment.json

The response to the command includes:


• The API deployment's OCID.
• The host name on which the API deployment has been created, as a domain name in the format <gateway-
identifier>.apigateway.<region-identifier>.oci.customer-oci.com, where:
• <gateway-identifier> is the string of characters that identifies the API gateway. For example,
lak...sjd (abbreviated for readability).
• <region-identifier> is the identifier of the region in which the API deployment has been created.
See Availability by Region on page 342.
For example, lak...sjd.apigateway.us-phoenix-1.oci.customer-oci.com.
The host name will be the domain name to use when calling an API deployed on the API gateway.
• The lifecycle state (for example, ACTIVE, FAILED).
• The id of the work request to create the API deployment (details of work requests are available for seven days
after completion, cancellation, or failure).
If you want the command to wait to return control until the API deployment is active (or the request has failed),
include either or both the following parameters:
• --wait-for-state ACTIVE
• --wait-for-state FAILED
For example:

oci api-gateway deployment create \
  --compartment-id ocid1.compartment.oc1..aaaaaaaa7______ysq \
  --display-name "Marketing Deployment" \
  --gateway-id ocid1.apigateway.oc1..aaaaaaaab______hga \
  --path-prefix "/marketing" \
  --specification file:///Users/jdoe/work/deployment.json \
  --wait-for-state ACTIVE

Note that you cannot use the API deployment until the work request has successfully created it and the API
deployment is active.
3. (Optional) To see the status of the API deployment, enter:

oci api-gateway deployment get --deployment-id <deployment-ocid>


4. (Optional) To see the status of the work request that is creating the API deployment, enter:

oci api-gateway work-request get --work-request-id <work-request-ocid>


5. (Optional) To view the logs of the work request that is creating the API deployment, enter:

oci api-gateway work-request-log list --work-request-id <work-request-ocid>

6. (Optional) If the work request that is creating the API deployment fails and you want to review the error logs,
enter:

oci api-gateway work-request-error list --work-request-id <work-request-ocid>

For more information about using the CLI, see Command Line Interface (CLI). For a complete list of flags and
options available for CLI commands, see CLI Help.
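
Once the deployment is active, a convenient way to find the invoke URL is to read the deployment's endpoint value and then call one of its routes. The following is a hedged sketch: it assumes the deployment response includes an endpoint field containing the gateway host name plus the deployment path prefix, that /hello is a route defined in your API deployment specification, and that the calling host can reach the gateway (for example, from within the same VCN for a private gateway):

# Print the base URL of the deployment (gateway host name plus path prefix).
ENDPOINT=$(oci api-gateway deployment get --deployment-id <deployment-ocid> \
  --query 'data.endpoint' --raw-output)
echo "$ENDPOINT"

# Smoke-test a route defined in the API deployment specification.
curl -i "$ENDPOINT/hello"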

Using the API


For information about using the API and signing requests, see REST APIs on page 4409 and Security Credentials on
page 181. For information about SDKs, see Software Development Kits and Command Line Interface on page 4262.


Use the CreateDeployment operation to create an API deployment.

Setting Up Custom Domains and TLS Certificates


The API gateways you create with the API Gateway service are TLS-enabled, and therefore require TLS certificates
(formerly SSL certificates) issued by a Certificate Authority to secure them. To specify a particular custom domain
name for an API gateway, you must obtain a custom TLS certificate from a Certificate Authority yourself, rather than
have the API Gateway service obtain a TLS certificate for you.
When you create an API gateway, you specify that the API gateway uses one of the following:
• A TLS certificate that the API Gateway service obtains for you (the default behavior). In this case, the API
Gateway service requests a TLS certificate from an Oracle-designated Certificate Authority.
• A custom TLS certificate that you obtain from your chosen Certificate Authority yourself. Your request to the
Certificate Authority includes the custom domain name. The Certificate Authority returns a file containing the
custom TLS certificate, and typically one or more files containing intermediate certificates forming a certificate
chain from the TLS certificate back to the Certificate Authority.
To enable API gateways to use a custom TLS certificate, you create an API Gateway certificate resource comprising
the custom TLS certificate, any intermediate certificates, and the private key used to generate the TLS certificate. You
then specify that API Gateway certificate resource when creating a new API gateway.
The way in which the TLS certificate is obtained determines how much control you have over the API gateway's
domain name:
• If the API Gateway service obtains a TLS certificate for you, the API Gateway service gives the API gateway
an auto-generated domain name. The auto-generated domain name comprises a random string of characters
followed by .apigateway.<region-identifier>.oci.customer-oci.com. For example,
laksjd.apigateway.us-phoenix-1.oci.customer-oci.com.
• If you obtain a custom TLS certificate yourself, the API Gateway service gives the API gateway the custom
domain name you specified in your request to the Certificate Authority.
The way in which the TLS certificate is obtained also determines responsibility for recording the mapping between
the API gateway's domain name and its public IP address with a DNS provider:
• If the API Gateway service obtains a TLS certificate for you, the API Gateway service takes responsibility for
recording the mapping between the API gateway's auto-generated domain name and its public IP address with the
Oracle Cloud Infrastructure DNS service.
• If you obtain a custom TLS certificate yourself, you are responsible for recording the mapping between the API
gateway's custom domain name and its public IP address with your chosen DNS provider as an A record.
Similarly, the handling of TLS certificate expiry and renewal is determined by how the TLS certificate is originally
obtained:
• If the API Gateway service obtains a TLS certificate for you, the API Gateway service automatically renews the
TLS certificate with the Oracle-designated Certificate Authority before it expires.
• If you obtain a custom TLS certificate yourself, you are responsible for renewing the TLS certificate with your
chosen Certificate Authority before it expires. Having received a new custom TLS certificate from the Certificate
Authority, you create a new API Gateway certificate resource with the details of the new custom TLS certificate.
You then update any API gateways that used the original API Gateway certificate resource to use the new
certificate resource instead.
For some customers, use of custom domains and custom TLS certificates is obligatory. For example, if you are using
Oracle Cloud Infrastructure Government Cloud, you are required to:
• only obtain a TLS certificate from a particular, approved Certificate Authority
• only use a particular, approved DNS provider
For other customers, use of custom domains is likely to be driven by commercial requirements. For
example, typically you'll want to include your company name in the API gateway's domain name, rather
than using the auto-generated random string of characters followed by .apigateway.<region-
identifier>.oci.customer-oci.com.


Note the following:


• You cannot delete an API Gateway certificate resource that is currently being used by an API gateway. To delete
the API Gateway certificate resource, you must first remove it from any API gateway that is using it.
• You can only specify one API Gateway certificate resource for an API gateway.
• You cannot update an API Gateway certificate resource after you have created it.
• You can change the API Gateway certificate resource for an existing API gateway if you originally specified one
when you first created the API gateway.
Important:

For public or production systems, Oracle recommends using custom TLS certificates. Oracle recommends only
using TLS certificates obtained by the API Gateway service for private or non-production systems (for example,
for development and testing).

Setting up a Custom Domain Name and TLS Certificate for an API Gateway
To set up a custom domain name and TLS certificate for an API gateway:

Step 1: Obtain a TLS Certificate from your Chosen Certificate Authority


The precise steps to obtain a TLS certificate will be different, according to the Certificate Authority you choose to
use. At a high level, the steps will probably be somewhat similar to the following, but always refer to the Certificate
Authority documentation for more detailed information:
1. Create a certificate signing request for your chosen Certificate Authority (a hedged openssl sketch appears after these steps).
Typically, you'll include information like the organization name, locality, and country in the certificate signing
request.
You'll also include a common name in the certificate signing request as the fully qualified domain name of the
site you want to secure. The common name usually connects the TLS certificate with a particular domain. This
domain name is used as the custom domain name for API gateways.
When you create a certificate signing request, a public key is added to the request, and a corresponding private
key is also generated and stored in a local file. You'll use this private key when you set up an API Gateway
certificate resource, so make a note of its location. The private key you use to obtain a TLS certificate:
• must be an RSA key
• must be in PEM-encoded X.509 format
• must start with -----BEGIN RSA PRIVATE KEY-----
• must end with -----END RSA PRIVATE KEY-----
• must not be protected by a passphrase
• must have a minimum length of 2048 bits and must not exceed 4096 bits
2. Submit the certificate signing request to the Certificate Authority.
The Certificate Authority returns:
• a file containing the custom TLS certificate for the API gateway itself (known as the 'leaf certificate' or 'end-
entity certificate')
• typically one or more files containing intermediate certificates that form a certificate chain from the
leaf certificate back to the Certificate Authority
You can now use these certificate files to create an API Gateway certificate resource.
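
As an illustration of step 1, the following hedged openssl sketch generates an unencrypted 2048-bit RSA private key and a certificate signing request in one command. The subject fields, file names, and domain name are placeholders, and your Certificate Authority may require different options:

openssl req -new -newkey rsa:2048 -nodes \
  -keyout gateway.key.pem -out gateway.csr.pem \
  -subj "/C=US/ST=California/L=San Francisco/O=Example Corp/CN=api.example.com"

# Depending on your openssl version, the key may be written in PKCS#8 form
# (-----BEGIN PRIVATE KEY-----). If so, convert it to the traditional RSA form
# described above (on OpenSSL 3.x, add -traditional to this command).
openssl rsa -in gateway.key.pem -out gateway-rsa.key.pem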

Step 2: Create an API Gateway Certificate Resource


An API Gateway certificate resource is a named definition of a TLS certificate that you can use when creating or
updating an API gateway using the API Gateway service.


An API Gateway certificate resource comprises:


• a name of your choice for the certificate resource itself
• the custom TLS certificate for the API gateway (the 'leaf certificate' or 'end-entity certificate') returned by the
Certificate Authority
• any intermediate certificates forming a certificate chain from the leaf certificate back to the Certificate Authority
• the private key paired with the public key that was included in the original certificate signing request
Note that you cannot update an API Gateway certificate resource after you have created it.
You can create an API Gateway certificate resource using the Console or the CLI.

Using the Console


To create an API Gateway certificate resource using the Console:
1. In the Console, open the navigation menu. Under Solutions and Platform, go to Developer Services and click
API Gateway.
2. Choose a Compartment you have permission to work in.
3. On the Certificates page, click Add Certificate and specify:
• Name: The name of the new API Gateway certificate resource. Avoid entering confidential information.
• Certificate: The custom TLS certificate returned by the Certificate Authority (the 'leaf certificate' or 'end-entity
certificate'). Drag and drop, select, or paste a valid TLS certificate. Note that the TLS certificate you specify:
• must be in PEM-encoded X.509 format
• must start with -----BEGIN CERTIFICATE-----
• must end with -----END CERTIFICATE-----
• must not exceed 4096 bits in length
• Intermediate Certificates: (Optional) If there are one or more intermediate certificates forming a certificate
chain from the TLS certificate back to the Certificate Authority, include the contents of the intermediate
certificate files in the correct order. The correct order begins with the certificate directly signed by the
Trusted Root Certificate Authority at the bottom, with any additional certificate directly above the Certificate
Authority that signed it. For example:

-----BEGIN CERTIFICATE-----
<PEM_encoded_certificate>
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
<PEM_encoded_certificate>
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
<PEM_encoded_certificate>
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
<PEM_encoded_certificate>
-----END CERTIFICATE-----

If you have concatenated the contents of intermediate certificate files into a single certificate chain file in the
correct order, drag and drop that file, or select that file, or paste that file's content.
If you don't have a concatenated certificate chain file, paste the contents of the individual certificate files, in
the correct order.
The combined length of any intermediate certificates you specify must not exceed 10240 bits.
• Private Key: The private key used to obtain the TLS certificate from the Certificate Authority. Drag and drop,
select, or paste a valid private key in this field. Note that the key you specify:
• must be an RSA key
• must be in PEM-encoded X.509 format
• must start with -----BEGIN RSA PRIVATE KEY-----
• must end with -----END RSA PRIVATE KEY-----
• must not be protected by a passphrase
• must have a minimum length of 2048 bits and must not exceed 4096 bits
4. Click Create to create the API Gateway certificate resource.

Using the CLI


To create an API Gateway certificate resource using the CLI:
1. Configure your client environment to use the CLI (Configuring Your Client Environment to use the CLI for API
Gateway Development on page 359).
2. Open a command prompt and run oci api-gateway certificate create to create the API Gateway
certificate resource:

oci api-gateway certificate create \
  --display-name "<certificate-name>" \
  --compartment-id <compartment-ocid> \
  --certificate-file <certificate-file-path> \
  --intermediate-certificates-file <intermediate-certificates-file-path> \
  --private-key-file <private-key-file-path>

where:
• <certificate-name> is the name of the new API Gateway certificate resource. Avoid entering
confidential information.
• <compartment-ocid> is the OCID of the compartment to which the new API Gateway certificate
resource will belong.
• <certificate-file-path> is the path and name of the file containing the leaf certificate returned by
the Certificate Authority. For example, ~/.certs/cert.pem. Note that the leaf certificate in the file you
specify:
• must be in PEM-encoded X.509 format
• must start with -----BEGIN CERTIFICATE-----
• must end with -----END CERTIFICATE-----
• must not exceed 4096 bits in length
• <intermediate-certificates-file-path> is optionally the path and name of a file containing
one or more intermediate certificates forming a certificate chain from the leaf certificate back to the Certificate
Authority. For example, ~/.certs/int_cert.pem. If the file contains multiple intermediate certificates,
the intermediate certificates must be in the correct order. The correct order ends with the certificate directly
signed by the Trusted Root Certificate Authority, with any additional certificate directly preceding the
Certificate Authority that signed it. For example:

-----BEGIN CERTIFICATE-----
<PEM_encoded_certificate>
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
<PEM_encoded_certificate>
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
<PEM_encoded_certificate>
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
<PEM_encoded_certificate>
-----END CERTIFICATE-----

The combined length of any intermediate certificates you specify must not exceed 10240 bits.
• <private-key-file-path> is the path and name of the file containing the private key used to obtain the
TLS certificate from the Certificate Authority. For example, ~/.certs/key.pem. Note that the private key
in the file you specify:
• must be an RSA key
• must be in PEM-encoded X.509 format
• must start with -----BEGIN RSA PRIVATE KEY-----
• must end with -----END RSA PRIVATE KEY-----
• must not be protected by a passphrase
• must have a minimum length of 2048 bits and must not exceed 4096 bits
For example:

oci api-gateway certificate create \
  --display-name "Acme gateway certificate" \
  --compartment-id ocid1.compartment.oc1..aaaaaaaa7______ysq \
  --certificate-file ~/.certs/cert.pem \
  --private-key-file ~/.certs/key.pem

The response to the command includes:


• The OCID of the new API Gateway certificate resource.
• The lifecycle state (for example, ACTIVE, FAILED).
• The id of the work request to create the API Gateway certificate resource (details of work requests are
available for seven days after completion, cancellation, or failure).
If you want the command to wait to return control until the API Gateway certificate resource is active (or the
request has failed), include either or both the following parameters:
• --wait-for-state ACTIVE
• --wait-for-state FAILED
For example:

oci api-gateway certificate create \
  --display-name "Acme gateway certificate" \
  --compartment-id ocid1.compartment.oc1..aaaaaaaa7______ysq \
  --certificate-file ~/.certs/cert.pem \
  --private-key-file ~/.certs/key.pem \
  --wait-for-state ACTIVE

For more information about using the CLI, see Command Line Interface (CLI). For a complete list of flags and
options available for CLI commands, see Command Line Reference.
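
Whether you use the Console or the CLI, a mismatch between the leaf certificate and the private key can cause the certificate resource or gateway creation to fail. A hedged openssl check that the two files belong together (valid for RSA keys; matching digests indicate a matching pair, and the file names are placeholders):

# The two digests should be identical if the certificate and key are a pair.
openssl x509 -noout -modulus -in ~/.certs/cert.pem | openssl md5
openssl rsa -noout -modulus -in ~/.certs/key.pem | openssl md5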

Step 3: Specify the API Gateway certificate resource when creating an API gateway
To specify the API Gateway certificate resource when creating an API gateway:
1. Follow the instructions in Creating an API Gateway on page 359 to create an API gateway using either the
Console or the CLI.
2. Specify the API Gateway certificate resource as described in the instructions:
• If using the Console: Use the Certificate field.
• If using the CLI: Set the --certificate-id <certificate-ocid> property.
The API Gateway service creates the new API gateway, and installs the custom TLS certificate and private key.
3. Obtain the public IP address of the API gateway.


Step 4: Record the Mapping Between the API Gateway's Custom Domain Name
and Public IP Address With Your Chosen DNS Provider
The precise steps to record the mapping between an API gateway's custom domain name and its public IP address
depend on the DNS provider you choose to use. Typically, you will create a new record (specifically a new A record)
in the DNS provider's configuration. The record associates the domain name you configured when requesting the TLS
certificate from your chosen Certificate Authority with the public IP address that the API Gateway service assigns to
the new API gateway. Refer to your chosen DNS provider's documentation for more detailed information.
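
For example, if the custom domain name is api.example.com and the API Gateway service assigned the public IP address 203.0.113.10 (both placeholders), the A record maps api.example.com to 203.0.113.10. Once the record has propagated, a quick hedged check from any machine with dig installed:

dig +short A api.example.com
# Expected output: the public IP address of the API gateway, for example 203.0.113.10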

Renewing Custom TLS Certificates Used by API Gateways


TLS certificates are valid for a limited period of time (typically one or two years) before they expire. If the
TLS certificate used by an API gateway expires, calls to an API deployed on the API gateway return warning
messages. To avoid warning messages, you have to renew TLS certificates before they expire (sometimes referred to
as 'rotating' certificates).
If the API Gateway service obtained the original TLS certificate for you, the API Gateway service automatically
renews the TLS certificate with the Oracle-designated Certificate Authority before it expires. However, if you
obtained a custom TLS certificate yourself, you are responsible for renewing the custom TLS certificate with your
chosen Certificate Authority before it expires (although the Console does show a warning if the TLS certificate is due
to expire shortly).
When you request the renewal of a TLS certificate, the Certificate Authority returns a completely new
TLS certificate. You cannot simply update the existing API Gateway certificate resource with the new TLS
certificate. Instead, you create a new API Gateway certificate resource, add the new TLS certificate to the new
certificate resource, and then update any API gateways that used the previous certificate resource to use the new
certificate resource.
To renew the custom TLS certificate used by an API gateway:
1. Submit a TLS certificate renewal request to the Certificate Authority from which you originally obtained the TLS
certificate. The exact steps to renew a TLS certificate depend on the Certificate Authority you use, so always refer
to the Certificate Authority documentation for more detailed information.
When you create the certificate renewal request, a public key is added to the request, and a corresponding private
key is also generated and stored in a local file. You'll use this private key when you set up the new API Gateway
certificate resource, so make a note of its location (see the OpenSSL sketch after this list).
The Certificate Authority returns a file containing the new custom TLS certificate, and typically one or more files
containing intermediate certificates forming a certificate chain from the TLS certificate back to the Certificate
Authority.
2. Create a new API Gateway certificate resource and add to it the new TLS certificate, any intermediate certificates,
and the new private key (see Step 2: Create an API Gateway Certificate Resource on page 376).
3. Update all the existing API gateways that used the original API Gateway certificate resource to use the new API
Gateway certificate resource (see Updating API Gateways and API Deployments on page 391).
4. When there are no API gateways still using the original API Gateway certificate resource, delete the original
certificate resource:
• If using the Console: On the Certificates page, click the Actions icon (three dots) beside the original API
Gateway certificate resource you want to delete, and then click Delete.
• If using the CLI: Use the following command to delete the original API Gateway certificate resource:

oci api-gateway certificate delete --certificate-id <certificate-ocid>

Note that you cannot delete an API Gateway certificate resource if there are API gateways still using it.
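
As a hypothetical illustration of step 1 above, many Certificate Authorities accept a certificate signing request
(CSR) generated with OpenSSL. A command along these lines creates a new private key and CSR; the domain name and
file names are placeholders, and your Certificate Authority might require different options:

openssl req -new -newkey rsa:2048 -nodes -keyout ~/.certs/new-key.pem -out ~/.certs/renewal.csr -subj "/CN=api.example.com"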

Using the API


For information about using the API and signing requests, see REST APIs on page 4409 and Security Credentials on
page 181. For information about SDKs, see Software Development Kits and Command Line Interface on page 4262.


Use the:
• CreateCertificate operation to create a new API Gateway certificate resource
• DeleteCertificate operation to delete an existing API Gateway certificate resource
• UpdateCertificate operation to change the details of an existing API Gateway certificate resource
• GetCertificate operation to see details of an existing API Gateway certificate resource
• ListCertificates operation to list all the certificates in a compartment
• ChangeCertificateCompartment operation to change the compartment to which an existing API Gateway
certificate resource belongs
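
For example, assuming your CLI version provides a certificate list command analogous to the certificate create and
delete commands shown earlier, the CLI equivalent of the ListCertificates operation looks similar to:

oci api-gateway certificate list --compartment-id ocid1.compartment.oc1..aaaaaaaa7______ysq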

Adding Path Parameters and Wildcards to Route Paths


You might want different API requests routed to the same backend, even when the request URLs vary to a greater or
lesser extent from the route path definition in the API deployment specification.
When defining a route path in an API deployment specification, you can include a path parameter to exactly replace
an individual segment of the path. If necessary, you can include multiple path parameters in the route path. You can
also append the asterisk (*) wildcard to a path parameter in the route path to provide even more flexibility when
identifying requests to send to the same backend.
The examples in this topic assume you are adding route paths to an API deployment specification in a JSON file.
Note the examples also apply when you're defining an API deployment specification using dialogs in the Console.

Example: Adding Path Parameters to Match Similar URLs


You might have a requirement to route requests with similar URLs to the same backend. For example:
• https://<gateway-hostname>/marketing/hello/us/index.html
• https://<gateway-hostname>/marketing/hello/apac/index.html
• https://<gateway-hostname>/marketing/hello/emea/index.html
To enable calls to these similar URLs to resolve to the same backend, add a path parameter name enclosed within
curly brackets as the segment of the route path that will vary between API calls. For example, {region} as shown
below:

{
"routes": [
{
"path": "/hello/{region}/index.html",
"methods": ["GET"],
"backend": {
"type": "ORACLE_FUNCTIONS_BACKEND",
"functionId": "ocid1.fnfunc.oc1.phx.aaaaaaaaab______xmq"
}
}
]
}

Note that path parameter names:


• Can include alphanumeric uppercase and lowercase characters.
• Can include the underscore _ special character.
• Cannot include other special characters. In particular, note that you cannot include spaces, forward slashes, and
curly brackets in path parameter names.
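
Because a route path can include multiple path parameters, a definition along the following lines is also valid.
This is a sketch using hypothetical {region} and {language} path parameters:

{
"routes": [
{
"path": "/hello/{region}/{language}/index.html",
"methods": ["GET"],
"backend": {
"type": "ORACLE_FUNCTIONS_BACKEND",
"functionId": "ocid1.fnfunc.oc1.phx.aaaaaaaaab______xmq"
}
}
]
}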

Example: Adding Path Parameters with a Wildcard to Match Dissimilar URLs


You might have a requirement to route requests to the same backend, even though the request URLs are significantly
different. For example:
• https://<gateway-hostname>/marketing/hello/us/index.html
• https://<gateway-hostname>/marketing/hello/apac/introduction/
• https://<gateway-hostname>/marketing/hello/emea/welcome.html
• https://<gateway-hostname>/marketing/hello/introduction
• https://<gateway-hostname>/marketing/hello/top.html
• https://<gateway-hostname>/marketing/hello/
To enable calls to these significantly different URLs to resolve to the same backend:
• add a path parameter name enclosed within curly brackets as the first segment of the route path that will differ
between the various API calls
• add the asterisk (*) wildcard to the end of the path parameter name
For example, {generic_welcome*} as shown below:

{
"routes": [
{
"path": "/hello/{generic_welcome*}",
"methods": ["GET"],
"backend": {
"type": "ORACLE_FUNCTIONS_BACKEND",
"functionId": "ocid1.fnfunc.oc1.phx.aaaaaaaaab______xmq"
}
}
]
}

Note that a path parameter name with an asterisk wildcard will match:
• no path segment
• a single path segment
• multiple path segments

Adding Context Variables to Policies and HTTP Back End Definitions


Calls to APIs deployed on an API gateway typically include parameters that you'll want to use when defining the
following in API deployment specifications:
• request policies and response policies
• HTTP and HTTPS back ends
To enable you to use parameters included in API calls, the API Gateway service saves the values of the following
types of parameter in temporary 'context tables':
• Path parameters you define in the API deployment specification (see Adding Path Parameters and Wildcards to
Route Paths on page 381) are saved in records in the request.path table.
• Query parameters included in the call to the API are saved in records in the request.query table.
• Header parameters included in the call to the API are saved in records in the request.headers table.
• Authentication parameters returned by an authorizer function or contained in a JSON Web Token (JWT) are
saved in records in the request.auth table (see Using Authorizer Functions to Add Authentication and
Authorization to API Deployments on page 435 and Using JSON Web Tokens (JWTs) to Add Authentication
and Authorization to API Deployments on page 444 respectively).
Each record in a context table is identified by a unique key.
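
As an illustration, using hypothetical values consistent with the examples later in this topic, a request to
https://<gateway-hostname>/marketing/weather/west?state=california that includes an X-Api-Key header would produce
context table records such as:

request.path[region] = west
request.query[state] = california
request.headers[X-Api-Key] = abc123def456fhi789
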
When defining request and response policies, and HTTP and HTTPS back ends, you can reference the value of
a parameter in a context table using a 'context variable'. A context variable has the format <context-table-
name>[<key>] where:
• <context-table-name> is one of request.path, request.query, request.headers, or
request.auth


• <key> is one of:


• a path parameter name defined in the API deployment specification
• a query parameter name included in the request to the API
• a header name included in the request to the API
• an authentication parameter name returned by an authorizer function or contained in a JWT token
If you want to include the context variable within a string in the API deployment specification (for example, in the url
property of an HTTP back end definition), use the format ${<context-table-name>[<key>]}.
For example, the request.path[region] context variable in the example below returns the value of the record
identified by the region key in the request.path context table.

{
"routes": [
{
"path": "/weather/{region}",
"methods": ["GET"],
"backend": {
"type": "HTTP_BACKEND",
"url": "https://api.weather.gov/${request.path[region]}"
}
}
]
}

Note the following:


• A single record is created in the context table for each discrete parameter in an HTTP request. If the HTTP request
includes two (or more) parameters of the same type and with the same name, the value of each parameter with that
name is saved in the same record in the context table and identified by the same key. However, only the first value
in the context table record can be substituted in place of a context variable. To have that first value substituted,
add the context variable to the API deployment specification in the format ${<context-table-name>[<key>]}.
• If a parameter value includes special characters that have been encoded, the encoding is preserved when the value
is saved in the context table. When the value is substituted for a context variable, the encoded value is substituted
in place of the context variable. For example, if San José is included in a query parameter as San+Jos%C3%A9,
the encoded form (San+Jos%C3%A9) is what will be substituted in place of the context variable for that query parameter.
• If a context variable key does not exist in the specified context table, an empty string is substituted in place of the
context variable.
• If a context variable key contains a dot, the dot is treated as any other character. It is not treated as an indicator of
a parent-child relationship between the strings on either side of it.
• If a path parameter includes a wildcard (for example, generic_welcome*), the path parameter name without the
wildcard is used as the key (see the sketch just before the Examples section below).
• You can include a context variable as a path segment in the URL property of an HTTP back end definition, but not
as a query parameter. For example:
• You can use the request.query[state] context variable as a path segment in the URL property, as
shown in the following valid HTTP back end definition:

{
"path": "/weather/{region}",
"methods": ["GET"],
"backend": {
"type": "HTTP_BACKEND",
"url": "https://api.weather.gov/${request.path[region]}/
${request.query[state]}"
}
}
• You cannot use the request.query[state] context variable as a query parameter in the URL property,
as shown in the following invalid HTTP back end definition:

{
"path": "/weather/{region}",
"methods": ["GET"],
"backend": {
"type": "HTTP_BACKEND",
"url": "https://api.weather.gov/${request.path[region]}?state=
${request.query[state]}"
}
}
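
Relatedly, when a route path includes a wildcard path parameter such as {generic_welcome*}, the corresponding
context variable is referenced using the parameter name without the asterisk. The following is a sketch only; the
back end URL is a placeholder:

{
"path": "/hello/{generic_welcome*}",
"methods": ["GET"],
"backend": {
"type": "HTTP_BACKEND",
"url": "https://example.com/${request.path[generic_welcome]}"
}
}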

Examples
The examples in this section assume the following API deployment definition and basic API deployment specification
in a JSON file:

{
"displayName": "Marketing Deployment",
"gatewayId": "ocid1.apigateway.oc1..aaaaaaaab______hga",
"compartmentId": "ocid1.compartment.oc1..aaaaaaaa7______ysq",
"pathPrefix": "/marketing",
"specification": {
"routes": [
{
"path": "/weather",
"methods": ["GET"],
"backend": {
"type": "HTTP_BACKEND",
"url": "https://api.weather.gov"
}
}
]
},
"freeformTags": {},
"definedTags": {}
}

Note the examples also apply when you're defining an API deployment specification using dialogs in the Console.

Example 1: Path parameter in a definition


You can define a path parameter in the API deployment specification, and then use it elsewhere in the API
deployment specification as a context variable.
This example creates a path parameter, region, and uses it in a context variable request.path[region] in the
HTTP back end definition.

{
"path": "/weather/{region}",
"methods": ["GET"],
"backend": {
"type": "HTTP_BACKEND",
"url": "https://api.weather.gov/${request.path[region]}"
}
}


In this example, a request like https://<gateway-hostname>/marketing/weather/west resolves to
https://api.weather.gov/west.

Example 2: Different types of context variable in the same definition


You can include different types of context variable in the same definition in the API deployment specification.
This example uses the following in the HTTP back end definition:
• a path parameter context variable, request.path[region]
• a query parameter context variable, request.query[state]

{
"path": "/weather/{region}",
"methods": ["GET"],
"backend": {
"type": "HTTP_BACKEND",
"url": "https://api.weather.gov/${request.path[region]}/
${request.query[state]}"
}
}

In this example, a request like https://<gateway-hostname>/marketing/weather/west?state=california
resolves to https://api.weather.gov/west/california.

Example 3: Multiple context variables of the same type in the same definition
You can include the same type of context variable multiple times in the same definition.
This example uses the following in the HTTP back end definition:
• a path parameter context variable, request.path[region]
• two query parameter context variables, request.query[state] and request.query[city]

{
"path": "/weather/{region}",
"methods": ["GET"],
"backend": {
"type": "HTTP_BACKEND",
"url": "https://api.weather.gov/${request.path[region]}/
${request.query[state]}/${request.query[city]}"
}
}

In this example, a request like https://<gateway-hostname>/marketing/weather/west?state=california&city=fremont
resolves to https://api.weather.gov/west/california/fremont.

Example 4: Multiple values for the same parameter


It is often valid for an HTTP request to include the same query parameter multiple times. The API Gateway service
saves the value of each parameter with the same name to the same record in the context table. However, in the API
deployment specification, it's typically the case that only a single value can be substituted for a context variable. In
these situations, you can indicate that only the first value recorded in the context table for a key is substituted in place
of a context variable by enclosing the context variable within ${...}.
For example, a valid request like "https://<gateway-hostname>/marketing/weather/west?
state=california&city=fremont&city=belmont" has two occurrences of the city query parameter.
On receipt of the HTTP request, the API Gateway service writes both values of the city query parameter
(fremont and belmont) to the same record in the request.query table. When the definition of an HTTP back end
includes ${request.query[city]}, only the first value in the record is substituted in place of the context
variable.
This example uses the following in the HTTP back end definition:
• a path parameter context variable, request.path[region]
• two query parameter context variables, request.query[state] and request.query[city]

{
"path": "/weather/{region}",
"methods": ["GET"],
"backend": {
"type": "HTTP_BACKEND",
"url": "https://api.weather.gov/${request.path[region]}/
${request.query[state]}/${request.query[city]}"
}
}

In this example, a request like https://<gateway-hostname>/marketing/weather/west?state=california&city=fremont&city=belmont
resolves to https://api.weather.gov/west/california/fremont. Note that only fremont (as the first value in the
request.query context table record identified by the city key) is substituted for the request.query[city] context variable.

Example 5: Parameter value includes encoded special characters


If an HTTP request includes special characters (for example, the character é, the space character) that have been
encoded, the value is stored in the context table in its encoded form. When the value from the context table is
substituted for a context variable, the encoding is preserved.
This example uses the following in the HTTP back end definition:
• a path parameter context variable, request.path[region]
• two query parameter context variables, request.query[state] and request.query[city]

{
"path": "/weather/{region}",
"methods": ["GET"],
"backend": {
"type": "HTTP_BACKEND",
"url": "https://api.weather.gov/${request.path[region]}/
${request.query[state]}/${request.query[city]}"
}
}

In this example, a request like https://<gateway-hostname>/marketing/weather/west?state=california&city=San+Jos%C3%A9
resolves to https://api.weather.gov/west/california/San+Jos%C3%A9.

Example 6: Header parameters in a definition


You can include values passed in the headers of a request as context variables in a definition. If the request includes a
header, the value of the header is stored in the request.headers table, and the name of the header is used as the
key.
This example uses the following in the HTTP back end definition:
• a path parameter context variable, request.path[region]
• a header parameter context variable, request.headers[X-Api-Key]

{
"path": "/weather/{region}",
"methods": ["GET"],
"backend": {

Oracle Cloud Infrastructure User Guide 386


API Gateway

"type": "HTTP_BACKEND",
"url": "https://api.weather.gov/${request.path[region]}/
${request.headers[X-Api-Key]}"
}
}

In this example, a request like https://<gateway-hostname>/marketing/weather/west includes an
X-Api-Key header with the value abc123def456fhi789. The request resolves to
https://api.weather.gov/west/abc123def456fhi789.

Example 7: Authentication parameters in a definition


You can include values returned from an authorizer function or contained in a JWT token as context variables in a
definition:
• An authorizer function validates the token passed by an API client when calling the API Gateway service.
The authorizer function returns a response that includes information such as the validity of the authorization,
information about the end user, access scope, and a number of claims in key-value pairs. Depending on the
authorization token, the information might be contained within the token, or the authorizer function might invoke
end-points provided by the authorization server to validate the token and to retrieve information about the end
user. When the API Gateway service receives a key-value pair from the authorizer function, it saves the key-value
pair in the request.auth table as an authentication parameter.
• A JWT token can optionally include a custom claim named scope, comprising a key-value pair. When the
JWT token has been validated, the API Gateway service saves the key-value pair in the request.auth table as
an authentication parameter.
This example uses the key-value pair returned by an authorizer function as the authentication parameter context
variable request.auth[region] in the HTTP back end definition.

{
"requestPolicies": {
"authentication": {
"type": "CUSTOM_AUTHENTICATION",
"isAnonymousAccessAllowed": false,
"functionId": "ocid1.fnfunc.oc1.phx.aaaaaaaaac2______kg6fq",
"tokenHeader": "Authorization"
}
},
"routes": [
{
"path": "/weather",
"methods": ["GET"],
"backend": {
"type": "HTTP_BACKEND",
"url": "https://api.weather.gov/${request.auth[region]}"
},
"requestPolicies": {
"authorization": {
"type": "ANY_OF",
"allowedScope": [ "weatherwatcher" ]
}
}
}
]
}

Assume an authorizer function ocid1.fnfunc.oc1.phx.aaaaaaaaac2______kg6fq validates the token
passed by an API client in a call to the API Gateway service. The authorizer function returns a response to the API
Gateway service that includes the region associated with the end user as a key-value pair, and also the authenticated
end user's access scope. When the API Gateway service receives the key-value pair, it saves the key-value pair in the
request.auth table as an authentication parameter.


In this example, a request like https://<gateway-hostname>/marketing/weather is made by an end
user jdoe using an API client. The authorizer function validates the token passed by the API client in the request, and
also determines that jdoe has the "weatherwatcher" access scope. The authorizer function identifies that jdoe
is associated with the west region. The authorizer function returns jdoe's access scope to the API Gateway service,
along with the region associated with jdoe. The API Gateway service saves the region associated with jdoe as an
authentication parameter. The HTTP back end definition specifies that end users with the "weatherwatcher"
access scope are allowed to access the HTTP back end. The API Gateway service uses the value of the authentication
parameter context variable request.auth[region] in the request. The request resolves to https://
api.weather.gov/west.

Calling an API Deployed on an API Gateway


Having deployed an API on an API gateway, you can call the deployed API.
Tip:

When assembling the curl command described in this topic, you can
quickly get the value of the https://<gateway-hostname>/
<deployment-path-prefix> string as the API deployment's endpoint
using:
• The Console, by going to the Gateway Details page and clicking Copy
beside the endpoint of the API deployment.
• The API, by using the GetDeployments operation.
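
For example, assuming your CLI version returns the deployment's endpoint attribute, a command along these lines
prints the endpoint directly (the deployment OCID is a placeholder):

oci api-gateway deployment get --deployment-id <deployment-ocid> --query "data.endpoint" --raw-output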


Using curl
To call an API deployed on an API gateway:
1. Open a terminal window and type a cURL command similar to the following that is appropriate for the deployed
API:

curl -k -X <method> https://<gateway-hostname>/<deployment-path-prefix>/<api-route-path>

where:
• <method> is a valid method for the deployed API (for example, GET, PUT).
• <gateway-hostname> is an automatically generated domain name in the format <gateway-
identifier>.apigateway.<region-identifier>.oci.customer-oci.com, where:
• <gateway-identifier> is the string of characters that identifies the API gateway. For example,
lak...sjd (abbreviated for readability).
• <region-identifier> is the identifier of the region in which the API gateway has been created. See
Availability by Region on page 342.
For example, lak...sjd.apigateway.us-phoenix-1.oci.customer-oci.com.
Use the Console or the API to find out the domain name to use as the value of <gateway-hostname>.
• /<deployment-path-prefix> is the prefix added to the path of every route in the API deployment.
Note that the deployment path prefix in the request:
• can contain multiple forward slashes (provided they are not adjacent)
• can include alphanumeric uppercase and lowercase characters
• can include the special characters $ - _ . + ! * ' ( ) , % ; : @ & =
• cannot include parameters and wildcards
• must match exactly the deployment path prefix defined for the API deployment (see Deploying an API on
an API Gateway by Creating an API Deployment on page 367)
Use the Console or the API to find out the path prefix to use as the value of <deployment-path-
prefix>.
• /<api-route-path> is the path to a particular route defined in the API deployment specification. Note
that the route path in the request:
• is relative to the deployment path prefix
• can be a single forward slash
• can contain multiple forward slashes (provided they are not adjacent), and can end with a forward slash
• can include alphanumeric uppercase and lowercase characters
• can include the special characters $ - _ . + ! * ' ( ) , % ; : @ & =
• need not match exactly the route path defined in the API deployment specification, provided the route path
in the API deployment specification includes a path parameter with or without a wildcard (see Adding Path
Parameters and Wildcards to Route Paths on page 381)
Use the Console or the API to find out the path to use as the value of <api-route-path>.
For example:

curl -k -X GET https://lak...sjd.apigateway.us-phoenix-1.oci.customer-oci.com/marketing/hello/

If the API gateway back end is a serverless function that accepts parameters, include those parameters in the call
to the API. For example:

curl -k -X POST https://lak...sjd.apigateway.us-phoenix-1.oci.customer-oci.com/marketing/hello/ -d "name=john"


Listing API Gateways and API Deployments


Having created API gateways, and deployed APIs on API gateways by creating API deployments, you might need
to list the existing API gateways or API deployments. For example, you might want to see whether there are any
API gateways that are no longer required, quickly locate an API gateway by its OCID, or obtain the OCID of an API
deployment.

Using the Console


To list API gateways or API deployments using the Console:
1. In the Console, open the navigation menu. Under Solutions and Platform, go to Developer Services and click
API Gateway.
2. Choose a Compartment you have permission to work in.
3. To see a list of all existing API gateways in the current region and compartment (including their OCIDs), use the
Gateways page.
4. To see more detail about an individual API gateway, click the name of the API gateway on the Gateways page to
show the Gateway Details page.
5. To see a list of API deployments on an API gateway (including their endpoint, state, and OCID), click the name of
the API gateway on the Gateways page and select Deployments from the Resources list.

Using the CLI


To list the API gateways and API deployments in a compartment using the CLI:
1. Configure your client environment to use the CLI (Configuring Your Client Environment to use the CLI for API
Gateway Development on page 359).
2. To list all the API gateways in a compartment, open a command prompt and run oci api-gateway
gateway list to list the API gateways:

oci api-gateway gateway list --compartment-id <compartment-ocid>

where:
• <compartment-ocid> is the OCID of the compartment containing the API gateway.
For example:

oci api-gateway gateway list --compartment-id ocid1.compartment.oc1..aaaaaaaa7______ysq

If you want to list just those API gateways with a status of Active, include the --lifecycle-state ACTIVE
parameter in the request. For example:

oci api-gateway gateway list --compartment-id ocid1.compartment.oc1..aaaaaaaa7______ysq --lifecycle-state ACTIVE


3. To list all the API deployments in a compartment, open a command prompt and run oci api-gateway
deployment list to list the API deployments:

oci api-gateway deployment list --compartment-id <compartment-ocid>

where:
• <compartment-ocid> is the OCID of the compartment containing the API deployments.
For example:

oci api-gateway deployment list --compartment-id ocid1.compartment.oc1..aaaaaaaa7______ysq

If you want to list just those API deployments with a status of Active, include the --lifecycle-state
ACTIVE parameter in the request. For example:

oci api-gateway deployment list --compartment-id ocid1.compartment.oc1..aaaaaaaa7______ysq --lifecycle-state ACTIVE

If you want to list all the API deployments on a particular API gateway in a compartment, include the --
gateway-id parameter in the request and specify the API gateway's OCID. For example:

oci api-gateway deployment list --compartment-id ocid1.compartment.oc1..aaaaaaaa7______ysq
--gateway-id ocid1.apigateway.oc1..aaaaaaaab______hga

For more information about using the CLI, see Command Line Interface (CLI). For a complete list of flags and
options available for CLI commands, see CLI Help.

Using the API


For information about using the API and signing requests, see REST APIs on page 4409 and Security Credentials on
page 181. For information about SDKs, see Software Development Kits and Command Line Interface on page 4262.
Use the:
• ListGateways operation to list API gateways
• ListDeployments operation to list API deployments

Updating API Gateways and API Deployments


Having created an API gateway, and deployed an API on the API gateway by creating an API deployment, you might
decide to change either or both. For example, to change the API gateway's name or the tags applied to it, or to change
an API deployment specification to add additional back ends to the API deployment.
Note that there are some properties of API gateways and API deployments for which you cannot change the original
values.
You can update API gateways and API deployments using the Console, the CLI, and the API.
You can update an API deployment specification by:
• using dialogs in the Console
• editing a JSON file

Using the Console


To update an existing API gateway or an API deployment using the Console:
1. In the Console, open the navigation menu. Under Solutions and Platform, go to Developer Services and click
API Gateway.


2. Choose a Compartment you have permission to work in.


3. On the Gateways page, click the name of the API gateway that you want to update (or that contains the
API deployment you want to update) to show the Gateway Details page.
4. To update an API gateway:
• Click Edit to change the API gateway's name. If you have set up a custom TLS certificate (and associated
custom domain name), you can also change the API Gateway certificate resource used by the API gateway
(see Setting Up Custom Domains and TLS Certificates on page 375). Avoid entering confidential
information.
• Click Move Resource to move the API gateway to a different compartment.
• Click Add Tag(s) and View Tag(s) to change and view the tags applied to the API gateway.
5. To update an API deployment, on the Gateway Details page, select Deployments from the Resources list and
click the Actions icon (three dots) beside the API deployment you want to update:
• Click Edit to change the API deployment's name, or to replace the original API deployment specification. You
can change the original API deployment specification by selecting one of two options:
• From Scratch: Select to change API deployment specification properties using dialogs in the Console.
• Upload an existing API: Select to change API deployment specification properties by uploading a
replacement JSON file.
For more information about defining API deployment specifications, see Creating an API Deployment
Specification on page 365. Avoid entering confidential information.
• Click Move Resource to move the API deployment to a different compartment.
• Click Add Tag(s) and View Tag(s) to change and view the tags applied to the API deployment.

Using the CLI


To update existing API gateways and API deployments using the CLI:
1. Configure your client environment to use the CLI (Configuring Your Client Environment to use the CLI for API
Gateway Development on page 359).
2. To update an existing API gateway:
a. Open a command prompt and run oci api-gateway gateway update to update the API gateway:

oci api-gateway gateway update --gateway-id <gateway-ocid> --<property-to-update> <property-value>

where:
• <gateway-ocid> is the OCID of the API gateway to update. To find out the API gateway's OCID, see
Listing API Gateways and API Deployments on page 390.
• <property-to-update> is the property to update. Note that you can only change the values for
display-name, freeform-tags, and defined-tags (and certificate-id, if this was
originally set for the API gateway). All other values must be identical to values in the original gateway
definition.
• <property-value> is the new value of the property you want to change.
For example:

oci api-gateway gateway update --gateway-id ocid1.apigateway.oc1..aaaaaaaab______hga
--display-name "Hello World Gateway - version 2"

The response to the command includes:


• The lifecycle state (for example, ACTIVE, FAILED).
• The id of the work request to update the API gateway (details of work requests are available for seven days
after completion, cancellation, or failure).
If you want the command to wait to return control until the API gateway is active (or the request has failed),
include either or both the following parameters:
• --wait-for-state ACTIVE
• --wait-for-state FAILED
For example:

oci api-gateway gateway update --gateway-id ocid1.apigateway.oc1..aaaaaaaab______hga
--display-name "Hello World Gateway - version 2" --wait-for-state ACTIVE
b. (Optional) To see the status of the work request that is updating the API gateway, enter:

oci api-gateway work-request get --work-request-id <work-request-ocid>


c. (Optional) To view the logs of the work request that is updating the API gateway, enter:

oci api-gateway work-request-log list --work-request-id <work-request-ocid>
d. (Optional) If the work request that is updating the API gateway fails and you want to review the error logs,
enter:

oci api-gateway work-request-error list --work-request-id <work-request-ocid>


e. (Optional) To verify that the API gateway has been updated, enter the following command and confirm that
the API gateway's properties are as you expect:

oci api-gateway gateway get --gateway-id <gateway-ocid>


3. To update an existing API deployment:
a. Open a command prompt and run oci api-gateway deployment update to update the API
deployment:

oci api-gateway deployment update --deployment-id <deployment-ocid> --specification file:///<filename>

where:
• <deployment-ocid> is the OCID of the API deployment to update. To find out the API deployment's
OCID, see Listing API Gateways and API Deployments on page 390.
• <filename> is the relative location and filename of the JSON file containing the replacement
API deployment specification. For example, replacement-specification.json. For more
information about defining API deployment specifications, see Creating an API Deployment Specification
on page 365.
For example:

oci api-gateway deployment update --deployment-id ocid1.apideployment.oc1..aaaaaaaaab______pwa
--specification file:///Users/jdoe/work/replacement-specification.json

The response to the command includes:


• The lifecycle state (for example, ACTIVE, FAILED).
• The id of the work request to update the API deployment (details of work requests are available for seven
days after completion, cancellation, or failure).
If you want the command to wait to return control until the API deployment is active (or the request has
failed), include either or both the following parameters:
• --wait-for-state ACTIVE
• --wait-for-state FAILED
For example:

oci api-gateway deployment update --deployment-id ocid1.apideployment.oc1..aaaaaaaaab______pwa
--specification file:///Users/jdoe/work/replacement-specification.json --wait-for-state ACTIVE
b. (Optional) To see the status of the work request that is updating the API deployment, enter:

oci api-gateway work-request get --work-request-id <work-request-ocid>


c. (Optional) To view the logs of the work request that is updating the API deployment, enter:

oci api-gateway work-request-log list --work-request-id <work-request-ocid>
d. (Optional) If the work request that is updating the API deployment fails and you want to review the error logs,
enter:

oci api-gateway work-request-error list --work-request-id <work-request-ocid>


e. (Optional) To verify that the API deployment has been updated, enter the following command and confirm
that the API deployment's properties are as you expect:

oci api-gateway deployment get --deployment-id <deployment-ocid>

For more information about using the CLI, see Command Line Interface (CLI). For a complete list of flags and
options available for CLI commands, see CLI Help.

Using the API


For information about using the API and signing requests, see REST APIs on page 4409 and Security Credentials on
page 181. For information about SDKs, see Software Development Kits and Command Line Interface on page 4262.
Use the:
• UpdateGateway operation to update an API gateway
• UpdateDeployment operation to update an API deployment


Moving API Gateways and API Deployments Between Compartments


Having created an API gateway, and deployed an API on the API gateway by creating an API deployment, you might
decide to move either or both from one compartment to another. An API gateway and the individual API deployments
deployed on it can be in different compartments.
Note that calls to an API deployment will be disrupted while the API deployment (or the API gateway on which it
is deployed) is being moved to a different compartment. Do not call the API deployment until the move operation is
complete.

Using the Console


To move an API gateway or an API deployment to a different compartment using the Console:
1. In the Console, open the navigation menu. Under Solutions and Platform, go to Developer Services and click
API Gateway.
2. Choose a Compartment you have permission to work in.
3. On the Gateways page, click the name of the API gateway that you want to move (or that contains the
API deployment you want to move) to show the Gateway Details page.
4. On the Gateway Details page:
• To move the API gateway, click Move Resource, select the compartment to which you want to move
the API gateway, and click Move Resource to start the process of moving the API gateway. Note that
API deployments on the API gateway are not moved to the new compartment.
• To move an API deployment, select Deployments from the Resources list, click Move Resource, select the
compartment to which you want to move the API deployment, and click Move Resource to start the process of
moving the API deployment.
Do not call an API deployment while the API deployment (or the API gateway on which it is deployed) is in the
process of being moved to the new compartment.
5. On the Gateway Details page, select Work Requests from the Resources list and confirm the move operation is
complete.
When the move operation is complete, resume calls to the API deployment.

Using the CLI


To move API gateways and API deployments to a different compartment using the CLI:
1. Configure your client environment to use the CLI (Configuring Your Client Environment to use the CLI for API
Gateway Development on page 359).


2. To move an API gateway to a different compartment:


a. Open a command prompt and run oci api-gateway gateway change-compartment to move the
API gateway:

oci api-gateway gateway change-compartment --gateway-id <gateway-ocid> --compartment-id <compartment-ocid>

where:
• <gateway-ocid> is the OCID of the API gateway to move. To find out the API gateway's OCID, see
Listing API Gateways and API Deployments on page 390.
• <compartment-ocid> is the OCID of the compartment to which to move the API gateway.
For example:

oci api-gateway gateway change-compartment --gateway-id ocid1.apigateway.oc1..aaaaaaaab______hga
--compartment-id ocid1.compartment.oc1..aaaaaaaa7______ysq

Note that API deployments on the API gateway are not moved.
The response to the command includes:
• The lifecycle state (for example, ACTIVE, FAILED).
• The id of the work request to move the API gateway (details of work requests are available for seven days
after completion, cancellation, or failure).
If you want the command to wait to return control until the API gateway is active (or the request has failed),
include either or both the following parameters:
• --wait-for-state ACTIVE
• --wait-for-state FAILED
For example:

oci api-gateway gateway change-compartment --gateway-id ocid1.apigateway.oc1..aaaaaaaab______hga
--compartment-id ocid1.compartment.oc1..aaaaaaaa7______ysq --wait-for-state ACTIVE
b. (Optional) To see the status of the work request that is moving the API gateway, enter:

oci api-gateway work-request get --work-request-id <work-request-ocid>


c. (Optional) To view the logs of the work request that is moving the API gateway, enter:

oci api-gateway work-request-log list --work-request-id <work-request-ocid>
d. (Optional) If the work request that is moving the API gateway fails and you want to review the error logs,
enter:

oci api-gateway work-request-error list --work-request-id <work-request-ocid>


e. (Optional) To verify that the API gateway has been moved, enter the following command and confirm that the
API gateway's new compartment OCID is as you expect:

oci api-gateway gateway get --gateway-id <gateway-ocid>


3. To move an API deployment to a different compartment:


a. Open a command prompt and run oci api-gateway deployment change-compartment to move
the API deployment:

oci api-gateway deployment change-compartment --deployment-id <deployment-ocid> --compartment-id <compartment-ocid>

where:
• <deployment-ocid> is the OCID of the API deployment to move. To find out the API deployment's
OCID, see Listing API Gateways and API Deployments on page 390.
• <compartment-ocid> is the OCID of the compartment to which to move the API deployment.
For example:

oci api-gateway deployment change-compartment --deployment-id ocid1.apideployment.oc1..aaaaaaaaab______pwa
--compartment-id ocid1.compartment.oc1..aaaaaaaa7______ysq

The response to the command includes:


• The lifecycle state (for example, ACTIVE, FAILED).
• The id of the work request to move the API deployment (details of work requests are available for seven
days after completion, cancellation, or failure).
If you want the command to wait to return control until the API deployment is active (or the request has
failed), include either or both the following parameters:
• --wait-for-state ACTIVE
• --wait-for-state FAILED
For example:

oci api-gateway deployment change-compartment --deployment-id ocid1.apideployment.oc1..aaaaaaaaab______pwa
--compartment-id ocid1.compartment.oc1..aaaaaaaa7______ysq --wait-for-state ACTIVE
b. (Optional) To see the status of the work request that is moving the API deployment, enter:

oci api-gateway work-request get --work-request-id <work-request-ocid>


c. (Optional) To view the logs of the work request that is moving the API deployment, enter:

oci api-gateway work-request-log list --work-request-id <work-request-ocid>
d. (Optional) If the work request that is moving the API deployment fails and you want to review the error logs,
enter:

oci api-gateway work-request-error list --work-request-id <work-request-ocid>


e. (Optional) To verify that the API deployment has been moved, enter the following command and confirm that
the API deployment's new compartment OCID is as you expect:

oci api-gateway deployment get --deployment-id <deployment-ocid>

For more information about using the CLI, see Command Line Interface (CLI). For a complete list of flags and
options available for CLI commands, see CLI Help.


Using the API


For information about using the API and signing requests, see REST APIs on page 4409 and Security Credentials on
page 181. For information about SDKs, see Software Development Kits and Command Line Interface on page 4262.
Use the:
• ChangeGatewayCompartment operation to move an API gateway to a different compartment.
• ChangeDeploymentCompartment operation to move an API deployment to a different compartment.

Deleting API Gateways and API Deployments


Having created an API gateway, and deployed an API on the API gateway by creating an API deployment, you might
decide that either or both are no longer required.
You can delete an API gateway from the API Gateway service, provided there are no API deployments on it.
You can delete individual API deployments on an API gateway, one at a time. Note that when you delete an API
deployment, its API deployment specification is permanently removed.
Note that deleted API gateways and API deployments continue to be shown in the Console for 90 days, with a status
of Deleted. After 90 days, deleted API gateways and API deployments are no longer shown.

Using the Console


To delete an API gateway or an API deployment using the Console:
1. In the Console, open the navigation menu. Under Solutions and Platform, go to Developer Services and click
API Gateway.
2. Choose a Compartment you have permission to work in.
3. On the Gateways page, click the name of the API gateway that you want to delete (or that contains the
API deployment you want to delete) to show the Gateway Details page.
4. On the Gateway Details page:
• To delete the API gateway, click Delete below the API gateway's name. The API gateway is permanently
removed. Note that you cannot delete an API gateway if it still has API deployments on it. You must delete the
API deployments first.
• To delete an API deployment, select Deployments from the Resources list, click the Actions icon (three dots)
beside the API deployment you want to delete, and select Delete. The API deployment and its API deployment
specification are permanently removed.

Using the CLI


To delete API gateways and API deployments using the CLI:
1. Configure your client environment to use the CLI (Configuring Your Client Environment to use the CLI for API
Gateway Development on page 359).


2. To delete an existing API gateway:


a. Open a command prompt and run oci api-gateway gateway delete to delete the API gateway:

oci api-gateway gateway delete --gateway-id <gateway-ocid>

where:
• <gateway-ocid> is the OCID of the API gateway to delete. To find out the API gateway's OCID, see
Listing API Gateways and API Deployments on page 390.
For example:

oci api-gateway gateway delete --gateway-id ocid1.apigateway.oc1..aaaaaaaab______hga

Note that you cannot delete an API gateway if it still has API deployments on it (including API deployments
that are in different compartments to the API gateway itself). You must delete the API deployments first.
The response to the command includes:
• The lifecycle state (for example, DELETED, FAILED).
• The id of the work request to delete the API gateway (details of work requests are available for seven days
after completion, cancellation, or failure).
If you want the command to wait to return control until the API gateway has been deleted (or the request has
failed), include either or both the following parameters:
• --wait-for-state DELETED
• --wait-for-state FAILED
For example:

oci api-gateway gateway delete --gateway-id ocid1.apigateway.oc1..aaaaaaaab______hga --wait-for-state DELETED
b. (Optional) To see the status of the work request that is deleting the API gateway, enter:

oci api-gateway work-request get --work-request-id <work-request-ocid>


c. (Optional) To view the logs of the work request that is deleting the API gateway, enter:

oci api-gateway work-request-log list --work-request-id <work-request-ocid>
d. (Optional) If the work request that is deleting the API gateway fails and you want to review the error logs,
enter:

oci api-gateway work-request-error list --work-request-id <work-request-ocid>


e. (Optional) To verify that the API gateway has been deleted, enter the following command and confirm that
the API gateway's lifecycle state is DELETED:

oci api-gateway gateway get --gateway-id <gateway-ocid>


3. To delete an existing API deployment:


a. Open a command prompt and run oci api-gateway deployment delete to delete the API
deployment:

oci api-gateway deployment delete --deployment-id <deployment-ocid>

where:
• <deployment-ocid> is the OCID of the API deployment to delete. To find out the API deployment's
OCID, see Listing API Gateways and API Deployments on page 390.
For example:

oci api-gateway deployment delete --deployment-id ocid1.apideployment.oc1..aaaaaaaaab______pwa

The response to the command includes:


• The lifecycle state (for example, DELETED, FAILED).
• The id of the work request to delete the API deployment (details of work requests are available for seven
days after completion, cancellation, or failure).
If you want the command to wait to return control until the API deployment has been deleted (or the request has
failed), include either or both the following parameters:
• --wait-for-state DELETED
• --wait-for-state FAILED
For example:

oci api-gateway deployment delete --deployment-id ocid1.apideployment.oc1..aaaaaaaaab______pwa --wait-for-state DELETED
b. (Optional) To see the status of the work request that is deleting the API deployment, enter:

oci api-gateway work-request get --work-request-id <work-request-ocid>


c. (Optional) To view the logs of the work request that is deleting the API deployment, enter:

oci api-gateway work-request-log list --work-request-id <work-request-ocid>
d. (Optional) If the work request that is deleting the API deployment fails and you want to review the error logs,
enter:

oci api-gateway work-request-error list --work-request-id <work-request-ocid>


e. (Optional) To verify that the API deployment has been deleted, enter the following command and confirm that
the API deployment's lifecycle state is DELETED:

oci api-gateway deployment get --deployment-id <deployment-ocid>

For more information about using the CLI, see Command Line Interface (CLI). For a complete list of flags and
options available for CLI commands, see CLI Help.

Using the API


For information about using the API and signing requests, see REST APIs on page 4409 and Security Credentials on
page 181. For information about SDKs, see Software Development Kits and Command Line Interface on page 4262.
Use the:


• DeleteGateway operation to delete an API gateway
• DeleteDeployment operation to delete an API deployment

Adding Logging to API Deployments


Having created an API gateway and deployed one or more APIs on it, there will likely be occasions when you'll need
to see more detail about the flow of traffic into and out of the API gateway. For example, you might want to review
responses returned to API clients, or to troubleshoot errors. You can specify that the API Gateway service stores
information about requests and responses going through an API gateway, and information about processing within an
API gateway, as logs in the Oracle Cloud Infrastructure Logging service.
You can define and store two kinds of logs for API deployments in the Oracle Cloud Infrastructure Logging service:
• Access logs, which record a summary of every request and response that goes through the API gateway to and from
an API deployment. For more information about access log content, see API Deployment Access Log on page
2625.
• Execution logs, which record information about processing within the API gateway for an API deployment. For
more information about execution log content, see API Deployment Execution Log on page 2626. You can
specify a log level for execution logs as one of the following:
• Information, to record a summary of every processing stage.
• Warning, to record only transient errors that occur during processing. For example, a connection reset.
• Error, to record only persistent errors that occur during processing. For example, an internal error, or a call to a
function that returns a 404 message.
You can set an execution log level for an API deployment, and also set different execution log levels for
individual routes to override the execution log level inherited from the API deployment.
Note:

In earlier API Gateway releases (prior to the release of the Oracle Cloud
Infrastructure Logging service), you could direct API Gateway to store
access logs and execution logs as objects in a storage bucket in Oracle Cloud
Infrastructure Object Storage. This functionality is still available for any API
deployments you created in earlier releases where you previously specified a
logging policy.
You can set up a logging policy in the API deployment specification for
all routes in the API deployment specification, and optionally also set up
different logging policies for individual routes to override the policy set at
the API deployment level. Each log object contains the log messages output
in a ten minute logging window. Log objects are available in Object Storage
approximately ten minutes after the end of the logging window.
However, note the following:
• If you set up Oracle Cloud Infrastructure Logging to store logs for an
API deployment, you are no longer able to store logs as objects in a
storage bucket in Oracle Cloud Infrastructure Object Storage for that API
deployment.
• The ability to store access logs and execution logs as objects in a storage
bucket in Oracle Cloud Infrastructure Object Storage will be deprecated
in a future release.
• To store access logs and execution logs as objects in a storage bucket
in Oracle Cloud Infrastructure Object Storage, you have to define the
logging policy in a JSON file. You cannot use the Console to store access
logs and execution logs in Oracle Cloud Infrastructure Object Storage.
You can add logging to an API deployment specification by:
• using the Console


• editing a JSON file

Using the Console to Add Logging

Using the Console to Configure and Enable Logs in Oracle Cloud Infrastructure Logging
To configure and enable API deployment logs using the Console to store logs in Oracle Cloud Infrastructure Logging:
1. Create or update an API deployment using the Console, select the From Scratch option, and enter details on the
Basic Information page.
For more information, see Deploying an API on an API Gateway by Creating an API Deployment on page 367
and Updating API Gateways and API Deployments on page 391.
2. In the API Logging Policies section of the Basic Information page, specify one of the following options as the
Execution Log Level to record information about processing within the API gateway:
• Information: Record a summary of every processing stage. This is the default option.
• Warning: Record only transient errors that occur during processing. For example, a connection reset.
• Error: Record only persistent errors that occur during processing. For example, an internal error, or a call to a
function that returns a 404 message.
3. Click Next to enter details for individual routes in the API deployment on the Routes page and click Show Route
Logging Policies.
4. Specify one of the following options as the Execution Log Level Override that applies to an individual route (to
override the execution log level inherited from the API deployment):
• Information: Record a summary of every processing stage.
• Warning: Record only transient errors that occur during processing. For example, a connection reset.
• Error: Record only persistent errors that occur during processing. For example, an internal error, or a call to a
function that returns a 404 message.
5. Click Next to review the details you entered for the API deployment.
6. Click Create or Save Changes to create or update the API deployment.
The API deployment is shown on the API Deployment Details page.
7. Under Resources, click Logs, and then click the Enable Logging slider to create and enable a new API
deployment log in the Oracle Cloud Infrastructure Logging service in the Create Log entry panel:
• Compartment: By default, the current compartment.
• Log Group: By default, the first log group in the compartment.
• Log Category: Select either Execution or Access.
• Log Name: By default, <deployment-name>_execution or <deployment-name>_access,
depending on which category you select.
For more information, see Enabling Logging for a Resource on page 2619.
8. Click Enable Log to create the new log and enable it.

Editing a JSON File to Add Logging

Editing a JSON File to Set Execution Log Level for Logs Stored in Oracle Cloud Infrastructure
Logging
To edit the API deployment specification in a JSON file to set the log level for execution logs stored in Oracle Cloud
Infrastructure Logging:


1. Using your preferred JSON editor, edit the existing API deployment specification in which you want to set the
log level for execution logs stored in Oracle Cloud Infrastructure Logging, or create a new API deployment
specification (see Creating an API Deployment Specification on page 365).
At a minimum, the API deployment specification will include a routes section containing:
• A path. For example, /hello
• One or more methods. For example, GET
• A definition of a back end. For example, a URL, or the OCID of a function in Oracle Functions.
For example, the following basic API deployment specification defines a simple Hello World serverless function
in Oracle Functions as a single back end:

{
"routes": [
{
"path": "/hello",
"methods": ["GET"],
"backend": {
"type": "ORACLE_FUNCTIONS_BACKEND",
"functionId": "ocid1.fnfunc.oc1.phx.aaaaaaaaab______xmq"
}
}
]
}
2. (Optional) To set the log level for execution logs that applies globally to all routes in the API deployment
specification:
a. Insert a loggingPolicies section before the routes section. For example:

{
"loggingPolicies": {},
"routes": [
{
"path": "/hello",
"methods": ["GET"],
"backend": {
"type": "ORACLE_FUNCTIONS_BACKEND",
"functionId": "ocid1.fnfunc.oc1.phx.aaaaaaaaab______xmq"
}
}
]
}
b. Specify the level of detail to record about processing within the API gateway for all routes by including the
executionLog policy in the loggingPolicies section, and setting the logLevel property to one of
the following:
• INFO to record a summary of every processing stage.
• WARN to record only transient errors that occur during processing. For example, a connection reset.
• ERROR to record only persistent errors that occur during processing. For example, an internal error, or a
call to a function that returns a 404 message.
For example:

{
"loggingPolicies": {
"executionLog": {
"logLevel": "INFO"
}
},
"routes": [
{
"path": "/hello",
"methods": ["GET"],
"backend": {
"type": "ORACLE_FUNCTIONS_BACKEND",
"functionId": "ocid1.fnfunc.oc1.phx.aaaaaaaaab______xmq"
}
}
]
}
3. (Optional) To set the log level for execution logs for a particular route (overriding the global execution log level
inherited from the API deployment):
a. Insert a loggingPolicies section after the route's backend section. For example:

{
"loggingPolicies": {
"executionLog": {
"logLevel": "INFO"
}
},
"routes": [
{
"path": "/hello",
"methods": ["GET"],
"backend": {
"type": "ORACLE_FUNCTIONS_BACKEND",
"functionId": "ocid1.fnfunc.oc1.phx.aaaaaaaaab______xmq"
},
"loggingPolicies": {}
}
]
}
b. Specify the level of detail to record about processing within the API gateway for the route by including the
executionLog policy in the loggingPolicies section, and setting the logLevel property to one of
the following:
• INFO to record a summary of every processing stage.
• WARN to record only transient errors that occur during processing. For example, a connection reset.
• ERROR to record only persistent errors that occur during processing. For example, an internal error, or a
call to a function that returns a 404 message.
For example:

{
"loggingPolicies": {
"executionLog": {
"logLevel": "INFO"
}
},
"routes": [
{
"path": "/hello",
"methods": ["GET"],
"backend": {
"type": "ORACLE_FUNCTIONS_BACKEND",
"functionId": "ocid1.fnfunc.oc1.phx.aaaaaaaaab______xmq"
},
"loggingPolicies": {
"executionLog": {
"logLevel": "ERROR"
}
}
}
]
}
4. Save the JSON file containing the API deployment specification.
5. Use the API deployment specification when you create or update an API deployment in the following ways:
• by specifying the JSON file in the Console when you select the Upload an existing API option
• by specifying the JSON file in a request to the API Gateway REST API
For more information, see Deploying an API on an API Gateway by Creating an API Deployment on page 367.
6. Having set the log level for execution logs, follow the instructions in Enabling Logging for a Resource on page
2619 to create and enable a new API deployment log in the Oracle Cloud Infrastructure Logging service.

Editing a JSON File to Set Up Logging Policies to Store Logs in Object Storage
In earlier API Gateway releases (prior to the release of the Oracle Cloud Infrastructure Logging service), you could
direct API Gateway to store and view access logs and execution logs as objects in a storage bucket in Oracle Cloud
Infrastructure Object Storage. This functionality is still available for any API deployments you created in earlier
releases where you previously specified a logging policy, as described below. However, note the following:
• If you set up Oracle Cloud Infrastructure Logging to store logs for an API deployment, you are no longer able to
store logs as objects in a storage bucket in Oracle Cloud Infrastructure Object Storage for that API deployment.
• The ability to store access logs and execution logs as objects in a storage bucket in Oracle Cloud Infrastructure
Object Storage will be deprecated in a future release.
To edit the API deployment specification in a JSON file to store logs in Object Storage:
1. Using your preferred JSON editor, edit the existing API deployment specification to which you want to add a
logging policy, or create a new API deployment specification (see Creating an API Deployment Specification on
page 365).
At a minimum, the API deployment specification will include a routes section containing:
• A path. For example, /hello
• One or more methods. For example, GET
• A definition of a back end. For example, a URL, or the OCID of a function in Oracle Functions.
For example, the following basic API deployment specification defines a simple Hello World serverless function
in Oracle Functions as a single back end:

{
"routes": [
{
"path": "/hello",
"methods": ["GET"],
"backend": {
"type": "ORACLE_FUNCTIONS_BACKEND",
"functionId": "ocid1.fnfunc.oc1.phx.aaaaaaaaab______xmq"
}
}
]
}


2. (Optional) To add a logging policy to the API deployment specification that applies globally to all routes in the
API deployment specification:
a. Insert a loggingPolicies section before the routes section. For example:

{
"loggingPolicies": {},
"routes": [
{
"path": "/hello",
"methods": ["GET"],
"backend": {
"type": "ORACLE_FUNCTIONS_BACKEND",
"functionId": "ocid1.fnfunc.oc1.phx.aaaaaaaaab______xmq"
}
}
]
}
b. (Optional) Record a summary of every request and response that goes through the gateway by including the
accessLog policy in the loggingPolicies section, and setting the isEnabled property to true.
For example:

{
"loggingPolicies": {
"accessLog": {
"isEnabled": true
}
},
"routes": [
{
"path": "/hello",
"methods": ["GET"],
"backend": {
"type": "ORACLE_FUNCTIONS_BACKEND",
"functionId": "ocid1.fnfunc.oc1.phx.aaaaaaaaab______xmq"
}
}
]
}
c. (Optional) Record information about processing within the API gateway by including the executionLog
policy in the loggingPolicies section, setting the isEnabled property to true, and setting the
logLevel property to one of the following:
• INFO to record a summary of every processing stage.
• WARN to record only transient errors that occur during processing. For example, a connection reset.
• ERROR to record only persistent errors that occur during processing. For example, an internal error, or a
call to a function that returns a 404 message.
For example:

{
"loggingPolicies": {
"accessLog": {
"isEnabled": true
},
"executionLog": {
"isEnabled": true,
"logLevel": "INFO"
}
},
"routes": [
{
"path": "/hello",
"methods": ["GET"],
"backend": {
"type": "ORACLE_FUNCTIONS_BACKEND",
"functionId": "ocid1.fnfunc.oc1.phx.aaaaaaaaab______xmq"
}
}
]
}
3. (Optional) To add a logging policy to the API deployment specification that applies to a particular route
(overriding the global logging policy):
a. Insert a loggingPolicies section after the route's backend section. For example:

{
"loggingPolicies": {
"accessLog": {
"isEnabled": true
},
"executionLog": {
"isEnabled": true,
"logLevel": "INFO"
}
},
"routes": [
{
"path": "/hello",
"methods": ["GET"],
"backend": {
"type": "ORACLE_FUNCTIONS_BACKEND",
"functionId": "ocid1.fnfunc.oc1.phx.aaaaaaaaab______xmq"
},
"loggingPolicies": {}
}
]
}
b. (Optional) Record a summary of every request and response that goes through the gateway to the route by
including the accessLog policy in the loggingPolicies section, and setting the isEnabled property
to true.
For example:

{
"loggingPolicies": {
"accessLog": {
"isEnabled": true
},
"executionLog": {
"isEnabled": true,
"logLevel": "INFO"
}
},
"routes": [
{
"path": "/hello",
"methods": ["GET"],
"backend": {
"type": "ORACLE_FUNCTIONS_BACKEND",
"functionId": "ocid1.fnfunc.oc1.phx.aaaaaaaaab______xmq"
},
"loggingPolicies": {
"accessLog": {
"isEnabled": true
}
}
}
]
}
c. (Optional) Record information about processing within the API gateway for the route by including the
executionLog policy in the loggingPolicies section, setting the isEnabled property to true, and
setting the logLevel property to one of the following:
• INFO to record a summary of every processing stage.
• WARN to record only transient errors that occur during processing. For example, a connection reset.
• ERROR to record only persistent errors that occur during processing. For example, an internal error, or a
call to a function that returns a 404 message.
For example:

{
"loggingPolicies": {
"accessLog": {
"isEnabled": true
},
"executionLog": {
"isEnabled": true,
"logLevel": "INFO"
}
},
"routes": [
{
"path": "/hello",
"methods": ["GET"],
"backend": {
"type": "ORACLE_FUNCTIONS_BACKEND",
"functionId": "ocid1.fnfunc.oc1.phx.aaaaaaaaab______xmq"
},
"loggingPolicies": {
"accessLog": {
"isEnabled": true
},
"executionLog": {
"isEnabled": true,
"logLevel": "ERROR"
}
}
}
]
}
4. Save the JSON file containing the API deployment specification.
5. Use the API deployment specification when you create or update an API deployment in the following ways:
• by specifying the JSON file in the Console when you select the Upload an existing API option
• by specifying the JSON file in a request to the API Gateway REST API
For more information, see Deploying an API on an API Gateway by Creating an API Deployment on page 367.
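As an illustration of the second option, the following sketch shows how you might create or update an API deployment from a saved specification file using the OCI CLI (which calls the same REST API). The OCIDs, path prefix, display name, and file name are placeholders, and the parameter names are assumptions based on the CLI's usual conventions rather than values taken from this guide, so run oci api-gateway deployment create --help (or update --help) to confirm them before use:

# All OCIDs, names, and file paths below are hypothetical placeholders.
# Create a new API deployment from the specification file:
oci api-gateway deployment create \
  --compartment-id ocid1.compartment.oc1..aaaaaaaa_______example \
  --gateway-id ocid1.apigateway.oc1.phx.aaaaaaaa_______example \
  --path-prefix "/marketing" \
  --display-name "logging-example-deployment" \
  --specification file://deployment-spec.json

# Update an existing API deployment with a revised specification file:
oci api-gateway deployment update \
  --deployment-id ocid1.apideployment.oc1.phx.aaaaaaaa_______example \
  --specification file://deployment-spec.json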


Viewing Logs
After you add logging to an API deployment specification and deploy the API on an API gateway, the API Gateway
service writes logs accordingly.

Viewing Logs in Oracle Cloud Infrastructure Logging


You can view the content of an API deployment log in Oracle Cloud Infrastructure Logging from the API
Deployment Details page. Under Resources, click Logs, and then click the name of the log you want to view.
Alternatively, you can view the content of an API deployment log from the Oracle Cloud Infrastructure Logging Log
Search page. See To view the contents of logs on page 2609.
For more information about the content of access and execution logs, see:
• API Deployment Access Log on page 2625
• API Deployment Execution Log on page 2626

Viewing Logs in Oracle Cloud Infrastructure Object Storage


In earlier API Gateway releases (prior to the release of the Oracle Cloud Infrastructure Logging service), you could
direct API Gateway to store and view access logs and execution logs as objects in a storage bucket in Oracle Cloud
Infrastructure Object Storage. This functionality is still available for any API deployments you created in earlier
releases where you previously specified a logging policy, as described below. However, note the following:
• If you set up Oracle Cloud Infrastructure Logging to store logs for an API deployment, you are no longer able to
store logs as objects in a storage bucket in Oracle Cloud Infrastructure Object Storage for that API deployment.
• The ability to store access logs and execution logs as objects in a storage bucket in Oracle Cloud Infrastructure
Object Storage will be deprecated in a future release.

To view logs that have been stored in a storage bucket in Oracle Cloud Infrastructure Object Storage:
1. In the Console, under Core Infrastructure, click Object Storage.
2. Choose the Compartment that owns API Gateway-related resources. If the API gateway and the API deployment
are in different compartments, choose the compartment that owns the API gateway.
The Object Storage page shows the storage buckets in the compartment.
3. On the Object Storage page, click the storage bucket name that includes the OCID of the compartment that owns
the API Gateway-related resources.
4. Click Objects under Resources to see a list of log objects in the storage bucket, each with a name in the format:

api_gateway_deployment_access_log/<deployment-ocid>/<datetimestamp>.log.gz

where:
• <deployment-ocid> is the OCID of the API deployment.
• <datetimestamp> is the date and time at the start of the logging window.
5. Click the Actions icon (three dots) beside the log you want to view, and then click Download and save the log as
a file in a convenient location.


6. Open the log file in a text editor to view the log messages.
Each log message contains:
• the number of bytes sent in the message
• the compartment OCID
• the API deployment OCID
• the API gateway OCID
• the user agent
• the request and the endpoint
• the client IP address
• the request duration (in seconds)
• the message number
• the timestamp
For example:

{"bodybytesSent":292,"compartmentId":"ocid1.compartment.oc1..aaaaaaaa7______ysq","depl
marketing/hello
HTTP/1.1","remoteAddr":"123.45.678.90","remoteUser":"","requestDuration":"0.167","req

Adding an HTTP or HTTPS URL as an API Gateway Back End


A common requirement is to build an API with the HTTP or HTTPS URL of a back-end service, and an API gateway
providing front-end access to the back-end URL.
Having used the API Gateway service to create an API gateway, you can create an API deployment to access
HTTP and HTTPS URLs.
The HTTP or HTTPS URL that you specify for the back-end service can be:
• The URL of a service that is publicly available on the internet.
• The URL of an Oracle Cloud Infrastructure service (for example, Oracle Functions).
• The URL of a service on your own private or internal network (for example, connected to the VCN by
FastConnect).
The URL you provide in the API deployment specification to identify the HTTP or HTTPS back-end service can
include the host name or the host IP address. If you provide the host name, use the DHCP Options property of
the API gateway's subnet to control how host names included in the API deployment specification are resolved to
IP addresses at runtime:
• If the host name for a back-end service is publicly published on the internet, or if the host name belongs to an
instance in the same VCN, select a DHCP options set for the API gateway's subnet that has the Oracle-provided
Internet and VCN Resolver as the DNS Type. This is the default if you do not explicitly select a DHCP options
set.
• If the host name is for a back-end service on your own private or internal network, select a DHCP options set for
the API gateway's subnet that has Custom Resolver as the DNS Type, and has the URL of a suitable DNS server
that can resolve the host name to an IP address.
Note that you can change the DNS server details in the DHCP options set specified for an API gateway's subnet. The
API gateway will be reconfigured to use the updated DNS server details within two hours. For more information
about resolving host names to IP addresses, see DNS in Your Virtual Cloud Network on page 2936 and DHCP
Options on page 2943.
You can add HTTP and HTTPS back ends to an API deployment specification by:
• using the Console
• editing a JSON file


Using the Console to Add HTTP or HTTPS Back Ends to an API Deployment
Specification
To add an HTTP or HTTPS back end to an API deployment specification using the Console:
1. Create or update an API deployment using the Console, select the From Scratch option, and enter details on the
Basic Information page.
For more information, see Deploying an API on an API Gateway by Creating an API Deployment on page 367
and Updating API Gateways and API Deployments on page 391.
2. On the Routes page, create a new route and specify:
• Path: A path for API calls using the listed methods to the back-end service. Note that the route path you
specify:
• is relative to the deployment path prefix (see Deploying an API on an API Gateway by Creating an API
Deployment on page 367)
• must be preceded by a forward slash ( / ), and can be just that single forward slash
• can contain multiple forward slashes (provided they are not adjacent), and can end with a forward slash
• can include alphanumeric uppercase and lowercase characters
• can include the special characters $ - _ . + ! * ' ( ) , % ; : @ & =
• can include parameters and wildcards (see Adding Path Parameters and Wildcards to Route Paths on page
381)
• Methods: One or more methods accepted by the back-end service. For example, GET, PUT.
• Type: The type of the back-end service as HTTP.
• URL: The URL you want to use as the back-end service, in the format <protocol>://
<host>:<port>/<path> where:
• <protocol> is either http or https.
• <host> is the host name or the host IP address of the back-end service. For example,
api.weather.gov. If you provide a host name, use the DHCP Options property of the API gateway's
subnet to control how host names are resolved to IP addresses at runtime.
• <port> is optionally a port number.
• <path> is optionally a subdirectory or file at the host where the back-end service is located.
Note that <path> must not contain parameters directly, but can contain context variables. For more
information and examples showing how to use context variables to substitute path, query, and header
parameters into the path, see Adding Context Variables to Policies and HTTP Back End Definitions on
page 382.
For example, "url": "https://api.weather.gov".
• Connection establishment timeout in seconds: Optionally, a floating point value indicating how long
(in seconds) to allow when establishing a connection with the back-end service. The minimum is 1.0, the
maximum is 75.0. If not specified, the default of 60.0 seconds is used.
• Request transmit timeout in seconds: Optionally, a floating point value indicating how long (in seconds) to
allow when transmitting a request to the back-end service. The minimum is 1.0, the maximum is 300.0. If not
specified, the default of 10.0 seconds is used.
• Reading response timeout in seconds: Optionally, a floating point value indicating how long (in seconds) to
allow when reading a response from the back-end service. The minimum is 1.0, the maximum is 300.0. If not
specified, the default of 10.0 seconds is used.
• Disable SSL verification: Whether to disable SSL verification when communicating with the back-end
service. By default, this option is not selected.
In this example, the route defines a weather service as an HTTP back end.

Field: Enter:
Path: /weather
Methods: GET
Type: HTTP
URL: https://api.weather.gov
Connection establishment timeout in seconds: 45
Request transmit timeout in seconds: 15
Reading response timeout in seconds: 15
Disable SSL verification: (Not selected)
3. (Optional) Click Another Route to enter details of additional routes.
4. Click Next to review the details you entered for the API deployment.
5. Click Create or Save Changes to create or update the API deployment.
6. (Optional) Confirm the API has been deployed successfully by calling it (see Calling an API Deployed on an API
Gateway on page 388).

Editing a JSON File to Add HTTP or HTTPS Back Ends to an API Deployment
Specification
To add an HTTP or HTTPS back end to an API deployment specification in a JSON file:
1. Using your preferred JSON editor, create a new API deployment specification (see Creating an API Deployment
Specification on page 365) in the format:

{
"requestPolicies": {},
"routes": [
{
"path": "<api-route-path>",
"methods": ["<method-list>"],
"backend": {
"type": "HTTP_BACKEND",
"url": "<identifier>",
"connectTimeoutInSeconds": <seconds>,
"readTimeoutInSeconds": <seconds>,
"sendTimeoutInSeconds": <seconds>,
"isSSLVerifyDisabled": <true|false>
},
"requestPolicies": {},
}
]
}

where:
• "requestPolicies" specifies optional policies to control the behavior of an API deployment. If you
want to apply policies to all routes in an API deployment specification, place the policies outside the routes
section. If you want to apply the policies just to a particular route, place the policies inside the routes
section. See Adding Request Policies and Response Policies to API Deployment Specifications on page
421.
• <api-route-path> specifies a path for API calls using the listed methods to the back-end service. Note
that the route path you specify:
• is relative to the deployment path prefix (see Deploying an API on an API Gateway by Creating an API
Deployment on page 367)
• must be preceded by a forward slash ( / ), and can be just that single forward slash
• can contain multiple forward slashes (provided they are not adjacent), and can end with a forward slash
• can include alphanumeric uppercase and lowercase characters
• can include the special characters $ - _ . + ! * ' ( ) , % ; : @ & =
• can include parameters and wildcards (see Adding Path Parameters and Wildcards to Route Paths on page
381)
• <method-list> specifies one or more methods accepted by the back-end service, separated by commas.
For example, "GET, PUT".
• "type": "HTTP_BACKEND" specifies the API gateway back end is an HTTP or HTTPS URL.
• "url: "<identifier>" specifies the URL you want to use as the back-end service, in the format
<protocol>://<host>:<port>/<path> where:
• <protocol> is either http or https.
• <host> is the host name or the host IP address of the back-end service. For example,
api.weather.gov. If you provide a host name, use the DHCP Options property of the API gateway's
subnet to control how host names are resolved to IP addresses at runtime.
• <port> is optionally a port number.
• <path> is optionally a subdirectory or file at the host where the back-end service is located.
Note that <path> must not contain parameters directly, but can contain context variables. For more
information and examples showing how to use context variables to substitute path, query, and header
parameters into the path, see Adding Context Variables to Policies and HTTP Back End Definitions on
page 382.
For example, "url": "https://api.weather.gov".
• "connectTimeoutInSeconds": <seconds> is an optional floating point value indicating how long
(in seconds) to allow when establishing a connection with the back-end service. The minimum is 0.0, the
maximum is 75.0. If not specified, the default of 60.0 seconds is used.
• "readTimeoutInSeconds": <seconds> is an optional floating point value indicating how long (in
seconds) to allow when reading a response from the back-end service. The minimum is 0.0, the maximum is
300.0. If not specified, the default of 10.0 seconds is used.
• "sendTimeoutInSeconds": <seconds> is an optional floating point value indicating how long (in
seconds) to allow when transmitting a request to the back-end service. The minimum is 0.0, the maximum is
300.0. If not specified, the default of 10.0 seconds is used.
• "isSSLVerifyDisabled": <true|false> is an optional boolean value (either true or false)
indicating whether to disable SSL verification when communicating with the back-end service. If not
specified, the default of false is used.
For example, the following basic API deployment specification defines a weather service as an HTTP back end:

{
"routes": [
{
"path": "/weather",
"methods": ["GET"],
"backend": {
"type": "HTTP_BACKEND",
"url": "https://api.weather.gov",
"connectTimeoutInSeconds": 45,
"readTimeoutInSeconds": 15,
"sendTimeoutInSeconds": 15,
"isSSLVerifyDisabled": false
}
}
]
}
2. Save the JSON file containing the API deployment specification.
3. Use the API deployment specification when you create or update an API deployment in the following ways:
• by specifying the JSON file in the Console when you select the Upload an existing API option
• by specifying the JSON file in a request to the API Gateway REST API
For more information, see Deploying an API on an API Gateway by Creating an API Deployment on page 367
.
4. (Optional) Confirm the API has been deployed by calling it (see Calling an API Deployed on an API Gateway on
page 388).
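For example, a minimal smoke test of the /weather route defined above might look like the following, where the gateway hostname and deployment path prefix are placeholders for the values of your own API gateway and API deployment:

# <gateway-hostname> and <deployment-path-prefix> are placeholders for your own values.
curl -k -X GET https://<gateway-hostname>/<deployment-path-prefix>/weather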

Adding a Function in Oracle Functions as an API Gateway Back End


A common requirement is to build an API with serverless functions as a back end, and an API gateway providing
front-end access to those functions.
Oracle Functions enables you to create serverless functions that are built as Docker images and pushed to a specified
Docker registry. A definition of each function is stored as metadata in the Oracle Functions server. When a function
is invoked for the first time, Oracle Functions pulls the function's Docker image from the specified Docker registry,
runs it as a Docker container, and executes the function. If there are subsequent requests to the same function, Oracle
Functions directs those requests to the same running container. After a period being idle, the Docker container is
stopped.
Having used the API Gateway service to create an API gateway, you can create an API deployment that invokes
serverless functions defined in Oracle Functions.
Before you can use serverless functions in Oracle Functions as the back end for an API:
• Serverless functions referenced in the API deployment specification must have already been created and deployed
in Oracle Functions. The functions must be routable from the VCN specified for the API gateway, either through
an internet gateway (in the case of a public API gateway) or through a service gateway (in the case of a private
API gateway). See Creating and Deploying Functions on page 2071. For a related Developer Tutorial, see
Functions: Call a Function using API Gateway.
• Appropriate policies must already exist that give access to serverless functions defined in Oracle Functions to:
• a group to which your user account belongs (see Create a Policy to Give API Gateway Users Access to
Functions on page 348)
• API gateways (see Create a Policy to Give API Gateways Access to Functions on page 349)
You can add serverless function back ends to an API deployment specification by:
• using the Console
• editing a JSON file

Creating and Deploying a Serverless Function in Oracle Functions for Use as an API Gateway Back End
To create a serverless function in Oracle Functions that can be invoked from an API gateway, follow the instructions
in the Oracle Functions documentation to:
• Confirm that you have completed the prerequisite steps for using Oracle Functions, as described in Preparing for
Oracle Functions on page 2046.
• Create and deploy the function in a compartment to which API gateways have been granted access, as described in
Creating and Deploying Functions on page 2071.


Using the Console to Add Serverless Function Back Ends to an API Deployment
Specification
To add an Oracle Functions function back end to an API deployment specification using the Console:
1. Create or update an API deployment using the Console, select the From Scratch option, and enter details on the
Basic Information page.
For more information, see Deploying an API on an API Gateway by Creating an API Deployment on page 367
and Updating API Gateways and API Deployments on page 391.
2. On the Routes page, create a new route and specify:
• Path: A path for API calls using the listed methods to the back-end service. Note that the route path you
specify:
• is relative to the deployment path prefix (see Deploying an API on an API Gateway by Creating an API
Deployment on page 367)
• must be preceded by a forward slash ( / ), and can be just that single forward slash
• can contain multiple forward slashes (provided they are not adjacent), and can end with a forward slash
• can include alphanumeric uppercase and lowercase characters
• can include the special characters $ - _ . + ! * ' ( ) , % ; : @ & =
• can include parameters and wildcards (see Adding Path Parameters and Wildcards to Route Paths on page
381)
• Methods: One or more methods accepted by the back-end service. For example, GET, PUT.
• Type: The type of the back-end service as Oracle Functions.
• Application in <compartment-name>: The name of the application in Oracle Functions that contains the
function. You can select an application from a different compartment.
• Function Name: The name of the function in Oracle Functions.
In this example, the route defines a simple Hello World serverless function in Oracle Functions as a single back
end.

Field: Enter:
Path: /hello
Methods: GET
Type: Oracle Functions
Application in <compartment-name>: acmeapp
Function Name: acme-func
3. (Optional) Click Another Route to enter details of additional routes.
4. Click Next to review the details you entered for the API deployment.
5. Click Create or Save Changes to create or update the API deployment.
6. (Optional) Confirm the API has been deployed successfully by calling it (see Calling an API Deployed on an API
Gateway on page 388).
If the serverless function accepts parameters, include those in the call to the API. For example:

curl -k -X GET https://lak...sjd.apigateway.us-phoenix-1.oci.customer-oci.com/marketing/hello/ -d "name=john"

Editing a JSON File to Add Serverless Function Back Ends to an API Deployment
Specification
To add an Oracle Functions function back end to an API deployment specification in a JSON file:


1. Using your preferred JSON editor, create the API deployment specification in a JSON file in the format:

{
"requestPolicies": {},
"routes": [
{
"path": "<api-route-path>",
"methods": ["<method-list>"],
"backend": {
"type": "ORACLE_FUNCTIONS_BACKEND",
"functionId": "<identifier>"
},
"requestPolicies": {}
}
]
}

where:
• "requestPolicies" specifies optional policies to control the behavior of an API deployment. If you
want to apply policies to all routes in an API deployment specification, place the policies outside the routes
section. If you want to apply the policies just to a particular route, place the policies inside the routes
section. See Adding Request Policies and Response Policies to API Deployment Specifications on page
421.
• <api-route-path> specifies a path for API calls using the listed methods to the back-end service. Note
that the route path you specify:
• is relative to the deployment path prefix (see Deploying an API on an API Gateway by Creating an API
Deployment on page 367)
• must be preceded by a forward slash ( / ), and can be just that single forward slash
• can contain multiple forward slashes (provided they are not adjacent), and can end with a forward slash
• can include alphanumeric uppercase and lowercase characters
• can include the special characters $ - _ . + ! * ' ( ) , % ; : @ & =
• can include parameters and wildcards (see Adding Path Parameters and Wildcards to Route Paths on page
381)
• <method-list> specifies one or more methods accepted by the back-end service, separated by commas.
For example, "GET, PUT".
• <identifier> specifies the OCID of the function you want to use as the back-end service. For example,
"functionId": "ocid1.fnfunc.oc1.phx.aaaaaaaaab______xmq".
For example, the following basic API deployment specification defines a simple Hello World serverless function
in Oracle Functions as a single back end:

{
"routes": [
{
"path": "/hello",
"methods": ["GET"],
"backend": {
"type": "ORACLE_FUNCTIONS_BACKEND",
"functionId": "ocid1.fnfunc.oc1.phx.aaaaaaaaab______xmq"
}
}
]
}
2. Save the JSON file containing the API deployment specification.


3. Use the API deployment specification when you create or update an API deployment in the following ways:
• by specifying the JSON file in the Console when you select the Upload an existing API option
• by specifying the JSON file in a request to the API Gateway REST API
For more information, see Deploying an API on an API Gateway by Creating an API Deployment on page 367
.
4. (Optional) Confirm the API has been deployed and that the serverless function in Oracle Functions can be invoked
successfully by calling the API (see Calling an API Deployed on an API Gateway on page 388).
If the serverless function accepts parameters, include those in the call to the API. For example:

curl -k -X GET https://lak...sjd.apigateway.us-phoenix-1.oci.customer-oci.com/marketing/hello/ -d "name=john"

Adding Stock Responses as an API Gateway Back End


You'll often want to verify that an API has been successfully deployed on an API gateway without having to set up
an actual back-end service. One approach is to define a route in the API deployment specification that has a path to a
'dummy' back end. On receiving a request to that path, the API gateway itself acts as the back end and returns a stock
response you've specified.
Equally, there are some situations in a production deployment where you'll want a particular path for a route to
consistently return the same stock response without sending a request to a back end. For example, when you want a
call to a path to always return a specific HTTP status code in the response.
Using the API Gateway service, you can define a path to a stock response back end that always returns the same:
• HTTP status code
• HTTP header fields (name-value pairs)
• content in the body of the response
Note the following restrictions when defining stock responses and stock response back ends:
• each header name must not exceed 1KB in length
• each header value must not exceed 4KB in length
• each body response must not exceed 5KB in length (including any encoding)
• a stock response back end definition must not include more than 50 header fields
You can add stock response back ends to an API deployment specification by:
• using the Console
• editing a JSON file

Using the Console to Add Stock Responses to an API Deployment Specification


To add stock responses to an API deployment specification using the Console:
1. Create or update an API deployment using the Console, select the From Scratch option, and enter details on the
Basic Information page.
For more information, see Deploying an API on an API Gateway by Creating an API Deployment on page 367
and Updating API Gateways and API Deployments on page 391.


2. On the Routes page, create a new route and specify:


• Path: A path for API calls using the listed methods to the back-end service. Note that the route path you
specify:
• is relative to the deployment path prefix (see Deploying an API on an API Gateway by Creating an API
Deployment on page 367)
• must be preceded by a forward slash ( / ), and can be just that single forward slash
• can contain multiple forward slashes (provided they are not adjacent), and can end with a forward slash
• can include alphanumeric uppercase and lowercase characters
• can include the special characters $ - _ . + ! * ' ( ) , % ; : @ & =
• can include parameters and wildcards (see Adding Path Parameters and Wildcards to Route Paths on page
381)
• Methods: One or more methods accepted by the back-end service. For example, GET, PUT.
• Type: The type of the back-end service as Stock Response.
• Status Code: Any valid HTTP response code. For example, 200
• Body: Optionally specifies the content of the response body, in an appropriate format. For example:
• If you specify a Header Name and Header Value of Content-Type and text/plain respectively,
the response body might be "Hello world".
• If you specify a Header Name and Header Value of Content-Type and application/json
respectively, the response body might be {"username": "john.doe"}.
Note that the response body must not exceed 5KB in length (including any encoding).
• Header Name and Header Value: Optionally, you can specify the name of an HTTP response header and
its value. For example, a name of Content-Type and a value of application/json. You can specify
multiple header name and value pairs (up to a maximum of 50). Note that in each case:
• the header name must not exceed 1KB in length
• the header value must not exceed 4KB in length
In this example, a request to the /test path returns a 200 status code and a JSON payload in the body of the
response.

Field: Enter:
Path: /test
Methods: GET
Type: Stock Response
Status Code: 200
Body: {"username": "john.doe"}
Header Name: Content-Type
Header Value: application/json

In this example, a request to the /test-redirect path returns a 302 status code and a temporary URL in the
Location header of the response.

Field: Enter:
Path: /test-redirect
Methods: GET
Type: Stock Response
Status Code: 302
Body: n/a
Header Name: Location
Header Value: http://www.example.com
3. (Optional) Click Another Route to enter details of additional routes.
4. Click Next to review the details you entered for the API deployment.
5. Click Create or Save Changes to create or update the API deployment.
6. (Optional) Confirm the API has been deployed successfully by calling it (see Calling an API Deployed on an API
Gateway on page 388).

Editing a JSON File to Add Stock Responses to an API Deployment Specification


To add stock responses to an API deployment specification in a JSON file:
1. Using your preferred JSON editor, edit the existing API deployment specification to which you want to add
a stock response back end, or create a new API deployment specification (see Creating an API Deployment
Specification on page 365).
For example, the following basic API deployment specification defines a simple Hello World serverless function
in Oracle Functions as a single back end:

{
"routes": [
{
"path": "/hello",
"methods": ["GET"],
"backend": {
"type": "ORACLE_FUNCTIONS_BACKEND",
"functionId": "ocid1.fnfunc.oc1.phx.aaaaaaaaab______xmq"
}
}
]
}
2. In the routes section, include a new path section for a stock response back end:

{
"routes": [
{
"path": "/hello",
"methods": ["GET"],
"backend": {
"type": "ORACLE_FUNCTIONS_BACKEND",
"functionId": "ocid1.fnfunc.oc1.phx.aaaaaaaaab______xmq"
}
},
{
"path": "<api-route-path>",
"methods": ["<method-list>"],
"backend": {
"type": "STOCK_RESPONSE_BACKEND",
"status": <http-response-code>,
"headers": [{
"name": "<header-name>",
"value": "<header-value>"
}],
"body": "<body-content>"
}
}
]
}

where:
• <api-route-path> specifies a path for API calls using the listed methods to the stock response back end.
Note that the route path you specify:
• is relative to the deployment path prefix (see Deploying an API on an API Gateway by Creating an API
Deployment on page 367)
• must be preceded by a forward slash ( / ), and can be just that single forward slash
• can contain multiple forward slashes (provided they are not adjacent), and can end with a forward slash
• can include alphanumeric uppercase and lowercase characters
• can include the special characters $ - _ . + ! * ' ( ) , % ; : @ & =
• can include parameters and wildcards (see Adding Path Parameters and Wildcards to Route Paths on page
381)
• <method-list> specifies one or more methods accepted by the stock response back end, separated by
commas. For example, "GET, PUT".
• "type": "STOCK_RESPONSE_BACKEND" indicates that the API gateway itself will act as the back end
and return the stock response you define (the status code, the header fields and the body content).
• <http-response-code> is any valid HTTP response code. For example, 200
• "name": "<header-name>", "value": "<header-value>" optionally specifies the
name of an HTTP response header and its value. For example, "name": "Content-Type",
"value":"application/json" . You can specify multiple "name": "<header-name>",
"value": "<header-value>" pairs in the headers: section (up to a maximum of 50). Note that in
each case:
• <header-name> must not exceed 1KB in length
• <header-value> must not exceed 4KB in length
• "body": "<body-content>" optionally specifies the content of the response body, in an appropriate
format. For example:
• If the Content-Type header is text/plain, the response body might be "body": "Hello
world".
• If the Content-Type header is application/json, the response body might be "body":
"{\"username\": \"john.doe\"}". In the case of a JSON response, note that quotation marks in
the response have to be escaped with a backslash ( \ ) character.
Note that <body-content> must not exceed 5KB in length (including any encoding).
In this example, a request to the /test path returns a 200 status code and a JSON payload in the body of the
response.

{
"routes": [
{
"path": "/hello",
"methods": ["GET"],
"backend": {
"type": "ORACLE_FUNCTIONS_BACKEND",
"functionId": "ocid1.fnfunc.oc1.phx.aaaaaaaaab______xmq"
}
},
{
"path": "/test",
"methods": ["GET"],
"backend": {
"type": "STOCK_RESPONSE_BACKEND",
"status": 200,
"headers": [{
"name": "Content-Type",
"value": "application/json"
}],
"body" : "{\"username\": \"john.doe\"}"
}
}
]
}

In this example, a request to the /test-redirect path returns a 302 status code and a temporary URL in the
Location header of the response. This example also demonstrates that you can create an API deployment
specification with just one route to a back end of type STOCK_RESPONSE_BACKEND.

{
"routes": [
{
"path": "/test-redirect",
"methods": ["GET"],
"backend": {
"type": "STOCK_RESPONSE_BACKEND",
"status": 302,
"headers": [{
"name": "Location",
"value": "http://www.example.com"
}]
}
}
]
}
3. Save the JSON file containing the API deployment specification.
4. Use the API deployment specification when you create or update an API deployment in the following ways:
• by specifying the JSON file in the Console when you select the Upload an existing API option
• by specifying the JSON file in a request to the API Gateway REST API
For more information, see Deploying an API on an API Gateway by Creating an API Deployment on page 367
and Updating API Gateways and API Deployments on page 391.
5. (Optional) Confirm the API has been deployed successfully by calling it (see Calling an API Deployed on an API
Gateway on page 388).
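For example, you could exercise the /test-redirect route defined above with curl, using -i to print the response headers so that the stock 302 status code and Location header are visible. The gateway hostname and deployment path prefix are placeholders for your own values:

# <gateway-hostname> and <deployment-path-prefix> are placeholders for your own values.
curl -i -k -X GET https://<gateway-hostname>/<deployment-path-prefix>/test-redirect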

Adding Request Policies and Response Policies to API Deployment Specifications


You can control the behavior of an API deployment you create on an API gateway by adding request and response
policies to the API deployment specification:
• a request policy describes actions to be performed on an incoming request from an API client before it is sent to a
back end
• a response policy describes actions to be performed on a response returned from a back end before it is sent to an
API client
You can use request policies and/or response policies to:
• limit the number of requests sent to back-end services
• enable CORS (Cross-Origin Resource Sharing) support
• provide authentication and authorization
• modify incoming requests and outgoing responses
You can add policies to an API deployment specification that apply globally to all routes in the API deployment
specification, as well as policies that apply only to particular routes.


Note that API Gateway request policies and response policies are different to IAM policies, which control access to
Oracle Cloud Infrastructure resources.
You can add request and response policies to an API deployment specification by:
• using the Console
• editing a JSON file

Using the Console to Add Request Policies and Response Policies


To add request policies and response policies to an API deployment specification using the Console:
1. Create or update an API deployment using the Console, select the From Scratch option, and enter details on the
Basic Information page.
For more information, see Deploying an API on an API Gateway by Creating an API Deployment on page 367
and Updating API Gateways and API Deployments on page 391.
2. In the API Request Policies section of the Basic Information page, specify request policies that apply globally to
all routes in the API deployment specification:
• Authentication: A policy to control access to APIs you deploy to API gateways based on the end user
sending a request, and define what it is that they are allowed to do. Having specified a global authentication
policy first, you can then specify authorization policies that apply to individual routes in the API deployment
specification. See Adding Authentication and Authorization to API Deployments on page 435.
• CORS: A policy to enable CORS support in the APIs you deploy to API gateways. You can also specify
CORS policies that apply to individual routes in the API deployment specification (you don't need to have
entered a global CORS policy first). See Adding CORS support to API Deployments on page 428.
• Rate Limiting: A policy to limit the rate at which API clients can make requests to back-end services.
You can only apply a rate-limiting policy globally to all routes in the API deployment specification (not to
individual routes). See Limiting the Number of Requests to API Gateway Back Ends on page 425.
3. Click Next to enter details for individual routes in the API deployment on the Routes page.
4. To specify request policies that apply to an individual route, click Show Route Request Policies and specify:
• Authorization: A policy to specify the operations an end user is allowed to perform, based on the end user's
access scopes. Note that you must have already specified a global authentication policy before you can
specify an authorization policy on an individual route. See Adding Authentication and Authorization to API
Deployments on page 435.
• CORS: A policy to enable CORS support for individual routes in the API deployment specification (you don't
need to have entered a global CORS policy first). See Adding CORS support to API Deployments on page
428.
• Header Transformations: A policy to add, remove, and modify headers in requests. See Using the Console to
Add Header Transformation Request Policies on page 457.
• Query Transformations: A policy to add, remove, and modify query parameters in requests. See Using the
Console to Add Query Parameter Transformation Request Policies on page 463.
5. To specify response policies that apply to an individual route, click Show Route Response Policies and specify:
• Header Transformations: A policy to add, remove, and modify headers in responses. See Using the Console
to Add Header Transformation Response Policies on page 469.
6. Click Next to review the details you entered for the API deployment.
7. Click Create or Save Changes to create or update the API deployment.
8. (Optional) Confirm the API has been deployed successfully by calling it (see Calling an API Deployed on an API
Gateway on page 388).

Editing a JSON File to Add Request Policies and Response Policies


To add request policies and response policies to an API deployment specification in a JSON file:


1. Using your preferred JSON editor, edit the existing API deployment specification to which you want to add a
request policy or response policy, or create a new API deployment specification (see Creating an API Deployment
Specification on page 365).
At a minimum, the API deployment specification will include a routes section containing:
• A path. For example, /hello
• One or more methods. For example, GET
• A definition of a back end. For example, a URL, or the OCID of a function in Oracle Functions.
For example, the following basic API deployment specification defines a simple Hello World serverless function
in Oracle Functions as a single back end:

{
"routes": [
{
"path": "/hello",
"methods": ["GET"],
"backend": {
"type": "ORACLE_FUNCTIONS_BACKEND",
"functionId": "ocid1.fnfunc.oc1.phx.aaaaaaaaab______xmq"
}
}
]
}
2. To add a request policy that applies globally to all routes in the API deployment specification:
a. Insert a requestPolicies section before the routes section. For example:

{
"requestPolicies": {},
"routes": [
{
"path": "/hello",
"methods": ["GET"],
"backend": {
"type": "ORACLE_FUNCTIONS_BACKEND",
"functionId": "ocid1.fnfunc.oc1.phx.aaaaaaaaab______xmq"
}
}
]
}
b. Include a request policy in the requestPolicies section.
For example, to limit the number of requests sent to all routes in an API deployment specification, you'd
include the rateLimiting policy in the requestPolicies section as follows:

{
"requestPolicies": {
"rateLimiting": {
"rateKey": "CLIENT_IP",
"rateInRequestsPerSecond": 10
}
},
"routes": [
{
"path": "/hello",
"methods": ["GET"],
"backend": {
"type": "ORACLE_FUNCTIONS_BACKEND",
"functionId": "ocid1.fnfunc.oc1.phx.aaaaaaaaab______xmq"
}
}
]
}

For more information about the rateLimiting request policy, see Limiting the Number of Requests to API
Gateway Back Ends on page 425.
3. To add a request policy that applies to an individual route in the API deployment specification:
a. Insert a requestPolicies section after the route's backend section. For example:

{
"routes": [
{
"path": "/hello",
"methods": ["GET"],
"backend": {
"type": "ORACLE_FUNCTIONS_BACKEND",
"functionId": "ocid1.fnfunc.oc1.phx.aaaaaaaaab______xmq"
},
"requestPolicies": {}
}
]
}
b. Include a request policy in the requestPolicies section.
For example, to enable CORS support in an API deployment for a particular route, you'd include the cors
policy in the requestPolicies section as follows:

{
"routes": [
{
"path": "/hello",
"methods": ["GET"],
"backend": {
"type": "ORACLE_FUNCTIONS_BACKEND",
"functionId": "ocid1.fnfunc.oc1.phx.aaaaaaaaab______xmq"
},
"requestPolicies": {
"cors":{
"allowedOrigins": ["*", "https://oracle.com"],
"allowedMethods": ["*", "GET"],
"allowedHeaders": [],
"exposedHeaders": [],
"isAllowCredentialsEnabled": false,
"maxAgeInSeconds": 3000
}
}
}
]
}

For more information about the cors request policy, see Adding CORS support to API Deployments on page
428.
4. To add a response policy that applies to an individual route in the API deployment specification:
a. Insert a responsePolicies section after the route's backend section. For example:

{
"routes": [
{
"path": "/hello",
"methods": ["GET"],
"backend": {
"type": "ORACLE_FUNCTIONS_BACKEND",
"functionId": "ocid1.fnfunc.oc1.phx.aaaaaaaaab______xmq"
},
"responsePolicies": {}
}
]
}
b. Include a response policy in the responsePolicies section.
For example, to rename any X-Username header to X-User-ID in the response from a particular route,
you'd include the headerTransformations policy in the responsePolicies section as follows:

{
"routes": [
{
"path": "/hello",
"methods": ["GET"],
"backend": {
"type": "ORACLE_FUNCTIONS_BACKEND",
"functionId": "ocid1.fnfunc.oc1.phx.aaaaaaaaab______xmq"
},
"responsePolicies": {
"headerTransformations": {
"renameHeaders": {
"items": [
{
"from": "X-Username",
"to": "X-User-ID"
}
]
}
}
}
}
]
}

For more information about the headerTransformations response policy, see Editing a JSON File to
Add Header Transformation Response Policies on page 470.
5. Save the JSON file containing the API deployment specification.
6. Use the API deployment specification when you create or update an API deployment in the following ways:
• by specifying the JSON file in the Console when you select the Upload an existing API option
• by specifying the JSON file in a request to the API Gateway REST API
For more information, see Deploying an API on an API Gateway by Creating an API Deployment on page 367.

Limiting the Number of Requests to API Gateway Back Ends


Having created an API gateway and deployed one or more APIs on it, you'll typically want to limit the rate at which
API clients can make requests to back-end services. For example, to:
• maintain high availability and fair use of resources by protecting back ends from being overwhelmed by too many
requests
• prevent denial-of-service attacks
• constrain costs of resource consumption
• restrict usage of APIs by your customers' users in order to monetize APIs
You apply a rate limit globally to all routes in an API deployment specification.


If a request is denied because the rate limit has been exceeded, the response header specifies when the request can be
retried.
You use a request policy to limit the number of requests (see Adding Request Policies and Response Policies to API
Deployment Specifications on page 421).
You can add a rate-limiting request policy to an API deployment specification by:
• using the Console
• editing a JSON file

Using the Console to Add Rate-Limiting Request Policies


To add a rate-limiting request policy to an API deployment specification using the Console:
1. Create or update an API deployment using the Console, select the From Scratch option, and enter details on the
Basic Information page.
For more information, see Deploying an API on an API Gateway by Creating an API Deployment on page 367
and Updating API Gateways and API Deployments on page 391.
2. In the API Request Policies section of the Basic Information page, click the Add button beside Rate Limiting
and specify:
• Number of Requests per Second: The maximum number of requests per second to send to any one route.
• Type of Rate Limit: How the maximum number of requests per second threshold is applied. You can specify
that the maximum applies either to the number of requests sent from any one API client (identified by its IP
address), or to the total number of requests sent from all API clients.
3. Click Save Changes, and then click Next to enter details for individual routes in the API deployment on the
Routes page. Note that you cannot apply rate-limiting policies to individual routes in the API deployment
specification.
4. Click Next to review the details you entered for the API deployment.
5. Click Create or Save Changes to create or update the API deployment.
6. (Optional) Confirm the API has been deployed successfully by calling it (see Calling an API Deployed on an API
Gateway on page 388).

Editing a JSON File to Add Rate-Limiting Request Policies


To add a rate-limiting request policy to an API deployment specification in a JSON file:
1. Using your preferred JSON editor, edit the existing API deployment specification to which you want to add a
request limit, or create a new API deployment specification (see Creating an API Deployment Specification on
page 365).
For example, the following basic API deployment specification defines a simple Hello World serverless function
in Oracle Functions as a single back end:

{
"routes": [
{
"path": "/hello",
"methods": ["GET"],
"backend": {
"type": "ORACLE_FUNCTIONS_BACKEND",
"functionId": "ocid1.fnfunc.oc1.phx.aaaaaaaaab______xmq"
}
}
]
}
2. Insert a requestPolicies section before the routes section, if one doesn't exist already. For example:

"requestPolicies": {},
"routes": [
{
"path": "/hello",
"methods": ["GET"],
"backend": {
"type": "ORACLE_FUNCTIONS_BACKEND",
"functionId": "ocid1.fnfunc.oc1.phx.aaaaaaaaab______xmq"
}
}
]
}
3. Add the following rateLimiting policy to the new requestPolicies section to apply to all routes
defined in the specification:

{
"requestPolicies": {
"rateLimiting": {
"rateKey": "<ratekey-value>",
"rateInRequestsPerSecond": <requests-per-second>
}
},
"routes": [
{
"path": "/hello",
"methods": ["GET"],
"backend": {
"type": "ORACLE_FUNCTIONS_BACKEND",
"functionId": "ocid1.fnfunc.oc1.phx.aaaaaaaaab______xmq"
}
}
]
}

where:
• <ratekey-value> specifies whether the maximum number of requests threshold applies to the number of
requests from individual API clients (each identified by their IP address) or to the total number of requests sent
to the back-end service. Valid values are CLIENT_IP and TOTAL.
• <requests-per-second> is the maximum number of requests per second to send to any route.
For example:

{
"requestPolicies": {
"rateLimiting": {
"rateKey": "CLIENT_IP",
"rateInRequestsPerSecond": 10
}
},
"routes": [
{
"path": "/hello",
"methods": ["GET"],
"backend": {
"type": "ORACLE_FUNCTIONS_BACKEND",
"functionId": "ocid1.fnfunc.oc1.phx.aaaaaaaaab______xmq"
}
}
]
}


4. Save the JSON file containing the API deployment specification.


5. Use the API deployment specification when you create or update an API deployment in the following ways:
• by specifying the JSON file in the Console when you select the Upload an existing API option
• by specifying the JSON file in a request to the API Gateway REST API
For more information, see Deploying an API on an API Gateway by Creating an API Deployment on page 367
and Updating API Gateways and API Deployments on page 391.
6. (Optional) Confirm the API has been deployed successfully by calling it (see Calling an API Deployed on an API
Gateway on page 388).
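For example, one informal way to watch the policy take effect is to send a burst of requests and print only the returned status codes. Once the 10 requests per second threshold is exceeded, the gateway rejects further requests (typically with an HTTP 429 response whose headers indicate when the request can be retried, though you should confirm the exact status code and headers returned by your own deployment). The hostname and path prefix are placeholders:

# <gateway-hostname> and <deployment-path-prefix> are placeholders for your own values.
for i in $(seq 1 30); do
  curl -k -s -o /dev/null -w "%{http_code}\n" https://<gateway-hostname>/<deployment-path-prefix>/hello
done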

Adding CORS support to API Deployments


Web browsers typically implement a "same-origin policy" to prevent code from making requests against a different
origin from the one from which the code was served. The intention of the same-origin policy is to provide protection
from malicious web sites. However, the same-origin policy can also prevent legitimate interactions between a server
and clients of a known and trusted origin.
Cross-Origin Resource Sharing (CORS) is a cross-origin sharing standard to relax the same-origin policy by allowing
code on a web page to consume a REST API served from a different origin. The CORS standard uses additional
HTTP request headers and response headers to specify the origins that can be accessed.
The CORS standard also requires that for certain HTTP request methods, the request must be "pre-flighted". Before
sending the actual request, the web browser sends a pre-flight request to the server to determine whether the methods
in the actual request are supported. The server responds with the methods it will allow in an actual request. The web
browser only sends the actual request if the response from the server indicates that the methods in the actual request
are allowed. The CORS standard also enables servers to notify clients whether requests can include credentials
(cookies, authorization headers, or TLS client certificates).
For more information about CORS, see resources available online including those from W3C and Mozilla.
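For illustration only, the following minimal sketch (in Python, using the requests package) sends a pre-flight OPTIONS request of the kind a browser would send and prints the CORS response headers returned; the deployment URL and origin shown are placeholder assumptions, not values from this guide:

import requests

# Placeholder deployment URL; substitute the hostname and path of your own API deployment.
deployment_url = "https://example-gateway.example.com/v1/hello"

# A browser sends an OPTIONS pre-flight before certain cross-origin requests.
preflight = requests.options(
    deployment_url,
    headers={
        "Origin": "https://oracle.com",
        "Access-Control-Request-Method": "GET",
        "Access-Control-Request-Headers": "If-Match",
    },
)

# The gateway answers with the CORS headers configured in the request policy.
for header in (
    "Access-Control-Allow-Origin",
    "Access-Control-Allow-Methods",
    "Access-Control-Allow-Headers",
    "Access-Control-Max-Age",
    "Access-Control-Allow-Credentials",
):
    print(header, "=", preflight.headers.get(header))

The browser only sends the actual request if the pre-flight response allows the method and headers the request intends to use.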
Using the API Gateway service, you can enable CORS support in the APIs you deploy to API gateways. When you
enable CORS support in an API deployment, HTTP pre-flight requests and actual requests to the API deployment
return one or more CORS response headers to the API client. You set the CORS response header values in the API
deployment specification.
You use request policies to add CORS support to APIs (see Adding Request Policies and Response Policies to API
Deployment Specifications on page 421). You can apply a CORS request policy globally to all routes in an API
deployment specification, or just to particular routes.
You can add a CORS request policy to an API deployment specification by:
• using the Console
• editing a JSON file

Using the Console to Add CORS Request Policies


To add a CORS request policy to an API deployment specification using the Console:
1. Create or update an API deployment using the Console, select the From Scratch option, and enter details on the
Basic Information page.
For more information, see Deploying an API on an API Gateway by Creating an API Deployment on page 367
and Updating API Gateways and API Deployments on page 391.

2. In the API Request Policies section of the Basic Information page, click the Add button beside CORS and
specify:
• Allowed Origins: An origin that is allowed to access the API deployment. For example, https://
oracle.com. Click + Another Origin to enter second and subsequent origins.
• Allowed Methods: One or more methods that are allowed in the actual request to the API deployment. For
example, GET, PUT.
• Allowed Headers: Optionally, an HTTP header that is allowed in the actual request to the API deployment.
For example, opc-request-id or If-Match. Click + Another Header to enter second and subsequent
headers.
• Exposed Headers: Optionally, an HTTP header that API clients can access in the API deployment's response
to an actual request. For example, ETag or opc-request-id. Click + Another Header to enter second
and subsequent headers.
• Max age in seconds: Optionally, an integer value indicating how long (in delta-seconds) the results of a
preflight request can be cached by a browser. If you don't specify a value, the default is 0.
• Enable Allow Credentials: Whether the actual request to the API deployment can be made using credentials
(cookies, authorization headers, or TLS client certificates). By default, this option is not selected.
To find out how the different fields in the CORS request policy map onto different CORS response headers, see
How a CORS Request Policy Maps to a CORS Response on page 433.
3. Click Save Changes, and then click Next to enter details for individual routes in the API deployment on the
Routes page. To specify CORS request policies that apply to an individual route, click Show Route Request
Policies, click the Add button beside CORS, and specify:
• Allowed Origins: An origin that is allowed to access the route. For example, https://oracle.com.
Click + Another Origin to enter second and subsequent origins.
• Allowed Methods: One or more methods that are allowed in the actual request to the route. For example,
GET, PUT.
• Allowed Headers: Optionally, an HTTP header that is allowed in the actual request to the route. For example,
opc-request-id or If-Match. Click + Another Header to enter second and subsequent headers.
• Exposed Headers: Optionally, an HTTP header that API clients can access in the API deployment's response
to an actual request. For example, ETag or opc-request-id. Click + Another Header to enter second
and subsequent headers.
• Max age in seconds: Optionally, an integer value indicating how long (in delta-seconds) the results of a
preflight request can be cached by a browser. If you don't specify a value, the default is 0.
• Enable Allow Credentials: Whether the actual request to the route can be made using credentials (cookies,
authorization headers, or TLS client certificates). By default, this option is not selected.
To find out how the different fields in the CORS request policy map onto different CORS response headers, see
How a CORS Request Policy Maps to a CORS Response on page 433.
4. Click Save Changes, and then click Next to review the details you entered for the API deployment and for
individual routes.
5. Click Create or Save Changes to create or update the API deployment.
6. (Optional) Confirm the API has been deployed successfully by calling it (see Calling an API Deployed on an API
Gateway on page 388).

Editing a JSON File to Add CORS Request Policies


To add a CORS request policy to an API deployment specification in a JSON file:
1. Using your preferred JSON editor, edit the existing API deployment specification to which you want to add
CORS support, or create a new API deployment specification (see Creating an API Deployment Specification on
page 365).
For example, the following basic API deployment specification defines a simple Hello World serverless function
in Oracle Functions as a single back end:

{
"routes": [
{
"path": "/hello",
"methods": ["GET"],
"backend": {
"type": "ORACLE_FUNCTIONS_BACKEND",
"functionId": "ocid1.fnfunc.oc1.phx.aaaaaaaaab______xmq"
}
}
]
}
2. To specify a CORS request policy that applies globally to all the routes in an API deployment:
a. Insert a requestPolicies section before the routes section, if one doesn't exist already. For example:

{
"requestPolicies": {},
"routes": [
{
"path": "/hello",
"methods": ["GET"],
"backend": {
"type": "ORACLE_FUNCTIONS_BACKEND",
"functionId": "ocid1.fnfunc.oc1.phx.aaaaaaaaab______xmq"
}
}
]
}
b. Add the following cors policy to the new requestPolicies section to apply globally to all the routes in
an API deployment:

{
"requestPolicies": {
"cors":{
"allowedOrigins": [<list-of-origins>],
"allowedMethods": [<list-of-methods>],
"allowedHeaders": [<list-of-implicit-headers>],
"exposedHeaders": [<list-of-exposed-headers>],
"isAllowCredentialsEnabled": <true|false>,
"maxAgeInSeconds": <seconds>
}
},
"routes": [
{
"path": "/hello",
"methods": ["GET"],
"backend": {
"type": "ORACLE_FUNCTIONS_BACKEND",
"functionId": "ocid1.fnfunc.oc1.phx.aaaaaaaaab______xmq"
}
}
]
}
where:
• "allowedOrigins": [<list-of-origins>] is a required comma-separated list of origins
that are allowed to access the API deployment. For example, "allowedOrigins": ["*",
"https://oracle.com"]
• "allowedMethods": [<list-of-methods>] is an optional comma-separated list
of HTTP methods that are allowed in the actual request to the API deployment. For example,
"allowedMethods": ["*", "GET"]
• "allowedHeaders": [<list-of-implicit-headers>] is an optional comma-separated
list of HTTP headers that are allowed in the actual request to the API deployment. For example,
"allowedHeaders": ["opc-request-id", "If-Match"]
• "exposedHeaders": [<list-of-exposed-headers>] is an optional comma-separated list
of HTTP headers that API clients can access in the API deployment's response to an actual request. For
example, "exposedHeaders": ["ETag", "opc-request-id"]
• "isAllowCredentialsEnabled": <true|false> is either true or false, indicating whether
the actual request to the API deployment can be made using credentials (cookies, authorization headers, or
TLS client certificates). If not specified, the default is false.
• "maxAgeInSeconds": <seconds> is an integer value, indicating how long (in delta-seconds) the
results of a preflight request can be cached by a browser. If not specified, the default is 0.
For example:

{
"requestPolicies": {
"cors":{
"allowedOrigins": ["*", "https://oracle.com"],
"allowedMethods": ["*", "GET"],
"allowedHeaders": [],
"exposedHeaders": [],
"isAllowCredentialsEnabled": false,
"maxAgeInSeconds": 3000
}
},
"routes": [
{
"path": "/hello",
"methods": ["GET"],
"backend": {
"type": "ORACLE_FUNCTIONS_BACKEND",
"functionId": "ocid1.fnfunc.oc1.phx.aaaaaaaaab______xmq"
}
}
]
}

To find out how the different fields in the CORS request policy map onto different CORS response headers,
see How a CORS Request Policy Maps to a CORS Response on page 433.
3. To specify a CORS request policy that applies to an individual route:
a. Insert a requestPolicies section after the backend section for the route to which you want the policy to
apply. For example:

{
"routes": [
{
"path": "/hello",
"methods": ["GET"],
"backend": {

"type": "ORACLE_FUNCTIONS_BACKEND",
"functionId": "ocid1.fnfunc.oc1.phx.aaaaaaaaab______xmq"
},
"requestPolicies": {}
}
]
}
b. Add the following cors policy to the new requestPolicies section to apply to just this particular route:

{
"routes": [
{
"path": "/hello",
"methods": ["GET"],
"backend": {
"type": "ORACLE_FUNCTIONS_BACKEND",
"functionId": "ocid1.fnfunc.oc1.phx.aaaaaaaaab______xmq"
},
"requestPolicies": {
"cors":{
"allowedOrigins": [<list-of-origins>],
"allowedMethods": [<list-of-methods>],
"allowedHeaders": [<list-of-implicit-headers>],
"exposedHeaders": [<list-of-exposed-headers>],
"isAllowCredentialsEnabled": <true|false>,
"maxAgeInSeconds": <seconds>
}
}
}
]
}

where:
• "allowedOrigins": [<list-of-origins>] is a required comma-separated list of origins
that are allowed to access the API deployment. For example, "allowedOrigins": ["*",
"https://oracle.com"]
• "allowedMethods": [<list-of-methods>] is an optional comma-separated list
of HTTP methods that are allowed in the actual request to the API deployment. For example,
"allowedMethods": ["*", "GET"]
• "allowedHeaders": [<list-of-implicit-headers>] is an optional comma-separated
list of HTTP headers that are allowed in the actual request to the API deployment. For example,
"allowedHeaders": ["opc-request-id", "If-Match"]
• "exposedHeaders": [<list-of-exposed-headers>] is an optional comma-separated list
of HTTP headers that API clients can access in the API deployment's response to an actual request. For
example, "exposedHeaders": ["ETag", "opc-request-id"]
• "isAllowCredentialsEnabled": <true|false> is either true or false, indicating whether
the actual request to the API deployment can be made using credentials (cookies, authorization headers, or
TLS client certificates). If not specified, the default is false.
• "maxAgeInSeconds": <seconds> is an integer value, indicating how long (in delta-seconds) the
results of a preflight request can be cached by a browser. If not specified, the default is 0.
For example:

{
"routes": [
{
"path": "/hello",
"methods": ["GET"],
"backend": {

"type": "ORACLE_FUNCTIONS_BACKEND",
"functionId": "ocid1.fnfunc.oc1.phx.aaaaaaaaab______xmq"
},
"requestPolicies": {
"cors":{
"allowedOrigins": ["*", "https://oracle.com"],
"allowedMethods": ["*", "GET"],
"allowedHeaders": [],
"exposedHeaders": [],
"isAllowCredentialsEnabled": false,
"maxAgeInSeconds": 3000
}
}
}
]
}

To find out how the different fields in the CORS request policy map onto different CORS response headers,
see How a CORS Request Policy Maps to a CORS Response on page 433.
4. Save the JSON file containing the API deployment specification.
5. Use the API deployment specification when you create or update an API deployment in the following ways:
• by specifying the JSON file in the Console when you select the Upload an existing API option
• by specifying the JSON file in a request to the API Gateway REST API
For more information, see Deploying an API on an API Gateway by Creating an API Deployment on page 367
and Updating API Gateways and API Deployments on page 391.
6. (Optional) Confirm the API has been deployed successfully by calling it (see Calling an API Deployed on an API
Gateway on page 388).

How a CORS Request Policy Maps to a CORS Response


The different fields in a CORS request policy map onto different CORS response headers as follows:

allowedOrigins
• Maps to CORS response header: Access-Control-Allow-Origin
• Included in response to pre-flight request: Yes. Included in response to actual request: Yes.
• Required?: Yes. Datatype: string[]. Default: n/a
• Used to return a comma-separated list of origins that are allowed to access the API deployment. Only one origin is
allowed by the CORS specification, so in the case of multiple origins the client origin needs to be dynamically checked
against the list of allowed values. Values "*" and "null" are allowed.

allowedMethods
• Maps to CORS response header: Access-Control-Allow-Methods
• Included in response to pre-flight request: Yes. Included in response to actual request: No.
• Required?: No. Datatype: string[]. Default: empty list
• Used to return a comma-separated list of HTTP methods that are allowed in the actual request to the API deployment.
The default behavior of Access-Control-Allow-Methods is to allow through all simple methods, even on preflight
requests.

allowedHeaders
• Maps to CORS response header: Access-Control-Allow-Headers
• Included in response to pre-flight request: Yes. Included in response to actual request: No.
• Required?: No. Datatype: string[]. Default: empty list
• Used to return a comma-separated list of HTTP headers that are allowed in the actual request to the API deployment.

exposedHeaders
• Maps to CORS response header: Access-Control-Expose-Headers
• Included in response to pre-flight request: No. Included in response to actual request: Yes.
• Required?: No. Datatype: string[]. Default: empty list
• Used to return a comma-separated list of HTTP headers that clients can access in the API deployment's response to
an actual request. This list of HTTP headers is in addition to the CORS-safelisted response headers.

isAllowCredentialsEnabled
• Maps to CORS response header: Access-Control-Allow-Credentials
• Included in response to pre-flight request: Yes. Included in response to actual request: Yes.
• Required?: No. Datatype: boolean. Default: false
• Used to return true or false, indicating whether the actual request to the API deployment can be made using
credentials (cookies, authorization headers, or TLS client certificates). To allow requests to be made with credentials,
set isAllowCredentialsEnabled to true.

maxAgeInSeconds
• Maps to CORS response header: Access-Control-Max-Age
• Included in response to pre-flight request: Yes. Included in response to actual request: No.
• Required?: No. Datatype: integer. Default: 0
• Used to indicate how long (in delta-seconds) the results of a preflight request can be cached by a browser. Ignored if
set to 0.

Adding Authentication and Authorization to API Deployments


You can control access to APIs you deploy to API gateways based on the end user sending a request, and define what
it is that they are allowed to do. For the APIs you deploy, you'll typically provide:
• Authentication functionality to determine the end user's identity. Is the end user really who they claim to be?
• Authorization functionality to determine appropriate access for an end user, and grant the necessary permissions.
What is the end user allowed to do?
You can add authentication and authorization functionality to API gateways to support:
• HTTP Basic Authentication
• API Key Authentication
• OAuth Authentication and Authorization
• Oracle Identity Cloud Service (IDCS) Authentication
You can add authentication and authorization functionality to an API gateway as follows:
• You can have the API gateway pass an access token included in a request to an authorizer function deployed on
Oracle Functions to perform validation (see Using Authorizer Functions to Add Authentication and Authorization
to API Deployments on page 435).
• You can have the API gateway itself validate a JSON Web Token (JWT) included in the request with an identity
provider (see Using JSON Web Tokens (JWTs) to Add Authentication and Authorization to API Deployments on
page 444).

Using Authorizer Functions to Add Authentication and Authorization to API Deployments

You can control access to APIs you deploy to API gateways using an 'authorizer function' (as described in this topic),
or using JWTs (as described in Using JSON Web Tokens (JWTs) to Add Authentication and Authorization to API
Deployments on page 444).
You can add authentication and authorization functionality to API gateways by writing an 'authorizer function' that:
• Processes request attributes to verify the identity of an end user with an identity provider.
• Determines the operations that the end user is allowed to perform.
• Returns the operations the end user is allowed to perform as a list of 'access scopes' (an 'access scope' is an
arbitrary string used to determine access).
• Optionally returns a key-value pair for use by the API deployment. For example, as a context variable for use in
an HTTP back end definition (see Adding Context Variables to Policies and HTTP Back End Definitions on page
382).
You then deploy the authorizer function to Oracle Functions. See Creating an Authorizer Function on page 436.
For a related Developer Tutorial containing an example authorizer function, see Functions: Validate an API Key with
API Gateway.
Having deployed the authorizer function, you enable authentication and authorization for an API deployment by
including two different kinds of request policy in the API deployment specification:
• An authentication request policy for the entire API deployment that specifies:
• The OCID of the authorizer function that you deployed to Oracle Functions that will perform authentication
and authorization.
• The request attributes to pass to the authorizer function.
• Whether unauthenticated end users can access routes in the API deployment.
• An authorization request policy for each route that specifies the operations an end user is allowed to perform,
based on the end user's access scopes as returned by the authorizer function.
You can add authentication and authorization request policies to an API deployment specification by:
• Using the Console.

• Editing a JSON file.


Tip:
To help troubleshoot issues with the authorizer function, consider adding an execution log to the API deployment,
with its log level set to Info (see Adding Logging to API Deployments on page 401). To see details in the log files
related to authentication and authorization, search for customAuth.
Prerequisites for Using Authorizer Functions
Before you can enable authentication and authorization for API deployments using authorizer functions:
• An identity provider (for example, Oracle Identity Cloud Service (IDCS), Auth0) must have already been
set up, containing access scopes for users allowed to access the API deployment. See the identity provider
documentation for more information (for example, the Oracle Identity Cloud Service (IDCS) documentation, the
Auth0 documentation).
• An authorizer function must have been deployed to Oracle Functions already, and an appropriate policy must give
API gateways access to Oracle Functions. For more information, see Creating an Authorizer Function on page
436.
If you use the Console to include an authentication request policy (rather than by editing a JSON file), you select the
authorizer function and the application that contains it from a list.
Note that to use the Console (rather than a JSON file) to define an authentication request policy and specify an
authorizer function, your user account must belong to a group that has been given access to the authorizer function by
an IAM policy (see Create a Policy to Give API Gateway Users Access to Functions on page 348).
Creating an Authorizer Function
To create an authorizer function:
1. Write code to implement authentication and authorization:
a. Write code in the authorizer function that accepts the following JSON input from API Gateway:

{
"type": "TOKEN",
"token": "<token-value>"
}

where:
• "type": "TOKEN" indicates that the value being passed to the authorizer function is an auth token.
• "token": "<token-value>" is the auth token being passed to the authorizer function.
For example:

{
"type": "TOKEN",
"token": "eyJ0eXAiOiJKV1QiLCJhbGciOiJSUzI1nHyDtTwR3SEJ3z489..."
}
b. Write code in the authorizer function that returns the following JSON to API Gateway as an HTTP 200
response when the access token has been successfully verified:

{
"active": true,
"principal": "<user-principal>",
"scope": ["<scopes>"],
"clientId": "<client-id>",

"expiresAt": "<date-time>",
"context": {
"<key>": "<value>", ...
}
}

where:
• "active": true indicates the access token originally passed to the authorizer function has been
successfully verified.
• "principal": "<user-principal>" is the user or application obtained by the authorizer function
from the identity provider.
• "scope": ["<scopes>"] is a comma-delimited list of strings that are the access scopes obtained by
the authorizer function from the identity provider.
• "clientId": "<client-id>" is optionally the requestor's host (for example, the hostname or the
client IP). Returning a clientId is not required.
• "expiresAt": "<date-time>" is a date-time string in ISO-8601 format indicating when the access
token originally passed to the authorizer function will expire. This value is used when determining how
long to cache results after calling the authorizer function.
• "context": {"<key>": "<value>", ... } is an optional comma-delimited list of key-value
pairs in JSON format to return to API Gateway. The authorizer function can return any key-value pair
for use by the API deployment (for example, the username or email address of the end user). For more
information about using the value in the key-value pair returned by an authorizer function as a context
variable in an HTTP back end definition, see Adding Context Variables to Policies and HTTP Back End
Definitions on page 382.
For example:

{
"active": true,
"principal": "https://example.com/users/jdoe",
"scope": ["list:hello", "read:hello", "create:hello", "update:hello",
"delete:hello", "someScope"],
"clientId": "host123",
"expiresAt": "2019-05-30T10:15:30+01:00",
"context": {
"email": "[email protected]"
}
}
c. Write code that returns the following JSON to API Gateway as an HTTP 5xx response if token verification is
unsuccessful, or in the event of an error in the authorizer function or in Oracle Functions:

{
"active": false,
"expiresAt": "<date-time>",
"context": {
"<key>": "<value>", ... "
},
"wwwAuthenticate": "<directive>"
}

where:
• "active": false indicates the access token originally passed to the authorizer function has not been
successfully verified.
• "expiresAt": "<date-time>" is a date-time string in ISO-8601 format indicating when the access
token originally passed to the authorizer function will expire.
• "context": {"<key>": "<value>", ... } is an optional comma-delimited list of key-value
pairs in JSON format to return to API Gateway. The authorizer function can return any key-value pair

(for example, the username or email address of the end user). Returning context key-value pairs is not
required.
• "wwwAuthenticate": "<directive>" is the value of the WWW-Authenticate header to be
returned by the authorizer function if verification fails, indicating the type of authentication that is required
(such as Basic or Bearer). API Gateway returns the WWW-Authenticate header in the response to the
API client, along with a 401 status code. For example, "wwwAuthenticate": "Bearer realm=
\"example.com\"". For more information, see RFC 2617 HTTP Authentication: Basic and Digest
Access Authentication.
For example:

{
"active": false,
"expiresAt": "2019-05-30T10:15:30+01:00",
"context": {
"email": "[email protected]"
},
"wwwAuthenticate": "Bearer realm=\"example.com\""
}

For a related Developer Tutorial containing an example authorizer function, see Functions: Validate an API Key
with API Gateway.
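As a rough illustration of the request and response shapes described in the sub-steps above, the following is a minimal Python sketch of an authorizer function, assuming the Fn Project Python FDK; the validate_with_identity_provider helper is hypothetical and stands in for whatever call you make to your identity provider:

import io
import json

from fdk import response


def validate_with_identity_provider(token):
    # Hypothetical helper: verify the token with your identity provider and
    # return (principal, scopes, iso8601_expiry), or None if verification fails.
    raise NotImplementedError


def handler(ctx, data: io.BytesIO = None):
    # API Gateway passes {"type": "TOKEN", "token": "<token-value>"}.
    request = json.loads(data.getvalue())
    token = request.get("token", "")

    try:
        result = validate_with_identity_provider(token)
    except Exception:
        result = None

    if result is not None:
        principal, scopes, expires_at = result
        body = {
            "active": True,
            "principal": principal,
            "scope": scopes,
            "expiresAt": expires_at,
        }
    else:
        # Shape of the failure response described above; HTTP status handling
        # is omitted from this sketch.
        body = {
            "active": False,
            "wwwAuthenticate": "Bearer realm=\"example.com\"",
        }

    return response.Response(
        ctx,
        response_data=json.dumps(body),
        headers={"Content-Type": "application/json"},
    )
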
2. Build a Docker image from the code, push the Docker image to a Docker registry, and create a new function in
Oracle Functions based on the image. You can do this in different ways:
• You can use the Fn Project CLI command fn deploy to build a new Docker image, push the image to
the Docker registry, and create a new function in Oracle Functions based on the image. See Creating and
Deploying Functions on page 2071.
• You can use Docker commands to build the image and push it to the Docker registry, and then use the Fn
Project CLI command fn create function (or the CreateFunction API operation) to create a new
function in Oracle Functions based on the image. See Creating Functions from Existing Docker Images on
page 2074.
3. Make a note of the OCID of the function you create in Oracle Functions. For example,
ocid1.fnfunc.oc1.phx.aaaaaaaaac2______kg6fq
4. If one doesn't exist already, create an Oracle Cloud Infrastructure policy and specify a policy statement to give
API gateways access to function-related resources. The policy enables API deployments on those API gateways
to invoke the authorizer function. For more information, see Create a Policy to Give API Gateways Access to
Functions on page 349
Using the Console to Add Authentication and Authorization Request Policies
To add authentication and authorization request policies to an API deployment specification using the Console:
1. Create or update an API deployment using the Console, select the From Scratch option, and enter details on the
Basic Information page.
For more information, see Deploying an API on an API Gateway by Creating an API Deployment on page 367
and Updating API Gateways and API Deployments on page 391.

2. In the API Request Policies section of the Basic Information page, click the Add button beside Authentication
and specify:
• Authentication Type: Select Custom.
• Application in <compartment-name>: The name of the application in Oracle Functions that contains the
authorizer function. You can select an application from a different compartment.
• Function Name: The name of the authorizer function in Oracle Functions.
• Authentication Token: Whether the access token is contained in a request header or a query parameter.
• Authentication Token Value: Depending on whether the access token is contained in a request header or a
query parameter, specify:
• Header Name: If the access token is contained in a request header, enter the name of the header.
• Parameter Name: If the access token is contained in a query parameter, enter the name of the query
parameter.
• Enable Anonymous Access: Whether unauthenticated (that is, anonymous) end users can access routes in
the API deployment. By default, this option is not selected. If you never want anonymous users to be able to
access routes, don't select this option. Note that if you do select this option, you also have to explicitly specify
every route to which anonymous access is allowed by selecting Anonymous as the Authorization Type in
each route's authorization policy.
3. Click Save Changes, and then click Next to enter details for individual routes in the API deployment on the
Routes page. To specify an authorization policy that applies to an individual route, click Show Route Request
Policies, click the Add button beside Authorization, and specify:
• Authorization Type: How to grant access to the route. Specify:
• Any: Only grant access to end users that have been successfully authenticated, provided the authorizer
function has also returned one of the access scopes you specify in the Allowed Scope field. In this case, the
authentication policy's Enable Anonymous Access option has no effect.
• Anonymous: Grant access to all end users, even if they have not been successfully authenticated by the
authorizer function. In this case, you must have selected the authentication policy's Enable Anonymous
Access option.
• Authentication only: Only grant access to end users that have been successfully authenticated by the
authorizer function. In this case, the authentication policy's Enable Anonymous Access option has no
effect.
• Allowed Scope: If you selected Any as the Authorization Type, enter a comma-delimited list of one or more
strings that correspond to access scopes returned by the authorizer function. Access will only be granted to end
users that have been successfully authenticated if the authorizer function returns one of the access scopes you
specify. For example, read:hello
Note:
If you don't include an authorization policy for a particular route, access is granted as if such a policy does exist
and Authorization Type is set to Authentication only. In other words, regardless of the setting of the authentication
policy's Enable Anonymous Access option:
• only authenticated end users can access the route
• all authenticated end users can access the route regardless of access
scopes returned by the authorizer function
• anonymous end users cannot access the route
4. Click Save Changes, and then click Next to review the details you entered for the API deployment.
5. Click Create or Save Changes to create or update the API deployment.
6. (Optional) Confirm the API has been deployed successfully by calling it (see Calling an API Deployed on an API
Gateway on page 388).
Editing a JSON File to Add Authentication and Authorization Request Policies
To add authentication and authorization request policies to an API deployment specification in a JSON file:

1. Using your preferred JSON editor, edit the existing API deployment specification to which you want to add
authentication and authorization functionality, or create a new API deployment specification (see Creating an API
Deployment Specification on page 365).
At a minimum, the API deployment specification will include a routes section containing:
• A path. For example, /hello
• One or more methods. For example, GET
• A definition of a back end. For example, a URL, or the OCID of a function in Oracle Functions.
For example, the following basic API deployment specification defines a simple Hello World serverless function
in Oracle Functions as a single back end:

{
"routes": [
{
"path": "/hello",
"methods": ["GET"],
"backend": {
"type": "ORACLE_FUNCTIONS_BACKEND",
"functionId": "ocid1.fnfunc.oc1.phx.aaaaaaaaab______xmq"
}
}
]
}
2. Add an authentication request policy that applies to all routes in the API deployment specification:
a. Insert a requestPolicies section before the routes section, if one doesn't exist already. For example:

{
"requestPolicies": {},
"routes": [
{
"path": "/hello",
"methods": ["GET"],
"backend": {
"type": "ORACLE_FUNCTIONS_BACKEND",
"functionId": "ocid1.fnfunc.oc1.phx.aaaaaaaaab______xmq"
}
}
]
}
b. Add the following authentication policy to the new requestPolicies section.

{
"requestPolicies": {
"authentication": {
"type": "<type-value>",
"isAnonymousAccessAllowed": <true|false>,
"functionId": "<function-ocid>",
<"tokenHeader"|"tokenQueryParam">: <"<token-header-name>"|"<token-
query-param-name>">
}
},
"routes": [
{
"path": "/hello",
"methods": ["GET"],
"backend": {
"type": "ORACLE_FUNCTIONS_BACKEND",
"functionId": "ocid1.fnfunc.oc1.phx.aaaaaaaaab______xmq"

}
}
]
}

where:
• <type-value> is the authentication type. To use an authorizer function for authentication, specify
CUSTOM_AUTHENTICATION.
• "isAnonymousAccessAllowed": <true|false> optionally indicates whether unauthenticated
(that is, anonymous) end users can access routes in the API deployment specification. If you never want
anonymous end users to be able to access routes, set this property to false. If you don't include this
property in the authentication policy, the default of false is used. Note that if you do include this
property and set it to true, you also have to explicitly specify every route to which anonymous access is
allowed by setting the type property to "ANONYMOUS" in each route's authorization policy.
• <function-ocid> is the OCID of the authorizer function deployed to Oracle Functions.
• <"tokenHeader"|"tokenQueryParam">: <"<token-header-name>"|"<token-
query-param-name>"> indicates whether it is a request header that contains the access token (and if
so, the name of the header), or a query parameter that contains the access token (and if so, the name of the
query parameter). Note that you can specify either "tokenHeader": "<token-header-name>" or
"tokenQueryParam": "<token-query-param-name>">, but not both.
For example, the following authentication policy specifies an OCI function that will validate the access
token in the Authorization request header:

{
"requestPolicies": {
"authentication": {
"type": "CUSTOM_AUTHENTICATION",
"isAnonymousAccessAllowed": false,
"functionId": "ocid1.fnfunc.oc1.phx.aaaaaaaaac2______kg6fq",
"tokenHeader": "Authorization"
}
},
"routes": [
{
"path": "/hello",
"methods": ["GET"],
"backend": {
"type": "ORACLE_FUNCTIONS_BACKEND",
"functionId": "ocid1.fnfunc.oc1.phx.aaaaaaaaab______xmq"
}
}
]
}
3. Add an authorization request policy for each route in the API deployment specification:
a. Insert a requestPolicies section after the first route's backend section, if one doesn't exist already. For
example:

{
"requestPolicies": {
"authentication": {
"type": "CUSTOM_AUTHENTICATION",
"isAnonymousAccessAllowed": false,
"functionId": "ocid1.fnfunc.oc1.phx.aaaaaaaaac2______kg6fq",
"tokenHeader": "Authorization"
}
},
"routes": [

{
"path": "/hello",
"methods": ["GET"],
"backend": {
"type": "ORACLE_FUNCTIONS_BACKEND",
"functionId": "ocid1.fnfunc.oc1.phx.aaaaaaaaab______xmq"
},
"requestPolicies": {}
}
]
}
b. Add the following authorization policy to the requestPolicies section:

{
"requestPolicies": {
"authentication": {
"type": "CUSTOM_AUTHENTICATION",
"isAnonymousAccessAllowed": false,
"functionId": "ocid1.fnfunc.oc1.phx.aaaaaaaaac2______kg6fq",
"tokenHeader": "Authorization"
}
},
"routes": [
{
"path": "/hello",
"methods": ["GET"],
"backend": {
"type": "ORACLE_FUNCTIONS_BACKEND",
"functionId": "ocid1.fnfunc.oc1.phx.aaaaaaaaab______xmq"
},
"requestPolicies": {
"authorization": {
"type": <"AUTHENTICATION_ONLY"|"ANY_OF"|"ANONYMOUS">,
"allowedScope": [ "<scope>" ]
}
}
}
]
}

where:
• "type": <"AUTHENTICATION_ONLY"|"ANY_OF"|"ANONYMOUS"> indicates how to grant
access to the route:
• "AUTHENTICATION_ONLY": Only grant access to end users that have been successfully
authenticated. In this case, the "isAnonymousAccessAllowed" property in the API deployment
specification's authentication policy has no effect.
• "ANY_OF": Only grant access to end users that have been successfully authenticated, provided the
authorizer function has also returned one of the access scopes you specify in the allowedScope
property. In this case, the "isAnonymousAccessAllowed" property in the API deployment
specification's authentication policy has no effect.
• "ANONYMOUS": Grant access to all end users, even if they have not been successfully authenticated.
In this case, you must explicitly set the "isAnonymousAccessAllowed" property to true in the
API deployment specification's authentication policy.
• "allowedScope": [ "<scope>" ] is a comma-delimited list of one or more strings that
correspond to access scopes returned by the authorizer function. In this case, you must set the type
property to "ANY_OF" (the "allowedScope" property is ignored if the type property is set to

"AUTHENTICATION_ONLY" or "ANONYMOUS"). Also note that if you specify more than one scope,
access to the route is granted if any of the scopes you specify is returned by the authorizer function.
For example, the following request policy defines a /hello route that only allows authenticated end users
with the read:hello scope to access it:

{
"requestPolicies": {
"authentication": {
"type": "CUSTOM_AUTHENTICATION",
"isAnonymousAccessAllowed": false,
"functionId": "ocid1.fnfunc.oc1.phx.aaaaaaaaac2______kg6fq",
"tokenHeader": "Authorization"
}
},
"routes": [
{
"path": "/hello",
"methods": ["GET"],
"backend": {
"type": "ORACLE_FUNCTIONS_BACKEND",
"functionId": "ocid1.fnfunc.oc1.phx.aaaaaaaaab______xmq"
},
"requestPolicies": {
"authorization": {
"type": "ANY_OF",
"allowedScope": [ "read:hello" ]
}
}
}
]
}
c. Add an authorization request policy for all remaining routes in the API deployment specification.
Note:
If you don't include an authorization policy for a particular route, access is granted as if such a policy does exist
and the type property is set to "AUTHENTICATION_ONLY". In other words, regardless of the setting of the
isAnonymousAccessAllowed property in the API deployment specification's authentication policy:
• only authenticated end users can access the route
• all authenticated end users can access the route regardless of access scopes returned by the authorizer function
• anonymous end users cannot access the route
4. Save the JSON file containing the API deployment specification.
5. Use the API deployment specification when you create or update an API deployment in the following ways:
• by specifying the JSON file in the Console when you select the Upload an existing API option
• by specifying the JSON file in a request to the API Gateway REST API
For more information, see Deploying an API on an API Gateway by Creating an API Deployment on page 367
and Updating API Gateways and API Deployments on page 391.
6. (Optional) Confirm the API has been deployed successfully by calling it (see Calling an API Deployed on an API
Gateway on page 388).

Using JSON Web Tokens (JWTs) to Add Authentication and Authorization to API
Deployments
You can control access to APIs you deploy to API gateways using JSON Web Tokens (JWTs) as described in this
topic, or using an 'authorizer function' (as described in Using Authorizer Functions to Add Authentication and
Authorization to API Deployments on page 435).
A JWT is a JSON-based access token sent in an HTTP request from an API client to a resource. JWTs are issued by
identity providers (for example, Oracle Identity Cloud Service (IDCS), Auth0, Okta). When an API client attempts to
access a protected resource, it must include a JWT. The resource validates the JWT with an authorization server using
a corresponding public verification key, either by invoking a validation end-point on the authorization server or by
using a local verification key provided by the authorization server.
A JWT comprises:
• A header, which identifies the type of token and the cryptographic algorithm used to generate the signature.
• A payload, containing claims about the end user's identity, and the properties of the JWT itself. A claim is a key-
value pair, where the key is the name of the claim. A payload is recommended (although not required) to contain
certain reserved claims with particular names, such as expiration time (exp), audience (aud), issuer (iss), and
not before (nbf). A payload can also contain custom claims with user-defined names.
• A signature, used to validate the authenticity of the JWT (generated by signing the base64url-encoded header and
payload with the algorithm identified in the header).
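To make this structure concrete, the following short Python sketch builds a sample header and payload, base64url-encodes them as a JWT would, and decodes them again to show the claims; signature creation and verification are deliberately omitted, and the issuer, audience, and key ID values are placeholders:

import base64
import json


def b64url_encode(obj):
    raw = json.dumps(obj).encode()
    return base64.urlsafe_b64encode(raw).rstrip(b"=").decode()


def b64url_decode(segment):
    # base64url segments omit padding, so restore it before decoding.
    padding = "=" * (-len(segment) % 4)
    return json.loads(base64.urlsafe_b64decode(segment + padding))


header = {"alg": "RS256", "typ": "JWT", "kid": "master_key"}
payload = {
    "iss": "https://identity.oraclecloud.com/",
    "aud": "https://example-gateway.example.com",
    "exp": 1735689600,
    "scope": "read:hello",
}

# A real JWT ends with a signature over the first two segments; "<signature>"
# is a placeholder here.
token = b64url_encode(header) + "." + b64url_encode(payload) + ".<signature>"

header_segment, payload_segment, _ = token.split(".")
print(b64url_decode(header_segment))
print(b64url_decode(payload_segment))
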
You enable an API deployment to use JWTs for authentication and authorization by including two different kinds of
request policy in the API deployment specification:
• An authentication request policy for the entire API deployment that specifies the use of JWTs, including how to
validate them and whether unauthenticated end users can access routes in the API deployment.
• An authorization request policy for each route that specifies the operations an end user is allowed to perform,
optionally based on values specified for the scope claim in the JWT.
Before an end user can access an API deployment that uses JWTs for authentication and authorization, they must
obtain a JWT from an identity provider.
When calling an API deployed on an API gateway, the API client provides the JWT as a query parameter or in the
header of the request. The API gateway validates the JWT using a corresponding public verification key provided by
the issuing identity provider. Using the API deployment's authentication request policy, you can configure how the
API gateway validates JWTs:
• You can configure the API gateway to retrieve public verification keys from the identity provider at runtime. In
this case, the identity provider acts as the authorization server.
• You can configure the API gateway in advance with public verification keys already issued by an identity
provider (referred to as 'static keys'), enabling the API gateway to verify JWTs locally at runtime without having
to contact the identity provider. The result is faster token validation.
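As a sketch of the general remote-JWKS pattern (not the API gateway's internal implementation), the following Python snippet fetches a JSON Web Key Set from a placeholder URI and selects the key whose kid matches the one named in a JWT's header:

import requests

# Placeholder JWKS URI; an identity provider publishes its keys at a URI like this.
jwks_uri = "https://www.somejwksprovider.com/oauth2/v3/certs"


def find_verification_key(kid):
    # The JWKS document has the form {"keys": [{"kid": ..., "kty": ..., ...}, ...]}.
    jwks = requests.get(jwks_uri).json()
    for key in jwks.get("keys", []):
        if key.get("kid") == kid:
            return key
    return None


print(find_verification_key("master_key"))
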
As well as using the public verification key from the issuing identity provider to verify the authenticity of a JWT, you
can specify that reserved claims in the JWT's payload must have particular values before the API gateway considers
the JWT to be valid. By default, API gateways validate JWTs using the expiration (exp), audience (aud), and issuer
(iss) claims, along with the not before (nbf) claim if present. You can also specify acceptable values for custom
claims. See Identity Provider Details to Use for iss and aud Claims, and for the JWKS URI on page 455.
When the JWT has been validated, the API gateway extracts claims from the JWT's payload as key value pairs
and saves them as records in the request.auth context table for use by the API deployment. For example, as
context variables for use in an HTTP back end definition (see Adding Context Variables to Policies and HTTP Back
End Definitions on page 382). If the JWT's payload contains the scope claim, you can use the claim's values in
authorization request policies for individual routes to specify the operations an end user is allowed to perform.
You can add authentication and authorization request policies to an API deployment specification by:
• Using the Console.
• Editing a JSON file.

Prerequisites for using JWTs


Before you can enable authentication and authorization for API deployments using JWTs:
• An identity provider (for example, Oracle Identity Cloud Service (IDCS), Auth0) must have already been set up to
issue JWTs for users allowed to access the API deployment.
• If you want to use custom claims in authorization policies, the identity provider must be set up to add the custom
claims to the JWTs it issues.
See the identity provider documentation for more information (for example, the Oracle Identity Cloud Service (IDCS)
documentation, the Auth0 documentation).
Also note that to validate the JWT using a corresponding public verification key provided by the issuing identity
provider:
• the signing algorithm used to generate the JWT's signature must be one of RS256, RS384, or RS512
• the public verification key must have a minimum length of 2048 bits and must not exceed 4096 bits
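For local experimentation outside the gateway, checks along these lines can be reproduced as follows; this sketch assumes the PyJWT and cryptography packages, generates a throwaway 2048-bit key pair in place of an identity provider's key, and uses placeholder issuer and audience values:

import datetime

import jwt  # PyJWT; install with: pip install pyjwt[crypto]
from cryptography.hazmat.primitives.asymmetric import rsa

# Throwaway 2048-bit key pair; a real deployment uses the identity provider's keys.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

claims_in = {
    "iss": "https://identity.oraclecloud.com/",
    "aud": "https://example-gateway.example.com",
    "exp": datetime.datetime.utcnow() + datetime.timedelta(minutes=5),
    "scope": "read:hello",
}
token = jwt.encode(claims_in, private_key, algorithm="RS256",
                   headers={"kid": "master_key"})

try:
    claims_out = jwt.decode(
        token,
        public_key,
        algorithms=["RS256"],          # one of the supported signing algorithms
        audience="https://example-gateway.example.com",
        issuer="https://identity.oraclecloud.com/",
        leeway=10,                     # tolerated clock skew, in seconds
    )
    print("token accepted:", claims_out.get("scope"))
except jwt.InvalidTokenError as exc:
    print("token rejected:", exc)
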
Using the Console to Add Authentication and Authorization Request Policies
To add authentication and authorization request policies to an API deployment specification using the Console:
1. Create or update an API deployment using the Console, select the From Scratch option, and enter details on the
Basic Information page.
For more information, see Deploying an API on an API Gateway by Creating an API Deployment on page 367
and Updating API Gateways and API Deployments on page 391.
2. In the API Request Policies section of the Basic Information page, click the Add button beside Authentication
and specify:
• Authentication Type: Select JWT.
• Authentication Token: Whether the JWT is contained in a request header or a query parameter.
• Authentication Token Value: Depending on whether the JWT is contained in a request header or a query
parameter, specify:
• Header Name: and Authentication Scheme: If the JWT is contained in a request header, enter the name
of the header (for example Authorization), and the HTTP authentication scheme (only Bearer is
currently supported).
• Parameter Name: If the JWT is contained in a query parameter, enter the name of the query parameter.
• Enable Anonymous Access: Whether unauthenticated (that is, anonymous) end users can access routes in the
API deployment. By default, this option is not selected. If you never want anonymous end users to be able to
access routes, don't select this option. Note that if you do select this option, you also have to explicitly specify
every route to which anonymous access is allowed by selecting Anonymous as the Authorization Type in
each route's authorization policy.
3. In the Issuers section, specify values that are allowed in the issuer (iss) claim of a JWT being used to access the
API deployment:
• Allowed Issuers: Specify the URL (or a text string) for an identity provider that is allowed in the issuer
(iss) claim of a JWT to be used to access the API deployment. For example, to enable a JWT issued
by the Oracle Identity Cloud Service (IDCS) to be used to access the API deployment, enter https://
identity.oraclecloud.com/ . See Identity Provider Details to Use for iss and aud Claims, and for the
JWKS URI on page 455.
• Another Issuer: Click to add additional identity providers (up to a maximum of five).
4. In the Audiences section, specify values that are allowed in the audience (aud) claim of a JWT being used to
access the API deployment:
• Allowed Audiences: Specify a value that is allowed in the audience (aud) claim of a JWT to identify the
intended recipient of the token. For example, the audience could be, but need not be, the API gateway's
hostname. See Identity Provider Details to Use for iss and aud Claims, and for the JWKS URI on page 455.
• Another Audience: Click to add additional audiences (up to a maximum of five).

5. In the Public Keys section of the Authentication Policy window, specify how you want the API gateway to
validate JWTs using public verification keys:
• To configure the API gateway to validate JWTs by retrieving public verification keys from the identity
provider at runtime, select Remote JWKS from the Type list and specify:
• URI: The URI from which to retrieve the JSON Web Key Set (JWKS) to use to verify the signature on
JWTs. For example, https://www.somejwksprovider.com/oauth2/v3/certs. For more information about the
URI to specify, see Identity Provider Details to Use for iss and aud Claims, and for the JWKS URI on page
455.
Note the following:
• The URI must be routable from the subnet containing the API gateway on which the API is deployed.
• URIs that require authentication or authorization to return the JWKS are not supported.
• If the API gateway fails to retrieve the JWKS, all requests to the API deployment will return an
HTTP 500 response code. Refer to the API gateway's execution log for more information about the
error (see Adding Logging to API Deployments on page 401).
• Certain key parameters must be present in the JWKS to verify the JWT's signature (see Key Parameters
Required to Verify JWT Signatures on page 456).
• Cache Duration in Hours: The number of hours (between 1 and 24) the API gateway is to cache the
JWKS set after retrieving it.
• Disable SSL Verification: Whether to disable SSL verification when communicating with the identity
provider. By default, this option is not selected. Oracle recommends not selecting this option because it can
compromise JWT validation. API Gateway trusts certificates from multiple Certificate Authorities issued
for Oracle Identity Cloud Service (IDCS), Auth0, Okta.
• To configure the API gateway to validate JWTs with public verification keys already issued by an identity
provider (enabling the API gateway to verify JWTs locally without having to contact the identity provider),
select Static Keys from the Type list and specify:
• Key ID: The identifier of the static key used to sign the JWT. The value must match the kid claim in the
JWT header. For example, master_key.
• Format: The format of the static key, as either a JSON Web Key or a PEM-encoded Public Key.
• JSON Web Key: If the static key is a JSON Web Key, paste the key into this field.
For example:

{
"kty": "RSA",
"n": "0vx7agoebGc...KnqDKgw",
"e": "AQAB",
"alg": "RS256",
"use": "sig"
}

Note that certain parameters must be present in the static key to verify the JWT's signature (see Key
Parameters Required to Verify JWT Signatures on page 456). Also note that RSA is currently the
only supported key type (kty).
• PEM-Encoded Public Key: If the static key is a PEM-encoded public key, paste the key into this field.
For example:

-----BEGIN PUBLIC KEY-----


XsEiCeYgglwW/
KAhSSNRVdD60QlXYMWHOhXzSFDZCLf1WXxKMZCiMvVrsBIzmFEXnFmcsO2mxwlL5/8qQudomoP
+yycJ2gWPIgqsZcQRheJWxVC5ep0MeEHlvLnEvCi9utpAnjrsZCQ7plfZVPX7XORvezwqQhBfYzwA2

-----END PUBLIC KEY-----

Note that the -----BEGIN PUBLIC KEY----- and -----END PUBLIC KEY----- markers
are required.
• Another Key: Click to add additional keys (up to a maximum of five).
6. (Optional) Click Show Advanced Options to specify a time difference to take into account when validating
JWTs, and to specify additional claims in JWTs to process:
• Maximum Clock Skew in Seconds: (Optional) The maximum time difference between the system clocks
of the identity provider that issued a JWT and the API gateway. The value you enter here is taken into
account when the API gateway validates the JWT to determine whether it is still valid, using the not before
(nbf) claim (if present) and the expiration (exp) claim in the JWT. The minimum (and default) is 0, the
maximum is 120.
• Verify Claims: (Optional) In addition to the values for the audience (aud) and issuer (iss) claims that you
already specified, you can specify names and values for one or more additional claims to validate in a JWT.
Note that any key names and values you enter are simply handled as strings, and must match exactly with
names and values in the JWT. Pattern matching and other datatypes are not supported. Also note that you can
specify a claim can or must appear in the JWT without specifying a value for the claim by entering the claim's
name in the Claim Key field, leaving the Claim Values field blank, and selecting Required:
• Claim Key: (Optional) Specify the name of a claim that can be, or must be, included in a JWT. If the claim
must be included in the JWT, select Required. The claim name you specify can be a reserved claim name
such as the subject (sub) claim, or a custom claim name issued by a particular identity provider.
• Claim Values: (Optional) Specify an acceptable value for the claim in the Claim Key field. Click the plus
sign (+) to enter another acceptable value. If you specify one or more acceptable values for the claim, the
API gateway validates that the claim has one of the values you specify.
• Required: Select if the claim in the Claim Key field must be included in the JWT.
• Another Claim: Click to add additional claims (up to a maximum of ten).
7. Click Save Changes, and then click Next to enter details for individual routes in the API deployment on the
Routes page. To specify an authorization policy that applies to an individual route, click Show Route Request
Policies, click the Add button beside Authorization, and specify:
• Authorization Type: How to grant access to the route. Specify:
• Any: Only grant access to end users that have been successfully authenticated, provided the JWT has a
scope claim that includes at least one of the access scopes you specify in the Allowed Scope field. In this
case, the authentication policy's Enable Anonymous Access option has no effect.
• Anonymous: Grant access to all end users, even if they have not been successfully authenticated using the
JWT. In this case, you must have selected the authentication policy's Enable Anonymous Access option.
• Authentication only: Only grant access to end users that have been successfully authenticated using the
JWT. In this case, the authentication policy's Enable Anonymous Access option has no effect.
• Allowed Scope: If you selected Any as the Authorization Type, enter a comma-delimited list of one or more
strings that correspond to access scopes in the JWT. Access will only be granted to end users that have been
successfully authenticated if the JWT has a scope claim that includes one of the access scopes you specify.
For example, read:hello
Note:
If you don't include an authorization policy for a particular route, access is granted as if such a policy does exist
and Authorization Type is set to Authentication only. In other words, regardless of the setting of the authentication
policy's Enable Anonymous Access option:
• only authenticated end users can access the route
• all authenticated end users can access the route regardless of access scopes in the JWT's scope claim
• anonymous end users cannot access the route
8. Click Save Changes, and then click Next to review the details you entered for the API deployment.

9. Click Create or Save Changes to create or update the API deployment.


10. (Optional) Confirm the API has been deployed successfully by calling it (see Calling an API Deployed on an API
Gateway on page 388).
Editing a JSON File to Add Authentication and Authorization Request Policies
To add authentication and authorization request policies to an API deployment specification in a JSON file:
1. Using your preferred JSON editor, edit the existing API deployment specification to which you want to add
authentication and authorization functionality, or create a new API deployment specification (see Creating an API
Deployment Specification on page 365).
At a minimum, the API deployment specification will include a routes section containing:
• A path. For example, /hello
• One or more methods. For example, GET
• A definition of a back end. For example, a URL, or the OCID of a function in Oracle Functions.
For example, the following basic API deployment specification defines a simple Hello World serverless function
in Oracle Functions as a single back end:

{
"routes": [
{
"path": "/hello",
"methods": ["GET"],
"backend": {
"type": "ORACLE_FUNCTIONS_BACKEND",
"functionId": "ocid1.fnfunc.oc1.phx.aaaaaaaaab______xmq"
}
}
]
}
2. Add an authentication request policy that applies to all routes in the API deployment specification:
a. Insert a requestPolicies section before the routes section, if one doesn't exist already. For example:

{
"requestPolicies": {},
"routes": [
{
"path": "/hello",
"methods": ["GET"],
"backend": {
"type": "ORACLE_FUNCTIONS_BACKEND",
"functionId": "ocid1.fnfunc.oc1.phx.aaaaaaaaab______xmq"
}
}
]
}
b. Add the following authentication policy to the new requestPolicies section.

{
"requestPolicies": {
"authentication": {
"type": "<type-value>",
"isAnonymousAccessAllowed": <true|false>,
"issuers": ["<issuer-url>", "<issuer-url>"],
<"tokenHeader"|"tokenQueryParam">: <"<token-header-name>"|"<token-
query-param-name>">,
"tokenAuthScheme": "<authentication-scheme>",

"audiences": ["<intended-audience>"],
"publicKeys": {
"type": <"REMOTE_JWKS"|"STATIC_KEYS">,
<public-key-config>
},
"verifyClaims": [
{"key": "<claim-name>",
"values": ["<acceptable-value>", "<acceptable-value>"],
"isRequired": <true|false>
}
],
"maxClockSkewInSeconds": <seconds-difference>
}
},
"routes": [
{
"path": "/hello",
"methods": ["GET"],
"backend": {
"type": "ORACLE_FUNCTIONS_BACKEND",
"functionId": "ocid1.fnfunc.oc1.phx.aaaaaaaaab______xmq"
}
}
]
}

where:
• <type-value> is the authentication type. To use JWTs for authentication, specify
JWT_AUTHENTICATION.
• "isAnonymousAccessAllowed": <true|false> optionally indicates whether unauthenticated
(that is, anonymous) end users can access routes in the API deployment specification. If you never want
anonymous end users to be able to access routes, set this property to false. If you don't include this
property in the authentication policy, the default of false is used. Note that if you do include this
property and set it to true, you also have to explicitly specify every route to which anonymous access is
allowed by setting the type property to "ANONYMOUS" in each route's authorization policy.
• <issuer-url> is the URL (or a text string) for an identity provider that is allowed in the issuer
(iss) claim of a JWT to be used to access the API deployment. For example, to enable a JWT issued
by the Oracle Identity Cloud Service (IDCS) to be used to access the API deployment, enter https://
identity.oraclecloud.com/. You can specify one or multiple identity providers (up to a
maximum of five). See Identity Provider Details to Use for iss and aud Claims, and for the JWKS URI on
page 455.
• <"tokenHeader"|"tokenQueryParam">: <"<token-header-name>"|"<token-
query-param-name>"> indicates whether it is a request header that contains the JWT (and if so, the
name of the header), or a query parameter that contains the access token (and if so, the name of the query
parameter). Note that you can specify either "tokenHeader": "<token-header-name>" or
"tokenQueryParam": "<token-query-param-name>">, but not both.
• <tokenAuthScheme> is the name of the authentication scheme to use if the JWT is contained in a
request header. For example, "Bearer".
• <intended-audience> is a value that is allowed in the audience (aud) claim of a JWT to identify
the intended recipient of the token. For example, the audience could be, but need not be, the API gateway's
hostname. You can specify one audience or multiple audiences (up to a maximum of five). See Identity
Provider Details to Use for iss and aud Claims, and for the JWKS URI on page 455.
• "type": <"REMOTE_JWKS"|"STATIC_KEYS"> indicates how you want the API gateway to
validate JWTs using public verification keys. Specify REMOTE_JWKS to configure the API gateway
to retrieve public verification keys from the identity provider at runtime. Specify STATIC_KEYS to

configure the API gateway with public verification keys already issued by an identity provider (enabling
the API gateway to verify JWTs locally without having to contact the identity provider).
• <public-key-config> provides the details of JWT validation, according to whether you specified
"REMOTE_JWKS" or "STATIC_KEYS" as the value of "type", as follows:
• If you specified "type": "REMOTE_JWKS" to configure the API gateway to validate JWTs by
retrieving public verification keys from the identity provider at runtime, provide details as follows:

"publicKeys": {
"type": "REMOTE_JWKS",
"uri": "<uri-for-jwks>",
"maxCacheDurationInHours": <cache-time>,
"isSslVerifyDisabled": <true|false>
}

where:
• "uri": "<uri-for-jwks>" specifies the URI from which to retrieve the JSON Web Key
Set (JWKS) to use to verify the signature on JWTs. For more information about the URI to specify,
see Identity Provider Details to Use for iss and aud Claims, and for the JWKS URI on page 455.
Note the following:
• The URI must be routable from the subnet containing the API gateway on which the API is
deployed.
• URIs that require authentication or authorization to return the JWKS are not supported.
• If the API gateway fails to retrieve the JWKS, all requests to the API deployment will return an
HTTP 500 response code. Refer to the API gateway's execution log for more information about
the error (see Adding Logging to API Deployments on page 401).
• Certain key parameters must be present in the JWKS to verify the JWT's signature (see Key
Parameters Required to Verify JWT Signatures on page 456).
• "maxCacheDurationInHours": <cache-time> specifies the number of hours (between 1
and 24) the API gateway is to cache the JWKS set after retrieving it.
• "isSslVerifyDisabled": <true|false> indicates whether to disable SSL verification
when communicating with the identity provider. Oracle recommends not setting this option to
true, because it can compromise JWT validation. API Gateway trusts certificates from multiple
Certificate Authorities issued for Oracle Identity Cloud Service (IDCS), Auth0, and Okta.
For example:

"publicKeys": {
"type": "REMOTE_JWKS",
"uri": "https://www.somejwksprovider.com/oauth2/v3/certs",
"maxCacheDurationInHours": 3,
"isSslVerifyDisabled": false
}
• If you specified "type": "STATIC_KEYS", the details to provide depend on the format of the key
already issued by the identity provider:
• If the static key is a JSON Web Key, specify "format": "JSON_WEB_KEY", specify the
identifier of the static key used to sign the JWT as the value of the "kid" parameter, and provide
values for other parameters to verify the JWT's signature.
For example:

"publicKeys": {
"type": "STATIC_KEYS",
"keys": [
{
"format": "JSON_WEB_KEY",

"kid": "master_key",
"kty": "RSA",
"n": "0vx7agoebGc...KnqDKgw",
"e": "AQAB",
"alg": "RS256",
"use": "sig"
}
    ]
}

Note that certain parameters must be present in the static key to verify the JWT's signature (see Key
Parameters Required to Verify JWT Signatures on page 456). Also note that RSA is currently the
only supported key type (kty).
• If the static key is a PEM-encoded public key, specify "format": "PEM", specify the identifier
of the static key used to sign the JWT as the value of "kid", and provide the key as the value of
"key".
For example:

"publicKeys": {
"type": "STATIC_KEYS",
"keys": [
{
"format": "PEM",
"kid": "master_key"
"key": -----BEGIN PUBLIC KEY-----XsEiCeYgglwW/
KAhSSNRVdD60QlXYMWHOhXzSFDZCLf1WXxKMZCiMvVrsBIzmFEXnFmcsO2mxwlL5/8qQudomoP
+yycJ2gWPIgqsZcQRheJWxVC5ep0MeEHlvLnEvCi9utpAnjrsZCQ7plfZVPX7XORvezwqQhBfYzw
END PUBLIC KEY-----
}
]

Note that the -----BEGIN PUBLIC KEY----- and -----END PUBLIC KEY-----
markers are required.
• verifyClaims optionally specifies the names and values of one or more additional claims to validate in a
JWT (up to a maximum of ten).
• "key": "<claim-name>" is the name of a claim that can be, or must be, included in a JWT. The
claim name you specify can be a reserved claim name such as the subject (sub) claim, or a custom
claim name issued by a particular identity provider.
• "values": ["<acceptable-value>", "<acceptable-value>"] (optionally) indicates
one or more acceptable values for the claim.
• "isRequired": <true|false> indicates whether the claim must be included in the JWT.
Note that any key names and values you enter are handled as plain strings, and must exactly match the
names and values in the JWT. Pattern matching and other data types are not supported.
• maxClockSkewInSeconds: <seconds-difference> optionally specifies the maximum time
difference between the system clocks of the identity provider that issued a JWT and the API gateway. The
value you specify is taken into account when the API gateway validates the JWT to determine whether it
is still valid, using the not before (nbf) claim (if present) and the expiration (exp) claim in the JWT. The
minimum (and default) is 0, the maximum is 120.
For example, the following authentication policy specifies that JWTs presented in the Authorization request
header (using the Bearer authentication scheme) are to be validated against a static public verification key:

{
"requestPolicies": {
"authentication": {
"type": "JWT_AUTHENTICATION",
"isAnonymousAccessAllowed": false,
"issuers": ["https://identity.oraclecloud.com/"],

"tokenHeader": "Authorization",
"tokenAuthScheme": "Bearer",
"audiences": ["api.dev.io"],
"publicKeys": {
"type": "STATIC_KEYS",
"keys": [
{
"format": "JSON_WEB_KEY",
"kid": "master_key",
"kty": "RSA",
"n": "0vx7agoebGc...KnqDKgw",
"e": "AQAB",
"alg": "RS256",
"use": "sig"
}
]
},
"verifyClaims": [
{
"key": "is_admin",
"values": ["service:app", "read:hello"],
"isRequired": true
}
],
"maxClockSkewInSeconds": 10
}
},
"routes": [
{
"path": "/hello",
"methods": ["GET"],
"backend": {
"type": "ORACLE_FUNCTIONS_BACKEND",
"functionId": "ocid1.fnfunc.oc1.phx.aaaaaaaaab______xmq"
}
}
]
}
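To relate the policy above to the token it accepts, the decoded payload of a matching JWT might look like the
following. This is a hypothetical sketch: only the issuer, audience, and is_admin values are taken from the example
policy, and the remaining claims are placeholders.

{
    "sub": "api-client@example.com",
    "iss": "https://identity.oraclecloud.com/",
    "aud": "api.dev.io",
    "is_admin": "service:app",
    "iat": 1616436800,
    "nbf": 1616436800,
    "exp": 1616440400
}

Such a token would be presented in the Authorization request header using the Bearer scheme, and its signature would
be verified against the master_key static key defined in the policy.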
3. Add an authorization request policy for each route in the API deployment specification:
a. Insert a requestPolicies section after the first route's backend section, if one doesn't exist already. For
example:

{
"requestPolicies": {
"authentication": {
"type": "JWT_AUTHENTICATION",
"isAnonymousAccessAllowed": false,
"issuers": ["https://identity.oraclecloud.com/"],
"tokenHeader": "Authorization",
"tokenAuthScheme": "Bearer",
"audiences": ["api.dev.io"],
"publicKeys": {
"type": "STATIC_KEYS",
"keys": [
{
"format": "JSON_WEB_KEY",
"kid": "master_key",
"kty": "RSA",
"n": "0vx7agoebGc...KnqDKgw",
"e": "AQAB",
"alg": "RS256",
"use": "sig"

}
]
},
"verifyClaims": [
{
"key": "is_admin",
"values": ["service:app", "read:hello"],
"isRequired": true
}
],
"maxClockSkewInSeconds": 10
}
},
"routes": [
{
"path": "/hello",
"methods": ["GET"],
"backend": {
"type": "ORACLE_FUNCTIONS_BACKEND",
"functionId": "ocid1.fnfunc.oc1.phx.aaaaaaaaab______xmq"
},
"requestPolicies": {}
}
]
}
b. Add the following authorization policy to the requestPolicies section:

{
"requestPolicies": {
"authentication": {
"type": "JWT_AUTHENTICATION",
"isAnonymousAccessAllowed": false,
"issuers": ["https://identity.oraclecloud.com/"],
"tokenHeader": "Authorization",
"tokenAuthScheme": "Bearer",
"audiences": ["api.dev.io"],
"publicKeys": {
"type": "STATIC_KEYS",
"keys": [
{
"format": "JSON_WEB_KEY",
"kid": "master_key",
"kty": "RSA",
"n": "0vx7agoebGc...KnqDKgw",
"e": "AQAB",
"alg": "RS256",
"use": "sig"
}
]
},
"verifyClaims": [
{
"key": "is_admin",
"values": ["service:app", "read:hello"],
"isRequired": true
}
],
"maxClockSkewInSeconds": 10
}
},
"routes": [
{
"path": "/hello",

"methods": ["GET"],
"backend": {
"type": "ORACLE_FUNCTIONS_BACKEND",
"functionId": "ocid1.fnfunc.oc1.phx.aaaaaaaaab______xmq"
},
"requestPolicies": {
"authorization": {
"type": <"AUTHENTICATION_ONLY"|"ANY_OF"|"ANONYMOUS">,
"allowedScope": [ "<scope>" ]
}
}
}
]
}

where:
• "type": <"AUTHENTICATION_ONLY"|"ANY_OF"|"ANONYMOUS"> indicates how to grant
access to the route:
• "AUTHENTICATION_ONLY": Only grant access to end users that have been successfully
authenticated. In this case, the "isAnonymousAccessAllowed" property in the API deployment
specification's authentication policy has no effect.
• "ANY_OF": Only grant access to end users that have been successfully authenticated, provided the
JWT's scope claim includes one of the access scopes you specify in the allowedScope property.
In this case, the "isAnonymousAccessAllowed" property in the API deployment specification's
authentication policy has no effect.
• "ANONYMOUS": Grant access to all end users, even if they have not been successfully authenticated.
In this case, you must explicitly set the "isAnonymousAccessAllowed" property to true in the
API deployment specification's authentication policy.
• "allowedScope": [ "<scope>" ] is a comma-delimited list of one or more strings that
correspond to access scopes included in the JWT's scope claim. In this case, you must set the type
property to "ANY_OF" (the "allowedScope" property is ignored if the type property is set to
"AUTHENTICATION_ONLY" or "ANONYMOUS"). Also note that if you specify more than one scope,
access to the route is granted if any of the scopes you specify is included in the JWT's scope claim.
For example, the following request policy defines a /hello route that only allows authenticated end users
with the read:hello scope to access it:

{
"requestPolicies": {
"authentication": {
"type": "JWT_AUTHENTICATION",
"isAnonymousAccessAllowed": false,
"issuers": ["https://identity.oraclecloud.com/"],
"tokenHeader": "Authorization",
"tokenAuthScheme": "Bearer",
"audiences": ["api.dev.io"],
"publicKeys": {
"type": "STATIC_KEYS",
"keys": [
{
"format": "JSON_WEB_KEY",
"kid": "master_key",
"kty": "RSA",
"n": "0vx7agoebGc...KnqDKgw",
"e": "AQAB",
"alg": "RS256",
"use": "sig"
}
]

},
"verifyClaims": [
{
"key": "is_admin",
"values": ["service:app", "read:hello"],
"isRequired": true
}
],
"maxClockSkewInSeconds": 10
}
},
"routes": [
{
"path": "/hello",
"methods": ["GET"],
"backend": {
"type": "ORACLE_FUNCTIONS_BACKEND",
"functionId": "ocid1.fnfunc.oc1.phx.aaaaaaaaab______xmq"
},
"requestPolicies": {
"authorization": {
"type": "ANY_OF",
"allowedScope": [ "read:hello" ]
}
}
}
]
}
c. Add an authorization request policy for all remaining routes in the API deployment specification.
Note:
If you don't include an authorization policy for a particular route,
access is granted as if such a policy does exist and the type property is set
to "AUTHENTICATION_ONLY". In other words, regardless of the setting
of the isAnonymousAccessAllowed property in the API deployment
specification's authentication policy:
• only authenticated end users can access the route
• all authenticated end users can access the route regardless of access
scopes in the JWT's scope claim
• anonymous end users cannot access the route
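For completeness, the following fragment sketches the "ANONYMOUS" option described earlier. It is not part of the
preceding example: it assumes a hypothetical public /status route backed by an HTTP URL that anonymous end users
may call, and it shows the two properties that change, namely "isAnonymousAccessAllowed": true in the
authentication policy and "type": "ANONYMOUS" in that route's authorization policy.

{
  "requestPolicies": {
    "authentication": {
      "type": "JWT_AUTHENTICATION",
      "isAnonymousAccessAllowed": true,
      "issuers": ["https://identity.oraclecloud.com/"],
      "tokenHeader": "Authorization",
      "tokenAuthScheme": "Bearer",
      "audiences": ["api.dev.io"],
      "publicKeys": {
        "type": "REMOTE_JWKS",
        "uri": "https://www.somejwksprovider.com/oauth2/v3/certs"
      }
    }
  },
  "routes": [
    {
      "path": "/status",
      "methods": ["GET"],
      "backend": {
        "type": "HTTP_BACKEND",
        "url": "https://www.example.com/status"
      },
      "requestPolicies": {
        "authorization": {
          "type": "ANONYMOUS"
        }
      }
    }
  ]
}

With this configuration, requests to /status are passed to the back end whether or not they include a JWT, while
other routes in the same API deployment can still require authentication and authorization.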
4. Save the JSON file containing the API deployment specification.
5. Use the API deployment specification when you create or update an API deployment in the following ways:
• by specifying the JSON file in the Console when you select the Upload an existing API option
• by specifying the JSON file in a request to the API Gateway REST API
For more information, see Deploying an API on an API Gateway by Creating an API Deployment on page 367
and Updating API Gateways and API Deployments on page 391.
6. (Optional) Confirm the API has been deployed successfully by calling it (see Calling an API Deployed on an API
Gateway on page 388).
Identity Provider Details to Use for iss and aud Claims, and for the JWKS URI
The identity provider that issued the JWT determines the allowed values you have to specify for the issuer (iss)
and the audience (aud) claims in the JWT. Which identity provider issued the JWT also determines the URI from
which to retrieve the JSON Web Key Set (JWKS) to verify the signature on the JWT. Note that URIs that require
authentication or authorization to return the JWKS are not supported.
Use the following table to find out what to specify for JWTs issued by the Oracle Identity Cloud Service (IDCS),
Okta, and Auth0 identity providers.

IDCS
• Issuer (iss) claim: https://identity.oraclecloud.com/
• Audience (aud) claim: Customer-specific. See Validating Access Tokens in the Oracle Identity Cloud Service
documentation.
• Format of URI from which to retrieve the JWKS: https://<tenant-base-url>/admin/v1/SigningCert/jwk. To obtain the
JWKS without logging in to Oracle Identity Cloud Service, see Change Default Settings in the Oracle Identity Cloud
Service documentation.

Okta
• Issuer (iss) claim: https://<your-okta-tenant-name>.com
• Audience (aud) claim: Customer-specific. The audience configured for the Authorization Server in the Okta
Developer Console. See Additional validation for access tokens in the Okta documentation.
• Format of URI from which to retrieve the JWKS: https://<your-okta-tenant-name>.com/oauth2/<auth-server-id>/v1/keys.
See the Okta documentation.

Auth0
• Issuer (iss) claim: https://<your-account-name>.auth0.com/
• Audience (aud) claim: Customer-specific. See Audience in the Auth0 documentation.
• Format of URI from which to retrieve the JWKS: https://<your-account-name>.auth0.com/.well-known/jwks.json

Key Parameters Required to Verify JWT Signatures


To verify the signature on a JWT, API gateways require the following key parameters to be present in either the
JWKS returned from a URI or the static JSON Web Key you specify.
• kid: The identifier of the key used to sign the JWT. The value must match the kid claim in the JWT header. For
example, master_key.
• kty: The type of the key used to sign the JWT. Note that RSA is currently the only supported key type.
• use or key_ops: If the use parameter is present, it must be set to sig. If the key_ops parameter is present,
verify must be one of its values.
• n: The public key modulus.
• e: The public key exponent.
• alg: The signing algorithm (if present) must be set to one of RS256, RS384, or RS512.
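For reference, a JWKS containing a single signing key that satisfies these requirements might look like the
following. This is a hypothetical sketch that reuses the kid and the truncated modulus from the static key examples
earlier in this topic.

{
  "keys": [
    {
      "kid": "master_key",
      "kty": "RSA",
      "use": "sig",
      "alg": "RS256",
      "n": "0vx7agoebGc...KnqDKgw",
      "e": "AQAB"
    }
  ]
}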

Transforming Incoming Requests and Outgoing Responses


There are often situations when you'll want an API gateway to modify incoming requests before sending them to
back-end services. Similarly, you might want the API gateway to modify responses returned by back-end services.
For example:
• Back-end services might require requests to include a particular set of HTTP headers (for example, Accept-
Language and Accept-Encoding). To hide this implementation detail from API consumers and API clients, you
can use your API gateway to add the required headers.
• Web servers often include full version information in response headers. For security reasons, you might want
to prevent API consumers and API clients from knowing about the underlying technology stack. You can use your
API gateway to remove server headers from responses.
• Back-end services might include sensitive information in a response. You can use your API gateway to remove
such information.
Using an API gateway, you can:
• Add, remove, and modify headers in requests and responses.
• Add, remove, and modify query parameters in requests.
• Rewrite request URLs from a public format to an internal format, perhaps to support legacy applications and
migrations.
You use request and response policies to transform the headers and query parameters of incoming requests, and the
headers of outgoing responses (see Adding Request Policies and Response Policies to API Deployment Specifications
on page 421).
You can include context variables in header and query parameter transformation request and response policies.
Including context variables enables you to modify headers and query parameters with the values of other headers,
query parameters, path parameters, and authentication parameters. Note that context variable values
are extracted from the original request or response, and are not subsequently updated as an API gateway uses a
transformation policy to evaluate a request or response. For more information about context variables, see Adding
Context Variables to Policies and HTTP Back End Definitions on page 382.
If a header or query parameter transformation request or response policy will result in an invalid header or query
parameter, the transformation policy is ignored.
You can add header and query parameter transformation request and response policies to an API deployment
specification by:
• using the Console
• editing a JSON file

Adding Header Transformation Request Policies


You can add header transformation request policies to API deployment specifications using the Console or by editing
a JSON file.

Using the Console to Add Header Transformation Request Policies


To add header transformation request policies to an API deployment specification using the Console:
1. Create or update an API deployment using the Console, select the From Scratch option, and enter details on the
Basic Information page.
For more information, see Deploying an API on an API Gateway by Creating an API Deployment on page 367
and Updating API Gateways and API Deployments on page 391.
2. Click Save Changes, and then click Next to enter details for individual routes in the API deployment on the
Routes page.
3. On the Routes page, select the route for which you want to specify header transformation request policies.
4. Click Show Route Request Policies.

5. Click the Add button beside Header Transformations to update the headers included in a request to the
API gateway for the current route.
6. To limit the headers included in a request, specify:
• Action: Filter.
• Type: Either Block to remove from the request the headers you explicitly list, or Allow to only allow in the
request the headers you explicitly list (any other headers are removed from the request).
• Header Names: The list of headers to remove from the request or allow in the request (depending on the
setting of Type). The names you specify are not case-sensitive, and must not be included in any other
transformation request policies for the route (with the exception of items you filter as allowed). For example,
User-Agent.
7. To change the name of a header included in a request (whilst keeping its original value), specify:
• Action: Rename.
• From: The original name of the header that you are renaming. The name you specify is not case-sensitive, and
must not be included in any other transformation request policies for the route. For example, X-Username.
• To: The new name of the header you are renaming. The name you specify is not case-sensitive (capitalization
might be ignored), and must not be included in any other transformation request policies for the route (with the
exception of items you filter as allowed). For example, X-User-ID.
8. To add a new header to a request (or to change or retain the values of an existing header already included in a
request), specify:
• Action: Set.
• Behavior: If the header already exists, specify what to do with the header's existing value:
• Overwrite, to replace the header's existing value with the value you specify.
• Append, to append the value you specify to the header's existing value.
• Skip, to keep the header's existing value.
• Name: The name of the header to add to the request (or to change the value of). The name you specify is not
case-sensitive (capitalization might be ignored), and must not be included in any other transformation request
policies for the route (with the exception of items you filter as allowed). For example, X-Api-Key.
• Values: The value of the new header (or the value to replace or append to an existing header's value,
depending on the setting of Behavior). The value you specify can be a simple string, or can include context
variables enclosed within ${...} delimiters. For example, "value": "zyx987wvu654tsu321",
"value": "${request.path[region]}", "value": "${request.headers[opc-
request-id]}". You can specify multiple values.
9. Click Save Changes, and then click Next to review the details you entered for individual routes.
10. Click Create or Save Changes to create or update the API deployment.
11. (Optional) Confirm the API has been deployed successfully by calling it (see Calling an API Deployed on an API
Gateway on page 388).

Editing a JSON File to Add Header Transformation Request Policies


To add header transformation request policies to an API deployment specification in a JSON file:
1. Using your preferred JSON editor, edit the existing API deployment specification to which you want to add
header transformation request policies, or create a new API deployment specification (see Creating an API
Deployment Specification on page 365).
For example, the following basic API deployment specification defines a simple Hello World serverless function
in Oracle Functions as a single back end:

{
"routes": [
{
"path": "/hello",
"methods": ["GET"],
"backend": {

"type": "ORACLE_FUNCTIONS_BACKEND",
"functionId": "ocid1.fnfunc.oc1.phx.aaaaaaaaab______xmq"
}
}
]
}
2. Insert a requestPolicies section after the backend section for the route to which you want the header
transformation request policy to apply. For example:

{
"routes": [
{
"path": "/hello",
"methods": ["GET"],
"backend": {
"type": "ORACLE_FUNCTIONS_BACKEND",
"functionId": "ocid1.fnfunc.oc1.phx.aaaaaaaaab______xmq"
},
"requestPolicies": {}
}
]
}
3. Add a headerTransformations section to the requestPolicies section.

{
"routes": [
{
"path": "/hello",
"methods": ["GET"],
"backend": {
"type": "ORACLE_FUNCTIONS_BACKEND",
"functionId": "ocid1.fnfunc.oc1.phx.aaaaaaaaab______xmq"
},
"requestPolicies": {
"headerTransformations":{}
}
}
]
}
4. To limit the headers included in a request, specify a filterHeaders header transformation request policy:

{
"routes": [
{
"path": "/hello",
"methods": ["GET"],
"backend": {
"type": "ORACLE_FUNCTIONS_BACKEND",
"functionId": "ocid1.fnfunc.oc1.phx.aaaaaaaaab______xmq"
},
"requestPolicies": {
"headerTransformations": {
"filterHeaders": {
"type": "<BLOCK|ALLOW>",
"items": [
{
"name": "<header-name>"
}
]
}

}
}
}
]
}

where:
• "type": "<BLOCK|ALLOW>" indicates what to do with the headers specified by "items":
[{"name":"<header-name>"}]:
• Use BLOCK to remove from the request the headers you explicitly list.
• Use ALLOW to only allow in the request the headers you explicitly list (any other headers are removed from
the request).
• "name":"<header-name> is a header to remove from the request or allow in the request (depending on
the setting of "type": "<BLOCK|ALLOW>"). The name you specify is not case-sensitive, and must not be
included in any other transformation request policies for the route (with the exception of items in ALLOW lists).
For example, User-Agent.
You can remove and allow up to 50 headers in a filterHeaders header transformation request policy.
For example:

{
"routes": [
{
"path": "/hello",
"methods": ["GET"],
"backend": {
"type": "ORACLE_FUNCTIONS_BACKEND",
"functionId": "ocid1.fnfunc.oc1.phx.aaaaaaaaab______xmq"
},
"requestPolicies": {
"headerTransformations": {
"filterHeaders": {
"type": "BLOCK",
"items": [
{
"name": "User-Agent"
}
]
}
}
}
}
]
}

In this example, the API gateway removes the User-Agent header from all incoming requests.
5. To change the name of a header included in a request (whilst keeping its original value), specify a
renameHeaders header transformation request policy:

{
"routes": [
{
"path": "/hello",
"methods": ["GET"],
"backend": {
"type": "ORACLE_FUNCTIONS_BACKEND",
"functionId": "ocid1.fnfunc.oc1.phx.aaaaaaaaab______xmq"
},
"requestPolicies": {

"headerTransformations": {
"renameHeaders": {
"items": [
{
"from": "<original-name>",
"to": "<new-name>"
}
]
}
}
}
}
]
}

where:
• "from": "<original-name>" is the original name of the header that you are renaming. The name you
specify is not case-sensitive, and must not be included in any other transformation request policies for the
route. For example, X-Username.
• "to": "<new-name>" is the new name of the header you are renaming. The name you specify is not
case-sensitive (capitalization might be ignored), and must not be included in any other transformation request
policies for the route (with the exception of items in ALLOW lists). For example, X-User-ID.
You can rename up to 20 headers in a renameHeaders header transformation request policy.
For example:

{
"routes": [
{
"path": "/hello",
"methods": ["GET"],
"backend": {
"type": "ORACLE_FUNCTIONS_BACKEND",
"functionId": "ocid1.fnfunc.oc1.phx.aaaaaaaaab______xmq"
},
"requestPolicies": {
"headerTransformations": {
"renameHeaders": {
"items": [
{
"from": "X-Username",
"to": "X-User-ID"
}
]
}
}
}
}
]
}

In this example, the API gateway renames any X-Username header to X-User-ID, whilst keeping the header's
original value.
6. To add a new header to a request (or to change or retain the values of an existing header already included in a
request), specify a setHeaders header transformation request policy:

{
"routes": [
{
"path": "/hello",

"methods": ["GET"],
"backend": {
"type": "ORACLE_FUNCTIONS_BACKEND",
"functionId": "ocid1.fnfunc.oc1.phx.aaaaaaaaab______xmq"
},
"requestPolicies": {
"headerTransformations": {
"setHeaders": {
"items": [
{
"name": "<header-name>",
"values": ["<header-value>"],
"ifExists": "<OVERWRITE|APPEND|SKIP>"
}
]
}
}
}
}
]
}

where:
• "name":"<header-name> is the name of the header to add to the request (or to change the value of). The
name you specify is not case-sensitive, and must not be included in any other transformation request policies
for the route (with the exception of items in ALLOW lists). For example, X-Api-Key.
• "values": ["<header-value>"] is the value of the new header (or the value to replace or
append to an existing header's value, depending on the setting of "ifExists": "<OVERWRITE|
APPEND|SKIP>"). The value you specify can be a simple string, or can include context variables enclosed
within ${...} delimiters. For example, "values": "zyx987wvu654tsu321", "values":
"${request.path[region]}", "values": "${request.headers[opc-request-id]}".
You can specify up to 10 values. If you specify multiple values, the API gateway adds a header for each value.
• "ifExists": "<OVERWRITE|APPEND|SKIP>" indicates what to do with the header's existing value if
the header specified by <header-name> already exists:
• Use OVERWRITE to replace the header's existing value with the value you specify.
• Use APPEND to append the value you specify to the header's existing value.
• Use SKIP to keep the header's existing value.
If not specified, the default is OVERWRITE.
You can add (or change the values of) up to 20 headers in a setHeaders header transformation request policy.
For example:

{
"routes": [
{
"path": "/hello",
"methods": ["GET"],
"backend": {
"type": "ORACLE_FUNCTIONS_BACKEND",
"functionId": "ocid1.fnfunc.oc1.phx.aaaaaaaaab______xmq"
},
"requestPolicies": {
"headerTransformations": {
"setHeaders": {
"items": [
{
"name": "X-Api-Key",
"values": ["zyx987wvu654tsu321"],

"ifExists": "OVERWRITE"
}
]
}
}
}
}
]
}

In this example, the API gateway adds the X-Api-Key:zyx987wvu654tsu321 header to all incoming
requests. If an incoming request already has an X-Api-Key header set to a different value, the API gateway
replaces the existing value with zyx987wvu654tsu321.
7. Save the JSON file containing the API deployment specification.
8. Use the API deployment specification when you create or update an API deployment in the following ways:
• by specifying the JSON file in the Console when you select the Upload an existing API option
• by specifying the JSON file in a request to the API Gateway REST API
For more information, see Deploying an API on an API Gateway by Creating an API Deployment on page 367
and Updating API Gateways and API Deployments on page 391.
9. (Optional) Confirm the API has been deployed successfully by calling it (see Calling an API Deployed on an API
Gateway on page 388).
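Note that filterHeaders, renameHeaders, and setHeaders are sibling keys of the same headerTransformations object, so
a route is not limited to a single kind of header transformation. The following fragment is a minimal sketch that
combines the three examples above on one route; it assumes the same Hello World function back end, and that the
header names used in each policy do not overlap.

{
  "routes": [
    {
      "path": "/hello",
      "methods": ["GET"],
      "backend": {
        "type": "ORACLE_FUNCTIONS_BACKEND",
        "functionId": "ocid1.fnfunc.oc1.phx.aaaaaaaaab______xmq"
      },
      "requestPolicies": {
        "headerTransformations": {
          "filterHeaders": {
            "type": "BLOCK",
            "items": [{"name": "User-Agent"}]
          },
          "renameHeaders": {
            "items": [{"from": "X-Username", "to": "X-User-ID"}]
          },
          "setHeaders": {
            "items": [
              {
                "name": "X-Api-Key",
                "values": ["zyx987wvu654tsu321"],
                "ifExists": "OVERWRITE"
              }
            ]
          }
        }
      }
    }
  ]
}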

Adding Query Parameter Transformation Request Policies


You can add query parameter transformation request policies to API deployment specifications using the Console or
by editing a JSON file.

Using the Console to Add Query Parameter Transformation Request Policies


To add query parameter transformation request policies to an API deployment specification using the Console:
1. Create or update an API deployment using the Console, select the From Scratch option, and enter details on the
Basic Information page.
For more information, see Deploying an API on an API Gateway by Creating an API Deployment on page 367
and Updating API Gateways and API Deployments on page 391.
2. Click Save Changes, and then click Next to enter details for individual routes in the API deployment on the
Routes page.
3. On the Routes page, select the route for which you want to specify query parameter transformation request
policies.
4. Click Show Route Request Policies.
5. Click the Add button beside Query Parameter Transformations to update the query parameters included in a
request to the API gateway for the current route.
6. To limit the query parameters included in a request, specify:
• Action: Filter.
• Type: Either Block to remove from the request the query parameters you explicitly list, or Allow to only
allow in the request the query parameters you explicitly list (any other query parameters are removed from the
request).
• Query Parameter Names: The list of query parameters to remove from the request or allow in the request
(depending on the setting of Type). The names you specify are case-sensitive, and must not be included in
any other transformation request policies for the route (with the exception of items you filter as allowed). For
example, User-Agent.

7. To change the name of a query parameter included in a request (whilst keeping its original value), specify:
• Action: Rename.
• From: The original name of the query parameter that you are renaming. The name you specify is case-
sensitive, and must not be included in any other transformation request policies for the route. For example, X-
Username.
• To: The new name of the query parameter you are renaming. The name you specify is case-sensitive
(capitalization is respected), and must not be included in any other transformation request policies for the route
(with the exception of items you filter as allowed). For example, X-User-ID.
8. To add a new query parameter to a request (or to change or retain the values of an existing query parameter
already included in a request), specify:
• Action: Set.
• Behavior: If the query parameter already exists, specify what to do with the query parameter's existing value:
• Overwrite, to replace the query parameter's existing value with the value you specify.
• Append, to append the value you specify to the query parameter's existing value.
• Skip, to keep the query parameter's existing value.
• Query Parameter Name: The name of the query parameter to add to the request (or to change the value of).
The name you specify is case-sensitive, and must not be included in any other transformation request policies
for the route (with the exception of items you filter as allowed). For example, X-Api-Key.
• Values: The value of the new query parameter (or the value to replace or append to an existing query
parameter's value, depending on the setting of Behavior). The value you specify can be a simple
string, or can include context variables enclosed within ${...} delimiters. For example, "value":
"zyx987wvu654tsu321", "value": "${request.path[region]}", "value":
"${request.headers[opc-request-id]}". You can specify multiple values.
9. Click Save Changes, and then click Next to review the details you entered for individual routes.
10. Click Create or Save Changes to create or update the API deployment.
11. (Optional) Confirm the API has been deployed successfully by calling it (see Calling an API Deployed on an API
Gateway on page 388).

Editing a JSON File to Add Query Parameter Transformation Request Policies


To add query parameter transformation request policies to an API deployment specification in a JSON file:
1. Using your preferred JSON editor, edit the existing API deployment specification to which you want to add query
parameter transformation request policies, or create a new API deployment specification (see Creating an API
Deployment Specification on page 365).
For example, the following basic API deployment specification defines a simple Hello World serverless function
in Oracle Functions as a single back end:

{
"routes": [
{
"path": "/hello",
"methods": ["GET"],
"backend": {
"type": "ORACLE_FUNCTIONS_BACKEND",
"functionId": "ocid1.fnfunc.oc1.phx.aaaaaaaaab______xmq"
}
}
]
}
2. Insert a requestPolicies section after the backend section for the route to which you want the query
parameter transformation request policy to apply. For example:

{
"routes": [
{
"path": "/hello",
"methods": ["GET"],
"backend": {
"type": "ORACLE_FUNCTIONS_BACKEND",
"functionId": "ocid1.fnfunc.oc1.phx.aaaaaaaaab______xmq"
},
"requestPolicies": {}
}
]
}
3. Add a queryParameterTransformations section to the requestPolicies section.

{
"routes": [
{
"path": "/hello",
"methods": ["GET"],
"backend": {
"type": "ORACLE_FUNCTIONS_BACKEND",
"functionId": "ocid1.fnfunc.oc1.phx.aaaaaaaaab______xmq"
},
"requestPolicies": {
"queryParameterTransformations":{}
}
}
]
}
4. To limit the query parameters included in a request, specify a filterQueryParameters query parameters
transformation request policy:

{
"routes": [
{
"path": "/hello",
"methods": ["GET"],
"backend": {
"type": "ORACLE_FUNCTIONS_BACKEND",
"functionId": "ocid1.fnfunc.oc1.phx.aaaaaaaaab______xmq"
},
"requestPolicies": {
"queryParameterTransformations": {
"filterQueryParameters": {
"type": "<BLOCK|ALLOW>",
"items": [
{
"name": "<query-parameter-name>"
}
]
}
}
}
}
  ]
}
where:
• "type": "<BLOCK|ALLOW>" indicates what to do with the query parameters specified by "items":
[{"name":"<query-parameter-name>"}]:
• Use BLOCK to remove from the request the query parameters you explicitly list.
• Use ALLOW to only allow in the request the query parameters you explicitly list (any other query
parameters are removed from the request).
• "name":"<query-parameter-name> is a query parameter to remove from the request or allow
in the request (depending on the setting of "type": "<BLOCK|ALLOW>"). The name you specify is
case-sensitive, and must not be included in any other transformation request policies for the route (with the
exception of items in ALLOW lists). For example, User-Agent.
You can remove and allow up to 50 query parameters in a filterQueryParameters query parameter
transformation request policy.
For example:

{
"routes": [
{
"path": "/hello",
"methods": ["GET"],
"backend": {
"type": "ORACLE_FUNCTIONS_BACKEND",
"functionId": "ocid1.fnfunc.oc1.phx.aaaaaaaaab______xmq"
},
"requestPolicies": {
"queryParameterTransformations": {
"filterQueryParameters": {
"type": "BLOCK",
"items": [
{
"name": "User-Agent"
}
]
}
}
}
}
]
}

In this example, the API gateway removes the User-Agent query parameter from all incoming requests.
5. To change the name of a query parameter included in a request (whilst keeping its original value), specify a
renameQueryParameters query parameter transformation request policy:

{
"routes": [
{
"path": "/hello",
"methods": ["GET"],
"backend": {
"type": "ORACLE_FUNCTIONS_BACKEND",
"functionId": "ocid1.fnfunc.oc1.phx.aaaaaaaaab______xmq"
},
"requestPolicies": {
"queryParameterTransformations": {
"renameQueryParameters": {
"items": [

{
"from": "<original-name>",
"to": "<new-name>"
}
]
}
}
}
}
]
}

where:
• "from": "<original-name>" is the original name of the query parameter that you are renaming. The
name you specify is case-sensitive, and must not be included in any other transformation request policies for
the route. For example, X-Username.
• "to": "<new-name>" is the new name of the query parameter you are renaming. The name you specify
is case-sensitive (capitalization is respected), and must not be included in any other transformation request
policies for the route (with the exception of items in ALLOW lists). For example, X-User-ID.
You can rename up to 20 query parameters in a renameQueryParameters query parameter transformation
request policy.
For example:

{
"routes": [
{
"path": "/hello",
"methods": ["GET"],
"backend": {
"type": "ORACLE_FUNCTIONS_BACKEND",
"functionId": "ocid1.fnfunc.oc1.phx.aaaaaaaaab______xmq"
},
"requestPolicies": {
"queryParameterTransformations": {
"renameQueryParameters": {
"items": [
{
"from": "X-Username",
"to": "X-User-ID"
}
]
}
}
}
}
]
}

In this example, the API gateway renames any X-Username query parameter to X-User-ID, whilst keeping
the query parameter's original value.
6. To add a new query parameter to a request (or to change or retain the values of an existing query parameter
already included in a request), specify a setQueryParameters query parameter transformation request
policy:

{
"routes": [
{
"path": "/hello",
"methods": ["GET"],

"backend": {
"type": "ORACLE_FUNCTIONS_BACKEND",
"functionId": "ocid1.fnfunc.oc1.phx.aaaaaaaaab______xmq"
},
"requestPolicies": {
"queryParameterTransformations": {
"setQueryParameters": {
"items": [
{
"name": "<query-parameter-name>",
"values": ["<query-parameter-value>"],
"ifExists": "<OVERWRITE|APPEND|SKIP>"
}
]
}
}
}
}
]
}

where:
• "name": "<query-parameter-name>" is the name of the query parameter to add to the request
(or to change the value of). The name you specify is case-sensitive, and must not be included in any other
transformation request policies for the route (with the exception of items in ALLOW lists). For example, X-
Api-Key.
• "values": ["<query-parameter-value>"] is the value of the new query parameter (or the value
to replace or append to an existing query parameter's value, depending on the setting of "ifExists":
"<OVERWRITE|APPEND|SKIP>"). The value you specify can be a simple string, or can include context
variables enclosed within ${...} delimiters. For example, "values": "zyx987wvu654tsu321",
"values": "${request.path[region]}", "values": "${request.headers[opc-
request-id]}".
You can specify up to 10 values. If you specify multiple values, the API gateway adds a query parameter for
each value.
• "ifExists": "<OVERWRITE|APPEND|SKIP>" indicates what to do with the query parameter's
existing value if the query parameter specified by <query-parameter-name> already exists:
• Use OVERWRITE to replace the query parameter's existing value with the value you specify.
• Use APPEND to append the value you specify to the query parameter's existing value.
• Use SKIP to keep the query parameter's existing value.
If not specified, the default is OVERWRITE.
You can add (or change the values of) up to 20 query parameters in a setQueryParameters query parameter
transformation request policy.
For example:

{
"routes": [
{
"path": "/hello",
"methods": ["GET"],
"backend": {
"type": "ORACLE_FUNCTIONS_BACKEND",
"functionId": "ocid1.fnfunc.oc1.phx.aaaaaaaaab______xmq"
},
"requestPolicies": {
"queryParameterTransformations": {
"setQueryParameters": {

"items": [
{
"name": "X-Api-Key",
"values": ["zyx987wvu654tsu321"],
"ifExists": "OVERWRITE"
}
]
}
}
}
}
]
}

In this example, the API gateway adds the X-Api-Key:zyx987wvu654tsu321 query parameter to all
incoming requests. If an incoming request already has an X-Api-Key query parameter set to a different value,
the API gateway replaces the existing value with zyx987wvu654tsu321.
7. Save the JSON file containing the API deployment specification.
8. Use the API deployment specification when you create or update an API deployment in the following ways:
• by specifying the JSON file in the Console when you select the Upload an existing API option
• by specifying the JSON file in a request to the API Gateway REST API
For more information, see Deploying an API on an API Gateway by Creating an API Deployment on page 367
and Updating API Gateways and API Deployments on page 391.
9. (Optional) Confirm the API has been deployed successfully by calling it (see Calling an API Deployed on an API
Gateway on page 388).
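As mentioned in step 6, when a setQueryParameters policy specifies multiple values, the API gateway adds a query
parameter for each value. The following fragment is a minimal sketch of that behavior; the category parameter name
and its values are hypothetical.

"requestPolicies": {
  "queryParameterTransformations": {
    "setQueryParameters": {
      "items": [
        {
          "name": "category",
          "values": ["news", "sport"],
          "ifExists": "OVERWRITE"
        }
      ]
    }
  }
}

With this policy, a request to the route reaches the back end with both category=news and category=sport in its
query string.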

Adding Header Transformation Response Policies


You can add header transformation response policies to API deployment specifications using the Console or by
editing a JSON file.

Using the Console to Add Header Transformation Response Policies


To add header transformation response policies to an API deployment specification using the Console:
1. Create or update an API deployment using the Console, select the From Scratch option, and enter details on the
Basic Information page.
For more information, see Deploying an API on an API Gateway by Creating an API Deployment on page 367
and Updating API Gateways and API Deployments on page 391.
2. Click Save Changes, and then click Next to enter details for individual routes in the API deployment on the
Routes page.
3. On the Routes page, select the route for which you want to specify header transformation response policies.
4. Click Show Route Response Policies.
5. Click the Add button beside Header Transformations to update the headers included in a response from the
API gateway for the current route.
6. To limit the headers included in a response, specify:
• Action: Filter.
• Type: Either Block to remove from the response the headers you explicitly list, or Allow to only allow in the
response the headers you explicitly list (any other headers are removed from the response).
• Header Names: The list of headers to remove from the response or allow in the response (depending on
the setting of Type). The names you specify are not case-sensitive, and must not be included in any other
transformation response policies for the route (with the exception of items you filter as allowed). For example,
User-Agent.

7. To change the name of a header included in a response (whilst keeping its original value), specify:
• Action: Rename.
• From: The original name of the header that you are renaming. The name you specify is not case-sensitive, and
must not be included in any other transformation response policies for the route. For example, X-Username.
• To: The new name of the header you are renaming. The name you specify is not case-sensitive (capitalization
might be ignored), and must not be included in any other transformation response policies for the route (with
the exception of items in ALLOW lists). For example, X-User-ID.
8. To add a new header to a response (or to change or retain the values of an existing header already included in a
response), specify:
• Action: Set.
• Behavior: If the header already exists, specify what to do with the header's existing value:
• Overwrite, to replace the header's existing value with the value you specify.
• Append, to append the value you specify to the header's existing value.
• Skip, to keep the header's existing value.
• Name: The name of the header to add to the response (or to change the value of). The name you specify is not
case-sensitive, and must not be included in any other transformation response policies for the route (with the
exception of items you filter as allowed). For example, X-Api-Key.
• Values: The value of the new header (or the value to replace or append to an existing header's value,
depending on the setting of Behavior). The value you specify can be a simple string, or can include context
variables enclosed within ${...} delimiters. For example, "value": "zyx987wvu654tsu321". You
can specify multiple values.
9. Click Save Changes, and then click Next to review the details you entered for individual routes.
10. Click Create or Save Changes to create or update the API deployment.
11. (Optional) Confirm the API has been deployed successfully by calling it (see Calling an API Deployed on an API
Gateway on page 388).

Editing a JSON File to Add Header Transformation Response Policies


To add header transformation response policies to an API deployment specification in a JSON file:
1. Using your preferred JSON editor, edit the existing API deployment specification to which you want to add
header transformation response policies, or create a new API deployment specification (see Creating an API
Deployment Specification on page 365).
For example, the following basic API deployment specification defines a simple Hello World serverless function
in Oracle Functions as a single back end:

{
"routes": [
{
"path": "/hello",
"methods": ["GET"],
"backend": {
"type": "ORACLE_FUNCTIONS_BACKEND",
"functionId": "ocid1.fnfunc.oc1.phx.aaaaaaaaab______xmq"
}
}
]
}
2. Insert a responsePolicies section after the backend section for the route to which you want the header
transformation response policy to apply. For example:

{
"routes": [
{
"path": "/hello",

"methods": ["GET"],
"backend": {
"type": "ORACLE_FUNCTIONS_BACKEND",
"functionId": "ocid1.fnfunc.oc1.phx.aaaaaaaaab______xmq"
},
"responsePolicies": {}
}
]
}
3. Add a headerTransformations section to the responsePolicies section.

{
"routes": [
{
"path": "/hello",
"methods": ["GET"],
"backend": {
"type": "ORACLE_FUNCTIONS_BACKEND",
"functionId": "ocid1.fnfunc.oc1.phx.aaaaaaaaab______xmq"
},
"responsePolicies": {
"headerTransformations":{}
}
}
]
}
4. To limit the headers included in a response, specify a filterHeaders header transformation response policy:

{
"routes": [
{
"path": "/hello",
"methods": ["GET"],
"backend": {
"type": "ORACLE_FUNCTIONS_BACKEND",
"functionId": "ocid1.fnfunc.oc1.phx.aaaaaaaaab______xmq"
},
"responsePolicies": {
"headerTransformations": {
"filterHeaders": {
"type": "<BLOCK|ALLOW>",
"items": [
{
"name": "<header-name>"
}
]
}
}
}
}
  ]
}
where:
• "type": "<BLOCK|ALLOW>" indicates what to do with the headers specified by "items":
[{"name":"<header-name>"}]:
• Use BLOCK to remove from the response the headers you explicitly list.
• Use ALLOW to only allow in the response the headers you explicitly list (any other headers are removed
from the response).
• "name":"<header-name> is a header to remove from the response or allow in the response (depending
on the setting of "type": "<BLOCK|ALLOW>"). The name you specify is not case-sensitive, and must not
be included in any other transformation response policies for the route (with the exception of items in ALLOW
lists). For example, User-Agent.
You can remove and allow up to 20 headers in a filterHeaders header transformation response policy.
For example:

{
"routes": [
{
"path": "/hello",
"methods": ["GET"],
"backend": {
"type": "ORACLE_FUNCTIONS_BACKEND",
"functionId": "ocid1.fnfunc.oc1.phx.aaaaaaaaab______xmq"
},
"responsePolicies": {
"headerTransformations": {
"filterHeaders": {
"type": "BLOCK",
"items": [
{
"name": "User-Agent"
}
]
}
}
}
}
]
}

In this example, the API gateway removes the User-Agent header from all outgoing responses.
5. To change the name of a header included in a response (whilst keeping its original value), specify a
renameHeaders header transformation response policy:

{
"routes": [
{
"path": "/hello",
"methods": ["GET"],
"backend": {
"type": "ORACLE_FUNCTIONS_BACKEND",
"functionId": "ocid1.fnfunc.oc1.phx.aaaaaaaaab______xmq"
},
"responsePolicies": {
"headerTransformations": {
"renameHeaders": {
"items": [
{

"from": "<original-name>",
"to": "<new-name>"
}
]
}
}
}
}
]
}

where:
• "from": "<original-name>" is the original name of the header that you are renaming. The name you
specify is not case-sensitive, and must not be included in any other transformation response policies for the
route. For example, X-Username.
• "to": "<new-name>" is the new name of the header you are renaming. The name you specify is not case-
sensitive (capitalization might be ignored), and must not be included in any other transformation response
policies for the route (with the exception of items in ALLOW lists). For example, X-User-ID.
You can rename up to 20 headers in a renameHeaders header transformation response policy.
For example:

{
"routes": [
{
"path": "/hello",
"methods": ["GET"],
"backend": {
"type": "ORACLE_FUNCTIONS_BACKEND",
"functionId": "ocid1.fnfunc.oc1.phx.aaaaaaaaab______xmq"
},
"responsePolicies": {
"headerTransformations": {
"renameHeaders": {
"items": [
{
"from": "X-Username",
"to": "X-User-ID"
}
]
}
}
}
}
]
}

In this example, the API gateway renames any X-Username header to X-User-ID, whilst keeping the header's
original value.
6. To add a new header to a response (or to change or retain the values of an existing header already included in a
response), specify a setHeaders header transformation response policy:

{
"routes": [
{
"path": "/hello",
"methods": ["GET"],
"backend": {
"type": "ORACLE_FUNCTIONS_BACKEND",
"functionId": "ocid1.fnfunc.oc1.phx.aaaaaaaaab______xmq"

},
"responsePolicies": {
"headerTransformations": {
"setHeaders": {
"items": [
{
"name": "<header-name>",
"values": ["<header-value>"],
"ifExists": "<OVERWRITE|APPEND|SKIP>"
}
]
}
}
}
}
]
}

where:
• "name":"<header-name> is the name of the header to add to the response (or to change the value of).
The name you specify is not case-sensitive, and must not be included in any other transformation response
policies for the route (with the exception of items in ALLOW lists). For example, X-Api-Key.
• "values": ["<header-value>"] is the value of the new header (or the value to replace or append
to an existing header's value, depending on the setting of "ifExists": "<OVERWRITE|APPEND|
SKIP>"). The value you specify can be a simple string, or can include context variables enclosed within
${...} delimiters. For example, "values": "zyx987wvu654tsu321".
You can specify up to 10 values. If you specify multiple values, the API gateway adds a header for each value.
• "ifExists": "<OVERWRITE|APPEND|SKIP>" indicates what to do with the header's existing value if
the header specified by <header-name> already exists:
• Use OVERWRITE to replace the header's existing value with the value you specify.
• Use APPEND to append the value you specify to the header's existing value.
• Use SKIP to keep the header's existing value.
If not specified, the default is OVERWRITE.
You can add (or change the values of) up to 20 headers in a setHeaders header transformation response policy.
For example:

{
"routes": [
{
"path": "/hello",
"methods": ["GET"],
"backend": {
"type": "ORACLE_FUNCTIONS_BACKEND",
"functionId": "ocid1.fnfunc.oc1.phx.aaaaaaaaab______xmq"
},
"responsePolicies": {
"headerTransformations": {
"setHeaders": {
"items": [
{
"name": "X-Api-Key",
"values": ["zyx987wvu654tsu321"],
"ifExists": "OVERWRITE"
}
]
}
}

}
}
]
}

In this example, the API gateway adds the X-Api-Key:zyx987wvu654tsu321 header to all outgoing
responses. If an outgoing response already has an X-Api-Key header set to a different value, the API gateway
replaces the existing value with zyx987wvu654tsu321.
7. Save the JSON file containing the API deployment specification.
8. Use the API deployment specification when you create or update an API deployment in the following ways:
• by specifying the JSON file in the Console when you select the Upload an existing API option
• by specifying the JSON file in a request to the API Gateway REST API
For more information, see Deploying an API on an API Gateway by Creating an API Deployment on page 367
and Updating API Gateways and API Deployments on page 391.
9. (Optional) Confirm the API has been deployed successfully by calling it (see Calling an API Deployed on an API
Gateway on page 388).
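As a final illustration, recall from the start of this section that you might want to hide web server version
information from API consumers and API clients. The following fragment is a minimal sketch of a filterHeaders
response policy that removes a Server header from responses returned by the back end for a route; the choice of
header is illustrative.

"responsePolicies": {
  "headerTransformations": {
    "filterHeaders": {
      "type": "BLOCK",
      "items": [
        {
          "name": "Server"
        }
      ]
    }
  }
}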

Examples
The examples in this section assume the following API deployment definition and basic API deployment specification
in a JSON file:

{
"displayName": "Marketing Deployment",
"gatewayId": "ocid1.apigateway.oc1..aaaaaaaab______hga",
"compartmentId": "ocid1.compartment.oc1..aaaaaaaa7______ysq",
"pathPrefix": "/marketing",
"specification": {
"routes": [
{
"path": "/weather",
"methods": ["GET"],
"backend": {
"type": "HTTP_BACKEND",
"url": "https://api.weather.gov"
},
"requestPolicies": {}
}
]
},
"freeformTags": {},
"definedTags": {}
}

Note the examples also apply when you're defining an API deployment specification using dialogs in the Console.

Example 1: Transforming header parameters to query parameters


In this example, assume an existing HTTP back end only handles requests containing query parameters, not header
parameters. However, you want the HTTP back end to handle requests that include header parameters. To achieve
this, you create an API deployment specification that includes a query parameter transformation request policy to pass
the value obtained from a request header to the HTTP back end as a query parameter.

"requestPolicies": {
"queryParameterTransformations": {
"setQueryParameters": {
"items": [
{
"name": "region",

"values": ["${request.headers[region]}"],
"ifExists": "OVERWRITE"
}
]
}
}
}

In this example, a request like curl -H "region: west" https://<gateway-hostname>/


marketing/weather resolves to https://api.weather.gov?region=west.

Example 2: Transforming one header to a different header


In this example, assume an existing HTTP back end only handles requests containing a particular header. However,
you want the HTTP back end to handle requests that include a different header. To achieve this, you create an API
deployment specification that includes a header transformation request policy to take the value obtained from one
request header and pass it to the HTTP back end as a different request header.

"requestPolicies": {
"headerTransformations": {
"setHeaders": {
"items": [
{
"name": "region",
"values": ["${request.headers[locale]}"],
"ifExists": "OVERWRITE"
}
]
}
}
}

In this example, a request like curl -H "locale: west" https://<gateway-hostname>/


marketing/weather resolves to the request curl -H "region: west" https://
api.weather.gov.

Example 3: Adding an authentication parameter obtained from a JWT as a request header


In this example, assume an existing HTTP back end requires the value of the sub claim in a validated JSON Web
Token (JWT) to be included in a request as a header with the name JWT_SUBJECT. The API Gateway service has
saved the value of the sub claim included in the JWT as an authentication parameter in the request.auth table.
To include the value of sub in a header named JWT_SUBJECT, you create an API deployment specification that
includes a header transformation request policy. The request policy obtains the sub value from the request.auth
table and passes it to the HTTP back end as the value of the JWT_SUBJECT header.

"requestPolicies": {
"headerTransformations": {
"setHeaders": {
"items": [
{
"name": "JWT_SUBJECT",
"values": ["${request.auth[sub]}"],
"ifExists": "OVERWRITE"
}
]
}
}
}

In this example, when a request has been successfully validated, the JWT_SUBJECT header is added to the request
passed to the HTTP back end.
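For background, the sub claim lives in the payload segment of the JWT that the API client presents. The following Python sketch is illustrative only: it decodes the payload without verifying the signature, just to show where sub comes from (the API Gateway service performs the actual validation):

import base64
import json

def jwt_claims(token):
    # Decode the JWT payload segment (illustration only; no signature verification).
    payload_b64 = token.split(".")[1]
    padded = payload_b64 + "=" * (-len(payload_b64) % 4)  # restore base64 padding
    return json.loads(base64.urlsafe_b64decode(padded))

# claims = jwt_claims("<jwt-from-client>")
# claims["sub"] is the value the policy above copies into the JWT_SUBJECT header.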

Example 4: Adding a key obtained from an authorizer function as a query parameter


In this example, assume an existing HTTP back end requires requests to include a query parameter named
access_key for authentication purposes. You want the access_key query parameter to have the value of a key
named apiKey that has been returned by an authorizer function that has successfully validated the request. The API
Gateway service has saved the apiKey value as an authentication parameter in the request.auth table.
To include the access_key query parameter in the request, you create an API deployment specification that
includes a query parameter transformation request policy. The request policy obtains the apiKey value from the
request.auth table and passes it to the HTTP back end as the value of the access_key query parameter.

"requestPolicies": {
"queryParameterTransformations": {
"setQueryParameters": {
"items": [
{
"name": "access_key",
"values": ["${request.auth[apiKey]}"],
"ifExists": "OVERWRITE"
}
]
}
}
}

In this example, the access_key query parameter is added to the request passed to the HTTP back end, with the apiKey value from the request.auth table. A request like https://<gateway-hostname>/marketing/weather resolves to a request like https://api.weather.gov?access_key=fw5n9abi0ep.

Example 5: Adding a default value for an optional query parameter


In this example, assume an existing HTTP back end requires requests to include a query parameter named country.
However, the country query parameter is optional, and it's not included by some of the API clients sending
requests. If a request already includes a country query parameter and a value, you want both passed as-is to the
HTTP back end. However, if a request doesn't already include a country query parameter, you want the country
query parameter and a default value added to the request.
To make sure every request includes a country query parameter, you create an API deployment specification that
includes a query parameter transformation request policy. The request policy adds the country query parameter and
a default value to any requests that do not already include the country query parameter.

"requestPolicies": {
"queryParameterTransformations": {
"setQueryParameters": {
"items": [
{
"name": "country",
"values": ["usa"],
"ifExists": "SKIP"
}
]
}
}
}

In this example, a request like https://<gateway-hostname>/marketing/weather resolves to https://api.weather.gov?country=usa. A request like https://<gateway-hostname>/marketing/weather?country=canada resolves to https://api.weather.gov?country=canada.

Troubleshooting API Gateway


This topic covers common issues related to the API Gateway service and how you can address them.

Creating a new API gateway stalls with a state of Creating, or fails


It can take a few minutes to create a new API gateway. While it is being created, the API gateway is shown with a
state of Creating on the Gateways page. When it has been created successfully, the new API gateway is shown with a
state of Active.
If you have waited more than a few minutes for the API gateway to be shown with an Active state (or if the API
gateway creation operation has failed):
1. Click the name of the API gateway, and click Work Requests to see an overview of the API gateway creation
operation.
2. Click the Create Gateway operation to see more information about the operation (including error messages, log
messages, and the status of associated resources) that will help to diagnose the cause of the problem.

Creating a new API deployment stalls with a state of Creating, or fails


It can take a few minutes to create a new API deployment. While it is being created, the API deployment is
shown with a state of Creating on the Gateway Details page. When it has been created successfully, the new
API deployment is shown with a state of Active.
If you have waited more than a few minutes for the API deployment to be shown with an Active state (or if the API
deployment creation operation has failed):
1. Click the name of the API deployment, and click Work Requests to see an overview of the API deployment
creation operation.
2. Click the Create Deployment operation to see more information about the operation (including error messages,
log messages, and the status of associated resources) that will help to diagnose the cause of the problem.

Creating a new API gateway returns a "VNIC attachment failed due to the limit for
number of private IP addresses for this subnet" message
When creating a new API gateway, you might see the following message:

VNIC attachment failed due to the limit for number of private IP addresses
for this subnet

This message indicates that there are not enough private IP addresses available to create the API gateway in the
specified subnet. An API gateway consumes three private IP addresses in the subnet.
To address this issue, use a subnet that has at least three available private IP addresses.
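If you want to check a subnet's remaining capacity programmatically, one approach is sketched below using the OCI Python SDK (assuming a configured SDK client; the subnet OCID is a placeholder, and the exact number of addresses OCI reserves per subnet is treated as approximate):

import ipaddress
import oci

config = oci.config.from_file()  # default ~/.oci/config profile
vcn_client = oci.core.VirtualNetworkClient(config)

subnet_id = "ocid1.subnet.oc1..<unique_ID>"  # the subnet you plan to use
subnet = vcn_client.get_subnet(subnet_id).data

used = oci.pagination.list_call_get_all_results(
    vcn_client.list_private_ips, subnet_id=subnet_id).data

total = ipaddress.ip_network(subnet.cidr_block).num_addresses
print("CIDR %s: %d addresses, %d private IPs already in use"
      % (subnet.cidr_block, total, len(used)))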

Creating a new API gateway returns a "The limit for number of private IP
addresses for this subnet has been exceeded" message
When creating a new API gateway, you might see the following message:

The limit for number of private IP addresses for this subnet has been
exceeded

This message indicates that there are not enough private IP addresses available to create the API gateway in the
specified subnet. An API gateway consumes three private IP addresses in the subnet.

To address this issue, use a subnet that has at least three available private IP addresses.

Creating a new public API gateway returns a "The limit for number of public IP
addresses for this compartment has been exceeded" message
When creating a new public API gateway, you might see the following message:

The limit for number of public IP addresses for this compartment has been
exceeded

This message indicates that there are not enough public IP addresses available to create the public API gateway in the
specified subnet's compartment. When creating a new public API gateway, an attempt is made to create a new public
IP address in the subnet's compartment. The attempt to create the new public IP address fails if the quota for public
IP addresses is exceeded in that compartment or tenancy. Note the public IP address is in addition to the three private
IP addresses consumed by all API gateways.
To address this issue:
• If the compartment's quota is exceeded, contact your tenancy administrator to request additional public IP
addresses for the compartment.
• If the tenancy's quota is exceeded, contact Oracle Support to request an increase to the tenancy's quota.

Creating a new API gateway returns a "Work request was cancelled" message
When creating a new API gateway, you might see the following message:

Work request was cancelled

This message indicates that you cancelled the work request to create the API gateway. The API gateway is shown
with a status of Failed.

Creating a new API gateway returns a "An unexpected error occurred. Contact
Oracle Support for assistance" message
When creating a new API gateway, you might see the following message:

An unexpected error occurred. Contact Oracle Support for assistance.

To address this issue, contact Oracle Support to request assistance.

Creating a new API gateway returns an "Unknown resource <subnet-ocid>, make


sure subnet exists,..." message and a 400 error
When creating a new API gateway using the Console, API, SDK, or CLI, you might see the following error message
and a 400 error code:

Unknown resource <subnet-ocid>, make sure subnet exists, the user can access
the subnet and it is in the same region where the gateway will be created

This message indicates that the API Gateway service cannot access the subnet specified for the new API gateway.
To address this issue, double-check that:
• The subnet exists.
• You can access the subnet.
• The subnet is in the same region in which the API gateway will be created.

API Gateway Internal Limits


This topic describes various internal limits enforced by the API Gateway service, their default values, and whether
you can change them.

API Gateway Resource Limits


The API Gateway service enforces the following internal limits on API gateway resources:

• Number of API gateways: Maximum number of active API gateways per tenant. Default limit value: 10. Can you change it? Yes, contact us.

API Deployment Resource Limits


The API Gateway service enforces the following internal limits on API deployment resources:

• Number of API deployments: Maximum number of active API deployments per gateway. Default limit value: 20. Can you change it? Yes, contact us.
• Number of routes per API deployment: Maximum number of routes defined inside the API deployment specification. Default limit value: 50. Can you change it? Yes, contact us.
• Path prefix length: Maximum length of path for an API deployment. Default limit value: 255 characters. Can you change it? No.
• Route pattern length: Maximum length of path for a route in an API deployment. Default limit value: 2,000 characters. Can you change it? No.
• API deployment specification size: Maximum length of the JSON-encoded API deployment specification in bytes. Default limit value: 1,000,000 bytes. Can you change it? No.
• Stock Response - header length: Maximum length of the UTF-8 encoded JSON of stock response headers. Default limit value: 4096 bytes. Can you change it? No.
• Stock Response - header name length: Maximum length of a stock response header name. Default limit value: 1024 bytes. Can you change it? No.
• Stock Response - header value length: Maximum length of a stock response header value. Default limit value: 4096 bytes. Can you change it? No.
• Stock Response - number of headers: Maximum number of stock response headers. Default limit value: 50. Can you change it? No.
• Stock Response - body size: Maximum body size in UTF-8 bytes. Default limit value: 4096 bytes. Can you change it? No.
• CORS Policy - number of headers: Maximum number of CORS allowed/exposed headers. Default limit value: 50. Can you change it? No.
• CORS Policy - number of allowed methods: Maximum number of CORS allowed methods. Default limit value: 50. Can you change it? No.

API Gateway Certificate Resource Limits


The API Gateway service enforces the following internal limits on API Gateway certificate resources:

• Leaf certificate - maximum length: Maximum length of the leaf certificate. Default limit value: 4096 bits. Can you change it? No.
• Intermediate certificates - maximum length: Maximum combined length of any intermediate certificates. Default limit value: 10240 bits. Can you change it? No.
• Private key - maximum length: Maximum private key size. Default limit value: 4096 bits. Can you change it? No.
• Private key - minimum length: Minimum private key size. Default limit value: 2048 bits. Can you change it? No.

HTTP Back End Resource Limits


The API Gateway service enforces the following internal limits on HTTP back ends:

• Connect timeout: Maximum configurable HTTP back end connect timeout in seconds. Default limit value: 60.0 seconds. Can you change it? Yes, by changing the timeout setting in the API deployment specification to between 1.0 and 75.0 seconds (see Adding an HTTP or HTTPS URL as an API Gateway Back End on page 410).
• Read timeout: Maximum configurable HTTP back end read timeout in seconds. Default limit value: 10.0 seconds. Can you change it? Yes, by changing the timeout setting in the API deployment specification to between 1.0 and 300.0 seconds (see Adding an HTTP or HTTPS URL as an API Gateway Back End on page 410).
• Send timeout: Maximum configurable HTTP back end send timeout in seconds. Default limit value: 10.0 seconds. Can you change it? Yes, by changing the timeout setting in the API deployment specification to between 1.0 and 300.0 seconds (see Adding an HTTP or HTTPS URL as an API Gateway Back End on page 410).

API Gateway Invocation Limits


The API Gateway service enforces the following internal limits on API gateway invocations:

• Simultaneous connections per IP address: Maximum number of simultaneous HTTPS connections from a single IP address to an API gateway. Default limit value: 1000. Can you change it? No.
• Request body size: Maximum request body size. Default limit value: 6 MB. Can you change it? No.
• Request header read timeout: Time between reads of request header bytes. Default limit value: 15 seconds. Can you change it? No.
• Request body read timeout: Time between reads of request body bytes. Default limit value: 15 seconds. Can you change it? No.
• Response body read timeout: Time between sends of response body bytes. Default limit value: 15 seconds. Can you change it? No.
• Maximum header size: Maximum length of header (including method, URI, and headers). Default limit value: 8 KB. Can you change it? No.
• Function back end latency: Maximum duration of a full request to a function back end. Default limit value: 300 seconds. Can you change it? No.
• HTTP back end latency: Maximum duration of a full request to an HTTP back end. Default limit value: 300 seconds. Can you change it? No.

API Gateway Metrics


You can monitor the health, capacity, and performance of API gateways and API deployments managed by the API
Gateway service using metrics, alarms, and notifications.
This topic describes the metrics emitted by the API Gateway service in the oci_apigateway metric namespace.
Resources: gateways

Overview of the API Gateway Service Metrics


The API Gateway service metrics help you measure the connections to API gateways, and the quantity of data
received and sent by API gateways. You can use metrics data to diagnose and troubleshoot API gateway and API
deployment issues.
To view a default set of metrics charts in the Console, navigate to the API gateway you're interested in, and then click
Metrics. You also can use the Monitoring service to create custom queries.
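As a sketch of such a custom query using the OCI Python SDK and a Monitoring Query Language (MQL) expression (the compartment OCID is a placeholder; adjust the query to your needs):

from datetime import datetime, timedelta, timezone
import oci

config = oci.config.from_file()
monitoring = oci.monitoring.MonitoringClient(config)

end = datetime.now(timezone.utc)
details = oci.monitoring.models.SummarizeMetricsDataDetails(
    namespace="oci_apigateway",
    query="HttpRequests[1m].sum()",  # MQL: 1-minute sums of incoming request counts
    start_time=end - timedelta(hours=1),
    end_time=end,
)
response = monitoring.summarize_metrics_data(
    compartment_id="ocid1.compartment.oc1..<unique_ID>",
    summarize_metrics_data_details=details,
)
for series in response.data:
    print(series.dimensions.get("deploymentId"), len(series.aggregated_datapoints))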

Prerequisites
IAM policies: To monitor resources, you must be given the required type of access in a policy written by an
administrator, whether you're using the Console or the REST API with an SDK, CLI, or other tool. The policy must
give you access to the monitoring services as well as the resources being monitored. If you try to perform an action
and get a message that you don’t have permission or are unauthorized, confirm with your administrator the type of
access you've been granted and which compartment you should work in. For more information on user authorizations
for monitoring, see the Authentication and Authorization section for the related service: Monitoring or Notifications.

Available Metrics: oci_apigateway


The metrics listed in the following tables are automatically available for any API gateways you create. You do not
need to enable monitoring on the resource to get these metrics.
API Gateway metrics include the following dimensions:
RESOURCEID
The OCID of the API gateway.
DEPLOYMENTID
The OCID of the API deployment.
ROUTE
The route path for API calls to the back-end service.
HTTPMETHODTYPE
The HTTP methods of incoming connections accepted by the back-end service (such as GET, HEAD, POST,
PUT, DELETE).
HTTPSTATUSCODE
The HTTP response status code received from the API gateway (such as 200, 201, 502, 504).
HTTPSTATUSCATEGORY
The category of the HTTP response status code received from the API gateway (such as 2xx, 3xx, 4xx, 5xx).
BACKENDTYPE
The type of back end by which an API gateway routes requests to a back-end service (such as
HTTP_BACKEND, ORACLE_FUNCTIONS_BACKEND, STOCK_RESPONSE_BACKEND).
BACKENDHTTPSTATUSCODE
The HTTP response status code received from the back end (such as 200, 201, 502, 504).
BACKENDHTTPSTATUSCATEGORY
The category of the HTTP response status code received from the back end (such as 2xx, 3xx, 4xx, 5xx).

Each metric is listed below with its display name, unit, description, and dimensions.

BytesReceived (Bytes Received)
Unit: Bytes. Number of bytes received by the API gateway from API clients.
Dimensions: resourceId, deploymentId, route, httpMethodType, httpStatusCode, httpStatusCategory, backendType

BytesSent (Bytes Sent)
Unit: Bytes. Number of bytes sent by the API gateway to API clients.
Dimensions: resourceId, deploymentId, route, httpMethodType, httpStatusCode, httpStatusCategory, backendType

HttpRequests (API Requests)
Unit: count. Number of incoming API client requests to the API gateway.
Dimensions: resourceId, deploymentId, route, httpMethodType, backendType

HttpResponses (API Responses)
Unit: count. Number of HTTP responses that the API gateway has sent back.
Dimensions: resourceId, deploymentId, route, httpMethodType, httpStatusCode, httpStatusCategory, backendType

BackendHttpResponses (Backend Responses)
Unit: count. Count of the HTTP responses returned by the back-end services.
Dimensions: resourceId, deploymentId, route, httpMethodType, httpStatusCode, httpStatusCategory, backendType, backendHttpStatusCode, backendHttpStatusCategory

Latency (Gateway Latency)
Unit: Seconds. Average time that it takes for a request to be processed and its response to be sent. This is calculated from the time the API gateway receives the first byte of an HTTP request to the time when the response send operation is completed.
Dimensions: resourceId, deploymentId, route, httpMethodType, httpStatusCode, httpStatusCategory, backendType

IntegrationLatency (Backend Latency)
Unit: Seconds. Time between the API gateway sending a request to the back-end service and receiving a response from the back-end service.
Dimensions: resourceId, deploymentId, route, httpMethodType, httpStatusCode, httpStatusCategory, backendType

InternalLatency (Internal Latency)
Unit: Seconds. Time spent internally in the API gateway to process the request.
Dimensions: resourceId, deploymentId, route, httpMethodType, httpStatusCode, httpStatusCategory

Using the Console


To view default metric charts for a single API gateway
1. In the Console, open the navigation menu. Under Solutions and Platform, go to Developer Services and click
API Gateway.
2. Select the region you are using with API Gateway.
3. Select the compartment containing the API gateway for which you want to view metrics.
The Gateways page shows all the API gateways in the compartment you selected.
4. Click the name of the API gateway for which you want to view metrics.
5. Under Resources, click Metrics.
The Metrics page displays a chart for each metric that is emitted by the metric namespace for API Gateway. For
more information about the emitted metrics, see Available Metrics: oci_apigateway on page 483.
For more information about monitoring metrics and using alarms, see Monitoring Overview on page 2686. For
information about notifications for alarms, see Notifications Overview on page 3378.

Not seeing the API gateway metrics data you expect?


If you don't see the metrics data for an API gateway that you expect, see the following possible causes and
resolutions.

Problem: I called an API deployed on an API gateway, but the HTTP Requests chart doesn't show the API call.
Possible cause: You might have called the API outside the time period covered by the HTTP Requests chart.
How to check: Confirm that the Start Time and End Time cover the period when you called the API.
Resolution: Adjust the Start Time and End Time as necessary.

Problem: I called an API deployed on an API gateway, but the HTTP Requests chart doesn't show the API call, even though I called the API between the Start Time and End Time.
Possible cause: Although you called the API between the Start Time and End Time, the x-axis (window of data display) might be excluding the API call.
How to check: Confirm that the x-axis (window of data display) covers the period when the API was called.
Resolution: Adjust the x-axis (window of data display) as necessary.

Problem: I want to see data in the charts as a continuous line over time, but the line has gaps in it.
Possible cause: This is expected behavior. If there is no metrics data to show in the selected interval, the data line is discontinuous.
How to check: Increase the Interval (for example, from 1 minute to 5 minutes, or from 1 minute to 1 hour).
Resolution: Adjust the Interval as necessary.

To view default metric charts for all the API gateways in a compartment
1. Open the navigation menu. Under Solutions and Platform, go to Monitoring and click Service Metrics.
2. Select the region you are using with API Gateway.
3. Select the compartment containing the API gateways for which you want to view metrics.
4. For Metric Namespace, select oci_apigateway.
The Service Metrics page dynamically updates the page to show charts for each metric that is emitted by
the selected metric namespace. For more information about the emitted metrics, see Available Metrics:
oci_apigateway on page 483.
For more information about monitoring metrics and using alarms, see Monitoring Overview on page 2686. For
information about notifications for alarms, see Notifications Overview on page 3378.

Using the API


For information about using the API and signing requests, see REST APIs on page 4409 and Security Credentials on
page 181. For information about SDKs, see Software Development Kits and Command Line Interface on page 4262.
Use the following APIs for monitoring:
• Monitoring API for metrics and alarms
• Notifications API for notifications (used with alarms)

Chapter 7
Archive Storage
This chapter explains how to upload, manage, and access data using Archive Storage.

Overview of Archive Storage


The Archive Storage service is ideal for storing data that is seldom accessed, but requires long retention periods.
Archive Storage is more cost effective than Object Storage for preserving cold data. Unlike Object Storage, Archive
Storage data retrieval is not instantaneous.
Oracle Cloud Infrastructure supports multiple storage tiers that offer cost and performance flexibility. Archive is the
default storage tier for Archive Storage buckets.
Archive Storage is Always Free eligible. For more information about Always Free resources, including capabilities
and limitations, see Oracle Cloud Infrastructure Free Tier on page 142.

Using Archive Storage


Important:

You interact with the data stored in Archive Storage using the same
resources and management interfaces that you use for data stored in Object
Storage. All Object Storage features are also supported in Archive Storage.
Use the following Object Storage resources to store and manage Archive Storage data.

Buckets
Buckets are logical containers for storing objects. A bucket is associated with a single compartment that has policies
that determine what actions a user can perform on a bucket and on all the objects in the bucket.
When you initially create the bucket container for your data, you decide which default storage tier (Archive or
Standard) is appropriate for your data. The default tier is automatically selected when you upload objects to the
bucket, but you can instead select a different tier. Also, if objects meet the criteria of an object lifecycle policy rule,
Object Storage can automatically move objects to the Archive tier while they remain in the Standard tier bucket.
Once set, you cannot change the default storage tier property for a bucket:
• An existing Standard tier bucket cannot be changed to an Archive tier bucket.
• An existing Archive tier bucket cannot be changed to a Standard tier bucket.
In addition to the inability to change the default storage tier designation of a bucket, there are other reasons why
storage tier selection for buckets requires careful consideration:
• The minimum storage retention period for the Archive tier is 90 days. If you delete or overwrite objects from the
Archive tier before the minimum retention requirements are met, you are charged the prorated cost of storing the
data for the full 90 days.
• When you restore objects, you are returning those objects to the Standard tier for access. You are billed for the
Standard class tier while the restored objects reside in that tier.
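To make the prorated-charge rule above concrete, here is a back-of-the-envelope sketch; actual charges depend on your rate card and billing granularity:

MIN_RETENTION_DAYS = 90

def days_charged(days_stored):
    # Days of storage you pay for if an archived object is deleted or overwritten early.
    return max(days_stored, MIN_RETENTION_DAYS)

print(days_charged(30))   # deleted after 30 days: still billed for roughly 90 days
print(days_charged(120))  # kept past the minimum: billed for the actual 120 days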

You can use object lifecycle policy rules to automatically delete objects in an Archive Storage bucket based on the
age of the object. You cannot, however, use object lifecycle policy rules to automatically restore archived objects to
the Standard tier. See Restoring and Downloading Objects for information on restoring objects.
See Managing Buckets on page 3426 for detailed instructions on creating an Archive Storage bucket.

Objects
Any type of data, regardless of content type, is stored as an object. The object is composed of the object itself and
metadata about the object. Each object is stored in a bucket.
You upload objects to an Archive Storage bucket the same way you upload objects to a standard Object Storage
bucket. The difference is that when you upload an object to an Archive Storage bucket, the object is immediately
archived. You must first restore the object before you can download it.
Archived objects are displayed in the object listing of a bucket. You can also display the details of each object.
See Managing Objects on page 3447 for detailed instructions on uploading objects to an Archive Storage bucket.

Restoring and Downloading Objects


To download an object from Archive Storage, you must first restore the object. Restoration takes at most an hour from the time an Archive Storage restore request is made to the time the first byte of data is retrieved. The retrieval time metric is measured as Time To First Byte (TTFB). How long the full restoration takes depends on the size of the object. You can determine the status of the restoration by looking at the object Details. Once the status shows as Restored, you can then download the object.
After an object is restored, you have a window of time to download the object. By default, you have 24 hours to
download an object, but you can alternatively specify a time from 1 to 240 hours. You can find out how much of the
download time is remaining by looking at Available for Download in object Details. After the allotted download
time expires, the object returns to Archive Storage. You can always access the metadata for an object, regardless of
whether the object is in an archived or restored state.
See Managing Objects on page 3447 for detailed instructions on restoring, checking status of, and downloading
Archive Storage objects.
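Restores can also be requested programmatically. The following sketch uses the OCI Python SDK's Object Storage client (the bucket and object names are placeholders; the hours value follows the 1 to 240 hour window described above):

import oci

config = oci.config.from_file()
object_storage = oci.object_storage.ObjectStorageClient(config)

namespace = object_storage.get_namespace().data
object_storage.restore_objects(
    namespace_name=namespace,
    bucket_name="MyArchiveBucket",            # placeholder bucket name
    restore_objects_details=oci.object_storage.models.RestoreObjectsDetails(
        object_name="backups/db-export.dmp",  # placeholder object name
        hours=72,                             # keep the restored copy available for 72 hours
    ),
)
# Check the object's status (for example, with head_object or in the Console) until it
# shows as Restored, then download it with object_storage.get_object(...).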

Ways to Access Archive Storage


Archive Storage and Object Storage use the same management interfaces:
• The Console is an easy-to-use, browser-based interface. To access Archive Storage in the console, do the
following:
• Sign in to the Console.
• Open the navigation menu. Under Core Infrastructure, click Object Storage. A list of the buckets in the
compartment you're viewing is displayed. If you don’t see the one you're looking for, verify that you’re
viewing the correct compartment (select from the list on the left side of the page).
• Click the name of the Archive Storage tier bucket you want to manage.
• The command line interface (CLI) provides both quick access and full functionality without the need for
programming. For more information, see Command Line Interface (CLI) on page 4228.
The syntax for CLI commands includes specifying a service. You use the Object Storage service designation oci
os to manage Archive Storage using the CLI.
• The REST API provides the most functionality, but requires programming expertise. API Reference and
Endpoints provides endpoint details and links to the available API reference documents. For general information
about using the API, see REST APIs on page 4409. Object Storage is accessible with the following APIs:
• Object Storage Service
• Amazon S3 Compatibility API
• Swift API (for use with Oracle RMAN)

• Oracle Cloud Infrastructure provides SDKs that interact with Archive Storage and Object Storage without you
having to create a framework. For general information about using the SDKs, see Software Development Kits and
Command Line Interface on page 4262.

Authentication and Authorization


Each service in Oracle Cloud Infrastructure integrates with IAM for authentication and authorization, for all
interfaces (the Console, SDK or CLI, and REST API). IAM also manages user credentials for things like API signing
keys, auth tokens, and customer secret keys for Amazon S3 Compatibility API. See User Credentials on page 2379
for details.
An administrator in your organization needs to set up groups, compartments, and policies that control which users can
access which services, which resources, and the type of access. For example, the policies control things like who can
create users, create and manage the cloud network, launch instances, create buckets, and download objects. For more
information, see Getting Started with Policies on page 2143. For specific details about writing policies for each of
the different services, see the Policy Reference on page 2176. For specific details about writing policies for Archive
Storage, see Details for Object Storage, Archive Storage, and Data Transfer on page 2343.
If you’re a regular user (not an administrator) who needs to use the Oracle Cloud Infrastructure resources that
your company owns, contact your administrator to set up a user ID for you. The administrator can confirm which
compartment or compartments you should be using.
For administrators:
• The policy Let Object Storage admins manage buckets and objects on page 2157 lets the specified group do
everything with buckets and objects.
• Users that need to restore archived objects require the OBJECT_RESTORE permission.

WORM Compliance
Use retention rules to achieve WORM compliance with Archive Storage so that after the data is written, the data
cannot be overwritten. Retention rules are configured at the bucket level and are applied to all individual objects
in the bucket. You cannot update, overwrite, or delete objects or object metadata until the retention rule is deleted
(indefinite rule) or for the duration specified (time-bound rules). You can, however, always restore an object from
Archive Storage.
For more information, see Using Retention Rules to Preserve Data on page 3486.

Limits on Archive Storage Resources


See Service Limits on page 217 for a list of applicable limits and instructions for requesting a limit increase.
Other limits include:
• Number of namespaces per root compartment: 1
• Maximum object size: 10 TiB
• Maximum object part size in a multipart upload: 50 GiB
• Maximum number of parts in a multipart upload: 10,000
• Maximum object size allowed by PutObject API: 50 GiB
• Maximum size of object metadata: 2 K

Chapter 8
Audit
This chapter explains how to work with audit logs.

Overview of Audit
The Oracle Cloud Infrastructure Audit service automatically records calls to all supported Oracle Cloud Infrastructure
public application programming interface (API) endpoints as log events. Currently, all services support logging by
Audit. Object Storage service supports logging for bucket-related events, but not for object-related events. Log events
recorded by the Audit service include API calls made by the Oracle Cloud Infrastructure Console, Command Line
Interface (CLI), Software Development Kits (SDK), your own custom clients, or other Oracle Cloud Infrastructure
services. Information in the logs includes the following:
• Time the API activity occurred
• Source of the activity
• Target of the activity
• Type of action
• Type of response
Each log event includes a header ID, target resources, timestamp of the recorded event, request parameters, and
response parameters. You can view events logged by the Audit service by using the Console, API, or the SDK for
Java. Data from events can be used to perform diagnostics, track resource usage, monitor compliance, and collect
security-related events.

Version 2 Audit Log Schema


On October 8, 2019, Oracle introduced the Audit version 2 schema, which provides the following benefits:
• Captures state changes of resources
• Better tracking of long running APIs
• Provides troubleshooting information in logs
The new schema is being implemented over time. Oracle continues to provide Audit logs in the version 1 format,
but you cannot access version 1 format logs from the Console. The Console displays only the version 2 format logs.
However, not all resources are emitting logs using the version 2 schema. For those services that are not emitting in the
version 2 format, Oracle converts version 1 logs to version 2 logs, leaving fields blank if information for the version 2
schema cannot be determined.

Ways to Access Oracle Cloud Infrastructure


You can access Oracle Cloud Infrastructure using the Console (a browser-based interface) or the REST API.
Instructions for the Console and API are included in topics throughout this guide. For a list of available SDKs, see
Software Development Kits and Command Line Interface on page 4262.
To access the Console, you must use a supported browser.
Oracle Cloud Infrastructure supports the following browsers and versions:

• Google Chrome 69 or later


• Safari 12.1 or later
• Firefox 62 or later
For general information about using the API, see REST APIs on page 4409.

Authentication and Authorization


Each service in Oracle Cloud Infrastructure integrates with IAM for authentication and authorization, for all
interfaces (the Console, SDK or CLI, and REST API).
An administrator in your organization needs to set up groups, compartments, and policies that control which users
can access which services, which resources, and the type of access. For example, the policies control who can create
new users, create and manage the cloud network, launch instances, create buckets, download objects, etc. For more
information, see Getting Started with Policies on page 2143. For specific details about writing policies for each of
the different services, see Policy Reference on page 2176.
If you’re a regular user (not an administrator) who needs to use the Oracle Cloud Infrastructure resources that
your company owns, contact your administrator to set up a user ID for you. The administrator can confirm which
compartment or compartments you should be using.
Administrators: For an example of a policy that gives groups access to audit logs, see Required IAM Policy on page
498. To modify the Audit log retention period, you must be a member of the Administrators group. See The
Administrators Group and Policy on page 2133.

Contents of an Audit Log Event


The following explains the contents of an Audit log event. Every audit log event includes two main parts:
• Envelopes that act as a container for all event messages
• Payloads that contain data from the resource emitting the event message

Resource Identifiers
Most types of Oracle Cloud Infrastructure resources have a unique, Oracle-assigned identifier called an Oracle
Cloud ID (OCID). For information about the OCID format and other ways to identify your resources, see Resource
Identifiers.

Event Envelope
These attributes for an event envelope are the same for all events. The structure of the envelope follows the
CloudEvents industry standard format hosted by the Cloud Native Computing Foundation ( CNCF).

Property Description
cloudEventsVersion The version of the CloudEvents specification. Note: Audit uses version 0.1 specification of the CloudEvents event envelope.
contentType Set to application/json. The content type of the data contained in the data attribute.
data The payload of the event. Information within data comes from the resource emitting the event.
eventID The UUID of the event. This identifier is not an OCID, but just a unique ID for the event.
eventTime The time of the event, expressed in RFC 3339 timestamp format.
eventType The type of event that happened. Note: The service that produces the event can also add, remove, or change the meaning of a field. A service implementing these type changes would publish a new version of an eventType and revise the eventTypeVersion field.
eventTypeVersion The version of the event type. This version applies to the payload of the event, not the envelope. Use cloudEventsVersion to determine the version of the envelope.
source The resource that produced the event. For example, an Autonomous Database or an Object Storage bucket.

Payload
The data in these fields depends on which service produced the event log and the event type it defines.

Data
The data object contains the following attributes.

Property Description
data.additionalDetails A container object for attributes unique to the resource
emitting the event.
data.availabilityDomain The availability domain where the resource resides.
data.compartmentId The OCID of the compartment of the resource emitting the
event.
data.compartmentName The name of the compartment of the resource emitting the
event.
data.definedTags Defined tags added to the resource emitting the event.
data.eventGroupingId This value links multiple audit events that are part of
the same API operation. For example, a long running
API operation that emits an event at the start and the end of
the operation.

data.eventName Name of the API operation that generated this event.


Example: LaunchInstance

data.freeformTags Free-form tags added to the resource emitting the event.


data.identity A container object for identity attributes. See Identity.

data.request A container object for request attributes. See Request.
data.resourceId An OCID or an ID for the resource emitting the event.
data.resourceName The name of the resource emitting the event.
data.response A container object for response attributes. See Response.
data.stateChange A container object for state change attributes. See State
Change.

Identity
The identity object contains the following attributes.

Property Description
data.identity.authType The type of authentication used.
data.identity.callerId The OCID of the caller. The caller that made a request on
behalf of the principal.
data.identity.callerName The name of the user or service issuing the request. This
value is the friendly name associated with callerId.
data.identity.consoleSessionId This value identifies any Console session associated with
this request.
data.identity.credentials The credential ID of the user.
data.identity.ipAddress The IP address of the source of the request.
data.identity.principalId The OCID of the principal.
data.identity.principalName The name of the user or service. This value is the friendly
name associated with principalId.
data.identity.tenantId The OCID of the tenant.
data.identity.userAgent The user agent of the client that made the request.

Request
The request object contains the following attributes.

Property Description
data.request.action The HTTP method of the request.
Example: GET

data.request.headers The HTTP header fields and values in the request.


data.request.id The unique identifier of a request.
data.request.parameters All the parameters supplied by the caller during this
operation.
data.request.path The full path of the API request.
Example: /20160918/instances/
ocid1.instance.oc1.phx.<unique_ID>

Response
The response object contains the following attributes.

Property Description
data.response.headers The headers of the response.
data.response.message A friendly description of what happened during the
operation.
data.response.payload This value is included for backward compatibility with the
Audit version 1 schema, where it contained metadata of
interest from the response payload.
data.response.responseTime The time of the response to the audited request, expressed
in RFC 3339 timestamp format.
data.response.status The status code of the response.

State Change
The state change object contains the following attributes.

Property Description
data.stateChange.current Provides the current state of fields that may have changed
during an operation. To determine how the current
operation changed a resource, compare the information in
this attribute to data.stateChange.previous.
data.stateChange.previous Provides the previous state of fields that may have changed
during an operation. To determine how the current
operation changed a resource, compare the information in
this attribute to data.stateChange.current.
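Because every event shares this envelope-and-payload structure, post-processing is straightforward. The following minimal Python sketch pulls a few commonly used fields out of a single event supplied as JSON (like the example that follows):

import json

def summarize_event(raw):
    # Return a one-line summary of a single audit event in the version 2 schema.
    event = json.loads(raw)
    data = event.get("data", {})
    identity = data.get("identity") or {}
    response = data.get("response") or {}
    return "%s %s by %s -> %s" % (
        event.get("eventTime"),
        data.get("eventName"),
        identity.get("principalName"),
        response.get("status"),
    )

# with open("audit-event.json") as f:  # a file containing one event
#     print(summarize_event(f.read()))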

An Example Audit Log


The following is an example of an event recorded by the Audit service.

{
"eventType": "com.oraclecloud.ComputeApi.GetInstance",
"cloudEventsVersion": "0.1",
"eventTypeVersion": "2.0",
"source": "ComputeApi",
"eventId": "<unique_ID>",
"eventTime": "2019-09-18T00:10:59.252Z",
"contentType": "application/json",
"data": {
"eventGroupingId": null,
"eventName": "GetInstance",
"compartmentId": "ocid1.tenancy.oc1..<unique_ID>",
"compartmentName": "compartmentA",
"resourceName": "my_instance",
"resourceId": "ocid1.instance.oc1.phx.<unique_ID>",
"availabilityDomain": "<availability_domain>",
"freeformTags": null,
"definedTags": null,
"identity": {
"principalName": "ExampleName",
"principalId": "ocid1.user.oc1..<unique_ID>",
"authType": "natv",
"callerName": null,

"callerId": null,
"tenantId": "ocid1.tenancy.oc1..<unique_ID>",
"ipAddress": "172.24.80.88",
"credentials": null,
"userAgent": "Jersey/2.23 (HttpUrlConnection 1.8.0_212)",
"consoleSessionId": null
},
"request": {
"id": "<unique_ID>",
"path": "/20160918/instances/ocid1.instance.oc1.phx.<unique_ID>",
"action": "GET",
"parameters": {},
"headers": {
"opc-principal": [
"{\"tenantId\":\"ocid1.tenancy.oc1..<unique_ID>\",\"subjectId\":
\"ocid1.user.oc1..<unique_ID>\",\"claims\":[{\"key\":\"pstype\",\"value\":
\"natv\",\"issuer\":\"authService.oracle.com\"},{\"key\":\"h_host\",\"value
\":\"iaas.r2.oracleiaas.com\",\"issuer\":\"h\"},{\"key\":\"h_opc-request-
id\",\"value\":\"<unique_ID>\",\"issuer\":\"h\"},{\"key\":\"ptype\",\"value
\":\"user\",\"issuer\":\"authService.oracle.com\"},{\"key\":\"h_date\",
\"value\":\"Wed, 18 Sep 2019 00:10:58 UTC\",\"issuer\":\"h\"},{\"key\":
\"h_accept\",\"value\":\"application/json\",\"issuer\":\"h\"},{\"key\":
\"authorization\",\"value\":\"Signature headers=\\\"date (request-target)
host accept opc-request-id\\\",keyId=\\\"ocid1.tenancy.oc1..<unique_ID>/
ocid1.user.oc1..<unique_ID>/8c:b4:5f:18:e7:ec:db:08:b8:fa:d2:2a:7d:11:76:ac
\\\",algorithm=\\\"rsa-pss-sha256\\\",signature=\\\"<unique_ID>\\\",version=
\\\"1\\\"\",\"issuer\":\"h\"},{\"key\":\"h_(request-target)\",\"value\":
\"get /20160918/instances/ocid1.instance.oc1.phx.<unique_ID>\",\"issuer\":
\"h\"}]}"
],
"Accept": [
"application/json"
],
"X-Oracle-Auth-Client-CN": [
"splat-proxy-se-02302.node.ad2.r2"
],
"X-Forwarded-Host": [
"compute-api.svc.ad1.r2"
],
"Connection": [
"close"
],
"User-Agent": [
"Jersey/2.23 (HttpUrlConnection 1.8.0_212)"
],
"X-Forwarded-For": [
"172.24.80.88"
],
"X-Real-IP": [
"172.24.80.88"
],
"oci-original-url": [
"https://iaas.r2.oracleiaas.com/20160918/instances/
ocid1.instance.oc1.phx.<unique_ID>"
],
"opc-request-id": [
"<unique_ID>"
],
"Date": [
"Wed, 18 Sep 2019 00:10:58 UTC"
]
}
},
"response": {

"status": "200",
"responseTime": "2019-09-18T00:10:59.278Z",
"headers": {
"ETag": [
"<unique_ID>"
],
"Connection": [
"close"
],
"Content-Length": [
"1828"
],
"opc-request-id": [
"<unique_ID>"
],
"Date": [
"Wed, 18 Sep 2019 00:10:59 GMT"
],
"Content-Type": [
"application/json"
]
},
"payload": {
"resourceName": "my_instance",
"id": "ocid1.instance.oc1.phx.<unique_ID>"
},
"message": null
},
"stateChange": {
"previous": null,
"current": null
},
"additionalDetails": {
"imageId": "ocid1.image.oc1.phx.<unique_ID>",
"shape": "VM.Standard1.1",
"type": "CustomerVmi"
}
}
}

Viewing Audit Log Events


Audit provides records of API operations performed against supported services as a list of log events. The service
logs events at both the tenant and compartment level.
When viewing events logged by Audit, you might be interested in specific activities that happened in the tenancy
or compartment and who was responsible for the activity. To display a list of log events that includes the activity in question, you need to know the approximate time and date something happened and the compartment in which it happened. List log events by specifying a time range on the 24-hour clock in Greenwich Mean Time
(GMT), calculating the offset for your local time zone, as appropriate. New activity is appended to the existing list,
usually within 15 minutes of the API call, though processing time can vary.

Required IAM Policy


To use Oracle Cloud Infrastructure, you must be granted security access in a policy by an administrator. This access
is required whether you're using the Console or the REST API with an SDK, CLI, or other tool. If you get a message
that you don’t have permission or are unauthorized, verify with your administrator what type of access you have and
which compartment you should work in.

For administrators: The following policy statement gives the specified group (Auditors) the ability to view all the
Audit event logs in the tenancy:

Allow group Auditors to read audit-events in tenancy

To give the group access to the Audit event logs in a specific compartment only (ProjectA), write a policy like the
following:

Allow group Auditors to read audit-events in compartment ProjectA

If you're new to policies, see Getting Started with Policies on page 2143 and Common Policies on page 2150. For
more details about policies for the Audit service, see Details for the Audit Service on page 2188.

Searching and Filtering in the Console


When you navigate to Audit in the Console, a list of results is generated for the current compartment. Audit logs are
organized by compartment, so if you are looking for a particular event, you must know which compartment the event
occurred in. You can filter the list in all the following ways:
• Date and time
• Request Action Types (operations)
• Keywords
For example, users begin to report that their attempts to log in are failing. You want to use Audit to research the
problem. Adjust the date and time to search for corresponding failures during a window of time that starts a little
before the events were reported. Look for corresponding failures and similar operations preceding the failures to
correlate a reason for the failures.
Note:

The service logs events at the time they are processed. There can be a delay
between the time an operation occurs and when it is processed.
You can filter results by request actions to zero in on only the events with operations that interest you. For example,
say that you only want to know about instances that were deleted during a specific time frame. Select a delete request
action filter to see only the events with delete operations.
You can also filter by keywords. Keyword filters are powerful when combined with the values from audit event
fields. For example, say that you know the user name of an account and want a list of all activity by that account in a
particular time frame. Do a search using the user name as a keyword filter.
Every audit event contains the same fields, so search for values from those fields. To get a better understanding of
what values are available, see Contents of an Audit Log Event on page 493.

Using the Console


To search log events
1. Open the navigation menu. Under Governance and Administration, go to Governance and click Audit.
The list of events that occurred in the current compartment is displayed.
2. Click one of the compartments under Compartment.
Audit organizes logs by compartment, so if you are looking for a particular event, you must know which
compartment the event occurred in.
3. Click in the Start Date box to choose the start date and time for the range of results you want to see. You can
click the arrows on either side of the month to go backward or forward.

4. (Optional) Specify a time by doing one of the following:


a. Click Time and specify an exact start time in thirty-minute increments.
b. Type an exact time in the Start Date box.
The service uses a 24-hour clock, so you must provide a number between 0 and 23 for the hour. Also
remember to calculate the offset between Greenwich Mean Time (GMT) and your local time.
5. Repeat step 3 and 4 to choose an end date and time.
6. (Optional) In Request Action Types, specify one or more operations with which to filter results.
• GET
• POST
• PUT
• PATCH
• DELETE
7. (Optional) In the Keywords box, type the text you want to find and click Search.
Tip: If you want to find log events with a specific status code, include quotes (") around the code to avoid results
that have those numbers embedded in a longer string.
The results are updated to include only log events that were processed within the time range and filters you specified.
If an event occurred in the recent past, you might have to wait to see it in the list. The service typically requires up to
15 minutes for processing.
If there are more than 100 results for the specified time range, you can click the right arrow next to the page number
at the bottom of the page to advance to the next page of log events.
Tip:

If you get fewer than 100 results on the last page of a results list, you might
still have more results, which you can access by clicking the right arrow. If
there are more results, Audit prompts you.
If you want to view all the key-value pairs in a log event, see To view the details of a log event on page 500.
To view the details of a log event
View the details of your event:
• To see only the top-level details, click the down arrow to the right of an event.
• To see lower-level details, click { . . . } to the right of the collapsed parameter.
To copy the details of a log event
The following assumes that you have expanded a row in your results.
• To copy an entire event, click the clipboard icon to the right of the event parameter.
• To copy a portion of an event, click the clipboard icon to the right of the nested parameter or value you want to
copy.
The log event is copied to your clipboard. The Audit service logs events in JSON format. You can paste the log event
details into a text editor to save and review later or to use with standard log analysis tools.

Using the API


For information about using the API and signing requests, see REST APIs on page 4409 and Security Credentials on
page 181. For information about SDKs, see Software Development Kits and Command Line Interface on page 4262.
Use the following operation to list audit log events:
• ListEvents
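A sketch of calling ListEvents through the OCI Python SDK is shown below (assuming a configured SDK client; the compartment OCID is a placeholder and the times are UTC datetimes):

from datetime import datetime, timedelta, timezone
import oci

config = oci.config.from_file()
audit = oci.audit.AuditClient(config)

end = datetime.now(timezone.utc)
events = oci.pagination.list_call_get_all_results(
    audit.list_events,
    compartment_id="ocid1.compartment.oc1..<unique_ID>",
    start_time=end - timedelta(hours=1),
    end_time=end,
).data

for event in events:
    print(event.event_time, event.event_type)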

Note:

This API is not intended for bulk-export operations. For bulk export, see Bulk
Export of Audit Log Events on page 501.

Bulk Export of Audit Log Events


You can request a bulk export of audit logs, and within 5-10 business days Oracle support will begin making copies
of the logs and adding them to buckets in your tenancy. The export includes logs for the specified regions, beginning
after you make the request and continuing into the future.

Highlights
• Administrators have full control of the buckets and can provide access to others with IAM policy statements.
• Exported logs remain available indefinitely.
Tip:

You can automatically manage archiving and deleting logs using Object
Storage. See Using Object Lifecycle Management on page 3494.
• Specify all the regions you want exported in your request. If you only request some regions, then decide later you
want to add other regions, you must make another request.
• To disable your bulk export, contact Oracle support. New logs will stop being added to the bucket, and audit logs
will only be available through the Console, based on the retention period you have defined.

Required IAM Policy


To access the bucket where Oracle exports the audit logs, you must be a member of the Administrators group. See
The Administrators Group and Policy on page 2133.

Requesting an Export of Audit Logs


A member of the Administrators group for your tenancy must create a ticket at My Oracle Support and provide the
following information:
• Ticket name: Export Audit Logs - <your_company_name>
• Tenancy OCID
• Regions
For example:
• Ticket name: Export Audit Logs - ACME
• Tenancy OCID: ocid1.tenancy.oc1.<unique_ID>
• Regions: US East (Ashburn), region identifier = us-ashburn-1; US West (Phoenix), region identifier = us-phoenix-1
Note:

It can take 5-10 business days before your My Oracle Support ticket is
complete and the logs are available to you.

Bucket and Object Details


This section specifies the naming conventions of the bucket and objects you receive.

Bucket Name Format


Oracle support creates buckets for audit log exports using the following naming format:
oci-logs._audit.<compartment_OCID>

• oci-logs identifies that Oracle created this bucket.


• _audit identifies that the bucket contains audit events.
• <compartment_OCID> identifies the compartment where the audit events were generated.
For example:

oci-logs._audit.ocid1.compartment.oc1..<unique_ID>

Important:

If the OCID of the compartment that generated the audit log contains a colon,
your bucket name will not match the OCID. To create a bucket, Oracle must
substitute colon characters (:) from the OCID with dot characters (.) in the
bucket name.

Object Name Format


Objects use the following naming format:
<region>/<ad>/<YYYY-MM-DDTHH:MMZ>[_<seqNum>].log.gz
• <region> identifies the region where the audit events were generated.
• <ad> identifies the availability domain where the audit events were generated.
• <YYYY-MM-DDTHH:MMZ> identifies the start time of the earliest audit event listed in the object.
• [_<seqNum>] identifies a conditional sequence number. If present, this number means that either an event came
in late or the object became too large to write. Sequence numbers start at two. Apply multiple sequence numbers
to the original object in the order listed.
For example:

us-phoenix-1/ad1/2019-03-21T00:00Z.log.gz
us-phoenix-1/ad1/2019-03-21T00:00Z_2.log.gz

File Format
Files list a single audit event per line. For more information, see Contents of an Audit Log Event on page 493.
Note:

Audit introduced a version 2 schema of Audit logs but bulk export is


currently only available for version 1 schema logs.
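After you download an exported object from the bucket, it is a gzip-compressed file containing one JSON audit event per line, so it can be read with standard tooling. The following is a minimal sketch that assumes a locally downloaded file; the field names you extract should match the schema version of the exported logs:

import gzip
import json

path = "2019-03-21T00:00Z.log.gz"  # a downloaded copy of an exported object

with gzip.open(path, "rt", encoding="utf-8") as f:
    for line in f:
        line = line.strip()
        if not line:
            continue
        event = json.loads(line)  # one audit event per line
        print(event.get("eventTime"), event.get("eventType"))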

Chapter 9
Block Volume
This chapter explains how to create storage volumes and attach them to instances.

Overview of Block Volume


The Oracle Cloud Infrastructure Block Volume service lets you dynamically provision and manage block storage
volumes. You can create, attach, connect, and move volumes, as well as change volume performance, as needed, to
meet your storage, performance, and application requirements. After you attach and connect a volume to an instance,
you can use the volume like a regular hard drive. You can also disconnect a volume and attach it to another instance
without the loss of data.
These components are required to create a volume and attach it to an instance:
• Instance: A bare metal or virtual machine (VM) host running in the cloud.
• Volume attachment: There are two types of volume attachments:
• iSCSI on page 505: A TCP/IP-based standard used for communication between a volume and attached
instance.
• Paravirtualized on page 505: A virtualized attachment available for VMs.
• Volume: There are two types of volumes:
• Block volume: A detachable block storage device that allows you to dynamically expand the storage capacity
of an instance.
• Boot volume: A detachable boot volume device that contains the image used to boot a Compute instance. See
Boot Volumes on page 613 for more information.
For additional Oracle Cloud Infrastructure terms, see the Glossary.
Block Volume is Always Free eligible. For more information about Always Free resources, including capabilities
and limitations, see Oracle Cloud Infrastructure Free Tier on page 142.

Typical Block Volume Scenarios

Scenario A: Expanding an Instance's Storage


A common usage of Block Volume is adding storage capacity to an Oracle Cloud Infrastructure instance. After you
have launched an instance and set up your cloud network, you can create a block storage volume through the Console
or API. Then, you attach the volume to an instance using a volume attachment. After the volume is attached, you
connect to the volume from your instance's guest OS using iSCSI. The volume can then be mounted and used by your
instance.

Scenario B: Persistent and Durable Storage


A Block Volume volume can be detached from an instance and moved to a different instance without the loss of data.
This data persistence enables you to migrate data between instances and ensures that your data is safely stored, even
when it is not connected to an instance. Any data remains intact until you reformat or delete the volume.

To move your volume to another instance, unmount the drive from the initial instance, terminate the iSCSI
connection, and attach the volume to the second instance. From there, you connect and mount the drive from that
instance's guest OS to have access to all of your data.
Additionally, Block Volume volumes offer a high level of data durability compared to standard, attached drives. All
volumes are automatically replicated for you, helping to protect against data loss. See Block Volume Durability on
page 509.

Scenario C: Instance Scaling


When you terminate an instance, you can keep the associated boot volume and use it to launch a new instance with
a different instance type or shape. This allows you to easily switch from a bare metal instance to a VM instance and
vice versa, or scale up or scale down the number of cores for an instance. See Creating an Instance on page 700 for
steps to launch an instance based on a boot volume.

Volume Attachment Types


When you attach a block volume to a VM instance, you have two options for attachment type, iSCSI or
paravirtualized. Paravirtualized attachments simplify the process of configuring your block storage by removing
the extra commands that are required before connecting to an iSCSI-attached volume. The trade-off is that IOPS
performance for iSCSI attachments is greater than that for paravirtualized attachments. You should consider your
requirements when selecting a volume's attachment type.
Important:

Connecting to Volumes on Linux Instances


When connecting to volumes on Linux instances, if you want to
automatically mount these volumes on instance boot, you need to use
some specific options in the /etc/fstab file, or the instance may fail to
launch. See Traditional fstab Options on page 532 and fstab Options for
Block Volumes Using Consistent Device Paths on page 531 for more
information.
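For illustration, a minimal /etc/fstab entry of the kind these topics describe might look like the following (a sketch only; the device path, mount point, and file system type are placeholders, and the referenced topics describe the full recommended set of options):

/dev/oracleoci/oraclevdb1 /mnt/vol1 ext4 defaults,_netdev,noatime 0 2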

iSCSI
iSCSI attachments are the only option when connecting a block volume to any of the following types of instances:
• Bare metal instances
• VM instances based on Windows images that were published before February 2018
• VM instances based on Linux images that were published before December 2017
After the volume is attached, you need to log in to the instance and use the iscsiadm command-line tool to
configure the iSCSI connection. For more information about the additional configuration steps required for iSCSI
attachments, see iSCSI Commands and Information on page 509, Connecting to a Volume on page 528, and
Disconnecting From a Volume on page 566.
IOPS performance is better with iSCSI attachments compared to paravirtualized attachments. For more information
about iSCSI-attached volume performance, see Block Volume Performance on page 571.
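As an illustration of the kind of configuration involved, the following is a typical iscsiadm sequence for registering and logging in to an iSCSI-attached volume. The IQN, IP address, and port values are placeholders taken from the volume's iSCSI information; see Connecting to a Volume on page 528 for the exact steps:

sudo iscsiadm -m node -o new -T <volume_IQN> -p <iSCSI_IP_address>:<port>
sudo iscsiadm -m node -o update -T <volume_IQN> -n node.startup -v automatic
sudo iscsiadm -m node -T <volume_IQN> -p <iSCSI_IP_address>:<port> -l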

Paravirtualized
Paravirtualized attachments are an option when attaching volumes to the following types of VM instances:
• For VM instances launched from Oracle-provided images, you can select this option for Linux-based images
published in December 2017 or later, and Windows images published in February 2018 or later.
• For VM instances launched from custom images, the volume attachment type is based on the volume attachment
type from the VM the custom image was created from.

After you attach a volume using the paravirtualized attachment type, it is ready to use, and you do not need to run
any additional commands. However, because of the overhead of virtualization, this reduces the maximum IOPS
performance for larger block volumes.

Volume Access Types


When you attach a block volume, you can specify one of the following options for access type:
• Read/write: This is the default option for volume attachments. With this option, an instance can read and write
data to the volume.
• Read/write, shareable: With this option, you can attach a volume to more than one instance at a time and
those instances can read and write data to the volume. To prevent data corruption from uncontrolled read/write
operations with multiple instance volume attachments, you must install and configure a cluster-aware solution, such
as a clustered file system, before you can use the volume. See Configuring Multiple Instance Volume Attachments
with Read/Write Access on page 525 for more information.
• Read-only: With this option, an instance can only read data on the volume. It cannot update data on the volume.
Specify this option to safeguard data against accidental or malicious modifications.
To change the access type for a block volume, you need to detach the volume and specify the new access type when
you reattach the volume. For more information, see Detaching a Volume on page 567 and Attaching a Volume on
page 521.
The access type for boot volumes is always read/write. If you want to change the access type, you need to stop the
instance and detach the boot volume. You can then reattach it to another instance as a block volume, with read-only
specified as the access type. For more information, see Detaching a Boot Volume on page 627 and Attaching a
Volume on page 521.

Device Paths
When you attach a block volume to a compatible Linux-based instance, you can select a device path that remains
consistent between instance reboots. This enables you to refer to the volume using a consistent device path. For
example, you can use the device path when you set options in the /etc/fstab file to automatically mount the
volume on instance boot.
Consistent device paths are supported on instances when all of the following things are true:
• The instance was created using an Oracle-provided image.
• The image is a Linux-based image.
• The image was released in November 2018 or later. For specific version numbers, see Oracle-Provided Image
Release Notes.
• The instance was launched after January 11, 2019.
For instances launched using the image OCID or an existing boot volume, if the source image supports consistent
device paths, the instance supports device paths.
Consistent device paths are not supported on Linux-based partner images or custom images that are created from
other sources. This feature does not apply to Windows-based images.
Important:

When you attach a volume using the Console, you must select a device path;
it is required. Specifying a device path is optional when you attach a volume
using the CLI, REST APIs, or SDK.
For more information about consistent device paths, see Connecting to Volumes With Consistent Device Paths on
page 522.

Regions and Availability Domains


Volumes are only accessible to instances in the same availability domain. You cannot move a volume between
availability domains or regions; a volume is only accessible within the availability domain it was created in.

However, volume backups are not limited to the availability domain of the source volume; you can restore them to any
availability domain within that region. See Restoring a Backup to a New Volume on page 561. You can also copy
a volume backup to a new region and restore the backup to a volume in any availability domain in the new region. For
more information, see Copying a Volume Backup Between Regions on page 562.
For more information, see Regions and Availability Domains on page 182.

Resource Identifiers
Most types of Oracle Cloud Infrastructure resources have a unique, Oracle-assigned identifier called an Oracle
Cloud ID (OCID). For information about the OCID format and other ways to identify your resources, see Resource
Identifiers.

Ways to Access Oracle Cloud Infrastructure


You can access Oracle Cloud Infrastructure using the Console (a browser-based interface) or the REST API.
Instructions for the Console and API are included in topics throughout this guide. For a list of available SDKs, see
Software Development Kits and Command Line Interface on page 4262.
To access the Console, you must use a supported browser.
Oracle Cloud Infrastructure supports the following browsers and versions:
• Google Chrome 69 or later
• Safari 12.1 or later
• Firefox 62 or later
For general information about using the API, see REST APIs on page 4409.

Authentication and Authorization


Each service in Oracle Cloud Infrastructure integrates with IAM for authentication and authorization, for all
interfaces (the Console, SDK or CLI, and REST API).
An administrator in your organization needs to set up groups, compartments, and policies that control which users
can access which services, which resources, and the type of access. For example, the policies control who can create
new users, create and manage the cloud network, launch instances, create buckets, download objects, etc. For more
information, see Getting Started with Policies on page 2143. For specific details about writing policies for each of
the different services, see Policy Reference on page 2176.
If you’re a regular user (not an administrator) who needs to use the Oracle Cloud Infrastructure resources that
your company owns, contact your administrator to set up a user ID for you. The administrator can confirm which
compartment or compartments you should be using.

Monitoring Resources
You can monitor the health, capacity, and performance of your Oracle Cloud Infrastructure resources by using
metrics, alarms, and notifications. For more information, see Monitoring Overview on page 2686 and Notifications
Overview on page 3378.

Moving Resources
You can move Block Volume resources such as block volumes, boot volumes, volume backups, volume groups, and
volume group backups from one compartment to another. For more information, see Move Block Volume Resources
Between Compartments on page 568.

Tagging Resources
You can apply tags to your resources to help you organize them according to your business needs. You can apply tags
at the time you create a resource, or you can update the resource later with the wanted tags. For general information
about applying tags, see Resource Tags on page 213.

Creating Automation with Events


You can create automation based on state changes for your Oracle Cloud Infrastructure resources by using event
types, rules, and actions. For more information, see Overview of Events on page 1788.
The following Block Volume resources emit events:
• Block volumes and block volume backups
• Boot volumes and boot volume backups
• Volume groups and volume group backups
Note:

For troubleshooting, see Known Issues - Block Volume for a list of known
issues related to Block Volume events.

Block Volume Encryption


The Oracle Cloud Infrastructure Block Volume service always encrypts all block volumes, boot volumes, and volume
backups at rest by using the Advanced Encryption Standard (AES) algorithm with 256-bit encryption. By default all
volumes and their backups are encrypted using the Oracle-provided encryption keys. Each time a volume is cloned or
restored from a backup the volume is assigned a new unique encryption key.
You have the option to encrypt all of your volumes and their backups using keys that you own and manage with
the Vault service. For more information, see Overview of Vault on page 3988. If you do not configure a volume
to use the Vault service, or you later unassign a key from the volume, the Block Volume service uses the Oracle-
provided encryption key instead. This applies to both at-rest encryption and in-transit encryption.
For how to use your own key for new volumes, see Creating a Volume on page 519. See To assign a key to an
existing Block Volume on page 4008 for how to assign or change the key for an existing volume.
All the data moving between the instance and the block volume is transferred over an internal and highly secure
network. If you have specific compliance requirements related to the encryption of the data while it is moving
between the instance and the block volume, the Block Volume service provides the option to enable in-transit
encryption for paravirtualized volume attachments on virtual machine (VM) instances.
Important:

In-transit encryption for boot and block volumes is only available for virtual
machine (VM) instances launched from Oracle-provided images, it is not
supported on bare metal instances. It is also not supported in most cases
for instances launched from custom images imported for "bring your own
image" (BYOI) scenarios. To confirm support for certain Linux-based
custom images and for more information contact Oracle support, see Getting
Help and Contacting Support on page 126.

Block Volume Data Eradication


The Oracle Cloud Infrastructure Block Volume service uses eventual-overwrite data eradication, which guarantees
that block volumes you delete cannot be accessed by anyone else and that the deleted data is eventually overwritten.
When you terminate a volume, its associated data is overwritten in the storage infrastructure before any future volume
allocations.

Block Volume Performance


Block Volume performance varies with volume size. See Block Volume Performance on page 571 for more
information.
The Block Volume service's elastic performance feature enables you to dynamically change the volume performance.
You can select one of the following volume performance options for your block volumes:
• Balanced

• Higher Performance
• Lower Cost
For more information about this feature and the performance options, see Block Volume Elastic Performance on page
585 and Changing the Performance of a Volume on page 586
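As a sketch of what this looks like from the CLI (assuming your CLI version supports the --vpus-per-gb parameter, where the values 0, 10, and 20 correspond to Lower Cost, Balanced, and Higher Performance), you could change a volume's performance setting with a command similar to:

oci bv volume update --volume-id ocid1.volume.oc1.phx.<unique_ID> --vpus-per-gb 20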

Block Volume Durability


The Oracle Cloud Infrastructure Block Volume service offers a high level of data durability compared to standard,
attached drives. All volumes are automatically replicated for you, helping to protect against data loss. Multiple copies
of data are stored redundantly across multiple storage servers with built-in repair mechanisms. As a service level
objective, the Block Volume service is designed to provide 99.99 percent annual durability for block volumes and
boot volumes. However, we recommend that you make regular backups to protect against the failure of an availability
domain.

Block Volume Capabilities and Limits


Block Volume volumes can be created in sizes ranging from 50 GB to 32 TB in 1 GB increments. By default, Block
Volume volumes are 1 TB.
See Service Limits on page 217 for a list of applicable limits and instructions for requesting a limit increase. To set
compartment-specific limits on a resource or resource family, administrators can use compartment quotas.
Additional limits include:
• Attached block volumes per instance: 32
• Attached boot volumes per instance: 1
Note:

Boot volumes attached to an instance as a data volume, and not as the
instance's boot volume, count towards the limit for attached block volumes.
• Number of backups:
  • Monthly universal credits: 100,000
  • Pay-as-you-go: 100,000

iSCSI Commands and Information


Block volumes attached with the iSCSI on page 505 attachment type use the iSCSI protocol to connect a volume to
an instance. See Volume Attachment Types on page 505 for more information about volume attachment options.
Once the volume is attached, you need to log on to the instance and use the iscsiadm command-line tool to
configure the iSCSI connection. After you configure the volume, you can mount it and use it like a normal hard drive.
To enhance security, Oracle enforces an iSCSI security protocol called CHAP that provides authentication between
the instance and volume.

Accessing a Volume's iSCSI Information


When you successfully attach a volume to an instance, Block Volume provides a list of iSCSI information. You need
the following information from the list when you connect the instance to the volume.
• IP address
• Port
• CHAP user name and password (if enabled)
• IQN

Note:

The CHAP credentials are auto-generated by the system and cannot be
changed. They are also unique to their assigned volume/instance pair and
cannot be used to authenticate another volume/instance pair.
The Console provides this information on the details page of the volume's attached instance. Click the Actions icon
(three dots) on your volume's row, and then click iSCSI Information. The system also returns this information when
the AttachVolume API operation completes successfully. You can re-run the operation with the same parameter
values to review the information.
See Attaching a Volume on page 521 and Connecting to a Volume on page 528 for step-by-step instructions.
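You can also retrieve the same iSCSI information from the CLI. As a sketch, assuming you have the OCID of the volume attachment, a command similar to the following returns the attachment details, including the IQN, IP address, port, and CHAP credentials when CHAP is enabled:

oci compute volume-attachment get --volume-attachment-id ocid1.volumeattachment.oc1.phx.<unique_ID>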

Recommended iSCSI Initiator Parameters for Linux-based Images


iSCSI attached volumes for Linux-based images are managed by the Linux iSCSI initiator service, iscsid. Oracle
Cloud Infrastructure images use iSCSI default settings for the iscsid service's parameters, with the exception of the
following parameters:
• node.startup = automatic
• node.session.timeo.replacement_timeout = 6000
• node.conn[0].timeo.noop_out_interval = 0
• node.conn[0].timeo.noop_out_timeout = 0
• node.conn[0].iscsi.HeaderDigest = None
If you are using custom images, you should update the iscsid service configuration by modifying the
/etc/iscsi/iscsid.conf file.
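For example, the corresponding entries in /etc/iscsi/iscsid.conf would look similar to the following (shown only to illustrate the parameters listed above; leave the remaining settings at their defaults):

node.startup = automatic
node.session.timeo.replacement_timeout = 6000
node.conn[0].timeo.noop_out_interval = 0
node.conn[0].timeo.noop_out_timeout = 0
node.conn[0].iscsi.HeaderDigest = None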

Additional Reading
There is a wealth of information on the internet about iSCSI and CHAP. If you need more information on these
topics, try the following pages:
• Oracle Linux 8 Managing Storage Devices - Working with iSCSI Devices
• Oracle Linux Administrator's Guide for Release 7 - About iSCSI Storage
• Oracle Linux Administrator's Guide for Release 6 - About iSCSI Storage
• Troubleshooting iSCSI Configuration Problems

Volume Groups
The Oracle Cloud Infrastructure Block Volume service provides you with the capability to group together multiple
volumes in a volume group. A volume group can include both types of volumes: boot volumes, which are the system
disks for your Compute instances, and block volumes for your data storage. You can use volume groups to create
volume group backups and clones that are point-in-time and crash-consistent.
This simplifies the process to create time-consistent backups of running enterprise applications that span multiple
storage volumes across multiple instances. You can then restore an entire group of volumes from a volume group
backup.
Similarly, you can also clone an entire volume group in a time-consistent and crash-consistent manner. A deep
disk-to-disk and fully isolated clone of a volume group, with all the volumes associated in it, becomes available for
use within a matter of seconds. This speeds up the process of creating new environments for development, quality
assurance, user acceptance testing, and troubleshooting.
For more information about Block Volume-backed system disks, see Boot Volumes on page 613. For more
information about Block Volume backups see Overview of Block Volume Backups on page 544. See Cloning a
Volume on page 564 for more information about Block Volume clones.
This capability is available using the Console, command line interface (CLI), SDKs, or REST APIs.

Volume groups and volume group backups are high-level constructs that allow you to group together multiple
volumes. When working with volume groups and volume group backups, keep the following in mind:
• You can only add a volume to a volume group when the volume status is available.
• You can add up to 32 volumes in a volume group, up to a maximum size limit of 128 TB. For example, if you
wanted to add 32 volumes of equal size to a volume group, the maximum size for each volume would be 4 TB. Or
you could add volumes that vary in size, however the overall combined size of all the block and boot volumes in
the volume group must be 128 TB or less. Make sure you account for the size of any boot volumes in your volume
group when considering volume group size limits.
• Each volume may only be in one volume group.
• When you clone a volume group, a new group with new volumes is created. For example, if you clone a volume
group containing three volumes, once this operation is complete, you will have two separate volume groups
and six different volumes with nothing shared between the volume groups.
• When you update a volume group using the CLI, SDKs, or REST APIs you need to specify all the volumes to
include in the volume group each time you use the update operation. If you do not include a volume ID in the
update call, that volume will be removed from the volume group.
• When you delete a volume group the individual volumes in the group are not deleted, only the volume group is
deleted.
• When you delete a volume that is part of a volume group you must first remove it from the volume group before
you can delete it.
• When you delete a volume group backup, all the volume backups in the volume group backup are deleted.

Required IAM Policy


To use Oracle Cloud Infrastructure, you must be granted security access in a policy by an administrator. This access
is required whether you're using the Console or the REST API with an SDK, CLI, or other tool. If you get a message
that you don’t have permission or are unauthorized, verify with your administrator what type of access you have and
which compartment you should work in.
For administrators: The policy in Let volume admins manage block volumes, backups, and volume groups on page
2154 lets the specified group do everything with block volumes, backups, and volume groups.
See the following policy examples for working with volume groups:
• Let users create a volume group on page 2156 lets the specified group create a volume group from a set of
volumes.
• Let users clone a volume group on page 2156 lets the specified group clone a volume group from an existing
volume group.
• Let users create a volume group backup on page 2156 lets the specified group create a volume group backup.
• Let users restore a volume group backup on page 2156 lets the specified group create a volume group by
restoring a volume group backup.
Tip:

When users create a backup from a volume or restore a volume from a
backup, the volume and backup don't have to be in the same compartment.
However, users must have access to both compartments.
If you're new to policies, see Getting Started with Policies on page 2143 and Common Policies on page 2150. For
reference material about writing policies for instances, cloud networks, or other Core Services API resources, see
Details for the Core Services on page 2192.

Tagging Resources
You can apply tags to your resources to help you organize them according to your business needs. You can update the
resource later with the desired tags. For general information about applying tags, see Resource Tags on page 213.

Managing Volume Groups


This section covers how to perform tasks related to managing your volume groups using the Console, command line
interface (CLI), and REST APIs.
Using the Console
To create a volume group
1. Open the navigation menu. Under Core Infrastructure, go to Block Storage and click Volume Groups.
2. Click Create Volume Group.
3. Fill in the required volume information:
• Name: A user-friendly name or description. Avoid entering confidential information.
• Compartment: The compartment for the volume group.
• Availability Domain: The availability domain for the volume group.
• Backup Policy: The backup policy to use for scheduled backups. For more information, see Policy-Based
Volume Group Backups on page 518.
• Volumes: For each volume you want to add, select the compartment containing the volume and then the
volume to add. Click + Volume to add additional volumes.
4. Click Create Volume Group.
To view the volumes in a volume group
1. Open the navigation menu. Under Core Infrastructure, go to Block Storage and click Volume Groups.
2. In the Volume Groups list, click the volume group you want to view the volumes for.
3. To view the block volumes for the volume group, in Resources, click Block Volumes.
4. To view the boot volumes for the volume group, in Resources, click Boot Volumes.
To add block volumes to an existing volume group
1. Open the navigation menu. Under Core Infrastructure, go to Block Storage and click Volume Groups.
2. In the Volume Groups list, click the volume group you want to add the volume to.
3. In Resources, click Block Volumes.
4. Click Add Block Volumes.
Note:

You cannot add a volume with an existing backup policy assignment to
a volume group with a backup policy assignment. You must first remove
the backup policy assignment from the volume before you can add it to the
volume group.
5. For each block volume you want to add, select the compartment containing the volume and then select the volume
to add. Click + Volume to add additional volumes.
6. Once you have selected all the block volumes to add to the volume group, click Add.
To remove block volumes from an existing volume group
1. Open the navigation menu. Under Core Infrastructure, go to Block Storage and click Volume Groups.
2. In the Volume Groups list, click the volume group you want to remove the volume from.
3. In Resources, click Block Volumes.
4. In the Actions menu for the block volume you want to remove, click Remove.
5. In the Confirm dialog, click Remove.
Note:

When you remove the last volume in a volume group the volume group is
terminated.
To add boot volumes to an existing volume group
1. Open the navigation menu. Under Core Infrastructure, go to Block Storage and click Volume Groups.

2. In the Volume Groups list, click the volume group you want to add the volume to.
3. In Resources, click Boot Volumes.
4. Click Add Boot Volumes.
Note:

You cannot add a volume with an existing backup policy assignment to
a volume group with a backup policy assignment. You must first remove
the backup policy assignment from the volume before you can add it to the
volume group.
5. For each boot volume you want to add, select the compartment containing the volume and then select the volume
to add. Click + Volume to add additional volumes.
6. Once you have selected all the boot volumes to add to the volume group, click Add.
To remove boot volumes from an existing volume group
1. Open the navigation menu. Under Core Infrastructure, go to Block Storage and click Volume Groups.
2. In the Volume Groups list, click the volume group you want to remove the volume from.
3. In Resources, click Boot Volumes.
4. In the Actions menu for the boot volume you want to remove, click Remove.
5. In the Confirm dialog, click Remove.
To create a clone of the volume group
1. Open the navigation menu. Under Core Infrastructure, go to Block Storage and click Volume Groups.
2. In the Volume Groups list, click Create Volume Group Clone in the Actions menu for the volume group you
want to clone.
To delete the volume group
1. Open the navigation menu. Under Core Infrastructure, go to Block Storage and click Volume Groups.
2. In the Volume Groups list, click the volume group you want to delete.
3. On the Volume Group Details page, click Terminate.
4. On the Terminate Volume Group dialog, click Terminate.
Note:

When you delete a volume group the individual volumes in the group are not
deleted, only the volume group is deleted.
Using the CLI
For information about using the CLI, see Command Line Interface (CLI) on page 4228.
To retrieve information about the supported operations
Open a command prompt and run the one of the following commands to retrieve the information.
• To retrieve the supported operations for volume groups:

oci bv volume-group --help


• To retrieve the supported operations for volume group backups:

oci bv volume-group-backup --help


• To retrieve help for a specific volume group operation:

oci bv volume-group <operation_name> --help

• To retrieve help for a specific volume group backup operation:

oci bv volume-group-backup <operation_name> --help

To list the volume groups in a specified compartment


Open a command prompt and run:

oci bv volume-group list --compartment-id <compartment_ID>

For example:

oci bv volume-group list --compartment-id ocid1.compartment.oc1..<unique_ID>

To create a volume group from existing volumes


Open a command prompt and run:

oci bv volume-group create --compartment-id <compartment_ID> --availability-domain <external_AD> --source-details <Source_details_JSON>

Volume status must be available to add it to a volume group.


For example:

oci bv volume-group create --compartment-id ocid1.compartment.oc1..<unique_ID> \
  --availability-domain ABbv:PHX-AD-1 \
  --source-details '{"type": "volumeIds", "volumeIds": ["ocid1.volume.oc1.phx.<unique_ID_1>", "ocid1.volume.oc1.phx.<unique_ID_2>"]}'

To clone a volume group from another volume group


Open a command prompt and run:

oci bv volume-group create --compartment-id <compartment_ID> --availability-domain <external_AD> --source-details <Source_details_JSON>

For example:

oci bv volume-group create --compartment-id ocid1.compartment.oc1..<unique_ID> \
  --availability-domain ABbv:PHX-AD-1 \
  --source-details '{"type": "volumeGroupId", "volumeGroupId": "ocid1.volumegroup.oc1.phx.<unique_ID>"}'

To retrieve a volume group


Open a command prompt and run:

oci bv volume-group get --volume-group-id <volume-group-ID>

For example:

oci bv volume-group get --volume-group-id ocid1.volumegroup.oc1.phx.<unique_ID>

To update display name or add/remove volumes from a volume group


Open a command prompt and run:

oci bv volume-group update --volume-group-id <volume-group_ID> --volume-ids <volume_ID_JSON>

You can update the volume group display name along with adding or removing volumes from the volume group. The
volume group is updated to include only the volumes specified in the update operation. This means that you need to
specify the volume IDs for all of the volumes in the volume group each time you update the volume group.
The following example changes the volume group's display name for a volume group with two volumes:

oci bv volume-group update --volume-group-id ocid1.volumegroup.oc1.phx.<unique_ID> \
  --volume-ids '["ocid1.volume.oc1.phx.<unique_ID_1>","ocid1.volume.oc1.phx.<unique_ID_2>"]' \
  --display-name "new display name"

If you specify volumes in the command that are not part of the volume group they are added to the group. Any
volumes not specified in the command are removed from the volume group.
To delete a volume group
Open a command prompt and run:

oci bv volume-group delete --volume-group-id <volume-group_ID>

When you delete a volume group, the individual volumes in the group are not deleted, only the volume group is
deleted.
For example:

oci bv volume-group delete --volume-group-id ocid1.volumegroup.oc1.phx.<unique_ID>

Using the API


For information about using the API and signing requests, see REST APIs on page 4409 and Security Credentials on
page 181. For information about SDKs, see Software Development Kits and Command Line Interface on page 4262.
Use the following operations for working with volume groups:
• ListVolumeGroups
• CreateVolumeGroup
• DeleteVolumeGroup
• GetVolumeGroup
• UpdateVolumeGroup

Volume Group Backups

A volume group backup provides coordinated point-in-time-consistent backups of all the volumes in a volume group
automatically. You can perform most of the same backup operations and tasks with volume groups that you can
perform with individual block volumes and boot volumes. You can restore a volume group backup to a volume group,
or you can restore individual volumes in the volume group from volume backups. With volume group backups,
you can manage the backup settings for several volumes in one place, consistently. This simplifies the process to
create time-consistent backups of running enterprise applications that span multiple storage volumes across multiple
instances.
For a general overview of the Block Volume's service backup functionality, see Overview of Block Volume Backups
on page 544.

Source Region
Volume group backups include a Source Region field. This specifies the region for the volume group that the backup
was created from. For volume group backups copied from another region, this field will show the region the volume
group backup was copied from.
Manual Volume Group Backups
Manual backups are on-demand one-off backups that you can launch immediately for volume groups by following
the steps outlined in the procedures in this section. For general information about the manual backups feature for the
Block Volume service, see Manual Backups on page 544.
Using the Console
To create a backup of the volume group
1. Open the navigation menu. Under Core Infrastructure, go to Block Storage and click Volume Groups.
2. In the Volume Groups list, click Create Volume Group Backup in the Actions menu for the volume group you
want to create a backup for.
To restore a volume group from a volume group backup
1. Open the navigation menu. Under Core Infrastructure, go to Block Storage and click Volume Group Backups.
2. In the Volume Group Backups list, click the volume group backup you want to restore.
3. Click Create Volume Group.
4. Fill in the required volume information:
• Name: A user-friendly name or description. Avoid entering confidential information.
• Compartment: The compartment for the volume group.
• Availability Domain: The availability domain for the volume group.
5. Click Create Volume Group.
Using the CLI
For information about using the CLI, see Command Line Interface (CLI) on page 4228.
To list volume group backups
Open a command prompt and run:

oci bv volume-group-backup list --compartment-id <compartment_ID>

For example:

oci bv volume-group-backup list --compartment-id ocid1.compartment.oc1..<unique_ID>

To create a volume group backup


Open a command prompt and run:

oci bv volume-group-backup create --volume-group-id <volume-group_ID>

For example:

oci bv volume-group-backup create --volume-group-id ocid1.volumegroup.oc1.phx.<unique_ID>

To retrieve a volume group backup

Open a command prompt and run:

oci bv volume-group-backup get --volume-group-backup-id <volume-group-backup_ID>

For example:

oci bv volume-group-backup get --volume-group-backup-id ocid1.volumegroupbackup.oc1.phx.<unique_ID>

To update display name for a volume group backup


Open a command prompt and run:

oci bv volume-group-backup update --volume-group-backup-id <volume-group-backup_ID> --display-name <new_display_name>

You can only update the display name for the volume group backup.
For example:

oci bv volume-group-backup update --volume-group-backup-id ocid1.volumegroupbackup.oc1.phx.<unique_ID> \
  --display-name "new display name"

To restore a volume group from a volume group backup


Open a command prompt and run:

oci bv volume-group create --compartment-id <compartment_ID> --availability-domain <external_AD> --source-details <Source_details_JSON>

For example:

oci bv volume-group create --compartment-id ocid1.compartment.oc1..<unique_ID> \
  --availability-domain ABbv:PHX-AD-1 \
  --source-details '{"type": "volumeGroupBackupId", "volumeGroupBackupId": "ocid1.volumegroupbackup.oc1.sea.<unique_ID>"}'

To delete a volume group backup


Open a command prompt and run:

oci bv volume-group-backup delete --volume-group-backup-id <volume-group-backup_ID>

When you delete a volume group backup, all volume backups in the group are deleted.
For example:

oci bv volume-group-backup delete --volume-group-backup-id ocid1.volumegroupbackup.oc1.phx.<unique_ID>

Using the API


For information about using the API and signing requests, see REST APIs on page 4409 and Security Credentials on
page 181. For information about SDKs, see Software Development Kits and Command Line Interface on page 4262.
Use the following operations for working with volume group backups:
• ListVolumeGroupBackups
• CreateVolumeGroupBackup

• DeleteVolumeGroupBackup
• GetVolumeGroupBackup
• UpdateVolumeGroupBackup
Policy-Based Volume Group Backups
These are automated scheduled backups as defined by the backup policy assigned to the volume group. Policy-
based backups for volume groups are essentially the same as policy-based backups for block volumes; the main
difference is that the backup policy is applied to all the volumes in the volume group instead of a single volume. For
general information about policy-based backups, see Policy-Based Backups on page 551. The process to create
and configure user defined backup policies is the same for volume groups as it is for volumes; see Creating and
Configuring User Defined Backup Policies on page 556 for these procedures.
Note:

Oracle defined backup policies are not supported for scheduled volume group
backups.
Caution:

Vault encryption keys for volumes are not copied to the destination region for
scheduled volume and volume group backups enabled for cross region copy.
For more information, see Vault encryption keys not copied to destination
region for scheduled cross region backup copies.
Managing Backup Policy Assignments to Volume Groups
The backup policy assigned to a volume group defines the frequency and schedule for volume group backups. This
section covers how to perform tasks related to managing the backup policy assignments for your volume groups using
the Console, command line interface (CLI), and REST APIs.
If a volume group has an assigned backup policy, you must remove any backup policy assignments from volumes
before you can add them to the volume group.
Similarly, before you can assign a backup policy to an existing volume group containing one or more volumes with
assigned backup policies, you must remove those policy assignments from the individual volumes.
Using the Console
To assign a backup policy to a volume group
1. Open the navigation menu. Under Core Infrastructure, go to Block Storage and click Volume Groups.
2. Click the volume group that you want to assign a backup policy to.
3. On the Volume Group Details page, click Edit.
4. In the BACKUP POLICIES section, select the compartment containing the backup policies.
5. Select the appropriate backup policy for your requirements.
6. Click Save Changes.
To change a backup policy assigned to a volume group
1. Open the navigation menu. Under Core Infrastructure, go to Block Storage and click Volume Groups.
2. Click the volume group that you want to change the backup policy for.
3. On the Volume Group Details page, click Edit.
4. In the BACKUP POLICIES section, select the compartment containing the backup policy.
5. Select the backup policy you want to switch to.
6. Click Save Changes.
To remove a backup policy assigned to a volume group
1. Open the navigation menu. Under Core Infrastructure, go to Block Storage and click Volume Groups.
2. Click the volume group that you want to remove the backup policy from.
3. On the Volume Group Details page, click Edit.

4. In the BACKUP POLICIES section, select None from the list, and then click Save Changes.
Using the CLI
For information about using the CLI, see Command Line Interface (CLI) on page 4228.
To assign a backup policy to a volume group
Open a command prompt and run:

oci bv volume-backup-policy-assignment create --asset-id <volume_group_ID> --policy-id <policy_ID>

For example:

oci bv volume-backup-policy-assignment create --asset-id ocid1.volumegroup.oc1..<unique_ID> \
  --policy-id ocid1.volumebackuppolicy.oc1..<unique_ID>

To get the backup policy assigned to a volume group


Open a command prompt and run:

oci bv volume-backup-policy-assignment get-volume-backup-policy-asset-assignment --asset-id <volume_group_ID>

For example:

oci bv volume-backup-policy-assignment get-volume-backup-policy-asset-assignment --asset-id ocid1.volumegroup.oc1..<unique_ID>

To retrieve a specific backup policy assignment


Open a command prompt and run:

oci bv volume-backup-policy-assignment get --policy-assignment-id <backup-policy-ID>

For example:

oci bv volume-backup-policy-assignment get --policy-assignment-id ocid1.volumebackuppolicyassignment.oc1.phx.<unique_ID>

Using the API


Use the following operations to manage backup policy assignments to volume groups:
• CreateVolumeBackupPolicyAssignment
• DeleteVolumeBackupPolicyAssignment
• GetVolumeBackupPolicyAssetAssignment
• GetVolumeBackupPolicyAssignment
For information about using the API and signing requests, see REST APIs on page 4409 and Security Credentials on
page 181. For information about SDKs, see Software Development Kits and Command Line Interface on page 4262.

Creating a Volume
You can create a volume using Block Volume.

Required IAM Policy


To use Oracle Cloud Infrastructure, you must be granted security access in a policy by an administrator. This access
is required whether you're using the Console or the REST API with an SDK, CLI, or other tool. If you get a message
that you don’t have permission or are unauthorized, verify with your administrator what type of access you have and
which compartment you should work in.
For administrators: The policy in Let volume admins manage block volumes, backups, and volume groups on page
2154 lets the specified group do everything with block volumes and backups.
If you're new to policies, see Getting Started with Policies on page 2143 and Common Policies on page 2150. For
reference material about writing policies for instances, cloud networks, or other Core Services API resources, see
Details for the Core Services on page 2192.

Monitoring Resources
You can monitor the health, capacity, and performance of your Oracle Cloud Infrastructure resources by using
metrics, alarms, and notifications. For more information, see Monitoring Overview on page 2686 and Notifications
Overview on page 3378.

Tagging Resources
You can apply tags to your resources to help you organize them according to your business needs. You can apply tags
at the time you create a resource, or you can update the resource later with the wanted tags. For general information
about applying tags, see Resource Tags on page 213.

Using the Console


1. Open the navigation menu. Under Core Infrastructure, go to Block Storage and click Block Volumes.
2. Click Create Block Volume.
3. Fill in the required volume information:
• Name: A user-friendly name or description. Avoid entering confidential information.
• Domain: Must be in the same availability domain as the instance.
• Size: Must be between 50 GB and 32 TB. You can choose in 1 GB increments within this range. The default is
1024 GB. If you choose a size outside of your service limit, you may be prompted to request an increase. For
more information, see Service Limits on page 217.
• Backup Policy: Optionally, you can select the appropriate backup policy for your requirements. See Policy-
Based Backups on page 551 for more information about backup policies.
• Volume Performance: Optionally, you can select the appropriate performance setting for your requirements.
See Block Volume Elastic Performance on page 585 for more information about volume performance
options. The default option is Balanced.
• Tags: If you have permissions to create a resource, then you also have permissions to apply free-form tags
to that resource. To apply a defined tag, you must have permissions to use the tag namespace. For more
information about tagging, see Resource Tags on page 213. If you are not sure whether to apply tags, then skip
this option (you can apply tags later) or ask your administrator.
• Encryption: Optionally, you can encrypt the data in this volume using your own Vault encryption key. To use
Vault for your encryption needs, select the Encrypt using customer-managed keys radio button. Then, select
the Vault compartment and Vault that contain the master encryption key you want to use. Also select the
Master encryption key compartment and Master encryption key. For more information about encryption,
see Overview of Vault on page 3988.
4. Click Create Block Volume.
The volume will be ready to attach once its icon no longer lists it as PROVISIONING in the volume list. For
more information, see Attaching a Volume on page 521.

Using the API


To create a volume, use the following operation:
• CreateVolume
For information about using the API and signing requests, see REST APIs on page 4409 and Security Credentials on
page 181. For information about SDKs, see Software Development Kits and Command Line Interface on page 4262.
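The same operation is also available from the CLI. As a sketch with placeholder values, the following creates a 1 TB volume in the specified compartment and availability domain:

oci bv volume create --compartment-id ocid1.compartment.oc1..<unique_ID> --availability-domain ABbv:PHX-AD-1 --display-name "my-block-volume" --size-in-gbs 1024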

Attaching a Volume
You can attach a volume to an instance in order to expand the available storage on the instance. If you specify iSCSI
on page 505 as the volume attachment type, you must also connect and mount the volume from the instance for
the volume to be usable. For more information, see Volume Attachment Types on page 505 and Connecting to a
Volume on page 528.
You can attach volumes to more than one instance at a time; see Attaching a Volume to Multiple Instances on
page 524. To prevent data corruption from uncontrolled read/write operations with multiple instance volume
attachments, you must install and configure a clustered file system before you can use the volume. See Configuring
Multiple Instance Volume Attachments with Read/Write Access on page 525 for more information.
Note:

You should only attach Linux volumes to Linux instances and Windows
volumes to Windows instances.

Required IAM Policy


To use Oracle Cloud Infrastructure, you must be granted security access in a policy by an administrator. This access
is required whether you're using the Console or the REST API with an SDK, CLI, or other tool. If you get a message
that you don’t have permission or are unauthorized, verify with your administrator what type of access you have and
which compartment you should work in.
For administrators: The policy in Let users launch compute instances on page 2151 includes the ability to attach/
detach existing block volumes. The policy in Let volume admins manage block volumes, backups, and volume groups
on page 2154 lets the specified group do everything with block volumes and backups, but not launch instances.
If you're new to policies, see Getting Started with Policies on page 2143 and Common Policies on page 2150. For
reference material about writing policies for instances, cloud networks, or other Core Services API resources, see
Details for the Core Services on page 2192.

Security Zones
Security Zones ensure that your cloud resources comply with Oracle security principles. If any operation on a
resource in a security zone compartment violates a policy for that security zone, then the operation is denied.
The following security zone policies affect your ability to attach block volumes to Compute instances.
• All block volumes attached to a Compute instance in a security zone must themselves be in a security zone.
• Block volumes in a security zone cannot be attached to a Compute instance that is not in a security zone.

Using the Console


1. Open the navigation menu. Under Core Infrastructure, go to Compute and click Instances.
2. In the Instances list, click the instance that you want to attach a volume to.
3. In the Resources section, click Attached Block Volumes.
4. Click Attach Block Volume.
5. Select the volume attachment type, iSCSI, Paravirtualized, or Let Oracle Cloud Infrastructure choose the
best attachment type.
For more information, see Volume Attachment Types on page 505.

6. In the Block Volume Compartment drop-down list, select the compartment.


7. Specify the volume you want to attach to. To use the volume name, choose SELECT VOLUME and then select
the volume from the Block Volume drop-down list. To specify the volume OCID, choose ENTER VOLUME
OCID and then enter the OCID into the Block Volume OCID field.
8. If the instance supports consistent device paths, and the volume you are attaching is not a boot volume, select a
path from the Device Path drop-down list when attaching. This is required and enables you to specify a device
path for the volume attachment that remains consistent between instance reboots.
For more information about this feature and the instances that support it, see Connecting to Volumes With
Consistent Device Paths on page 522
Tip:

You must select a device path when you attach a volume from the Console;
it is not optional there. Specifying a device path is optional when you attach
a volume using the CLI, REST APIs, or SDK.
9. Select the access type, Read/Write or Read-only.
For more information, see Volume Access Types on page 506.
10. For paravirtualized volume attachments on virtual machine (VM) instances, you can optionally encrypt data that
is transferred between the instance and the Block Volume service storage servers. To do this, select the Use in-
transit encryption check box. If you configured the volume to use an encryption key that you manage using the
Vault service, this key is used for in-transit encryption. Otherwise, the Oracle-provided encryption key is used.
See Block Volume Encryption on page 508 for more information.
11. Click Attach.
When the volume's icon no longer lists it as Attaching, if the attachment type is Paravirtualized on page 505,
you can use the volume. If the attachment type is iSCSI on page 505, you need to connect to the volume first.
For more information, see Connecting to a Volume on page 528.
On Linux-based instances, if you want to automatically mount volumes on instance boot, you need to set some
specific options in the /etc/fstab file, or the instance may fail to launch. This applies to both iSCSI and
paravirtualized attachment types. For volumes using consistent device paths, see fstab Options for Block Volumes
Using Consistent Device Paths on page 531. For all other volumes, see Traditional fstab Options on page
532.

Using the API


To attach a volume to an instance, use the following operation:
• AttachVolume
For information about using the API and signing requests, see REST APIs on page 4409 and Security Credentials on
page 181. For information about SDKs, see Software Development Kits and Command Line Interface on page 4262.
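The same operation is also available from the CLI. As a sketch with placeholder values, the following attaches a volume to an instance using the iSCSI attachment type:

oci compute volume-attachment attach --instance-id ocid1.instance.oc1.phx.<unique_ID> --volume-id ocid1.volume.oc1.phx.<unique_ID> --type iscsi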

Connecting to Volumes With Consistent Device Paths


Oracle Cloud Infrastructure supports consistent device paths for block volumes that are attached to compatible Linux-
based instances. When you attach a block volume to an instance, you must select a device path that remains consistent
between instance reboots. This enables you to use a consistent device path when you refer to the volume to perform
tasks such as:
• Creating partitions.
• Creating file systems.
• Mounting file systems.
• Specifying options in the /etc/fstab file to ensure that volumes are mounted properly when automatically
mounting volumes on instance boot. For more information, see fstab Options for Block Volumes Using Consistent
Device Paths on page 531.

When you use consistent device paths on compatible Linux-based instances, the boot volume's device path is:

/dev/oracleoci/oraclevda

Note:

Device paths are not available when you attach a boot volume as a data
volume to a second instance.
Images that Support Consistent Device Paths
Consistent device paths are supported on instances when all of the following things are true:
• The instance was created using an Oracle-provided image.
• The image is a Linux-based image.
• The image was released in November 2018 or later. For specific version numbers, see Oracle-Provided Image
Release Notes.
• The instance was launched after January 11, 2019.
For instances launched using the image OCID or an existing boot volume, if the source image supports consistent
device paths, the instance supports device paths.
Consistent device paths are not supported on Linux-based partner images or custom images that are created from
other sources. This feature does not apply to Windows-based images.
Important:

When you attach a volume using the Console, you must select a device path;
it is required. Specifying a device path is optional when you attach a volume
using the CLI, REST APIs, or SDK.
Device Paths in the Console
You select a device path when you attach a block volume to an instance.
If you specify a device path, the path appears in the Device Path field of the Attached Block Volumes list for the
instance.

Device Paths on the Instance


Use the following sample commands to perform various configuration tasks on the attached volume. Commands are
provided for volumes that use consistent device paths and for volumes that don't.
Creating a partition with fdisk
• No device path specified:

fdisk /dev/sdb
• Device path specified:

fdisk /dev/oracleoci/oraclevdb

Creating an ext3 file system


• No device path specified:

/sbin/mkfs.ext3 /dev/sdb1

• Device path specified:

/sbin/mkfs.ext3 /dev/oracleoci/oraclevdb1

Updating the /etc/fstab file


• No device path specified:

UUID=84dc162c-43dc-429c-9ac1-b511f3f0e23c /oradiskvdb1 xfs defaults,_netdev,noatime 0 2
• Device path specified:

/dev/oracleoci/oraclevdb1 /oradiskvdb1 ext3 defaults,_netdev,noatime 0 2

Mounting the file system


• No device path specified:

mount /dev/sdb1 /oradiskvdb1


• Device path specified:

mount /dev/oracleoci/oraclevdb1 /oradiskvdb1

Attaching a Volume to Multiple Instances


The Oracle Cloud Infrastructure Block Volume service provides the capability to attach a block volume to multiple
Compute instances. With this feature, you can share block volumes across instances in read/write or read-only mode.
Attaching block volumes as read/write and shareable enables you to deploy and manage your cluster-aware solutions.
This topic describes how to attach block volumes as shareable, along with the limits and considerations for this
feature.
See Volume Access Types on page 506 for more information about the available access type options. For attaching
volumes to single instances, see Attaching a Volume on page 521.

Limits and Considerations


• The Block Volume service does not provide coordination for concurrent write operations to block volumes
attached to multiple instances, so if you configure the block volume as read/write and shareable, you must deploy
a cluster-aware system or solution on top of the shared storage. See Configuring Multiple Instance Volume
Attachments with Read/Write Access on page 525.
• Once you attach a block volume to an instance as read-only, it can only be attached to other instances as read-
only. To attach the block volume to an instance as read/write, you must first detach it from all instances, and
then you can reattach it to instances as read/write.
• If the block volume is already attached to an instance as read/write non-shareable, you can't attach it to another
instance until you detach it from the first instance. You can then reattach it to both the first and second instances
as read/write shareable.
• You can't delete a block volume until it has been detached from all instances it was attached to. When viewing the
instances attached to the block volume from the Resources section of the Volume Details page, you should note
that only instances in the selected compartment will be displayed. You may need to change the compartment to
list additional instances that are attached to the volume.
• You can attach a block volume as read/write shareable or read-only shareable to a maximum of eight instances.
• Block volumes attached as read-only are configured as shareable by default.
• Performance characteristics described in Block Volume Performance on page 571 are per volume, so when a
block volume is attached to multiple instances the performance is shared across all the attached instances.


Configuring Multiple Instance Volume Attachments with Read/Write Access


The Block Volume service does not provide coordination for concurrent write operations to volumes attached to
multiple instances. To prevent data corruption from uncontrolled read/write operations you must install and configure
a cluster aware system or solution such as Oracle Cluster File System version 2 (OCFS2) on top of the shared storage
before you can use the volume.
A sample walkthrough of a scenario using OCFS2 is described in Using the Multiple-Instance Attach Block
Volume Feature to Create a Shared File System on Oracle Cloud Infrastructure. The required steps for this scenario
are summarized as follows:
1. Attach the block volume to an instance as Read/Write-Shareable using the Console, CLI, or API.
2. Set up your OCFS2/O2CB cluster nodes.
3. Create your OCFS2 file system and mount point.

Required IAM Policy


To use Oracle Cloud Infrastructure, you must be granted security access in a policy by an administrator. This access
is required whether you're using the Console or the REST API with an SDK, CLI, or other tool. If you get a message
that you don’t have permission or are unauthorized, verify with your administrator what type of access you have and
which compartment you should work in.
For administrators: The policy in Let users launch compute instances on page 2151 includes the ability to attach/
detach existing block volumes. The policy in Let volume admins manage block volumes, backups, and volume groups
on page 2154 lets the specified group do everything with block volumes and backups, but not launch instances.
If you're new to policies, see Getting Started with Policies on page 2143 and Common Policies on page 2150. For
reference material about writing policies for instances, cloud networks, or other Core Services API resources, see
Details for the Core Services on page 2192.
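For example, a policy statement along the lines of the volume admins policy might look like the following; this is a
minimal sketch, and the group name VolumeAdmins is a placeholder:

Allow group VolumeAdmins to manage volume-family in tenancy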

Using the Console


To attach a volume to multiple instances from the Instance details page
1. Open the navigation menu. Under Core Infrastructure, go to Compute and click Instances.
2. In the Instances list, click the instance that you want to attach a volume to.
3. In the Resources section, click Attached Block Volumes.
4. Click Attach Block Volume.
5. Select the volume attachment type, iSCSI or Paravirtualized.
For more information, see Volume Attachment Types on page 505.
6. Select the volume access type. Select Read/Write-Shareable if you want to enable read/write attachments to
multiple instances or Read-only-Shareable for read-only attachments to multiple instances.
For more information, see Volume Access Types on page 506.
7. In the Block Volume Compartment drop-down list, select the compartment.
8. Specify the volume you want to attach to. To use the volume name, choose SELECT VOLUME and then select
the volume from the Block Volume drop-down list. To specify the volume OCID, choose ENTER VOLUME
OCID and then enter the OCID into the Block Volume OCID field.
9. If the instance supports consistent device paths, select a path from the Device Path drop-down list. This is
required, and it specifies a device path for the volume attachment that remains consistent between instance
reboots.
For more information about this feature and the instances that support it, see Connecting to Volumes With
Consistent Device Paths on page 522.
Tip:

You must select a device path when you attach a volume from the Console;
it is not optional. Specifying a device path is optional when you attach a
volume using the CLI, REST APIs, or SDK.


10. For paravirtualized volume attachments on virtual machine (VM) instances, you can optionally encrypt data that
is transferred between the instance and the Block Volume service storage servers. To do this, select the Use in-
transit encryption check box. If you configured the volume to use an encryption key that you manage using the
Vault service, this key is used for in-transit encryption. Otherwise, the Oracle-provided encryption key is used.
See Block Volume Encryption on page 508 for more information.
11. Click Attach.
When the volume's icon no longer lists it as Attaching, if the attachment type is Paravirtualized on page 505,
you can use the volume. If the attachment type is iSCSI on page 505, you need to connect to the volume first.
For more information, see Connecting to a Volume on page 528.
On Linux-based instances, if you want to automatically mount volumes on instance boot, you need to set some
specific options in the /etc/fstab file, or the instance may fail to launch. This applies to both iSCSI and
paravirtualized attachment types. For volumes using consistent device paths, see fstab Options for Block Volumes
Using Consistent Device Paths on page 531. For all other volumes, see Traditional fstab Options on page
532.
To attach a volume to multiple instances from the Block Volume details page
1. Open the navigation menu. Under Core Infrastructure, go to Block Storage and click Block Volumes.
2. In the Block Volumes list, click the block volume that you want to attach to an instance.
3. In the Resources section, click Attached Instances.
4. Click Attach to Instance.
5. Select the volume attachment type, iSCSI or Paravirtualized.
For more information, see Volume Attachment Types on page 505.
6. Select the volume access type. Select Read/Write-Shareable if you want to enable read/write attachments to
multiple instances or Read-only-Shareable for read-only attachments to multiple instances.
For more information, see Volume Access Types on page 506.
7. In the Choose Instance drop-down list, select the instance. Click Change Compartment if the instance is in a
different compartment than the default one listed. If you want to specify the instance using the OCID, select the
ENTER INSTANCE OCID option and then enter the OCID in the text box.
8. If the instance supports consistent device paths, select a path from the Device Path drop-down list. This is
required, and it specifies a device path for the volume attachment that remains consistent between instance
reboots.
For more information about this feature and the instances that support it, see Connecting to Volumes With
Consistent Device Paths on page 522.
Tip:

You must select a device path when you attach a volume from the Console;
it is not optional. Specifying a device path is optional when you attach a
volume using the CLI, REST APIs, or SDK.
9. For paravirtualized volume attachments on virtual machine (VM) instances, you can optionally encrypt data that
is transferred between the instance and the Block Volume service storage servers. To do this, select the Use in-
transit encryption check box. If you configured the volume to use an encryption key that you manage using the
Vault service, this key is used for in-transit encryption. Otherwise, the Oracle-provided encryption key is used.
See Block Volume Encryption on page 508 for more information.
10. Click Attach.
When the volume's icon no longer lists it as Attaching, if the attachment type is Paravirtualized on page 505,
you can use the volume. If the attachment type is iSCSI on page 505, you need to connect to the volume first.
For more information, see Connecting to a Volume on page 528.
On Linux-based instances, if you want to automatically mount volumes on instance boot, you need to set some
specific options in the /etc/fstab file, or the instance may fail to launch. This applies to both iSCSI and
paravirtualized attachment types. For volumes using consistent device paths, see fstab Options for Block Volumes


Using Consistent Device Paths on page 531. For all other volumes, see Traditional fstab Options on page
532.
To view the instances attached to a volume from the Volume details page
1. Open the navigation menu. Under Core Infrastructure, go to Block Storage and click Block Volumes.
2. In the Block Volumes list, click the block volume that you want to view the attached instances for.
3. In the Resources section, click Attached Instances.
All the attached instances in the selected compartment will be displayed in the list. To view attached instances in
other compartments, change the compartment in the COMPARTMENT drop down list.
To view the volumes attached to an instance from the Instance details page
1. Open the navigation menu. Under Core Infrastructure, go to Compute and click Instances.
2. In the Instances list, click the instance that you want to view the attached volumes for.
3. In the Resources section, click Attached Block Volumes.
All the block volumes attached to the instance will be displayed in the list, regardless of the compartment the block
volumes are in.

Using the CLI


For information about using the CLI, see Command Line Interface (CLI) on page 4228.
To attach a volume to an instance as shareable, read/write
Open a command prompt and run:

oci compute volume-attachment attach --instance-id <instance_ID> --type <attachment_type> --volume-id <volume_ID> --read-only true/false --is-shareable true

For example:

oci compute volume-attachment attach --instance-id ocid1.instance.oc1..<unique_ID> --type iscsi --volume-id ocid1.volume.oc1..<unique_ID> --read-only false --is-shareable true

To list all the instances attached to a volume


Open a command prompt and run:

oci compute volume-attachment list --compartment-id <compartment_ID> --volume-id <volume_ID>

For example:

oci compute volume-attachment list --compartment-id ocid1.compartment.oc1..<unique_ID> --volume-id ocid1.volume.oc1..<unique_ID>

Note:

This operation will only return the attached instances that are in the specified
compartment. You need to run this operation for every compartment that may
contain instances that are attached to the specified volume.
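For example, one way to cover several compartments is to loop over their OCIDs in a shell session; this is a minimal
sketch, and the OCIDs shown are placeholders:

for cid in ocid1.compartment.oc1..<unique_ID_1> ocid1.compartment.oc1..<unique_ID_2>; do
  oci compute volume-attachment list --compartment-id "$cid" --volume-id ocid1.volume.oc1..<unique_ID>
done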

Using the API


Use the following APIs to attach volumes and work with volume attachments to instances:


• AttachVolume
Set the isShareable attribute of AttachVolumeDetails to true.
• GetVolumeAttachment
• ListVolumeAttachments
The ListVolumeAttachments operation only returns the attached instances that are in the compartment you
specify. You need to run this operation for every compartment that may contain instances that are attached to the
specified volume.
For information about using the API and signing requests, see REST APIs on page 4409 and Security Credentials on
page 181. For information about SDKs, see Software Development Kits and Command Line Interface on page 4262.

Additional Resources
See the following links for example deployments of shared file systems on Oracle Cloud Infrastructure.
• GitHub project for automated terraform deployment of BeeGFS: oci-beegfs
• GitHub project for automated terraform deployment of Lustre: oci-lustre
• GitHub project for automated terraform deployments of IBM Spectrum Scale (GPFS) distributed parallel file
system on Oracle Cloud Infrastructure: oci-ibm-spectrum-scale
• Refer to page 8 of Deploying Microsoft SQL Server Always On Availability Groups for an example of setting up
shared access.

Connecting to a Volume
For volumes attached with Paravirtualized on page 505 as the volume attachment type, you do not need to perform
any additional steps after Attaching a Volume on page 521; the volumes are connected automatically. However,
for Linux-based images, if you want to mount these volumes on instance boot, you need to perform additional
configuration steps. If you specified a device path when you attached the volume, see fstab Options for Block
Volumes Using Consistent Device Paths on page 531. If you did not specify a device path or if your instance was
created from an image that does not support device paths, see Traditional fstab Options on page 532.
For volumes attached with iSCSI on page 505 as the volume attachment type, you need to connect and mount
the volume from the instance for the volume to be usable. For more information about attachment type options, see
Volume Attachment Types on page 505. To connect the volume, you must first attach it to the
instance; see Attaching a Volume on page 521.

Connecting to iSCSI-Attached Volumes

Required IAM Policy


Connecting a volume to an instance does not require a specific IAM policy. However, you may need permission to
run the necessary commands on the instance's guest OS. Contact your system administrator for more information.

Prerequisites
You must attach the volume to the instance before you can connect the volume to the instance's guest OS. For details,
see Attaching a Volume.
To connect the volume, you need the following information:
• iSCSI IP Address
• iSCSI Port numbers
• CHAP credentials (if you enabled CHAP)
• IQN
The Console provides the commands required to configure, authenticate, and log on to iSCSI.


Connecting to a Volume on a Linux Instance


1. Use the Console to obtain the iSCSI data you need to connect the volume:
a. Log on to Oracle Cloud Infrastructure.
b. Open the navigation menu. Under Core Infrastructure, go to Compute and click Instances.
c. Click the name of the instance to display the instance details.
d. In the Resources section on the Instance Details page, click Attached Block Volumes to view the attached
block volume.
e. Click the Actions icon (three dots) next to the volume you're interested in, and then click iSCSI Commands
and Information.
The iSCSI Commands and Information dialog box displays specific identifying information about your
volume and the iSCSI commands you'll need. The commands are ready to use with the appropriate information
included. You can copy and paste the commands into your instance session window for each of the following
steps.
2. Log on to your instance's guest OS.
3. Register the volume with the iscsiadm tool.

iscsiadm -m node -o new -T <volume IQN> -p <iSCSI IP address>:<iSCSI port>

A successful registration response resembles the following:

New iSCSI node [tcp:[hw=,ip=,net_if=,iscsi_if=default] 169.254.0.2,3260,-1 iqn.2015-12.us.oracle.com:c6acda73-90b4-4bbb-9a75-faux09015418] added
4. Configure iSCSI to automatically connect to the authenticated block storage volumes after a reboot:

iscsiadm -m node -T <volume IQN> -o update -n node.startup -v automatic

Note: All command arguments are essential. Success returns no response.


5. Skip this step if CHAP is not enabled. If you enabled CHAP when you attached the volume, authenticate the
iSCSI connection by providing the volume's CHAP credentials as follows:

iscsiadm -m node -T <volume IQN> -p <iSCSI IP address>:<iSCSI port> -o update -n node.session.auth.authmethod -v CHAP

iscsiadm -m node -T <volume IQN> -p <iSCSI IP address>:<iSCSI port> -o update -n node.session.auth.username -v <CHAP user name>

iscsiadm -m node -T <volume IQN> -p <iSCSI IP address>:<iSCSI port> -o update -n node.session.auth.password -v <CHAP password>

Success returns no response.


6. Log in to iSCSI:

iscsiadm -m node -T <volume IQN> -p <iSCSI IP address>:<iSCSI port> -l

A successful login response resembles the following:

Logging in to [iface: default, target: iqn.2015-12.us.oracle.com:c6acda73-90b4-4bbb-9a75-faux09015418, portal: 169.254.0.2,3260] (multiple)


Login to [iface: default, target: iqn.2015-12.us.oracle.com:c6acda73-90b4-4bbb-9a75-faux09015418, portal: 169.254.0.2,3260] successful.
7. You can now format (if needed) and mount the volume. To get a list of mountable iSCSI devices on the instance,
run the following command:

fdisk -l

The connected volume listing resembles the following:

Disk /dev/sdb: 274.9 GB, 274877906944 bytes, 536870912 sectors


Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Tip:

If you have multiple volumes that do not have CHAP enabled, you can log in
to them all at once by using the following commands:

iscsiadm -m discovery -t sendtargets -p <iSCSI IP address>:<iSCSI port>
iscsiadm -m node -l

Connecting to a Volume on a Windows Instance


Caution:

When connecting to a Windows boot volume as a data volume from a second
instance, you need to append -IsMultipathEnabled $True to the
Connect-IscsiTarget command. See Attaching a Windows boot
volume as a data volume to another instance fails for more information.
1. Use the Console to obtain the iSCSI data you need to connect the volume:
a. Log on to Oracle Cloud Infrastructure.
b. Open the navigation menu. Under Core Infrastructure, go to Compute and click Instances.
c. Click your instance's name to display the instance details.
d. In the Resources section on the Instance Details page, click Attached Block Volumes to view the attached
block volume.
e. Click the Actions icon (three dots) next to the volume you're interested in, and then click iSCSI Commands
and Information.
The iSCSI Commands and Information dialog box displays your volume’s IP address and port, which you’ll
need to know later in this procedure.
2. Log in to your instance using a Remote Desktop client.
3. On your Windows instance, open the iSCSI Initiator. The steps to open the iSCSI Initiator may vary depending on
the version of Windows.
For example: Open Server Manager, click Tools, and then select iSCSI Initiator.
4. In the iSCSI Initiator Properties dialog box, click the Discovery tab, and then click Discover Portal.
5. Enter the block volume IP Address and Port, and then click OK.
6. Click the Targets tab.
7. Under Discovered targets, select the volume IQN.
8. Click Connect.
9. Make sure that the Add this connection to the list of favorite targets check box is selected, and then click OK.


10. You can now format (if needed) and mount the volume. To view a list of mountable iSCSI devices on your
instance, in Server Manager, click File and Storage Services, and then click Disks.
The disk is displayed in the list.

fstab Options for Block Volumes Using Consistent Device Paths


On Linux instances, if you want to automatically mount volumes on instance boot, you need to set some specific
options in the /etc/fstab file, or the instance may fail to launch.
Note:

These steps are for block volumes that are attached with consistent device
paths enabled. If the block volume does not have consistent device paths
enabled, use the traditional /etc/fstab options instead.
Prerequisites
Before using a consistent device path, you should confirm that the instance supports consistent device paths and is
correctly configured.
To verify that the volume is attached to a supported instance, connect to the instance and run the following command:

ll /dev/oracleoci/oraclevd*

The output will look similar to the following:

lrwxrwxrwx. 1 root root 6 Feb 7 21:02 /dev/oracleoci/oraclevda -> ../sda


lrwxrwxrwx. 1 root root 7 Feb 7 21:02 /dev/oracleoci/oraclevda1 -> ../sda1
lrwxrwxrwx. 1 root root 7 Feb 7 21:02 /dev/oracleoci/oraclevda2 -> ../sda2
lrwxrwxrwx. 1 root root 7 Feb 7 21:02 /dev/oracleoci/oraclevda3 -> ../sda3

If you don't see this output and instead see the following error message:

cannot access /dev/oracleoci/oraclevd*: No such file or directory

there may be a problem with the instance configuration for device paths. For assistance with this, contact Support.
Use the _netdev and nofail Options
By default, the /etc/fstab file is processed before the initiator starts. To configure the mount process to initiate
before the volumes are mounted, specify the _netdev option on each line of the /etc/fstab file.
When you create a custom image of an instance where the volumes, excluding the root volume, are listed in the /
etc/fstab file, instances will fail to launch from the custom image. To prevent this issue, specify the nofail
option in the /etc/fstab file.
In the example scenario with three volumes, the /etc/fstab file entries for the volumes with the _netdev and
nofail options are as follows:

/dev/oracleoci/oraclevdb /mnt/vol1 xfs defaults,_netdev,nofail 0 2


/dev/oracleoci/oraclevdc /mnt/vol2 xfs defaults,_netdev,nofail 0 2
/dev/oracleoci/oraclevdd /mnt/vol3 xfs defaults,_netdev,nofail 0 2

After you have updated the /etc/fstab file, use the following command to mount the volumes:

bash-4.2$ sudo mount -a

Reboot the instance to confirm that the volumes are mounted properly on reboot with the following command:

bash-4.2$ sudo reboot


Troubleshooting Issues with the /etc/fstab File


If the instance fails to reboot after you update the /etc/fstab file, you may need to undo the changes to the /
etc/fstab file. To update the file, first connect to the serial console for the instance. When you have access to the
instance using the serial console connection, you can remove, comment out, or fix the changes that you made to the /
etc/fstab file.

Traditional fstab Options


On Linux instances, if you want to automatically mount volumes on instance boot, you need to set some specific
options in the /etc/fstab file, or the instance may fail to launch.
Note:

These steps are for block volumes that do not have consistent device paths
enabled. If consistent device paths are enabled for the block volume, use the /
etc/fstab options for block volumes using consistent device paths instead.
Volume UUIDs
On Linux operating systems, the order in which volumes are attached is non-deterministic, so it can change with
each reboot. If you refer to a volume using the device name, such as /dev/sdb, and you have more than one non-
root volume, you can't guarantee that the volume you intend to mount for a specific device name will be the volume
mounted.
To prevent this issue, specify the volume UUID in the /etc/fstab file instead of the device name. When you
use the UUID, the mount process matches the UUID in the superblock with the mount point specified in the /etc/
fstab file. This process guarantees that the same volume is always mounted to the same mount point.

Determining the UUID for a Volume


1. Follow the steps to attach a volume and connect to the volume.
2. After the volumes are connected, create the file system of your choice on each volume using standard Linux tools.
The remaining steps assume that three volumes were connected, and that an XFS file system was created on each
volume.
3. Run the following command to use the blkid utility to get the UUIDs for the volumes:

sudo blkid

The output will look similar to the following:

/dev/sda3: UUID="1701c7e0-7527-4338-ae9f-672fd8d24ec7" TYPE="xfs" PARTUUID="82d2ba4e-4d6e-4a33-9c4d-ba52db57ea61"
/dev/sda1: UUID="5750-10A1" TYPE="vfat" PARTLABEL="EFI System Partition" PARTUUID="082c26fd-85f5-4db2-9f4e-9288a3f3e784"
/dev/sda2: UUID="1aad7aca-689d-4f4f-aff0-e0d46fc1b89f" TYPE="swap" PARTUUID="94ee5675-a805-49b2-aaf5-2fa15aade8d5"
/dev/sdb: UUID="699a776a-3d8d-4c88-8f46-209101f318b6" TYPE="xfs"
/dev/sdd: UUID="85566369-7148-4ffc-bf97-50954cae7854" TYPE="xfs"
/dev/sdc: UUID="ba0ac1d3-58cf-4ff0-bd28-f2df532f7de9" TYPE="xfs"

The root volume in this output is /dev/sda*. The additional remote volumes are:
• /dev/sdb
• /dev/sdc
• /dev/sdd
4. To automatically attach the volumes at /mnt/vol1, /mnt/vol2, and /mnt/vol3 respectively, create the
three directories using the following commands:

bash-4.2$ sudo mkdir /mnt/vol1


bash-4.2$ sudo mkdir /mnt/vol2
bash-4.2$ sudo mkdir /mnt/vol3

Use the _netdev and nofail Options


By default, the /etc/fstab file is processed before the network and iSCSI initiator are available. To delay
mounting these volumes until the network is up, specify the _netdev option on each line of the /etc/fstab file.
When you create a custom image of an instance whose /etc/fstab file lists volumes other than the root volume,
instances will fail to launch from the custom image because the listed volumes are not present. To prevent this issue,
also specify the nofail option in the /etc/fstab file.
In the example scenario with three volumes, the /etc/fstab file entries for the volumes with the _netdev and
nofail options are as follows:

UUID=699a776a-3d8d-4c88-8f46-209101f318b6 /mnt/vol1 xfs defaults,_netdev,nofail 0 2
UUID=ba0ac1d3-58cf-4ff0-bd28-f2df532f7de9 /mnt/vol2 xfs defaults,_netdev,nofail 0 2
UUID=85566369-7148-4ffc-bf97-50954cae7854 /mnt/vol3 xfs defaults,_netdev,nofail 0 2

After you have updated the /etc/fstab file, use the following command to mount the volumes:

bash-4.2$ sudo mount -a

Reboot the instance to confirm that the volumes are mounted properly on reboot with the following command:

bash-4.2$ sudo reboot

Troubleshooting Issues with the /etc/fstab File


If the instance fails to reboot after you update the /etc/fstab file, you may need to undo the changes to the /
etc/fstab file. To update the file, first connect to the serial console for the instance. When you have access to the
instance using the serial console connection, you can remove, comment out, or fix the changes that you made to the /
etc/fstab file.

Listing Volumes
You can list all Block Volume volumes in a specific compartment, as well as view detailed information about a single volume.

Required IAM Service Policy


To use Oracle Cloud Infrastructure, you must be granted security access in a policy by an administrator. This access
is required whether you're using the Console or the REST API with an SDK, CLI, or other tool. If you get a message
that you don’t have permission or are unauthorized, verify with your administrator what type of access you have and
which compartment you should work in.
For administrators: The policy in Let users launch compute instances on page 2151 includes the ability to list
volumes. The policy in Let volume admins manage block volumes, backups, and volume groups on page 2154 lets
the specified group do everything with block volumes and backups, but not launch instances.
If you're new to policies, see Getting Started with Policies on page 2143 and Common Policies on page 2150. For
reference material about writing policies for instances, cloud networks, or other Core Services API resources, see
Details for the Core Services on page 2192.

Using the Console


Open the navigation menu. Under Core Infrastructure, go to Block Storage and click Block Volumes. A detailed
list of volumes in your current compartment is displayed.


• To view the volumes in a different compartment, change the compartment in the Compartment drop-down menu.

Using the API


For information about using the API and signing requests, see REST APIs on page 4409 and Security Credentials on
page 181. For information about SDKs, see Software Development Kits and Command Line Interface on page 4262.

List Volumes:
Get a list of volumes within a compartment.
• ListVolumes

Get a Single Volume:


Get detailed information on a single volume:
• GetVolume
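If you prefer the command line, the CLI provides equivalent commands; the following is a minimal sketch with
placeholder values:

oci bv volume list --compartment-id <compartment_ID>

oci bv volume get --volume-id <volume_ID>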

Listing Volume Attachments


You can use the API to list all Block Volume volume attachments in a specific compartment, as well as view detailed
information about a single volume attachment.

Required IAM Policy


To use Oracle Cloud Infrastructure, you must be granted security access in a policy by an administrator. This access
is required whether you're using the Console or the REST API with an SDK, CLI, or other tool. If you get a message
that you don’t have permission or are unauthorized, verify with your administrator what type of access you have and
which compartment you should work in.
For administrators: The policy in Let users launch compute instances on page 2151 includes the ability to list
volume attachments. The policy in Let volume admins manage block volumes, backups, and volume groups on page
2154 lets the specified group do everything with block volumes and backups, but not launch instances.
If you're new to policies, see Getting Started with Policies on page 2143 and Common Policies on page 2150. For
reference material about writing policies for instances, cloud networks, or other Core Services API resources, see
Details for the Core Services on page 2192.

Using the API


For information about using the API and signing requests, see REST APIs on page 4409 and Security Credentials on
page 181. For information about SDKs, see Software Development Kits and Command Line Interface on page 4262.

List Attachments:
Get information on all volume attachments in a specific compartment.
• ListVolumeAttachments

Get a Single Attachment:


Get detailed information on a single attachment.
• GetVolumeAttachment
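The CLI equivalents, as a minimal sketch with placeholder values, are:

oci compute volume-attachment list --compartment-id <compartment_ID>

oci compute volume-attachment get --volume-attachment-id <volume_attachment_ID>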

Renaming a Volume
You can use the API to change the display name of a Block Volume volume.


Required IAM Policy


To use Oracle Cloud Infrastructure, you must be granted security access in a policy by an administrator. This access
is required whether you're using the Console or the REST API with an SDK, CLI, or other tool. If you get a message
that you don’t have permission or are unauthorized, verify with your administrator what type of access you have and
which compartment you should work in.
For administrators: The policy in Let users launch compute instances on page 2151 includes the ability to rename
block volumes. The policy in Let volume admins manage block volumes, backups, and volume groups on page
2154 lets the specified group do everything with block volumes and backups, but not launch instances.
If you're new to policies, see Getting Started with Policies on page 2143 and Common Policies on page 2150. For
reference material about writing policies for instances, cloud networks, or other Core Services API resources, see
Details for the Core Services on page 2192.

Using the API


To update a volume's display name, use the following operation:
• UpdateVolume
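If you prefer the command line, a minimal CLI sketch with placeholder values is:

oci bv volume update --volume-id <volume_ID> --display-name <new_display_name>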
For information about using the API and signing requests, see REST APIs on page 4409 and Security Credentials on
page 181. For information about SDKs, see Software Development Kits and Command Line Interface on page 4262.

Editing a Volume's Settings


The Oracle Cloud Infrastructure Block Volume service enables you to edit the following settings for block volumes
and boot volumes:
• Expand the volume size.
• Change the volume performance.
• Enable performance auto-tune.
• Assign a volume backup policy.
You can edit these settings when volumes are online and attached to instances or while they’re detached from
instances. See the applicable sections in this topic for links to tasks describing the steps to edit these settings, along
with additional information about working with these settings.

Required IAM Policy


To use Oracle Cloud Infrastructure, you must be granted security access in a policy by an administrator. This access
is required whether you're using the Console or the REST API with an SDK, CLI, or other tool. If you get a message
that you don’t have permission or are unauthorized, verify with your administrator what type of access you have and
which compartment you should work in.
For administrators: The policy in Let users launch compute instances on page 2151 includes the ability to attach/
detach existing block volumes. The policy in Let volume admins manage block volumes, backups, and volume groups
on page 2154 lets the specified group do everything with block volumes and backups, but not launch instances.
If you're new to policies, see Getting Started with Policies on page 2143 and Common Policies on page 2150. For
reference material about writing policies for instances, cloud networks, or other Core Services API resources, see
Details for the Core Services on page 2192.

Resizing Volumes
The Block Volume service's online resizing feature enables you to expand the size of an existing volume in place.
After you resize an online volume, you need to rescan the disk and extend the partition. For more information, see:
• Resizing a Volume on page 536
• Rescanning the Disk for a Block Volume or Boot Volume on page 539


• Extending the Partition for a Block Volume on page 540 and Extending the Partition for a Boot Volume on
page 615

Changing the Volume Performance


The elastic performance feature of the Block Volume service allows you to dynamically change the volume
performance; see Block Volume Elastic Performance on page 585 for more information. You can change volume
performance for your block volumes and boot volumes while they are online, without any downtime; see Changing
the Performance of a Volume on page 586. For specific information about this task in the Console, see Using the
Console on page 586.
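If you prefer the command line, the volume performance setting can also be changed with the CLI; this is a minimal
sketch that assumes the --vpus-per-gb option (10 for Balanced, 20 for Higher Performance) and uses a placeholder
OCID:

oci bv volume update --volume-id <volume_ID> --vpus-per-gb 20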

Enabling Performance Auto-tune for a Volume


With the performance auto-tune feature of the Block Volume service, you can configure your volumes to optimize
cost with performance. When auto-tune is enabled, a volume's performance is automatically adjusted to the lower
cost option when the volume is detached from all instances. The performance is then automatically reset to the default
performance option when you reattach the volume to an instance. For details, see Auto-tune Volume Performance on
page 587.

Assigning a Backup Policy to a Volume


The Block Volume service provides you with the capability to perform volume backups automatically on a schedule
and retain them based on the selected backup policy.
There are two kinds of backup policies:
• User defined: Custom backup policies that you create and configure schedules for. With user defined policies,
you can also enable scheduled cross-region backups, so that scheduled volume backups are automatically copied
to a second region; see Scheduling Volume Backup Copies Across Regions on page 552.
• Oracle defined: Predefined backup policies that have a set backup frequency and retention period. You cannot
modify these policies.
For more information about scheduling volume backups, see Policy-Based Backups. For task-based procedures, see
Managing Backup Policy Assignments to Volumes.
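If you prefer the command line, assigning a backup policy to a volume is also possible with the CLI; this is a
minimal sketch that assumes the volume-backup-policy-assignment command group and uses placeholder OCIDs:

oci bv volume-backup-policy-assignment create --asset-id <volume_ID> --policy-id <backup_policy_ID>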

Resizing a Volume
The Oracle Cloud Infrastructure Block Volume service lets you expand the size of block volumes and boot volumes.
You have several options to increase the size of your volumes:
• Expand an existing volume in place with online resizing. See Online Resizing of Block Volumes Using the
Console on page 537 for the steps to do this.
• Restore from a volume backup to a larger volume. See Restoring a Backup to a New Volume on page 561 and
Restoring a Boot Volume on page 622.
• Clone an existing volume to a new, larger volume. See Cloning a Volume on page 564 and Cloning a Boot
Volume on page 625.
• Expand an existing volume in place with offline resizing. See Offline Resizing of Block Volumes Using the
Console on page 538 for the steps to do this.
For more information about the Block Volume service, see the Block Volume FAQ.
You can only increase the size of a volume; you cannot decrease the size.
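If you prefer the command line, an online resize can also be requested with the CLI; this is a minimal sketch with
placeholder values, and you still need to rescan the disk and extend the partition afterward as described in this topic:

oci bv volume update --volume-id <volume_ID> --size-in-gbs <new_size_in_GB>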
Note:

Resizing IDE type boot volumes is not supported. This applies to both offline
and online resizing. To work around this limitation, you can do one of the
following:
• Terminate the VM instance, ensuring that you keep the boot volume when
you terminate the instance. Resize the boot volume that you have kept,


and then launch a new VM instance, using the resized boot volume as the
image source.
• Create a clone of the boot volume, resize the boot volume clone, and then
launch a new VM instance using the resized boot volume clone as the
image source.
Caution:

Before you resize a boot or block volume, you should create a backup of the
volume.
Note:

After a volume has been resized, the first backup on the resized volume
will be a full backup. See Volume Backup Types on page 545 for more
information about full versus incremental volume backups.

Required IAM Policy


To use Oracle Cloud Infrastructure, you must be granted security access in a policy by an administrator. This access
is required whether you're using the Console or the REST API with an SDK, CLI, or other tool. If you get a message
that you don’t have permission or are unauthorized, verify with your administrator what type of access you have and
which compartment you should work in.
For administrators: The policy in Let users launch compute instances on page 2151 includes the ability to attach/
detach existing block volumes. The policy in Let volume admins manage block volumes, backups, and volume groups
on page 2154 lets the specified group do everything with block volumes and backups, but not launch instances.
If you're new to policies, see Getting Started with Policies on page 2143 and Common Policies on page 2150. For
reference material about writing policies for instances, cloud networks, or other Core Services API resources, see
Details for the Core Services on page 2192.

Online Resizing of Block Volumes Using the Console


With online resizing, you can expand the volume size without detaching the volume from an instance.
To resize a block volume attached to a Linux-based instance
1. Open the navigation menu. Under Core Infrastructure, go to Block Storage and click Block Volumes.
2. In the Block Volumes list, click the block volume you want to resize.
3. Click Edit Size or Performance.
4. Specify the new size in VOLUME SIZE (IN GB). You must specify a larger value than the block volume's
current size.
5. Click Save Changes. This opens a dialog that lists the commands to rescan the disk that you need to run after
the volume is provisioned. You need to run these commands so that the operating system identifies the expanded
volume size. Click the Copy link to copy the commands, and then click Close to close the dialog.
6. Log on to your instance's OS and then paste and run the rescan commands you copied in the previous step into
your instance session window. The rescan commands are also provided in Rescanning the Disk for Volumes
Attached to Linux-Based Instances on page 539.
7. Extend the partition, see Extending the Partition for a Block Volume on page 540.
To resize a block volume attached to a Windows instance
This procedure describes the process for online resizing for block volumes attached to Windows instances, or other
instance types that are not Linux-based.
1. Open the navigation menu. Under Core Infrastructure, go to Block Storage and click Block Volumes.
2. In the Block Volumes list, click the block volume you want to resize.
3. Click Edit Size or Performance.


4. Specify the new size in VOLUME SIZE (IN GB). You must specify a larger value than the block volume's
current size.
5. Click Save Changes.
6. Rescan the disk, see Rescanning the Disk for Volumes Attached to Windows Instances on page 540.
7. Extend the partition, see Extending the Partition for a Block Volume on page 540.
To resize a boot volume for a Linux-based Instance
1. Open the navigation menu. Under Core Infrastructure, go to Compute and click Boot Volumes.
2. In the Boot Volumes list, click the boot volume you want to resize.
3. Click Edit Size or Performance.
4. Specify the new size in VOLUME SIZE (IN GB). You must specify a larger value than the boot volume's current
size.
5. Click Save Changes. This opens a dialog that lists the commands to rescan the disk that you need to run after
the volume is provisioned. You need to run these commands so that the operating system identifies the expanded
volume size. Click the Copy link to copy the commands, and then click Close to close the dialog.
6. Log on to your instance's OS and then paste and run the rescan commands you copied in the previous step into
your instance session window. The rescan commands are also provided in Rescanning the Disk for Volumes
Attached to Linux-Based Instances on page 539.
7. Extend the partition and grow the file system using the oci-growfs on page 648 operation from OCI
Utilities on page 646.
Resizing a Boot Volume for a Windows Instance
1. Open the navigation menu. Under Core Infrastructure, go to Compute and click Boot Volumes.
2. In the Boot Volumes list, click the boot volume you want to resize.
3. Click Resize.
4. Click Save Changes.
5. Rescan the disk, see Rescanning the Disk for Volumes Attached to Windows Instances on page 540.
6. Extend the partition, see Extending the System Partition on a Windows-Based Image on page 616.

Offline Resizing of Block Volumes Using the Console


With offline resizing, you detach the volume from an instance before you expand the volume size. Once the volume is
resized and reattached, you need to extend the partition, but you do not need to rescan the disk.

Considerations When Resizing an Offline Volume


Whenever you detach and reattach volumes, there are complexities and risks for both Linux-based and Windows-
based instances. This applies to both paravirtualized and iSCSI attachment types. You should keep the following in
mind when resizing volumes:
• When you reattach a volume to an instance after resizing, if you are not using consistent device paths, or the
instance does not support consistent device paths, device order and path may change. If you are using a tool such
as Logical Volume Manager (LVM), you may need to fix the device mappings. For more information about
consistent device paths, see Connecting to Volumes With Consistent Device Paths on page 522.
• When you detach and then reattach an iSCSI-attached volume to an instance, the volume's IP address will
increment.
• Before you resize a volume, you should create a full backup of the volume.
To resize a block volume attached to a Linux-based instance
1. Detach the block volume, see Detaching a Volume on page 567.
2. Open the navigation menu. Under Core Infrastructure, go to Block Storage and click Block Volumes.
3. In the Block Volumes list, click the block volume you want to resize.
4. Click Edit Size or Performance.
5. Specify the new size in VOLUME SIZE (IN GB). You must specify a larger value than the block volume's
current size.


6. Click Save Changes. This opens a dialog that lists the required steps to complete the volume resize. For offline
resizing, you need to extend the partition after you reattach the volume. Click Close to close the dialog.
7. Reattach the volume, see Attaching a Volume on page 521.
8. Extend the partition, see Extending the Partition for a Block Volume on page 540.
Resizing a Boot Volume for a Windows Instance
1. Stop the instance, see Stopping and Starting an Instance on page 785.
2. Detach the boot volume, see Detaching a Boot Volume on page 627.
3. Open the navigation menu. Under Core Infrastructure, go to Compute and click Boot Volumes.
4. In the Boot Volumes list, click the boot volume you want to resize.
5. Click Edit Size or Performance.
6. Specify the new size in VOLUME SIZE (IN GB). You must specify a larger value than the block volume's
current size.
7. Reattach the boot volume, see Attaching a Boot Volume on page 617.
8. Restart the instance, see Stopping and Starting an Instance on page 785.
9. Extend the partition, see Extending the System Partition on a Windows-Based Image on page 616.
Resizing a Boot Volume for a Linux Instance
1. Stop the instance, see Stopping and Starting an Instance on page 785.
2. Detach the boot volume, see Detaching a Boot Volume on page 627.
3. Open the navigation menu. Under Core Infrastructure, go to Compute and click Boot Volumes.
4. In the Boot Volumes list, click the boot volume you want to resize.
5. Click Edit Size or Performance.
6. Specify the new size in VOLUME SIZE (IN GB). You must specify a larger value than the block volume's
current size.
7. Attach the boot volume to a second instance as a data volume. See Attaching a Volume on page 521 and
Connecting to a Volume on page 528.
8. Extend the partition and grow the file system, see Extending the Root Partition on a Linux-Based Image on page
616.
9. Reattach the boot volume, see Attaching a Boot Volume on page 617.
10. Restart the instance, see Stopping and Starting an Instance on page 785.

Rescanning the Disk for a Block Volume or Boot Volume


The Oracle Cloud Infrastructure Block Volume service lets you expand the size of block volumes and boot volumes
while they are online and attached to instances. For more information, see Online Resizing of Block Volumes Using
the Console on page 537. After the volume is provisioned, you need to run commands to rescan the disk so that
the operating system identifies the expanded volume size. The rescan commands differ depending
on the operating system of the attached instance. This topic describes some procedures you can use to rescan the disk.
Required IAM Policy
Rescanning the disk does not require a specific IAM policy. However, you may need permission to run the necessary
commands on the instance's guest OS. Contact your system administrator for more information.
Rescanning the Disk for Volumes Attached to Linux-Based Instances
For volumes attached to Linux-based instances, run the following commands to rescan the disk for a block volume:

sudo dd iflag=direct if=/dev/<device_name> of=/dev/null count=1


echo "1" | sudo tee /sys/class/block/<device_name>/device/rescan

These commands are also displayed in the dialog that opens after you click Save Changes in the Edit Size or
Performance dialog, and you can copy them from that dialog.


Next Steps
After you've rescanned the disk, you need to extend the partition. See Extending the Partition for a Block Volume on
a Linux-Based Image on page 541 for block volumes. For boot volumes, use the oci-growfs on page 648
operation from OCI Utilities on page 646.
Rescanning the Disk for Volumes Attached to Windows Instances

Using Disk Management or diskpart


For volumes formatted as FAT32 or NTFS, you can rescan the disk using the Windows interface, in Disk
Management, or you can use the diskpart utility's rescan command from the command line.
Rescanning the disk using the command line with DISKPART
1. Open a command prompt as administrator on the instance.
2. Run the following command to start the diskpart utility:

diskpart
3. At the DISKPART prompt, run the following command:

rescan

Rescanning the disk using the Windows interface


1. Open the Disk Management system utility on the instance.
2. Click Action, and then click Rescan Disks to update the disk information.

Using Cygwin
For volumes formatted with a non-native Windows file system, such as volumes formatted using Oracle Automatic
Storage Management (Oracle ASM), you can't use the Windows interface or the diskpart utility. Instead, you can use
the dd process from a Cygwin terminal to rescan the disk. You can also use this for native Windows file systems.
For more information, see Cygwin.

Next Steps
After you've rescanned the disk, you need to extend the partition. For block volumes, see Extending the Partition for
a Block Volume on a Windows-Based Image on page 543. For boot volumes, see Extending the System Partition on a
Windows-Based Image on page 616.

Extending the Partition for a Block Volume


The Oracle Cloud Infrastructure Block Volume service lets you expand the size of block volumes with offline volume
resizing. For more information, see Resizing a Volume on page 536. In order to take advantage of the larger
volume size, you need to extend the partition for the block volume. For boot volumes, see Extending the Partition for
a Boot Volume on page 615.
Note:

After a volume has been resized, the first backup on the resized volume
will be a full backup. See Volume Backup Types on page 545 for more
information about full versus incremental volume backups.
Required IAM Policy
Extending a partition on an instance does not require a specific IAM policy. However, you may need permission to
run the necessary commands on the instance's guest OS. Contact your system administrator for more information.


Extending the Partition for a Block Volume on a Linux-Based Image


On Linux-based images, use the following steps to extend the partition for a block volume.

Prerequisites
After you have resized a volume, you need to attach it to an instance before you can extend the partition and grow
the file system. See Attaching a Volume on page 521 and Connecting to a Volume on page 528 for more
information.

Extending the Linux Partition


Extending a partition
1. To identify the volume that you want to extend the partition for, run the following command to list the attached
block volumes:

lsblk
2. Run the following command to edit the volume's partition table with parted:

parted <volume_id>

<volume_id> is the volume identifier, for example /dev/sdc.


3. When you run parted, you may encounter the following error message:

Warning: Not all of the space available to <volume_id> appears to be used,
you can fix the GPT to use all of the space (an extra volume_size blocks)
or continue with the current setting?

You are then prompted to fix the error or ignore the error and continue with the current setting. Specify the option
to fix the error.
4. Run the following command to change the display units to sectors so that you can see the precise start position for
the volume:

(parted) unit s
5. Run the following command to display the current partitions in the partition table:

(parted) print

Make note of the values in the Number, Start, and File system columns for the partition that you want to extend.
6. Run the following command to remove the existing partition:

(parted) rm <partition_number>

<partition_number> is the value from the Number column.


7. Run the following command to recreate the partition:

(parted) mkpart

At the Start? prompt, specify the value from the Start column. At the File system type? prompt,
specify the value from the File system column. Specify 100% for the End? prompt.
8. Run the following command to exit parted:

(parted) quit

This command forces a rewrite of the partition table with the new partition settings that you specified.
9. To verify that the partition was extended, run the following command to list the attached block volumes:

lsblk

After you extend the partition, you need to grow the file system. The steps in the following procedure apply only
to xfs file systems.
Growing the file system for a partition
1. Before you grow the file system, repair any issues with the file system on the extended partition by running the
following command:

xfs_repair <partition_id>

<partition_id> is the partition identifier, for example /dev/sdc1. See Checking and Repairing an XFS File
System for more information.
2. After you have confirmed that there are no more issues to repair, you need to create a mount point to run the
xfs_growfs against. To do this, create a directory and mount the partition to that directory by running the
following commands:

mkdir <directory_name>


mount <partition_id> <directory_name> -o nouuid

<partition_id> is the partition identifier, for example /dev/sdc1, and <directory_name> is the directory name,
for example data.
3. After you have created the mount point run the following command to grow the file system:

xfs_growfs -d <directory_name>

<directory_name> is the name for the directory you created in the previous step, for example data.
4. To verify that the file system size is correct, run the following command to display the file system details:

df -lh

Extending the Partition for a Block Volume on a Windows-Based Image


On Windows-based images, you can extend a partition using the Windows interface or from the command line using
the DISKPART utility.

Windows Server 2012 and Later Versions


The steps to extend a partition for a block volume attached to an instance running Windows Server 2012, Windows
Server 2016, or Windows Server 2019 are the same, and are described in the following procedures.
Extending a partition using the Windows interface
1. Open the Disk Management system utility on the instance.
2. Right-click the expanded block volume and select Extend Volume.
3. Follow the instructions in the Extend Volume Wizard:
a. Select the disk that you want to extend, enter the size, and then click Next.
b. Confirm that the disk and size settings are correct, and then click Finish.
4. Verify that the block volume's disk has been extended in Disk Management.
Extending a partition using the command line with DISKPART
1. Open a command prompt as administrator on the instance.


2. Run the following command to start the DISKPART utility:

diskpart
3. At the DISKPART prompt, run the following command to display the instance's volumes:

list volume
4. Run the following command to select the expanded block volume:

select volume <volume_number>

<volume_number> is the number associated with the block volume that you want to extend the partition for.
5. Run the following command to extend the partition:

extend size=<increased_size_in_MB>

<increased_size_in_MB> is the size in MB that you want to extend the partition to.
Caution:

When using the DISKPART utility, do not overextend the partition beyond
the current available space. Overextending the partition could result in data
loss.
6. To confirm that the partition was extended, run the following command and verify that the block volume's
partition has been extended:

list volume

Overview of Block Volume Backups


The backups feature of the Oracle Cloud Infrastructure Block Volume service lets you make a point-in-time snapshot
of the data on a block volume. You can make a backup of a volume when it is attached to an instance or while it is
detached. These backups can then be restored to new volumes either immediately after a backup or at a later time that
you choose.
Backups are encrypted and stored in Oracle Cloud Infrastructure Object Storage, and can be restored as new volumes
to any availability domain within the same region they are stored. This capability provides you with a spare copy of a
volume and gives you the ability to successfully complete disaster recovery within the same region.
There are two ways you can initiate a backup, either by manually starting the backup, or by assigning a policy which
defines a set backup schedule.

Manual Backups
These are on-demand one-off backups that you can launch immediately by following the steps described in Backing
Up a Volume on page 550. When launching a manual backup, you can specify whether an incremental or a full
backup should be performed. See Volume Backup Types for more information about backup types.

Policy-Based Backups
These are automated scheduled backups as defined by the backup policy assigned to the volume.
There are two kinds of backup policies:
• Oracle defined: Predefined backup policies that have a set backup frequency and retention period. You cannot
modify these policies. For more information, see Oracle Defined Backup Policies on page 554.
• User defined: Custom backup policies that you create and configure schedules and retention periods for. You can
also enable scheduled cross-region automated backups with user defined policies, see Scheduling Volume Backup
Copies Across Regions on page 552. For more information, see User Defined Backup Policies on page 551.

See Policy-Based Backups on page 551 for more information.

Tags
When a volume backup is created, the source volume's tags are automatically included in the volume backup. This
also applies to backups created on a schedule by an assigned backup policy. You can also apply additional tags to
volume backups as needed.
When a volume backup is copied to a new region, tags are also automatically copied with the volume backup. When
the volume is restored from a backup, the volume is restored with the source volume's tags.

Volume Backup Types


There are two backup types available in the Block Volume service:
• Incremental: This backup type includes only the changes since the last backup.
• Full: This backup type includes all changes since the volume was created.
For data recovery purposes, there is no functional difference between an incremental backup and a full backup. You
can restore a volume from any of your incremental or full volume backups. Both backup types enable you to restore
the full volume contents to the point-in-time snapshot of the volume when the backup was taken. You don't need to
keep the initial full backup or subsequent incremental backups in the backup chain and restore them in sequence; you
only need to keep the backups taken for the times you care about.

Backup Details
Incremental backups are a record of all the changes since the last backup. If the first backup on a volume is
created as incremental, it is effectively a full backup. Full backups are a record of all the changes since the
volume was created.
For example, in a scenario where you create a 16 TB block volume, modify 40 GB on the volume, and then launch
a full backup of the volume, upon completion, the volume backup size is 40 GB. If you then modify an additional 4
GB and create an incremental backup, the unique size of the incremental backup will be 4 GB. If the full backup is
deleted, the incremental backup will retain the full 44 GB necessary to restore the volume contents. In this example,
if there was a third incremental backup of non-overlapping blocks, with a size of 1 GB, created after the second
incremental backup, and then the full backup is deleted, the third backup would stay at a 1 GB size, and the second
incremental backup size would be updated to 44 GB. The blocks are accounted for in the earliest backup that
references them.
Note:

After a volume has been resized, the first backup on the resized volume
will be a full backup. See Resizing a Volume on page 536 for more
information about volume resizing.

Planning Your Backup


The primary use of backups is to support business continuity, disaster recovery, and long-term archiving
requirements. When determining a backup schedule, your backup plan and goals should consider the following:
• Frequency: How often you want to back up your data.
• Recovery time: How long you can wait for a backup to be restored and accessible to the applications that use it.
The time for a backup to complete depends on several factors, but it generally takes a few minutes or longer,
depending on the size of the data being backed up and the amount of data that has changed since your last backup.
• Number of stored backups: How many backups you need to keep available and the deletion schedule for those
you no longer need. You can only create one backup at a time, so if a backup is underway, it will need to complete
before you can create another one. For details about the number of backups you can store, see Block Volume
Capabilities and Limits on page 509.

The common use cases for using backups are:


• Needing to create multiple copies of the same volume. Backups are highly useful in cases where you need to
create many instances with many volumes that need to contain the same data.
• Taking a snapshot of your work that you can restore to a new volume at a later time.
• Ensuring you have a spare copy of your volume in case something goes wrong with your primary copy.
Volume Backup Size
Volume backup size may be larger than the current volume usage. Some of the reasons for this could include the
following:
• Any part of a volume that has been written to is considered initialized, so it will always be part of a volume backup.
• Many operating systems write to or zero out blocks, which results in those blocks being marked as used. The Block
Volume service considers these blocks updated and includes them in the volume backup.
• Volume backups also include metadata, which can be up to 1 GB in additional data. For example, in a full backup
of a 256 GB Windows boot disk, you may see a backup size of 257 GB, which includes an additional 1 GB of
metadata.

Copying Block Volume Backups Across Regions


You can copy block volume backups between regions using the Console, command line interface (CLI), SDKs, or
REST APIs. For steps, see Copying a Volume Backup Between Regions on page 562. This capability enhances the
following scenarios:
• Disaster recovery and business continuity: Copying block volume backups to another region at regular
intervals makes it easier for you to rebuild applications and data in the destination region if a region-wide
disaster occurs in the source region.
• Migration and expansion: You can easily migrate and expand your applications to another region.
You can also enable scheduled cross-region automated backups with user defined policies, see Scheduling Volume
Backup Copies Across Regions on page 552.
To copy volume backups between regions, you must have permission to read and copy volume backups in the source
region, and permission to create volume backups in the destination region. For more information see Required IAM
Policy on page 624.
Once you have copied the volume backup to the new region you can then restore from that backup by creating a new
volume from the backup using the steps described in Restoring a Backup to a New Volume on page 561.

Volume Backup Encryption


The Oracle Cloud Infrastructure Block Volume service always encrypts all block volumes, boot volumes, and volume
backups at rest by using the Advanced Encryption Standard (AES) algorithm with 256-bit encryption.
The Oracle Cloud Infrastructure Vault service enables you to bring and manage your own keys to use for encrypting
volumes and their backups. When you create a volume backup, the encryption key used for the volume is also used
for the volume backup. When you restore the backup to create a new volume you configure a new key, see Restoring
a Backup to a New Volume on page 561. See also Overview of Vault on page 3988.
If you do not configure a volume to use the Vault service, the Block Volume service uses the Oracle-provided
encryption key instead. This applies to both at-rest and in-transit encryption.

Best Practices When Creating Block Volume Backups


When creating and restoring from backups, keep in mind the following:
• Before creating a backup, you should ensure that the data is consistent: Sync the file system, unmount the file
system if possible, and save your application data (see the example commands after this list). Only the data on the
disk will be backed up. When creating a backup, after the backup state changes from REQUEST_RECEIVED to CREATING,
you can return to writing data to the volume. While a backup is in progress, the volume that is being backed up
cannot be deleted.

• If you want to attach a restored volume that has the original volume attached, be aware that some operating
systems do not allow you to restore identical volumes. To resolve this, you should change the partition IDs
before restoring the volume. The steps to change an operating system's partition ID vary by operating system. For
instructions, see your operating system's documentation.
• You should not delete the original volume until you have verified that the backup you created of it completed
successfully.
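
As an example of quiescing a file system on a Linux instance before starting a backup, a minimal sketch follows. The mount point /mnt/vol1 is a placeholder, and freezing or unmounting may not be practical for every workload:

sudo sync                            # flush cached writes to disk
sudo fsfreeze --freeze /mnt/vol1     # block new writes (or unmount instead: sudo umount /mnt/vol1)
# Start the volume backup (Console, CLI, or API) while writes are suspended.
sudo fsfreeze --unfreeze /mnt/vol1   # resume writes once the backup state changes to CREATING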
See Backing Up a Volume on page 550 and Restoring a Backup to a New Volume on page 561 for more
information.

Differences Between Block Volume Backups and Clones


Consider the following criteria when you decide whether to create a backup or a clone of a volume.

Description
  Volume backup: Creates a point-in-time backup of data on a volume. You can restore multiple new volumes from the backup later in the future.
  Volume clone: Creates a single point-in-time copy of a volume without having to go through the backup and restore process.
Use case
  Volume backup: Retain a backup of the data in a volume, so that you can duplicate an environment later or preserve the data for future use. Meet compliance and regulatory requirements, because the data in a backup remains unchanged over time. Support business continuity requirements. Reduce the risk of outages or data mutation over time.
  Volume clone: Rapidly duplicate an existing environment. For example, you can use a clone to test configuration changes without impacting your production environment.
Speed
  Volume backup: Slower (minutes or hours)
  Volume clone: Faster (seconds)
Cost
  Volume backup: Lower cost
  Volume clone: Higher cost
Storage location
  Volume backup: Object Storage
  Volume clone: Block Volume
Retention policy
  Volume backup: Policy-based backups expire; manual backups do not expire.
  Volume clone: No expiration
Volume groups
  Volume backup: Supported. You can back up a volume group.
  Volume clone: Supported. You can clone a volume group.

For background information and steps to clone a block volume, see Cloning a Volume on page 564.

Using the CLI or REST APIs to Customize and Manage the Lifecycle of Volume
Backups
You can use the CLI, REST APIs, or the SDKs to automate, script, and manage volume backups and their lifecycle.
Using the CLI
This section provides basic sample CLI commands that you can use in a script, such as a cron job run by the cron
utility on Linux-based operating systems, to perform automatic backups at specific times. For information about using
the CLI, see Command Line Interface (CLI) on page 4228.

To create a manual backup of the specified block volume


Open a command prompt and run:

oci bv backup create --volume-id <block_volume_OCID> --display-name <Name> --type <FULL|INCREMENTAL>

For example:

oci bv backup create --volume-id ocid1.volume.oc1..<unique_ID> --display-name "backup display name" --type FULL

To delete a block volume backup


Open a command prompt and run:

oci bv backup delete --volume-backup-id <volume_backup_OCID>

For example:

oci bv backup delete --volume-backup-id ocid1.volume.oc1..<unique_ID>

To create a manual backup of the specified boot volume


Open a command prompt and run:

oci bv boot-volume-backup create --boot-volume-id <boot_volume_OCID> --display-name <Name> --type <FULL|INCREMENTAL>

For example:

oci bv boot-volume-backup create --boot-volume-id ocid1.bootvolume.oc1..<unique_ID> --display-name "backup display name" --type FULL

To delete a boot volume backup


Open a command prompt and run:

oci bv boot-volume-backup delete --boot-volume-backup-id <boot_volume_backup_OCID>

For example:

oci bv boot-volume-backup delete --boot-volume-backup-id ocid1.bootvolumebackup.oc1..<unique_ID>

To list the Oracle-defined backup policies


Open a command prompt and run:

oci bv volume-backup-policy list

To assign an Oracle-defined backup policy to a boot or block volume


Open a command prompt and run:

oci bv volume-backup-policy-assignment create --asset-id <volume_OCID> --policy-id <policy_OCID>

For example:

oci bv volume-backup-policy-assignment create --asset-id ocid1.volume.oc1..<unique_ID> --policy-id ocid1.volumebackuppolicy.oc1..<unique_ID>

To un-assign an Oracle-defined backup policy from a boot or block volume


Open a command prompt and run:

oci bv volume-backup-policy-assignment delete --policy-assignment-id <policy_assignment_OCID>

For example:

oci bv volume-backup-policy-assignment delete --policy-assignment-id ocid1.volumebackuppolicyassign.oc1..<unique_ID>

To retrieve the backup policy assignment ID for a boot or block volume


Open a command prompt and run:

oci bv volume-backup-policy-assignment get-volume-backup-policy-asset-assignment --asset-id <volume_OCID>

For example:

oci bv volume-backup-policy-assignment get-volume-backup-policy-asset-assignment --asset-id ocid1.volume.oc1..<unique_ID>

Using the API


For information about using the API and signing requests, see REST APIs on page 4409 and Security Credentials on
page 181. For information about SDKs, see Software Development Kits and Command Line Interface on page 4262.
Use the following operations for working with block volume backups, boot volume backups, and backup policies.

Block Volume Backups


• CreateVolumeBackup
• DeleteVolumeBackup
• GetVolumeBackup
• ListVolumeBackups
• UpdateVolumeBackup
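
As an illustration of the request payload for CreateVolumeBackup, the JSON body might look like the following. The field names come from the CreateVolumeBackupDetails model in the API reference; the display name and backup type shown here are placeholder values.

{
  "volumeId": "ocid1.volume.oc1..<unique_ID>",
  "displayName": "backup display name",
  "type": "INCREMENTAL"
}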

Boot Volume Backups


• CreateBootVolumeBackup
• DeleteBootVolumeBackup
• GetBootVolumeBackup
• ListBootVolumeBackups
• UpdateBootVolumeBackup

Volume Backup Policies and Policy Assignments


• GetVolumeBackupPolicy
• ListVolumeBackupPolicies
• CreateVolumeBackupPolicyAssignment
• DeleteVolumeBackupPolicyAssignment

• GetVolumeBackupPolicyAssetAssignment
• GetVolumeBackupPolicyAssignment

Backing Up a Volume
You can create a backup of a volume using Block Volume. Volume backups are point-in-time snapshots of volume
data. For more information about volume backups, see Overview of Block Volume Backups on page 544.
For information to help you decide whether to create a backup or a clone of a boot volume, see Differences Between
Block Volume Backups and Clones on page 547.
Note:

Volume backup size may be larger than the current volume usage. See
Volume Backup Size on page 546 for more information.
Required IAM Policy
To use Oracle Cloud Infrastructure, you must be granted security access in a policy by an administrator. This access
is required whether you're using the Console or the REST API with an SDK, CLI, or other tool. If you get a message
that you don’t have permission or are unauthorized, verify with your administrator what type of access you have and
which compartment you should work in.
For administrators: The policy in Let volume admins manage block volumes, backups, and volume groups on page
2154 lets the specified group do everything with block volumes and backups. The policy in Let boot volume backup
admins manage only backups on page 2155 further restricts access to just creating and managing backups.
Tip:

When users create a backup from a volume or restore a volume from a backup, the volume and backup don't have to
be in the same compartment. However, users must have access to both compartments.
If you're new to policies, see Getting Started with Policies on page 2143 and Common Policies on page 2150. For
reference material about writing policies for instances, cloud networks, or other Core Services API resources, see
Details for the Core Services on page 2192.
Using the Console
1. Open the navigation menu. Under Core Infrastructure, go to Block Storage and click Block Volumes.
2. Click the block volume that you want to create a backup for.
3. Click Create Manual Backup.
4. Enter a name for the backup. Avoid entering confidential information.
5. Select the backup type, either incremental or full. See Volume Backup Types on page 545 for information
about backup types.
6. If you have permissions to create a resource, then you also have permissions to apply free-form tags to that
resource. To apply a defined tag, you must have permissions to use the tag namespace. For more information
about tagging, see Resource Tags on page 213. If you are not sure whether to apply tags, then skip this option (you
can apply tags later) or ask your administrator.
7. Click Create Backup.
The backup is complete once its icon no longer shows a state of CREATING.
Using the API
To back up a volume, use the following operation:
• CreateVolumeBackup
For information about using the API and signing requests, see REST APIs on page 4409 and Security Credentials on
page 181. For information about SDKs, see Software Development Kits and Command Line Interface on page 4262.

For more information about backups, see Overview of Block Volume Backups on page 544 and Restoring a
Backup to a New Volume on page 561.

Policy-Based Backups
The Oracle Cloud Infrastructure Block Volume service provides you with the capability to perform volume backups
and volume group backups automatically on a schedule and retain them based on the selected backup policy.
With user defined policies, you can also enable scheduled cross-region backups, so that scheduled volume backups
are automatically copied to a second region, see Scheduling Volume Backup Copies Across Regions on page 552.
These features allow you to adhere to your data compliance and regulatory requirements.
Caution:

Deleting Block Volumes with Policy-Based Backups


All policy-based backups will eventually expire, so if you want to keep a
volume backup indefinitely, you need to create a manual backup.
Volume backups are point-in-time snapshots of volume data. For more information about volume backups, see
Overview of Block Volume Backups on page 544.
There are two kinds of backup policies:
• User defined: Custom backup policies that you create and configure schedules for.
• Oracle defined: Predefined backup policies that have a set backup frequency and retention period. You cannot
modify these policies.
Note:

Timing for Scheduled Backups


Scheduled volume backups are not guaranteed to start at the exact time
specified by the backup schedule. You may see up to several hours of delay
between the scheduled start time and the actual start time for the volume
backup in scenarios where the system is overloaded. This applies to both user
defined and Oracle defined backup policies.
Caution:

Full Backups and Oracle Defined Policies


Starting November 3, 2021, Oracle defined policies will no longer include
full backups. See Full backups removed from Oracle defined backup policies.
User Defined Backup Policies
Oracle Cloud Infrastructure enables you to customize your backup schedules with user defined policies. These are
backup policies that you define the backup frequency and retention period for. There are two parts to user defined
backup policies, the backup policy itself, and then one or more schedules in the policy.
To get started with user defined backup policies, you need to first create the backup policy, see To create a user
defined backup policy on page 556. After this step, you have an empty backup policy, so the next step is to define
and add schedules to the policy.

Schedules
Schedules define the backup frequency and retention period for a user defined backup policy, just like Oracle defined
backup policies. The difference is that you can customize the schedules associated with user defined policies. This
gives you control over the backup frequency and retention period.
When defining a schedule for a user defined backup policy, the first thing you configure is the schedule type, which
specifies the backup frequency. Oracle Cloud Infrastructure provides the following schedule types:

• Daily: Backups are generated daily. You specify the hour of the day for the backup.
• Weekly: Backups are generated weekly. You specify the day of the week, and the hour of that day for the backup.
• Monthly: Backups are generated monthly. You specify the day of the month, and the hour of that day for the
backup.
• Yearly: Backups are generated yearly. You specify the month, the day of that month, and the hour of that day for
the backup.
In addition to frequency, you also configure the following:
• Retention time: The amount of time to keep the backup, in days, weeks, months, or years. The time period is
based on the schedule type.
• Backup type: Options are full or incremental; see Volume Backup Types on page 545 for more information.
• Timezone: The time zone to use for the backup schedule. Options are UTC or the regional data center time zone.
For more information, see To add a schedule to a user defined backup policy on page 556.
You can also edit or remove schedules for a user defined policy at any time, see To edit a schedule for a user defined
backup policy on page 557 and To delete a schedule for a user defined backup policy on page 557.
Duplicating Existing Backup Policies
You can create a new backup policy by duplicating any of the existing backup policies.
If one of the Oracle defined policies is close to meeting your volume backup requirements, but with some changes,
you can create a new backup policy by duplicating the Oracle defined policy. This creates a new user defined backup
policy with schedules already assigned, enabling you to use the Oracle defined policy's settings as a starting point to
save time and simplify the process.
You can also duplicate an existing user defined policy. For more information, see To duplicate a backup policy on
page 557. You can then add, edit, or delete schedules for the new backup policy.
Scheduling Volume Backup Copies Across Regions
The Block Volume service enables you to copy volume backups from one region to another for business continuity
and disaster recovery scenarios, for more information, see Copying Block Volume Backups Across Regions on page
546. With user-defined policies, you can automate this process, so that volume backups are copied to another
region on a schedule. Enabling the automatic copying of scheduled volume backups is only supported with user-
defined policies, so if you need to use this feature for a volume currently configured with an Oracle defined policy,
you need to duplicate the policy and then enable cross region copy. The volume backup copy in the target region has
the same retention period as the volume backup in the source region.
Caution:

Vault encryption keys for volumes are not copied to the destination region for
scheduled volume and volume group backups enabled for cross region copy.
For more information, see Vault encryption keys not copied to destination
region for scheduled cross region backup copies.
Note:

It may take up to 24 hours for daily scheduled volume backups to be copied
to the target region. You can verify that the volume backup was copied by
switching to the target region and checking the list of volume backups for
that region. If the volume backup has not been copied yet, you can perform
a manual copy of that volume backup to the target region using the steps
described in Copying a Volume Backup Between Regions on page 562.
Cost
Once this feature is enabled, your bill will include charges for storing volume backups in both the source region and
the destination region. You may also see an increase in network costs. For pricing details, see Oracle Storage Cloud
Pricing. The Object Storage price applies to backup storage, and the Outbound Data Transfer price applies to the
network costs of cross-region backup copies.

Region Pairs
When you enable cross region copy for a backup policy, the default target region is based on the source region for the
backup.
Note:

You can change the default target region using the CLI, see Configuring a
Custom Target Region on page 553. This capability is not available in the
Console.
The following table lists the source and target region pairs for cross region copy. To copy a volume backup to a
region not paired with the backup's source region, you must manually copy the volume backup, see Copying a
Volume Backup Between Regions on page 562.

Source Region Target Region


US West (Phoenix) US East (Ashburn)
US East (Ashburn) US West (Phoenix)
US West (San Jose) US West (Phoenix)
Brazil East (Sao Paulo) US West (Phoenix)
Chile (Santiago) Brazil East (Sao Paulo)
Canada Southeast (Toronto) Canada Southeast (Montreal)
Canada Southeast (Montreal) Canada Southeast (Toronto)
Japan Central (Osaka) Japan East (Tokyo)
Japan East (Tokyo) Japan Central (Osaka)
India South (Hyderabad) India West (Mumbai)
India West (Mumbai) India South (Hyderabad)
Germany Central (Frankfurt) Switzerland North (Zurich)
UK South (London) Germany Central (Frankfurt)
UK West (Newport) UK South (London)
Australia East (Sydney) Australia Southeast (Melbourne)
Australia Southeast (Melbourne) Australia East (Sydney)
Netherlands Northwest (Amsterdam) Germany Central (Frankfurt)
Saudi Arabia West (Jeddah) Germany Central (Frankfurt)
South Korea Central (Seoul) South Korea North (Chuncheon)
South Korea North (Chuncheon) South Korea Central (Seoul)

Configuring a Custom Target Region


You can change the default target region for the scheduled cross region backup copy feature using the CLI. You can
do this when you create a new backup policy or you can update the target region for an existing backup policy. When
you create a new policy, specify the custom target region using the destination-region parameter. If you're updating
an existing policy, you need to ensure that you include all the existing backup policy settings that you want to keep
when you update the policy with the custom target region. To do this:
1. Use the get operation to retrieve the existing policy settings.
2. Save those settings.

3. Use the settings you saved from the previous step in the update operation, along with the custom target region
specified in the destination-region parameter.
For information about using the CLI, see Command Line Interface (CLI) on page 4228.
Note:

This capability is not available in the Console.


To create a volume backup policy with a custom target region:
Open a command prompt and run:

oci bv volume-backup-policy create --compartment-id <compartment_ID> --display-name <display_name> --destination-region <region_ID> --schedules file://<path>/<scheduleJSON>.json

For example:

oci bv volume-backup-policy create --compartment-id ocid1.compartment.oc1..<unique_ID> --display-name MyPolicyName --destination-region us-sanjose-1 --schedules file://~/input.json

When you update an existing policy, you need to specify all the policy's settings in the update operation, including
the existing settings that you want to keep. To ensure that you include all the settings, you should first retrieve those
settings using the get operation.
To get the volume backup policy:
Open a command prompt and run:

oci bv volume-backup-policy get --policy-id <backup_policy_ID>

For example:

oci bv volume-backup-policy get --policy-id ocid1.volumebackuppolicy.oc1..<unique_ID>

Save these settings and then use them when you call the update operation.
To update the volume backup policy with a custom target region:
Open a command prompt and run:

oci bv volume-backup-policy update --policy-id <backup_policy_ID> --display-name <display_name> --destination-region <region_ID> --schedules file://<path>/<scheduleJSON>.json

For example:

oci bv volume-backup-policy update --policy-id ocid1.volumebackuppolicy.oc1..<unique_ID> --display-name MyPolicyName --destination-region us-sanjose-1 --schedules file://~/input.json

Oracle Defined Backup Policies


There are three Oracle defined backup policies, Bronze, Silver, and Gold. Each backup policy is comprised of
schedules with a set backup frequency and a retention period that you cannot modify. If the backup policy settings for
Oracle defined policies don't meet your requirements, you should use User Defined Backup Policies on page 551
instead. With user defined backup policies you define and control the schedules. You can also enable the automatic
copying of volume backups to a second region, which is not supported with Oracle defined policies.

Note:

Oracle defined backup policies are not supported for scheduled volume group
backups.
Caution:

Full Backups and Oracle Defined Policies


Starting November 3, 2021, Oracle defined policies will no longer include
full backups. See Full backups removed from Oracle defined backup policies.

Bronze Policy
The bronze policy includes monthly incremental backups, run on the first day of the month. These backups are
retained for twelve months. This policy also includes a full backup, run yearly during the first part of January. This
backup is retained for five years.

Silver Policy
The silver policy includes weekly incremental backups that run on Sunday. These backups are retained for four
weeks. This policy also includes monthly incremental backups, which run on the first day of the month and are retained
for twelve months, as well as a full backup, run yearly during the first part of January and retained for five
years.

Gold Policy
The gold policy includes daily incremental backups, retained for seven days, along with weekly incremental backups,
run on Sunday and retained for four weeks. It also includes monthly incremental backups, run on the first day of the
month and retained for twelve months, and a full backup, run yearly during the first part of January and retained for
five years.
Working with Backup Policies
There are two types of tasks when working with backup policies:
• Creating and Configuring User Defined Backup Policies on page 556
• Managing Backup Policy Assignments to Volumes on page 559
The linked sections listed above provide information for working with backup policies using the Console, CLI, and
REST APIs.

Required IAM Policy


To use Oracle Cloud Infrastructure, you must be granted security access in a policy by an administrator. This access
is required whether you're using the Console or the REST API with an SDK, CLI, or other tool. If you get a message
that you don’t have permission or are unauthorized, verify with your administrator what type of access you have and
which compartment you should work in.
Important:

To view or work with backup policies, you need access to the root
compartment, which is where the predefined backup policies are located.
For administrators: The policy in Let volume admins manage block volumes, backups, and volume groups on page
2154 lets the specified group do everything with block volumes and backups. The policy in Let volume backup
admins manage only backups on page 2154 further restricts access to just creating and managing backups.

Tip:

When users create a backup from a volume or restore a volume from a backup, the volume and backup don't have to
be in the same compartment. However, users must have access to both compartments.
If you're new to policies, see Getting Started with Policies on page 2143 and Common Policies on page 2150. For
reference material about writing policies for instances, cloud networks, or other Core Services API resources, see
Details for the Core Services on page 2192.

Tagging Resources
You can apply tags to your resources to help you organize them according to your business needs. You can update the
resource later with the desired tags. For general information about applying tags, see Resource Tags on page 213.
Creating and Configuring User Defined Backup Policies

Using the Console


You can use the Console to create and update user defined backup policies.
To create a user defined backup policy
1. Open the navigation menu. Under Core Infrastructure, go to Block Storage and click Backup Policies.
2. Click Create Backup Policy.
3. Specify a name for the backup policy. Avoid entering confidential information.
4. Select the compartment to create the backup policy in.
Although you select a compartment for the backup policy, the policy is accessible across your tenancy.
5. Optionally, you can enable cross region copy to the specified region. This automates the copying of the volume
backup to a second region after each backup is created. The target region specified for the backup copy is based
on the region pair for the policy's source region and cannot be changed. For more information, see Region Pairs
on page 553.
6. Click Create Backup Policy to create the backup policy.
To add a schedule to a user defined backup policy
1. Open the navigation menu. Under Core Infrastructure, go to Block Storage and click Backup Policies.
2. Click the backup policy you want to add the schedule to.
3. Click Add Schedule.
4. Specify the backup frequency by selecting from the Schedule Type options: Daily, Weekly, Monthly, or Yearly,
and then configure the additional schedule options. Depending on the schedule type, the additional schedule
options will include one or more of the following:
• Hour of the day
• Day of the week
• Day of the month
• Month of the year
5. Specify the Retention Time, which will be in days, weeks, months, or years, depending on the schedule type you
selected in the previous step.
6. Select Full or Incremental for Backup Type.
7. Select the Timezone to base the schedule settings on, either UTC or Regional Data Center Time.
8. Click Add Schedule.
To enable cross region copy for a user defined backup policy
1. Open the navigation menu. Under Core Infrastructure, go to Block Storage and click Backup Policies.
2. Click the backup policy that you want to enable cross region copy for.
3. On the details page, for Cross Region Copy Target, click Enable.

4. Click Enable in the confirmation dialog.


To disable cross region copy for a user defined backup policy
1. Open the navigation menu. Under Core Infrastructure, go to Block Storage and click Backup Policies.
2. Click the backup policy that you want to disable cross region copy for.
3. On the details page, for Cross Region Copy Target, click Disable.
4. Click Disable in the confirmation dialog.
To duplicate a backup policy
1. Open the navigation menu. Under Core Infrastructure, go to Block Storage and click Backup Policies.
2. Click the backup policy that you want to duplicate. Both Oracle defined and user defined backup policies can be
duplicated.
3. Click Duplicate.
4. Specify a name for the policy. Avoid entering confidential information.
5. Select the compartment to create the backup policy in. It does not need to be the same compartment as the backup
policy you are duplicating.
6. Optionally, you can enable cross region copy to the specified region. This automates the copying of the volume
backup to a second region after each backup is created. The default target region specified for the backup copy is
based on the region pair for the policy's source region. For more information, see Region Pairs on page 553.
7. Click Duplicate Backup Policy.
To edit a schedule for a user defined backup policy
1. Open the navigation menu. Under Core Infrastructure, go to Block Storage and click Backup Policies.
2. Click the backup policy that you want to edit a schedule for.
3. In Schedules, for the schedule you want to edit, click the Actions icon (three dots), and then click Edit.
4. After making your changes to the schedule, click Update.
To delete a schedule for a user defined backup policy
1. Open the navigation menu. Under Core Infrastructure, go to Block Storage and click Backup Policies.
2. Click the user defined backup policy that you want to delete a schedule for.
3. In Schedules, for the schedule you want to delete, click the Actions icon (three dots), and then click Delete.
4. Click Delete in the confirmation dialog.
To delete a user defined backup policy
1. Open the navigation menu. Under Core Infrastructure, go to Block Storage and click Backup Policies.
2. Click the user defined backup policy you want to delete.
3. Click Delete.
4. Enter the name of the backup policy and click Delete.
Using the CLI
For information about using the CLI, see Command Line Interface (CLI) on page 4228.
Use the following operations to work with backup policies:
To create a user defined backup policy
Open a command prompt and run:

oci bv volume-backup-policy create --compartment-id <compartment_ID> --schedules file://<path>/<scheduleJSON>.json

For example:

oci bv volume-backup-policy create --compartment-id ocid1.compartment.oc1..<unique_ID> --schedules file://~/input.json
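
The value passed with --schedules is a JSON array of schedule objects. As a rough sketch only, a file defining a single weekly incremental schedule (Sunday at 02:00 UTC, retained for four weeks) might look like the following. The field names are based on the VolumeBackupSchedule API model and the values are illustrative; verify the exact format expected by your CLI version against the CLI help or the API reference.

[
  {
    "backupType": "INCREMENTAL",
    "period": "ONE_WEEK",
    "dayOfWeek": "SUNDAY",
    "hourOfDay": 2,
    "timeZone": "UTC",
    "retentionSeconds": 2419200
  }
]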

To list the backup policies in a specified compartment


Open a command prompt and run:

oci bv volume-backup-policy list --compartment-id <compartment_ID>

For example:

oci bv volume-backup-policy list --compartment-id ocid1.compartment.oc1..<unique_ID>

To retrieve a specific backup policy


Open a command prompt and run:

oci bv volume-backup-policy get --backup-policy-id <backup-policy-ID>

For example:

oci bv volume-backup-policy get --backup-policy-id ocid1.volumebackuppolicy.oc1.phx.<unique_ID>

To update the display name for a user defined backup policy


Open a command prompt and run:

oci bv volume-backup-policy update --backup-policy-id <backup-policy_ID> --display-name <backup-policy_name>

For example:

oci bv volume-backup-policy update --backup-policy-id ocid1.volumebackuppolicy.oc1.phx.<unique_ID> --display-name "new display name"

To update the schedules for a user defined backup policy


Open a command prompt and run:

oci bv volume-backup-policy update --backup-policy-id <backup-policy_ID> --schedules file://<path>/<scheduleJSON>.json

For example:

oci bv volume-backup-policy update --backup-policy-id ocid1.volumebackuppolicy.oc1.phx.<unique_ID> --schedules file://~/input.json

To delete a user defined backup policy


Open a command prompt and run:

oci bv volume-backup-policy delete --backup-policy-id <backup-policy_ID>

You can only delete a user defined backup policy if it is not assigned to any volumes. You cannot delete Oracle
defined backup policies.
For example:

oci bv volume-backup-policy delete --backup-policy-id ocid1.volumebackuppolicy.oc1.phx.<unique_ID>

Using the API


Use the following operations to work with backup policies:
• CreateVolumeBackupPolicy
• DeleteVolumeBackupPolicy
• UpdateVolumeBackupPolicy
• ListVolumeBackupPolicies
• GetVolumeBackupPolicy
For information about using the API and signing requests, see REST APIs on page 4409 and Security Credentials on
page 181. For information about SDKs, see Software Development Kits and Command Line Interface on page 4262.
For more information about backups, see Overview of Block Volume Backups on page 544 and Restoring a
Backup to a New Volume on page 561.
Managing Backup Policy Assignments to Volumes
If a volume is part of a volume group with a backup policy assignment, the backup policy assignment is managed
by the volume group. In this scenario, to update the backup policy assigned you must change the assignment for the
volume group or remove the volume from the group.
To check whether a volume's backup policy assignment is managed by a volume group
1. Open the navigation menu. Under Core Infrastructure, go to Block Storage and click Block Volumes.
2. Click the volume that you want to check.
3. On the Block Volume Information tab, in Scheduled Backups, check the Managed By field.
Using the Console
You can use the Console to assign, change, or remove both user defined and Oracle defined backup policies for
existing volumes.
To assign a backup policy to a volume
1. Open the navigation menu. Under Core Infrastructure, go to Block Storage and click Block Volumes.
2. Click the volume that you want to assign a backup policy to.
3. On the Block Volume Information tab, click Edit.
4. In the BACKUP POLICIES section, select the compartment containing the backup policies.
5. Select the appropriate backup policy for your requirements.
6. Click Save Changes.
To change a backup policy assigned to a volume
1. Open the navigation menu. Under Core Infrastructure, go to Block Storage and click Block Volumes.
2. Click the volume that you want to change the backup policy for.
3. On the Block Volume Information tab, click Edit.
4. In the BACKUP POLICIES section, select the compartment containing the backup policy.
5. Select the backup policy you want to switch to.
6. Click Save Changes.
To remove a backup policy assigned to a volume
1. Open the navigation menu. Under Core Infrastructure, go to Block Storage and click Block Volumes.
2. Click the volume that you want to remove the backup policy from.
3. On the Block Volume Information tab, click Edit.
4. In the BACKUP POLICIES section, select None from the list, and then click Save Changes.
Using the CLI
For information about using the CLI, see Command Line Interface (CLI) on page 4228.
Use the following operations to work with volume backup policy assignments to volumes:

To assign a backup policy to a volume


Open a command prompt and run:

oci bv volume-backup-policy-assignment create --asset-id <volume_ID> --policy-id <policy_ID>

For example:

oci bv volume-backup-policy-assignment create --asset-id ocid1.volume.oc1..<unique_ID> --policy-id ocid1.volumebackuppolicy.oc1..<unique_ID>

To get the backup policy assigned to a volume


Open a command prompt and run:

oci bv volume-backup-policy-assignment get-volume-backup-policy-asset-assignment --asset-id <volume_ID>

For example:

oci bv volume-backup-policy-assignment get-volume-backup-policy-asset-assignment --asset-id ocid1.volume.oc1..<unique_ID>

To retrieve a specific backup policy assignment


Open a command prompt and run:

oci bv volume-backup-policy-assignment get --policy-assignment-id <policy_assignment_OCID>

For example:

oci bv volume-backup-policy-assignment get --policy-assignment-id ocid1.volumebackuppolicyassignment.oc1.phx.<unique_ID>

To delete a backup policy assignment


Open a command prompt and run:

oci bv volume-backup-policy-assignment delete --policy-assignment-id <policy_assignment_OCID>

Deleting the assignment removes the backup policy from the volume; it does not delete the backup policy itself.
For example:

oci bv volume-backup-policy-assignment delete --policy-assignment-id ocid1.volumebackuppolicyassignment.oc1.phx.<unique_ID>

Using the API


Use the following operations to manage backup policy assignments to volumes:
• CreateVolumeBackupPolicyAssignment
• DeleteVolumeBackupPolicyAssignment
• GetVolumeBackupPolicyAssetAssignment
• GetVolumeBackupPolicyAssignment

For information about using the API and signing requests, see REST APIs on page 4409 and Security Credentials on
page 181. For information about SDKs, see Software Development Kits and Command Line Interface on page 4262.
For more information about backups, see Overview of Block Volume Backups on page 544 and Restoring a
Backup to a New Volume on page 561.

Restoring a Backup to a New Volume


You can restore a backup of a volume as a new volume using Block Volume.
You can restore a volume from any of your incremental or full volume backups. Both backup types enable you
to restore the full volume contents to the point-in-time snapshot of the volume when the backup was taken. You
don't need to keep the initial full backup or subsequent incremental backups in the backup chain and restore them in
sequence; you only need to keep the backups taken for the times you care about. See Volume Backup Types on page
545 for information about full and incremental backup types.
Required IAM Policy
To use Oracle Cloud Infrastructure, you must be granted security access in a policy by an administrator. This access
is required whether you're using the Console or the REST API with an SDK, CLI, or other tool. If you get a message
that you don’t have permission or are unauthorized, verify with your administrator what type of access you have and
which compartment you should work in.
For administrators: The policy in Let volume admins manage block volumes, backups, and volume groups on page
2154 lets the specified group do everything with block volumes and backups.
Tip:

When users create a backup from a volume or restore a volume from a backup, the volume and backup don't have to
be in the same compartment. However, users must have access to both compartments.
If you're new to policies, see Getting Started with Policies on page 2143 and Common Policies on page 2150. For
reference material about writing policies for instances, cloud networks, or other Core Services API resources, see
Details for the Core Services on page 2192.
Using the Console
1. Open the navigation menu. Under Block Storage, click Block Volume Backups.
A list of the block volume backups in the compartment you're viewing is displayed. If you don’t see the one you're
looking for, make sure you’re viewing the correct compartment (select from the list on the left side of the page).
2. Click the Actions icon (three dots) for the block volume backup you want to restore.
3. Click Create Block Volume.
4. Enter a name for the block volume and choose the availability domain in which you want to restore it. Avoid
entering confidential information.
5. You can restore a block volume backup to a larger volume size. To do this, check Custom Block Volume Size
(GB), and then specify the new size. You can only increase the size of the volume; you cannot decrease the size.
If you restore the block volume backup to a larger size volume, you need to extend the volume's partition, see
Extending the Partition for a Block Volume on page 540 for more information.
6. Optionally, you can select the appropriate backup policy for your requirements. See Policy-Based Backups on
page 551 for more information about backup policies.
7. Optionally, you can encrypt the data in this volume using your own Vault encryption key. To use Vault for your
encryption needs, select the Encrypt using Vault check box. Then, select the Vault compartment and Vault that
contain the master encryption key you want to use. Also select the Master encryption key compartment and
Master encryption key. For more information about encryption, see Overview of Vault on page 3988.
8. If you have permissions to create a resource, then you also have permissions to apply free-form tags to that
resource. To apply a defined tag, you must have permissions to use the tag namespace. For more information
about tagging, see Resource Tags on page 213. If you are not sure whether to apply tags, then skip this option (you
can apply tags later) or ask your administrator.

9. Click Create.
The volume will be ready to attach once its icon no longer lists it as PROVISIONING in the volume list. For
more information, see Attaching a Volume on page 521.
Caution:

If you want to attach a restored volume that has the original volume
attached, be aware that some operating systems do not allow you to restore
identical volumes. To resolve this, you should change the partition IDs
before restoring the volume. How to change an operating system's partition
ID varies by operating system; for instructions, see your operating system's
documentation.
Using the API
The API used to restore a backup is CreateVolume. The API has an optional volumeBackupId parameter that you
can use to define the backup from which the data should be restored on the newly created volume. For details, see
CreateVolumeDetails Reference.
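
As an illustration, a CreateVolume request body that restores from a backup might look like the following. The field names come from CreateVolumeDetails; the OCIDs, display name, and size are placeholders, and the size, if specified, must be at least the size of the volume the backup was taken from.

{
  "availabilityDomain": "<availability_domain>",
  "compartmentId": "ocid1.compartment.oc1..<unique_ID>",
  "displayName": "restored volume display name",
  "volumeBackupId": "ocid1.volumebackup.oc1..<unique_ID>",
  "sizeInGBs": 1024
}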
For information about using the API and signing requests, see REST APIs on page 4409 and Security Credentials on
page 181. For information about SDKs, see Software Development Kits and Command Line Interface on page 4262.
For more information about backups, see Overview of Block Volume Backups on page 544 and Backing Up a
Volume on page 550.

Copying a Volume Backup Between Regions


You can copy volume backups from one region to another region using the Oracle Cloud Infrastructure Block
Volume service. For more information, see Copying Block Volume Backups Across Regions on page 546. You
can also enable scheduled cross-region automated backups with user defined policies, see Scheduling Volume Backup
Copies Across Regions on page 552.
Note:

When copying block volume backups across regions in your tenancy, you
can copy up to five concurrent backups per tenancy at a time from a specific
source region.
Volume Backup Type Considerations
When volume backups are copied to another region, the volume backup type in the destination region will always
match the source volume backup type, except in certain scenarios involving incremental backups.
Incremental backups will be copied as full volume backups in the following scenarios:
• When the volume backup being copied is the first volume backup taken after a volume has been resized. This
applies to volume backups copied on a schedule and volume backups copied manually.
• Volume backups that were the result of a cross region copy, if they are then copied back to their source region.
This applies to volume backups copied on a schedule and volume backups copied manually.
• When the volume backup is being copied to a destination region where the previous incremental backup copy is
not in the AVAILABLE state. This applies to volume backups copied on a schedule and volume backups copied
manually.
• When the volume backup is copied out of order. For example, in the scenario where you have incremental volume
backups #1 through #5, and you copy volume backup #3 and then volume backup #1, the volume backups may
be copied as full backups to the destination region. This only applies to volume backups that are copied manually.
This does not apply to volume backups created and copied using backup policies, as scheduled volume backups
are always copied in sequential order.
Required IAM Policy
To use Oracle Cloud Infrastructure, you must be granted security access in a policy by an administrator. This access
is required whether you're using the Console or the REST API with an SDK, CLI, or other tool. If you get a message
that you don’t have permission or are unauthorized, verify with your administrator what type of access you have and
which compartment you should work in.
For administrators: The first two statements listed in the Let volume admins manage block volumes, backups, and
volume groups on page 2154 policy let the specified group do everything with block volumes and backups with
the exception of copying volume backups across regions. The aggregate resource type volume-family does not
include the VOLUME_BACKUP_COPY permission, so to enable copying volume backups across regions you need to
ensure that you include the third statement in that policy, which is:

Allow group VolumeAdmins to use volume-backups in tenancy where request.permission='VOLUME_BACKUP_COPY'

To restrict access to just creating and managing volume backups, including copying volume backups between regions,
use the policy in Let boot volume backup admins manage only backups on page 2155. The individual resource type
volume-backups includes the VOLUME_BACKUP_COPY permission, so you do not need to specify it explicitly in
this policy.
If you are copying volume backups encrypted using Vault between regions or you want the copied volume backup to
use Vault for encryption in the destination region, you need to use a policy that allows the Block Volume service to
perform cryptographic operations with keys in the destination region. For a sample policy showing this, see Let Block
Volume, Object Storage, File Storage, Container Engine for Kubernetes, and Streaming services encrypt and decrypt
volumes, volume backups, buckets, file systems, Kubernetes secrets, and stream pools on page 2161.

Restricting Access
The specific permissions needed to copy volume backups across regions are:
• Source region: VOLUME_BACKUP_READ, VOLUME_BACKUP_COPY
• Destination region: VOLUME_BACKUP_CREATE

Sample Policies
To restrict a group to specific source and destination regions for copying volume backups
In this example, the group is restricted to copying volume backups from the UK South (London) region to the
Germany Central (Frankfurt) region.

Allow group MyTestGroup to read volume-backups in tenancy where all {request.region='lhr'}
Allow group MyTestGroup to use volume-backups in tenancy where all {request.permission='VOLUME_BACKUP_COPY', request.region='lhr', target.region='fra'}
Allow group MyTestGroup to manage volume-backups in tenancy where all {request.permission='VOLUME_BACKUP_CREATE', request.region='fra'}

To restrict some source regions to specific destination regions while enabling all destination regions for
other source regions
In this example, the following is enabled for the group:
• Manage volume backups in all regions.
• Copy volume backups from the US West (Phoenix) and US East (Ashburn) regions to any destination regions.
• Copy volume backups from the Germany Central (Frankfurt) and UK South (London) regions only to the
Germany Central (Frankfurt) or UK South (London) regions.

Allow group MyTestGroup to read volume-backups in tenancy where all {request.region='lhr'}
Allow group MyTestGroup to manage volume-backups in tenancy where any {request.permission!='VOLUME_BACKUP_COPY'}
Allow group MyTestGroup to use volume-backups in tenancy where all {request.permission='VOLUME_BACKUP_COPY', any {request.region='lhr', request.region='fra'}, any {target.region='fra', target.region='lhr'}}
Allow group MyTestGroup to use volume-backups in tenancy where all {request.permission='VOLUME_BACKUP_COPY', any {request.region='phx', request.region='iad'}}

If you're new to policies, see Getting Started with Policies on page 2143 and Common Policies on page 2150. For
reference material about writing policies for instances, cloud networks, or other Core Services API resources, see
Details for the Core Services on page 2192.
Using the Console
1. Open the navigation menu. Under Block Storage, click Block Volume Backups.
A list of the block volume backups in the compartment you're viewing is displayed. If you don’t see the one you're
looking for, make sure you’re viewing the correct compartment (select from the list on the left side of the page).
2. Click the Actions icon (three dots) for the block volume backup you want to copy to another region.
3. Click Copy to Another Region.
4. Enter a name for the backup and choose the region to copy the backup to. Avoid entering confidential information.
5. In the Encryption section, select whether you want the volume backup to use the Oracle-provided encryption key
or your own Vault encryption key. If you select the option to use your own key, paste the OCID for the encryption
key from the destination region.
6. Click Copy Block Volume Backup.
7. Confirm that the source and destination region details are correct in the confirmation dialog and then click OK.
Using the API
To copy a volume backup to another region, use the following operation:
• CopyVolumeBackup
For information about using the API and signing requests, see REST APIs on page 4409 and Security Credentials on
page 181. For information about SDKs, see Software Development Kits and Command Line Interface on page 4262.
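
The copy operation is also exposed through the CLI. The following is a hedged sketch only, assuming the oci bv backup copy command form that corresponds to CopyVolumeBackup; verify the exact command and parameters with the CLI help for your version:

oci bv backup copy --volume-backup-id <volume_backup_OCID> --destination-region <destination_region_ID>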
Next Steps
After copying the block volume backup, switch to the destination region in the Console and verify that the copied
backup appears in the list of block volume backups for that region. You can then restore the backup by creating a new
block volume from it using the steps in Restoring a Backup to a New Volume on page 561.
For more information about backups, see Overview of Block Volume Backups on page 544.

Cloning a Volume
You can create a clone from a volume using the Block Volume service. Cloning enables you to make a copy of an
existing block volume without needing to go through the backup and restore process.
A cloned volume is a point-in-time direct disk-to-disk deep copy of the source volume, so all the data that is in the
source volume when the clone is created is copied to the clone volume. Any subsequent changes to the data on the
source volume are not copied to the clone. Since the clone is a copy of the source volume it will be the same size as
the source volume unless you specify a larger volume size when you create the clone.
The clone operation occurs immediately, and you can attach and use the cloned volume as a regular volume as soon
as the state changes to available. At this point, the volume data is still being copied in the background; the copy can
take up to thirty minutes, depending on the size of the volume.
There is a single point-in-time reference for a source volume while it is being cloned, so if the source volume is
attached when a clone is created, you need to wait for the first clone operation to complete from the source volume
before creating additional clones. If the source volume is detached, you can create up to ten clones from the same
source volume simultaneously.
You can only create a clone for a volume within the same region, availability domain and tenant. You can create a
clone for a volume between compartments as long as you have the required access permissions for the operation.
For more information about the Block Volume service and cloned volumes, see the Block Volume FAQ.

Differences Between Block Volume Clones and Backups


Consider the following criteria when you decide whether to create a backup or a clone of a volume.

Description
  Volume backup: Creates a point-in-time backup of data on a volume. You can restore multiple new volumes from the backup later.
  Volume clone: Creates a single point-in-time copy of a volume without having to go through the backup and restore process.
Use case
  Volume backup: Retain a backup of the data in a volume, so that you can duplicate an environment later or preserve the data for future use. Meet compliance and regulatory requirements, because the data in a backup remains unchanged over time. Support business continuity requirements. Reduce the risk of outages or data mutation over time.
  Volume clone: Rapidly duplicate an existing environment. For example, you can use a clone to test configuration changes without impacting your production environment.
Speed
  Volume backup: Slower (minutes or hours).
  Volume clone: Faster (seconds).
Cost
  Volume backup: Lower cost.
  Volume clone: Higher cost.
Storage location
  Volume backup: Object Storage.
  Volume clone: Block Volume.
Retention policy
  Volume backup: Policy-based backups expire; manual backups do not expire.
  Volume clone: No expiration.
Volume groups
  Volume backup: Supported. You can back up a volume group.
  Volume clone: Supported. You can clone a volume group.

For more information about block volume backups, see Overview of Block Volume Backups on page 544 and
Backing Up a Volume on page 550.

Using the Console


1. Open the navigation menu. Under Core Infrastructure, go to Block Storage and click Block Volumes.
2. In the Block Volumes list, click the volume that you want to clone.
3. In Resources, click Clones.
4. Click Create Clone.
5. Specify a name for the clone. Avoid entering confidential information.
6. If you want to clone the block volume to a larger size volume, check Custom Block Volume Size (GB) and then specify the new size. You can only increase the size of the volume; you cannot decrease it. If you clone the block volume to a larger size volume, you need to extend the volume's partition. See Extending the Partition for a Block Volume on page 540 for more information.
7. If you want to change the elastic performance setting when cloning the volume, check Custom Block Volume Performance and select the elastic performance setting you want the volume clone to use. See Block Volume Elastic Performance on page 585 for more information; you can also change the elastic performance setting after you have cloned the volume. If you leave Custom Block Volume Performance unchecked, the cloned volume uses the same elastic performance setting as the source volume.
8. Click Create Clone.

The volume is ready to use when its icon is listed as AVAILABLE in the volume list. At this point, you can perform various actions on the volume, such as creating a clone from it, attaching it to an instance, or deleting it.

Using the API


For information about using the API and signing requests, see REST APIs on page 4409 and Security Credentials on
page 181. For information about SDKs, see Software Development Kits and Command Line Interface on page 4262.
To create a clone from a volume, use the CreateVolume operation and specify VolumeSourceFromVolumeDetails for
CreateVolumeDetails.
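As a rough CLI equivalent, the following sketch creates a clone by passing a source volume to the volume create command; it assumes the CLI exposes this as a --source-volume-id parameter, and the OCIDs and availability domain are placeholders:

oci bv volume create --compartment-id <compartment_OCID> --availability-domain <availability_domain> --source-volume-id <source_volume_OCID> --display-name MyClonedVolume

Remember that the clone must be created in the same availability domain as the source volume.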

Disconnecting From a Volume


For volumes attached with iSCSI (see iSCSI on page 505) as the volume attachment type, you need to disconnect the volume from an instance before you detach it. For more information about attachment type options, see Volume Attachment Types on page 505.

Required IAM Policy


Disconnecting a volume from an instance does not require a specific IAM policy. Don't confuse this with detaching a
volume (see Detaching a Volume on page 567).
Disconnecting from a Volume on a Linux Instance
Caution:

We recommend that you unmount and disconnect the volume from the
instance using iscsiadm before you detach the volume. Failure to do so
may lead to loss of data.
1. Log on to your instance's guest OS and unmount the volume.
2. Run the following command to disconnect the instance from the volume (a filled-in example follows these steps):

iscsiadm -m node -T <IQN> -p <iSCSI IP ADDRESS>:<iSCSI PORT> -u

A successful logout response resembles the following:

Logging out of session [sid: 2, target: iqn.2015-12.us.oracle.com:c6acda73-90b4-4bbb-9a75-faux09015418, portal: 169.254.0.2,3260]
Logout of [sid: 2, target: iqn.2015-12.us.oracle.com:c6acda73-90b4-4bbb-9a75-faux09015418, portal: 169.254.0.2,3260] successful.
3. You can now detach the volume without the risk of losing data.
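For reference, this is what the step 2 command looks like with the IQN and portal from the sample output above filled in; the values are illustrative, so substitute the IQN, IP address, and port reported for your own attachment:

iscsiadm -m node -T iqn.2015-12.us.oracle.com:c6acda73-90b4-4bbb-9a75-faux09015418 -p 169.254.0.2:3260 -u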
Disconnecting from a Volume on a Windows Instance
1. Use a Remote Desktop client to log on to your Windows instance, and then open Disk Management.
2. Right-click the volume you want to disconnect, and then click Offline.
3. Open iSCSI Initiator, select the target, and then click Disconnect.
4. Confirm the session termination. The status should show as Inactive.
5. In iSCSI Initiator, click the Favorite Targets tab, select the target you are disconnecting, and then click
Remove.
6. Click the Volumes and Devices tab, select the volume from the Volume List, and then click Remove.
7. You can now detach the volume without the risk of losing data.

Detaching a Volume
When an instance no longer needs access to a volume, you can detach the volume from the instance without affecting
the volume's data.

Required IAM Policy


To use Oracle Cloud Infrastructure, you must be granted security access in a policy by an administrator. This access
is required whether you're using the Console or the REST API with an SDK, CLI, or other tool. If you get a message
that you don’t have permission or are unauthorized, verify with your administrator what type of access you have and
which compartment you should work in.
For administrators: The policy in Let users launch compute instances on page 2151 includes the ability to attach/
detach existing block volumes. The policy in Let volume admins manage block volumes, backups, and volume groups
on page 2154 lets the specified group do everything with block volumes and backups, but not launch instances.
If you're new to policies, see Getting Started with Policies on page 2143 and Common Policies on page 2150. For
reference material about writing policies for instances, cloud networks, or other Core Services API resources, see
Details for the Core Services on page 2192.

Using the Console


Caution:

For volumes attached using iSCSI on page 505, we recommend that you
unmount and disconnect the volume from the instance using iscsiadm
before you detach the volume. Failure to do so may lead to loss of data. See
Disconnecting From a Volume on page 566 for more information.
1. Open the navigation menu. Under Core Infrastructure, go to Compute and click Instances.
2. In the Instance list locate the instance. Click its name to display the instance details.
3. In the Resources section on the Instance Details page, click Attached Block Volumes.
4. Click the Actions icon (three dots) next to the volume you want to detach, and then click Detach. Confirm when
prompted.

Using the API


To delete an attachment, use the following operation:
• DetachVolume
For information about using the API and signing requests, see REST APIs on page 4409 and Security Credentials on
page 181. For information about SDKs, see Software Development Kits and Command Line Interface on page 4262.
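If you are scripting the detach, the CLI provides an equivalent; the following sketch assumes the oci compute volume-attachment detach command, and note that it takes the volume attachment OCID (a placeholder here), not the volume OCID:

oci compute volume-attachment detach --volume-attachment-id <volume_attachment_OCID>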

Deleting a Volume
You can delete a volume that is no longer needed.
Caution:
• You cannot undo this operation. Any data on a volume will be permanently deleted once the volume is deleted.
• All policy-based backups will eventually expire, so if you want to keep a volume backup indefinitely, you need to create a manual backup. See Overview of Block Volume Backups on page 544 for information about policy-based and manual backups.

Required IAM Policy


To use Oracle Cloud Infrastructure, you must be granted security access in a policy by an administrator. This access is required whether you're using the Console or the REST API with an SDK, CLI, or other tool. If you get a message that you don’t have permission or are unauthorized, verify with your administrator what type of access you have and which compartment you should work in.
For administrators: The policy in Let volume admins manage block volumes, backups, and volume groups on page
2154 lets the specified group do everything with block volumes and backups.
If you're new to policies, see Getting Started with Policies on page 2143 and Common Policies on page 2150. For
reference material about writing policies for instances, cloud networks, or other Core Services API resources, see
Details for the Core Services on page 2192.

Using the Console


1. Open the navigation menu. Under Core Infrastructure, go to Block Storage and click Block Volumes.
2. In the Block Volumes list, find the volume you want to delete.
3. Click Terminate next to the volume you want to delete and confirm the selection when prompted.

Using the API


To delete a volume, use the following operation:
• DeleteVolume
For information about using the API and signing requests, see REST APIs on page 4409 and Security Credentials on
page 181. For information about SDKs, see Software Development Kits and Command Line Interface on page 4262.
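As a rough CLI equivalent, a minimal sketch (the volume OCID is a placeholder; the CLI normally asks for confirmation before deleting):

oci bv volume delete --volume-id <volume_OCID>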

Move Block Volume Resources Between Compartments


You can move Block Volume resources such as block volumes, boot volumes, volume backups, volume groups, and volume group backups from one compartment to another. When you move a Block Volume resource to a new compartment, associated resources are not moved. After you move the resource to the new compartment, the policies that apply to the new compartment take effect immediately and affect access to the resource through the Console. For more information, see Managing Compartments on page 2450.
Important:
When moving Block Volume resources between compartments, you need to ensure that the resource users have sufficient access permissions on the compartment the resource is being moved to.

Required IAM Policy


To use Oracle Cloud Infrastructure, you must be granted security access in a policy by an administrator. This access
is required whether you're using the Console or the REST API with an SDK, CLI, or other tool. If you get a message
that you don’t have permission or are unauthorized, verify with your administrator what type of access you have and
which compartment you should work in.
For administrators: The following policy allows users to move Block Volume resources to a different compartment:

Allow group BlockCompartmentMovers to manage volume-family in tenancy

If you're new to policies, see Getting Started with Policies on page 2143 and Common Policies on page 2150.

Security Zones
Security Zones ensure that your cloud resources comply with Oracle security principles. If any operation on a
resource in a security zone compartment violates a policy for that security zone, then the operation is denied.
The following security zone policies affect your ability to move Block Volume resources from one compartment to
another:
• You can't move a block volume or boot volume from a security zone to a standard compartment.

• You can't move a block volume or boot volume from a standard compartment to a compartment that is in a
security zone if the volume violates any security zone policies.

Using the Console


To move a block volume to a new compartment
1. Open the navigation menu. Under Core Infrastructure, go to Block Storage and click Block Volumes.
2. In the Scope section, select a compartment.
3. Find the block volume in the list, click the Actions icon (three dots), and then click Move Resource.
4. Choose the destination compartment from the list.
5. Click Move Resource.
To move a block volume backup to a new compartment
1. Open the navigation menu. Under Block Storage, click Block Volume Backups.
2. In the Scope section, select a compartment.
3. Find the block volume backup in the list, click the Actions icon (three dots), and then click Move Resource.
4. Choose the destination compartment from the list.
5. Click Move Resource.
To move a volume group to a new compartment
1. Open the navigation menu. Under Core Infrastructure, go to Block Storage and click Volume Groups.
2. In the Scope section, select a compartment.
3. Find the volume group in the list, click the Actions icon (three dots), and then click Move Resource.
4. Choose the destination compartment from the list.
5. Click Move Resource.
To move a volume group backup to a new compartment
1. Open the navigation menu. Under Core Infrastructure, go to Block Storage and click Volume Group Backups.
2. In the Scope section, select a compartment.
3. Find the volume group backup in the list, click the Actions icon (three dots), and then click Move Resource.
4. Choose the destination compartment from the list.
5. Click Move Resource.
To move a boot volume to a new compartment
1. Open the navigation menu. Under Core Infrastructure, go to Compute and click Boot Volumes.
2. In the Scope section, select a compartment.
3. Find the boot volume in the list, click the Actions icon (three dots), and then click Move Resource.
4. Choose the destination compartment from the list.
5. Click Move Resource.
To move a boot volume backup to a new compartment
1. Open the navigation menu. Under Core Infrastructure, go to Compute and click Boot Volume Backups.
2. In the Scope section, select a compartment.
3. Find the boot volume backup in the list, click the Actions icon (three dots), and then click Move Resource.
4. Choose the destination compartment from the list.
5. Click Move Resource.

Using the CLI


For information about using the CLI, see Command Line Interface (CLI) on page 4228.

To move a block volume to a new compartment


Open a command prompt and run:

oci bv volume change-volume-compartment --volume-id <volume_OCID> --compartment-id <destination_compartment_OCID>

To move a block volume backup to a new compartment


Open a command prompt and run:

oci bv volume-backup change-volume-backup-compartment --volume-backup-id <volume_backup_OCID> --compartment-id <destination_compartment_OCID>

To move a volume group to a new compartment


Open a command prompt and run:

oci bv volume-group change-volume-group-compartment --volume-group-id <volume_group_OCID> --compartment-id <destination_compartment_OCID>

To move a volume group backup to a new compartment


Open a command prompt and run:

oci bv volume-group-backup change-volume-group-backup-compartment --volume-group-backup-id <volume_group_backup_OCID> --compartment-id <destination_compartment_OCID>

To move a boot volume to a new compartment


Open a command prompt and run:

oci bv boot-volume change-boot-volume-compartment --boot-volume-id <boot_volume_OCID> --compartment-id <destination_compartment_OCID>

To move a boot volume backup to a new compartment


Open a command prompt and run:

oci bv boot-volume-backup change-boot-volume-backup-compartment --boot-volume-backup-id <boot_volume_backup_OCID> --compartment-id <destination_compartment_OCID>

Using the API


For information about using the API and signing requests, see REST APIs on page 4409 and Security Credentials on
page 181. For information about SDKs, see Software Development Kits and Command Line Interface on page 4262.
Use the following operations for moving Block Volume resources between compartments:
• ChangeVolumeCompartment
• ChangeVolumeBackupCompartment
• ChangeVolumeGroupCompartment
• ChangeVolumeGroupBackupCompartment
• ChangeBootVolumeCompartment
• ChangeBootVolumeBackupCompartment

Block Volume Performance


The content in the sections below applies to Category 7 and Section 2.7.1.8.1 (Oracle Cloud Infrastructure - Block Volume subsection) of the Oracle PaaS and IaaS Public Cloud Services Pillar documentation.
The Oracle Cloud Infrastructure Block Volume service lets you dynamically provision and manage block storage volumes. You can create, attach, connect, and move volumes as needed to meet your storage and application requirements. The Block Volume service uses NVMe-based storage infrastructure and is designed for consistency. You only need to provision the capacity you need; performance scales with the performance characteristics of the selected elastic performance option, up to the service maximums. See Block Volume Elastic Performance on page 585 for specific details about the elastic performance options.
The Block Volume service supports creating volumes sized from 50 GB to a maximum of 32 TB, in 1 GB increments. You can attach up to 32 block volumes to an instance, with a maximum of 1 PB of attached volumes per instance. The instance's boot volume does not count towards this limit.
Latency performance is independent of the instance shape or volume size, and is always sub-millisecond at the 95th percentile for the Balanced and Higher Performance elastic performance options. Per-instance performance is up to 700,000 IOPS; see Host Maximum on page 577.
Note:
The throughput and IOPS performance results described in this topic are for unformatted iSCSI-attached data volumes, attached to bare metal Compute instances. Performance results may be lower for other scenarios, such as Windows-formatted data volumes, volumes using paravirtualized attachments, and boot volumes. For more information, see Performance Limitations and Considerations on page 573.
Note:
You should perform benchmark analysis during proof of concept testing to verify that your environment's configuration will have adequate performance for your application requirements. For more information, see Metrics and Performance Testing on page 573.

Higher Performance
The Higher Performance elastic performance option is recommended for workloads with the highest I/O
requirements, requiring the best possible performance, such as large databases. This option provides the best linear
performance scale with 75 IOPS/GB up to a maximum of 35,000 IOPS per volume. Throughput also scales at the
highest rate at 600 KBPS/GB up to a maximum of 480 MBPS per volume.
The following table lists the Block Volume service's throughput and IOPS performance numbers based on volume size for this option. IOPS and KBPS performance scale linearly per GB of volume size up to the service maximums, so you can predictably calculate the performance numbers for a specific volume size. If you're trying to achieve certain performance targets for volumes configured to use the Higher Performance elastic performance option, you can use this table as a reference to determine the minimum volume size to provision.

Volume Size       Max Throughput       Max Throughput       Max IOPS
                  (1 MB block size)    (8 KB block size)    (4 KB block size)
50 GB             30 MB/s              30 MB/s              3750
100 GB            60 MB/s              60 MB/s              7500
200 GB            120 MB/s             96 MB/s              15,000
300 GB            180 MB/s             180 MB/s             22,500
400 GB            240 MB/s             240 MB/s             30,000
500 GB            300 MB/s             280 MB/s             35,000
800 GB - 32 TB    480 MB/s             280 MB/s             35,000
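For example, a 400 GB volume at this level delivers 400 GB x 75 IOPS/GB = 30,000 IOPS and 400 GB x 600 KBPS/GB = 240,000 KBPS (240 MB/s), which matches the 400 GB row in the table.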

Balanced Performance
The Balanced elastic performance option provides a good balance between performance and cost savings for most workloads, including those that perform random I/O, such as boot volumes. This option provides linear performance scaling with 60 IOPS/GB up to 25,000 IOPS per volume. Throughput scales at 480 KBPS/GB up to a maximum of 480 MBPS per volume.
The following table lists the Block Volume service's throughput and IOPS performance numbers based on volume size for this option. IOPS and KBPS performance scale linearly per GB of volume size up to the service maximums, so you can predictably calculate the performance numbers for a specific volume size. If you're trying to achieve certain performance targets for volumes configured to use the Balanced elastic performance option, you can use this table as a reference to determine the minimum volume size to provision.

Volume Size       Max Throughput       Max Throughput       Max IOPS
                  (1 MB block size)    (8 KB block size)    (4 KB block size)
50 GB             24 MB/s              24 MB/s              3000
100 GB            48 MB/s              48 MB/s              6000
200 GB            96 MB/s              96 MB/s              12,000
300 GB            144 MB/s             144 MB/s             18,000
400 GB            192 MB/s             192 MB/s             24,000
500 GB            240 MB/s             200 MB/s             25,000
750 GB            360 MB/s             200 MB/s             25,000
1 TB - 32 TB      480 MB/s             200 MB/s             25,000

Lower Cost
The Lower Cost elastic performance option is recommended for throughput-intensive workloads with large sequential I/O, such as streaming, log processing, and data warehouses. This option gives you linear scaling of 2 IOPS/GB up to a maximum of 3000 IOPS per volume.
The following table lists the Block Volume service's throughput and IOPS performance numbers based on volume size for this option. IOPS and KBPS performance scale linearly per GB of volume size up to the service maximums, so you can predictably calculate the performance numbers for a specific volume size. If you're trying to achieve certain performance targets for volumes configured to use the Lower Cost elastic performance option, you can use this table as a reference to determine the minimum volume size to provision.

Volume Size       Max Throughput       Max Throughput       Max IOPS
                  (1 MB block size)    (8 KB block size)    (4 KB block size)
50 GB             12 MB/s              0.8 MB/s             100
100 GB            24 MB/s              1.6 MB/s             200
200 GB            48 MB/s              3.2 MB/s             400
300 GB            72 MB/s              4.8 MB/s             600
400 GB            96 MB/s              6.4 MB/s             800
500 GB            120 MB/s             8 MB/s               1000
750 GB            180 MB/s             12 MB/s              1500
1 TB              240 MB/s             16 MB/s              2000
1.5 TB - 32 TB    480 MB/s             23 MB/s              3000

Performance Limitations and Considerations


• Block Volume performance SLA for IOPS per volume and IOPS per instance applies to the Balanced and Higher
Performance elastic performance settings only, not to the Lower Cost setting.
• The performance results described in this topic are for unformatted data volumes. Performance is lower for
Windows-formatted data volumes. Linux-formatted data volume performance will be similar to performance for
unformatted data volumes.
• The throughput performance results are for bare metal Compute instances. Throughput performance on virtual
machine (VM) Compute instances is dependent on the network bandwidth that is available to the instance, and
further limited by that bandwidth for the volume. For details about the network bandwidth available for VM
shapes, see the Network Bandwidth column in the VM Shapes on page 663 table.
• IOPS performance is independent of the instance type or shape, so is applicable to all bare metal and VM shapes,
for iSCSI attached volumes.
• The Block Volume performance SLA for IOPS per volume and IOPS per instance applies to raw, unformatted volumes with iSCSI volume attachments only, not to paravirtualized volume attachments, at the Block Volume service level.
Paravirtualized attachments simplify the process of configuring your block storage by removing the extra
commands needed before accessing a volume. However, due to the overhead of virtualization, this reduces the
maximum IOPS performance for larger block volumes. The performance of paravirtualized-attached volumes is
90% of the performance for iSCSI-attached volumes. If storage IOPS performance is of paramount importance for
your workloads, you can continue to experience the guaranteed performance Oracle Cloud Infrastructure Block
Volume offers by using iSCSI attachments.
Boot volumes are paravirtualized-attached volumes, so the volume performance will reflect this.
• For the Lower Cost option you may not see the same latency performance that you see with the Balanced or
Higher Performance elastic performance options. You may also see a greater variance in latency with the Lower
Cost option.
• Windows Defender Advanced Threat Protection (Windows Defender ATP) is enabled by default on all Oracle-
provided Windows images. This tool has a significant negative impact on disk I/O performance. The IOPS
performance characteristics described in this topic are valid for Windows bare metal instances with Windows
Defender ATP disabled for disk I/O. Customers must carefully consider the security implications of disabling
Windows Defender ATP. See Windows Defender Advanced Threat Protection.
• Block volume performance is per volume, so when a block volume is attached to multiple instances the
performance is shared across all the attached instances. See Attaching a Volume to Multiple Instances on page
524.

Metrics and Performance Testing


See Using Block Volumes Service Metrics to Calculate Block Volume Throughput and IOPS for a walkthrough
of a performance testing scenario with FIO that shows how you can use Block Volume metrics to determine the
performance characteristics of your block volume.

For more information about FIO command samples you can use for performance testing see Sample FIO Commands
for Block Volume Performance Tests on Linux-based Instances on page 579.

Testing Methodology and Performance for Balanced Elastic Performance Option


Caution:

• Before running any tests, protect your data by making a backup of your
data and operating system environment to prevent any data loss.
• Do not run FIO tests directly against a device that is already in use, such
as /dev/sdX. If it is in use as a formatted disk and there is data on it,
running FIO with a write workload (readwrite, randrw, write, trimwrite)
will overwrite the data on the disk, and cause data corruption. Run FIO
only on unformatted raw devices that are not in use.
This section describes the setup of the test environments, the methodology, and the observed performance for the
Balanced elastic performance configuration option. Some of the sample volume sizes tested were:
• 50 GB volume - 3,000 IOPS @ 4K
• 1 TB volume - 25,000 IOPS @ 4K
• Host maximum, Ashburn (IAD) region, twenty 1 TB volumes - 400,000 IOPS @ 4K
These tests used a wide range of volume sizes and the most common read and write patterns and were generated with
the Gartner Cloud Harmony test suite. To show the throughput performance limits, 256k or larger block sizes should
be used. For most environments, 4K, 8K, or 16K blocks are common depending on the application workload, and
these are used specifically for IOPS measurements.
In the observed performance images in this section, the X axis represents the block size tested, ranging from 4 KB to 1 MB. The Y axis represents the IOPS delivered. The Z axis represents the read/write mix tested, ranging from 100% read to 100% write.
Note:

Performance Notes for Instance Types


• The throughput performance results are for bare metal instances.
Throughput performance on VM instances is dependent on the network
bandwidth that is available to the instance, and further limited by that
bandwidth for the volume. For details about the network bandwidth
available for VM shapes, see the Network Bandwidth column in the VM
Shapes table.
• IOPS performance is independent of the instance type or shape, so is
applicable to all bare metal and VM shapes, for iSCSI attached volumes.

1 TB Block Volume
A 1 TB volume was mounted to a bare metal instance running in the Phoenix region. The instance shape was a dense I/O shape, and the workload was direct I/O with a 10 GB working set. The following command was run for the Gartner Cloud Harmony test suite:

~/block-storage/run.sh --nopurge --noprecondition --fio_direct=1 --fio_size=10g --target /dev/sdb --test iops --skip_blocksize 512b

The results showed that for 1 TB, the bandwidth limit for the larger block size test occurs at 320 MB/s.
The following images show the observed performance for 1 TB:

50 GB Block Volume
A 50 GB volume was mounted to a bare metal instance running in the Phoenix region. The instance shape was a dense I/O shape, and the workload was direct I/O with a 10 GB working set. The following command was run for the Gartner Cloud Harmony test suite:

~/block-storage/run.sh --nopurge --noprecondition --fio_direct=1 --fio_size=10g --target /dev/sdb --test iops --skip_blocksize 512b

The results showed that for the 50 GB volume, the bandwidth limit is confirmed as 24,000 KBPS for the larger block
size tests (256 KB or larger block sizes), and the maximum of 3,000 IOPS at 4K block size is delivered. For small
volumes, a 4K block size is common.
The following images show the observed performance for 50 GB:

Host Maximum
Depending on the instance shape, a single instance with multiple attached volumes can achieve performance of up to 700,000 IOPS when the elastic performance settings for the attached volumes are set to Balanced or Higher Performance.
To test performance, run the following command for the Gartner Cloud Harmony test suite using thirty 800 GB
higher performance volumes:

sudo ./run.sh --savefio --nopurge --noprecondition --nozerofill --nosecureerase --notrim -v --fio_direct=1 --fio_size=10g --target /dev/sdy,/dev/sdf,/dev/sdab,/dev/sdo,/dev/sdw,/dev/sdd,/dev/sdm,/dev/sdu,/dev/sdb,/dev/sdk,/dev/sds,/dev/sdi,/dev/sdq,/dev/sdae,/dev/sdz,/dev/sdg,/dev/sdac,/dev/sdx,/dev/sde,/dev/sdaa,/dev/sdn,/dev/sdv,/dev/sdc,/dev/sdl,/dev/sdt,/dev/sdj,/dev/sdr,/dev/sdh,/dev/sdp,/dev/sdad --test iops --skip_blocksize 512b &

The following images show the observed performance:

Sample FIO Commands for Block Volume Performance Tests on Linux-based Instances
This topic describes sample FIO commands you can use to run performance tests for the Oracle Cloud Infrastructure
Block Volume service on instances created from Linux-based images.
Installing FIO
To install and configure FIO on your instances with Linux-based operating systems, run the commands applicable to
the operating system version for your instance.
Oracle Linux and CentOS
Run the following command to install and configure FIO for your Oracle Linux or CentOS systems.
• Oracle Linux 8:

sudo dnf install fio -y


• Oracle Autonomous Linux 7.x, Oracle Linux 6.x, Oracle Linux 7.x, CentOS 7.x, and CentOS 8.x:

sudo yum install fio -y

Ubuntu
Run the following commands to install and configure FIO for your Ubuntu systems:

sudo apt-get update && sudo apt-get install fio -y

This applies to Ubuntu 16.04, 18.04, and Ubuntu Minimal 16.04, 18.04.

FIO Commands

IOPS Performance Tests


Use the following FIO example commands to test IOPS performance. You can run the commands directly or create a
job file with the command and then run the job file.
Test random reads
Run the following command directly to test random reads:

sudo fio --filename=device name --direct=1 --rw=randread --bs=4k --ioengine=libaio --iodepth=256 --runtime=120 --numjobs=4 --time_based --group_reporting --name=iops-test-job --eta-newline=1 --readonly

In some cases you might see more consistent results if you use a job file instead of running the command directly.
Use the following steps for this approach.
1. Create a job file, fiorandomread.fio, with the following:

[global]
bs=4K
iodepth=256
direct=1
ioengine=libaio
group_reporting
time_based
runtime=120
numjobs=4
name=raw-randread
rw=randread

[job1]
filename=device name
2. Run the job using the following command:

fio fiorandomread.fio

Test file random read/writes


Run the following command against the mount point to test file read/writes:

sudo fio --filename=/custom mount point/file --size=500GB --direct=1 --rw=randrw --bs=4k --ioengine=libaio --iodepth=256 --runtime=120 --numjobs=4 --time_based --group_reporting --name=iops-test-job --eta-newline=1

Add both the read IOPS and the write IOPS returned.
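For example, if a run reports read: IOPS=11.9k and write: IOPS=12.1k (illustrative figures, not measured results), the volume delivered roughly 24,000 combined IOPS for that test.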
Test random read/writes
Caution:

Do not run FIO tests with a write workload (readwrite, randrw, write,
trimwrite) directly against a device that is in use.
Run the following command to test random read/writes:

sudo fio --filename=device name --direct=1 --rw=randrw --bs=4k --ioengine=libaio --iodepth=256 --runtime=120 --numjobs=4 --time_based --group_reporting --name=iops-test-job --eta-newline=1

Add both the read IOPS and the write IOPS returned.

In some cases you might see more consistent results if you use a job file instead of running the command directly.
Use the following steps for this approach.
1. Create a job file, fiorandomreadwrite.fio, with the following:

[global]
bs=4K
iodepth=256
direct=1
ioengine=libaio
group_reporting
time_based
runtime=120
numjobs=4
name=raw-randreadwrite
rw=randrw

[job1]
filename=device name
2. Run the job using the following command:

fio fiorandomreadwrite.fio

Test sequential reads


For workloads that enable you to take advantage of sequential access patterns, such as database workloads, you can
confirm performance for this pattern by testing sequential reads.
Run the following command to test sequential reads:

sudo fio --filename=device name --direct=1 --rw=read --bs=4k --ioengine=libaio --iodepth=256 --runtime=120 --numjobs=4 --time_based --group_reporting --name=iops-test-job --eta-newline=1 --readonly

In some cases you may see more consistent results if you use a job file instead of running the command directly. Use
the following instructions for this approach:
1. Create a job file, fioread.fio, with the following:

[global]
bs=4K
iodepth=256
direct=1
ioengine=libaio
group_reporting
time_based
runtime=120
numjobs=4
name=raw-read
rw=read

[job1]
filename=device name
2. Run the job using the following command:

fio fioread.fio

Throughput Performance Tests


Use the following FIO example commands to test throughput performance.

Test random reads


Run the following command to test random reads:

sudo fio --filename=device name --direct=1 --rw=randread --bs=64k --ioengine=libaio --iodepth=64 --runtime=120 --numjobs=4 --time_based --group_reporting --name=throughput-test-job --eta-newline=1 --readonly

In some cases you might see more consistent results if you use a job file instead of running the command directly.
Use the following steps for this approach.
1. Create a job file, fiorandomread.fio, with the following:

[global]
bs=64K
iodepth=64
direct=1
ioengine=libaio
group_reporting
time_based
runtime=120
numjobs=4
name=raw-randread
rw=randread

[job1]
filename=device name
2. Run the job using the following command:

fio fiorandomread.fio

Test file random read/writes


Run the following command against the mount point to test file read/writes:

sudo fio --filename=/custom mount point/file --size=500GB --direct=1 --rw=randrw --bs=64k --ioengine=libaio --iodepth=64 --runtime=120 --numjobs=4 --time_based --group_reporting --name=throughput-test-job --eta-newline=1

Add both the read MBPS and the write MBPS returned.
Test random read/writes
Caution:

Do not run FIO tests with a write workload (readwrite, randrw, write,
trimwrite) directly against a device that is in use.
Run the following command to test random read/writes:

sudo fio --filename=device name --direct=1 --rw=randrw --bs=64k --ioengine=libaio --iodepth=64 --runtime=120 --numjobs=4 --time_based --group_reporting --name=throughput-test-job --eta-newline=1

Add both the read MBPS and the write MBPS returned.
In some cases you might see more consistent results if you use a job file instead of running the command directly.
Use the following steps for this approach.
1. Create a job file, fiorandomreadwrite.fio, with the following:

[global]
bs=64K
iodepth=64
direct=1
ioengine=libaio
group_reporting
time_based
runtime=120
numjobs=4
name=raw-randreadwrite
rw=randrw

[job1]
filename=device name
2. Run the job using the following command:

fio fiorandomreadwrite.fio

Test sequential reads


For workloads that enable you to take advantage of sequential access patterns, such as database workloads, you can
confirm performance for this pattern by testing sequential reads.
Run the following command to test sequential reads:

sudo fio --filename=device name --direct=1 --rw=read --bs=64k --ioengine=libaio --iodepth=64 --runtime=120 --numjobs=4 --time_based --group_reporting --name=throughput-test-job --eta-newline=1 --readonly

In some cases you might see more consistent results if you use a job file instead of running the command directly.
Use the following steps for this approach.
1. Create a job file, fioread.fio, with the following:

[global]
bs=64K
iodepth=64
direct=1
ioengine=libaio
group_reporting
time_based
runtime=120
numjobs=4
name=raw-read
rw=read

[job1]
filename=device name
2. Run the job using the following command:

fio fioread.fio

Latency Performance Tests


Use the following FIO example commands to test latency performance. You can run the commands directly or create
a job file with the command and then run the job file.

Test random reads for latency


Run the following command directly to test random reads for latency:

sudo fio --filename=device name --direct=1 --rw=randread --bs=4k --ioengine=libaio --iodepth=1 --numjobs=1 --time_based --group_reporting --name=readlatency-test-job --runtime=120 --eta-newline=1 --readonly

In some cases you might see more consistent results if you use a job file instead of running the command directly.
Use the following steps for this approach.
1. Create a job file, fiorandomreadlatency.fio, with the following:

[global]
bs=4K
iodepth=1
direct=1
ioengine=libaio
group_reporting
time_based
runtime=120
numjobs=1
name=readlatency-test-job
rw=randread

[job1]
filename=device name
2. Run the job using the following command:

fio fiorandomreadlatency.fio

Test random read/writes for latency


Caution:

Do not run FIO tests with a write workload (readwrite, randrw, write,
trimwrite) directly against a device that is in use.
Run the following command directly to test random read/writes for latency:

sudo fio --filename=device name --direct=1 --rw=randrw --bs=4k --ioengine=libaio --iodepth=1 --numjobs=1 --time_based --group_reporting --name=rwlatency-test-job --runtime=120 --eta-newline=1

In some cases you might see more consistent results if you use a job file instead of running the command directly.
Use the following steps for this approach.
1. Create a job file, fiorandomrwlatency.fio, with the following:

[global]
bs=4K
iodepth=1
direct=1
ioengine=libaio
group_reporting
time_based
runtime=120
numjobs=1
name=rwlatency-test-job
rw=randrw

[job1]
filename=device name
2. Run the job using the following command:

fio fiorandomrwlatency.fio

Block Volume Elastic Performance


The elastic performance feature of the Oracle Cloud Infrastructure Block Volume service allows you to dynamically change the volume performance, and to pay for the performance characteristics you require independently from the size of your block volumes and boot volumes.
This feature is based on volume performance units (VPUs). You can purchase more VPUs to allocate more resources to a volume, increasing IOPS/GB and throughput per GB. You also have the flexibility to purchase fewer VPUs, which reduces the performance characteristics of a volume but can also provide cost savings. You can also choose not to purchase any VPUs, which can provide significant cost savings for volumes that don't require the increased performance characteristics.
For specific pricing details, see Oracle Storage Cloud Pricing.
Elastic Performance Configuration Options
There are three elastic performance configuration options, as described below.
• Balanced: This is the default setting for new and existing block and boot volumes. It provides a good balance between performance and cost savings for most workloads, including those that perform random I/O, such as boot volumes. This option provides linear performance scaling with 60 IOPS/GB up to 25,000 IOPS per volume. Throughput scales at 480 KBPS/GB up to a maximum of 480 MBPS per volume. With this option you are purchasing 10 VPUs per GB/month.
• Higher Performance: Recommended for workloads with the highest I/O requirements that need the best possible performance, such as large databases. This option provides the best linear performance scaling with 75 IOPS/GB up to a maximum of 35,000 IOPS per volume. Throughput also scales at the highest rate of 600 KBPS/GB up to a maximum of 480 MBPS per volume. With this option you are purchasing 20 VPUs per GB/month.
• Lower Cost: Recommended for throughput-intensive workloads with large sequential I/O, such as streaming, log processing, and data warehouses. The cost is only the storage cost; there is no additional VPU cost. This option gives you linear scaling of 2 IOPS/GB up to a maximum of 3000 IOPS per volume. This option is only available for block volumes; it is not available for boot volumes.
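As a worked example of how VPUs and the per-volume caps interact, a 500 GB volume configured as Balanced corresponds to 500 GB x 10 VPUs/GB = 5,000 VPUs per month, and its IOPS is limited to the 25,000 per-volume maximum because 500 GB x 60 IOPS/GB = 30,000 exceeds that cap.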
The following table lists the performance characteristics for each elastic performance level.

Performance Level     IOPS/GB    Max IOPS/Volume    Throughput/GB      Max Throughput/Volume    VPUs/GB
                                                    (KB/s per GB)      (MB/s per volume)
Lower Cost            2          3000               240                Up to 480                0
Balanced              60         25,000             480                480                      10
Higher Performance    75         35,000             600                480                      20

See Block Volume Performance on page 571 for additional performance details for the Block Volume service.
VPUs refer to volume performance units; see Oracle Storage Cloud Pricing for specific pricing details.
Configuring Volume Performance
You can configure the volume performance for a block volume when you create the volume; see Creating a Volume on page 519. You can also change the volume performance for an existing block volume; see To change the volume performance for an existing block volume on page 586.

When you create a Compute instance, the volume performance for the instance's boot volume is set to Balanced by default. You can change this setting after the instance has launched; see To change the volume performance for an existing boot volume on page 586.
Changing the Performance of a Volume
The Block Volume service's elastic performance feature enables you to dynamically configure the volume performance for block volumes and boot volumes. For more information, see Block Volume Elastic Performance on page 585.
Required IAM Service Policy
To use Oracle Cloud Infrastructure, you must be granted security access in a policy by an administrator. This access
is required whether you're using the Console or the REST API with an SDK, CLI, or other tool. If you get a message
that you don’t have permission or are unauthorized, verify with your administrator what type of access you have and
which compartment you should work in.
For administrators: The policy in Let volume admins manage block volumes, backups, and volume groups on page
2154 lets the specified group do everything with block volumes and backups, but not launch instances.
If you're new to policies, see Getting Started with Policies on page 2143 and Common Policies on page 2150. For
reference material about writing policies for instances, cloud networks, or other Core Services API resources, see
Details for the Core Services on page 2192.
Limitations
• You can change the elastic performance configuration on no more than three volumes concurrently per tenancy.
• When changing volume performance for boot volumes, you can only select the Balanced or Higher Performance
options.
Using the Console
The default volume performance setting for existing block volumes, and for new block volumes that you create, is Balanced. You can change this setting when you create a new block volume; see Creating a Volume on page 519. You can also change the volume performance setting for an existing block volume using the steps in the following procedure.
To change the volume performance for an existing block volume
1. Open the navigation menu. Under Core Infrastructure, go to Block Storage and click Block Volumes.
2. Click the block volume that you want to change the performance for.
3. Click Edit Size or Performance.
4. Click the volume performance option you want to change to.
5. Click Save Changes.
When you create an instance, the volume performance setting for the instance's boot volume is set to Balanced. You
can change this setting to Higher Performance after the instance has been launched.
To change the volume performance for an existing boot volume
1. Open the navigation menu. Under Core Infrastructure, go to Compute and click Boot Volumes.
2. Click the boot volume that you want to change the performance for.
3. Click Edit Size or Performance.
4. Click the volume performance option you want to change to.
5. Click Save Changes.
Using the CLI
For information about using the CLI, see Command Line Interface (CLI) on page 4228.
Use the volume update operation or the boot-volume update operation with the vpus-per-gb parameter to update a block volume's or boot volume's elastic performance setting. The vpus-per-gb parameter is where you specify the volume performance units (VPUs). VPUs represent the volume performance settings, with the following allowed values:

• 0: Represents Lower Cost setting, applies to block volumes only.


• 10: Represents Balanced setting, applies to both block volumes and boot volumes.
• 20: Represents Higher Performance setting, applies to both block volumes and boot volumes.
For example:

oci bv volume update --volume-id <volume_ID> --vpus-per-gb 20
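The equivalent for a boot volume uses the boot-volume update operation; for example, to set a boot volume to the Balanced setting (the boot volume OCID is a placeholder):

oci bv boot-volume update --boot-volume-id <boot_volume_OCID> --vpus-per-gb 10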

Using the API


For information about using the API and signing requests, see REST APIs on page 4409 and Security Credentials on
page 181. For information about SDKs, see Software Development Kits and Command Line Interface on page 4262.

Block Volumes
To update a block volume's performance setting, use the following operation:
• UpdateVolume
The volume performance setting is specified in the vpusPerGB attribute of UpdateVolumeDetails. Allowed values
are 0, 10, and 20.

Boot Volumes
To update a boot volume's performance setting, use the following operation:
• UpdateBootVolume
The volume performance setting is specified in the vpusPerGB attribute of UpdateBootVolumeDetails. Allowed
values are 10 and 20.

Auto-tune Volume Performance


The Block Volume service has three elastic performance configuration options:
• Balanced
• Higher Performance
• Lower Cost
For more information about these settings, see Block Volume Elastic Performance on page 585. The auto-tune
feature enables you to configure your block volumes and boot volumes to use the optimal performance setting based
on whether the volume is attached or detached from an instance.
When you create a volume, the default volume performance setting is Balanced. You can change this default performance setting when you create the volume (see Creating a Volume on page 519) or on an existing volume (see Changing the Performance of a Volume on page 586). When the performance auto-tune feature is disabled, your volume's performance is always the default performance setting. If performance auto-tune is enabled, the volume's performance is the default performance setting while the block volume is attached to one or more instances. When the volume is detached, the Block Volume service adjusts the performance setting to Lower Cost for both block volumes and boot volumes. When the volume is reattached, the performance is adjusted back to the default performance setting.
When viewing the Block Volume Details or Boot Volume Details pages in the Console, the applicable fields are:
• Current Performance: This is the volume’s effective performance. If the auto-tune performance feature is
disabled for the volume, Current Performance will always be what is specified in the Default Performance,
regardless of whether the volume is attached or detached. If the auto-tune performance feature is enabled for the
volume, Current Performance will be adjusted to Lower Cost when the volume is detached. Note that Current
Performance won’t show the performance setting as Lower Cost until the performance adjustment is complete.
• Default Performance: This is the volume’s performance setting that you specify when you create the volume
or when you change the performance setting for an existing volume. When the volume is attached, regardless of
whether the auto-tune performance feature is enabled or not, this is the volume’s performance.

• Auto-tune Performance: This field indicates whether the auto-tune performance feature is enabled for the
volume. When it is off, the volume’s effective performance is always the same as what is specified for Default
Performance. When it is on, the volume performance is adjusted to Lower Cost when the volume is detached.
See Timing Limits and Considerations for details about when these settings take effect.
Required IAM Policy
To use Oracle Cloud Infrastructure, you must be granted security access in a policy by an administrator. This access
is required whether you're using the Console or the REST API with an SDK, CLI, or other tool. If you get a message
that you don’t have permission or are unauthorized, verify with your administrator what type of access you have and
which compartment you should work in.
For administrators: The policy in Let volume admins manage block volumes, backups, and volume groups on page
2154 lets the specified group do everything with block volumes and backups, but not launch instances.
If you're new to policies, see Getting Started with Policies on page 2143 and Common Policies on page 2150. For
reference material about writing policies for instances, cloud networks, or other Core Services API resources, see
Details for the Core Services on page 2192.
Timing Limits and Considerations
The following list identifies some timing considerations you should be aware of when using the performance auto-
tune feature.
• When you enable the auto-tune performance feature for a detached volume, the Block Volume service starts the
performance adjustment to Lower Cost after 14 days.
• When you enable the auto-tune performance feature for an attached volume, the Block Volume service starts the
performance adjustment to Lower Cost 14 days after you detach the volume.
• If you disable the auto-tune performance feature while a volume is detached, the Block Volume service starts the performance adjustment to the Default Performance setting right away.
• Attaching a volume with the auto-tune performance feature enabled may take longer than attaching a volume with
it off, as the Block Volume service adjusts the performance before the volume attachment completes.
• If you change the Default Performance for a detached volume with the auto-tune performance feature enabled,
the Current Performance for the volume will remain Lower Cost until you reattach the volume.
• If you clone a detached volume with the auto-tune performance feature enabled, the Block Volume service starts
the performance adjustment to Lower Cost after 14 days.
Using the Console
The following procedures describe how to enable the auto-tune performance feature in the Console.
To enable the auto-tune performance feature for a block volume
1. Open the navigation menu. Under Core Infrastructure, go to Block Storage and click Block Volumes.
2. Click the block volume that you want to enable the auto-tune performance feature for.
3. Click Edit.
4. In the Volume Size and Performance section, click the AUTO-TUNE PERFORMANCE slider so that it
changes from Off to On.
5. Click Save Changes.
To enable the auto-tune performance feature for a boot volume
1. Open the navigation menu. Under Core Infrastructure, go to Compute and click Boot Volumes.
2. Click the boot volume that you want to enable the auto-tune performance feature for.
3. Click Edit.
4. In the Volume Size and Performance section, click the AUTO-TUNE PERFORMANCE slider so that it
changes from Off to On.
5. Click Save Changes.

Using the API


For information about using the API and signing requests, see REST APIs on page 4409 and Security Credentials on
page 181. For information about SDKs, see Software Development Kits and Command Line Interface on page 4262.

Block Volumes
To enable or disable the auto-tune performance feature for a block volume, use the following operation:
• UpdateVolume
The auto-tune performance setting is specified in the isAutoTuneEnabled attribute of UpdateVolumeDetails.
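The same attribute can be toggled from the CLI; the following is a sketch that assumes the CLI maps isAutoTuneEnabled to an --is-auto-tune-enabled parameter on the volume update command (the volume OCID is a placeholder):

oci bv volume update --volume-id <volume_OCID> --is-auto-tune-enabled true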

Boot Volumes
To enable or disable the auto-tune performance feature for a boot volume, use the following operation:
• UpdateBootVolume
The auto-tune performance setting is specified in the isAutoTuneEnabled attribute of
UpdateBootVolumeDetails.

Block Volume Metrics


You can monitor the health, capacity, and performance of your block volumes and boot volumes by using metrics,
alarms, and notifications.
This topic describes the metrics emitted by the metric namespace oci_blockstore (the Block Volume service).
Resources: block volumes and boot volumes.
See Using Block Volumes Service Metrics to Calculate Block Volume Throughput and IOPS for a walkthrough of
a performance testing scenario with FIO that shows how you can use these metrics to determine the performance
characteristics of your block volume.

Overview of Metrics for an Instance and Its Storage Devices


If you're not already familiar with the different types of metrics available for an instance and its storage and network
devices, see Compute Instance Metrics on page 794.

Available Metrics: oci_blockstore


The Block Volume service metrics help you measure volume operations and throughput related to Compute instances.
The metrics listed in the following table are automatically available for any block volume or boot volume, regardless
of whether the attached instance has monitoring enabled. You do not need to enable monitoring on the volumes to get
these metrics.
You also can use the Monitoring service to create custom queries.
Each metric includes the following dimensions:
ATTACHMENTID
The OCID of the volume attachment.
RESOURCEID
The OCID of the volume.

VolumeReadThroughput* (display name: Volume Read Throughput)
  Unit: bytes. Read throughput, expressed as bytes read per interval. Dimensions: attachmentId, resourceId.
VolumeWriteThroughput* (display name: Volume Write Throughput)
  Unit: bytes. Write throughput, expressed as bytes written per interval. Dimensions: attachmentId, resourceId.
VolumeReadOps* (display name: Volume Read Operations)
  Unit: reads. Activity level from I/O reads, expressed as reads per interval. Dimensions: attachmentId, resourceId.
VolumeWriteOps* (display name: Volume Write Operations)
  Unit: writes. Activity level from I/O writes, expressed as writes per interval. Dimensions: attachmentId, resourceId.

* The Compute service separately reports network-related metrics as measured on the instance itself and aggregated
across all the attached volumes. Those metrics are available in the oci_computeagent metric namespace. For
more information, see Compute Instance Metrics on page 794.
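As an illustration of a custom query against this namespace, the following Monitoring Query Language (MQL) expression sums the read operations per one-minute interval for a single volume; this is a sketch, the volume OCID is a placeholder, and the exact syntax should be confirmed against the Monitoring service documentation:

VolumeReadOps[1m]{resourceId = "<volume_OCID>"}.sum()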

Using the Console


To view default metric charts for a single volume
1. Open the navigation menu. Under Core Infrastructure, go to Compute and click Instances.
2. Click the instance to view its details.
3. Under Resources, click either Attached Block Volumes or Boot Volume to view the volume you're interested in.
4. Click the volume to view its details.
5. Under Resources, click Metrics.
For more information about monitoring metrics and using alarms, see Monitoring Overview on page 2686. For
information about notifications for alarms, see Notifications Overview on page 3378.
To view default metric charts for multiple volumes
1. Open the navigation menu. Under Solutions and Platform, go to Monitoring and click Service Metrics.
2. For Compartment, select the compartment that contains the volumes you're interested in.
3. For Metric Namespace, select oci_blockstore.
The Service Metrics page dynamically updates the page to show charts for each metric that is emitted by the
selected metric namespace.
For more information about monitoring metrics and using alarms, see Monitoring Overview on page 2686. For
information about notifications for alarms, see Notifications Overview on page 3378.

Using the API


For information about using the API and signing requests, see REST APIs on page 4409 and Security Credentials on
page 181. For information about SDKs, see Software Development Kits and Command Line Interface on page 4262.
Use the following APIs for monitoring:
• Monitoring API for metrics and alarms
• Notifications API for notifications (used with alarms)
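
For example, the following Python SDK sketch queries the oci_blockstore namespace through the Monitoring API.
The compartment OCID is a placeholder, and the one-hour window and MQL expression are example values:

import datetime

import oci

config = oci.config.from_file()                      # assumes a valid ~/.oci/config profile
monitoring = oci.monitoring.MonitoringClient(config)

compartment_id = "ocid1.compartment.oc1..exampleuniqueID"   # placeholder
end_time = datetime.datetime.utcnow()
start_time = end_time - datetime.timedelta(hours=1)

# Mean read throughput per one-minute interval for volumes in the compartment.
details = oci.monitoring.models.SummarizeMetricsDataDetails(
    namespace="oci_blockstore",
    query="VolumeReadThroughput[1m].mean()",
    start_time=start_time,
    end_time=end_time)
response = monitoring.summarize_metrics_data(compartment_id, details)

for metric in response.data:
    print(metric.dimensions.get("resourceId"), len(metric.aggregated_datapoints), "datapoints")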


Chapter 10
Compliance Documents
This chapter explains how to view and download compliance documents.

Overview of Compliance Documents


The Oracle Cloud Infrastructure Compliance Documents service lets you view and download compliance documents
that you could previously access only by submitting a request to the Elevated Support Portal.
Note:

Compliance Documents is not available in Oracle Cloud Infrastructure Government Cloud realms.

Types of Compliance Documents


When viewing compliance documents, you can filter on the following types:
• Attestation. A Payment Card Industry (PCI) Data Security Standard (DSS) Attestation of Compliance document.
• Audit. A general audit report.
• Bridge Letter (BridgeLetter). A bridge letter. Bridge letters provide compliance information for the period of
time between the end date of an SOC report and the date of the release of a new SOC report.
• Certificate. A document indicating certification by a particular authority, with regard to certification requirements
and examination results conforming to said requirements.
• SOC3. A Service Organization Controls 3 audit report that provides information relating to a service
organization's internal controls for security, availability, confidentiality, and privacy.
• Other. A compliance document that doesn't fit into any of the preceding, more specific categories.
If you need to further narrow down what documents are displayed, you can combine the type filter with the
environment filter.

Types of Environments
The environments, or business pillars or platforms, to which the documents belong include:
• OCI. Oracle Cloud Infrastructure is a set of complementary cloud infrastructure services that let you build and run
applications and services in a highly available hosted environment.
• PAAS. Oracle Platform as a Service (PaaS) provides various platforms to build and deploy applications within the
public, private, or hybrid cloud.

Regions and Availability Domains


You can use the Compliance Documents service in all regions. For a list of supported regions, see Regions and
Availability Domains on page 182.


Ways to Access Oracle Cloud Infrastructure


You can access Oracle Cloud Infrastructure using the Console (a browser-based interface) or the REST API.
Instructions for the Console and API are included in topics throughout this guide. However, you cannot specifically
access Compliance Documents by using the API or Command Line Interface (CLI). Compliance Documents does not
have public API, SDK, or CLI support at this time.
To access the Console, you must use a supported browser.
Oracle Cloud Infrastructure supports the following browsers and versions:
• Google Chrome 69 or later
• Safari 12.1 or later
• Firefox 62 or later

Viewing and Downloading Compliance Documents


This section describes how to view compliance documents. The Console displays documents available to your
tenancy for the currently selected region.

To view a list of all compliance documents


1. Open the navigation menu. Under the Governance and Administration group, go to Governance and click
Compliance Documents.
2. The Compliance Documents page displays all documents you have permission to view. From this page, you can
do the following:
• Filter. You can filter documents by environment or by type.
• Sort. You can sort documents by name, type, or creation date.
• Download. You can download documents to your local computer.

To download a compliance document


1. Open the navigation menu. Under the Governance and Administration group, go to Governance and click
Compliance Documents.
2. The Compliance Documents page displays all documents you have permission to view.
3. Next to the name of the document, click Download.
4. Review the terms of use.
5. When you're ready, select the I have reviewed and accept these terms and conditions check box, and then click
Download File.

To sort a list of compliance documents


1. Open the navigation menu. Under the Governance and Administration group, go to Governance and click
Compliance Documents.
2. By default, the list displays documents according to the document name, in alphabetical order. To sort the list
another way, do one of the following:
• Click Name. The list sorts alphabetically by document name. If you begin from the default sort order, the sort
order changes to show documents in reverse alphabetical order by name.
• Click Type. The list sorts according to the type of the document, in alphabetical order by type.
• Click Created. The list sorts according to the date and time the document was created.
3. To sort the list again, repeat the previous step.


Chapter 11
Compute
This chapter explains how to launch, access, manage, and terminate compute instances.

Overview of the Compute Service


Oracle Cloud Infrastructure Compute lets you provision and manage compute hosts, known as instances. You can
launch instances as needed to meet your compute and application requirements. After you launch an instance, you
can access it securely from your computer, restart it, attach and detach volumes, and terminate it when you're done
with it. Any changes made to the instance's local drives are lost when you terminate it. Any saved changes to volumes
attached to the instance are retained.
Oracle Cloud Infrastructure offers both bare metal and virtual machine instances:
• Bare Metal: A bare metal compute instance gives you dedicated physical server access for highest performance
and strong isolation.
• Virtual Machine: A virtual machine (VM) is an independent computing environment that runs on top of physical
bare metal hardware. The virtualization makes it possible to run multiple VMs that are isolated from each other.
VMs are ideal for running applications that do not require the performance and resources (CPU, memory, network
bandwidth, storage) of an entire physical machine.
An Oracle Cloud Infrastructure VM compute instance runs on the same hardware as a bare metal instance,
leveraging the same cloud-optimized hardware, firmware, software stack, and networking infrastructure.
Be sure to review Best Practices for Your Compute Instance on page 597 for important information about working
with your Oracle Cloud Infrastructure Compute instance.
Oracle Cloud Infrastructure uses Oracle Ksplice to apply important security and other critical kernel updates to the
hypervisor hosts without a reboot. Oracle Cloud Infrastructure can apply these patches transparently without the need
to pause any VMs, and all hypervisor hosts support this capability. For more information, see Installing and Running
Oracle Ksplice on page 669.
Compute is Always Free eligible. For more information about Always Free resources, including capabilities and
limitations, see Oracle Cloud Infrastructure Free Tier on page 142.

Instance Types
When you create a Compute instance, you can select the most appropriate type of instance for your applications
based on characteristics such as the number of CPUs, amount of memory, and network resources. Oracle Cloud
Infrastructure offers a variety of shapes that are designed to meet a range of compute and application requirements:
• Standard shapes: Designed for general purpose workloads and suitable for a wide range of applications and use
cases. Standard shapes provide a balance of cores, memory, and network resources. Standard shapes are available
with Intel or AMD processors.
• DenseIO shapes: Designed for large databases, big data workloads, and applications that require high-
performance local storage. DenseIO shapes include locally-attached NVMe-based SSDs.
• GPU shapes: Designed for hardware-accelerated workloads. GPU shapes include Intel or AMD CPUs and
NVIDIA graphics processors.


• High performance computing (HPC) shapes: Designed for high-performance computing workloads that require
high frequency processor cores and cluster networking for massively parallel HPC workloads. HPC shapes are
available for bare metal instances only.
For more information about the available bare metal and VM shapes, see Compute Shapes on page 659, Bare
Metal Instances, Virtual Machines, and Virtual Machines and Bare Metal (GPU).

Flexible Shapes
Flexible shapes let you customize the number of OCPUs and the amount of memory allocated to an instance. When
you create a VM instance using a flexible shape, you select the number of OCPUs and the amount of memory
that you need for the workloads that run on the instance. The network bandwidth and number of VNICs scale
proportionately with the number of OCPUs. This flexibility lets you build VMs that match your workload, enabling
you to optimize performance and minimize cost.

Components for Launching Instances


The components required to launch an instance are:
availability domain
The Oracle Cloud Infrastructure data center within your geographical region that hosts cloud resources,
including your instances. You can place instances in the same or different availability domains, depending
on your performance and redundancy requirements. For more information, see Regions and Availability
Domains on page 182.
virtual cloud network
A virtual version of a traditional network—including subnets, route tables, and gateways—on which your
instance runs. At least one cloud network has to be set up before you launch instances. For information about
setting up cloud networks, see Networking Overview on page 2774.
key pair (for Linux instances)
A security mechanism required for Secure Shell (SSH) access to an instance. Before you launch an instance,
you’ll need at least one key pair. For more information, see Managing Key Pairs on Linux Instances on page
698.
tags
You can apply tags to your resources to help you organize them according to your business needs. You can
apply tags at the time you create a resource, or you can update the resource later with the wanted tags. For
general information about applying tags, see Resource Tags on page 213.
password (for Windows instances)
A security mechanism required to access an instance that uses an Oracle-provided Windows image. The first
time you launch an instance using a Windows image, Oracle Cloud Infrastructure will generate an initial,
one-time password that you can retrieve using the console or API. This password must be changed after you
initially log on.
image
A template of a virtual hard drive that determines the operating system and other software for an instance.
For details about Oracle Cloud Infrastructure platform images, see Oracle-Provided Images on page 633.
You can also launch instances from:
• Trusted third-party images published by Oracle partners from the Partner Image catalog. For more
information about partner images, see Overview of Marketplace on page 2676 and Working with
Listings on page 2677.
• Pre-built Oracle enterprise images and solutions enabled for Oracle Cloud Infrastructure
• Custom images, including bring your own image scenarios.
• Boot Volumes on page 613.


shape
A template that determines the number of CPUs, amount of memory, and other resources allocated to a
newly created instance. You choose the most appropriate shape when you launch an instance. See Compute
Shapes on page 659 for a list of available bare metal and VM shapes.
You can optionally attach volumes to an instance. For more information, see Overview of Block Volume on page
504.
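
These components map directly onto the request that launches an instance. The following Python SDK sketch is only
an illustration of that mapping; the availability domain, OCIDs, shape name, SSH key, and tag values are placeholders,
and the exact attributes you supply depend on the image and shape you choose:

import oci

config = oci.config.from_file()                       # assumes a valid ~/.oci/config profile
compute = oci.core.ComputeClient(config)

details = oci.core.models.LaunchInstanceDetails(
    availability_domain="Uocm:PHX-AD-1",              # availability domain (placeholder)
    compartment_id="ocid1.compartment.oc1..exampleuniqueID",
    shape="VM.Standard2.1",                           # shape (example)
    display_name="example-instance",
    image_id="ocid1.image.oc1..exampleuniqueID",      # image (placeholder)
    subnet_id="ocid1.subnet.oc1..exampleuniqueID",    # subnet in your virtual cloud network (placeholder)
    metadata={"ssh_authorized_keys": "ssh-rsa AAAA...example user@host"},   # key pair for Linux instances
    freeform_tags={"project": "example"})             # tags

instance = compute.launch_instance(details).data
print("Launched instance:", instance.id)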

Creating Automation with Events


You can create automation based on state changes for your Oracle Cloud Infrastructure resources by using event
types, rules, and actions. For more information, see Overview of Events on page 1788.
The following Compute resources emit events:
• Autoscaling configurations and autoscaling policies
• Cluster networks
• Console histories
• Images
• Instances and instance attachments
• Instance configurations
• Instance console connections
• Instance pools

Resource Identifiers
Most types of Oracle Cloud Infrastructure resources have a unique, Oracle-assigned identifier called an Oracle
Cloud ID (OCID). For information about the OCID format and other ways to identify your resources, see Resource
Identifiers on page 199.

Work Requests
Compute is one of the Oracle Cloud Infrastructure services that is integrated with the Work Requests API. For general
information on using work requests in Oracle Cloud Infrastructure, see Work Requests in the user guide, and the
Work Requests API.

Ways to Access Oracle Cloud Infrastructure


You can access Oracle Cloud Infrastructure using the Console (a browser-based interface) or the REST API.
Instructions for the Console and API are included in topics throughout this guide. For a list of available SDKs, see
Software Development Kits and Command Line Interface on page 4262.
To access the Console, you must use a supported browser.
Oracle Cloud Infrastructure supports the following browsers and versions:
• Google Chrome 69 or later
• Safari 12.1 or later
• Firefox 62 or later
For general information about using the API, see REST APIs on page 4409.

Authentication and Authorization


Each service in Oracle Cloud Infrastructure integrates with IAM for authentication and authorization, for all
interfaces (the Console, SDK or CLI, and REST API).
An administrator in your organization needs to set up groups, compartments, and policies that control which users
can access which services, which resources, and the type of access. For example, the policies control who can create
new users, create and manage the cloud network, launch instances, create buckets, download objects, etc. For more
information, see Getting Started with Policies on page 2143. For specific details about writing policies for each of
the different services, see Policy Reference on page 2176.
If you’re a regular user (not an administrator) who needs to use the Oracle Cloud Infrastructure resources that
your company owns, contact your administrator to set up a user ID for you. The administrator can confirm which
compartment or compartments you should be using.

Storage for Compute Instances


You can expand the storage that's available for your Compute instances with the following services:
• Block Volume: Lets you dynamically provision and manage block volumes that you can attach to one or more
Compute instances. See Overview of Block Volume on page 504 for more information. For steps to attach
block volumes to Compute instances, see Attaching a Volume on page 521 and Attaching a Volume to Multiple
Instances on page 524.
• File Storage: A durable, scalable, secure, enterprise-grade network file system that you can connect to from any
Compute instance in your virtual cloud network (VCN). See Overview of File Storage on page 1928 for more
information.
• Object Storage: An internet-scale, high-performance storage platform that lets you store an unlimited amount of
unstructured data of any content type. This storage is regional and not tied to any specific Compute instance. See
Overview of Object Storage on page 3420 for more information.
• Archive Storage: A storage platform that lets you store an unlimited amount of unstructured data of any content
type that doesn't require instantaneous data retrieval. This storage is regional and not tied to any specific Compute
instance. See Overview of Archive Storage on page 488 for more information.

Limits on Compute Resources


See Service Limits on page 217 for a list of applicable limits and instructions for requesting a limit increase. To set
compartment-specific limits on a resource or resource family, administrators can use compartment quotas.
Additional limits include:
• To attach a volume to an instance, both the instance and volume must be within the same availability domain.
• Many Compute operations are subject to throttling.
A service limit is different from host capacity. A service limit is the quota or allowance set on a resource. Host
capacity is the physical infrastructure that resources such as Compute instances run on.

Metadata Key Limits


Custom metadata keys (any key you define that is not ssh_authorized_keys or user_data) have the
following limits:
• Max number of metadata keys: 128
• Max size of key name: 255 characters
• Max size of key value: 255 characters
ssh_authorized_keys is a special key that does not have these limits, but its value is validated to conform to a
public key in the OpenSSH format.
user_data has a maximum size of 16KB. For Linux instances with cloud-init configured, you can populate the
user_data field with a Base64-encoded string of cloud-init user data. For more information on formats that cloud-
init accepts, see cloud-init formats.
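
The following sketch illustrates these limits with a simple client-side check before metadata is passed to a launch
request. The helper is hypothetical, the limit constants mirror the values above, and it assumes the 16 KB cap is
checked against the Base64-encoded user_data value, since that is the form in which the field is supplied:

import base64

MAX_CUSTOM_KEYS = 128
MAX_KEY_LENGTH = 255
MAX_VALUE_LENGTH = 255
MAX_USER_DATA_BYTES = 16 * 1024   # 16 KB limit on user_data (checked on the encoded value; assumption)

def validate_metadata(metadata):
    custom = {k: v for k, v in metadata.items()
              if k not in ("ssh_authorized_keys", "user_data")}
    if len(custom) > MAX_CUSTOM_KEYS:
        raise ValueError("too many custom metadata keys")
    for key, value in custom.items():
        if len(key) > MAX_KEY_LENGTH or len(value) > MAX_VALUE_LENGTH:
            raise ValueError(f"metadata entry '{key}' exceeds the 255-character limit")
    if len(metadata.get("user_data", "")) > MAX_USER_DATA_BYTES:
        raise ValueError("user_data exceeds the 16 KB limit")

cloud_init = "#cloud-config\npackages:\n  - git\n"
metadata = {
    "ssh_authorized_keys": "ssh-rsa AAAA...example user@host",                 # placeholder public key
    "user_data": base64.b64encode(cloud_init.encode("utf-8")).decode("ascii"),
    "team": "storage",                                                         # example custom key
}
validate_metadata(metadata)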

Best Practices for Your Compute Instance


Oracle Cloud Infrastructure Compute provides bare metal and virtual machine (VM) compute capacity that delivers
performance, flexibility, and control without compromise. It's powered by Oracle’s next generation, internet-scale
infrastructure, designed to help you develop and run your most demanding applications and workloads in the cloud.


You can provision compute capacity through an easy-to-use web console or the API, SDKs, or CLI. The compute
instance, once provisioned, provides you with access to the host. This gives you complete control of your instance.
Though you have full management authority for your instance, we recommend a variety of best practices to ensure
system availability and top performance.

IP Addresses Reserved for Use by Oracle


Certain IP addresses are reserved for Oracle Cloud Infrastructure use and may not be used in your address numbering
scheme.

169.254.0.0/16
These addresses are used for iSCSI connections to the boot and block volumes, instance metadata, and other services.

Class D and Class E


All addresses from 224.0.0.0 to 239.255.255.255 (Class D) are prohibited for use in a VCN because they are reserved
for multicast address assignments in the IP standards. See RFC 3171 for details.
All addresses from 240.0.0.0 to 255.255.255.255 (Class E) are prohibited for use in a VCN because they are reserved
for future use in the IP standards. See RFC 1112, Section 4 for details.

Three IP Addresses in Each Subnet


These addresses consist of:
• The first IP address in the CIDR (the network address)
• The last IP address in the CIDR (the broadcast address)
• The first host address in the CIDR (the subnet default gateway address)
For example, in a subnet with CIDR 192.168.0.0/24, these addresses are reserved:
• 192.168.0.0 (the network address)
• 192.168.0.255 (the broadcast address)
• 192.168.0.1 (the subnet default gateway address)
The remaining addresses in the CIDR (192.168.0.2 to 192.168.0.254) are available for use.
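
The same three addresses are reserved in every subnet, so you can derive them from any CIDR programmatically. The
following sketch uses only the Python standard library and reproduces the example above:

import ipaddress

subnet = ipaddress.ip_network("192.168.0.0/24")

network_address = subnet.network_address        # 192.168.0.0, reserved
broadcast_address = subnet.broadcast_address    # 192.168.0.255, reserved
default_gateway = subnet.network_address + 1    # 192.168.0.1, reserved
usable_addresses = subnet.num_addresses - 3     # 253 addresses remain available

print(network_address, broadcast_address, default_gateway, usable_addresses)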

Essential Firewall Rules


All Oracle-provided images include rules that allow only "root" on Linux instances or "Administrators" on
Windows Server instances to make outgoing connections to the iSCSI network endpoints (169.254.0.2:3260,
169.254.2.0/24:3260) that serve the instance's boot and block volumes.
• We recommend that you do not reconfigure the firewall on your instance to remove these rules. Removing these
rules allows non-root users or non-administrators to access the instance’s boot disk volume.
• We recommend that you do not create custom images without these rules unless you understand the security risks.
• Running Uncomplicated Firewall (UFW) on Ubuntu images might cause issues with these rules. Because of this,
we recommend that you do not enable UFW on your instances. See Ubuntu instance fails to reboot after enabling
Uncomplicated Firewall (UFW) for more information.

System Resilience
Follow industry-wide hardware failure best practices to ensure the resilience of your solution in the event of a
hardware failure. Some best practices include:
• Design your system with redundant compute nodes in different availability domains to support failover capability.
• Create a custom image of your system drive each time you change the image.
• Back up your data drives, or sync to spare drives, regularly.


If you experience a hardware failure and have followed these practices, you can terminate the failed instance, launch
your custom image to create a new instance, and then apply the backup data.

Uninterrupted Access to the Instance


Make sure to keep the DHCP client running so you can always access the instance. If you stop the DHCP client
manually or disable NetworkManager (which stops the DHCP client on Linux instances), the instance can't renew
its DHCP lease and will become inaccessible when the lease expires (typically within 24 hours). Do not disable
NetworkManager unless you use another method to ensure renewal of the lease.
Stopping the DHCP client might remove the host route table when the lease expires. Also, loss of network
connectivity to your iSCSI connections might result in loss of the boot drive.

User Access
If you created your instance using an Oracle-provided Linux image, you can use SSH to access your instance from a
remote host as the opc user. After logging in, you can add users on your instance.
If you do not want to share SSH keys, you can create additional SSH-enabled users.
If you created your instance using an Oracle-provided Windows image, you can access your instance using a Remote
Desktop client as the opc user. After logging in, you can add users on your instance.
For more information about user access, see Adding Users on an Instance on page 743. For steps to log in to an
instance, see Connecting to an Instance on page 739.

NTP Service
Oracle Cloud Infrastructure offers a fully managed, secure, and highly available NTP service that you can use to
set the date and time of your Compute and Database instances from within your virtual cloud network (VCN). We
recommend that you configure your instances to use the Oracle Cloud Infrastructure NTP service. For information
about how to configure instances to use this service, see Configuring the Oracle Cloud Infrastructure NTP Service for
an Instance on page 600.

Fault Domains
A fault domain is a grouping of hardware and infrastructure that is distinct from other fault domains in the same
availability domain. Each availability domain has three fault domains. By properly leveraging fault domains you can
increase the availability of applications running on Oracle Cloud Infrastructure. See Fault Domains on page 184 for
more information.
Your application's architecture will determine whether you should separate or group instances using fault domains.

Scenario 1: Highly Available Application Architecture


In this scenario you have a highly available application, for example you have two web servers and a clustered
database. In this scenario you should group one web server and one database node in one fault domain and the other
half of each pair in another fault domain. This ensures that a failure of any one fault domain does not result in an
outage for your application.

Scenario 2: Single Web Server and Database Instance Architecture


In this scenario your application architecture is not highly available, for example you have one web server and one
database instance. In this scenario both the web server and the database instance must be placed in the same fault
domain. This ensures that your application will only be impacted by the failure of that single fault domain.

Customer-Managed Virtual Machine (VM) Maintenance


When an underlying infrastructure component needs to undergo maintenance, you are notified before the impact to
your VM instances. During an infrastructure maintenance event, where applicable, Oracle Cloud Infrastructure live
migrates Standard VM instances from the physical VM host that needs maintenance to a healthy VM host without
disrupting running instances. If a VM cannot be live migrated, there might be a short downtime while the instance is
reboot migrated.
You can control how and when your applications experience maintenance downtime by proactively rebooting (or
stopping and starting) the instances at any time before the scheduled maintenance event. A maintenance reboot is
different from a normal reboot. When you reboot an instance for maintenance, the instance is stopped on the physical
VM host that needs maintenance, and then restarted on a healthy VM host.
If you choose not to reboot before the scheduled time, then Oracle Cloud Infrastructure will migrate the instances
before proceeding with the planned infrastructure maintenance. Optionally, you can configure the instances to remain
stopped after they are reboot migrated. For more information, see Recovering a Virtual Machine (VM) During
Planned Maintenance on page 786.

Configuring the Oracle Cloud Infrastructure NTP Service for an Instance


Oracle Cloud Infrastructure offers a fully managed, secure, and highly available NTP service that you can use to
set the date and time of your Compute and Database instances from within your virtual cloud network (VCN). The
Oracle Cloud Infrastructure NTP service uses redundant Stratum 1 devices in every availability domain. The Stratum
1 devices are synchronized to dedicated Stratum 2 devices that every host synchronizes against. The service is
available in every region.
This topic describes how to configure your Compute instances to use this NTP service.
You can also choose to configure your instances to use a public NTP service or use FastConnect to leverage an on-
premises NTP service.
Note:

Oracle-provided images for Oracle Autonomous Linux 7.x, Oracle Linux
8.x, Oracle Linux 7.x, CentOS 7.x, and CentOS 8.x released after February
2018 include the Chrony service by default. You do not need to configure the
Oracle Cloud Infrastructure NTP service for these instances.
Oracle Linux 6.x
Use the following steps to configure your Oracle Linux 6.x instances to use the Oracle Cloud Infrastructure NTP
service.
1. Configure IPtables to allow connections to the Oracle Cloud Infrastructure NTP service, using the following
commands:

sudo iptables -I BareMetalInstanceServices 8 -d 169.254.169.254/32 -p udp -m udp --dport 123 \
    -m comment --comment "Allow access to OCI local NTP service" -j ACCEPT

sudo service iptables save


2. Install the NTP service with the following command:

sudo yum install ntp


3. Set the date of your instance with the following command:

sudo ntpdate 169.254.169.254


4. Configure the instance to use the Oracle Cloud Infrastructure NTP service for iburst. To configure, modify the /
etc/ntp.conf file as follows:
a. In the server section, comment out the lines specifying the RHEL servers:

#server 0.rhel.pool.ntp.org iburst
#server 1.rhel.pool.ntp.org iburst
#server 2.rhel.pool.ntp.org iburst
#server 3.rhel.pool.ntp.org iburst


b. Add an entry for the Oracle Cloud Infrastructure NTP server:

server 169.254.169.254 iburst

The modified server section now contains the following:

# Please consider joining the pool (http://www.pool.ntp.org/join.html).


#server 0.rhel.pool.ntp.org iburst
#server 1.rhel.pool.ntp.org iburst
#server 2.rhel.pool.ntp.org iburst
#server 3.rhel.pool.ntp.org iburst
server 169.254.169.254 iburst
5. Set the NTP service to launch automatically when the instance boots with the following command:

sudo chkconfig ntpd on


6. Start the NTP service with the following command:

sudo /etc/init.d/ntpd start


7. Confirm that the NTP service is configured correctly with the following command:

ntpq -p

The output will be similar to the following:

     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
 169.254.169.254 192.168.32.3     2 u    2   64    1    0.338    0.278   0.187

Oracle Linux 7.x


Use the following steps to configure your Oracle Linux 7.x instances to use the Oracle Cloud Infrastructure NTP
service.
1. Run commands in this section as root with the following command:

sudo su -
2. Install the NTP service with the following command:

yum -y install ntp


3. Change the firewall rules to allow inbound and outbound traffic with the Oracle Cloud Infrastructure NTP server,
at 169.254.169.254, on UDP port 123 with the following command:

awk -v n=13 \
    -v s=' <passthrough ipv="ipv4">-A OUTPUT -d 169.254.169.254/32 -p udp -m udp --dport 123 -m comment --comment "Allow access to OCI local NTP service" -j ACCEPT </passthrough>' \
    'NR == n {print s} {print}' /etc/firewalld/direct.xml > tmp && mv tmp /etc/firewalld/direct.xml

At the prompt mv: overwrite '/etc/firewalld/direct.xml'?, enter y.


4. Restart the firewall with the following command:

service firewalld restart


5. Set the date of your instance with the following command:

ntpdate 169.254.169.254
6. Configure the instance to use the Oracle Cloud Infrastructure NTP service for iburst. To configure, modify the /
etc/ntp.conf file as follows:
a. In the server section comment out the lines specifying the RHEL servers:

#server 0.rhel.pool.ntp.org iburst


#server 1.rhel.pool.ntp.org iburst
#server 2.rhel.pool.ntp.org iburst
#server 3.rhel.pool.ntp.org iburst
b. Add an entry for the Oracle Cloud Infrastructure NTP service:

server 169.254.169.254 iburst

The modified server section should now contain the following:

# Please consider joining the pool (http://www.pool.ntp.org/join.html).


#server 0.rhel.pool.ntp.org iburst
#server 1.rhel.pool.ntp.org iburst
#server 2.rhel.pool.ntp.org iburst
#server 3.rhel.pool.ntp.org iburst
server 169.254.169.254 iburst
7. Start and enable the NTP service with the following commands:

systemctl start ntpd


systemctl enable ntpd

You also need to disable the chrony NTP client to ensure that the NTP service starts automatically after a reboot,
using the following commands:

systemctl stop chronyd


systemctl disable chronyd
8. Confirm that the NTP service is configured correctly with the following command:

ntpq -p

The output will be similar to the following:

     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
 169.254.169.254 192.168.32.3     2 u    2   64    1    0.338    0.278   0.187

Windows Server 2012 R2 and later versions


You can configure your Windows Server instances to use the Oracle Cloud Infrastructure NTP service by running the
following commands in Windows PowerShell as Administrator:

Set-ItemProperty -Path 'HKLM:\System\CurrentControlSet\Services\W32Time\Parameters' -Name 'Type' -Value NTP -Type String
Set-ItemProperty -Path 'HKLM:\System\CurrentControlSet\Services\W32Time\Config' -Name 'AnnounceFlags' -Value 5 -Type DWord
Set-ItemProperty -Path 'HKLM:\System\CurrentControlSet\Services\W32Time\TimeProviders\NtpServer' -Name 'Enabled' -Value 1 -Type DWord
Set-ItemProperty -Path 'HKLM:\System\CurrentControlSet\Services\W32Time\Parameters' -Name 'NtpServer' -Value '169.254.169.254,0x9' -Type String
Set-ItemProperty -Path 'HKLM:\System\CurrentControlSet\Services\W32Time\TimeProviders\NtpClient' -Name 'SpecialPollInterval' -Value 900 -Type DWord
Set-ItemProperty -Path 'HKLM:\System\CurrentControlSet\Services\W32Time\Config' -Name 'MaxPosPhaseCorrection' -Value 1800 -Type DWord
Set-ItemProperty -Path 'HKLM:\System\CurrentControlSet\Services\W32Time\Config' -Name 'MaxNegPhaseCorrection' -Value 1800 -Type DWord

Steps 1 - 6 below walk you through the same registry changes; you can use these steps to edit the registry manually
instead of using PowerShell. If you use the PowerShell commands, you can skip steps 1 - 6 and proceed with steps
7 and 8 to complete the process of configuring your Windows instance to use the Oracle Cloud Infrastructure NTP
service.
1. Change the server type to NTP:
a. From Registry Editor, navigate to:

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\W32Time\Parameters\
b. Click Type.
c. Change the value to NTP and click OK.
2. Configure the Windows Time service to enable the Timeserv_Announce_Yes and
Reliable_Timeserv_Announce_Auto flags.
To configure, set the AnnounceFlags parameter to 5:
a. From Registry Editor, navigate to:

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\W32Time\Config\
b. Click AnnounceFlags.
c. Change the value to 5 and click OK.
3. Enable the NTP server:
a. From Registry Editor, navigate to:

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\W32Time
\TimeProviders\NtpServer\
b. Click Enabled.
c. Change the value to 1 and click OK.
4. Set the time sources:
a. From Registry Editor, navigate to:

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\W32Time\Parameters\
b. Click NtpServer.
c. Change the value to 169.254.169.254,0x9 and click OK.
5. Set the poll interval:
a. From Registry Editor, navigate to:

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\W32Time
\TimeProviders\NtpClient\
b. Click SpecialPollInterval.
c. Set the value to the interval that you want the time service to synchronize on. The value is in seconds. To set it
for 15 minutes, set the value to 900, and click OK.


6. Set the phase correction limit settings to restrict the time sample boundaries:
a. From Registry Editor, navigate to:

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\W32Time\Config\
b. Click MaxPosPhaseCorrection.
c. Set the value to the maximum time offset in the future for time samples. The value is in seconds. To set it for
30 minutes, set the value to 1800 and click OK.
d. Click MaxNegPhaseCorrection.
e. Set the value to the maximum time offset in the past for time samples. The value is in seconds. To set it for 30
minutes, set the value to 1800 and click OK.
7. Restart the time service by running the following command from a command prompt:

net stop w32time && net start w32time


8. Test the connection to the NTP service by running the following command from a command prompt:

w32tm /query /peers

The output will be similar to the following:

#Peer: 1

Peer: 169.254.169.254,0x9
State: Active
Time Remaining: 22.1901786s
Mode: 3 (Client)
Stratum: 0 (unspecified)
PeerPoll Interval: 10 (1024s)
HostPoll Interval: 10 (1024s)

After the time specified in the poll interval has elapsed, State will change from Pending to Active.

Protecting Data on NVMe Devices


Some instance shapes in Oracle Cloud Infrastructure include locally attached NVMe devices. These devices provide
extremely low latency, high performance block storage that is ideal for big data, OLTP, and any other workload that
can benefit from high-performance block storage.
Note that these devices are not protected in any way; they are individual devices locally installed on your instance.
Oracle Cloud Infrastructure does not take images, back up, or use RAID or any other methods to protect the data on
NVMe devices. It is your responsibility to protect and manage the durability of the data on these devices.
Oracle Cloud Infrastructure offers high-performance remote block (iSCSI) LUNs that are redundant and can be
backed up using an API call. See Overview of Block Volume on page 504 for more information.
See Compute Shapes on page 659 for information about which shapes support local NVMe storage.

Finding the NVMe devices on your instance


You can identify the NVMe devices by using the lsblk command. The response returns a list. NVMe devices begin
with "nvme", as shown in the following example for a BM.DenseIO1.36 instance:

[opc@somehost ~]$ lsblk
NAME      MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda         8:0    0  46.6G  0 disk
├─sda1      8:1    0   512M  0 part /boot/efi
├─sda2      8:2    0     8G  0 part [SWAP]
└─sda3      8:3    0    38G  0 part /
nvme0n1   259:6    0   2.9T  0 disk
nvme1n1   259:8    0   2.9T  0 disk
nvme2n1   259:0    0   2.9T  0 disk
nvme3n1   259:1    0   2.9T  0 disk
nvme4n1   259:7    0   2.9T  0 disk
nvme5n1   259:4    0   2.9T  0 disk
nvme6n1   259:5    0   2.9T  0 disk
nvme7n1   259:2    0   2.9T  0 disk
nvme8n1   259:3    0   2.9T  0 disk
[opc@somehost ~]$

Failure Modes and How to Protect Against Them


There are three primary failure modes you should plan for:
• Protecting Against the Failure of an NVMe Device on page 605
• Protecting Against the Loss of the Instance or Availability Domain on page 613
• Protecting Against Data Corruption or Loss from Application or User Error on page 613
Protecting Against the Failure of an NVMe Device
A protected RAID array is the recommended way to protect against an NVMe device failure. There are three
RAID levels that can be used for the majority of workloads:


• RAID 1: An exact copy (or mirror) of a set of data on two or more disks; a classic RAID 1 mirrored pair contains
two disks.
• RAID 10: Stripes data across multiple mirrored pairs. As long as one disk in each mirrored pair is functional, data
can be retrieved.
• RAID 6: Block-level striping with two parity blocks distributed across all member disks.

For more information about RAID and RAID levels, see RAID.
Because the appropriate RAID level is a function of the number of available drives, the number of individual LUNs
needed, the amount of space needed, and the performance requirements, there isn't one correct choice. You must
understand your workload and design accordingly.
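
One input to that decision is usable capacity. The following sketch computes approximate usable space for the
common levels from the number of devices and a per-device size; the 3.2 TB figure is a hypothetical nominal capacity
used only for illustration, and the results ignore filesystem and metadata overhead:

def usable_tb(level, devices, device_tb):
    """Approximate usable capacity for a RAID array, ignoring overhead."""
    if level == "raid1":
        return device_tb                   # every device holds the same copy
    if level == "raid10":
        return (devices // 2) * device_tb  # one usable copy per mirrored pair
    if level == "raid6":
        return (devices - 2) * device_tb   # two devices' worth of parity
    raise ValueError(f"unsupported RAID level: {level}")

DEVICE_TB = 3.2   # hypothetical nominal per-device capacity

print("RAID 6 across 5 devices: ", usable_tb("raid6", 5, DEVICE_TB), "TB")
print("RAID 10 across 8 devices:", usable_tb("raid10", 8, DEVICE_TB), "TB")
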
Important:

If you're partitioning or formatting your disk as part of this process and the
drive is larger than 2 TB, you should create a GUID Partition Table (GPT).
If you want to create a GPT, use parted instead of the fdisk command.
For more information, see About Disk Partitions in the Oracle Linux
Administrator's Guide.

Options for Using a BM.DenseIO1.36 Shape


There are several options for BM.DenseIO1.36 instances with nine NVMe devices.
For all options below, you can optionally increase the default RAID resync speed limit value. Increasing this value to
more closely match the fast storage speed on the bare metal instances can decrease the amount of time required to set
up RAID.
Use the following command to increase the speed limit value:

$ sysctl -w dev.raid.speed_limit_max=10000000

Option 1: Create a single RAID 6 device across all nine devices


This array is redundant, performs well, will survive the failure of any two devices, and will be exposed as a single
LUN with about 23.8TB of usable space.


Use the following commands to create a single RAID 6 device across all nine devices:

$ sudo yum install mdadm -y

$ sudo mdadm --create /dev/md0 --raid-devices=9 --level=6 /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1 \
    /dev/nvme3n1 /dev/nvme4n1 /dev/nvme5n1 /dev/nvme6n1 /dev/nvme7n1 /dev/nvme8n1

$ sudo mdadm --detail --scan | sudo tee -a /etc/mdadm.conf >> /dev/null

Option 2: Create a four-device RAID 10 and a five-device RAID 6 array
These arrays would be exposed as two different LUNs to your applications. This is a recommended choice when you
need to isolate one type of I/O from another, such as log and data files. In this example, your RAID 10 array would
have about 6.4TB of usable space and the RAID 6 array would have about 9.6TB of usable space.
Use the following commands to create a four-device RAID 10 and a five-device RAID 6 array:

$ sudo yum install mdadm -y

$ sudo mdadm --create /dev/md0 --raid-devices=4 --level=10 /dev/nvme5n1 /dev/nvme6n1 /dev/nvme7n1 /dev/nvme8n1

$ sudo mdadm --create /dev/md1 --raid-devices=5 --level=6 /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1 \
    /dev/nvme3n1 /dev/nvme4n1

$ sudo mdadm --detail --scan | sudo tee -a /etc/mdadm.conf >> /dev/null

Option 3: Create an eight-device RAID 10 array


If you need the best possible performance and can sacrifice some of your available space, then an eight-device
RAID 10 array is an option. Because RAID 10 requires an even number of devices, the ninth device is left out of the
array and serves as a hot spare in case another device fails. This creates a single LUN with about 12.8 TB of usable
space.
Use the following commands to create an eight-device RAID 10 array:

$ sudo yum install mdadm -y

$ sudo mdadm --create /dev/md0 --raid-devices=8 --level=10 /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1 \
    /dev/nvme3n1 /dev/nvme4n1 /dev/nvme5n1 /dev/nvme6n1 /dev/nvme7n1

The following command adds /dev/nvme8n1 as a hot spare for the /dev/md0 array:

$ sudo mdadm /dev/md0 --add /dev/nvme8n1

$ sudo mdadm --detail --scan | sudo tee -a /etc/mdadm.conf >> /dev/null

Option 4: Create two four-device RAID 10 arrays


For the best possible performance and I/O isolation across LUNs, create two four-device RAID 10 arrays. Because
RAID 10 requires an even number of devices, the ninth device is left out of the arrays and serves as a global hot spare
in case another device in either array fails. This creates two LUNS, each with about 6.4 TB of usable space.


Use the following commands to create two four-device RAID 10 arrays with a global hot spare:

$ sudo yum install mdadm -y

$ sudo mdadm --create /dev/md0 --raid-devices=4 --level=10 /dev/nvme4n1 /dev/nvme5n1 /dev/nvme6n1 /dev/nvme7n1

$ sudo mdadm --create /dev/md1 --raid-devices=4 --level=10 /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1

Creating a global hot spare requires the following two steps:


1. Add the spare to either array (it does not matter which one) by running these commands:

$ sudo mdadm /dev/md0 --add /dev/nvme8n1

$ sudo mdadm --detail --scan | sudo tee -a /etc/mdadm.conf >> /dev/null


2. Edit /etc/mdadm.conf to put both arrays in the same spare-group. Add spare-group=global to the end of the
line that starts with ARRAY, as follows:

$ sudo vi /etc/mdadm.conf

ARRAY /dev/md0 metadata=1.2 spares=1 name=mdadm.localdomain:0 UUID=43f93ce6:4a19d07b:51762f1b:250e2327 spare-group=global
ARRAY /dev/md1 metadata=1.2 name=mdadm.localdomain:1 UUID=7521e51a:83999f00:99459a19:0c836693 spare-group=global

Monitoring Your Array


It's important to be notified if a device in one of your arrays fails. mdadm has built-in monitoring tools, and there are
two options you can use:
• Set the MAILADDR option in /etc/mdadm.conf and then run the mdadm monitor as a daemon
• Run an external script when mdadm detects a failure

Set the MAILADDR option in /etc/mdadm.conf and run the mdadm monitor as a daemon
The simplest method is to set the MAILADDR option in /etc/mdadm.conf, and then run the mdadm monitor as a
daemon, as follows:
1. The DEVICE partitions line is required for MAILADDR to work; if it is missing, you must add it, as follows:

$ sudo vi /etc/mdadm.conf

DEVICE partitions

ARRAY /dev/md0 level=raid1 UUID=1b70e34a:2930b5a6:016we78d:eese14532

MAILADDR <your_email_address>
2. Run the monitor using the following command:

$ sudo nohup mdadm --monitor --scan --daemonize &


3. To verify that the monitor runs at startup, run the following commands:

$ sudo chmod +x /etc/rc.d/rc.local

$ sudo vi /etc/rc.local

Add the following line to the end of /etc/rc.local:

nohup mdadm --monitor --scan --daemonize &


4. To verify that the email and monitor are both working run the following command:

$ sudo mdadm --monitor --scan --test -1

Note that these emails will likely be marked as spam. The PROGRAM option, described later in this topic, allows
for more sophisticated alerting and messaging.

Run an external script when a failure is detected


A more advanced option is to create an external script that would run if the mdadm monitor detects a failure. You
would integrate this type of script with your existing monitoring solution. The following is an example of this type of
script:

$ sudo vi /etc/mdadm.events

#!/bin/bash
event=$1
device=$2
if [ "$event" == "Fail" ]
then
    <"do something">
else
    if [ "$event" == "FailSpare" ]
    then
        <"do something else">
    else
        if [ "$event" == "DegradedArray" ]
        then
            <"do something else else">
        else
            if [ "$event" == "TestMessage" ]
            then
                <"do something else else else">
            fi
        fi
    fi
fi

$ sudo chmod +x /etc/mdadm.events

Next, add the PROGRAM option to /etc/mdadm.conf, as shown in the following example:


1. The DEVICE partitions line is required for MAILADDR to work; if it is missing, you must add it, as follows:

$ sudo vi /etc/mdadm.conf

DEVICE partitions

ARRAY /dev/md0 level=raid1 UUID=1b70e34a:2930b5a6:016we78d:eese14532

MAILADDR <your_email_address>

PROGRAM /etc/mdadm.events
2. Run the monitor using the following command:

$ sudo nohup mdadm --monitor --scan --daemonize &


3. To verify that the monitor runs at startup, run the following commands:

$ sudo chmod +x /etc/rc.d/rc.local

$ sudo vi /etc/rc.local

Add the following line to the end of /etc/rc.local:

nohup mdadm --monitor --scan --daemonize &


4. To verify that the email and monitor are both working run the following command:

$ sudo mdadm --monitor --scan --test -1

Note that these emails will likely be marked as spam. The PROGRAM option, configured in step 1, allows for more
sophisticated alerting and messaging.

Simulate the failure of a device


You can use mdadm to manually cause a failure of a device to see whether your RAID array can survive the failure,
as well as test the alerts you have set up.
1. Mark a device in the array as failed by running the following command:

$ sudo mdadm /dev/md0 --fail /dev/nvme0n1


2. Recover the device or your array might not be protected. Use the following command:

$ sudo mdadm /dev/md0 --add /dev/nvme0n1

Your array will automatically rebuild in order to use the "new" device. Performance will be decreased during this
process.
3. You can monitor the rebuild status by running the following command:

$ sudo mdadm --detail /dev/md0

What To Do When an NVMe Device Fails


Compute resources in the cloud are designed to be temporary and fungible. If an NVMe device fails while the
instance is in service, you should start another instance with the same amount of storage or more, and then copy the
data onto the new instance, replacing the old instance. There are multiple toolsets for copying large amounts of data,
with rsync being the most popular. Since the connectivity between instances is a full 10 Gb/sec, copying data should
be quick. Remember that with a failed device, your array may no longer be protected, so you should copy the data off
of the impacted instance as quickly as possible.
Using the Linux Logical Volume Manager
The Linux Logical Volume Manager (LVM) provides a rich set of features for managing volumes. If you need these
features, we strongly recommend that you use mdadm as described in preceding sections of this topic to create the
RAID arrays, and then use LVM's pvcreate, vgcreate, and lvcreate commands to create volumes on the
mdadm LUNs. You should not use LVM directly against your NVMe devices.
Protecting Against the Loss of the Instance or Availability Domain
Once your data is protected against the loss of an NVMe device, you need to protect it against the loss of an instance
or the loss of the availability domain. This type of protection is typically done by replicating your data to another
availability domain or backing up your data to another location. The method you choose depends on your objectives.
For details, see the disaster recovery concepts of Recovery Time Objective (RTO) and Recovery Point Objective
(RPO).

Replication
Replicating your data from one instance in one availability domain to another has the lowest RTO and RPO at a
significantly higher cost than backups; for every instance in one availability domain, you must have another instance
in a different availability domain.
For Oracle database workloads, you should use the built-in Oracle Data Guard functionality to replicate your
databases. Oracle Cloud Infrastructure availability domains are each close enough to each other to support high
performance, synchronous replication. Asynchronous replication is also an option.
For general-purpose block replication, DRBD is the recommended option. You can configure DRBD to replicate,
synchronously or asynchronously, every write in one availability domain to another availability domain.

Backups
Traditional backups are another way to protect data. All commercial backup products are fully supported on Oracle
Cloud Infrastructure. If you use backups, the RTO and RPO are significantly higher than using replication because
you must recreate the compute resources that failed and then restore the most recent backup. Costs are significantly
lower because you don't need to maintain a second instance. Do not store your backups in the same availability
domain as their original instance.
Protecting Against Data Corruption or Loss from Application or User Error
The two recommended ways of protecting against data corruption or loss from application or user error are regularly
taking snapshots or creating backups.

Snapshots
The two easiest ways to maintain snapshots are to either use a file system that supports snapshots, such as ZFS, or
use LVM to create and manage the snapshots. Because of the way LVM has implemented copy-on-write (COW),
performance may significantly decrease when a snapshot is taken using LVM.

Backups
All commercial backup products are fully supported on Oracle Cloud Infrastructure. Make sure that your backups are
stored in a different availability domain from the original instance.

Boot Volumes
When you launch a virtual machine (VM) or bare metal instance based on an Oracle-provided image or custom
image, a new boot volume for the instance is created in the same compartment. That boot volume is associated with
that instance until you terminate the instance. When you terminate the instance, you can preserve the boot volume and
its data. For more information, see Terminating an Instance on page 789. This feature gives you more control and
management options for your compute instance boot volumes, and enables:
• Instance scaling: When you terminate your instance, you can keep the associated boot volume and use it to
launch a new instance using a different instance type or shape. See Creating an Instance on page 700 for steps
to launch an instance based on a boot volume. This allows you to switch easily from a bare metal instance to a
VM instance and vice versa, or scale up or down the number of cores for an instance.
• Troubleshooting and repair: If you think a boot volume issue is causing a compute instance problem, you can
stop the instance and detach the boot volume. Then you can attach it to another instance as a data volume to
troubleshoot it. After resolving the issue, you can then reattach it to the original instance or use it to launch a new
instance.
Boot volumes are encrypted by default, the same as other block storage volumes. For more information, see Block
Volume Encryption on page 508.
Important:

In-transit encryption for boot and block volumes is available only for virtual
machine (VM) instances launched from Oracle-provided images; it is not
supported on bare metal instances. It is also not supported in most cases
for instances launched from custom images imported for "bring your own
image" (BYOI) scenarios. To confirm support for certain Linux-based
custom images and for more information, contact Oracle support; see Getting
Help and Contacting Support on page 126.
You can group boot volumes with block volumes into the same volume group, making it easy to create a group
volume backup or a clone of your entire instance, including both the system disk and storage disks at the same time.
See Volume Groups on page 510 for more information.
You can move Block Volume resources such as boot volumes and boot volume backups between compartments. For
more information, see Move Block Volume Resources Between Compartments on page 568.
For more information about the Block Volume service and boot volumes, see the Block Volume FAQ.

Custom Boot Volume Sizes


When you launch an instance, you can choose whether to use the selected image's default boot volume size, or to
specify a custom size up to 32 TB. This capability is available for the following image source options:
• Oracle-provided image
• Custom image
• Image OCID
See Creating an Instance on page 700 for more information.
For Linux-based images, the custom boot volume size must be larger than the image's default boot volume size or
50 GB, whichever is higher.
For Windows-based images, the custom boot volume size must be larger than the image's default boot volume size or
256 GB, whichever is higher. The minimum size requirement for Windows images is to ensure that there is enough
space available for Windows patches and updates that can require a large amount of space, to improve performance,
and to provide adequate space for setting a suitable page file (see this Microsoft known issue for page file settings on
Windows Server 2012 R2).
If you specify a custom boot volume size, you need to extend the volume to take advantage of the larger size. For
steps, see Extending the Partition for a Boot Volume on page 615.
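
A quick way to check a requested size against these rules is sketched below; the helper names are made up for
illustration, and the constants come directly from the constraints described above (a 32 TB upper bound, with a floor of
the image's default boot volume size or 50 GB for Linux and 256 GB for Windows):

MAX_BOOT_VOLUME_GB = 32 * 1024   # 32 TB upper bound for a custom boot volume size

def custom_size_floor_gb(image_default_gb, windows):
    """Size the custom boot volume must exceed, per the rules above."""
    return max(image_default_gb, 256 if windows else 50)

def is_valid_custom_size(requested_gb, image_default_gb, windows):
    return custom_size_floor_gb(image_default_gb, windows) < requested_gb <= MAX_BOOT_VOLUME_GB

# Example: a Linux image whose default boot volume is 47 GB.
print(is_valid_custom_size(100, image_default_gb=47, windows=False))   # True
print(is_valid_custom_size(48, image_default_gb=47, windows=False))    # False: below the 50 GB floor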

Boot Volume Performance


Boot volume performance varies with volume size, see Block Volume Performance on page 571 for more
information.


The Block Volume service's elastic performance feature enables you to dynamically change the volume performance
for boot volumes. Once an instance has been created, you can change the volume performance of the boot volume to
one of the following performance options:
• Balanced
• Higher Performance
For more information about this feature and the performance options, see Block Volume Elastic Performance on page
585 and Changing the Performance of a Volume on page 586.
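
In the API, the performance level is set through a volume performance units (vpusPerGB) value on the update details
object. The following Python SDK sketch is an assumption-laden illustration: the boot volume OCID is a placeholder,
and the mapping of 10 VPUs/GB to Balanced and 20 VPUs/GB to Higher Performance, as well as the presence of
vpusPerGB on the boot volume update details, are assumed here rather than taken from this guide:

import oci

config = oci.config.from_file()                     # assumes a valid ~/.oci/config profile
blockstorage = oci.core.BlockstorageClient(config)

boot_volume_id = "ocid1.bootvolume.oc1..exampleuniqueID"   # placeholder

# Assumed mapping: 10 VPUs/GB for Balanced, 20 VPUs/GB for Higher Performance.
blockstorage.update_boot_volume(
    boot_volume_id,
    oci.core.models.UpdateBootVolumeDetails(vpus_per_gb=20))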

Required IAM Service Policy


To use Oracle Cloud Infrastructure, you must be granted security access in a policy by an administrator. This access
is required whether you're using the Console or the REST API with an SDK, CLI, or other tool. If you get a message
that you don’t have permission or are unauthorized, verify with your administrator what type of access you have and
which compartment you should work in.
For administrators: The policy in Let users launch compute instances on page 2151 includes the ability to list boot
volumes. The policy in Let volume admins manage block volumes, backups, and volume groups on page 2154 lets
the specified group do everything with block volumes, boot volumes, and backups, but not launch instances.
If you're new to policies, see Getting Started with Policies on page 2143 and Common Policies on page 2150. For
reference material about writing policies for instances, cloud networks, or other Core Services API resources, see
Details for the Core Services on page 2192.

Using the Console


To access the Console, you must use a supported browser.
See the following tasks for managing boot volumes:
• Listing Boot Volumes on page 618
• Attaching a Boot Volume on page 617
• Detaching a Boot Volume on page 627
• Listing Boot Volume Attachments on page 619
• Deleting a Boot Volume on page 628

Using the API


For information about using the API and signing requests, see REST APIs on page 4409 and Security Credentials on
page 181. For information about SDKs, see Software Development Kits and Command Line Interface on page 4262.
Use these API operations to manage boot volumes:
• BootVolume
• ListBootVolumes
• GetBootVolume
• UpdateBootVolume
• DetachBootVolume
• DeleteBootVolume
• BootVolumeAttachment
• AttachBootVolume
• GetBootVolumeAttachment
• ListBootVolumeAttachments
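As an example, the following is a minimal sketch of calling GetBootVolume and UpdateBootVolume through the OCI Python SDK; the OCID and display name are placeholders, and a standard SDK configuration file is assumed.

import oci

config = oci.config.from_file()  # assumes a standard SDK/CLI configuration file
blockstorage = oci.core.BlockstorageClient(config)

boot_volume_id = "ocid1.bootvolume.oc1..exampleuniqueID"  # placeholder

# GetBootVolume: retrieve the current details of the boot volume.
bv = blockstorage.get_boot_volume(boot_volume_id).data
print(bv.display_name, bv.size_in_gbs, bv.lifecycle_state)

# UpdateBootVolume: rename the boot volume.
details = oci.core.models.UpdateBootVolumeDetails(display_name="renamed-boot-volume")
blockstorage.update_boot_volume(boot_volume_id, details)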

Extending the Partition for a Boot Volume


When you create a new virtual machine (VM) instance or bare metal instance based on an Oracle-provided image or
custom image, you have the option of specifying a custom boot volume size. You can also expand the size of the boot
volume for an existing instance; see Resizing a Volume on page 536 for more information. In order to take advantage


of the larger size, you need to extend the partition for the boot volume. For block volumes, see Extending the Partition
for a Block Volume on page 540.
Note:

After a boot volume has been resized, the first backup on the resized boot
volume will be a full backup. See Boot Volume Backup Types on page
619 for more information about full versus incremental boot volume
backups.
Required IAM Policy
Extending a partition on an instance does not require a specific IAM policy. However, you may need permission to
run the necessary commands on the instance's guest OS. Contact your system administrator for more information.
Extending the Root Partition on a Linux-Based Image
For instances running Linux-based images, you need to extend the root partition and then grow the file system using
the oci-growfs on page 648 operation from OCI Utilities on page 646.
Extending the System Partition on a Windows-Based Image
On Windows-based images, you can extend a partition using the Windows interface or from the command line using
the DISKPART utility.

Windows Server 2012 and Later Versions


The steps for extending a system partition on instances running Windows Server 2012, Windows Server 2016, or
Windows Server 2019 are the same, and are described in the following procedures.
Extending the system partition using the Windows interface
1. Open the Disk Management system utility on the instance.
2. Right-click the boot volume and select Extend Volume.
3. Follow the instructions in the Extend Volume Wizard:
a. Select the disk that you want to extend, enter the size, and then click Next.
b. Confirm that the disk and size settings are correct, and then click Finish.
4. Verify that the boot volume's system disk has been extended in Disk Management.
Extending the system partition using the command line with DISKPART
1. Open a command prompt as administrator on the instance.


2. Run the following command to start the DISKPART utility:

diskpart
3. At the DISKPART prompt, run the following command to display the instance's volumes:

list volume
4. Run the following command to select the boot volume:

select volume <volume_number>

<volume_number> is the number associated with the boot volume that you want to extend the partition for.
5. Run the following command to extend the partition:

extend size=<increased_size_in_MB>

<increased_size_in_MB> is the size in MB that you want to extend the partition to.
Caution:

When using the DISKPART utility, do not overextend the partition beyond
the current available space. Overextending the partition could result in data
loss.
6. To confirm that the partition was extended, run the following command and verify that the boot volume's partition
has been extended:

list volume

Attaching a Boot Volume


If a boot volume has been detached from the associated instance, you can reattach it to the instance. If you want to
restart an instance with a detached boot volume, you must reattach the boot volume using the steps described in this
topic.
If a boot volume has been detached from the associated instance, or if the instance is stopped or terminated, you can
attach the boot volume to another instance as a data volume. For steps, see Attaching a Volume on page 521.
Required IAM Policy
To use Oracle Cloud Infrastructure, you must be granted security access in a policy by an administrator. This access
is required whether you're using the Console or the REST API with an SDK, CLI, or other tool. If you get a message
that you don’t have permission or are unauthorized, verify with your administrator what type of access you have and
which compartment you should work in.
For administrators: The policy in Let users launch compute instances on page 2151 includes the ability to attach and
detach existing block volumes. The policy in Let volume admins manage block volumes, backups, and volume groups
on page 2154 lets the specified group do everything with block volumes and backups, but not launch instances.
If you're new to policies, see Getting Started with Policies on page 2143 and Common Policies on page 2150. For
reference material about writing policies for instances, cloud networks, or other Core Services API resources, see
Details for the Core Services on page 2192.
Security Zones
Security Zones ensure that your cloud resources comply with Oracle security principles. If any operation on a
resource in a security zone compartment violates a policy for that security zone, then the operation is denied.
The following security zone policies affect your ability to attach block volumes to Compute instances.
• The boot volume for a Compute instance in a security zone must also be in a security zone.
• A Compute instance that isn't in a security zone can't be attached to a boot volume that is in a security zone.


Using the Console


1. Open the navigation menu. Under Core Infrastructure, go to Compute and click Instances.
2. Click the instance that you want to reattach the boot volume to.
3. Under Resources, click Boot Volume.
4. Click the Actions icon (three dots), and then click Attach Boot Volume. Confirm when prompted.
You can start the instance when the boot volume's state is Attached.
Using the API
To attach a volume to an instance, use the following operation:
• AttachBootVolume
For information about using the API and signing requests, see REST APIs on page 4409 and Security Credentials on
page 181. For information about SDKs, see Software Development Kits and Command Line Interface on page 4262.
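The following is a minimal sketch of AttachBootVolume using the OCI Python SDK; the instance and boot volume OCIDs are placeholders, and the boot volume must be in the same availability domain as the instance.

import oci

config = oci.config.from_file()
compute = oci.core.ComputeClient(config)

# Placeholder OCIDs; the boot volume must be detached and compatible with this instance.
attach_details = oci.core.models.AttachBootVolumeDetails(
    instance_id="ocid1.instance.oc1..exampleuniqueID",
    boot_volume_id="ocid1.bootvolume.oc1..exampleuniqueID",
)

attachment = compute.attach_boot_volume(attach_details).data
# Poll the attachment until its state is ATTACHED before starting the instance.
print(attachment.id, attachment.lifecycle_state)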

Listing Boot Volumes


You can list all boot volumes in a specific compartment, or view detailed information on a single boot volume.
Required IAM Service Policy
To use Oracle Cloud Infrastructure, you must be granted security access in a policy by an administrator. This access
is required whether you're using the Console or the REST API with an SDK, CLI, or other tool. If you get a message
that you don’t have permission or are unauthorized, verify with your administrator what type of access you have and
which compartment you should work in.
For administrators: The policy in Let users launch compute instances on page 2151 includes the ability to list
volumes. The policy in Let volume admins manage block volumes, backups, and volume groups on page 2154 lets
the specified group do everything with block volumes and backups, but not launch instances.
If you're new to policies, see Getting Started with Policies on page 2143 and Common Policies on page 2150. For
reference material about writing policies for instances, cloud networks, or other Core Services API resources, see
Details for the Core Services on page 2192.
Using the Console
1. Open the navigation menu. Under Core Infrastructure, go to Compute and click Boot Volumes.
2. Choose your Compartment.
A detailed list of volumes in the current compartment is displayed. To see detailed information for a specific volume,
click the boot volume name.
The instance associated with the boot volume is listed in the Attached Instance field. If the value for this field
displays the message None in this Compartment, the boot volume has been detached from the associated
instance, or the instance has been terminated while the boot volume was preserved.
To view the volumes in a different compartment, change the compartment in the Compartment drop-down menu.
Using the API
For information about using the API and signing requests, see REST APIs on page 4409 and Security Credentials on
page 181. For information about SDKs, see Software Development Kits and Command Line Interface on page 4262.

List Boot Volumes:


Get a list of boot volumes within a compartment.
• ListBootVolumes

Get a Single Boot Volume:


Get detailed information on a single boot volume:


• GetBootVolume
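A minimal sketch of these two operations with the OCI Python SDK follows; the compartment OCID and availability domain are placeholders, and the SDK's pagination helper is used to collect every page of results.

import oci

config = oci.config.from_file()
blockstorage = oci.core.BlockstorageClient(config)

compartment_id = "ocid1.compartment.oc1..exampleuniqueID"  # placeholder
availability_domain = "Uocm:PHX-AD-1"                      # placeholder

# ListBootVolumes: gather all pages of boot volumes in the compartment and availability domain.
boot_volumes = oci.pagination.list_call_get_all_results(
    blockstorage.list_boot_volumes,
    availability_domain=availability_domain,
    compartment_id=compartment_id,
).data
for bv in boot_volumes:
    print(bv.display_name, bv.lifecycle_state)

# GetBootVolume: detailed information on a single boot volume.
if boot_volumes:
    detail = blockstorage.get_boot_volume(boot_volumes[0].id).data
    print(detail.size_in_gbs, detail.image_id)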

Listing Boot Volume Attachments


You can use the API to list all the boot volume attachments in a specific compartment. You can also use the API to
retrieve detailed information on a single boot volume attachment.
Required IAM Policy
To use Oracle Cloud Infrastructure, you must be granted security access in a policy by an administrator. This access
is required whether you're using the Console or the REST API with an SDK, CLI, or other tool. If you get a message
that you don’t have permission or are unauthorized, verify with your administrator what type of access you have and
which compartment you should work in.
For administrators: The policy in Let users launch compute instances on page 2151 includes the ability to list
volume attachments. The policy in Let volume admins manage block volumes, backups, and volume groups on page
2154 lets the specified group do everything with block volumes and backups, but not launch instances.
If you're new to policies, see Getting Started with Policies on page 2143 and Common Policies on page 2150. For
reference material about writing policies for instances, cloud networks, or other Core Services API resources, see
Details for the Core Services on page 2192.
Using the API
For information about using the API and signing requests, see REST APIs on page 4409 and Security Credentials on
page 181. For information about SDKs, see Software Development Kits and Command Line Interface on page 4262.

List Boot Volume Attachments:


Get information on all boot volume attachments in a specific compartment.
• ListBootVolumeAttachments

Get a Single Boot Volume Attachment:


Get detailed information on a single boot volume attachment.
• GetBootVolumeAttachment
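A minimal sketch of these two operations with the OCI Python SDK follows; the OCIDs and availability domain are placeholders, and the optional instance_id filter is assumed to be available as a keyword argument on the generated client.

import oci

config = oci.config.from_file()
compute = oci.core.ComputeClient(config)

compartment_id = "ocid1.compartment.oc1..exampleuniqueID"  # placeholder
availability_domain = "Uocm:PHX-AD-1"                      # placeholder

# ListBootVolumeAttachments, optionally filtered to one instance (assumed keyword argument).
attachments = compute.list_boot_volume_attachments(
    availability_domain=availability_domain,
    compartment_id=compartment_id,
    instance_id="ocid1.instance.oc1..exampleuniqueID",  # placeholder; omit to list all attachments
).data

for att in attachments:
    # GetBootVolumeAttachment: detailed information on a single attachment.
    detail = compute.get_boot_volume_attachment(att.id).data
    print(detail.boot_volume_id, detail.lifecycle_state)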

Overview of Boot Volume Backups


The backups feature of the Oracle Cloud Infrastructure Block Volume service lets you make a crash-consistent
backup, which is a point-in-time snapshot of a boot volume without application interruption or downtime. You can
make a backup of a boot volume while it is attached to a running instance, or you can make a backup of a boot
volume while it is detached from the instance. Boot volume backup capabilities are the same as block volume backup
capabilities. See Overview of Block Volume Backups on page 544 for more information.
There are two ways you can initiate a boot volume backup, the same as block volume backups. You can either
manually start the backup, or assign a policy which defines a set backup schedule. See Manual Backups on page 544
and Policy-Based Backups on page 544 for more information.
Boot Volume Backup Types
The Block Volume service supports the same backups types for boot volumes as for block volumes:
• Incremental: This backup type includes only the changes since the last backup.
• Full: This backup type includes all changes since the volume was created.
You can restore a boot volume from any of your incremental or full boot volume backups. Both backup types enable
you to restore the full boot volume contents to the point-in-time snapshot of the boot volume when the backup was
taken. You don't need to keep the initial full backup or subsequent incremental backups in the backup chain and
restore them in sequence; you only need to keep the backups taken for the times you care about.


Note:

After a boot volume has been resized, the first backup on the resized boot
volume will be a full backup. See Resizing a Volume on page 536 for more
information about volume resizing.
Tags
When a boot volume backup is created, the source boot volume's tags are automatically included in the boot volume
backup. This also includes boot volumes with custom backup policies applied to create backups on a schedule. Source
boot volume tags are automatically assigned to all backups when they are created. You can also apply additional tags
to volume backups as needed.
When you create an instance from the boot volume backup, the instance that is created includes the source boot
volume's tags.
Backing Up a Boot Volume
You can create boot volume backups using the Console or the REST APIs/command line interface (CLI). See
Backing Up a Boot Volume on page 621 and the BootVolumeBackup API for more information.
Boot Volume Backup Size
Boot volume backup size may be larger than the source boot volume size. Some of the reasons for this could include
the following:
• Any part of the boot volume that has been written to is considered initialized, so it will always be part of the boot
volume backup.
• Many operating systems write or zero out the content, which results in these blocks being marked as used. The Block
Volume service considers these blocks updated and includes them in the volume backup.
• Boot volume backups also include metadata, which can add up to 1 GB of additional data. For example, in a full
backup of a 256 GB Windows boot disk, you may see a backup size of 257 GB, which includes an additional 1
GB of metadata.
Restoring a Boot Volume
Before you can use a boot volume backup, you need to restore it. For steps, see Restoring a Boot Volume on page
622.
Making a boot volume backup while an instance is running creates a crash-consistent backup, meaning the data is
in the identical state it was in at the time the backup was made. This is the same state it would be in the case of a
loss of power or hard crash. In most cases, you can restore a boot volume backup and use it to create an instance.
Alternatively, you can attach it to an instance as a data volume to repair it or recover data; see Attaching a Volume on
page 521. To ensure a bootable image, you should create a custom image from your instance. For information about
creating custom images, see Managing Custom Images on page 670.
Copying Boot Volume Backups Across Regions
You can copy boot volume backups between regions using the Console, command line interface (CLI), SDKs,
or REST APIs. For steps, see Copying a Boot Volume Backup Between Regions on page 623. This capability
enhances the following scenarios:
• Disaster recovery and business continuity: By copying boot volume backups to another region at regular
intervals, it makes it easier for you to restore instances in the destination region if a region-wide disaster occurs in
the source region.
• Migration and expansion: You can easily migrate and expand your instances to another region.
To copy boot volume backups between regions, you must have permission to read and copy boot volume backups in
the source region, and permission to create boot volume backups in the destination region. For more information see
Required IAM Policy on page 624.
Once you have copied the boot volume backup to the new region you can then restore from that backup by creating a
new volume from the backup using the steps described in Restoring a Boot Volume on page 622.


Differences Between Boot Volume Backups and Clones


Consider the following criteria when you decide whether to create a backup or a clone of a volume.

Volume backup:
• Description: Creates a point-in-time backup of data on a volume. You can restore multiple new volumes from the backup later in the future.
• Use case: Retain a backup of the data in a volume, so that you can duplicate an environment later or preserve the data for future use. Meet compliance and regulatory requirements, because the data in a backup remains unchanged over time. Support business continuity requirements. Reduce the risk of outages or data mutation over time.
• Speed: Slower (minutes or hours)
• Cost: Lower cost
• Storage location: Object Storage
• Retention policy: Policy-based backups expire; manual backups do not expire
• Volume groups: Supported. You can back up a volume group.

Volume clone:
• Description: Creates a single point-in-time copy of a volume without having to go through the backup and restore process.
• Use case: Rapidly duplicate an existing environment. For example, you can use a clone to test configuration changes without impacting your production environment.
• Speed: Faster (seconds)
• Cost: Higher cost
• Storage location: Block Volume
• Retention policy: No expiration
• Volume groups: Supported. You can clone a volume group.

Backing Up a Boot Volume


You can create a backup of a boot volume using the Oracle Cloud Infrastructure Block Volume service. Boot
volume backups are point-in-time snapshots of a boot volume. For more information about boot volume backups, see
Overview of Boot Volume Backups on page 619. This topic describes how to create a manual boot volume backup.
You can also configure a backup policy that creates backups automatically based on a specified schedule and
retention policy. This works the same as block volumes. See Policy-Based Backups on page 551 for more
information.
For information to help you decide whether to create a backup or a clone of a boot volume, see Differences Between
Boot Volume Backups and Clones on page 621.
Note:

Boot volume backup size may be larger than the source boot volume size.
See Boot Volume Backup Size on page 620 for more information. See also
Boot volume backup size larger than expected.
Required IAM Policy
To use Oracle Cloud Infrastructure, you must be granted security access in a policy by an administrator. This access
is required whether you're using the Console or the REST API with an SDK, CLI, or other tool. If you get a message
that you don’t have permission or are unauthorized, verify with your administrator what type of access you have and
which compartment you should work in.


Tip:

When users create a backup from a volume or restore a volume from a backup, the volume and backup don't have
to be in the same compartment. However, users must have access to both compartments.
If you're new to policies, see Getting Started with Policies on page 2143 and Common Policies on page 2150. For
reference material about writing policies for instances, cloud networks, or other Core Services API resources, see
Details for the Core Services on page 2192.
Using the Console
1. Open the navigation menu. Under Core Infrastructure, go to Compute and click Boot Volumes.
2. Click the boot volume that you want to create a backup for.
3. Click Create Manual Backup.
4. Enter a name for the backup. Avoid entering confidential information.
5. Select the backup type, either incremental or full. See Boot Volume Backup Types on page 619 for information
about backup types.
6. If you have permissions to create a resource, then you also have permissions to apply free-form tags to that
resource. To apply a defined tag, you must have permissions to use the tag namespace. For more information
about tagging, see Resource Tags on page 213. If you are not sure whether to apply tags, then skip this option (you
can apply tags later) or ask your administrator.
7. Click Create Backup.
The backup is completed when its icon no longer lists it as CREATING in the Boot Volume Backup list.
Using the API
To back up a boot volume, use the following operation:
• CreateBootVolumeBackup
For information about using the API and signing requests, see REST APIs on page 4409 and Security Credentials on
page 181. For information about SDKs, see Software Development Kits and Command Line Interface on page 4262.
For more information about backups, see Overview of Block Volume Backups on page 544 and Restoring a Backup
to a New Volume on page 561.
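The following is a minimal sketch of CreateBootVolumeBackup with the OCI Python SDK; the boot volume OCID is a placeholder, and the type field takes FULL or INCREMENTAL as described in Boot Volume Backup Types.

import oci

config = oci.config.from_file()
blockstorage = oci.core.BlockstorageClient(config)

backup_details = oci.core.models.CreateBootVolumeBackupDetails(
    boot_volume_id="ocid1.bootvolume.oc1..exampleuniqueID",  # placeholder
    display_name="manual-backup-example",
    type="INCREMENTAL",  # or "FULL"
)

backup = blockstorage.create_boot_volume_backup(backup_details).data
# The backup is usable once its lifecycle_state moves from CREATING to AVAILABLE.
print(backup.id, backup.lifecycle_state)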
Restoring a Boot Volume
You can use a boot volume backup to create an instance or you can attach it to another instance as a data volume.
However before you can use a boot volume backup, you need to restore it to a boot volume.
You can restore a boot volume from any of your incremental or full boot volume backups. Both backup types enable
you to restore the full boot volume contents to the point-in-time snapshot of the boot volume when the backup was
taken. You don't need to keep the initial full backup or subsequent incremental backups in the backup chain and
restore them in sequence; you only need to keep the backups taken for the times you care about. See Boot Volume
Backup Types on page 619 for information about full and incremental backup types.
Required IAM Policy
To use Oracle Cloud Infrastructure, you must be granted security access in a policy by an administrator. This access
is required whether you're using the Console or the REST API with an SDK, CLI, or other tool. If you get a message
that you don’t have permission or are unauthorized, verify with your administrator what type of access you have and
which compartment you should work in.
If you're new to policies, see Getting Started with Policies on page 2143 and Common Policies on page 2150. For
reference material about writing policies for instances, cloud networks, or other Core Services API resources, see
Details for the Core Services on page 2192.
Using the Console
1. Open the navigation menu. Under Core Infrastructure, go to Compute and click Boot Volume Backups.
2. Choose your Compartment.


3. In the list of boot volume backups, click the Actions icon (three dots) for the boot volume backup you want to
restore and then click Create Boot Volume.
4. Specify a name for the boot volume, select the availability domain to use, and optionally choose a backup policy
for scheduled backups. See Policy-Based Backups on page 551 for more information about scheduled backups and
volume backup policies. Avoid entering confidential information.
5. You can restore a boot volume backup to a larger volume size. To do this, check Custom Boot Volume Size
(GB) and then specify the new size. You can only increase the size of the volume; you cannot decrease the size.
If you restore the boot volume backup to a larger size volume, you need to extend the volume's partition; see
Extending the Partition for a Boot Volume on page 615 for more information.
6. Click Create Boot Volume.
The boot volume will be ready to use once its icon no longer lists it as PROVISIONING in the details page for
the boot volume.
Using the API
For information about using the API and signing requests, see REST APIs on page 4409 and Security Credentials on
page 181. For information about SDKs, see Software Development Kits and Command Line Interface on page 4262.
To restore a boot volume backup, use the CreateBootVolume operation and specify
BootVolumeSourceFromBootVolumeBackupDetails for CreateBootVolumeDetails.
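A minimal sketch of that call with the OCI Python SDK follows; the OCIDs and availability domain are placeholders, and size_in_gbs is only needed if you want to restore to a larger volume.

import oci

config = oci.config.from_file()
blockstorage = oci.core.BlockstorageClient(config)

create_details = oci.core.models.CreateBootVolumeDetails(
    availability_domain="Uocm:PHX-AD-1",                      # placeholder
    compartment_id="ocid1.compartment.oc1..exampleuniqueID",  # placeholder
    display_name="restored-boot-volume",
    source_details=oci.core.models.BootVolumeSourceFromBootVolumeBackupDetails(
        id="ocid1.bootvolumebackup.oc1..exampleuniqueID"      # placeholder backup OCID
    ),
    # size_in_gbs=100,  # optional: restore to a larger size, then extend the partition
)

restored = blockstorage.create_boot_volume(create_details).data
# Wait until lifecycle_state is AVAILABLE before using the volume.
print(restored.id, restored.lifecycle_state)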
Next Steps
After you have restored the boot volume backup, you can:
• Use the boot volume to create an instance, for more information, see Creating an Instance on page 700.
• Attach the boot volume to an instance as a data volume, for more information, see Attaching a Volume on page
521.
Making a boot volume backup while an instance is running creates a crash-consistent backup, meaning the data is in
the identical state it was in at the time the backup was made. This is the same state it would be in the case of a loss of
power or hard crash. In most cases you can use the restored boot volume to create an instance; however, to ensure a
bootable image, you should create a custom image from your instance. For information about creating custom images,
see Managing Custom Images on page 670.
Copying a Boot Volume Backup Between Regions
You can copy boot volume backups from one region to another region using the Oracle Cloud Infrastructure Block
Volume service. For more information, see Copying Boot Volume Backups Across Regions on page 620.
Note:

Limitations for Copying Boot Volume Backups Across Regions


When copying boot volume backups across regions in your tenancy, you can
only copy one backup at a time from a specific source region.
You can only copy boot volume backups for instances based on Oracle-
Provided Images on page 633. If you try to copy a boot volume for an
instance based on other image types, such as Marketplace images, the request
will fail with an error.
You cannot add compatible shapes in the destination region for boot volume
backups; the shape compatibility list comes from the source region and cannot
be changed.
When you create an instance from the Console and specify a boot volume
backup that was copied from another region as the image source, you may
encounter a message indicating that there was an error loading the source
image. You can ignore this error message and click Create Instance to finish
the instance creation process and launch the instance.


Required IAM Policy


To use Oracle Cloud Infrastructure, you must be granted security access in a policy by an administrator. This access
is required whether you're using the Console or the REST API with an SDK, CLI, or other tool. If you get a message
that you don’t have permission or are unauthorized, verify with your administrator what type of access you have and
which compartment you should work in.
For administrators: The first two statements listed in the Let volume admins manage block volumes, backups, and
volume groups on page 2154 policy let the specified group do everything with boot volumes and boot volume
backups, with the exception of copying boot volume backups across regions. The aggregate resource type
volume-family does not include the BOOT_VOLUME_BACKUP_COPY permission, so to enable copying boot volume
backups across regions you need to ensure that you include the third statement in that policy, which is:

Allow group VolumeAdmins to use boot-volume-backups in tenancy where request.permission='BOOT_VOLUME_BACKUP_COPY'

To restrict access to just creating and managing boot volume backups, including copying boot volume backups
between regions, use the policy in Let boot volume backup admins manage only backups on page 2155. The
individual resource type boot-volume-backups includes the BOOT_VOLUME_BACKUP_COPY permission, so
you do not need to specify it explicitly in this policy.
If you are copying volume backups encrypted using Vault between regions or you want the copied volume backup to
use Vault for encryption in the destination region, you need to use a policy that allows the Block Volume service to
perform cryptographic operations with keys in the destination region. For a sample policy showing this, see Let Block
Volume, Object Storage, File Storage, Container Engine for Kubernetes, and Streaming services encrypt and decrypt
volumes, volume backups, buckets, file systems, Kubernetes secrets, and stream pools on page 2161.

Restricting Access
The specific permissions needed to copy volume backups across regions are:
• Source region: BOOT_VOLUME_BACKUP_READ, BOOT_VOLUME_BACKUP_COPY
• Destination region: BOOT_VOLUME_BACKUP_CREATE

Sample Policies
To restrict a group to specific source and destination regions for copying volume backups
In this example, the group is restricted to copying volume backups from the UK South (London) region to the
Germany Central (Frankfurt) region.

Allow group MyTestGroup to read boot-volume-backups in tenancy where all {request.region='lhr'}
Allow group MyTestGroup to use boot-volume-backups in tenancy where all {request.permission='BOOT_VOLUME_BACKUP_COPY', request.region='lhr', target.region='fra'}
Allow group MyTestGroup to manage boot-volume-backups in tenancy where all {request.permission='BOOT_VOLUME_BACKUP_CREATE', request.region='fra'}

To restrict some source regions to specific destination regions while enabling all destination regions for other source regions
In this example, the following is enabled for the group:
• Manage volume backups in all regions.
• Copy volume backups from the US West (Phoenix) and US East (Ashburn) regions to any destination regions.
• Copy volume backups from the Germany Central (Frankfurt) and UK South (London) regions only to the
Germany Central (Frankfurt) or UK South (London) regions.

Allow group MyTestGroup to read boot-volume-backups in tenancy where all {request.region='lhr'}
Allow group MyTestGroup to manage boot-volume-backups in tenancy where any {request.permission!='BOOT_VOLUME_BACKUP_COPY'}
Allow group MyTestGroup to use boot-volume-backups in tenancy where all {request.permission='BOOT_VOLUME_BACKUP_COPY', any {request.region='lhr', request.region='fra'}, any {target.region='fra', target.region='lhr'}}
Allow group MyTestGroup to use boot-volume-backups in tenancy where all {request.permission='BOOT_VOLUME_BACKUP_COPY', any {request.region='phx', request.region='iad'}}

If you're new to policies, see Getting Started with Policies on page 2143 and Common Policies on page 2150. For
reference material about writing policies for instances, cloud networks, or other Core Services API resources, see
Details for the Core Services on page 2192.
Using the Console
1. Open the navigation menu. Under Core Infrastructure, go to Compute and click Boot Volume Backups.
A list of the boot volume backups in the compartment you're viewing is displayed. If you don’t see the one you're
looking for, make sure you’re viewing the correct compartment (select from the list on the left side of the page).
2. Click the Actions icon (three dots) for the boot volume backup you want to copy to another region.
3. Click Copy to Another Region.
4. Enter a name for the backup and choose the region to copy the backup to. Avoid entering confidential information.
5. In the Encryption section select whether you want the boot volume backup to use the Oracle-provided encryption
key or your own Vault encryption key. If you select the option to use your own key, paste the OCID for the
encryption key from the destination region.
6. Click Copy Boot Volume Backup.
7. Confirm that the source and destination region details are correct in the confirmation dialog and then click OK.
Using the API
To copy a boot volume backup to another region, use the following operation:
• CopyBootVolumeBackup
For information about using the API and signing requests, see REST APIs on page 4409 and Security Credentials on
page 181. For information about SDKs, see Software Development Kits and Command Line Interface on page 4262.
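A minimal sketch of CopyBootVolumeBackup with the OCI Python SDK follows; the backup OCID and destination region are placeholders, and kms_key_id is only needed if the copied backup should use your own Vault key in the destination region.

import oci

config = oci.config.from_file()
blockstorage = oci.core.BlockstorageClient(config)

copy_details = oci.core.models.CopyBootVolumeBackupDetails(
    destination_region="eu-frankfurt-1",      # placeholder destination region
    display_name="copied-boot-volume-backup",
    # kms_key_id="ocid1.key.oc1..exampleuniqueID",  # optional Vault key in the destination region
)

copied = blockstorage.copy_boot_volume_backup(
    "ocid1.bootvolumebackup.oc1..exampleuniqueID",  # placeholder source backup OCID
    copy_details,
).data
print(copied.id, copied.lifecycle_state)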
Next Steps
After copying the boot volume backup, switch to the destination region in the Console and verify that the copied
backup appears in the list of boot volume backups for that region. You can then restore the backup using the steps in
Restoring a Boot Volume on page 622.
For more information about backups, see Overview of Boot Volume Backups on page 619.

Cloning a Boot Volume


You can create a clone from a boot volume using the Oracle Cloud Infrastructure Block Volume service. Cloning
enables you to make a copy of an existing boot volume without needing to go through the backup and restore process.
For more information about the Block Volume service, see Overview of Block Volume on page 504 and the Block
Volume FAQ.
A boot volume clone is a point-in-time direct disk-to-disk deep copy of the source boot volume, so all the data that is
in the source boot volume when the clone is created is copied to the boot volume clone. Any subsequent changes to
the data on the source boot volume are not copied to the boot volume clone. Since the clone is a copy of the source
boot volume it will be the same size as the source boot volume unless you specify a larger volume size when you
create the clone.
The clone operation occurs immediately and you can use the cloned boot volume as soon as the state changes to
available.
There is a single point-in-time reference for a source boot volume while it is being cloned, so if you clone a boot
volume while the associated instance is running, you need to wait for the first clone operation to complete from the
source before creating additional clones. You also need to wait for any backup operations to complete.


You can only create a clone for a boot volume within the same region, availability domain, and tenant. You can
create a clone for a boot volume between compartments as long as you have the required access permissions for the
operation.
Differences Between Boot Volume Backups and Clones
Consider the following criteria when you decide whether to create a backup or a clone of a volume.

Volume backup:
• Description: Creates a point-in-time backup of data on a volume. You can restore multiple new volumes from the backup later in the future.
• Use case: Retain a backup of the data in a volume, so that you can duplicate an environment later or preserve the data for future use. Meet compliance and regulatory requirements, because the data in a backup remains unchanged over time. Support business continuity requirements. Reduce the risk of outages or data mutation over time.
• Speed: Slower (minutes or hours)
• Cost: Lower cost
• Storage location: Object Storage
• Retention policy: Policy-based backups expire; manual backups do not expire
• Volume groups: Supported. You can back up a volume group.

Volume clone:
• Description: Creates a single point-in-time copy of a volume without having to go through the backup and restore process.
• Use case: Rapidly duplicate an existing environment. For example, you can use a clone to test configuration changes without impacting your production environment.
• Speed: Faster (seconds)
• Cost: Higher cost
• Storage location: Block Volume
• Retention policy: No expiration
• Volume groups: Supported. You can clone a volume group.

For more information about boot volume backups, see Overview of Boot Volume Backups on page 619 and
Backing Up a Boot Volume on page 621.
Using the Console
1. Open the navigation menu. Under Core Infrastructure, go to Compute and click Boot Volumes.
2. In the Boot Volumes list, click the boot volume that you want to clone.
3. In Resources, click Boot Volume Clones.
4. Click Create Clone.
5. Specify a name for the clone. Avoid entering confidential information.
6. If you want to clone the boot volume to a larger size volume, select Custom Boot Volume Size (GB) and then
specify the new size. You can only increase the size of the volume; you cannot decrease the size. If you clone the
boot volume to a larger size volume, you need to extend the volume's partition. See Extending the Partition for a
Boot Volume on page 615 for more information.
7. Click Create Clone.
The boot volume is ready to use when its icon lists it as AVAILABLE in the Boot Volumes list.


Using the API


For information about using the API and signing requests, see REST APIs on page 4409 and Security Credentials on
page 181. For information about SDKs, see Software Development Kits and Command Line Interface on page 4262.
To create a clone from a boot volume, use the CreateBootVolume operation and specify
BootVolumeSourceFromBootVolumeDetails for CreateBootVolumeDetails.
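A minimal sketch of that call with the OCI Python SDK follows; the OCIDs and availability domain are placeholders, and the availability domain must match the source boot volume's.

import oci

config = oci.config.from_file()
blockstorage = oci.core.BlockstorageClient(config)

clone_details = oci.core.models.CreateBootVolumeDetails(
    availability_domain="Uocm:PHX-AD-1",                      # placeholder; must match the source
    compartment_id="ocid1.compartment.oc1..exampleuniqueID",  # placeholder
    display_name="cloned-boot-volume",
    source_details=oci.core.models.BootVolumeSourceFromBootVolumeDetails(
        id="ocid1.bootvolume.oc1..exampleuniqueID"            # placeholder source boot volume OCID
    ),
)

clone = blockstorage.create_boot_volume(clone_details).data
# Use the clone once its lifecycle_state changes to AVAILABLE.
print(clone.id, clone.lifecycle_state)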
Next Steps
After you have cloned a boot volume, you can:
• Use the boot volume to create an instance. For more information, see Creating an Instance on page 700.
• Attach the boot volume to an instance as a data volume. For more information, see Attaching a Volume on page
521.
Making a boot volume clone while an instance is running creates a crash-consistent clone, meaning the data is in
the identical state it was in at the time the clone was made. This is the same state it would be in the case of a loss of
power or hard crash. In most cases you can use the cloned boot volume to create an instance; however, to ensure a
bootable image, you should create a custom image from your instance. For information about creating custom images,
see Managing Custom Images on page 670.

Detaching a Boot Volume


If you think a boot volume issue is causing a compute instance problem, you can stop the instance and detach the
boot volume using the steps described in this topic. Then you can attach it to another instance as a data volume to
troubleshoot it.
Required IAM Policy
To use Oracle Cloud Infrastructure, you must be granted security access in a policy by an administrator. This access
is required whether you're using the Console or the REST API with an SDK, CLI, or other tool. If you get a message
that you don’t have permission or are unauthorized, verify with your administrator what type of access you have and
which compartment you should work in.
For administrators: The policy in Let users launch compute instances on page 2151 includes the ability to attach and
detach existing block volumes. The policy in Let volume admins manage block volumes, backups, and volume groups
on page 2154 lets the specified group do everything with block volumes and backups, but not launch instances.
If you're new to policies, see Getting Started with Policies on page 2143 and Common Policies on page 2150. For
reference material about writing policies for instances, cloud networks, or other Core Services API resources, see
Details for the Core Services on page 2192.
Using the Console
You can detach a boot volume from an instance only when the instance is stopped. See Stopping and Starting an
Instance on page 785 for information about managing an instance's state.
1. Open the navigation menu. Under Core Infrastructure, go to Compute and click Instances.
2. Choose your Compartment.
3. Click the instance that you want to detach the boot volume from.
4. Under Resources, click Boot Volume.
5. Click the Actions icon (three dots) for the boot volume, and then click Detach Boot Volume. Confirm when
prompted.
You can now attach the boot volume to another instance. For more information, see Attaching a Volume on page 521.
Using the API
To delete an attachment, use the following operation:
• DetachBootVolume
For information about using the API and signing requests, see REST APIs on page 4409 and Security Credentials on
page 181. For information about SDKs, see Software Development Kits and Command Line Interface on page 4262.
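A minimal sketch of DetachBootVolume with the OCI Python SDK follows; the attachment OCID is a placeholder, and the instance must already be stopped.

import oci

config = oci.config.from_file()
compute = oci.core.ComputeClient(config)

# Placeholder OCID of the boot volume attachment (from ListBootVolumeAttachments),
# not of the boot volume itself.
attachment_id = "ocid1..exampleBootVolumeAttachmentID"

# The instance must be stopped before the boot volume can be detached.
compute.detach_boot_volume(attachment_id)

# Poll the attachment until its lifecycle_state reports DETACHED.
print(compute.get_boot_volume_attachment(attachment_id).data.lifecycle_state)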


Deleting a Boot Volume


When you terminate an instance, you choose to delete or preserve the associated boot volume. For more information,
see Terminating an Instance on page 789. You can also delete a boot volume if it has been detached from the
associated instance. See Detaching a Boot Volume on page 627 for how to detach a boot volume.
Caution:

You cannot undo this operation. Any data on a volume will be permanently
deleted once the volume is deleted. You will also not be able to restart the
associated instance.
Required IAM Policy
To use Oracle Cloud Infrastructure, you must be granted security access in a policy by an administrator. This access
is required whether you're using the Console or the REST API with an SDK, CLI, or other tool. If you get a message
that you don’t have permission or are unauthorized, verify with your administrator what type of access you have and
which compartment you should work in.
For administrators: The policy in Let volume admins manage block volumes, backups, and volume groups on page
2154 lets the specified group do everything with block volumes and backups.
If you're new to policies, see Getting Started with Policies on page 2143 and Common Policies on page 2150. For
reference material about writing policies for instances, cloud networks, or other Core Services API resources, see
Details for the Core Services on page 2192.
Using the Console
1. Open the navigation menu. Under Core Infrastructure, go to Compute and click Boot Volumes.
2. Choose your Compartment.
3. In the Boot Volumes list, find the volume you want to delete.
4. Click the Actions icon (three dots) for the boot volume.
5. Click Terminate and confirm the selection when prompted.
Using the API
Use the DeleteBootVolume operation to delete a boot volume.
For information about using the API and signing requests, see REST APIs on page 4409 and Security Credentials on
page 181. For information about SDKs, see Software Development Kits and Command Line Interface on page 4262.
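A minimal sketch of DeleteBootVolume with the OCI Python SDK follows; the OCID is a placeholder, and the boot volume must already be detached from its instance.

import oci

config = oci.config.from_file()
blockstorage = oci.core.BlockstorageClient(config)

boot_volume_id = "ocid1.bootvolume.oc1..exampleuniqueID"  # placeholder

# Irreversible: all data on the boot volume is permanently deleted.
blockstorage.delete_boot_volume(boot_volume_id)

# The volume's lifecycle_state moves to TERMINATING and then TERMINATED.
print(blockstorage.get_boot_volume(boot_volume_id).data.lifecycle_state)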

Boot Volume Metrics


You can monitor the health, capacity, and performance of your Compute instances by using metrics, alarms, and
notifications.
The Block Volume service provides a set of metrics that apply to both boot volumes and block volumes. For more
information, see Block Volume Metrics on page 589.

Recovering a Corrupted Boot Volume for Linux-Based Instances


If your instance fails to boot successfully or boots with the boot volume set to read-only access, the instance's boot
volume may be corrupted. While it is rare, boot volume corruption can occur in the following scenarios:
• When an instance experiences a forced shutdown using the API.
• When an instance experiences a system hang due to an operating system or software error and a graceful reboot or
shutdown of the instance times out, and then a forced shutdown occurs.
• When an error or outage occurs in the underlying infrastructure and there were critical disk writes pending in the
system.


Important:

In most cases a simple reboot will resolve boot volume corruption issues, so
this is the first action you should take when troubleshooting this.
This topic describes how to determine if your Linux-based instance's boot volume is corrupted and what steps to take
to troubleshoot and recover the corrupted boot volume. For Windows instances, see Recovering a Corrupted Boot
Volume for Windows Instances on page 632.
Detecting Boot Volume Corruption
Boot volume corruption can prevent an instance from booting successfully, so you may not be able to connect to the
instance using SSH. Instead, you can use the instance console connection feature to connect to the malfunctioning
instance. For more information about using this feature, see Troubleshooting Instances Using Instance Console
Connections on page 819.
This section describes how to use a serial console connection to detect if boot volume corruption has occurred.
Tip:

If you have already confirmed your instance's boot volume is corrupted or if you are using an imported custom
image, proceed to the Recovering the Boot Volume on page 630 section, which describes how to use a second
instance along with standard file system tools to both detect and repair boot volume corruption.
1. Create a serial console connection for the instance.
2. Connect to the instance through serial console.
At this point, it's normal for the serial console to appear to hang, as the system may have already crashed.
3. Reboot the instance from the Console.
4. Once the reboot process starts, switch back to the terminal window, and you should see system messages from the
instance start to appear in the window.
5. Monitor the messages that appear as the system is starting up. Most operating systems will set the boot volume to
read-only as soon as disk corruption is detected to prevent writes from further corrupting the volume, so look for
messages that indicate the boot volume is in read-only mode. Following are some examples:
• On an instance with iSCSI-attached boot volumes, the iscsiadm service will fail to attach a volume because
the volume is in read-only mode. This will typically prevent instances from continuing to boot. The serial
console may display a message similar to the following:

iscsiadm: Maybe you are not root?
iscsiadm: Could not lock discovery DB: /var/lock/iscsi/lock.write: Read-only file system
touch: cannot touch `/var/lock/subsys/iscsid': Read-only file system
touch: cannot touch `/var/lock/subsys/iscsi': Read-only file system
• On an instance with paravirtualized-attached boot volumes, the system may continue the boot process, but will
be in a degraded state because nothing can be written to the boot drive. The serial console may display error
messages similar to the following:

[FAILED] Failed to start Create Volatile Files and Directories.
See 'systemctl status systemd-tmpfiles-setup.service' for details.
...
[ 27.160070] cloud-init[819]: os.chmod(path, real_mode)
[ 27.166027] cloud-init[819]: OSError: [Errno 30] Read-only file system: '/var/lib/cloud/data'

The error messages and system behavior described here are the most commonly seen for boot volume corruption;
however, depending on the operating system, you may see different error messages and system behavior. If
you don't see the ones described here, consult the documentation for your operating system for additional
troubleshooting information.


Recovering the Boot Volume


To troubleshoot and recover the corrupted boot volume, you need to detach the boot volume from the instance and
then attach the boot volume to a second instance as a data volume.

Detaching the Boot Volume


If you have detected that your instance's boot volume is corrupted, you need to detach the boot volume from the
instance before you can begin troubleshooting and recovery steps.
1. Stop the instance.
2. Detach the boot volume from the instance.

Attaching the Boot Volume as a Data Volume to a Second Instance


For the second instance, we recommend that you use an instance running an operating system that most closely
matches the operating system for the boot volume's instance. You should only attach boot volumes for Linux-based
instances to other Linux-based instances. The second instance must be in the same availability domain and region as
the boot volume's instance. If no existing instance is available, create a new Linux instance using the steps described
in Creating an Instance on page 700. Once you have the second instance, make sure you can log into the instance
and that it is functional before proceeding with the recovery steps. For steps to access the instance, see Connecting
to a Linux Instance on page 740. After you have confirmed that the instance is functional, perform the following
steps.
1. Run the lsblk command and make note of the drives that are currently on the instance, for example:

lsblk
NAME     MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda        8:0    0 46.6G  0 disk
├─sda2     8:2    0    8G  0 part [SWAP]
├─sda3     8:3    0 38.4G  0 part /
└─sda1     8:1    0  200M  0 part /boot/efi
2. Attach the boot volume to the second instance as a data volume. For more information, see Attaching a Volume
on page 521.
To attach the boot volume as a data volume
a. Open the navigation menu. Under Core Infrastructure, go to Compute and click Instances.
b. Click the instance that you want to attach a volume to.
c. Under Resources, click Attached Block Volumes.
d. Click Attach Block Volume.
e. Select the volume attachment type. If Paravirtualized attachments are available for this instance, we
recommend that you select this attachment type for this procedure.
If you select iSCSI as the volume attachment type, you need to connect to the volume, see Connecting to a
Volume on page 528 for more information.
f. In the Block Volume Compartment drop-down list, select the compartment.
g. Choose the Select Volume option and then select the volume from the Boot Volume section of the Block
Volume drop-down list.
h. Select Read/Write as the access type.
i. Click Attach.
When the volume's icon no longer lists it as Attaching, proceed with the next steps.
3. Run the lsblk command again to confirm that the boot volume now shows up as a volume attached to the
instance. In this sample output for the lsblk, the boot volume attached as a data volume shows up as sdb:

lsblk
NAME     MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sdb        8:16   0 46.6G  0 disk
├─sdb2     8:18   0    8G  0 part
├─sdb3     8:19   0 38.4G  0 part
└─sdb1     8:17   0  200M  0 part
sda        8:0    0 46.6G  0 disk
├─sda2     8:2    0    8G  0 part [SWAP]
├─sda3     8:3    0 38.4G  0 part /
└─sda1     8:1    0  200M  0 part /boot/efi
4. Run the fsck command on the volume's root partition. The root partition is usually the largest partition on the
volume.
The following sample for the fsck command shows the output when there are no errors or corruption present on
the partitions for an Oracle 7.6 instance:

sudo fsck -V /dev/sdb1


fsck from util-linux 2.23.2
[/sbin/fsck.vfat (1) -- /boot/efi] fsck.vfat /dev/sdb1
fsck.fat 3.0.20 (12 Jun 2013)
/dev/sdb1: 17 files, 2466/51145 clusters

sudo fsck -V /dev/sdb2


fsck from util-linux 2.23.2

sudo fsck -V /dev/sdb3


fsck from util-linux 2.23.2
[/sbin/fsck.xfs (1) -- /] fsck.xfs /dev/sdb3
If you wish to check the consistency of an XFS filesystem or
repair a damaged filesystem, see xfs_repair(8).

If errors are present on a partition, you will usually be prompted to repair the errors. Following is an example of
an interactive repair session of a corrupt ext4 boot volume for an Ubuntu instance:

sudo fsck -V /dev/sdb1


fsck from util-linux 2.31.1
[/sbin/fsck.ext4 (1) -- /] fsck.ext4 /dev/sdb1
e2fsck 1.44.1 (24-Mar-2018)
One or more block group descriptor checksums are invalid. Fix<y> yes
Group descriptor 92 checksum is 0xe9a1, should be 0x1f53. FIXED.
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure

Pass 3: Checking directory connectivity


Pass 4: Checking reference counts
Pass 5: Checking group summary information
Block bitmap differences: Group 92 block bitmap does not match checksum.
FIXED.

cloudimg-rootfs: ***** FILE SYSTEM WAS MODIFIED *****


cloudimg-rootfs: 75336/5999616 files (0.1% non-contiguous),
798678/12181243 blocks

Note:

XFS file systems will usually auto-repair their contents when the system
boots up, fixing any corruption during the boot process. You can use the
xfs_repair command to force a repair for scenarios where boot volume
corruption is preventing the auto-repair functionality from working, as
shown in the following example:

sudo xfs_repair /dev/sdb3

Phase 1 - find and verify superblock...
Phase 2 - using internal log
- zero log...
- scan filesystem freespace and inode maps...
...
Phase 7 - verify and correct link counts...
done

Recovering a Corrupted Boot Volume for Windows Instances


If your instance fails to boot successfully or boots with the boot volume set to read-only access, the instance's boot
volume may be corrupted. While it is rare, boot volume corruption can occur in the following scenarios:
• When an instance experiences a forced shutdown using the API.
• When an instance experiences a system hang due to an operating system or software error and a graceful reboot or
shutdown of the instance times out, and then a forced shutdown occurs.
• When an error or outage occurs in the underlying infrastructure and there were critical disk writes pending in the
system.
Important:

In most cases a simple reboot will resolve boot volume corruption issues, so
this is the first action you should take when troubleshooting this.
This topic describes how to determine if your Windows instance's boot volume is corrupted and what steps to take
to troubleshoot and recover the corrupted boot volume. For Linux-based instances, see Recovering a Corrupted Boot
Volume for Linux-Based Instances on page 628.
Detecting Boot Volume Corruption
When Windows operating systems detect boot volume corruption, the instance is usually able to recover from it
by automatically repairing the file system. You can use a VNC console connection to verify that the instance isn't
experiencing a system hang while repairing the file system, or to detect if there are other issues. VNC console
connections enable you to see what's displayed through the VGA port. For more information about the VNC console,
see Troubleshooting Instances Using Instance Console Connections on page 819.
Important:

VNC console connections only work for virtual machine (VM) instances
launched on October 13, 2017 or later, and bare metal instances launched on
February 21, 2019 or later. If your instance does not support VNC console
connections, proceed to Recovering the Boot Volume on page 633.
1. Create a VNC console connection for the instance.
2. Connect to the instance through VNC console.
Check what is displayed in the VNC console to see if the instance is stuck in the boot process or if it is in the
recovery partition.
Tip:

For Windows Server 2012 and later versions, if the instance has booted
into the recovery partition it may be possible to directly perform the steps
to recover the boot volume in the recovery partition.
Detaching the Boot Volume
If you have detected that your instance's boot volume is corrupted, you need to detach the boot volume from the
instance before you can begin troubleshooting and recovery steps.
1. Stop the instance.
2. Detach the boot volume from the instance.


Recovering the Boot Volume


To troubleshoot and recover the corrupted boot volume, you need to attach the boot volume to a second instance as a
data volume. For the second instance, we recommend that you use an instance running an operating system that most
closely matches the operating system for the boot volume's instance, and you should only attach boot volumes for
Windows instances to other Windows instances. The second instance must be in the same availability domain and
region as the boot volume's instance. If no existing instance is available, create a new Windows instance using the
steps described in Creating an Instance on page 700.
Once you have the second instance, make sure you can log in to the instance and that it is functional before
proceeding with the recovery steps. After you have confirmed that the instance is functional, perform the following
steps.
1. Attach the boot volume to the second instance as a data volume. For more information, see Attaching a Volume
on page 521.
To attach the boot volume as a data volume
a. Open the navigation menu. Under Core Infrastructure, go to Compute and click Instances.
b. Click the instance that you want to attach a volume to.
c. Under Resources, click Attached Block Volumes.
d. Click Attach Block Volume.
e. Select iSCSI for the volume attachment type.
f. In the Block Volume Compartment drop-down list, select the compartment.
g. Choose the Select Volume option and then select the volume from the Boot Volume section of the Block
Volume drop-down list.
h. Select Read/Write as the access type.
i. Click Attach.
When the volume's icon no longer lists it as Attaching, proceed with the next steps.
2. Connect to the second instance; see Connecting to a Windows Instance on page 741 for more information.
3. Connect to the volume; see Connecting to a Volume on a Windows Instance on page 530 for more information.
Since you are attaching a boot volume as a data volume, you must also run the Connect-IscsiTarget command and
set IsMultipathEnabled to $True. For example:

Set-Service -Name msiscsi -StartupType Automatic
Start-Service msiscsi
New-IscsiTargetPortal -TargetPortalAddress 169.254.2.4
Connect-IscsiTarget -NodeAddress iqn.2015-02.oracle.boot:uefi -TargetPortalAddress 169.254.2.4 -IsPersistent $True -IsMultipathEnabled $True
4. Open Computer Management and navigate to Storage, and then Disk Management.
5. Select the new disk and mark it Online.
6. Click This PC and then right-click on the new disk and select Properties.
7. Navigate to Tools, Error Checking, and then Check.
8. Select Scan Drive and fix issues as they come up.
9. Mark the new disk Offline.
10. Open the iSCSI Initiator with administrator privileges.
11. In Favorite Targets, remove the iSCSI target of the attached volume.

Oracle-Provided Images
An image is a template of a virtual hard drive. The image determines the operating system and other software for an
instance. The following table lists the platform images that are available in Oracle Cloud Infrastructure. For specific


image and kernel version details, along with changes between versions, see the Oracle-Provided Image Release
Notes.

Image: Oracle Autonomous Linux 7 Unbreakable Enterprise Kernel Release 6
Name: Autonomous-Oracle-Linux-7.x-<date>-<number>
Description: Oracle Autonomous Linux provides autonomous capabilities such as automated patching with zero downtime, and known exploit detection, to help keep the operating system highly secure and reliable. Oracle Autonomous Linux is based on Oracle Linux. GPU shapes are supported with this image.

Image: Oracle Linux 8 Unbreakable Enterprise Kernel Release 6
Name: Oracle-Linux-8.x-<date>-<number>
Description: The Unbreakable Enterprise Kernel (UEK) is Oracle's optimized operating system kernel for demanding Oracle workloads. GPU shapes are supported with this image.

Image: Oracle Linux 7 Unbreakable Enterprise Kernel Release 6
Name: Oracle-Linux-7.x-<date>-<number>
Description: The Unbreakable Enterprise Kernel (UEK) is Oracle's optimized operating system kernel for demanding Oracle workloads. GPU shapes are supported with this image.

Image: Oracle Linux 6 Unbreakable Enterprise Kernel Release 4
Name: Oracle-Linux-6.x-<date>-<number>
Description: The Unbreakable Enterprise Kernel (UEK) is Oracle's optimized operating system kernel for demanding Oracle workloads.

Image: CentOS 8
Name: CentOS-8-<date>-<number>
Description: CentOS is a free, open-source Linux distribution that is suitable for use in enterprise cloud environments. For more information, see https://www.centos.org/. GPU shapes are supported with this image. Tip: Looking for an alternative to CentOS 8? Oracle Linux is a great option. For details and a script to switch from CentOS to Oracle Linux, read our blog post.

Image: CentOS 7
Name: CentOS-7-<date>-<number>
Description: CentOS is a free, open-source Linux distribution that is suitable for use in enterprise cloud environments. For more information, see https://www.centos.org/.

Image: Ubuntu 20.04 LTS
Name: Canonical-Ubuntu-20.04-<date>-<number>
Description: Ubuntu is a free, open-source Linux distribution that is suitable for use in the cloud. For more information, see https://www.ubuntu.com. Minimal Ubuntu is designed for automated use at scale. It uses a smaller boot volume, boots faster, and has a smaller surface for security patches than standard Ubuntu images. For more information, see https://wiki.ubuntu.com/Minimal.

Image: Ubuntu 18.04 LTS
Name: Canonical-Ubuntu-18.04-<date>-<number>
Description: Ubuntu is a free, open-source Linux distribution that is suitable for use in the cloud. For more information, see https://www.ubuntu.com. Minimal Ubuntu is designed for automated use at scale. It uses a smaller boot volume, boots faster, and has a smaller surface for security patches than standard Ubuntu images. For more information, see https://wiki.ubuntu.com/Minimal. GPU shapes are supported with this image. You must install the appropriate GPU drivers from NVIDIA.

Image: Ubuntu 16.04 LTS
Name: Canonical-Ubuntu-16.04-<date>-<number>
Description: Ubuntu is a free, open-source Linux distribution that is suitable for use in the cloud. For more information, see https://www.ubuntu.com. Minimal Ubuntu is designed for automated use at scale. It uses a smaller boot volume, boots faster, and has a smaller surface for security patches than standard Ubuntu images. For more information, see https://wiki.ubuntu.com/Minimal. GPU shapes are supported with this image. For Minimal Ubuntu, you must install the appropriate GPU drivers from NVIDIA.

Image: Windows Server 2019
Name: Windows-Server-2019-<edition>-Gen2.<date>-<number>
Description: Windows Server 2019 supports running production Windows workloads on Oracle Cloud Infrastructure. GPU shapes are supported with this image. You must install the appropriate GPU drivers from NVIDIA.

Image: Windows Server 2016
Name: Windows-Server-2016-<edition>-Gen2.<date>-<number>
Description: Windows Server 2016 supports running production Windows workloads on Oracle Cloud Infrastructure. GPU shapes are supported with this image. You must install the appropriate GPU drivers from NVIDIA.

Image: Windows Server 2012 R2
Name: Windows-Server-2012-R2-<edition>-<gen>-<date>-<number>
Description: Windows Server 2012 R2 supports running production Windows workloads on Oracle Cloud Infrastructure. GPU shapes are supported with this image. You must install the GPU drivers from NVIDIA.

You can also create custom images of your boot disk OS and software configuration to use when launching new instances.

Essential Firewall Rules


All Oracle-provided images include rules that allow only "root" on Linux instances or "Administrators" on
Windows Server instances to make outgoing connections to the iSCSI network endpoints (169.254.0.2:3260,
169.254.2.0/24:3260) that serve the instance's boot and block volumes.
• We recommend that you do not reconfigure the firewall on your instance to remove these rules. Removing these
rules allows non-root users or non-administrators to access the instance’s boot disk volume.
• We recommend that you do not create custom images without these rules unless you understand the security risks.
• Running Uncomplicated Firewall (UFW) on Ubuntu images might cause issues with these rules. Because of this,
we recommend that you do not enable UFW on your instances. See Ubuntu instance fails to reboot after enabling
Uncomplicated Firewall (UFW) for more information.
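
If you want to confirm that these rules are in place without changing them, you can inspect the firewall configuration on a Linux instance. The following is a minimal sketch using iptables; the exact chain names and rule text vary by image and firewall tooling.

# List the active firewall rules and show only the entries that reference
# the iSCSI port (3260) used by the boot and block volume endpoints.
sudo iptables -S | grep 3260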

User Data
Oracle-provided images give you the ability to run custom scripts or supply custom metadata when the instance
launches. To do this, you specify a custom user data script in the Initialization Script field when you create the
instance. For more information about startup scripts, see cloud-init for Linux-based images and cloudbase-init for
Windows-based images.
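
As a minimal sketch (the contents are illustrative only), a user data script for a Linux-based image can be a plain shell script that cloud-init runs on first boot:

#!/bin/bash
# Example user data script: runs once, on the first boot of the instance.
# Writes a marker file and installs a package (assumes a yum-based image).
echo "instance bootstrapped on $(date)" > /var/tmp/bootstrap-marker.txt
yum install -y git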

OS Updates for Linux Images


Oracle Linux and CentOS images are preconfigured to let you install and update packages from the repositories on
the Oracle public yum server. The repository configuration file is in the /etc/yum.repos.d directory on your
instance. You can install, update, and remove packages by using the yum utility.
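
For example, typical package operations on these images look like the following (the package name is a placeholder):

sudo yum install -y <package>
sudo yum update -y <package>
sudo yum remove -y <package>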

On Oracle Autonomous Linux images, Oracle Ksplice is installed and configured by default to run automatic updates.
Note: OS Security Updates for Oracle Linux and CentOS images
After you launch an instance using Oracle Linux or CentOS images, you are responsible for applying the required OS security updates published through the Oracle public yum server. For more information, see Installing and Using the Yum Security Plugin.

The Ubuntu image is preconfigured with suitable repositories to allow you to install, update, and remove packages.

Note: OS Security Updates for the Ubuntu image
After you launch an instance using the Ubuntu image, you are responsible for applying the required OS security updates using the sudo apt-get upgrade command.

Linux Kernel Updates


Oracle Linux images on Oracle Cloud Infrastructure include Oracle Linux Premier Support at no extra cost. This
gives you all the services included with Premier Support, including Oracle Ksplice. Ksplice enables you to apply
important security and other critical kernel updates without a reboot. For more information, see About Oracle Ksplice
and Ksplice Overview.
Ksplice is available for Linux instances launched on or after February 15, 2017. For instances launched before August
25, 2017, you must install Ksplice before running it. See Installing and Running Oracle Ksplice on page 669 for
more information.
Note:

Ksplice Support
Oracle Ksplice is not supported for CentOS and Ubuntu images, or on Linux
images launched before February 15, 2017.

Configuring Automatic Package Updating on Instance Launch


You can configure your instance to automatically update to the latest package versions when the instance first
launches using a cloud-init startup script. To do this, add the following code to the startup script:

package_upgrade: true

The upgrade process starts when the instance launches and runs in the background until it completes. To verify that it
completed successfully, check the cloud-init logs in /var/log.
See User Data on page 636 and Cloud config examples - Run apt or yum upgrade for more information.
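
As a sketch, a complete cloud-config startup script that contains only this directive looks like the following:

#cloud-config
package_upgrade: true

To verify the result afterward, you can search the cloud-init logs; the exact log file names vary by image, but the following is a reasonable starting point:

sudo grep -i upgrade /var/log/cloud-init.log /var/log/cloud-init-output.log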

Linux Image Details


See Lifetime Support Policy: Coverage for Oracle Linux and Oracle VM for details about the Oracle Linux support
policy.

Users
For instances created using Oracle Linux and CentOS images, the username opc is created automatically. The opc
user has sudo privileges and is configured for remote access over the SSH v2 protocol using RSA keys. The SSH
public keys that you specify while creating instances are added to the /home/opc/.ssh/authorized_keys
file.

For instances created using the Ubuntu image, the username ubuntu is created automatically. The ubuntu user has
sudo privileges and is configured for remote access over the SSH v2 protocol using RSA keys. The SSH public keys
that you specify while creating instances are added to the /home/ubuntu/.ssh/authorized_keys file.
Note that root login is disabled.
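
For example, to connect from a Linux or macOS client, you would typically run a command like the following (the key path and IP address are placeholders):

ssh -i ~/.ssh/my_key opc@<public_ip_address>      # Oracle Linux and CentOS images
ssh -i ~/.ssh/my_key ubuntu@<public_ip_address>   # Ubuntu images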

Remote Access
Access to the instance is permitted only over the SSH v2 protocol. All other remote access services are disabled.

Firewall Rules
Instances created using Oracle-provided images have a default set of firewall rules that allow only SSH access.
Instance owners can modify those rules as needed, but must not restrict link local traffic to address 169.254.0.2 in
accordance with the warning in Essential Firewall Rules on page 636.
Be aware that the Networking service uses network security groups and security lists to control packet-level traffic
in and out of the instance. When troubleshooting access to an instance, make sure all of the following items are set
correctly: the network security groups that the instance is in, the security lists associated with the instance's subnet,
and the instance's firewall rules.

Disk Partitions
Starting with Oracle Linux 8.x, the main disk partition is managed using Logical Volume Management (LVM). This
gives you increased flexibility to create and resize partitions to suit your workloads. In addition, there is no dedicated
swap partition. Swap is now handled by a file on the file system, giving you more detailed control over swap.

Cloud-init Compatibility
Instances created using Oracle-provided images are compatible with cloud-init. When launching an instance with
the Core Services API, you can pass cloud-init directives with the metadata parameter. For more information, see
LaunchInstance.
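
The cloud-init user data passed in the metadata parameter must be base64-encoded under the user_data key. The following is a minimal sketch (the file name is a placeholder) of preparing that value with GNU coreutils:

# Base64-encode a cloud-init script for the "user_data" metadata key.
USER_DATA=$(base64 -w 0 cloud-init.sh)
echo "{\"user_data\": \"$USER_DATA\"}" > metadata.json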

Oracle Autonomous Linux


Important:
Beginning with the December 2020 Oracle Autonomous Linux platform image, the image is based on Unbreakable Enterprise Kernel (UEK) 6 and is configured to use the standard Oracle Linux yum repositories. The Autonomous Linux repository (al7) is deprecated, and all customers with existing Oracle Autonomous Linux instances are migrated to the new repositories automatically.
The following repositories are enabled by default beginning with the December 2020 Oracle Autonomous Linux
platform image:
• ol7_UEKR6
• ol7_addons
• ol7_ksplice
• ol7_latest
• ol7_oci_included
• ol7_optional_latest
• ol7_software_collections
• ol7_x86_64_userspace_ksplice
The image includes the release packages for the ol7_developer and ol7_developer_EPEL repositories, but
these repositories are disabled by default.

For existing Oracle Autonomous Linux instances, after yum migration, the ol7_developer and
ol7_developer_EPEL repositories are not available. If you need packages from these repositories, install the
appropriate release package to obtain the correct repository configuration, and then enable the repository. Install the
release packages by using the following commands:

sudo yum install oraclelinux-developer-release-el7


sudo yum install oracle-epel-release-el7
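
After the release packages are installed, you can enable the repositories, for example by using yum-config-manager (the same tool shown later in this topic):

sudo yum-config-manager --enable ol7_developer
sudo yum-config-manager --enable ol7_developer_EPEL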

Note:
Packages found in either the ol7_developer or ol7_developer_EPEL repositories are considered unsupported and are only entitled to basic installation support. Content from these repositories is not recommended for production environments and is intended for developer purposes only.
To verify that your Oracle Autonomous Linux instance has been migrated to the new repositories, use the following
command:

yum repolist

For example:

# yum repolist
Loaded plugins: langpacks, ulninfo
repo id repo name

status
ol7_UEKR6/x86_64 Latest Unbreakable
Enterprise Kernel Release 6 for Oracle Linux 7Server (x86_64)
197
ol7_addons/x86_64 Oracle Linux 7Server Add
ons (x86_64)
473
ol7_ksplice Ksplice for Oracle Linux
7Server (x86_64)
9,655
ol7_latest/x86_64 Oracle Linux 7Server
Latest (x86_64)
21,367
ol7_oci_included/x86_64 Oracle Software for OCI
users on Oracle Linux 7Server (x86_64)
680
ol7_optional_latest/x86_64 Oracle Linux 7Server
Optional Latest (x86_64)
15,491
ol7_software_collections/x86_64 Software Collection
Library release 3.0 packages for Oracle Linux 7 (x86_64)
15,375
ol7_x86_64_userspace_ksplice Ksplice aware userspace
packages for Oracle Linux 7Server (x86_64)
447
repolist: 63,685

Note:
For existing Oracle Autonomous Linux instances, after yum migration, the UEK6 repository is enabled and the next daily update installs the latest UEK6. After reboot, the instance boots into UEK6.

For more information about installing and configuring Oracle Autonomous Linux, see Getting Started: Deploying
and Configuring Oracle Autonomous Linux on Oracle Cloud Infrastructure and Oracle Autonomous Linux for Oracle
Cloud Infrastructure.
The Oracle Instant Client 18.3 basic package cannot be updated to version 19.5 because of changes in packaging. To
update Oracle Instant Client on Oracle Autonomous Linux images that were launched before March 18, 2020, you
must first manually remove Oracle Instant Client 18.3, and then install 19.5. Use the following commands:

sudo yum remove oracle-instantclient18.3-basic


sudo yum install oracle-instantclient19.5-basic

On Oracle Autonomous Linux images that were launched after March 18, 2020, Oracle Instant Client is not installed
by default. To install Oracle Instant Client 19.5, you must manually install the package. Use the following command:

sudo yum install oracle-instantclient19.5-basic

On Oracle Autonomous Linux images that were launched after December 9, 2020, the Oracle Instant Client
repository (ol7_oracle_instantclient) is not available by default. To add the repository, you must first
install the oracle-release-el7 release package and then enable the ol7_oracle_instantclient
repository. You can then install the appropriate Oracle Instant Client version package. Use the following commands:

sudo yum install oracle-release-el7


sudo yum-config-manager --enable ol7_oracle_instantclient

Oracle Autonomous Linux instances cannot be managed by the OS Management service.


For more information about using Oracle Autonomous Linux, see Known Issues.

OCI Utilities
Instances created using Oracle Linux include a preinstalled set of utilities that are designed to make it easier to work
with Oracle Linux images. These utilities consist of a service component and related command line tools.
For more information, see the OCI Utilities on page 646 reference.

Windows OS Updates for Windows Images


Windows images include the Windows Update utility, which you can run to get the latest Windows updates from
Microsoft. You must configure the instance's network security group, or the security list used by the instance's
subnet, to allow the instance to access the Windows Update servers.

Windows Image Details

Windows Editions
Depending on whether you create a bare metal instance or a virtual machine (VM) instance, different editions of
Windows Server are available as Oracle-provided images. Windows Server Standard edition is available only for
VMs. Windows Server Datacenter edition is available only for bare metal instances.

Users
For instances created using Oracle-provided Windows images, the username opc is created automatically. When you
launch an instance using a Windows image, Oracle Cloud Infrastructure generates an initial, one-time password that
you can retrieve using the Console or API. You must change this password after you first log on.

Remote Access
Access to the instance is permitted only through a Remote Desktop connection.

Firewall Rules
Instances created using the Windows image have a default set of firewall rules that allow Remote Desktop protocol
or RDP access on port 3389. Instance owners can modify these rules as needed, but must not restrict link local traffic
to 169.254.169.253 for the instance to activate with Microsoft Key Management Service (KMS). This is how the
instance stays active and licensed.
Be aware that the Networking service uses network security groups and security lists to control packet-level traffic
in and out of the instance. When troubleshooting access to an instance, make sure all of the following items are set
correctly: the network security groups that the instance is in, the security lists associated with the instance's subnet,
and the instance's firewall rules.

User Data on Windows Images


On Windows images, custom user data scripts are executed using cloudbase-init, which is the equivalent of cloud-init
on Linux-based images. All Oracle-provided Windows images on Oracle Cloud Infrastructure include cloudbase-init
installed by default. When an instance launches, cloudbase-init runs PowerShell, batch scripts, or additional user data
content. See cloudbase-init Userdata for information about supported content types.
You can use user data scripts to perform various tasks, such as:
• Enable GPU support using a custom script to install the applicable GPU driver.
• Add or update local user accounts.
• Join the instance to a domain controller.
• Install certificates into the certificate store.
• Copy any required application workload files from the Object Storage service directly to the instance.
Caution:

Do not include anything in the script that could trigger a reboot, because this
could impact the instance launch, causing it to fail. Any actions requiring a
reboot should only be performed after the instance state is RUNNING.

Windows Remote Management


Windows Remote Management (WinRM) is enabled by default on Oracle-provided Windows images. WinRM
provides you with the capability to remotely manage the operating system.
To use WinRM you need to add a stateful ingress security rule for TCP traffic on destination port 5986. You can
implement this security rule in either a network security group that the instance belongs to, or a security list that is
used by the instance's subnet.
Caution:
The following procedure allows WinRM connections from 0.0.0.0/0, which means any IP address, including public IP addresses. To allow access only from instances within the VCN, change the source CIDR value to the VCN's CIDR block. For more information, see Security Recommendations on page 3733.
To enable WinRM access
1. Open the navigation menu. Under Core Infrastructure, go to Networking and click Virtual Cloud Networks.
2. Click the VCN that you're interested in.

3. To add the rule to a network security group that the instance belongs to:
a. Under Resources, click Network Security Groups. Then click the network security group that you're
interested in.
b. Click Add Rules.
c. Enter the following values for the rule:
• Stateless: Leave the check box cleared.
• Source Type: CIDR
• Source CIDR: 0.0.0.0/0
• IP Protocol: TCP
• Source Port Range: All
• Destination Port Range: 5986
• Description: An optional description of the rule.
d. When done, click Add.
4. Or, to add the rule to a security list that is used by the instance's subnet:
a. Under Resources, click Security Lists. Then click the security list you're interested in.
b. Click Add Ingress Rules.
c. Enter the following values for the rule:
• Stateless: Leave the check box cleared.
• Source Type: CIDR
• Source CIDR: 0.0.0.0/0
• IP Protocol: TCP
• Source Port Range: All
• Destination Port Range: 5986
• Description: An optional description of the rule.
d. When done, click Add Ingress Rules.
To use WinRM on an instance
1. Get the instance's public IP address.
2. Open Windows PowerShell on the Windows client that you're using to connect to the instance.
3. Run the following command:

# Get the public IP address of your running Windows instance
$ComputerName = "<public_IP_address>"

# Store your username and password credentials (default username is opc)
$c = Get-Credential

# Session options
$opt = New-PSSessionOption -SkipCACheck -SkipCNCheck -SkipRevocationCheck

# Create a new PSSession (prerequisite: the network security group or
# security list has an ingress rule for port 5986)
$PSSession = New-PSSession -ComputerName $ComputerName -UseSSL -SessionOption $opt -Authentication Basic -Credential $c

# Connect to the instance PSSession
Enter-PSSession $PSSession

# To close the connection, use: Exit-PSSession
You can now remotely manage the Windows instance from your local PowerShell client.

Operating System Lifecycle and Support Policy


When an operating system reaches the end of its support lifecycle, the OS vendor (such as Microsoft) no longer
provides security updates for the OS. You should upgrade to the latest version to remain secure.
Here's what you should expect when an OS version reaches the end of its support lifecycle:
• Oracle Cloud Infrastructure no longer provides new images for the OS version. Images that were previously
published are deprecated, and are no longer updated.
• Although you can continue to run instances that use deprecated images, Oracle Cloud Infrastructure does not
provide any support for operating systems that have reached the end of the support lifecycle.
• If you have an instance that runs an OS version that will be deprecated, and you want to launch new instances with
this OS version after the end of support, you can create a custom image of the instance and then use the custom
image to launch new instances in the future. For custom Linux images, you must purchase extended support from
the OS vendor. For custom Windows images, see Can I purchase Microsoft Extended Security Updates for end-
of-support Windows OSs? on page 815. Oracle Cloud Infrastructure does not provide any support for custom
images that use end-of-support operating systems.
Be aware of these end-of-support dates:
• CentOS 6: Support ended on November 30, 2020.
• Ubuntu 14.04: Support ended on April 19, 2019.
• Ubuntu 16.04: Support ends in April 2021.
• Windows Server 2008 R2: Support ended on January 14, 2020.

Using NVIDIA GPU Cloud with Oracle Cloud Infrastructure


NVIDIA GPU Cloud (NGC) is a GPU-accelerated cloud platform optimized for deep learning and scientific
computing. This topic provides an overview of how to use NGC with Oracle Cloud Infrastructure.
NVIDIA provides a customized Compute image on Oracle Cloud Infrastructure that is optimized for the NVIDIA
Tesla Volta and Pascal GPUs. Running NGC containers on an instance based on this image provides optimal
performance for deep learning jobs.
Prerequisites
• An Oracle Cloud Infrastructure tenancy with a GPU quota. For more information about quotas, see Compute
Quotas on page 250.
• A cloud network to launch the instance in. For information about setting up cloud networks, see Using the
Console on page 2850 in VCNs and Subnets on page 2847.
• A key pair, to use for connecting to the instance via SSH. For information about generating a key pair, see
Managing Key Pairs on Linux Instances on page 698.
• Security group and policy configured for the File Storage service. For more information, see Managing Groups
on page 2438, Getting Started with Policies on page 2143, and Details for the File Storage Service on page
2292.
• An NGC API key for authenticating with the NGC service.
To generate your NGC API key
1. Sign in to the NGC website.
2. On the NGC Registry page, click Get API Key.
3. Click Generate API Key and then click Confirm to generate the key. If you have an existing API key, it
becomes invalid when you generate a new one.
Launching an Instance Based on the NGC Image

Using the Console


1. Open the Console. For steps, see Signing In to the Console on page 41.
2. Open the navigation menu. Under Core Infrastructure, go to Compute and click Instances.

3. Select a Compartment that you have permission to work in.


4. Click Create Instance.
5. Enter a name for the instance. Avoid entering confidential information.
6. In the Placement and hardware section, make the following selections:
a. Select the Availability Domain that you want to create the instance in.
b. In the Shape section, click Change Shape. Then, do the following:
1. For Instance type, select Virtual Machine or Bare Metal Machine.
2. Select a GPU shape for the instance. For more information about GPU shapes, see virtual machine GPU
shapes and bare metal GPU shapes.
Important:

In order to access the GPU shapes, your tenancy must have a GPU
quota. If your tenancy does not have a GPU quota, the GPU shapes
will not be in the shape list. See Prerequisites on page 643 for more
information.
3. Click Select Shape.
c. To select the NGC image, do the following:
1. Under Image, click Change Image.
Important:

In order to access the NVIDIA GPU Cloud images, your tenancy must
have a GPU quota and you must select a GPU shape.
2. In the Image source list, select Oracle Images.
3. Select the check box next to NVIDIA GPU Cloud Machine Image.
4. Review and accept the terms of use, and then click Select Image.
7. In the Networking section, leave Select existing virtual cloud network selected, and then select the virtual cloud
network (VCN) compartment, VCN, subnet compartment, and subnet.
8. In the Add SSH keys section, upload the public key portion of the key pair that you want to use for SSH access to
the instance. Browse to the key file that you want to upload, or drag and drop the file into the box.
9. Click Create.
You should now see the NGC instance with the state of Provisioning. After the state changes to Running, you can
connect to the instance. For general information about launching Compute instances, see Creating an Instance on page
700.
See the following topics for steps to access and work with the instance:
• Connecting to an Instance on page 739
• Stopping and Starting an Instance on page 785
• Terminating an Instance on page 789
When you connect to the instance using SSH, you are prompted for the NGC API key. If you supply the API key at
the prompt, the instance automatically logs you in to the NGC container registry so that you can run containers from
the registry. You can choose not to supply the API key at the prompt and still log in to the instance. You can then log
in later to the NGC container registry. See Logging in to the NGC Container Registry for more information.

Using the CLI


Oracle Cloud Infrastructure provides a Command Line Interface (CLI) (on page 4228) that you can use to complete
tasks. For more information, see Quickstart on page 4231 and Configuring the CLI on page 4238.
Use the launch command to create an instance, specifying image for sourceType and the image OCID
ocid1.image.oc1..aaaaaaaaknl6phck7e3iuii4r4axpwhenw5qtnnsk3tqppajdjzb5nhoma3q in
InstanceSourceDetails for LaunchInstanceDetails.
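
As a sketch, a launch call might look like the following; the OCIDs, availability domain, and GPU shape shown here are placeholders that you must replace with values from your own tenancy:

oci compute instance launch \
  --compartment-id ocid1.compartment.oc1..exampleuniqueID \
  --availability-domain "Uocm:PHX-AD-1" \
  --shape VM.GPU3.1 \
  --subnet-id ocid1.subnet.oc1.phx.exampleuniqueID \
  --image-id ocid1.image.oc1..aaaaaaaaknl6phck7e3iuii4r4axpwhenw5qtnnsk3tqppajdjzb5nhoma3q \
  --ssh-authorized-keys-file ~/.ssh/id_rsa.pub \
  --display-name ngc-example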

Using the File Storage Service for Persistent Data Storage


You can use the File Storage service for data storage when working with NGC. For more information, see Overview
of File Storage on page 1928. See the following tasks for creating and working with the File Storage service:
• Creating File Systems on page 1956
• Using the Console on page 1964
• Using the API on page 1987
• Managing File Systems on page 1978
• Using the Command Line Interface (CLI) on page 1984
Using the Block Volume Service for Persistent Data Storage
You can use the Block Volume service for data storage when working with NGC. For more information, see
Overview of Block Volume on page 504. See the following tasks for creating and working with the Block Volume
service:
• Creating a Volume on page 519
• Attaching a Volume on page 521
• Connecting to a Volume on page 528
You can also use the CLI to manage block volumes. For details, see the volume commands.
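
For example, listing the block volumes in a compartment looks like the following (the compartment OCID is a placeholder):

oci bv volume list --compartment-id ocid1.compartment.oc1..exampleuniqueID
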
Examples of Running Containers
You first need to log in to the NGC container registry. You can skip this step if you provided your API key when
logging in to the instance via SSH; otherwise, perform the following steps.
To log into the NGC container registry
1. Run the following Docker command:

docker login nvcr.io


2. When prompted for a username, enter $oauthtoken.
3. When prompted for a password, enter your NGC API key.
At this point you can run Docker commands and access the NGC container registry from the instance.
Example: MNIST Training Run Using PyTorch Container
This sample demonstrates running the MNIST example under PyTorch. This example downloads the MNIST dataset
from the web.
1. Pull and run the PyTorch container with the following Docker commands:

docker pull nvcr.io/nvidia/pytorch:17.10


docker run --gpus all --rm -it nvcr.io/nvidia/pytorch:17.10
2. Run the MNIST example with the following commands:

cd /opt/pytorch/examples/mnist
python main.py

Example: MNIST Training Run Using TensorFlow Container


This sample demonstrates running the MNIST example under TensorFlow. This example downloads the MNIST
dataset from the web.
1. Pull and run the TensorFlow container with the following Docker commands:

docker pull nvcr.io/nvidia/tensorflow:17.10


docker run --gpus all --rm -it nvcr.io/nvidia/tensorflow:17.10

2. Run the MNIST_with_summaries example with the following commands:

cd /opt/tensorflow/tensorflow/examples/tutorials/mnist
python mnist_with_summaries.py

OCI Utilities
Instances created using Oracle-Provided Images on page 633 based on Oracle Linux include a pre-installed
set of utilities that are designed to make it easier to work with Oracle Linux images. These utilities consist of a
service component and related command line tools that can help with managing block volumes (attach, remove, and
automatic discovery), secondary VNIC configuration, discovering the public IP address of an instance, and retrieving
instance metadata.
The OCI utilities include the following components:

ocid
The service component of oci-utils, which runs as a daemon started via systemd. This service scans for changes in the iSCSI and VNIC device configurations and caches the OCI metadata and public IP address of the instance.

oci-growfs
Expands the root filesystem of the instance to its configured size.

oci-iscsi-config
Lists or configures iSCSI devices attached to a compute instance. If no command line options are specified, lists devices that need attention.

oci-metadata
Displays metadata for the compute instance. If no command line options are specified, lists all available metadata. Metadata includes the instance OCID, display name, compartment, shape, region, availability domain, creation date, state, image, and any custom metadata that you provide, such as an SSH public key.

oci-network-config
Lists or configures virtual network interface cards (VNICs) attached to the Compute instance. When a secondary VNIC is provisioned in the cloud, it must be explicitly configured on the instance using this script or similar commands.

oci-network-inspector
Displays a detailed report for a given compartment or network.

oci-notify
Sends a message to an Oracle Cloud Infrastructure Notifications service topic.

oci-public-ip
Displays the public IP address of the current system in either human-readable or JSON format.

Installing the OCI Utilities


The OCI utilities (oci-utils) are automatically included with instances launched with Oracle Linux 7 and later
images. They are not currently available on other distributions.
Much of the OCI utilities functionality requires that you have the Oracle Cloud Infrastructure SDK for Python and the
Oracle Cloud Infrastructure CLI installed and configured.

Note:
Beginning with the 0.11 release, oci-utils no longer supports Python 2; Python 3 is required.

To install the Oracle Cloud Infrastructure SDK and CLI using yum, install the required packages that correspond to the
image used by the instance.
Oracle Linux 7

sudo yum install python36-oci-sdk python36-oci-cli

Oracle Linux 8

sudo yum install python3-oci-sdk python3-oci-cli

For configuration information, see the Oracle Cloud Infrastructure SDK for Python documentation and the
documentation for configuring the Oracle Cloud Infrastructure CLI.
Updating the OCI Utilities
To update to the latest version of oci-utils:

sudo yum update oci-utils

Using the OCI Utilities


To use the OCI utilities, you first need to start the ocid service:

sudo systemctl start ocid.service

Example output:

Redirecting to /bin/systemctl start ocid.service

The ocid Daemon

Description
The ocid daemon is the service component of the oci-utils. It monitors for changes in the VNIC and
iSCSI configuration of the instance and attempts to automatically attach or detach devices as they appear or disappear
- for example, when they are created or deleted using the Oracle Cloud Infrastructure Console, CLI, or the API.
Configuration
The ocid daemon requires root privileges. You can configure root privileges for ocid using one of the following
methods:
• Run the oci setup config configuration command as root to create SDK configuration files for the host,
as shown in the example after this list. For more information, see SDK and CLI Configuration File on page 4220.
• Use instance principals by adding the instance to a dynamic group that was granted access to Oracle Cloud
Infrastructure services. For more information, see Managing Dynamic Groups on page 2441.
• Configure oci-utils to allow root to use a non-privileged user's Oracle Cloud Infrastructure configuration
files. For more information, see the configuration file located in the /etc/oci-utils.conf.d directory of
the instance.
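
For the first method, a minimal sketch: run the interactive configuration command as root so that the resulting configuration files are readable by the ocid daemon. The command prompts for your user, tenancy, and region details.

sudo oci setup config
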
Usage
To start the ocid daemon using systemd:

service ocid start

To set ocid to start automatically during system boot:

sudo systemctl enable ocid.service

oci-growfs

Description
Expands the root filesystem of the instance to its configured size. This command must be run as root.
Usage
oci-growfs [-y] [-n] [-h]
Options

-y
Answer 'yes' to all prompts.

-n
Answer 'no' to all prompts.

-h | --help
Display a summary of the command line options.
Example

# sudo /usr/libexec/oci-growfs
CHANGE: disk=/dev/sda partition=3: start=17188864 old:
size=80486399,end=97675263 new: size=192526302,end=209715166
Confirm? [y/n]: y
CHANGED: disk=/dev/sda partition=3: start=17188864 old:
size=80486399,end=97675263 new: size=192526302,end=209715166
meta-data=/dev/sda3 isize=256 agcount=4, agsize=2515200
blks
= sectsz=4096 attr=2, projid32bit=1
= crc=0 finobt=0 spinodes=0
data = bsize=4096 blocks=10060800, imaxpct=25
= sunit=0 swidth=0 blks
naming =version 2 bsize=4096 ascii-ci=0 ftype=1
log =internal bsize=4096 blocks=4912, version=2
= sectsz=4096 sunit=1 blks, lazy-count=1
realtime =none extsz=4096 blocks=0, rtextents=0
data blocks changed from 10060800 to 24065787

oci-iscsi-config

Description
Lists and configures iSCSI devices attached to a Compute instance running in Oracle Cloud Infrastructure. When run
without any command line options, oci-iscsi-config lists devices that need attention.
Usage
oci-iscsi-config [-i|--interactive] [-s|--show] [-a | --attach IQN ]
[-d IQN | --detach IQN ] [--username username] [--password password] [--help]

oci-iscsi-config [-s|--show] [-c | --create-volume size]
[--volume-name name] [--destroy-volume OCID ]

Options

-i | --interactive
Run in interactive mode. This option displays devices that need attention and offers to attach and configure them.
Requires root privileges.

-s | --show
List all devices. If ocid is not running, then root privileges are required.

-a | --attach target
Attempt to attach the device with the given IQN (a unique ID assigned to a device) or Oracle Cloud Identifier
(OCID). When using an IQN, the volume must already be attached (assigned) to the instance in the Console. The
Oracle Cloud Infrastructure SDK for Python is required for selecting volumes using their OCID. This option can be
used multiple times to attach multiple devices at the same time. Requires root privileges.

-d | --detach device
Detach the device with the given IQN (a unique ID assigned to a device). If the volume (or any partition of the
volume) is mounted, this option attempts to unmount it first. This option can be used multiple times to detach
multiple devices at the same time. Requires root privileges.

-c | --create-volume size
Create a volume of SIZE gigabytes and attach it to the current instance. This option requires the Oracle Cloud
Infrastructure SDK for Python to be installed and configured.

--destroy-volume OCID
Destroy the block storage volume with the given OCID. Be sure that the volume is not attached to any instances
before performing this operation.
Caution:

This action is irreversible.

--volume-name name
Set the display name for the volume. This option is used with the --create-volume option. Avoid entering
confidential information.

--username name
Use the specified username as the CHAP username when authentication is needed for attaching a device. This option
is not needed when the Oracle Cloud Infrastructure SDK for Python is available.

--password password
Use the supplied password as the CHAP password when authentication is needed for attaching a device. This option
is not needed when the Oracle Cloud Infrastructure SDK for Python is available.

--help
Display a summary of the command line options.

Examples

Displaying iSCSI configuration


The oci-iscsi-config utility works with the ocid daemon to monitor device creation and deletion through
the command line, Console, or SDK and automatically discover those changes. You can use the --show option to
display a list of all devices attached to an instance:

# oci-iscsi-config -s
For full functionality of this utility the ocid service must be running
The administrator can start it using this command:
sudo systemctl start ocid.service
ocid already running.
Currently attached iSCSI devices:

Target iqn.2015-02.oracle.boot:uefi
Persistent portal: 169.254.0.2:3260
Current portal: 169.254.0.2:3260
State: running
Attached device: sda
Size: 46.6G
Partitions: Device Size Filesystem Mountpoint
sda1 544M vfat /boot/efi
sda2 8G swap [SWAP]
sda3 38G xfs /

The following example shows the output of the --show option after adding a 50-GB block volume using the
Console:

# oci-iscsi-config --show
Currently attached iSCSI devices:

Target iqn.2015-12.com.oracleiaas:abcdefghijklmnopqrstuvwxyz1234567890
Persistent portal: 169.254.2.2:3260
Current portal: 169.254.2.2:3260
State: running
Attached device: sdb
Size: 50G
File system type: Unknown
Mountpoint: Not mounted

Target iqn.2015-02.oracle.boot:uefi
Persistent portal: 169.254.0.2:3260
Current portal: 169.254.0.2:3260
State: running
Attached device: sda
Size: 46.6G
Partitions: Device Size Filesystem Mountpoint
sda1 544M vfat /boot/efi
sda2 8G swap [SWAP]
sda3 38G xfs /

Creating a volume
The following example shows how to create a volume:

# oci-iscsi-config --create-volume 50
For full functionality of this utility the ocid service must be running
The administrator can start it using this command:
sudo systemctl start ocid.service
Creating a new 50 GB volume
Volume abcdefghijklmnopqrstuvwxyz1234567890123456789012345678901234 created

Deleting a volume
The following example shows how to destroy a volume:

# oci-iscsi-config --destroy-volume ocid1.volume.oc1.phx.exampleuniqueID

oci-metadata

Description
Displays metadata for an Oracle Cloud Infrastructure Compute instance.
For more information about instance metadata, see Getting Instance Metadata on page 763.
Usage
oci-metadata [-h] [-j] [-g key] [--help]
Options

-h | --human-readable
Display human readable output (default).

-j | --json
Display output in JSON.

-g | --get key
Retrieve data only for the specified key.

--help
Display a summary of the command line options.
Examples

Getting all metadata for the instance


Running oci-metadata with no options returns all metadata for the instance:

# oci-metadata
Instance details:
Display Name: my-example-instance
Region: phx - us-phoenix-1 (Phoenix, AZ, USA)
Canonical Region Name: us-phoenix-1
Availability Domain: cumS:PHX-AD-1
Fault domain: FAULT-DOMAIN-3
OCID: ocid1.instance.oc1.phx.exampleuniqueID
Compartment OCID: ocid.compartment.oc1..exampleuniqueID
Instance shape: VM.Standard2.1
Image ID: ocid1.image.oc1.phx.exampleuniqueID
Created at: 1569529065596
state: Running
agentConfig:
managementDisabled: False
monitoringDisabled: False
Instance Metadata:
ssh_authorized_keys: example-key
Networking details:
VNIC OCID: ocid1.vnic.oc1.phx.exampleuniqueID
VLAN Tag: 2392
Private IP address: 10.0.0.16
MAC address: 02:00:17:03:D8:FE
Subnet CIDR block: 10.0.0.0/24
Virtual router IP address: 10.0.0.1

Getting only specific metadata


The following example shows how to retrieve metadata for a specified key by using the --get parameter:

# oci-metadata --get state


Instance details:
Instance state: Running

oci-network-config

Description
Configures network interfaces for an Oracle Cloud Infrastructure Compute instance.
Usage
oci-network-config [-h] [-s] [--create-vnic] [--detach-vnic VNIC]
[--add-private-ip] [--del-private-ip ip_address] [--private-ip ip_address]
[--subnet subnet] [--vnic-name name] [--assign-public-ip] [--vnic OCID]
[-a] [-d] [-e ip_address] [-n format] [-r] [-X | --exclude item]
[-I | --include item] [--quiet]
Options

-s | --show
Show information on all provisioning and interface configuration. If no options are specified, the
oci-network-config utility defaults to this option.

--create-vnic
Create a new virtual network interface card (VNIC) and attach it to this instance.

--detach-vnic OCID
Detach and delete the VNIC with the given Oracle Cloud Identifier (OCID) or primary IP address. Cannot be the
primary VNIC for the instance.

--add-private-ip OCID
Add a secondary private IP to an existing VNIC.

--del-private-ip ip_address
Delete the secondary private IP address with the given IP address.

--private-ip ip_address
Assign the given private IP address to the VNIC. Used with the --create-vnic and --add-private-ip
options.

--subnet subnet
Connect the new VNIC to the specified subnet. Used with the --create-vnic option.

--vnic-name name
Display name for the new VNIC. Used with the --create-vnic option. Avoid entering confidential information.

--assign-public-ip
Assign a public IP address to the new VNIC. Used with the --create-vnic option.

--vnic OCID
Assign the private IP address to the given VNIC. Used with the --add-private-ip option.

-a | --auto | -c | --configure
Add IP configuration for VNICs that are not configured and delete IP configuration for VNICs that are no longer
provisioned.

-d | --deconfigure
Deconfigure all VNICs (except the primary). If used with the -e option, only the secondary IP addresses are
deconfigured.

-e ip_address VNIC_OCID
Secondary private IP address to configure or deconfigure. Used with --configure and --deconfigure
options.

-n | -ns format
When configuring, place interfaces in namespace identified by the given format. Format can include $nic and
$vltag variables.

-r | --sshd
Start sshd in namespace (if -n is present).

-X | --exclude item
Persistently exclude the given item from automatic configuration or deconfiguration. Use the --include option to
include the item again.

-I | --include item
Include an item that was previously excluded using the --exclude option in automatic configuration/deconfiguration.

-q | --quiet
Do not display information messages.

-h | --help
Display a summary of the command line options.
Examples

Displaying current network configuration


Running the oci-network-config command with no options returns the network configuration of the current
instance:

VNIC configuration for instance my-test-instance-20180622-1222

VNIC 1 (primary): my-test-instance-20180622-1222


Hostname: my-test-instance-20180622-1222
OCID: ocid1.vnic.oc1.phx.uniqueID
MAC address: 00:00:00:00:00:01
Public IP address: 203.0.113.2
Subnet: Public Subnet cumS:PHX-AD-1 (10.0.0.0/24)

Operating System level network configuration

CONFIG ADDR SPREFIX SBITS VIRTRT NS IND


IFACE VLTAG VLAN STATE MAC VNIC
- 10.0.0.3 10.0.0.0 24 10.0.0.1 -
0 ens3 - - UP 00:00:00:00:00:01
ocid1.vnic.oc1.phx.uniqueID

Creating a VNIC
This example creates a VNIC named MY_NEW_VNIC and attaches it to the instance:

# sudo oci-network-config --create-vnic --vnic-name MY_NEW_VNIC


Info: creating VNIC: 10.0.0.4

Running oci-network-config with the -s option shows information for the new VNIC:

# sudo oci-network-config -s
VNIC configuration for instance scottb-instance-20180622-1222

VNIC 1 (primary): scottb-instance-20180622-1222


Hostname: scottb-instance-20180622-1222
OCID: ocid1.vnic.oc1.phx.uniqueID
MAC address: 00:00:00:00:00:01
Public IP address: 203.0.113.254
Subnet: Public Subnet cumS:PHX-AD-1 (10.0.0.0/24)

VNIC 2: MY_NEW_VNIC
Hostname: scottb-instance-20180622-1222-mynewvnic
OCID: ocid1.vnic.oc1.phx.uniqueID
MAC address: 00:00:00:00:00:02
Public IP address: None
Subnet: Public Subnet cumS:PHX-AD-1 (10.0.0.0/24)

Operating System level network configuration

CONFIG ADDR SPREFIX SBITS VIRTRT NS IND


IFACE VLTAG VLAN STATE MAC VNIC
- 10.0.0.3 10.0.0.0 24 10.0.0.1 -
0 ens3 - - UP 00:00:00:00:00:01
ocid1.vnic.oc1.phx.uniqueID
- 10.0.0.4 10.0.0.0 24 10.0.0.1 -
1 ens4 - - UP 00:00:00:00:00:02
ocid1.vnic.oc1.phx.uniqueID

Detaching a VNIC
To detach a VNIC from the instance, use the --detach-vnic option. The specified VNIC cannot be the primary
VNIC for the instance:

sudo oci-network-config --detach-vnic 00:00:00:00:00:02

oci-network-inspector

Description
Displays a detailed report for a given compartment or network.
Usage
oci-network-inspector [-C OCID] [-N OCID] [--help]
Options

-C | --compartment OCID
Show report for the specified compartment.

-N | --vcn OCID
Show report for the specified virtual cloud network.

-h | --help
Display a summary of the command line options.
Examples

Displaying a detailed report for a specified compartment


Running the oci-network-inspector command and specifying an OCID with the -C parameter returns a
detailed network report for that compartment:

$ oci-network-inspector -C ocid1.compartment.oc1..example_OCID

Compartment: scottb_sandbox (ocid1.compartment.oc1..example_OCID)

vcn: scottb_vcn
Security List: Default Security List for scottb_vcn
Ingress: tcp 0.0.0.0/0:- ---:22
Ingress: icmp 0.0.0.0/0:- code-4:type-3
Ingress: icmp 10.0.0.0/16:- code-None:type-3
Ingress: tcp 0.0.0.0/0:80 ---:80
Ingress: tcp 0.0.0.0/0:43 ---:43
Ingress: tcp 0.0.0.0/0:- ---:-
Egress : all ---:- 0.0.0.0/0:-

Subnet: Public Subnet cumS:PHX-AD-3 Avalibility domain: cumS:PHX-AD-3


Cidr_block: 10.0.2.0/24 Domain name:
sub99999999999.scottbvcn.oraclevcn.com
Security List: Default Security List for scottb_vcn
Ingress: tcp 0.0.0.0/0:- ---:22
Ingress: icmp 0.0.0.0/0:-
code-4:type-3
Ingress: icmp 10.0.0.0/16:- code-
None:type-3
Ingress: tcp 0.0.0.0/0:80 ---:80
Ingress: tcp 0.0.0.0/0:43 ---:43
Ingress: tcp 0.0.0.0/0:- ---:-
Egress : all ---:- 0.0.0.0/0:-

Subnet: Public Subnet cumS:PHX-AD-2 Avalibility domain: cumS:PHX-AD-2


Cidr_block: 10.0.1.0/24 Domain name:
sub99999999998.scottbvcn.oraclevcn.com
Security List: Default Security List for scottb_vcn
Ingress: tcp 0.0.0.0/0:- ---:22
Ingress: icmp 0.0.0.0/0:-
code-4:type-3
Ingress: icmp 10.0.0.0/16:- code-
None:type-3
Ingress: tcp 0.0.0.0/0:80 ---:80
Ingress: tcp 0.0.0.0/0:43 ---:43
Ingress: tcp 0.0.0.0/0:- ---:-
Egress : all ---:- 0.0.0.0/0:-

Subnet: Public Subnet cumS:PHX-AD-1 Avalibility domain: cumS:PHX-AD-1


Cidr_block: 10.0.0.0/24 Domain name:
sub99999999997.scottbvcn.oraclevcn.com
Security List: Default Security List for scottb_vcn
Ingress: tcp 0.0.0.0/0:- ---:22
Ingress: icmp 0.0.0.0/0:-
code-4:type-3
Ingress: icmp 10.0.0.0/16:- code-
None:type-3
Ingress: tcp 0.0.0.0/0:80 ---:80
Ingress: tcp 0.0.0.0/0:43 ---:43
Ingress: tcp 0.0.0.0/0:- ---:-
Egress : all ---:- 0.0.0.0/0:-
Private IP: 10.0.0.2(primary) Host: instance-20180608-1230
Vnic:
ocid1.vnic.oc1.phx.abcxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
(AVAILABLE-ATTACHED)
Vnic PublicIP: 203.0.113.2
Instance: instance-20180608-1230(STOPPED)
Private IP: 10.0.0.3(primary) Host: scottb-instance-20180622-1222
Vnic:
ocid1.vnic.oc1.phx.abcxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
(AVAILABLE-ATTACHED)
Vnic PublicIP: 203.0.113.3
Instance: scottb-instance-20180622-1222(RUNNING)

oci-notify

Description
Sends a message to an Oracle Cloud Infrastructure Notifications service topic. This command must be run as root.

A message is composed of a message header (subject) and a file. The Notifications service configuration for the topic
determines where and how the messages are delivered. Topics are configured for the Notifications service using the
Oracle Cloud Infrastructure Console, API, or CLI.
For more information about the Notifications service, including how to create topics, see Notifications Overview.
Usage
oci-notify [-c topic_OCID] [-t subject -f file] [-h]
Options

-c topic_OCID
Write the topic to the /etc/oci-utils/oci.conf file. The path to the configuration file can be overridden by
using the OCI_CONFIG_DIR environment variable.
For topic_OCID, specify the Oracle Cloud Identifier (OCID) associated with the Notifications service topic.

-t subject -f file
Publish the contents of specified file with the specified subject to the topic, which is sent to each subscription for the
topic.
For subject, enter an appropriate subject to be used as the message header (for example, 'log messages' if you
are sending log files). The subject must be enclosed in either single or double quotation marks. Message headers are
truncated to 128 characters.
Note:
When the message is published, the oci-notify utility prepends the instance name to the subject of the message, for example, <instance name>:log messages.
For file, enter the full or relative directory path, HTTP, or FTP URL of the message file to be sent. Large files are split
into 64-KB chunks.
Note:
The oci-notify utility writes log and error messages to the /var/log/oci-notify.log file.

-h | help
Display a summary of the command line options.
Examples

Configuring a Topic on an Instance


The following example shows how to write the OCID of a configured Notifications service topic to the oci.conf
file. Once configured, you can publish messages to the configured topic.

sudo oci-notify -c ocid1.onstopic.oc1..example_OCID

Publishing a Message to a Topic


The following example shows how to send the contents of the /var/log/messages file with the subject
'logging messages' to the configured topic:

sudo oci-notify -t 'logging messages' -f /var/log/messages

The following example shows how to send the contents of the /proc/meminfo file with the subject 'memory
information' to the configured topic:

sudo oci-notify -t 'memory information' -f /proc/meminfo

The following example shows how to send the contents of the /tmp/uptrack-show file with the subject
'installed Ksplice updates' to the configured topic:

sudo oci-notify -t 'installed Ksplice updates' -f /tmp/uptrack-show

oci-public-ip

Description
Displays the public IP address of the current system in either human-readable or JSON format.
The oci-public-ip utility uses the Oracle Cloud Infrastructure SDK to discover the IP address. If the IP address
cannot be obtained through this method, the oci-public-ip utility tries the Session Traversal Utilities for
NAT (STUN) protocol as a last resort. For more information about STUN, see the STUN Wikipedia article.
Usage
oci-public-ip [-h] [-j] [-g] [-a] [-s source_IP] [-S STUN_server]
[-L] [--instance-id OCID] [--help]
Options

-h | --human-readable
Display human readable output (default).

-j | --json
Display output in JSON.

-g | --get
Print the IP address only.

-a | --all
Display all public IP addresses.

-s | --sourceip source_IP
Specify the source IP address to use.

-S | --stun-server STUN_server
Specify the STUN server to use.

-L | --list-servers
Print a list of known STUN servers and exit.

--instance-id OCID
Display the public IP address of the given instance instead of the current one. Requires the Oracle Cloud
Infrastructure SDK for Python to be installed and configured.

--help
Display a summary of the command line options.
Examples

Displaying current IP address


Running the oci-public-ip command with no options returns the IP address of the current instance:

# oci-public-ip
Public IP address: 203.0.113.2

Displaying the IP address of another instance


You can pass in the OCID of a running instance with the --instance-id option to return the IP address for that instance:

# oci-public-ip --instance-id ocid1.instance.oc1.phx.example_OCID


Public IP address: 203.0.113.2

Listing STUN servers


Use the --list-servers option to return a list of STUN servers:

# oci-public-ip --list-servers
stun.stunprotocol.org
stun.counterpath.net
stun.voxgratia.org
stun.callwithus.com
stun.ekiga.net
stun.ideasip.com
stun.voipbuster.com
stun.voiparound.com
stun.voipstunt.com

Compute Shapes
A shape is a template that determines the number of CPUs, amount of memory, and other resources that are allocated
to an instance.
This topic provides basic information about the shapes that are available for bare metal instances, virtual machines
(VMs), and dedicated virtual machine hosts.

Flexible Shapes
A flexible shape is a shape that lets you customize the number of OCPUs and the amount of memory when launching
or resizing your VM. When you create a VM instance using a flexible shape, you select the number of OCPUs and
the amount of memory that you need for the workloads that run on the instance. The network bandwidth and number
of VNICs scale proportionately with the number of OCPUs. This flexibility lets you build VMs that match your
workload, enabling you to optimize performance and minimize cost.
These are the flexible shapes:
• VM.Standard.E3.Flex
• VM.Standard.E4.Flex
Flexible memory is also available on flexible shapes. For both of the flexible shapes, you can select from 1 to 64
OCPUs. The amount of memory allowed is based on the number of OCPUs selected. For each OCPU, you can select
up to 64 GB of memory, with a maximum of 1024 GB total. The minimum amount of memory allowed is either 1 GB
or a value matching the number of OCPUs, whichever is greater. For example, if you select 25 OCPUs, the minimum
amount of memory allowed is 25 GB.
Important:
Instances that use the VM.Standard.E3.Flex shape or the VM.Standard.E4.Flex shape, and that also use hardware-assisted (SR-IOV) networking, can be allocated a maximum of 1010 GB of memory. See this known issue for more information.
These resources are billed at a per-second granularity with a one-minute minimum. Optimize your costs by choosing
the shape that matches your workload and by changing the shape when your workload changes. For example, you
can configure the VM to maximize compute processing power by choosing a low core-to-memory ratio. Or, for
applications like in-memory databases or big data processing engines, configure an instance with a high core-to-
memory ratio. Modify the OCPUs and memory as your workload changes, scaling up to increase performance or
scaling down to reduce costs.
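
For example, assuming the standard CLI parameters for flexible shapes, you can specify the OCPU count and memory at launch time with the --shape-config option; the OCIDs and values here are placeholders:

oci compute instance launch \
  --compartment-id ocid1.compartment.oc1..exampleuniqueID \
  --availability-domain "Uocm:PHX-AD-1" \
  --shape VM.Standard.E4.Flex \
  --shape-config '{"ocpus": 4, "memoryInGBs": 64}' \
  --subnet-id ocid1.subnet.oc1.phx.exampleuniqueID \
  --image-id ocid1.image.oc1.phx.exampleuniqueID \
  --ssh-authorized-keys-file ~/.ssh/id_rsa.pub
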
Supported Images
Most Oracle-provided platform images are compatible with flexible shapes. Refer to the following lists.
Supported Platform Images for E3 Shapes
Use an image that was published in March 2020 (for Linux images) or April 2020 (for Windows images) or later.
• Oracle Autonomous Linux 7.x
• Oracle Linux 8.x
• Oracle Linux 7.x
• Oracle Linux 6.x (VMs only)
• CentOS 8.x
• CentOS 7.x
• Ubuntu 18.04
• Ubuntu 16.04
• Windows Server 2019 (VMs only)
• Windows Server 2016 (VMs only)
• Windows Server 2012 R2 (VMs only)
Supported Platform Images for E4 Shapes
Use an image that was published in February 2021 (for Linux images and Windows Server 2012 R2 images) or
March 2021 (for Windows Server 2016 and Windows Server 2019 images), or later.
• Oracle Autonomous Linux 7.x
• Oracle Linux 8.x
• Oracle Linux 7.x
• CentOS 8.x
• CentOS 7.x
• Ubuntu 20.04
• Ubuntu 18.04
• Windows Server 2019 (VMs only)
• Windows Server 2016 (VMs only)
• Windows Server 2012 R2 (VMs only)
Custom images are also supported, depending on the image. You must add flexible shape compatibility to the custom
image, and then test the image on the flexible shape to ensure that it actually works on the shape.
Supported Regions
E3 shapes are supported in all regions.


Note:

When a new region becomes available, it might take a few weeks before host
capacity also becomes available.
E4 shapes are supported in some regions. For a list, see the service limits for VM.Standard.E4 and BM.Standard.E4
cores. As host capacity becomes available in additional regions, the list will be updated.

Bare Metal Shapes


The following shapes are available for bare metal instances:
• Standard Shapes on page 661
• Dense I/O Shapes on page 662
• GPU Shapes on page 662
• HPC Shapes on page 662
Network bandwidth is based on expected bandwidth for traffic within a VCN. To determine which physical NICs are
active for a shape, refer to the network bandwidth specifications in the following tables. If the network bandwidth is
listed as "2 x <bandwidth> Gbps," it means that both NIC 0 and NIC 1 are active.

Standard Shapes
Designed for general purpose workloads and suitable for a wide range of applications and use cases. Standard
shapes provide a balance of cores, memory, and network resources. Standard shapes are available with Intel or AMD
processors.
These are the bare metal standard series:
• BM.Standard2: X7-based standard compute. Processor: Intel Xeon Platinum 8167M. Base frequency 2.0 GHz,
max turbo frequency 2.4 GHz.
• BM.Standard.E3: E3-based standard compute. Processor: AMD EPYC 7742. Base frequency 2.25 GHz, max
boost frequency 3.4 GHz.
• BM.Standard.E4: E4-based standard compute. Processor: AMD EPYC 7J13. Base frequency 2.55 GHz, max
boost frequency 3.5 GHz.

• BM.Standard2.52: 52 OCPUs; 768 GB memory; Block storage only; 2 x 25 Gbps max network bandwidth; max VNICs (Linux): 52 total (26 per physical NIC); max VNICs (Windows): 27 total (1 on the first physical NIC, 26 on the second)
• BM.Standard.E3.128: 128 OCPUs; 2048 GB memory; Block storage only; 2 x 50 Gbps max network bandwidth; max VNICs (Linux): 128; max VNICs (Windows): 65 (1 on the first physical NIC, 64 on the second)
• BM.Standard.E4.128: 128 OCPUs; 2048 GB memory; Block storage only; 2 x 50 Gbps max network bandwidth; max VNICs (Linux): 128; max VNICs (Windows): 65 (1 on the first physical NIC, 64 on the second)


Dense I/O Shapes


Designed for large databases, big data workloads, and applications that require high-performance local storage.
DenseIO shapes include locally-attached NVMe-based SSDs.
This is the bare metal dense I/O series:
• BM.DenseIO2: X7-based dense I/O compute. Processor: Intel Xeon Platinum 8167M. Base frequency 2.0 GHz,
max turbo frequency 2.4 GHz.

• BM.DenseIO2.52: 52 OCPUs; 768 GB memory; 51.2 TB NVMe SSD (8 drives); 2 x 25 Gbps max network bandwidth; max VNICs (Linux): 52 total (26 per physical NIC); max VNICs (Windows): 27 total (1 on the first physical NIC, 26 on the second)

GPU Shapes
Designed for hardware-accelerated workloads. GPU shapes include Intel or AMD CPUs and NVIDIA graphics
processors.
These are the bare metal GPU series:
• BM.GPU3: X7-based GPU compute.
• GPU: NVIDIA Tesla V100 16 GB
• CPU: Intel Xeon Platinum 8167M. Base frequency 2.0 GHz, max turbo frequency 2.4 GHz.
• BM.GPU4: E2-based GPU compute.
• GPU: NVIDIA A100 40 GB
• CPU: AMD EPYC 7542. Base frequency 2.9 GHz, max boost frequency 3.4 GHz.

• BM.GPU3.8 (GPU: 8xV100): 52 OCPUs; 128 GB GPU memory; 768 GB CPU memory; Block storage only; 2 x 25 Gbps max network bandwidth; max VNICs (Linux): 52; max VNICs (Windows): 27 (1 on the first physical NIC, 26 on the second)
• BM.GPU4.8 (GPU: 8xA100): 64 OCPUs; 320 GB GPU memory; 2048 GB CPU memory; 27.2 TB NVMe SSD (4 drives); 1 x 50 Gbps plus 8 x 200 Gbps RDMA max network bandwidth; max VNICs (Linux): 64; max VNICs (Windows): 1

HPC Shapes
Designed for high-performance computing workloads that require high frequency processor cores and cluster
networking for massively parallel HPC workloads.
This is the bare metal HPC series:
• BM.HPC2: X7-based high frequency compute. Processor: Intel Xeon Gold 6154. Base frequency 3.0 GHz, max
turbo frequency 3.7 GHz.


• BM.HPC2.36: 36 OCPUs; 384 GB memory; 6.4 TB NVMe SSD (1 drive); 1 x 25 Gbps plus 1 x 100 Gbps RDMA max network bandwidth; max VNICs (Linux): 50; max VNICs (Windows): 1

VM Shapes
The following shapes are available for VMs:
• Standard Shapes on page 663
• Dense I/O Shapes on page 664
• GPU Shapes on page 665
Network bandwidth is based on expected bandwidth for traffic within a VCN.

Standard Shapes
Designed for general purpose workloads and suitable for a wide range of applications and use cases. Standard
shapes provide a balance of cores, memory, and network resources. Standard shapes are available with Intel or AMD
processors.
These are the VM standard series:
• VM.Standard2: X7-based standard compute. Processor: Intel Xeon Platinum 8167M. Base frequency 2.0 GHz,
max turbo frequency 2.4 GHz.
• VM.Standard.E2.1.Micro: E2-based standard compute. Processor: AMD EPYC 7551. Base frequency 2.0 GHz,
max boost frequency 3.0 GHz.
• VM.Standard.E3: E3-based standard compute, with a flexible number of OCPUs. Processor: AMD EPYC 7742.
Base frequency 2.25 GHz, max boost frequency 3.4 GHz.
• VM.Standard.E4: E4-based standard compute. Processor: AMD EPYC 7J13. Base frequency 2.55 GHz, max
boost frequency 3.5 GHz.

• VM.Standard2.1: 1 OCPU; 15 GB memory; Block storage only; 1 Gbps max network bandwidth; max VNICs (Linux): 2; max VNICs (Windows): 2
• VM.Standard2.2: 2 OCPUs; 30 GB memory; Block storage only; 2 Gbps; max VNICs (Linux): 2; max VNICs (Windows): 2
• VM.Standard2.4: 4 OCPUs; 60 GB memory; Block storage only; 4.1 Gbps; max VNICs (Linux): 4; max VNICs (Windows): 4
• VM.Standard2.8: 8 OCPUs; 120 GB memory; Block storage only; 8.2 Gbps; max VNICs (Linux): 8; max VNICs (Windows): 8
• VM.Standard2.16: 16 OCPUs; 240 GB memory; Block storage only; 16.4 Gbps; max VNICs (Linux): 16; max VNICs (Windows): 16
• VM.Standard2.24: 24 OCPUs; 320 GB memory; Block storage only; 24.6 Gbps; max VNICs (Linux): 24; max VNICs (Windows): 24


• VM.Standard.E2.1.Micro: 1 OCPU; 1 GB memory; Block storage only; 480 Mbps max network bandwidth; max VNICs (Linux): 1; max VNICs (Windows): not applicable. See Details of the Always Free Resources on page 144.
• VM.Standard.E3.Flex: 1 OCPU minimum, 64 OCPUs maximum (see Flexible Shapes); 1 GB memory minimum, 1024 GB maximum(1) (see this known issue about maximum memory); Block storage only; 1 Gbps per OCPU, up to 40 Gbps maximum; max VNICs (Linux and Windows): 2 VNICs for a VM with 1 OCPU, or 1 VNIC per OCPU for a VM with 2 or more OCPUs, up to a maximum of 24 VNICs
• VM.Standard.E4.Flex: 1 OCPU minimum, 64 OCPUs maximum (see Flexible Shapes); 1 GB memory minimum, 1024 GB maximum(1) (see this known issue about maximum memory); Block storage only; 1 Gbps per OCPU, up to 40 Gbps maximum; max VNICs (Linux and Windows): 2 VNICs for a VM with 1 OCPU, or 1 VNIC per OCPU for a VM with 2 or more OCPUs, up to a maximum of 24 VNICs

1: Instances are billed for the full amount of memory that you provision. Usable memory is reduced by up to
256 MB per instance. This difference is due to memory reserved to support the VM on the hypervisor.

Dense I/O Shapes


Designed for large databases, big data workloads, and applications that require high-performance local storage.
DenseIO shapes include locally-attached NVMe-based SSDs.
This is the VM dense I/O series:
• VM.DenseIO2: X7-based dense I/O compute. Processor: Intel Xeon Platinum 8167M. Base frequency 2.0 GHz,
max turbo frequency 2.4 GHz.

• VM.DenseIO2.8: 8 OCPUs; 120 GB memory; 6.4 TB NVMe SSD; 8.2 Gbps max network bandwidth; max VNICs (Linux): 8; max VNICs (Windows): 8
• VM.DenseIO2.16: 16 OCPUs; 240 GB memory; 12.8 TB NVMe SSD; 16.4 Gbps; max VNICs (Linux): 16; max VNICs (Windows): 16
• VM.DenseIO2.24: 24 OCPUs; 320 GB memory; 25.6 TB NVMe SSD; 24.6 Gbps; max VNICs (Linux): 24; max VNICs (Windows): 24

GPU Shapes
Designed for hardware-accelerated workloads. GPU shapes include Intel or AMD CPUs and NVIDIA graphics
processors.
This is the VM GPU series:
• VM.GPU3: X7-based GPU compute.
• GPU: NVIDIA Tesla V100 16 GB
• CPU: Intel Xeon Platinum 8167M. Base frequency 2.0 GHz, max turbo frequency 2.4 GHz.

• VM.GPU3.1 (GPU: 1xV100): 6 OCPUs; 16 GB GPU memory; 90 GB CPU memory; Block storage only; 4 Gbps max network bandwidth; max VNICs (Linux): 6; max VNICs (Windows): 6
• VM.GPU3.2 (GPU: 2xV100): 12 OCPUs; 32 GB GPU memory; 180 GB CPU memory; Block storage only; 8 Gbps; max VNICs (Linux): 12; max VNICs (Windows): 12
• VM.GPU3.4 (GPU: 4xV100): 24 OCPUs; 64 GB GPU memory; 360 GB CPU memory; Block storage only; 24.6 Gbps; max VNICs (Linux): 24; max VNICs (Windows): 24

Dedicated Virtual Machine Host Shapes


• DVH.Standard2.52: X7-based VM host; billed OCPUs: 52; usable OCPUs: 48; supported shapes for hosted VMs: VM.Standard2
The difference between billed OCPUs and usable OCPUs is due to OCPUs reserved for hypervisor use.

Previous Generation Shapes


Oracle Cloud Infrastructure periodically releases new generations of Compute shapes. The latest shapes let you take
advantage of newer hardware and a better price-performance ratio. When a shape is several years old, and newer
generation shapes that are suited for the same purposes are available, the old shape transitions to become a previous
generation shape.
Previous generation shapes are still fully supported. However, because the underlying hardware has reached the
sustaining phase of its lifecycle, capacity in certain high-demand regions might be limited.
If you're using a previous generation shape, we encourage you to upgrade to a current generation shape.


Upgrading from a Previous Generation Shape


To upgrade from a previous generation shape to a current generation shape, you can do the following things:
• For supported VM instances, change the shape of the instance.
• For bare metal instances and VM instances that don't support changing the shape, terminate the instance but DO
NOT delete the boot volume. Then, use the boot volume to create a new instance.
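As a sketch of these two approaches using the OCI CLI (all OCIDs, the availability domain, and the target shapes are placeholders, and the option names should be verified against your installed CLI version):

# Option 1: for supported VM instances, change the shape in place.
oci compute instance update \
    --instance-id ocid1.instance.oc1.phx.example \
    --shape VM.Standard.E3.Flex \
    --shape-config '{"ocpus": 2, "memoryInGBs": 32}'

# Option 2: terminate while preserving the boot volume, then launch a
# new instance from that boot volume.
oci compute instance terminate \
    --instance-id ocid1.instance.oc1.phx.example \
    --preserve-boot-volume true

oci compute instance launch \
    --availability-domain "Uocm:PHX-AD-1" \
    --compartment-id ocid1.compartment.oc1..example \
    --subnet-id ocid1.subnet.oc1.phx.example \
    --shape VM.Standard2.1 \
    --source-boot-volume-id ocid1.bootvolume.oc1.phx.example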

Previous Generation Bare Metal Shapes


These are the previous generation bare metal shape series.
BM.Standard1
Newer shape recommendation: BM.Standard2 or BM.Standard.E3 series
End of orderability date: December 31, 2020
X5-based standard compute. Processor: Intel Xeon E5-2699 v3. Base frequency 2.3 GHz, max turbo frequency 3.6
GHz.

• BM.Standard1.36: 36 OCPUs; 256 GB memory; Block storage only; 1 x 10 Gbps max network bandwidth; max VNICs (Linux): 36; max VNICs (Windows): 1

BM.Standard.B1
Newer shape recommendation: BM.Standard2 or BM.Standard.E3 series
End of orderability date: December 31, 2020
X6-based standard compute. Processor: Intel Xeon E5-2699 v4. Base frequency 2.2 GHz, max turbo frequency 3.6
GHz.

• BM.Standard.B1.44: 44 OCPUs; 512 GB memory; Block storage only; 1 x 25 Gbps max network bandwidth; max VNICs (Linux): 44; max VNICs (Windows): none

BM.Standard.E2
Newer shape recommendation: BM.Standard2 or BM.Standard.E3 series
End of orderability date: February 8, 2021
E2-based standard compute. Processor: AMD EPYC 7551. Base frequency 2.0 GHz, max boost frequency 3.0 GHz.

• BM.Standard.E2.64: 64 OCPUs; 512 GB memory; Block storage only; 2 x 25 Gbps max network bandwidth; max VNICs (Linux): 75; max VNICs (Windows): 76 (1 on the first physical NIC, 75 on the second)

BM.DenseIO1
Newer shape recommendation: BM.DenseIO2


End of orderability date: December 31, 2020


X5-based dense I/O compute. Processor: Intel Xeon E5-2699 v3. Base frequency 2.3 GHz, max turbo frequency 3.6
GHz.

• BM.DenseIO1.36: 36 OCPUs; 512 GB memory; 28.8 TB NVMe SSD (9 drives); 1 x 10 Gbps max network bandwidth; max VNICs (Linux): 36; max VNICs (Windows): 1

BM.GPU2
Newer shape recommendation: BM.GPU3 or BM.GPU4 series
End of orderability date: December 31, 2020
X7-based GPU compute.
• GPU: NVIDIA Tesla P100 16 GB
• CPU: Intel Xeon Platinum 8167M. Base frequency 2.0 GHz, max turbo frequency 2.4 GHz.

• BM.GPU2.2 (GPU: 2xP100): 28 OCPUs; 32 GB GPU memory; 192 GB CPU memory; Block storage only; 2 x 25 Gbps max network bandwidth; max VNICs (Linux): 28; max VNICs (Windows): 15 (1 on the first physical NIC, 14 on the second)

Previous Generation VM Shapes


These are the previous generation VM shape series.
VM.Standard1
Newer shape recommendation: VM.Standard2 or VM.Standard.E3 series
End of orderability date: December 31, 2020
X5-based standard compute. Processor: Intel Xeon E5-2699 v3. Base frequency 2.3 GHz, max turbo frequency 3.6
GHz.

• VM.Standard1.1: 1 OCPU; 7 GB memory; Block storage only; 600 Mbps max network bandwidth; max VNICs (Linux): 2; max VNICs (Windows): 1
• VM.Standard1.2: 2 OCPUs; 14 GB memory; Block storage only; 1.2 Gbps; max VNICs (Linux): 2; max VNICs (Windows): 1
• VM.Standard1.4: 4 OCPUs; 28 GB memory; Block storage only; 1.2 Gbps; max VNICs (Linux): 4; max VNICs (Windows): 1
• VM.Standard1.8: 8 OCPUs; 56 GB memory; Block storage only; 2.4 Gbps; max VNICs (Linux): 8; max VNICs (Windows): 1
• VM.Standard1.16: 16 OCPUs; 112 GB memory; Block storage only; 4.8 Gbps; max VNICs (Linux): 16; max VNICs (Windows): 1

VM.Standard.B1


Newer shape recommendation: VM.Standard2 or VM.Standard.E3 series


End of orderability date: December 31, 2020
X6-based standard compute. Processor: Intel Xeon E5-2699 v4. Base frequency 2.2 GHz, max turbo frequency 3.6
GHz.

• VM.Standard.B1.1: 1 OCPU; 12 GB memory; Block storage only; 600 Mbps max network bandwidth; max VNICs (Linux): 2; max VNICs (Windows): 2
• VM.Standard.B1.2: 2 OCPUs; 24 GB memory; Block storage only; 1.2 Gbps; max VNICs (Linux): 2; max VNICs (Windows): 2
• VM.Standard.B1.4: 4 OCPUs; 48 GB memory; Block storage only; 2.4 Gbps; max VNICs (Linux): 4; max VNICs (Windows): 4
• VM.Standard.B1.8: 8 OCPUs; 96 GB memory; Block storage only; 4.8 Gbps; max VNICs (Linux): 8; max VNICs (Windows): 8
• VM.Standard.B1.16: 16 OCPUs; 192 GB memory; Block storage only; 9.6 Gbps; max VNICs (Linux): 16; max VNICs (Windows): 16

VM.Standard.E2
Newer shape recommendation: VM.Standard2 or VM.Standard.E3 series
End of orderability date: February 8, 2021
E2-based standard compute. Processor: AMD EPYC 7551. Base frequency 2.0 GHz, max boost frequency 3.0 GHz.

• VM.Standard.E2.1: 1 OCPU; 8 GB memory; Block storage only; 700 Mbps max network bandwidth; max VNICs (Linux): 2; max VNICs (Windows): 2
• VM.Standard.E2.2: 2 OCPUs; 16 GB memory; Block storage only; 1.4 Gbps; max VNICs (Linux): 2; max VNICs (Windows): 2
• VM.Standard.E2.4: 4 OCPUs; 32 GB memory; Block storage only; 2.8 Gbps; max VNICs (Linux): 4; max VNICs (Windows): 4
• VM.Standard.E2.8: 8 OCPUs; 64 GB memory; Block storage only; 5.6 Gbps; max VNICs (Linux): 4; max VNICs (Windows): 4

VM.DenseIO1
Newer shape recommendation: VM.DenseIO2 series
End of orderability date: December 31, 2020
X5-based dense I/O compute. Processor: Intel Xeon E5-2699 v3. Base frequency 2.3 GHz, max turbo frequency 3.6
GHz.

• VM.DenseIO1.4: 4 OCPUs; 60 GB memory; 3.2 TB NVMe SSD; 1.2 Gbps max network bandwidth; max VNICs (Linux): 4; max VNICs (Windows): 1
• VM.DenseIO1.8: 8 OCPUs; 120 GB memory; 6.4 TB NVMe SSD; 2.4 Gbps; max VNICs (Linux): 8; max VNICs (Windows): 1
• VM.DenseIO1.16: 16 OCPUs; 240 GB memory; 12.8 TB NVMe SSD; 4.8 Gbps; max VNICs (Linux): 16; max VNICs (Windows): 1

VM.GPU2
Newer shape recommendation: VM.GPU3 series


End of orderability date: December 31, 2020


X7-based GPU compute.
• GPU: NVIDIA Tesla P100 16 GB
• CPU: Intel Xeon Platinum 8167M. Base frequency 2.0 GHz, max turbo frequency 2.4 GHz.

• VM.GPU2.1 (GPU: 1xP100): 12 OCPUs; 16 GB GPU memory; 72 GB CPU memory; Block storage only; 8 Gbps max network bandwidth; max VNICs (Linux): 12; max VNICs (Windows): 12

Installing and Running Oracle Ksplice


Oracle Ksplice lets you apply important security updates and other critical kernel updates without a reboot. For more
information, see About Oracle Ksplice and Ksplice Overview.
This topic describes how to install and configure Ksplice. Ksplice is available for Oracle Linux instances that were
launched on or after February 15, 2017. Ksplice is installed on instances that were launched on or after August 25,
2017, so you just need to run it on these instances to install the available Ksplice patches. For instances that were
launched before August 25, 2017, you must install Ksplice before running it.
On Oracle Autonomous Linux images, Ksplice is installed and configured by default to run automatic updates.

Installing Ksplice on instances launched before August 25, 2017


To install Ksplice, you must connect to your Linux instance by using a Secure Shell (SSH). See Connecting to an
Instance on page 739 for more information.
1. Use the following SSH command to access the instance:

ssh -l opc <public-ip-address>

<public-ip-address> is your instance IP address that you retrieved from the Console. See Getting the Instance Public IP Address on page 68.
2. Run the following command to switch to the root user:

sudo bash
3. Download the Ksplice installation script with the following SSH command:

wget -N https://www.ksplice.com/uptrack/install-uptrack-oc
4. Once the script is downloaded, use the following SSH command to install Ksplice:

sh install-uptrack-oc

Running Ksplice
To run Ksplice, you must connect to your Linux instance using a Secure Shell (SSH) connection. See Connecting to
an Instance on page 739 for more information.


1. Use the following SSH command to access the instance:

ssh -l opc <public-ip-address>

<public-ip-address> is the instance IP address that you retrieved from the Console. See Getting the Instance Public IP Address on page 68.
2. Run the following commands to switch to the root user and change to the root home directory:

sudo bash
cd
3. To install available Ksplice patches, run the following SSH command:

uptrack-upgrade

Automatic Updates
To configure automatic updates, set the value of autoinstall to yes in /etc/uptrack/uptrack.conf.
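For example, after the edit the relevant line in /etc/uptrack/uptrack.conf should read as follows (a minimal excerpt; leave the rest of the file as shipped):

autoinstall = yes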
Note:

OS Security Updates for Oracle Linux images


Oracle Linux images are updated regularly with the necessary patches, but
after you launch an instance using these images, you are responsible for
applying the required OS security updates published through the Oracle
public Yum server. For more information, see Installing and Using the Yum
Security Plugin.

Managing Custom Images


Oracle Cloud Infrastructure uses images to launch instances. You specify an image to use when you launch an
instance.
You can create a custom image of a bare metal instance's boot disk and use it to launch other instances. Instances
you launch from your image include the customizations, configuration, and software installed when you created the
image.
For details on Windows images, see Creating Windows Custom Images on page 673.
Custom images do not include the data from any attached block volumes. For information about backing up volumes,
see Backing Up a Volume on page 550.
Tip:

Follow industry-wide hardware failure best practices to ensure the resilience


of your solution in the event of a hardware failure. Some best practices
include:
• Design your system with redundant compute nodes in different
availability domains to support failover capability.
• Create a custom image of your system drive each time you change the
image.
• Back up your data drives, or sync to spare drives, regularly.
If you experience a hardware failure and have followed these practices, you
can terminate the failed instance, launch your custom image to create a new
instance, and then apply the backup data.


Required IAM Policy


To use Oracle Cloud Infrastructure, you must be granted security access in a policy by an administrator. This access
is required whether you're using the Console or the REST API with an SDK, CLI, or other tool. If you get a message
that you don’t have permission or are unauthorized, verify with your administrator what type of access you have and
which compartment you should work in.
For administrators: The policy in Let image admins manage custom images on page 2152 includes the ability to
create, delete, and manage custom images.
The policy in Let users launch compute instances on page 2151 includes the ability to create an instance using
any custom image. The policy in Let users launch compute instances from a specific custom image on page 2152
restricts the ability to create an instance from a custom image on an image-by-image basis.
Tip:

When users create a custom image from an instance or launch an instance


from a custom image, the instance and image don't have to be in the same
compartment. However, users must have access to both compartments.
If you're new to policies, see Getting Started with Policies on page 2143 and Common Policies on page 2150. For
reference material about writing policies for instances, cloud networks, or other Core Services API resources, see
Details for the Core Services on page 2192.

Limitations and Considerations


• Certain IP addresses are reserved for Oracle Cloud Infrastructure use and may not be used in your address
numbering scheme. See IP Addresses Reserved for Use by Oracle on page 2780 for more information.
• Before you create a custom image of an instance, you must disconnect all iSCSI attachments and remove all iscsid
node configurations from the instance. For steps, see Disconnecting From a Volume on page 566.
• When you create an image of a running instance, the instance shuts down and remains unavailable for several
minutes. The instance restarts when the process completes.
• You cannot create additional custom images of an instance while the instance is engaged in the image creation
process. When you start to create a custom image, the system implements a 20-minute timeout, during which you
cannot create another image of the same instance. You can, however, create images of different instances at the
same time.
• Custom images are available to all users authorized for the compartment in which the image was created.
• Custom images inherit the compatible shapes that are set by default from the base image.
• The maximum size for importing a custom image is 400 GB.
• The maximum size for custom exported images is 400 GB.
• You cannot create an image of an Oracle Database instance.
• If you use a custom image and update the OS kernel on your instance, you must also upload the update to the
network drive. See OS Kernel Updates on page 696 for more information.
For information about how to deploy any version of any operating system that is supported by the Oracle Cloud
Infrastructure hardware, see Bring Your Own Image (BYOI) on page 680.

X5 and X7 Compatibility for Custom Images


Oracle X5, X6, and X7 servers have different host hardware. As a result, using an X5 or X6 image on an X7 bare
metal or virtual machine (VM) instance may not work without additional modifications. Oracle Cloud Infrastructure
recommends for X7 hosts that you use the Oracle-provided images for X7. See Oracle-Provided Image Release Notes
for more information about which images support X7. These images have been explicitly created and tested with X7
hardware.
If you attempt to use an existing X5 image on X7 hardware, note the following:
• No Windows versions are cross-compatible.
• Oracle Autonomous Linux 7 and Oracle Linux 8 are cross-compatible.


• Oracle Linux 6, Oracle Linux 7, Ubuntu 16.04, CentOS 7, and CentOS 8 are cross-compatible. However, you
must update the kernel to the most recent version to install the latest device drivers. To do this, run the following
commands from a terminal session:
• Oracle Linux

yum update
• CentOS 7, CentOS 8

yum update
• Ubuntu 16.04

apt-get update
apt-get dist-upgrade

If you attempt to use an X6 image on non-X6 hardware, note the following:


• Oracle Linux 6, all CentOS versions, and all Windows versions are not cross-compatible.
• Oracle Autonomous Linux 7 and Oracle Linux 8 are cross-compatible.
• Oracle Linux 7, Ubuntu 18.04, and Ubuntu 16.04 are cross-compatible. Use the Oracle-provided images for X6.
The primary device drivers that are different between X5, X6, and X7 hosts are:
• Network device drivers
• NVMe drive device drivers
• GPU device drivers
Additional updates might be required depending on how you have customized the image.

Using the Console


To create a custom image
1. Open the navigation menu. Under Core Infrastructure, go to Compute and click Instances.
2. Click the instance that you want to use as the basis for the custom image.
3. Click More Actions, and then click Create Custom Image.
4. In the Create in Compartment list, select the compartment to create the custom image in.
5. Enter a Name for the image. You can change the name later, if needed. You cannot use the name of an Oracle-
provided image for a custom image. Avoid entering confidential information.
6. Show Tagging Options: If you have permissions to create a resource, then you also have permissions to apply
free-form tags to that resource. To apply a defined tag, you must have permissions to use the tag namespace. For
more information about tagging, see Resource Tags on page 213. If you are not sure whether to apply tags, then
skip this option (you can apply tags later) or ask your administrator.
7. Click Create Custom Image.
To track the progress of the operation, you can monitor the associated work request.
Note:

If you see a message indicating that you are at the limit for custom images,
you must delete at least one image before you can create another. Or, you can
request a service limit increase.
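If you prefer to script this procedure, a roughly equivalent OCI CLI call is sketched below; the compartment and instance OCIDs and the display name are placeholders, and you should confirm the options against your installed CLI version.

oci compute image create \
    --compartment-id ocid1.compartment.oc1..example \
    --instance-id ocid1.instance.oc1.phx.example \
    --display-name my-custom-image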
To launch an instance from a custom image
1. Open the navigation menu. Under Core Infrastructure, go to Compute and click Custom Images.
2. Click the custom image that you're interested in.
3. Click Create Instance.
4. Provide additional launch options as described in Creating an Instance on page 700.


To edit the name or compatible shapes for a custom image


1. Open the navigation menu. Under Core Infrastructure, go to Compute and click Custom Images.
2. Click the custom image that you're interested in.
3. Click Edit Details.
4. Edit the name, or add and remove compatible shapes for the custom image. Avoid entering confidential
information.
5. To configure the minimum and maximum number of OCPUs that users can select when they use this image on a
flexible shape, click the down arrow in the row for the shape, and then enter the minimum and maximum OCPU
counts.
6. Click Save Changes.
Note:

After you add shape compatibility to an image, test the image on the
shape to ensure that the image actually works on the shape. Some images
(especially Windows) might never be cross-compatible with other shapes
because of driver or hardware differences.
To manage tags for a custom image
1. Open the navigation menu. Under Core Infrastructure, go to Compute and click Custom Images.
2. Click the custom image that you're interested in.
3. Click the Tags tab to view or edit the existing tags. Or click More Actions, and then click Add tags to add new
ones.
For more information, see Resource Tags on page 213.
To delete a custom image
1. Open the navigation menu. Under Core Infrastructure, go to Compute and click Custom Images.
2. Click the custom image that you're interested in.
3. Click More Actions, and then click Delete. Confirm when prompted.

Using the API


For information about using the API and signing requests, see REST APIs on page 4409 and Security Credentials on
page 181. For information about SDKs, see Software Development Kits and Command Line Interface on page 4262.
Use the following operations to manage custom images:
• CreateImage
• GetImage
• ListImages
• UpdateImage
• DeleteImage
• AddImageShapeCompatibilityEntry
• ListImageShapeCompatibilityEntries
• GetImageShapeCompatibilityEntry
• RemoveImageShapeCompatibilityEntry

Creating Windows Custom Images


You can create a Windows custom image of a bare metal or virtual machine (VM) instance's boot disk and use it to
launch other instances. Instances you launch from your image include the customizations, configuration, and software
installed when you created the image. For information about custom images, see Managing Custom Images on page
670. For information about the licensing requirements for Windows images, see Microsoft Licensing on Oracle
Cloud Infrastructure on page 809.


Windows supports two kinds of images: generalized and specialized. Generalized images are images that have been
cleaned of instance-specific information. Specialized images are point-in-time snapshots of the boot disk of a running
instance, and are useful for creating backups of an instance. Oracle Cloud Infrastructure supports bare metal and VM
instances launched from both generalized and specialized custom Windows images.
Generalized images
A generalized image has a generalized OS disk, cleaned of computer-specific information. The images are
generalized using Sysprep. Generalized images can be useful in scenarios such as quickly scaling an environment.
Generalized images can be configured to preserve the existing opc user's account, including the password, at the time
the image is created, or configured to recreate the opc user account, including generating a new, random password
that you retrieve using the API. For background information, see Sysprep (Generalize) a Windows installation.
Specialized images
A specialized image has an OS disk that is already fully installed, and is essentially a copy of the original bare metal
or VM instance. Specialized images are intended to be used for backups so that you can recover from a failure.
Specialized images are useful when you are testing a task and may need to roll back to a known good configuration.
Specialized images are not recommended for cloning multiple identical bare metal instances or VMs in the same
network because of issues with multiple computers having the same computer name and ID. When creating a
specialized image, you must remember the opc user's password; a new password is not generated when the instance
launches, and it cannot be retrieved from the console or API.
Creating a Generalized Image
Caution:

Creating a generalized image from an instance will render the instance non-
functional, so you should first create a custom image from the instance, and
then launch a new instance from the custom image. Steps 1 - 2 describe
how to do this. This is the instance that you'll generalize. Alternatively,
you can make a backup image of the instance that you can use to launch a
replacement instance if needed.
Caution:

If you upgrade to PowerShell 5.0/WMF 5.0, you may encounter an issue where Sysprep fails, which prevents the image generalization process from completing. If this occurs, you may not be able to log in to instances launched from the custom image. See Unable to log in to instance launched from new generalized Windows custom image for more information and how to work around the issue.
1. Create the new image using To create a custom image.
2. Launch an instance from the new image using To launch an instance from a custom image.
3. Connect to the instance using a Remote Desktop client.
4. Go to Windows Generalized Image Support Files on page 837 and download to the instance the file matching
the instance shape.
5. Right-click the file, and then click Run as administrator.
6. Extract the files to C:\Windows\Panther. The following files are extracted into the Panther folder for all
Windows Server versions:
• Generalize.cmd
• Specialize.cmd
• unattend.xml
• Post-Generalize.ps1


7. Optional: If you want to preserve the opc user account, edit C:\Program Files\bmcs\imageType.json and change the imageType setting to custom. A new password is not created and the password is not retrievable from the console or API.
If you want to configure the generalized image to recreate the opc user account when a new instance is launched from the image, leave the imageType setting defaulted to general. The new account's password can be retrieved through the API using GetInstanceDefaultCredentials.
8. Right-click Generalize.cmd, and then click Run as administrator. Keep in mind the following outcomes of
running this command:
• Your connection to the Remote Desktop client might immediately be turned off and you will be logged out of
the instance. If this does not occur, you should log out of the instance yourself.
• Because sysprep generalize turns off Remote Desktop, you won't be able to log in to the instance
again.
• Creating a generalized image essentially destroys the instance's functionality.
You should wait for a few minutes before proceeding to the following step to ensure the generalization process
has completed.
9. Create the new image using To create a custom image.
10. After you create an image from an instance that has been generalized, we recommend that you terminate the
instance. Although it may appear to be running, it won't be fully operable.
Creating a Specialized Image
Important:

When creating a specialized image, you must remember the opc user's
password. It cannot be retrieved from the Console or API.
You create a specialized image the same way you create other custom images. For steps, see Managing Custom
Images on page 670.

Image Import/Export
Oracle Cloud Infrastructure Compute lets you share custom images across tenancies and regions using image import/
export.

Linux-Based Operating Systems


The following operating systems support image import/export:
• Oracle Linux 6.x
• Oracle Linux 7.x
• Oracle Linux 8.x
• CentOS 7
• CentOS 8
• Ubuntu 16.04 and later
For more information about Oracle-provided platform images, see Oracle-Provided Images on page 633.

Windows-Based Operating Systems


The following Windows versions support image import/export:
• Windows Server 2012 Standard, Datacenter
• Windows Server 2012 R2 Standard, Datacenter
• Windows Server 2016 Standard, Datacenter
• Windows Server 2019 Standard, Datacenter


Note:

When exporting Windows-based images, you are responsible for complying


with the Microsoft Product Terms and all product use conditions, as well as
verifying your compliance with Microsoft.
For information about the licensing requirements for Windows images, see Microsoft Licensing on Oracle Cloud
Infrastructure on page 809.

Verify Your Windows Operating System


When importing custom Windows images, ensure that the version you select matches the Windows image that
you imported. Failure to provide the correct version and SKU information could be a violation of your Microsoft
Licensing Agreement.

Windows System Time Issue on Custom Windows Instances


If you change the time zone from the default setting on Windows VM instances, when the instance reboots or syncs
with the hardware clock, the system time will revert back to the time for the default time zone. However, the time
zone setting will stay set to the new time zone, so the system clock will be incorrect. You can fix this by setting the
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\TimeZoneInformation registry
key to 1.
Oracle-provided Windows images already have the RealTimeIsUniversal registry key set by default, but you
must set this for any custom Windows images that you import.
To fix this issue for custom Windows images:
1. Open the Windows Registry Editor and navigate to the HKEY_LOCAL_MACHINE\SYSTEM
\CurrentControlSet\Control\TimeZoneInformation registry key.
2. Create a new DWORD key named RealTimeIsUniversal and set the value to 1.
3. Reboot the instance.
4. Reset the time and time zone manually.
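If you prefer to script step 2, a command-line equivalent is sketched below. Run it from an elevated command prompt on the instance; the key path and value are the ones described above.

reg add "HKLM\SYSTEM\CurrentControlSet\Control\TimeZoneInformation" /v RealTimeIsUniversal /t REG_DWORD /d 1 /f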

Bring Your Own Image Scenarios


You can also use image import/export to share custom images from Bring Your Own Image (BYOI) on page 680
scenarios across tenancies and regions, so you don't need to recreate the image manually in each region. You must go
through the steps required to manually create the image in one of the regions, but after this is done, you can export
the image, making it available for import in additional tenancies and regions. Export the image in the .oci format,
which is a file format that contains a QCOW2 image file and Oracle Cloud Infrastructure-specific metadata.

Best practices for replicating an image across regions


You can replicate an image from one region to another region using the Console or API. At a high level:
1. Export the image to an Object Storage bucket in the same region as the image. For steps, see Exporting an Image
on page 677.
2. Copy the image to an Object Storage bucket in the destination region. For steps, see Copying Objects on page
3517.
3. Obtain the URL path to the image object. For steps, see To view object details on page 3451.
4. In the destination region, import the image. Use the URL path as the Object Storage URL. For steps, see
Importing an Image on page 678.
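The copy in step 2 can also be done with the OCI CLI. The following is a sketch with placeholder bucket, object, and region names; verify the option names against your installed CLI version.

oci os object copy \
    --bucket-name source-bucket \
    --source-object-name exported-image.oci \
    --destination-region us-ashburn-1 \
    --destination-bucket destination-bucket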

Best practices for sharing an image across tenancies


You can replicate an image from one tenancy to another tenancy using the Console or API. At a high level:
1. Export the image to an Object Storage bucket in the same region as the image. For steps, see Exporting an Image
on page 677.


2. Create a pre-authenticated request with read-only access for the image in the destination region. For steps, see
Working with Pre-Authenticated Requests on page 3512.
3. In the destination tenancy, import the image. Use the pre-authenticated request URL as the Object Storage URL.
For steps, see Importing an Image on page 678.

Object Storage Service URLs


When you import or export custom images using the Console, you might need to specify the Object Storage URL
pointing to the location that you want to import the image from or export the image to. Object Storage URLs are
structured as follows:

https://<host_name>/n/<namespace_name>/b/<bucket_name>/o/<object_name>

For example:

https://objectstorage.us-phoenix-1.oraclecloud.com/n/MyNamespace/b/MyBucket/
o/MyCustomImage.qcow2

Pre-Authenticated Requests
When using import/export across tenancies, you need to use an Object Storage pre-authenticated request. See
Working with Pre-Authenticated Requests on page 3512 for steps to create a pre-authenticated request. When you
go through these steps, after you click Create Pre-Authenticated Request, the Pre-Authenticated Request Details
dialog box opens. You must make a copy of the pre-authenticated request URL displayed here, because this is the
only time this URL is displayed. This is the Object Storage URL that you specify for import/export.
Note:

Pre-authenticated requests for a bucket


With image export, if you create the pre-authenticated request for a bucket,
you need to append the object name to the generated URL. For example:

/o/MyCustomImage.qcow2
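A read-only pre-authenticated request for a single image object can also be created with the OCI CLI. This is a sketch with placeholder bucket, object, and request names and an example expiry date; check the options against your installed CLI version.

oci os preauth-request create \
    --bucket-name MyBucket \
    --name image-share-par \
    --object-name MyCustomImage.qcow2 \
    --access-type ObjectRead \
    --time-expires "2021-06-30T00:00:00Z"

Copy the URL returned by this call; as noted above, it is the Object Storage URL that you supply during import.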

Exporting an Image
You can use the Console or API to export images, and the exported images are stored in the Oracle Cloud
Infrastructure Object Storage service. To perform an image export, you need write access to the Object Storage
bucket for the image. For more information, see Overview of Object Storage on page 3420 and Let users write
objects to Object Storage buckets on page 2157.

To export an image using the Console


1. Open the navigation menu. Under Core Infrastructure, go to Compute and click Custom Images.
2. Click the custom image that you're interested in.
3. Click Export.


4. Specify the Object Storage location to export the image to:


• Export to an Object Storage bucket: Select a bucket. Then, enter a name for the exported image. Avoid
entering confidential information.
• Export to an Object Storage URL: Enter the Object Storage URL.
5. In the Image format list, select the format that you want to export the image to. The following formats are
available:
• Oracle Cloud Infrastructure file with a QCOW2 image and OCI metadata (.oci). Use this format to export a
custom image that you want to import into other tenancies or regions.
• QEMU Copy On Write (.qcow2)
• Virtual Disk Image (.vdi) for Oracle VM VirtualBox
• Virtual Hard Disk (.vhd) for Hyper-V
• Virtual Machine Disk (.vmdk)
6. Click Export Image.
After you click Export Image, the image state changes to Exporting. You can still launch instances while the image
is exporting, but you can't delete the image until the export has finished. To track the progress of the operation, you
can monitor the associated work request.
When the export is complete, the image state changes to Available. If the image state changes to Available, but you
don't see the exported image in the Object Storage location you specified, this means that the export failed, and you
will need to go through the steps again to export the image.
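The export can also be started from the OCI CLI through the ExportImage operation's wrapper. The following is a sketch with placeholder image OCID, namespace, bucket, and object names; confirm the command and options against your installed CLI version.

oci compute image export to-object \
    --image-id ocid1.image.oc1.phx.example \
    --namespace MyNamespace \
    --bucket-name MyBucket \
    --name exported-image.oci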

Importing an Image
You can use the Console or API to import exported images from Object Storage. To import an image, you need read
access to the Object Storage object containing the image. For more information, see Let users download objects from
Object Storage buckets on page 2158.

To import an image using the Console


1. Open the navigation menu. Under Core Infrastructure, go to Compute and click Custom Images.
2. Click Import Image.
3. In the Create in Compartment list, select the compartment that you want to import the image to.
4. Enter a Name for the image. Avoid entering confidential information.
5. Select the Operating System:
• For Linux images, select Linux.
• For Windows images, select Windows. Select the Operating System Version, and then certify that the
selected operating system complies with Microsoft licensing agreements.
6. Specify the Object Storage location to import the image from:
• Import from an Object Storage bucket: Select the Bucket that contains the image. In the Object Name list,
select the image file.
• Import from an Object Storage URL: Enter the Object Storage URL of the image. When importing across
tenancies, you must specify a pre-authenticated request URL.
7. In the Image Type section, select the format of the image.
8. Select the Launch Mode:
• For custom images where the image type is .oci, the launch mode is disabled. Oracle Cloud Infrastructure
selects the appropriate launch mode based on the launch mode for the source image.
• For custom images exported from Oracle Cloud Infrastructure where the image type is QCOW2, select Native
Mode.
• To import other custom images, select Paravirtualized Mode or Emulated Mode. For more information, see
Bring Your Own Image (BYOI) on page 680.
9. Show Tagging Options: If you have permissions to create a resource, then you also have permissions to apply
free-form tags to that resource. To apply a defined tag, you must have permissions to use the tag namespace. For

Oracle Cloud Infrastructure User Guide 678


Compute

more information about tagging, see Resource Tags on page 213. If you are not sure whether to apply tags, then
skip this option (you can apply tags later) or ask your administrator.
10. Click Import Image.
After you click Import Image, you'll see the imported image in the Custom Images list for the compartment, with a
state of Importing. To track the progress of the operation, you can monitor the associated work request.
When the import completes successfully, the state changes to Available. If the state does not change, or no entry
appears in the Custom Images list, the import failed. If the import failed, ensure you have read access to the Object
Storage object, and that the object contains a supported image.
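An equivalent import from a bucket can be performed with the OCI CLI. This is a sketch with placeholder compartment OCID, namespace, bucket, object, and display names; confirm the options against your installed CLI version, and supply the same operating system details you would certify in the Console.

oci compute image import from-object \
    --compartment-id ocid1.compartment.oc1..example \
    --namespace MyNamespace \
    --bucket-name MyBucket \
    --name exported-image.oci \
    --display-name imported-image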

Editing Image Details


You can edit the details of custom images, such as the image name and compatible shapes for the image. For more
information, see To edit the name or compatible shapes for a custom image on page 673 in Managing Custom
Images on page 670.

Managing Tags for an Image


You can add tags to your resources to help you organize them according to your business needs. You can add tags
at the time you create a resource, or you can update the resource later with the desired tags. For general information
about applying tags, see Resource Tags on page 213.
To manage tags for an image
1. Open the navigation menu. Under Core Infrastructure, go to Compute and click Custom Images.
2. Click the image that you're interested in.
3. Click the Tags tab to view or edit the existing tags. Or click More Actions, and then click Add tags to add new
ones.

Using the API


For information about using the API and signing requests, see REST APIs on page 4409 and Security Credentials on
page 181. For information about SDKs, see Software Development Kits and Command Line Interface on page 4262.
Use the following API operations for custom image import/export:
• ExportImage: Exports a custom image to Object Storage.
• CreateImage: To import an exported image, specify ImageSourceDetails in the request body.
• AddImageShapeCompatibilityEntry: Adds a shape to the compatible shapes list for the image.
• ListImageShapeCompatibilityEntries
• GetImageShapeCompatibilityEntry
• RemoveImageShapeCompatibilityEntry: Removes a shape from the compatible shapes list for the image.

X5 and X7 Compatibility for Image Import/Export


Oracle X5, X6, and X7 servers have different host hardware. As a result, using an X5 or X6 image on an X7 bare
metal or virtual machine (VM) instance may not work without additional modifications. Oracle Cloud Infrastructure
recommends for X7 hosts that you use the Oracle-provided images for X7. See Oracle-Provided Image Release Notes
for more information about which images support X7. These images have been explicitly created and tested with X7
hardware.
If you attempt to use an existing X5 image on X7 hardware, note the following:
• No Windows versions are cross-compatible.
• Oracle Autonomous Linux 7 and Oracle Linux 8 are cross-compatible.


• Oracle Linux 6, Oracle Linux 7, Ubuntu 16.04, CentOS 7, and CentOS 8 are cross-compatible. However, you
must update the kernel to the most recent version to install the latest device drivers. To do this, run the following
commands from a terminal session:
• Oracle Linux

yum update
• CentOS 7, CentOS 8

yum update
• Ubuntu 16.04

apt-get update
apt-get dist-upgrade

If you attempt to use an X6 image on non-X6 hardware, note the following:


• Oracle Linux 6, all CentOS versions, and all Windows versions are not cross-compatible.
• Oracle Autonomous Linux 7 and Oracle Linux 8 are cross-compatible.
• Oracle Linux 7, Ubuntu 18.04, and Ubuntu 16.04 are cross-compatible. Use the Oracle-provided images for X6.
The primary device drivers that are different between X5, X6, and X7 hosts are:
• Network device drivers
• NVMe drive device drivers
• GPU device drivers
Additional updates might be required depending on how you have customized the image.

Bring Your Own Image (BYOI)


The Bring Your Own Image (BYOI) feature enables you to bring your own versions of operating systems to the cloud
as long as the underlying hardware supports it. The services do not depend on the OS you run.
The BYOI feature does the following things:
• Enables virtual machine cloud migration projects.
• Supports both old and new operating systems.
• Encourages experimentation.
• Increases infrastructure flexibility.
Note:

Licensing Requirements
You must comply with all licensing requirements when you upload and start
instances based on OS images that you supply.

Bringing Your Own Image


A critical part of any lift-and-shift cloud migration project is the migration of on-premises virtual machines (VMs) to
the cloud. You can import your on-premises virtualized root volumes to Oracle Cloud Infrastructure using the custom
image import feature, and then launch Compute instances using those images.
You can import Windows and Linux-based custom images and use them to launch VMs on Oracle Cloud
Infrastructure. Bringing your own image to a bare metal instance is not supported.


• Windows images
These Windows versions support custom image import:
• Windows Server 2008 R2 Standard, Enterprise, Datacenter
• Windows Server 2012 Standard, Datacenter
• Windows Server 2012 R2 Standard, Datacenter
• Windows Server 2016 Standard, Datacenter
• Windows Server 2019 Standard, Datacenter
For steps to import a Windows image, see Importing Custom Windows Images on page 682.
Bring your own license (BYOL) for Windows Server is not permitted when launching a VM instance on a shared
host. For more information about BYOL and the licensing requirements for Windows images, see Licensing
Options for Microsoft Windows on page 815 and Microsoft Licensing on Oracle Cloud Infrastructure on page
809.
• Linux images
These Linux and UNIX-like operating systems support custom image import:

For each operating system, the supported versions and the preferred launch mode are as follows:
• CentOS: 7 or later (Paravirtualized); 4.0, 4.8, 5.11, 6.9 (Emulated)
• Debian: 8 or later (Paravirtualized); 5.0.10, 6.0, 7 (Emulated)
• Flatcar Container Linux: 2345.3.0 or later (Paravirtualized)
• FreeBSD: 8, 9, 10, 11, 12 or later (Emulated)
• openSUSE Leap: 15.1 (Paravirtualized)
• Oracle Linux: 7.x, 8.x (Paravirtualized); 5.11, 6.x (Emulated)
• RHEL: 7 or later (Paravirtualized); 4.5, 5.5, 5.6, 5.9, 5.11, 6.5, 6.9 (Emulated)
• SUSE: 12.2 or later (Paravirtualized); 11, 12.1 (Emulated)
• Ubuntu: 13.04 or later (Paravirtualized); 12.04 (Emulated)

You might also have success importing other distributions of Linux.


For steps to import a Linux-based image, see Importing Custom Linux Images on page 687.

Bringing Your Own Hypervisor Guest OS


You can bring your own hypervisor guest OS using Kernel-based Virtual Machine (KVM) or Hyper-V.
Note:

Bring your own hypervisor deployment of ESXi on bare metal Compute instances is not supported. ESXi is supported only by provisioning an Oracle Cloud VMware Solution software-defined data center (SDDC). See Oracle Cloud VMware Solution on page 4062 for more information.

Bringing Your Own KVM


You can bring your own operating system images or older operating systems, such as Ubuntu 6.x, RHEL 3.x, and
CentOS 5.4, using KVM on bare metal instances.
To bring your own KVM, first create a bare metal instance using the KVM image from Marketplace. Then, copy your
on-premises guest OS to KVM on the bare metal instance.
For more information, see the following resources:
• Getting Started: Oracle Linux KVM Image for Oracle Cloud Infrastructure
• Installing and Configuring KVM on Bare Metal Instances with Multi-VNIC

Bringing Your Own Hyper-V


You can bring your own operating system images or older operating systems, such as Windows Server 2003 and
Windows Server 2008, using Hyper-V on bare metal instances.
To bring your own Hyper-V, first create a bare metal instance using the Oracle-provided Windows Server Datacenter
platform image. Oracle Cloud Infrastructure will issue a license for Windows Server when the instance is launched.
Then, copy your on-premises guest OS to Hyper-V on the bare metal instance. No additional license is required
because Windows Server Datacenter includes unlimited virtual machines.
Be aware of the following considerations:
• Oracle Cloud Infrastructure will issue a license when you launch an instance using a custom image. If you want to
bring your own license (BYOL) for Windows Server, you must activate Windows Server with your own license.
For steps, see Activating Licenses on a Dedicated Host on page 817.
• Importing your own ISO image is not supported.
For a list of supported Hyper-V guests, see the following resources:
• Supported Windows guest operating systems for Hyper-V on Windows Server
• Supported Linux and FreeBSD virtual machines for Hyper-V on Windows
For more information about deploying Hyper-V, see Deploying Hyper-V on Oracle Cloud Infrastructure.

Importing Custom Windows Images


The Compute service enables you to import Windows images that were created outside of Oracle Cloud
Infrastructure. For example, you can import images running on your on-premises physical or virtual machines (VMs),
or VMs running in Oracle Cloud Infrastructure Classic. You can then launch your imported images on Compute
virtual machines.
For information about the licensing requirements for Windows images, see Microsoft Licensing on Oracle Cloud
Infrastructure on page 809.
Supported Operating Systems
These Windows versions support custom image import:
• Windows Server 2008 R2 Standard, Enterprise, Datacenter
• Windows Server 2012 Standard, Datacenter
• Windows Server 2012 R2 Standard, Datacenter
• Windows Server 2016 Standard, Datacenter
• Windows Server 2019 Standard, Datacenter


Note:

• Oracle Cloud Infrastructure has tested the operating systems listed


previously and will support customers in ensuring that instances launched
from these images and built according to the guidelines in this topic are
accessible using RDP.
• For OS editions not listed previously, Oracle Cloud Infrastructure will
provide commercially reasonable support to customers in an effort to get
instances that are launched from these images accessible via RDP.
• Support from Oracle Cloud Infrastructure in launching an instance from a
custom OS does not ensure that the operating system vendor also supports
the instance.
• Oracle Cloud Infrastructure licenses and charges the Windows licensing
fee for all instances launched using an imported Windows OS image. This
applies whether or not those instances are registered with Oracle Cloud
Infrastructure's Microsoft Key Management service.
• The Oracle Cloud Agent (used for monitoring) is not supported on
Windows Server 2008 R2.
Windows Source Image Requirements
Custom images must meet the following requirements:
• The maximum image size is 400 GB.
• The image must be set up for BIOS boot.
• Only one disk is supported, and it must be the boot drive with a valid master boot record (MBR) and boot loader.
You can migrate additional data volumes after you import the image's boot volume.
• The minimum boot volume size is 256 GB. For more information, see Custom Boot Volume Sizes on page
614.
• The boot process must not require additional data volumes to be present for a successful boot.
• The disk image cannot be encrypted.
• The disk image must be a VMDK or QCOW2 file.
• Create the image file by cloning the source volume, not by creating a snapshot.
• VMDK files must be either the "single growable" (monolithicSparse) type or the "stream
optimized" (streamOptimized) type, both of which consist of a single VMDK file. All other VMDK formats,
such as those that use multiple files, split volumes, or contain snapshots, are not supported.
• The network interface must use DHCP to discover the network settings. When you import a custom image,
existing network interfaces are not recreated. Any existing network interfaces are replaced with a single NIC after
the import process is complete. You can attach additional VNICs after you launch the imported instance.
• The network configuration must not hardcode the MAC address for the network interface.
Preparing Windows VMs for Import
Before you can import a custom Windows image, you must prepare the image to ensure that instances launched from
the image can boot correctly and that network connections will work.
You can perform the tasks described in this section on the running source system. If you have concerns about
modifying the live source system, you can export the image as-is, import it into Oracle Cloud Infrastructure, and
then launch an instance based on the custom image. You can then connect to the instance using the VNC console and
perform the preparation steps.
Important:

The system drive where Windows is installed will be imported to Oracle Cloud Infrastructure. All partitions on the drive will follow through to the imported image. Any other drives will not be imported, and you must re-create them on the instance after import. You will then need to manually move the data on the non-system drives.
To prepare a Windows VM for import:
1. Follow your organization's security guidelines to ensure that the Windows system is secured. This can include, but
is not limited to the following tasks:
• Install the latest security updates for the operating system and installed applications.
• Enable the firewall, and configure it so that you only enable the rules which are needed.
• Disable unnecessary privileged accounts.
• Use strong passwords for all accounts.
2. Configure Remote Desktop Protocol (RDP) access to the image:
a. Enable Remote Desktop connections to the image.
b. Modify the Windows Firewall inbound port rule to allow RDP access for both Private and Public network
location types. When you import the image, the Windows Network Location Awareness service will identify
the network connection as a Public network type.
3. Determine whether the current Windows license type is a volume license by running the following command in
PowerShell:

Get-CimInstance -ClassName SoftwareLicensingProduct | where {$_.PartialProductKey} | select ProductKeyChannel

If the license is not a volume license, after you import the image, you will update the license type.
4. If you plan to launch the imported image on more than one VM instance, create a generalized image of the boot
disk. A generalized image is cleaned of computer-specific information, such as unique identifiers. When you
create instances from a generalized image, the unique identifiers are regenerated. This prevents two instances that
are created from the same image from colliding on the same identifiers.
5. Create a backup of the root volume.
6. If the VM has remotely attached storage, such as NFS or block volumes, configure any services that rely on this
storage to start manually. Remotely attached storage is not available the first time that an imported instance boots
on Oracle Cloud Infrastructure.
7. Ensure that all network interfaces use DHCP, and that the MAC address and IP addresses are not hardcoded. See
your system documentation for steps to perform network configuration for your system.

8. Download the Oracle Windows VirtIO drivers:


a. Sign in to the Oracle Software Delivery Cloud site.
b. In the All Categories list, select Release.
c. Type Oracle Linux 7.7 in the search box and click Search.
d. Add REL: Oracle Linux 7.7.x to your cart, and then click Continue.
e. In the Platforms/Languages list, select x86 64 bit. Click Continue.
f. Accept the license agreement and then click Continue.
g. Select the check box next to Oracle VirtIO Drivers Version for Microsoft Windows 1.1.5. Clear the other
check boxes.
h. Click Download and then follow the prompts.
9. Install the Oracle VirtIO drivers for Windows:
a. Follow the prompts in the installation wizard. On the Installation Type page, select Custom.

b. Reboot the VM.


10. Stop the VM.
11. Clone the stopped VM as a VMDK or QCOW2 file, and then export the image from your virtualization
environment. See the tools documentation for your virtualization environment for steps.
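If the exported disk does not already meet the source image requirements (for example, it is a multi-file VMDK), you can usually convert it with qemu-img. The following is a minimal sketch only; qemu-img availability, the source format, and the file names are assumptions for illustration:

# Convert an exported disk to a single QCOW2 file (names are illustrative)
qemu-img convert -p -f vmdk -O qcow2 exported-vm.vmdk windows-vm.qcow2

# Or produce a single stream-optimized VMDK instead
qemu-img convert -p -O vmdk -o subformat=streamOptimized exported-vm.vmdk windows-vm.vmdk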
Importing a Windows-Based VM
After you prepare a Windows image for import, follow these steps to import the image:
1. Upload the image file to an Object Storage bucket. You can upload the file using the Console or using the
command line interface (CLI). If you use the CLI, use the following command:

oci os object put -bn <destination_bucket_name> --file <path_to_the_VMDK_or_QCOW2_file>
2. Open the navigation menu. Under Core Infrastructure, go to Compute and click Custom Images.

3. Click Import Image.


4. In the Create in Compartment list, select the compartment that you want to import the image to.
5. Enter a Name for the image. Avoid entering confidential information.
6. For the Operating System, select Windows.
7. In the Operating System Version list, select the version of Windows.
8. Confirm that you chose the operating system version that complies with your Microsoft licensing agreement, and
then select the compliance check box.
Important:

Failure to provide the correct version and SKU information could be a
violation of your Microsoft Licensing Agreement.
9. Select the Import from an Object Storage bucket option.
10. Select the Bucket that you uploaded the image to.
11. In the Object Name list, select the image file that you uploaded.
12. For the Image Type, select the file type of the image, either VMDK or QCOW2.
13. In the Launch Mode area, select Paravirtualized Mode.
14. Show Tagging Options: If you have permissions to create a resource, then you also have permissions to apply
free-form tags to that resource. To apply a defined tag, you must have permissions to use the tag namespace. For
more information about tagging, see Resource Tags on page 213. If you are not sure whether to apply tags, then
skip this option (you can apply tags later) or ask your administrator.
15. Click Import Image.
The imported image appears in the Custom Images list for the compartment, with a state of Importing. When the
import completes successfully, the state changes to Available.
If the state doesn't change, or no entry appears in the Custom Images list, the import failed. Ensure that you have
read access to the Object Storage object, and that the object contains a supported image.
16. Complete the post-import tasks.
Post-Import Tasks for Windows Images
After you import a custom Windows-based image, do the following:
1. If you want to use the image on AMD or X6-based shapes, add the shapes to the image's list of compatible shapes.
2. Create an instance based on the custom image. For the image source, select Custom Images, and then select the
image that you imported.
3. Enable Remote Desktop Protocol (RDP) access to the Compute instance.
4. Connect to the instance using RDP.
5. If the instance requires any remotely attached storage, such as block volumes or file storage, create and attach it.
6. Create and attach any required secondary VNICs.
7. Test that all applications are working as expected.
8. Reset any services that were set to start manually.

9. Register the instance with the Oracle-provided Key Management Service (KMS) server:
a. On the instance, open PowerShell as Administrator.
b. To set the KMS endpoint, run the following command:

slmgr /skms 169.254.169.253:1688


c. If the Windows license type that you noted while preparing the image isn't a volume license, you must update
the license type. Run the following command:

slmgr /ipk <setup key>

<setup key> is the KMS client setup key that corresponds to the version of Windows that you imported:

Windows Version KMS Client Setup Key


Windows Server 2008 R2 Standard YC6KT-GKW9T-YTKYR-T4X34-R7VHC
Windows Server 2008 R2 Enterprise 489J6-VHDMP-X63PK-3K798-CPX3Y
Windows Server 2008 R2 Datacenter 74YFP-3QFB3-KQT8W-PMXWJ-7M648
Windows Server 2012 Standard XC9B7-NBPP2-83J2H-RHMBY-92BT4
Windows Server 2012 Datacenter 48HP8-DN98B-MYWDG-T2DCC-8W83P
Windows Server 2012 R2 Standard D2N9P-3P6X9-2R39C-7RTCD-MDVJX
Windows Server 2012 R2 Datacenter W3GGN-FT8W3-Y4M27-J84CP-Q3VJ9
Windows Server 2016 Standard WC2BQ-8NRM3-FDDYY-2BFGV-KHKQY
Windows Server 2016 Datacenter CB7KF-BWN84-R7R2Y-793K2-8XDDG
Windows Server 2019 Standard N69G4-B89J2-4G8F4-WWYCC-J464C
Windows Server 2019 Datacenter WMDGN-G9PQG-XVVXX-R3X43-63DFG
d. To activate Windows, run the following command:

slmgr /ato
e. To verify the license status, run the following command:

Get-CimInstance -ClassName SoftwareLicensingProduct | where {$_.PartialProductKey} | select Description, LicenseStatus

If the LicenseStatus is 1, the instance is properly licensed. It might take up to 48 hours for the license
status to update.
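For example, to register an imported Windows Server 2012 R2 Standard instance with the KMS server, switch it to the volume license channel, and activate it (the Windows version here is only an illustrative choice; substitute the setup key for your version from the table above):

slmgr /skms 169.254.169.253:1688
slmgr /ipk D2N9P-3P6X9-2R39C-7RTCD-MDVJX
slmgr /ato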

Importing Custom Linux Images


The Compute service lets you import Linux-based images that were created outside of Oracle Cloud Infrastructure.
For example, you can import images running on your on-premises physical or virtual machines (VMs), or VMs
running in Oracle Cloud Infrastructure Classic. You can then launch your imported images on Compute virtual
machines.
Supported Operating Systems
You can launch imported Linux VMs in either paravirtualized mode or emulated mode. On AMD shapes, imported
images are supported in paravirtualized mode only.
Paravirtualized mode offers better performance than emulated mode. We recommend that you use paravirtualized
mode if your OS supports it. Linux-based operating systems running the kernel version 3.4 or later support
paravirtualized drivers. You can verify your system's kernel version using the uname command.

To verify the kernel version using the uname command


Run the following command:

uname -a

The output should look similar to this sample:

Linux ip_bash 4.14.35-1818.2.1.el7uek.x86_64 #2 SMP Mon Aug 27 21:16:31 PDT 2018 x86_64 x86_64 x86_64 GNU/Linux

The kernel version is the numeric string that follows the hostname in the output. In the sample output shown
previously, the version is 4.14.35.

Linux and UNIX-like Operating Systems that Support Custom Image Import
These Linux and UNIX-like operating systems support custom image import:

Linux and UNIX-like Operating Systems    Supported Versions                    Preferred Launch Mode

CentOS                                   7 or later                            Paravirtualized
                                         4.0, 4.8, 5.11, 6.9                   Emulated
Debian                                   8 or later                            Paravirtualized
                                         5.0.10, 6.0, 7                        Emulated
Flatcar Container Linux                  2345.3.0 or later                     Paravirtualized
FreeBSD                                  8, 9, 10, 11, 12 or later             Emulated
openSUSE Leap                            15.1                                  Paravirtualized
Oracle Linux                             7.x, 8.x                              Paravirtualized
                                         5.11, 6.x                             Emulated
RHEL                                     7 or later                            Paravirtualized
                                         4.5, 5.5, 5.6, 5.9, 5.11, 6.5, 6.9    Emulated
SUSE                                     12.2 or later                         Paravirtualized
                                         11, 12.1                              Emulated
Ubuntu                                   13.04 or later                        Paravirtualized
                                         12.04                                 Emulated

Note:

• Oracle Cloud Infrastructure has tested the operating systems listed in
the previous table and will support customers in ensuring that instances
launched from these images and built according to the guidelines in this
topic are accessible using SSH.
• For any OS version other than those covered by an official support service
from Oracle (for example, Oracle Linux with Premier Support), Oracle
Cloud Infrastructure will provide commercially reasonable support
limited to getting an instance launched and accessible via SSH.
• Support from Oracle Cloud Infrastructure in launching an instance from
a custom OS does not ensure that the operating system vendor also
supports the instance. Customers running Oracle Linux on Oracle Cloud
Infrastructure automatically have access to Oracle Linux Premier Support.
Tip:

If your image supports paravirtualized drivers, you can convert your existing
emulated mode instances into paravirtualized instances. Create a custom
image of your instance, export it to Object Storage, and then reimport it using
paravirtualized mode.
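If you prefer to script that conversion, the CLI can export a custom image to Object Storage and re-import it with a different launch mode. This is a rough sketch only; the namespace, bucket, and object names are placeholders, and you should confirm the exact option names with oci compute image export to-object --help and oci compute image import from-object --help before relying on them:

oci compute image export to-object --image-id <IMAGE_OCID> --namespace <NAMESPACE> --bucket-name <BUCKET_NAME> --name exported-image.qcow2

oci compute image import from-object --namespace <NAMESPACE> --bucket-name <BUCKET_NAME> --name exported-image.qcow2 -c <COMPARTMENT_OCID> --launch-mode PARAVIRTUALIZED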
Linux Source Image Requirements
Custom images must meet the following requirements:
• The maximum image size is 400 GB.
• The image must be set up for BIOS boot.
• Only one disk is supported, and it must be the boot drive with a valid master boot record (MBR) and boot loader.
You can migrate additional data volumes after you import the image's boot volume.
• The boot process must not require additional data volumes to be present for a successful boot.
• The boot loader should use LVM or a UUID to locate the boot volume.
• The disk image cannot be encrypted.
• The disk image must be a VMDK or QCOW2 file.
• Create the image file by cloning the source volume, not by creating a snapshot.
• VMDK files must be either the "single growable" (monolithicSparse) type or the "stream
optimized" (streamOptimized) type, both of which consist of a single VMDK file. All other VMDK formats,
such as those that use multiple files, split volumes, or contain snapshots, are not supported.
• The network interface must use DHCP to discover the network settings. When you import a custom image,
existing network interfaces are not recreated. Any existing network interfaces are replaced with a single NIC after
the import process is complete. You can attach additional VNICs after you launch the imported instance.
• The network configuration must not hardcode the MAC address for the network interface.
We recommend that you enable certificate-based SSH; however, this is optional. If you want your image to
automatically use SSH keys supplied from the User Data field when you launch an instance, you can install
cloud-init when preparing the image. See Creating an Instance on page 700 for more information about providing
user data.
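For example, on an Oracle Linux or CentOS 7 source system, cloud-init can typically be installed from the distribution repositories before you export the image (the package name and repository availability are assumptions that depend on your distribution):

sudo yum install -y cloud-init
cloud-init --version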
Preparing Linux VMs for Import
Before you import a custom Linux image, you must prepare the image to ensure that instances launched from the
image can boot correctly and that network connections will work. Do the following:
1. Optionally, configure your Linux image to support serial console connections. A console connection can help you
remotely troubleshoot malfunctioning instances, such as an imported image that does not complete a successful
boot.
2. Create a backup of the root volume.
3. If the VM has remotely attached storage, such as NFS or block volumes, configure any services that rely on this
storage to start manually. Remotely attached storage is not available the first time that an imported instance boots
on Oracle Cloud Infrastructure.
4. Ensure that all network interfaces use DHCP, and that the MAC address and IP addresses are not hardcoded. See
your system documentation for steps to perform network configuration for your system.
5. Stop the VM.
6. Clone the stopped VM as a VMDK or QCOW2 file, and then export the image from your virtualization
environment. See the tools documentation for your virtualization environment for steps.
Importing a Linux-Based VM
After you prepare a Linux image for import, follow these steps to import the image:

1. Upload the image file to an Object Storage bucket. You can upload the file using the Console or using the
command line interface (CLI). If you use the CLI, use the following command:

oci os object put -bn <destination_bucket_name> --file <path_to_the_VMDK_or_QCOW2_file>
2. Open the navigation menu. Under Core Infrastructure, go to Compute and click Custom Images.
3. Click Import Image.
4. In the Create in Compartment list, select the compartment that you want to import the image to.
5. Enter a Name for the image. Avoid entering confidential information.
6. For the Operating System, select Linux.
7. Select the Import from an Object Storage bucket option.
8. Select the Bucket that you uploaded the image to.
9. In the Object Name list, select the image file that you uploaded.
10. For the Image Type, select the file type of the image, either VMDK or QCOW2.
11. Depending on your image's version of Linux, in the Launch Mode area, select Paravirtualized Mode or
Emulated Mode. If your image supports paravirtualized drivers, we recommend that you select paravirtualized
mode.
12. Show Tagging Options: If you have permissions to create a resource, then you also have permissions to apply
free-form tags to that resource. To apply a defined tag, you must have permissions to use the tag namespace. For
more information about tagging, see Resource Tags on page 213. If you are not sure whether to apply tags, then
skip this option (you can apply tags later) or ask your administrator.
13. Click Import Image.
The imported image appears in the Custom Images list for the compartment, with a state of Importing. When the
import completes successfully, the state changes to Available.
If the state doesn't change, or no entry appears in the Custom Images list, the import failed. Ensure that you have
read access to the Object Storage object, and that the object contains a supported image.
14. Complete the post-import tasks.
Post-Import Tasks for Linux Images
After you import a custom Linux-based image, do the following:
1. If you want to use the image on AMD or X6-based shapes, add the shapes to the image's list of compatible shapes.
2. Create an instance based on the custom image. For the image source, select Custom Images, and then select the
image that you imported.
3. Connect to the instance using SSH.
4. If the instance requires any remotely attached storage, such as block volumes or file storage, create and attach it.
If you are using iSCSI attachments (see iSCSI on page 505), see Recommended iSCSI Initiator Parameters for
Linux-based Images on page 510.
5. Create and attach any required secondary VNICs.
6. Test that all applications are working as expected.
7. Reset any services that were set to start manually.
8. If you enabled serial console access to the image, test it by creating a serial console connection to the instance.
See the current issues and workarounds for known issues with imported custom images.
Enabling Serial Console Access for Imported Linux Images
You can configure your custom Linux image to support connections using the serial console feature in the Compute
service.
For more information about serial console connections, and steps to troubleshoot if your image has network
connectivity issues after it is launched, see Troubleshooting Instances Using Instance Console Connections on page
819.

The serial console connection in Oracle Cloud Infrastructure uses the first serial port, ttyS0, on the VM. The boot
loader and the operating system should be configured to use ttyS0 as a console terminal for both input and output.
Configuring the Boot Loader
The steps to configure the boot loader to use ttyS0 as a console terminal for both input and output depend on the
GRUB version. Run the following command on the operating system to determine the GRUB version:

grub-install --version

If the version number returned is 2.x, use the steps for GRUB 2. For earlier versions, use the steps for GRUB.
To configure GRUB2
1. Run the following command to modify the GRUB configuration file:

sudo vi /etc/default/grub
2. Confirm that the configuration file contains the following:

GRUB_SERIAL_COMMAND="serial --unit=0 --speed=115200"
GRUB_TERMINAL="serial console"
3. Append the following to the end of the GRUB_CMDLINE_LINUX line:

console=tty1 console=ttyS0,115200

If GRUB_CMDLINE_LINUX does not exist, create this line, using GRUB_CMDLINE_OUTPUT as a template.
4. Regenerate the GRUB2 configuration using the following command:

sudo grub2-mkconfig -o /boot/grub2/grub.cfg

If you have a beta version of GRUB 2, use this command instead:

sudo grub-mkconfig -o /boot/grub/grub.cfg
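After these edits, the relevant portion of /etc/default/grub might look like the following sketch (the placeholder <existing parameters> stands for whatever kernel parameters your distribution already sets):

GRUB_TERMINAL="serial console"
GRUB_SERIAL_COMMAND="serial --unit=0 --speed=115200"
GRUB_CMDLINE_LINUX="<existing parameters> console=tty1 console=ttyS0,115200"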

To configure GRUB
1. Run the following command to modify the GRUB configuration file:

sudo vi /boot/grub/grub.conf
2. Add the following after the line containing timeout:

serial --unit=0 --speed=115200


terminal --timeout=5 serial console
3. Append the following to each kernel line:

console=tty1 console=ttyS0,115200

Configuring the Operating System


The operating system may already be configured to use ttyS0 as a console terminal for both input and output. To
verify, run the following command:

sudo vi /etc/securetty

Check the file for ttyS0. If you don't see it, append ttyS0 to the end of the file.

Validating Serial Console Access


After completing the steps to enable serial console access to the image, you should validate that serial console access
is working by testing the image with serial console in your virtualization environment. Consult the documentation for
your virtualization environment for steps to do this. Verify that the boot output displays in the serial console output
and that there is interactive input after the image has booted.

Troubleshooting the Serial Console


If no output is displayed on the serial console, verify in the configuration for your virtualization environment that the
serial console device is attached to the first serial port.
If the serial console displays output, but there is no interactive input available, check that there is a terminal process
listening on the ttyS0 port. To do this, run the following command:

ps aux | grep ttyS0

This command should output a terminal process that is listening on the ttyS0 port. For example, if your system is
using getty, you will see the following output:

/sbin/getty ttyS0

If you don't see this output, it is likely that a login process is not configured for the serial console connection. To
resolve this, update the init settings so that a terminal process listens on ttyS0 at startup.
For example, if your system is using getty, add the following command to the init settings to run on system startup:

getty -L 9600 ttyS0 vt102

The steps to do this will vary depending on the operating system, so consult the documentation for the image's
operating system.
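On systemd-based distributions (most current Linux releases), the usual equivalent is to enable the serial getty unit for ttyS0. This is a hedged example rather than a required step; consult your distribution's documentation:

sudo systemctl enable --now serial-getty@ttyS0.service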

Configuring Image Capabilities for Custom Images


Image capabilities are the configuration options available when launching an instance from an image. Some image
capability examples are the firmware used to boot the instance, the volume attachment types supported, and so
on. The full set of image capabilities provided by Oracle Cloud Infrastructure Compute are defined in the global
image capability schema. You can also create your own custom image capability schemas based on the global image
capability schema to specify and configure image capabilities for your custom images. Using these schemas, you can
customize the image configuration and options available when users launch instances from your custom images.
You can configure image capability schemas using the REST APIs, SDKs, or the CLI.
Caution:

This feature lets you change image capabilities from the default capabilities that Oracle recommends, and it should
be used for advanced custom image scenarios only. Ensure that you understand the optimal configuration options
for your custom image.

Global Image Capability Schema


The following JSON is returned when you use the GetComputeGlobalImageCapabilitySchemaVersion API
operation or the global-image-capability-schema-version CLI command. It represents the full set of
image capabilities available for images. The default value specified for each element is the recommended value for
that option. You can create a schema to customize these options; however, it must be a subset of the global schema.
You cannot specify values that are not included in the global image capability schema.

{
  "Compute.Firmware": {
    "descriptorType": "enumstring",
    "values": [
      "BIOS",
      "UEFI_64"
    ],
    "defaultValue": "UEFI_64"
  },
  "Compute.LaunchMode": {
    "descriptorType": "enumstring",
    "values": [
      "NATIVE",
      "EMULATED",
      "PARAVIRTUALIZED",
      "CUSTOM"
    ],
    "defaultValue": "PARAVIRTUALIZED"
  },
  "Network.AttachmentType": {
    "descriptorType": "enumstring",
    "values": [
      "E1000",
      "VFIO",
      "PARAVIRTUALIZED"
    ],
    "defaultValue": "PARAVIRTUALIZED"
  },
  "Storage.BootVolumeType": {
    "descriptorType": "enumstring",
    "values": [
      "ISCSI",
      "SCSI",
      "IDE",
      "PARAVIRTUALIZED"
    ],
    "defaultValue": "PARAVIRTUALIZED"
  },
  "Storage.LocalDataVolumeType": {
    "descriptorType": "enumstring",
    "values": [
      "ISCSI",
      "SCSI",
      "IDE",
      "PARAVIRTUALIZED"
    ],
    "defaultValue": "PARAVIRTUALIZED"
  },
  "Storage.RemoteDataVolumeType": {
    "descriptorType": "enumstring",
    "values": [
      "ISCSI",
      "SCSI",
      "IDE",
      "PARAVIRTUALIZED"
    ],
    "defaultValue": "PARAVIRTUALIZED"
  },
  "Storage.ConsistentVolumeNaming": {
    "descriptorType": "boolean",
    "defaultValue": "true"
  },
  "Storage.ParaVirtualization.EncryptionInTransit": {
    "descriptorType": "boolean",
    "defaultValue": "true"
  },
  "Storage.ParaVirtualization.AttachmentVersion": {
    "descriptorType": "enuminteger",
    "values": [
      1,
      2
    ],
    "defaultValue": 2
  }
}

Schema Elements
The following list describes all the available elements in the global image capabilities schema.
• Compute.Firmware: The firmware used to boot the virtual machine instance. The default value is UEFI_64.
• Compute.LaunchMode: The configuration mode for launching instances. The default value is
PARAVIRTUALIZED.
• Network.AttachmentType: The emulation type for the primary VNIC, which is automatically created and
attached when the instance is launched. The default value is PARAVIRTUALIZED.
• Storage.BootVolumeType: Specifies the driver options for the image’s boot volume. The default value is
PARAVIRTUALIZED.
• Storage.LocalDataVolumeType: Specifies the driver options for the image to access local storage volumes. The
default value is PARAVIRTUALIZED.
• Storage.RemoteDataVolumeType: Specifies the driver options for the image to access remote storage volumes.
The default value is PARAVIRTUALIZED.
• Storage.ConsistentVolumeNaming: Specifies whether consistent device paths for iSCSI and paravirtualized
attached block volumes are enabled for the image. If enabled, the image must support consistent device names.
The default value is true.
• Storage.ParaVirtualization.EncryptionInTransit: Specifies whether in-transit encryption is enabled for the
image’s boot volume attachment. Applies only to paravirtualized boot volume attachments. The default value is
true.
• Storage.ParaVirtualization.AttachmentVersion: Specifies the paravirtualization version for boot volume and
block volume attachments. Applies only to paravirtualized volume attachments. The default value is 2.

Required IAM Policy


To use Oracle Cloud Infrastructure, you must be granted security access in a policy by an administrator. This access
is required whether you're using the Console or the REST API with an SDK, CLI, or other tool. If you get a message
that you don’t have permission or are unauthorized, verify with your administrator what type of access you have and
which compartment you should work in.
If you're new to policies, see Getting Started with Policies on page 2143 and Common Policies on page 2150. For
reference material about writing policies for instances, cloud networks, or other Core Services API resources, see
Details for the Core Services on page 2192.
For administrators, the following policy provides full access to the image capability schema framework:

Allow group IAM_group_name to manage compute-image-capability-schema in tenancy

Using the Console


1. Open the navigation menu. Under Core Infrastructure, go to Compute and click Custom Images.
2. Click the custom image that you're interested in.
3. Click Edit Image Capabilities.
4. Edit the image capabilities you want to configure. See Schema Elements on page 694 for details about the
image capabilities you can configure.

5. Click Save Changes.

Using the CLI


For information about using the CLI, see Command Line Interface (CLI) on page 4228. To work with image
capability schemas using the CLI, open a command prompt and run any of the following commands.
To list out the global image capability schema:

oci compute global-image-capability-schema list

To list out the global image capability schema versions:

oci compute global-image-capability-schema-version list --global-image-capability-schema-id <GLOBAL_IMAGE_CAPABILITY_SCHEMA_ID>

To retrieve the global image capability schema version:

oci compute global-image-capability-schema-version get --global-image-capability-schema-id <GLOBAL_IMAGE_CAPABILITY_SCHEMA_ID> --global-image-capability-schema-version-name <VERSION_NAME>

To list the image capability schemas in the specified compartment:

oci compute image-capability-schema list --compartment-id <COMPARTMENT_ID>

To retrieve the image capability schema for the specified ID:

oci compute image-capability-schema get --image-capability-schema-id <IMAGE_CAPABILITY_SCHEMA_ID>

To update the specified image capability schema:

oci -d compute image-capability-schema update --image-capability-schema-id <IMAGE_CAPABILITY_SCHEMA_ID> --schema-data file://<SCHEMA_DATA_FILE>.json

To create an image capability schema:

oci compute image-capability-schema create --schema-data file://<SCHEMA_DATA_FILE>.json -c <COMPARTMENT_ID> --image-id <IMAGE_ID> --global-image-capability-schema-version-name <VERSION_NAME>

When you create the schema, you specify the image OCID for the custom image you want to apply the image
capability schema to.
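As an illustration of what <SCHEMA_DATA_FILE>.json might contain, the following sketch restricts an image to BIOS firmware and to iSCSI or paravirtualized remote volume attachments. It assumes the same descriptor structure as the global schema shown earlier; the capabilities and values you include depend on your image and must remain a subset of the global schema:

{
  "Compute.Firmware": {
    "descriptorType": "enumstring",
    "values": ["BIOS"],
    "defaultValue": "BIOS"
  },
  "Storage.RemoteDataVolumeType": {
    "descriptorType": "enumstring",
    "values": ["ISCSI", "PARAVIRTUALIZED"],
    "defaultValue": "ISCSI"
  }
}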
To delete the specified image capability schema:

oci -d compute image-capability-schema delete --image-capability-schema-id <IMAGE_CAPABILITY_SCHEMA_ID>

Using the API


For information about using the API and signing requests, see REST APIs on page 4409 and Security Credentials on
page 181. For information about SDKs, see Software Development Kits and Command Line Interface on page 4262.
Use the following API operations for working with image capability schemas:
• ListComputeGlobalImageCapabilitySchemas
• ListComputeGlobalImageCapabilitySchemaVersions
• GetComputeGlobalImageCapabilitySchema

• GetComputeGlobalImageCapabilitySchemaVersion
• ListComputeImageCapabilitySchemas
• CreateComputeImageCapabilitySchema
• UpdateComputeImageCapabilitySchema
• DeleteComputeImageCapabilitySchema
• ChangeComputeImageCapabilitySchemaCompartment

OS Kernel Updates
Note:

This topic applies only to Linux instances that were launched before February
15, 2017. Linux instances launched on or after February 15, 2017 boot
directly from the image and do not require further action for kernel updates.
Oracle Cloud Infrastructure boots each instance from a network drive. This configuration requires additional actions
when you update the OS kernel.
Oracle Cloud Infrastructure uses Unified Extensible Firmware Interface (UEFI) firmware and a Preboot eXecution
Environment (PXE) interface on the host server to load iPXE from a Trivial File Transfer Protocol (TFTP) server.
The iPXE implementation runs a script to boot Oracle Linux. During the boot process, the system downloads the
kernel, the initrd file, and the kernel boot parameters from the network. The instance does not use the host's
GRUB boot loader.
Normally, the yum update kernel-uek command edits the GRUB configuration file, either grub.cfg or
grub.conf, to configure the next boot. Since bare metal instances do not use the GRUB boot loader, changes to the
GRUB configuration file are not implemented. When you update the kernel on your instance, you also must upload
the update to the network to ensure a successful boot process. The following approaches address this need:
• Instances launched from an Oracle-provided image include an Oracle yum plug-in that seamlessly handles the
upload when you run the yum update kernel-uek command.
• If you use a custom image based on an Oracle-provided image, the included yum plug-in will continue to work,
barring extraordinary changes.
• If you install your own package manager, you must either write your own plug-in or upload the kernel, initrd, and
kernel boot parameters manually.

Oracle Yum Plug-in


On instances launched with an Oracle-provided image, you can find the Oracle yum plug-in at:
/usr/share/yum-plugins/kernel-update-handler.py
The plug-in configuration is at:
/etc/yum/pluginconf.d
The plug-in looks for two variables in the /etc/sysconfig/kernel file, UPDATEDEFAULT and
DEFAULTKERNEL. It picks up the updates only when the first variable is set to "yes" and the DEFAULTKERNEL
value matches the kernel being updated. For example:

# UPDATEDEFAULT specifies if new-kernel-pkg should make
# new kernels the default
UPDATEDEFAULT=yes

# DEFAULTKERNEL specifies the default kernel package type
DEFAULTKERNEL=kernel-uek

Oracle-provided images incorporate the Unbreakable Enterprise Kernel (UEK). If you want to switch to a non-UEK
kernel, you must update the DEFAULTKERNEL value to "kernel" before you run yum update kernel.
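For example, a hedged sketch of switching DEFAULTKERNEL to the non-UEK kernel package and then updating it (edit /etc/sysconfig/kernel with any editor; sed is used here only for brevity):

sudo sed -i 's/^DEFAULTKERNEL=.*/DEFAULTKERNEL=kernel/' /etc/sysconfig/kernel
sudo yum update kernel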

Manual Updates
Tip:

Oracle recommends using the Oracle yum plug-in to update the kernel.
If you manually upload the updates, there are four relevant URLs:

http://169.254.0.3/kernel
http://169.254.0.3/initrd
http://169.254.0.3/cmdline
http://169.254.0.3/activate

The first three URLs are for uploading files (HTTP request type PUT). The fourth URL is for activating the uploaded
files (HTTP request type POST). The system discards the uploaded files if they are not activated before the host
restarts.
The kernel and initrd are simple file uploads. The cmdline upload must contain the kernel boot parameters found in
the grub.cfg or grub.conf file, depending on the Linux version. The following example is an entry from the
/boot/efi/EFI/redhat/grub.cfg file in Red Hat Linux 7. The parameters to upload are everything that follows
the kernel image path (that is, the text beginning with ro root=).

kernel /boot/vmlinuz-4.1.12-37.5.1.el6uek.x86_64

ro root=UUID=8079e287-53d7-4b3d-b708-c519cf6829c8 rd_NO_LUKS
KEYBOARDTYPE=pc KEYTABLE=us
netroot=iscsi:@169.254.0.2::3260:iface1:eth0::iqn.2015-02.oracle.boot:uefi
rd_NO_MD SYSFONT=latarcyrheb-sun16 ifname=eth0:90:e2:ba:a2:e3:80
crashkernel=auto iscsi_initiator=iqn.2015-02. rd_NO_LVM ip=eth0:dhcp
rd_NO_DM LANG=en_US.UTF-8 console=tty0 console=ttyS0,9600 iommu=on

The following command displays the contents of the /tmp/cmdline file that will be uploaded.

cat /tmp/cmdline

A typical response resembles the following.

ro root=UUID=8079e287-53d7-4b3d-b708-c519cf6829c8 rd_NO_LUKS
KEYBOARDTYPE=pc KEYTABLE=us
netroot=iscsi:@169.254.0.2::3260:iface1:eth0::iqn.2015-02.oracle.boot:uefi
rd_NO_MD SYSFONT=latarcyrheb-sun16 ifname=eth0:90:e2:ba:a2:e3:80
crashkernel=auto iscsi_initiator=iqn.2015-02. rd_NO_LVM ip=eth0:dhcp
rd_NO_DM LANG=en_US.UTF-8 console=tty0 console=ttyS0,9600 iommu=on

The following commands update the cmdline and initrd files, and then activate the changes.

CKSUM=`md5sum /tmp/cmdline | cut -d ' ' -f 1`
sudo curl -X PUT --data-binary @/tmp/cmdline -H "Content-MD5: $CKSUM" http://169.254.0.3/cmdline

CKSUM=`md5sum /boot/initramfs-3.8.13-118.8.1.el7uek.x86_64.img | cut -d ' ' -f 1`
sudo curl -X PUT --data-binary @/boot/initramfs-3.8.13-118.8.1.el7uek.x86_64.img -H "Content-MD5: $CKSUM" http://169.254.0.3/initrd

sudo curl -X POST http://169.254.0.3/activate
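The kernel image itself is uploaded with the same pattern, before you run the activate command shown above. This sketch assumes an illustrative kernel file name; substitute the vmlinuz file that matches the initrd you uploaded:

CKSUM=`md5sum /boot/vmlinuz-3.8.13-118.8.1.el7uek.x86_64 | cut -d ' ' -f 1`
sudo curl -X PUT --data-binary @/boot/vmlinuz-3.8.13-118.8.1.el7uek.x86_64 -H "Content-MD5: $CKSUM" http://169.254.0.3/kernel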

Managing Key Pairs on Linux Instances


Instances launched using Oracle Linux, CentOS, or Ubuntu images use an SSH key pair instead of a password to
authenticate a remote user (see Security Credentials on page 181). A key pair consists of a private key and public
key. You keep the private key on your computer and provide the public key when you create an instance. When you
connect to the instance using SSH, you provide the path to the private key in the SSH command.
You can have as many key pairs as you want, or you can keep it simple and use one key pair for all or several of your
instances.
If you're using OpenSSH to connect to an instance, you can use a key pair that is generated by Oracle Cloud
Infrastructure at the time that you create the instance. OpenSSH should be installed on UNIX-based systems
(including Linux and OS X), Windows 10, and Windows Server 2019.
To create your own key pairs, you can use a third-party tool such as OpenSSH on UNIX-style systems (including
Linux, Solaris, BSD, and OS X) or PuTTY Key Generator on Windows.
Caution:

Anyone who has access to the private key can connect to the instance. Store
the private key in a secure location.

Required SSH Public Key Format


If you provide your own key pair, it must use the OpenSSH format.
A public key has the following format:

<key_type> <public_key> <optional_comment>

For example, an RSA public key looks like this:

ssh-rsa AAAAB3BzaC1yc2EAAAADAQABAAABAQD9BRwrUiLDki6P0+jZhwsjS2muM...

...yXDus/5DQ== rsa-key-20201202

For Oracle-provided images, these SSH key types are supported: RSA, DSA, DSS, ECDSA, and Ed25519. If you
bring your own image, you're responsible for managing the SSH key types that are supported.
For RSA, DSS, and DSA keys, a minimum of 2048 bits is recommended. For ECDSA keys, a minimum of 256 bits is
recommended.
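For example, if your client and image both support it, you can generate an Ed25519 key pair in OpenSSH format with ssh-keygen (the file name and comment here are placeholders):

ssh-keygen -t ed25519 -C "oci-instance-key" -f ~/.ssh/oci-instance-key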

Prerequisites
• If you're using a UNIX-style system, you probably already have the ssh-keygen utility installed. To determine
whether it's installed, type ssh-keygen on the command line. If it's not installed, you can download OpenSSH
for UNIX from http://www.openssh.com/portable.html and install it.
• If you're using a Windows operating system, you will need PuTTY and the PuTTY Key Generator. Download
PuTTY and PuTTYgen from http://www.putty.org and install them.

Creating an SSH Key Pair on the Command Line


1. Open a shell or terminal for entering the commands.
2. At the prompt, enter ssh-keygen and provide a name for the key when prompted. Optionally, include a
passphrase.
The keys will be created with the default values: RSA keys of 2048 bits.
Alternatively, you can type a complete ssh-keygen command, for example:

ssh-keygen -t rsa -N "" -b 2048 -C "<key_name>" -f <path/root_name>

The command arguments are shown in the following table:

Argument Description
-t rsa Use the RSA algorithm.
-N "<passphrase>" A passphrase to protect the use of the key (like a password). If
you don't want to set a passphrase, don't enter anything between
the quotes.
A passphrase is not required. You can specify one as a security
measure to protect the private key from unauthorized use. If
you specify a passphrase, when you connect to the instance you
must provide the passphrase, which typically makes it harder to
automate connecting to an instance.

-b 2048 Generate a 2048-bit key. You don't have to set this if 2048 is
acceptable, as 2048 is the default.
A minimum of 2048 bits is recommended for SSH-2 RSA.

-C "<key_name>" A name to identify the key.


-f <path/root_name> The location where the key pair will be saved and the root name
for the files.

Creating an SSH Key Pair Using PuTTY Key Generator


1. Find puttygen.exe in the PuTTY folder on your computer, for example, C:\Program Files
(x86)\PuTTY. Double-click puttygen.exe to open it.
2. Specify a key type of SSH-2 RSA and a key size of 2048 bits:
• In the Key menu, confirm that the default value of SSH-2 RSA key is selected.
• For the Type of key to generate, accept the default key type of RSA.
• Set the Number of bits in a generated key to 2048 if it is not already set.
3. Click Generate.

4. Move your mouse around the blank area in the PuTTY window to generate random data in the key.
When the key is generated, it appears under Public key for pasting into OpenSSH authorized_keys file.
5. A Key comment is generated for you, including the date and time stamp. You can keep the default comment or
replace it with your own more descriptive comment.
6. Leave the Key passphrase field blank.
7. Click Save private key, and then click Yes in the prompt about saving the key without a passphrase.
The key pair is saved in the PuTTY Private Key (PPK) format, which is a proprietary format that works only with
the PuTTY tool set.
You can name the key anything you want, but use the ppk file extension. For example, mykey.ppk.
8. Select all of the generated key that appears under Public key for pasting into OpenSSH authorized_keys file,
copy it using Ctrl + C, paste it into a text file, and then save the file in the same location as the private key.
(Do not use Save public key because it does not save the key in the OpenSSH format.)
You can name the key anything you want, but for consistency, use the same name as the private key and a file
extension of pub. For example, mykey.pub.
9. Write down the names and location of your public and private key files. You will need the public key when
launching an instance. You will need the private key to access the instance via SSH.
Now that you have a key pair, you're ready to launch instances as described in Creating an Instance on page 700.

Creating an Instance
Use the steps in this topic to create a bare metal or virtual machine (VM) Compute instance.
Tip:

If this is your first time creating an instance, consider following the Getting
Started Tutorial for a guided workflow through the steps required to create an
instance.
When you create an instance, the instance is automatically attached to a virtual network interface card (VNIC) in
the cloud network's subnet and given a private IP address from the subnet's CIDR. You can let the IP address be
automatically assigned, or you can specify a particular address of your choice. The private IP address lets instances
within the cloud network communicate with each other. If you've set up the cloud network for DNS, instances can
instead use fully qualified domain names (FQDNs).
If the subnet is public, you can optionally assign the instance a public IP address. A public IP address is required to
communicate with the instance over the internet, and to establish a Secure Shell (SSH) or Remote Desktop Protocol
(RDP) connection to the instance from outside the cloud network.
Note:

Partner images and pre-built Oracle enterprise images are not available in
Government Cloud realms.

Required IAM Policy


To use Oracle Cloud Infrastructure, you must be granted security access in a policy by an administrator. This access
is required whether you're using the Console or the REST API with an SDK, CLI, or other tool. If you get a message
that you don’t have permission or are unauthorized, verify with your administrator what type of access you have and
which compartment you should work in.
Tip:

When you create an instance, several other resources are involved, such as
an image, a cloud network, and a subnet. Those other resources can be in the
same compartment with the instance or in other compartments. You must
have the required level of access to each of the compartments involved in

order to launch the instance. This is also true when you attach a volume to an
instance; they don't have to be in the same compartment, but if they're not,
you need the required level of access to each of the compartments.
For administrators: The simplest policy to enable users to create instances is listed in Let users launch compute
instances on page 2151. It gives the specified group general access to manage instances and images, along with the
required level of access to attach existing block volumes to the instances. If the group needs to create block volumes,
they'll need the ability to manage block volumes (see Let volume admins manage block volumes, backups, and
volume groups on page 2154).
To require that legacy instance metadata service endpoints are disabled on any new instances that are created, use the
following policy:

Allow group InstanceLaunchers to manage instances in compartment ABC where request.instanceOptions.areLegacyEndpointsDisabled = 'true'

If you're new to policies, see Getting Started with Policies on page 2143 and Common Policies on page 2150. For
reference material about writing policies for instances, cloud networks, or other Core Services API resources, see
Details for the Core Services on page 2192.

Partner Image Catalog


If the group needs to create instances based on partner images, they'll need the manage permission for app-catalog-
listing to create subscriptions to images from the Partner Image catalog. See Let users list and subscribe to images
from the Partner Image catalog on page 2153.

Security Zones
Security Zones ensure that your cloud resources comply with Oracle security principles. If any operation on a
resource in a security zone compartment violates a policy for that security zone, then the operation is denied.
The following security zone policies affect your ability to create instances:
• The boot volume for a compute instance in a security zone must also be in a security zone.
• A compute instance that isn't in a security zone can't use a boot volume that is in a security zone.
• A compute instance in a security zone must use subnets that are also in a security zone.
• All compute instances in a security zone must be created using an Oracle-provided image. You can't create a
compute instance from a custom image in a security zone.

Recommended Networking Launch Types


When you launch a VM instance, by default, Oracle Cloud Infrastructure chooses a recommended networking type
for the VNIC based on the instance shape and OS image. The networking interface handles functions such as disk
input/output and network communication. The following options are available:
• Paravirtualized networking: For general purpose workloads such as enterprise applications, microservices,
and small databases. Paravirtualized networking also provides increased flexibility to use the same image across
different hardware platforms. Linux images with paravirtualized networking support live migration during
infrastructure maintenance.
• Hardware-assisted (SR-IOV) networking: Single root input/output virtualization. For low-latency workloads
such as video streaming, real-time applications, and large or clustered databases. Hardware-assisted (SR-IOV)
networking uses the VFIO driver framework.
Important:

To use a particular networking type, both the shape and the image must
support that networking type.
Shapes: The following table lists the default and supported networking types for VM shapes.

Shape series       Default Networking Type    Supported Networking Types

VM.Standard1       SR-IOV                     Paravirtualized, SR-IOV
VM.Standard2       Paravirtualized            Paravirtualized, SR-IOV
VM.Standard.E2     Paravirtualized            Paravirtualized only
VM.Standard.E3     SR-IOV                     Paravirtualized, SR-IOV
VM.Standard.E4     SR-IOV                     Paravirtualized, SR-IOV
VM.DenseIO1        SR-IOV                     Paravirtualized, SR-IOV
VM.DenseIO2        Paravirtualized            Paravirtualized, SR-IOV
VM.GPU2            SR-IOV                     Paravirtualized, SR-IOV
VM.GPU3            SR-IOV                     Paravirtualized, SR-IOV

Images: Paravirtualized networking is supported on these Oracle-provided images:


• Oracle Linux 8: All images.
• Oracle Linux 7, Oracle Linux 6: Images published in March 2019 or later.
• CentOS 8: All images.
• CentOS 7: Images published in July 2019 or later.
• Ubuntu 18.04, Ubuntu 16.04: Images published in March 2019 or later.
• Windows Server 2019: All images.
• Windows Server 2016: Images published in August 2019 or later.
SR-IOV networking is supported on all Oracle-provided images, with the following exceptions: On Windows Server
2019, when launched using a VM.Standard2 shape, SR-IOV networking is not supported. On Windows Server 2012
R2, SR-IOV networking is only supported on the VM.Standard2 and VM.DenseIO2 shapes.
You can create an instance that uses a specific networking type instead of the default. However, depending on
compatibility between the shape and image that you choose, the instance might not launch properly. You can test
whether it succeeded by connecting to the instance. If the connection fails, the networking type is not supported.
Relaunch the instance using a supported networking type.

Creating a Linux Instance


Use the following steps to create a Linux instance.

Prerequisites
Before you start, you need these things:
• (Optional) An existing virtual cloud network (VCN) to launch the instance in. Alternatively, you can create a new
VCN while you create the instance. For information about setting up cloud networks, see Networking on page
2772.
• If you want to use your own Secure Shell (SSH) key to connect to the instance using SSH, you need the public
key from the SSH key pair that you plan to use. The key must be in OpenSSH format. For more information, see
Managing Key Pairs on Linux Instances on page 698.

To create a Linux instance


1. Open the navigation menu. Under Core Infrastructure, go to Compute and click Instances.
2. Click Create Instance.
3. Enter a name for the instance. You can add or change the name later. The name doesn't need to be unique, because
an Oracle Cloud Identifier (OCID) uniquely identifies the instance. Avoid entering confidential information.

4. Select the compartment to create the instance in.


The other resources that you choose can come from different compartments.
5. In the Placement and hardware section, make the following selections:
a. Select the Availability domain that you want to create the instance in.
Note:

If you're creating an instance from a boot volume, you must create the
instance in the same availability domain as the boot volume.
b. Choose a fault domain: The fault domain to use for the instance. If you do not specify the fault domain, the
system selects one for you. To change the fault domain for a VM instance after you create the instance, edit
the fault domain. For more information, see Fault Domains on page 184 and Best Practices for Your Compute
Instance on page 597.
c. By default, an Oracle Linux 7.x image is used to boot the instance. To select a different image or a boot
volume, in the Image section, click Change Image. Then, select an Image source from the list. The following
options are available:
• Platform images: Pre-built images for Oracle Cloud Infrastructure. To select a different OS version or
image build, select the check box next to an image, and then select a value from the lists in the row for the
image. To see which shapes are compatible with an OS version and image build, click Advanced Options.
For more information about platform images, see Oracle-Provided Images on page 633.
• Oracle images: Pre-built Oracle enterprise images and solutions enabled for Oracle Cloud Infrastructure.
• Partner images: Trusted third-party images published by Oracle partners. Click the down arrow in the row
for an image to view more details about the image, or to change the image build. For more information, see
Overview of Marketplace on page 2676 and Working with Listings on page 2677.
• Custom images: Custom images created or imported into your Oracle Cloud Infrastructure environment.
For more information, see Managing Custom Images on page 670.
• Boot volumes: Boot volumes that are available for creating a new instance in your Oracle Cloud
Infrastructure environment. For more information, see Boot Volumes on page 613.
• Image OCID: Create an instance using a specific version of an image by providing the image OCID. To
determine the image OCID for Oracle-provided images, see Oracle-Provided Image Release Notes.
Choose an image or boot volume, and then click Select Image.
d. In the Shape section, click Change Shape. Then, do the following:
1. In the Instance type section, select Virtual Machine or Bare Metal Machine.
2. If you're creating a virtual machine, in the Shape series section, select a processor group, and then choose
a shape. The following options are available:
• AMD Rome: The flexible shapes, which use current generation AMD processors and have a
customizable number of OCPUs and amount of memory.
• For Number of OCPUs, choose the number of OCPUs that you want to allocate to this instance by
dragging the slider. You can select from 1 to 64 OCPUs.
• For Amount of memory (GB), choose the amount of memory that you want to allocate to this
instance by dragging the slider. The amount of memory allowed is based on the number of OCPUs
selected. For each OCPU, you can select up to 64 GB of memory, with a maximum of 1024 GB
total. The minimum amount of memory allowed is either 1 GB or a value matching the number
of OCPUs, whichever is greater. For example, if you select 25 OCPUs, the minimum amount of
memory allowed is 25 GB.
The other resources scale proportionately.
Important:

Instances that use the VM.Standard.E3.Flex shape or the
VM.Standard.E4.Flex shape, and that also use hardware-assisted
(SR-IOV) networking, can be allocated a maximum of 1010 GB of
memory. See this known issue for more information.
• Intel Skylake: Standard shapes that use the current generation Intel processor and have a fixed number
of OCPUs.
• Specialty and Previous Generation: Standard shapes with previous generation Intel and AMD
processors, the Always Free VM.Standard.E2.1.Micro shape, Dense I/O shapes, GPU shapes, and HPC
shapes.
If a shape is disabled, it means that the shape is either incompatible with the image that you selected
previously, or not available in the current availability domain. If you don't see a shape, it means that you
don't have service limits for the shape. You can request a service limit increase.
For more information about shapes, see Compute Shapes on page 659.
3. Click Select Shape.
6. In the Networking section, configure the network details for the instance:
a. For Network and Subnet, specify the virtual cloud network (VCN) and subnet to create the instance in.
Decide whether you want to use an existing VCN and subnet, create a new VCN or subnet, or enter an existing
subnet's OCID:
Select existing virtual cloud network
Make the following selections:
• Virtual cloud network in <compartment_name>: The cloud network to create the instance in.
• Subnet: A subnet within the cloud network that the instance is attached to. The subnets are either public
or private. Private means the instances in that subnet can't have public IP addresses. For more information,
see Access to the Internet on page 2778. Subnets can also be either AD-specific or regional (regional ones
have "regional" after the name). We recommend using regional subnets. For more information, see About
Regional Subnets on page 2848.
If choosing Select existing subnet, for Subnet in <compartment_name>, select the subnet.
If choosing Create new public subnet, enter the following information:
• New subnet name: A friendly name for the subnet. It doesn't have to be unique, and it cannot be
changed later in the Console. You can change it with the API. Avoid entering confidential information.
• Create in compartment: The compartment where you want to put the subnet.
• CIDR block: A single, contiguous CIDR block for the subnet (for example, 172.16.0.0/24). Make sure
it's within the cloud network's CIDR block and doesn't overlap with any other subnets. You cannot
change this value later. See Allowed VCN Size and Address Ranges on page 2776. For reference,
here's a CIDR calculator.
Create new virtual cloud network
Make the following selections:
• New virtual cloud network name: A friendly name for the network. Avoid entering confidential
information.
• Create in compartment: The compartment where you want to put the new network.
• Subnet: A subnet within the cloud network to attach the instance to. The subnets are either public or
private. Private means the instances in that subnet can't have public IP addresses. For more information, see
Access to the Internet on page 2778. Subnets can also be either AD-specific or regional (regional ones

have "regional" after the name). We recommend using regional subnets. For more information, see About
Regional Subnets on page 2848.
Enter the following information:
• New subnet name: A friendly name for the subnet. It doesn't have to be unique, and it cannot be
changed later in the Console. You can change it with the API. Avoid entering confidential information.
• Create in compartment: The compartment where you want to put the subnet.
• CIDR block: A single, contiguous CIDR block for the subnet (for example, 172.16.0.0/24). Make sure
it's within the cloud network's CIDR block and doesn't overlap with any other subnets. You cannot
change this value later. See Allowed VCN Size and Address Ranges on page 2776. For reference,
here's a CIDR calculator.
Enter subnet OCID
For Subnet OCID, enter the subnet OCID.
b. If the subnet is public, you can optionally assign the instance a public IP address. A public IP address
makes the instance accessible from the internet. Select the Assign a public IPv4 address option. For more
information, see Access to the Internet on page 2778.
c. (Optional) If you want to configure advanced networking settings, click Show advanced options. The
following options are available:
• Use network security groups to control traffic: Select this option if you want to add the instance's
primary VNIC to one or more network security groups (NSGs). Then, specify the NSGs. Available only
when you use an existing VCN. For more information, see Network Security Groups on page 2867.
• Private IP address: An available private IP address of your choice from the subnet's CIDR. If you don't
specify a value, the private IP address is automatically assigned.
• Hostname: A hostname to be used for DNS within the cloud network. Available only if the VCN and
subnet both have DNS labels. For more information, see DNS in Your Virtual Cloud Network on page
2936.
• Launch Options: The networking launch type. Available only for VMs. For more information, see
Recommended Networking Launch Types on page 701.
7. In the Add SSH keys section, generate an SSH key pair or upload your own public key. Select one of the
following options:
• Generate SSH keys: Oracle Cloud Infrastructure generates an RSA key pair for the instance. Click Save
Private Key, and then save the private key on your computer. Optionally, click Save Public Key and then
save the public key.
Caution:

Anyone who has access to the private key can connect to the instance.
Store the private key in a secure location.
Important:

To use a key pair that is generated by Oracle Cloud Infrastructure, you


must access the instance from a system that has OpenSSH installed.
UNIX-based systems (including Linux and OS X), Windows 10, and
Windows Server 2019 should have OpenSSH. For more information, see
Managing Key Pairs on Linux Instances on page 698.
• Choose SSH key files: Upload the public key portion of your key pair. Either browse to the key file that you
want to upload, or drag and drop the file into the box. To provide multiple keys, press and hold down the
Command key (on Mac) or the CTRL key (on Windows) while selecting files.
• Paste SSH keys: Paste the public key portion of your key pair in the box.
• No SSH keys: Select this option only if you do not want to connect to the instance using SSH. You cannot
provide a public key or save the key pair that is generated by Oracle Cloud Infrastructure after the instance is
created.
8. In the Boot volume section, configure the size and encryption options for the instance's boot volume:
• To specify a custom size for the boot volume, select the Specify a custom boot volume size check box. Then,
enter a custom size from 50 GB to 32 TB. The specified size must be larger than the default boot volume size
for the selected image. See Custom Boot Volume Sizes on page 614 for more information.
• For VM instances, you can optionally select the Use in-transit encryption check box. See Block Volume
Encryption on page 508 for more information. If you are using your own Vault service encryption key for the
boot volume, then this key is also used for in-transit encryption. Otherwise, the Oracle-provided encryption
key is used.
• Boot volumes are encrypted by default, but you can optionally use your own Vault service encryption key to
encrypt the data in this volume. To use the Vault service for your encryption needs, select the Encrypt this
volume with a key that you manage check box. Then, select the Vault compartment and Vault that contain
the master encryption key you want to use. Also select the Master encryption key compartment and Master
encryption key. For more information about encryption, see Overview of Vault on page 3988. If you enable
this option, this key is used for both data at rest encryption and in-transit encryption.
• The Block Volume elastic performance feature lets you change the volume performance for boot volumes.
When you create an instance, its boot volume is configured with the default volume performance set to
Balanced. After you launch the instance, you can modify the performance setting. For steps to modify the
performance setting, see Changing the Performance of a Volume on page 586. For more information about this
feature, see Block Volume Elastic Performance on page 585.
9. (Optional) To configure advanced settings, click Show Advanced Options. The following options are available:
• On the Management tab, you can configure the following:
• Require an authorization header: Select this check box to require that all requests to the instance
metadata service (IMDS) use the version 2 endpoint and include an authorization header. Requests to
IMDSv1 are denied. The image must support IMDSv2. For more information, see Getting Instance
Metadata on page 763.
• Initialization Script: User data to be used by cloud-init to run custom scripts or provide custom cloud-
init configuration. Browse to the file that you want to upload, or drag and drop the file into the box. The
file or script does not need to be base64-encoded, because the Console performs this encoding when the
information is submitted. For information about how to take advantage of user data, see the cloud-init
documentation. The total maximum size for user data and other metadata that you provide is 32,000 bytes.
• Restore instance lifecycle state after infrastructure maintenance: By default, if a VM instance is
running when a maintenance event affects the underlying infrastructure, the instance is rebooted after it is
recovered. Clear this check box if you want the instance to be recovered in the stopped state.
• Tagging: If you have permissions to create a resource, then you also have permissions to apply free-form
tags to that resource. To apply a defined tag, you must have permissions to use the tag namespace. For
more information about tagging, see Resource Tags on page 213. If you are not sure whether to apply tags,
then skip this option (you can apply tags later) or ask your administrator.
• On the Placement tab, you can choose to launch the instance on a dedicated virtual machine host. This
lets you run the instance in isolation, so that it is not running on shared infrastructure. To do this, select the
Dedicated host option, and then select a dedicated virtual machine host from the drop-down list. Before you
can place an instance on a dedicated virtual machine host, you must create a dedicated virtual machine host in
the same availability domain and fault domain as the instance. You can only place an instance on a dedicated
virtual machine host at the time that you create the instance. For more information, see Dedicated Virtual
Machine Hosts on page 735.
• On the Oracle Cloud Agent tab, choose which plugins you want to enable when the instance is launched.
Plugins collect performance metrics, install OS updates, and perform other instance management tasks. For
more information, see Managing Plugins with Oracle Cloud Agent on page 746.
Important:

• After you create the instance, you might need to perform additional
configuration tasks before you can use each plugin.
• Oracle Autonomous Linux instances cannot be managed by the
OS Management service. See this known issue for more information.
10. Click Create.


To track the progress of the operation, you can monitor the associated work request.
After the instance is provisioned, details about it appear in the instance list. To view more details, including IP
addresses, click the instance name.
When the instance is fully provisioned and running, you can connect to it using SSH as described in Connecting to an
Instance on page 739.
You also can attach a volume to the instance, provided the volume is in the same availability domain. For background
information about volumes, see Overview of Block Volume on page 504.
For steps to let additional users connect to the instance, see Adding Users on an Instance on page 743.

Creating a Windows Instance


Use the following steps to create a Windows instance.

Prerequisites
Before you start, you need these things:
• (Optional) An existing virtual cloud network (VCN) to launch the instance in. Alternatively, you can create a new
VCN while you create the instance. For information about setting up VCNs, see Networking on page 2772.
• A VCN security rule that enables Remote Desktop Protocol (RDP) access so that you can connect to your
instance. Specifically, you need a stateful ingress rule for TCP traffic on destination port 3389 from source
0.0.0.0/0 and any source port. For more information, see Security Rules on page 2859. You can implement
this security rule in a network security group that you add this Windows instance to. Or, you can implement this
security rule in a security list that is used by the instance's subnet.
To enable RDP access
1. Open the navigation menu. Under Core Infrastructure, go to Networking and click Virtual Cloud
Networks.
2. Choose a compartment you have permission to work in (on the left side of the page). The page updates to
display only the resources in that compartment. If you're not sure which compartment to use, contact an
administrator.
3. Click the cloud network that you're interested in.
4. To add the rule to a network security group that the instance belongs to:
a. Under Resources, click Network Security Groups. Then click the network security group that you're
interested in.
b. Click Add Rules.
c. Enter the following values for the rule:
• Stateless: Leave the check box cleared.
• Source Type: CIDR
• Source CIDR: 0.0.0.0/0
• IP Protocol: RDP (TCP/3389)
• Source Port Range: All
• Destination Port Range: 3389
• Description: An optional description of the rule.
d. When done, click Add.
5. Or, to add the rule to a security list that is used by the instance's subnet:
a. Under Resources, click Security Lists. Then click the security list you're interested in.
b. Click Add Ingress Rules.
c. Enter the following values for the rule:
• Stateless: Leave the check box cleared.
• Source Type: CIDR
• Source CIDR: 0.0.0.0/0
• IP Protocol: RDP (TCP/3389)
• Source Port Range: All
• Destination Port Range: 3389
• Description: An optional description of the rule.
d. When done, click Add Ingress Rules.
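If you prefer to add the RDP rule programmatically, the following is a minimal sketch using the OCI SDK for Python; the security list OCID and configuration file are placeholders, and the existing rules are read first so the new rule is appended rather than replacing the list. An equivalent rule can also be added to a network security group with the AddNetworkSecurityGroupSecurityRules operation.

import oci

# Assumes a valid ~/.oci/config file; the security list OCID is a placeholder.
config = oci.config.from_file()
network = oci.core.VirtualNetworkClient(config)
security_list_id = "ocid1.securitylist.oc1..example"

# Read the current rules so the RDP rule is appended, not substituted.
current = network.get_security_list(security_list_id).data

rdp_rule = oci.core.models.IngressSecurityRule(
    protocol="6",                      # TCP
    source="0.0.0.0/0",
    source_type="CIDR_BLOCK",
    tcp_options=oci.core.models.TcpOptions(
        destination_port_range=oci.core.models.PortRange(min=3389, max=3389)
    ),
    description="RDP access",
)

network.update_security_list(
    security_list_id,
    oci.core.models.UpdateSecurityListDetails(
        ingress_security_rules=current.ingress_security_rules + [rdp_rule]
    ),
)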

To create a Windows instance


1. Open the navigation menu. Under Core Infrastructure, go to Compute and click Instances.
2. Click Create Instance.
3. Enter a name for the instance. You can add or change the name later. The name doesn't need to be unique, because
an Oracle Cloud Identifier (OCID) uniquely identifies the instance. Avoid entering confidential information.
Important:

Use only these ASCII characters in the instance name: uppercase letters
(A-Z), lowercase letters (a-z), numbers (0-9), and hyphens (-). See this
known issue for more information.
4. Select the compartment to create the instance in.
The other resources that you choose can come from different compartments.
5. In the Placement and hardware section, make the following selections:


a. Select the Availability domain that you want to create the instance in.
Note:

If you're creating an instance from a boot volume, you must create the
instance in the same availability domain as the boot volume.
b. Choose a fault domain: The fault domain to use for the instance. If you do not specify the fault domain, the
system selects one for you. To change the fault domain for a VM instance after you create the instance, edit
the fault domain. For more information, see Fault Domains on page 184 and Best Practices for Your Compute
Instance on page 597.
c. In the Image section, you choose the image that's used to boot the instance. Click Change Image. Then, select
an Image source from the list. The following options are available:
• Platform images: Pre-built images for Oracle Cloud Infrastructure. To select a different OS version or
image build, select the check box next to an image, and then select a value from the lists in the row for the
image. To see which shapes are compatible with an OS version and image build, click Advanced Options.
For more information about platform images, see Oracle-Provided Images on page 633.
• Oracle images: Pre-built Oracle enterprise images and solutions enabled for Oracle Cloud Infrastructure.
• Partner images: Trusted third-party images published by Oracle partners. Click the down arrow in the row
for an image to view more details about the image, or to change the image build. For more information, see
Overview of Marketplace on page 2676 and Working with Listings on page 2677.
• Custom images: Custom images created or imported into your Oracle Cloud Infrastructure environment.
For more information, see Managing Custom Images on page 670.
• Boot volumes: Boot volumes that are available for creating a new instance in your Oracle Cloud
Infrastructure environment. For more information, see Boot Volumes on page 613.
• Image OCID: Create an instance using a specific version of an image by providing the image OCID. To
determine the image OCID for Oracle-provided images, see Oracle-Provided Image Release Notes.
Choose an image or boot volume, and then click Select Image.
d. In the Shape section, click Change Shape. Then, do the following:
1. In the Instance type section, select Virtual Machine or Bare Metal Machine.
2. If you're creating a virtual machine, in the Shape series section, select a processor group, and then choose
a shape. The following options are available:
• AMD Rome: The flexible shapes, which use current generation AMD processors and have a
customizable number of OCPUs and amount of memory.
• For Number of OCPUs, choose the number of OCPUs that you want to allocate to this instance by
dragging the slider. You can select from 1 to 64 OCPUs.
• For Amount of memory (GB), choose the amount of memory that you want to allocate to this
instance by dragging the slider. The amount of memory allowed is based on the number of OCPUs
selected. For each OCPU, you can select up to 64 GB of memory, with a maximum of 1024 GB
total. The minimum amount of memory allowed is either 1 GB or a value matching the number
of OCPUs, whichever is greater. For example, if you select 25 OCPUs, the minimum amount of
memory allowed is 25 GB.
The other resources scale proportionately.
Important:

• For Windows Server 2019 instances using the VM.Standard.E3.Flex shape, allocate a maximum of 32 OCPUs
to the instance. See this known issue for more information.
• Instances that use the VM.Standard.E3.Flex shape or the
VM.Standard.E4.Flex shape, and that also use hardware assisted
(SR-IOV) networking, can be allocated a maximum of 1010 GB of memory. See this known issue for more information.
• Intel Skylake: Standard shapes that use the current generation Intel processor and have a fixed number
of OCPUs.
• Specialty and Previous Generation: Standard shapes with previous generation Intel and AMD
processors, the Always Free VM.Standard.E2.1.Micro shape, Dense I/O shapes, GPU shapes, and HPC
shapes.
If a shape is disabled, it means that the shape is either incompatible with the image that you selected
previously, or not available in the current availability domain. If you don't see a shape, it means that you
don't have service limits for the shape. You can request a service limit increase.
For more information about shapes, see Compute Shapes on page 659.
3. Click Select Shape.
6. In the Networking section, configure the network details for the instance:
a. For Network and Subnet, specify the virtual cloud network (VCN) and subnet to create the instance in.
Decide whether you want to use an existing VCN and subnet, create a new VCN or subnet, or enter an existing
subnet's OCID:
Select existing virtual cloud network
Make the following selections:
• Virtual cloud network in <compartment_name>: The cloud network to create the instance in.
• Subnet: A subnet within the cloud network that the instance is attached to. The subnets are either public
or private. Private means the instances in that subnet can't have public IP addresses. For more information,
see Access to the Internet on page 2778. Subnets can also be either AD-specific or regional (regional ones
have "regional" after the name). We recommend using regional subnets. For more information, see About
Regional Subnets on page 2848.
If choosing Select existing subnet, for Subnet in <compartment_name>, select the subnet.
If choosing Create new public subnet, enter the following information:
• New subnet name: A friendly name for the subnet. It doesn't have to be unique, and it cannot be
changed later in the Console. You can change it with the API. Avoid entering confidential information.
• Create in compartment: The compartment where you want to put the subnet.
• CIDR block: A single, contiguous CIDR block for the subnet (for example, 172.16.0.0/24). Make sure
it's within the cloud network's CIDR block and doesn't overlap with any other subnets. You cannot
change this value later. See Allowed VCN Size and Address Ranges on page 2776. For reference,
here's a CIDR calculator.
Create new virtual cloud network
Make the following selections:
• New virtual cloud network name: A friendly name for the network. Avoid entering confidential
information.
• Create in compartment: The compartment where you want to put the new network.
• Subnet: A subnet within the cloud network to attach the instance to. The subnets are either public or
private. Private means the instances in that subnet can't have public IP addresses. For more information, see
Access to the Internet on page 2778. Subnets can also be either AD-specific or regional (regional ones
have "regional" after the name). We recommend using regional subnets. For more information, see About
Regional Subnets on page 2848.
Enter the following information:
• New subnet name: A friendly name for the subnet. It doesn't have to be unique, and it cannot be
changed later in the Console. You can change it with the API. Avoid entering confidential information.
• Create in compartment: The compartment where you want to put the subnet.
• CIDR block: A single, contiguous CIDR block for the subnet (for example, 172.16.0.0/24). Make sure
it's within the cloud network's CIDR block and doesn't overlap with any other subnets. You cannot
change this value later. See Allowed VCN Size and Address Ranges on page 2776. For reference,
here's a CIDR calculator.
Enter subnet OCID
For Subnet OCID, enter the subnet OCID.
b. If the subnet is public, you can optionally assign the instance a public IP address. A public IP address
makes the instance accessible from the internet. Select the Assign a public IPv4 address option. For more
information, see Access to the Internet on page 2778.
c. (Optional) If you want to configure advanced networking settings, click Show advanced options. The
following options are available:
• Use network security groups to control traffic: Select this option if you want to add the instance's
primary VNIC to one or more network security groups (NSGs). Then, specify the NSGs. Available only
when you use an existing VCN. For more information, see Network Security Groups on page 2867.
• Private IP address: An available private IP address of your choice from the subnet's CIDR. If you don't
specify a value, the private IP address is automatically assigned.
• Hostname: A hostname to be used for DNS within the cloud network. Available only if the VCN and
subnet both have DNS labels. For more information, see DNS in Your Virtual Cloud Network on page
2936.
• Launch Options: The networking launch type. Available only for VMs. For more information, see
Recommended Networking Launch Types on page 701.
7. In the Boot volume section, configure the size and encryption options for the instance's boot volume:
• To specify a custom size for the boot volume, select the Specify a custom boot volume size check box. Then,
enter a custom size from 50 GB (256 GB for Oracle-provided Windows images) to 32 TB. The specified size
must be larger than the selected image's default boot volume size. See Custom Boot Volume Sizes on page
614 for more information.
• For VM instances, you can optionally select the Use in-transit encryption check box. See Block Volume
Encryption on page 508 for more information. If you are using your own Vault service encryption key for the
boot volume, then this key is also used for in-transit encryption. Otherwise, the Oracle-provided encryption
key is used.
• Boot volumes are encrypted by default, but you can optionally use your own Vault service encryption key to
encrypt the data in this volume. To use the Vault service for your encryption needs, select the Encrypt this
volume with a key that you manage check box. Then, select the Vault compartment and Vault that contain
the master encryption key you want to use. Also select the Master encryption key compartment and Master
encryption key. For more information about encryption, see Overview of Vault on page 3988.
• The Block Volume elastic performance feature lets you change the volume performance for boot volumes.
When you create an instance, its boot volume is configured with the default volume performance set to
Balanced. After you launch the instance, you can modify the performance setting. For steps to modify the
performance setting, see Changing the Performance of a Volume on page 586. For more information about this
feature, see Block Volume Elastic Performance on page 585.
8. (Optional) To configure advanced settings, click Show Advanced Options. The following options are available:
• On the Management tab, you can configure the following:
• Choose a fault domain: The fault domain to use for the instance. If you do not specify the fault domain,
the system selects one for you. You can change the fault domain for a VM instance after you create the
instance. For more information about fault domains, see Fault Domains on page 184 and Best Practices for
Your Compute Instance on page 597.
• Require an authorization header: Select this check box to require that all requests to the instance
metadata service (IMDS) use the version 2 endpoint and include an authorization header. Requests to
IMDSv1 are denied. The image must support IMDSv2. For more information, see Getting Instance
Metadata on page 763.
• Initialization Script: User data to be used by cloudbase-init to run custom scripts or provide custom
cloudbase-init configuration. Browse to the file that you want to upload, or drag and drop the file into the
box. The file or script does not need to be base64-encoded, because the Console performs this encoding
when the information is submitted. For information about how to take advantage of user data, see the
cloudbase-init documentation. The total maximum size for user data and other metadata that you provide is
32,000 bytes.
Caution:

Do not include anything in the script that could trigger a reboot, because this could impact the instance launch and cause it to fail. Any
actions requiring a reboot should only be performed once the instance
state is Running.
• Restore instance lifecycle state after infrastructure maintenance: By default, if a VM instance is
running when a maintenance event affects the underlying infrastructure, the instance is rebooted after it is
recovered. Clear this check box if you want the instance to be recovered in the stopped state.
• Tagging: If you have permissions to create a resource, then you also have permissions to apply free-form
tags to that resource. To apply a defined tag, you must have permissions to use the tag namespace. For
more information about tagging, see Resource Tags on page 213. If you are not sure whether to apply tags,
then skip this option (you can apply tags later) or ask your administrator.
• On the Placement tab, you can choose to launch the instance on a dedicated virtual machine host. This
lets you run the instance in isolation, so that it is not running on shared infrastructure. To do this, select the
Dedicated host option, and then select a dedicated virtual machine host from the drop-down list. Before you
can place an instance on a dedicated virtual machine host, you must create a dedicated virtual machine host in
the same availability domain and fault domain as the instance. You can only place an instance on a dedicated
virtual machine host at the time you create the instance. For more information, see Dedicated Virtual Machine
Hosts on page 735.
• On the Oracle Cloud Agent tab, choose which plugins you want to enable when the instance is launched.
Plugins collect performance metrics, install OS updates, and perform other instance management tasks. For
more information, see Managing Plugins with Oracle Cloud Agent on page 746.
Important:

After you create the instance, you might need to perform additional
configuration tasks before you can use each plugin.
9. Click Create.
To track the progress of the operation, you can monitor the associated work request.
After the instance is provisioned, details about it appear in the instance list. To view more details, including IP
addresses and the initial Windows password, click the instance name.
When the instance is fully provisioned and running, you can connect to it using Remote Desktop as described in
Connecting to an Instance on page 739.
You also can attach a volume to the instance, provided the volume is in the same availability domain. For background
information about volumes, see Overview of Block Volume on page 504.
For steps to let additional users connect to the instance, see Adding Users on an Instance on page 743.

Managing Tags for an Instance


You can add tags to your resources to help you organize them according to your business needs. You can add tags
at the time you create a resource, or you can update the resource later with the desired tags. For general information
about applying tags, see Resource Tags on page 213.
To manage tags for an instance
1. Open the navigation menu. Under Core Infrastructure, go to Compute and click Instances.
2. Click the instance that you're interested in.
3. Click the Tags tab to view or edit the existing tags. Or click More Actions, and then click Add tags to add new
ones.

Using the API


For information about using the API and signing requests, see REST APIs on page 4409 and Security Credentials on
page 181. For information about SDKs, see Software Development Kits and Command Line Interface on page 4262.
Use these API operations to manage instances:
• ListInstances
• LaunchInstance
• GetInstance
• UpdateInstance
• TerminateInstance
• GetInstanceDefaultCredentials
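For example, the LaunchInstance operation can be called through the OCI SDK for Python. The following is a minimal sketch only; the OCIDs, availability domain name, shape, and public key path are placeholders that you must replace with values from your own tenancy.

import oci

config = oci.config.from_file()
compute = oci.core.ComputeClient(config)

with open("/path/to/id_rsa.pub") as f:   # placeholder path to your SSH public key
    ssh_public_key = f.read()

details = oci.core.models.LaunchInstanceDetails(
    compartment_id="ocid1.compartment.oc1..example",
    availability_domain="Uocm:PHX-AD-1",          # placeholder AD name
    display_name="my-instance",
    shape="VM.Standard.E3.Flex",
    shape_config=oci.core.models.LaunchInstanceShapeConfigDetails(
        ocpus=1, memory_in_gbs=16
    ),
    source_details=oci.core.models.InstanceSourceViaImageDetails(
        image_id="ocid1.image.oc1..example",
        boot_volume_size_in_gbs=50,
    ),
    create_vnic_details=oci.core.models.CreateVnicDetails(
        subnet_id="ocid1.subnet.oc1..example",
        assign_public_ip=True,
    ),
    metadata={"ssh_authorized_keys": ssh_public_key},
)

instance = compute.launch_instance(details).data
print(instance.id, instance.lifecycle_state)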
You can also launch instances from images that are published by Oracle partners in the Partner Image catalog. Use
these APIs to work with the Partner Image catalog listings:
• AppCatalogListing
• AppCatalogListingResourceVersion
• AppCatalogListingResourceVersionAgreements
• AppCatalogListingResourceVersionSummary
• AppCatalogListingSummary
• AppCatalogSubscription

Managing Compute Instances


You can simplify the management of your Compute instances using resources such as instance configurations and
instance pools.
An instance configuration is a template that defines the settings to use when creating Compute instances.
An instance pool is a set of instances that is managed as a group.

Instance Configurations
An instance configuration defines the settings to use when creating Compute instances, including details such as the
base image, shape, and metadata. You can also specify the associated resources for the instance, such as block volume
attachments and network configuration.
For steps to create an instance configuration, see Creating an Instance Configuration on page 714.
To modify an existing instance configuration, create a new instance configuration with the desired settings.
For steps to delete an instance configuration, see Deleting an Instance Configuration on page 723.

Instance Pools
Instance pools let you create and manage multiple Compute instances within the same region as a group. They also
enable integration with other services, such as the Load Balancing service and IAM service.
You create an instance pool using an existing instance configuration. For steps, see Creating an Instance Pool on page
716.
After you have created an instance pool, you can update the size of the pool, add and remove existing instances
from the pool, and attach or detach load balancers. You can also update the instance pool to use a different instance
configuration. For more information, see Updating an Instance Pool on page 718.
You can automatically adjust the number of instances in an instance pool based on performance metrics or a schedule.
To do this, you enable autoscaling for the instance pool. For background information and steps, see Autoscaling on
page 724.
A cluster network is a special kind of instance pool that is designed for massive, high-performance computing jobs.
For more information, see Managing Cluster Networks on page 732.
For steps to delete an instance pool, see Deleting an Instance Pool on page 723.
Caution:

When you delete an instance pool, all of its resources are permanently
deleted, including associated instances, attached boot volumes, and block
volumes.

Instance Pool Lifecycle States


The following list describes the different lifecycle states for instance pools.
• Provisioning: When you create an instance pool, this is the first state the instance pool is in. Instances for the
instance pool are being configured based on the specified instance configuration.
• Starting: The instances are being launched. At this point, the only action you can take is to terminate the instance
pool.
• Running: The instances are created and running.
• Stopping: The instances are in the process of being shut down.
• Stopped: The instances are shut down.
• Scaling: After an instance pool has been created, if you update the instance pool size, it will go into this state
while creating instances (for increases in pool size) or terminating instances (for decreases in pool size). At this
point, the only action you can take is to terminate the instance pool.
• Terminating: The instances and associated resources are being terminated.
• Terminated: The instance pool, all of its instances, and all associated resources are terminated.
When working with instance configurations and instance pools, keep the following things in mind:
• You can't delete an instance configuration if it is associated with at least one instance pool.
• You can use the same instance configuration for multiple instance pools. However, an instance pool can have only
one instance configuration associated with it.
• If the instance pool has been in the scaling or provisioning state for an extended period of time, it might
be because the number of instances that were requested has exceeded your tenancy's service limits for that
availability domain. For information about how to check your service limits, and steps to request a service limit
increase, see Service Limits on page 217. If this occurs, you need to terminate the instance pool and re-create it.
• If you modify the instance configuration for an instance pool, existing instances that are part of that pool will not
change. Any new instances that are created after you modify the instance configuration will use the new instance
configuration. New instances will not be created unless you increase the size of the instance pool or terminate
existing instances.
• If you decrease the size of an instance pool, the oldest instances are terminated first.

Creating an Instance Configuration


Instance configurations let you define the settings to use when creating Compute instances.
You use an instance configuration when you want to create one or more instances in an instance pool. For background
information about instance pools, see Managing Compute Instances on page 713.
You can also use an instance configuration to launch individual instances that are not part of a pool. To do this, use
the SDKs, command line interface (CLI), or API.
In the Console, you create an instance configuration using an existing Compute instance as a template. If you want to
create an instance configuration by specifying a list of configuration settings, use the SDKs, CLI, or API.
When you create an instance configuration using an existing instance as a template, be aware of the following
information:
• The instance configuration does not include any information from the instance's boot volume, such as installed
applications, binaries, and files on the instance. To create an instance configuration that includes the custom setup
from an instance, you must first create a custom image from the instance, and then use the custom image to launch
a new instance. Finally, create the instance configuration based on the instance that you created from the custom
image.
• The instance configuration does not include the contents of any block volumes that are attached to the instance. To
include block volume contents with an instance configuration, first create a backup of the attached block volumes.
Then, use the SDKs, CLI, or API to create the instance configuration, specifying the block volume backups in the
list of configuration settings.
Required IAM Policy
To use Oracle Cloud Infrastructure, you must be granted security access in a policy by an administrator. This access
is required whether you're using the Console or the REST API with an SDK, CLI, or other tool. If you get a message
that you don’t have permission or are unauthorized, verify with your administrator what type of access you have and
which compartment you should work in.
For administrators: For a typical policy that gives access to instance pools and instance configurations, see Let users
manage Compute instance configurations, instance pools, and cluster networks on page 2152.
Tagging Resources
You can add tags to your resources to help you organize them according to your business needs. You can add tags
at the time you create a resource, or you can update the resource later with the desired tags. For general information
about applying tags, see Resource Tags on page 213.
To manage tags for an instance configuration
1. Open the navigation menu. Under Core Infrastructure, go to Compute and click Instance Configurations.
2. Click the instance configuration that you're interested in.
3. Click the Tags tab to view or edit the existing tags. Or click Add tags to add new ones.
For more information, see Resource Tags on page 213.
Using the Console
1. Open the navigation menu. Under Core Infrastructure, go to Compute and click Instances.
2. Click the instance whose image you want to use as a template to create the instance configuration.
3. Click More Actions, and then click Create Instance Configuration.
4. Select the compartment you want to create the instance configuration in.
5. Specify a name for the instance configuration. It doesn't have to be unique, and it cannot be changed later in the
Console (but you can change it with the API). Avoid entering confidential information.
6. Show Tagging Options: Optionally, you can add tags. If you have permissions to create a resource, you also have
permissions to add free-form tags to that resource. To add a defined tag, you must have permissions to use the tag
namespace. For more information about tagging, see Resource Tags on page 213. If you are not sure if you should
add tags, skip this option (you can add tags later) or ask your administrator.
7. Click Create Instance Configuration.
Using the API
For information about using the API and signing requests, see REST APIs on page 4409 and Security Credentials on
page 181. For information about SDKs, see Software Development Kits and Command Line Interface on page 4262.
Use the CreateInstanceConfiguration operation to create an instance configuration.
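For example, the following sketch uses the OCI SDK for Python to create an instance configuration from an existing instance, mirroring the Console flow above. The OCIDs are placeholders, and the create-from-instance model name shown is an assumption about the SDK surface at the time of writing.

import oci

config = oci.config.from_file()
compute_mgmt = oci.core.ComputeManagementClient(config)

# The compartment and instance OCIDs below are placeholders.
details = oci.core.models.CreateInstanceConfigurationFromInstanceDetails(
    compartment_id="ocid1.compartment.oc1..example",
    display_name="my-instance-configuration",
    instance_id="ocid1.instance.oc1..example",
)

instance_config = compute_mgmt.create_instance_configuration(details).data
print(instance_config.id)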

Creating an Instance Pool


Use instance pools to create and manage multiple compute instances within the same region as a group.
When you create an instance pool, you use an instance configuration as the template to create new instances in
the pool. You can also attach existing instances to a pool by updating the pool. For background information about
instance pools and instance configurations, see Managing Compute Instances on page 713.
Optionally, you can associate one or more load balancers with an instance pool. If you do this, when you add an
instance to the instance pool, the instance is automatically added to the load balancer's backend set. After the instance
reaches a healthy state (the instance is listening on the configured port number), incoming traffic is automatically
routed to the new instance.
Instance pools are supported for virtual machine (VM) and bare metal instances.
Required IAM Policy
To use Oracle Cloud Infrastructure, you must be granted security access in a policy by an administrator. This access
is required whether you're using the Console or the REST API with an SDK, CLI, or other tool. If you get a message
that you don’t have permission or are unauthorized, verify with your administrator what type of access you have and
which compartment you should work in.
For administrators: For a typical policy that gives access to instance pools and instance configurations, see Let users
manage Compute instance configurations, instance pools, and cluster networks on page 2152.
Tagging Resources
You can add tags to your resources to help you organize them according to your business needs. You can add tags
at the time you create a resource, or you can update the resource later with the desired tags. For general information
about applying tags, see Resource Tags on page 213.
Distributing Instances Across Fault Domains for High Availability
By default, the instances in a pool are distributed across all fault domains in a best-effort manner based on capacity. If
capacity isn't available in one fault domain, the instances are placed in other fault domains to allow the instance pool
to launch successfully.
In a high availability scenario, you can require that the instances in a pool are evenly distributed across each of the
fault domains that you specify. When sufficient capacity isn't available in one of the fault domains, the instance pool
will not launch or scale successfully, and a work request for the instance pool will return an "out of capacity" error.
To fix the capacity error, either wait for capacity to become available, or use the UpdateInstancePool operation to
update the placement configuration (the availability domain and fault domain) for the instance pool.
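For example, the placement configuration can be updated with the OCI SDK for Python, as in the following sketch; the OCIDs, availability domain, and fault domain names are placeholders, and the placement model name is an assumption about the SDK surface.

import oci

config = oci.config.from_file()
compute_mgmt = oci.core.ComputeManagementClient(config)

# All OCIDs and domain names below are placeholders.
compute_mgmt.update_instance_pool(
    "ocid1.instancepool.oc1..example",
    oci.core.models.UpdateInstancePoolDetails(
        placement_configurations=[
            oci.core.models.UpdateInstancePoolPlacementConfigurationDetails(
                availability_domain="Uocm:PHX-AD-1",
                primary_subnet_id="ocid1.subnet.oc1..example",
                fault_domains=["FAULT-DOMAIN-1", "FAULT-DOMAIN-2"],
            )
        ]
    ),
)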
Prerequisites
Before you can create an instance pool, you need:
• An instance configuration. An instance configuration is a template that defines the settings to use when creating
instances. When you create the instance pool, monitoring will be enabled by default on instances that support
monitoring, regardless of the settings in the instance configuration. For more information, see Creating an Instance
Configuration on page 714.
Note:

You cannot create an instance pool from an instance configuration where the image source is a boot volume.
• If you want to associate the instance pool with a load balancer, you need a load balancer and backend set. For
steps to create a load balancer, see Managing Load Balancers on page 2512.
Using the Console
1. Open the navigation menu. Under Core Infrastructure, go to Compute and click Instance Pools.
2. Click Create Instance Pool.
3. On the Add Basic Details page, do the following:


a. Select the compartment to create the instance pool in.
b. Enter a name for the instance pool. It doesn't have to be unique, and it cannot be changed later in the Console
(but you can change it with the API). Avoid entering confidential information.
c. Select the Instance configuration that you want to use.
d. Specify the targeted Number of instances for the instance pool.
e. Show Tagging Options: Optionally, you can add tags. If you have permissions to create a resource, you also
have permissions to add free-form tags to that resource. To add a defined tag, you must have permissions to
use the tag namespace. For more information about tagging, see Resource Tags on page 213. If you are not
sure if you should add tags, skip this option (you can add tags later) or ask your administrator.
4. Click Next.
5. On the Configure Pool Placement page, select the location where you want to place the instances. Do the
following:
a. Select the Availability domain to launch the instances in.
b. For the Fault domains box, do one of the following things:
• If you want the system to make a best effort to distribute instances across fault domains based on capacity,
leave the box blank.
• If you want to require that the instances in the pool are distributed evenly in one or more fault domains,
select the fault domains to place the instances in. The pool will not launch or scale successfully if sufficient
capacity is unavailable in the selected fault domains. For more information, see Distributing Instances
Across Fault Domains for High Availability on page 716.
c. In the Primary VNIC section, configure the network details for the instances:
• Virtual cloud network: The virtual cloud network (VCN) to create the instances in.
• Subnet: A subnet within the cloud network to attach the instances to. The subnets are either public or
private. Private means the instances in that subnet can't have public IP addresses. For more information, see
Access to the Internet on page 2778. Subnets can also be either AD-specific or regional (regional ones
have "regional" after the name). We recommend using regional subnets. For more information, see About
Regional Subnets on page 2848.
d. If secondary VNICs are defined by the instance configuration, a Secondary VNIC section appears. Select the
secondary VCN and subnet for the instance pool.
e. If you want the instance pool to create instances in more than one availability domain, click + Another
Availability Domain. Then, repeat the previous steps.
6. If you want to associate a load balancer with the instance pool, select the Attach a load balancer check box.
Then, do the following:
a. Select the Load balancer to associate with the instance pool.
b. Select the Backend set on the load balancer to add instances to.
c. In the Port box, enter the server port on the instances to which the load balancer must direct traffic. This value
applies to all instances that use this load balancer attachment.
d. In the VNIC list, select the VNIC to use when adding the instance to the backend set. Instances that belong to
a backend set are also called backend servers. The private IP address is used. This value applies to all instances
that use this load balancer attachment.
e. If you want to associate additional load balancers with the instance pool, click + Another Load Balancer.
Then, repeat the previous steps. Do this for each additional load balancer you want to associate with the
instance pool.
For background information about load balancers, see Overview of Load Balancing on page 2498.
7. Click Next.
8. Review the instance pool details, and then click Create.
To track the progress of the operation, you can monitor the associated work request.

Using the API


For information about using the API and signing requests, see REST APIs on page 4409 and Security Credentials on
page 181. For information about SDKs, see Software Development Kits and Command Line Interface on page 4262.
Use these API operations to create and manage instance pools:
• CreateInstancePool
• AttachLoadBalancer
• DetachLoadBalancer
• AttachInstancePoolInstance
• DetachInstancePoolInstance
• GetInstancePoolLoadBalancerAttachment
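As an example, the CreateInstancePool operation can be called through the OCI SDK for Python. This is a minimal sketch with placeholder OCIDs and a placeholder availability domain; a real pool typically needs one placement configuration for each availability domain you want to use.

import oci

config = oci.config.from_file()
compute_mgmt = oci.core.ComputeManagementClient(config)

# All OCIDs and the availability domain name are placeholders.
details = oci.core.models.CreateInstancePoolDetails(
    compartment_id="ocid1.compartment.oc1..example",
    instance_configuration_id="ocid1.instanceconfiguration.oc1..example",
    display_name="my-instance-pool",
    size=2,
    placement_configurations=[
        oci.core.models.CreateInstancePoolPlacementConfigurationDetails(
            availability_domain="Uocm:PHX-AD-1",
            primary_subnet_id="ocid1.subnet.oc1..example",
        )
    ],
)

pool = compute_mgmt.create_instance_pool(details).data
print(pool.id, pool.lifecycle_state)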

Updating an Instance Pool


You can change the size of an instance pool, attach existing instances to a pool, attach load balancers, and update
various other properties.
For background information about instance pools and instance configurations, see Managing Compute Instances on
page 713.
Required IAM Policy
To use Oracle Cloud Infrastructure, you must be granted security access in a policy by an administrator. This access
is required whether you're using the Console or the REST API with an SDK, CLI, or other tool. If you get a message
that you don’t have permission or are unauthorized, verify with your administrator what type of access you have and
which compartment you should work in.
For administrators: For a typical policy that gives access to instance pools and instance configurations, see Let users
manage Compute instance configurations, instance pools, and cluster networks on page 2152.
Updating the Instance Pool Size
You can manually update the number of instances for an instance pool.
When you increase the size of a pool, the pool creates new instances using the pool's instance configuration as a
template. If you want to add existing instances to the pool, you can instead attach instances to the pool.
When you decrease the size of a pool, the pool terminates (deletes) the extra instances. The oldest instances are
terminated first. If you need to perform tasks on an instance before it is deleted, you should instead detach the
instance from the pool and then delete the instance separately.
To automatically adjust the number of instances in an instance pool based on performance metrics or a schedule,
enable autoscaling for the instance pool.

Using the Console


1. Open the navigation menu. Under Core Infrastructure, go to Compute and click Instance Pools.
2. Click the instance pool that you're interested in.
3. Click Edit.
4. Specify the updated number of instances for the instance pool, and then click Save Changes.
When you update the instance pool size, it triggers a scaling event. Keep the following things in mind:
• If the instance pool's lifecycle state is Running, the pool will create new instances or terminate existing instances
at that time, to match the new size of the pool. Instances are terminated in the order that they were created, first-in,
first-out.
• If the instance pool's lifecycle state is Stopped, for an increase in size, new instances will be configured for the
pool, but won't be launched. For a decrease in size, the instances will be terminated.
To track the progress of the operation, you can monitor the associated work request.
Important:

If the instance pool has been in the scaling or provisioning state for an
extended period of time, it may be because the number of instances requested
has exceeded your tenancy's service limits for that shape and availability
domain. Check your tenancy's service limits for Compute.

Using the API


Use the UpdateInstancePool operation.
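For example, the following minimal sketch changes the pool size with the OCI SDK for Python; the instance pool OCID is a placeholder, and size is the new target number of instances.

import oci

config = oci.config.from_file()
compute_mgmt = oci.core.ComputeManagementClient(config)

# The instance pool OCID is a placeholder.
compute_mgmt.update_instance_pool(
    "ocid1.instancepool.oc1..example",
    oci.core.models.UpdateInstancePoolDetails(size=5),
)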
Attaching an Instance to an Instance Pool
You can attach an existing instance to an instance pool. This lets you select which instances you want to manage as a
group.
When you attach an instance, the pool size is increased.
Important:

If an autoscaling configuration is associated with the instance pool, ensure that the autoscaling policy defines a maximum pool size that's big enough for
the expanded pool. You can do this by editing the autoscaling policy. If you
attach an instance and it causes the pool size to increase above the maximum
autoscaling target, a future autoscaling event might decrease the pool size and
terminate instances.
If load balancers are attached to the pool, the instance is also added to the load balancers.

Prerequisites
To attach an instance to a pool, all of the following things must be true:
• The instance and the pool are running.
• The instance is the same machine type as the pool, either virtual machine or bare metal.
• The instance is in the same availability domain and fault domain as the pool.
• The instance's primary VNIC is in the same VCN and subnet as the pool.
• If secondary VNICs are defined, the instance's secondary VNIC is in the same VCN and subnet as the secondary
VNICs used by other instances in the pool.
• The instance is not attached to another pool.
You must also know the name or OCID of the instance that you want to attach.

Using the Console


1. Open the navigation menu. Under Core Infrastructure, go to Compute and click Instance Pools.
2. Click the instance pool that you're interested in.
3. Under Resources, click Attached Instances.
4. Click Attach Instance.
5. In the Input type list, select either Instance name or Instance OCID. Then, select the name of the instance or
enter its OCID.
6. Click Attach Instance.
To track the progress of the operation, you can monitor the associated work request.

Using the API


Use the AttachInstancePoolInstance operation.
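For example, with the OCI SDK for Python the attach call looks roughly like the following sketch; both OCIDs are placeholders.

import oci

config = oci.config.from_file()
compute_mgmt = oci.core.ComputeManagementClient(config)

# Both OCIDs below are placeholders.
compute_mgmt.attach_instance_pool_instance(
    "ocid1.instancepool.oc1..example",
    oci.core.models.AttachInstancePoolInstanceDetails(
        instance_id="ocid1.instance.oc1..example"
    ),
)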

Detaching an Instance from an Instance Pool


You can detach an instance from an instance pool when you no longer want to manage the instance as part of the
pool.
When you detach an instance from a pool, you can choose whether to delete the instance or to retain it. You can
also choose whether to replace the detached instance by creating a new instance in the pool. If you don't replace the
detached instance, the pool size is decreased.

Using the Console


1. Open the navigation menu. Under Core Infrastructure, go to Compute and click Instance Pools.
2. Click the instance pool that you're interested in.
3. Under Resources, click Attached Instances.
4. For the instance that you want to detach, click the Actions icon (three dots). Then, click Detach Instance.
5. If you want to delete the instance and its boot volume, select the Permanently terminate (delete) this instance
and its attached boot volume check box.
6. If you want the pool to remain the same size after you detach the instance, you can provision a replacement
instance. Select the Replace the instance with a new instance, using the pool’s instance configuration as a
template for the instance check box.
7. Click Detach (or Detach and Terminate, if you're also deleting the instance).
To track the progress of the operation, you can monitor the associated work request.

Using the API


Use the DetachInstancePoolInstance operation.
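For example, the following sketch detaches an instance with the OCI SDK for Python while retaining the instance and asking the pool to replace it. Both OCIDs are placeholders, and the flag names reflect the SDK model as understood at the time of writing.

import oci

config = oci.config.from_file()
compute_mgmt = oci.core.ComputeManagementClient(config)

# Both OCIDs are placeholders. is_decrement_size=False asks the pool to
# provision a replacement instance; is_auto_terminate=False retains the
# detached instance instead of deleting it.
compute_mgmt.detach_instance_pool_instance(
    "ocid1.instancepool.oc1..example",
    oci.core.models.DetachInstancePoolInstanceDetails(
        instance_id="ocid1.instance.oc1..example",
        is_decrement_size=False,
        is_auto_terminate=False,
    ),
)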
Attaching a Load Balancer to an Instance Pool
Optionally, you can associate a load balancer with an instance pool. If you do this, when you add an instance to the
instance pool, the instance is automatically added to the load balancer's backend set. After the instance reaches a
healthy state (the instance is listening on the configured port number), incoming traffic is automatically routed to the
new instance. For background information about the Load Balancing service, see Overview of Load Balancing on
page 2498.

Prerequisites
You must have a load balancer and backend set to associate with the instance pool. For steps to create a load balancer,
see Managing Load Balancers on page 2512.

Using the Console


1. Open the navigation menu. Under Core Infrastructure, go to Compute and click Instance Pools.
2. Click the instance pool that you're interested in.
3. Under Resources, click Load Balancers.
4. Click Attach a Load Balancer.
5. Enter the following:
• Load balancer: The load balancer to associate with the instance pool.
• Backend set: The name of the backend set on the load balancer to add instances to.
• Port: The server port on the instances to which the load balancer must direct traffic. This value applies to all
instances that use this load balancer attachment.
• VNIC: The VNIC to use when adding the instance to the backend set. Instances that belong to a backend set
are also called backend servers. The private IP address is used. This value applies to all instances that use this
load balancer attachment.
6. Click Attach.
7. If you want to associate additional load balancers with the instance pool, click + Another Load Balancer. Then,
repeat the previous steps. Do this for each additional load balancer you want to associate with the instance pool.
To track the progress of the operation, you can monitor the associated work request.

Using the API


Use the AttachLoadBalancer operation.
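For example, the attachment can be created with the OCI SDK for Python, as in the following sketch; the OCIDs and backend set name are placeholders, and "PrimaryVnic" selects the primary VNIC of each pool instance for the backend set.

import oci

config = oci.config.from_file()
compute_mgmt = oci.core.ComputeManagementClient(config)

# The OCIDs and backend set name below are placeholders.
compute_mgmt.attach_load_balancer(
    "ocid1.instancepool.oc1..example",
    oci.core.models.AttachLoadBalancerDetails(
        load_balancer_id="ocid1.loadbalancer.oc1..example",
        backend_set_name="my-backend-set",
        port=80,
        vnic_selection="PrimaryVnic",
    ),
)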
Detaching a Load Balancer from an Instance Pool
You can detach a load balancer from an instance pool.

Using the Console


1. Open the navigation menu. Under Core Infrastructure, go to Compute and click Instance Pools.
2. Click the instance pool that you're interested in.
3. Under Resources, click Load Balancers.
4. Click the Actions icon (three dots) for the load balancer you want to detach.
5. Click Detach Load Balancer, and then click Detach to confirm.
To track the progress of the operation, you can monitor the associated work request.

Using the API


Use the DetachLoadBalancer operation.
Tagging Resources
You can add tags to your resources to help you organize them according to your business needs. You can add tags
at the time you create a resource, or you can update the resource later with the desired tags. For general information
about applying tags, see Resource Tags on page 213.
To manage tags for an instance pool
Using the Console:
1. Open the navigation menu. Under Core Infrastructure, go to Compute and click Instance Pools.
2. Click the instance pool that you're interested in.
3. Click the Tags tab to view or edit the existing tags. Or click More Actions, and then click Add tags to add new
ones.
Using the API: Use the UpdateInstancePool operation.
Updating Other Instance Pool Settings
To update other instance pool settings, such as the instance configuration that's used by the pool, the display name,
and the availability domain selections, use the UpdateInstancePool operation in the API, SDKs, or CLI.
To update the instance configuration that a pool uses when creating instances, you can do either of the following
things:
• Create a new instance configuration with the desired settings, and then attach the new instance configuration to
the pool. You can create a new instance configuration using the Console or the API, SDKs, or CLI. Then, use the
UpdateInstancePool operation to attach the new instance configuration to the pool.
• If you only want to update the display name or tags of an existing instance configuration, you can update the
pool's existing instance configuration. Use the UpdateInstanceConfiguration operation. For any other updates,
create a new instance configuration with the settings you want to use.
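For example, the following sketch uses the OCI SDK for Python to point a pool at a new instance configuration; both OCIDs are placeholders. Existing instances in the pool are not changed by this call.

import oci

config = oci.config.from_file()
compute_mgmt = oci.core.ComputeManagementClient(config)

# Both OCIDs are placeholders; only instances created after this call
# use the new instance configuration.
compute_mgmt.update_instance_pool(
    "ocid1.instancepool.oc1..example",
    oci.core.models.UpdateInstancePoolDetails(
        instance_configuration_id="ocid1.instanceconfiguration.oc1..example"
    ),
)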

Stopping and Starting the Instances in an Instance Pool


You can stop and start the instances in an instance pool as needed to update software or resolve error conditions.

Stopping or Restarting an Instance Using the Instance's OS


In addition to using the API and Console, you can stop and restart instances using the commands available in the
operating system when you are logged in to the instance. Stopping an instance using the instance's OS does not stop
billing for that instance. If you stop the instances in an instance pool this way, be sure to also stop the instance pool
from the Console or API.
Required IAM Policy
To use Oracle Cloud Infrastructure, you must be granted security access in a policy by an administrator. This access
is required whether you're using the Console or the REST API with an SDK, CLI, or other tool. If you get a message
that you don’t have permission or are unauthorized, verify with your administrator what type of access you have and
which compartment you should work in.
For administrators: For a typical policy that gives access to instance pools and instance configurations, see Let users
manage Compute instance configurations, instance pools, and cluster networks on page 2152.
If you're new to policies, see Getting Started with Policies on page 2143 and Common Policies on page 2150. For
reference material about writing policies for instances, cloud networks, or other Core Services API resources, see
Details for the Core Services on page 2192.
Resource Billing for Stopped Instances
For both VM and bare metal instances, billing depends on the shape that you use to create the instance:
• Standard shapes: Stopping an instance pool pauses billing. However, stopped instances continue to count toward
your service limits.
• Dense I/O shapes: Billing continues for stopped instance pools because the NVMe storage resources are
preserved. Related resources continue to count toward your service limits. To halt billing and remove related
resources from your service limits, you must terminate the instance pool.
• GPU shapes: Billing continues for stopped instance pools because GPU resources are preserved. Related
resources continue to count toward your service limits. To halt billing and remove related resources from your
service limits, you must terminate the instance pool.
• HPC shapes: Billing continues for stopped instance pools because the NVMe storage resources are preserved.
Related resources continue to count toward your service limits. To halt billing and remove related resources from
your service limits, you must terminate the instance pool.
Stopping an instance using the instance's OS does not stop billing for that instance. If you stop the instances in an
instance pool this way, be sure to also stop the instance pool from the Console or API.
For more information about Compute pricing, see Compute Pricing. For more information about how instances
running Microsoft Windows Server are billed when they are stopped, see How am I charged for Windows Server on
Oracle Cloud Infrastructure? on page 811.
Using the Console

To start all instances in a pool


1. Open the navigation menu. Under Core Infrastructure, go to Compute and click Instance Pools.
2. Click the instance pool that you're interested in.
3. Click Start.

To stop all instances in a pool


1. Open the navigation menu. Under Core Infrastructure, go to Compute and click Instance Pools.
2. Click the instance pool that you're interested in.
3. Click Stop.
4. Click Stop again.
The instances are shut down immediately, without waiting for the operating system to respond.

To reboot all instances in a pool


1. Open the navigation menu. Under Core Infrastructure, go to Compute and click Instance Pools.
2. Click the instance pool that you're interested in.
3. Click Reboot.
4. By default, the Console gracefully restarts the instances by sending a shutdown command to the operating system.
After waiting 15 minutes for the OS to shut down, the instances are powered off and then powered back on.
Note:

If the applications that run on the instances take more than 15 minutes to
shut down, they could be improperly stopped, resulting in data corruption.
To avoid this, shut down the instances using the commands available in the
OS before you restart the instances using the Console.
If you want to reboot the instances immediately, without waiting for the OS to respond, select the Force reboot
the instance pool by immediately powering off every instance in the pool, then powering them back on
check box.
5. Click Reboot Instance Pool.
Using the API
For information about using the API and signing requests, see REST APIs on page 4409 and Security Credentials on
page 181. For information about SDKs, see Software Development Kits and Command Line Interface on page 4262.
To manage the lifecycle state of the instances in an instance pool, use these operations:
• StartInstancePool
• StopInstancePool
• ResetInstancePool
• SoftresetInstancePool
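
If you prefer the CLI, the compute-management command group provides commands that map to these operations. The
following commands are a sketch: the command and parameter names shown here (for example, --instance-pool-id) are
assumptions based on the usual CLI naming for these API operations, so confirm them with
oci compute-management instance-pool --help before you rely on them.

oci compute-management instance-pool start --instance-pool-id <instancePool_ID>
oci compute-management instance-pool stop --instance-pool-id <instancePool_ID>
oci compute-management instance-pool reset --instance-pool-id <instancePool_ID>
oci compute-management instance-pool softreset --instance-pool-id <instancePool_ID>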

Deleting an Instance Configuration


You can permanently delete instance configurations that you no longer need.
Required IAM Policy
To use Oracle Cloud Infrastructure, you must be granted security access in a policy by an administrator. This access
is required whether you're using the Console or the REST API with an SDK, CLI, or other tool. If you get a message
that you don’t have permission or are unauthorized, verify with your administrator what type of access you have and
which compartment you should work in.
For administrators: For a typical policy that gives access to instance pools and instance configurations, see Let users
manage Compute instance configurations, instance pools, and cluster networks on page 2152.
Using the Console
1. Open the navigation menu. Under Core Infrastructure, go to Compute and click Instance Configurations.
2. Click the instance configuration that you're interested in.
3. Click Delete, and then confirm when prompted.
Using the API
For information about using the API and signing requests, see REST APIs on page 4409 and Security Credentials on
page 181. For information about SDKs, see Software Development Kits and Command Line Interface on page 4262.
Use the DeleteInstanceConfiguration operation to delete an instance configuration.
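A CLI sketch of the same operation follows. It assumes the instance-configuration command group and the
--instance-configuration-id parameter; verify the exact names with
oci compute-management instance-configuration delete --help.

oci compute-management instance-configuration delete --instance-configuration-id <instanceConfiguration_ID>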

Deleting an Instance Pool


You can permanently delete instance pools that you no longer need.

Caution:

When you delete an instance pool, the resources that are associated with the
pool are permanently deleted. This includes instances that were created by the
pool, instances that are attached to the pool, attached boot volumes, and block
volumes.
If an autoscaling configuration applies to the instance pool, the autoscaling configuration will be deleted
asynchronously after the pool is deleted. You can also manually delete the autoscaling configuration.
Required IAM Policy
To use Oracle Cloud Infrastructure, you must be granted security access in a policy by an administrator. This access
is required whether you're using the Console or the REST API with an SDK, CLI, or other tool. If you get a message
that you don’t have permission or are unauthorized, verify with your administrator what type of access you have and
which compartment you should work in.
For administrators: For a typical policy that gives access to instance pools and instance configurations, see Let users
manage Compute instance configurations, instance pools, and cluster networks on page 2152.
Using the Console
1. Open the navigation menu. Under Core Infrastructure, go to Compute and click Instance Pools.
2. Click the instance pool that you're interested in.
3. Click More Actions, and then click Terminate.
4. Confirm when prompted.
To track the progress of the operation, you can monitor the associated work request.
Using the API
For information about using the API and signing requests, see REST APIs on page 4409 and Security Credentials on
page 181. For information about SDKs, see Software Development Kits and Command Line Interface on page 4262.
Use the TerminateInstancePool operation to delete an instance pool.
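A CLI sketch of the same operation follows. It assumes the instance-pool terminate command and the
--instance-pool-id parameter; verify the names with oci compute-management instance-pool terminate --help. Keep in
mind that, as described in the caution above, terminating the pool permanently deletes its instances and their
attached boot and block volumes.

oci compute-management instance-pool terminate --instance-pool-id <instancePool_ID>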

Autoscaling
Autoscaling lets you automatically adjust the number of Compute instances in an instance pool. This helps you
provide consistent performance for your end users during periods of high demand, and helps you reduce your costs
during periods of low demand.
You can apply the following types of autoscaling to an instance pool:
• Metric-based autoscaling: An autoscaling action is triggered when a performance metric meets or exceeds a
threshold.
• Schedule-based autoscaling: Autoscaling events take place at the specific times that you schedule.
Autoscaling is supported for virtual machine (VM) and bare metal instance pools that use Standard, DenseIO, and
GPU shapes.

How Autoscaling Works: the Basics


You use autoscaling configurations to automatically manage the size of your instance pools. When autoscaling
automatically provisions instances in an instance pool, the pool scales out. When autoscaling removes instances from
the pool, the pool scales in.
When an instance pool scales in, instances are terminated in this order: the number of instances is balanced across
availability domains, and then balanced across fault domains. Finally, within a fault domain, the oldest instance is
terminated first.
An autoscaling configuration includes one or more autoscaling policies. These policies define the criteria that
trigger autoscaling actions and the actions to take. Each autoscaling configuration can either have one metric-based
autoscaling policy, or multiple schedule-based autoscaling policies. You can add a maximum of 50 schedule-based
autoscaling policies to an autoscaling configuration.
Each instance pool can have only one autoscaling configuration.

Metric-Based Autoscaling
In metric-based autoscaling, you choose a performance metric to monitor, and set thresholds that the performance
metric must reach to trigger an autoscaling event. When system usage meets a threshold, autoscaling dynamically
resizes the instance pool in near-real time. As load increases, the pool scales out. As load decreases, the pool scales in.
Tip:

Avoid changing the value assigned to the initial number of instances after
the pool has scaled. Lowering this value after the pool size has increased
causes instances in the pool to terminate. If you need to change this value,
the new value should equal or exceed the number of instances currently in
the pool.
Metric-based autoscaling relies on performance metrics that are collected by the Monitoring service, such as CPU
utilization. These performance metrics are aggregated into one-minute time periods and then averaged across all
instances in the instance pool. When three consecutive values (that is, the average metrics for three consecutive
minutes) meet the threshold, an autoscaling event is triggered.
A cooldown period between metric-based autoscaling events lets the system stabilize at the updated level. The
cooldown period starts when the instance pool reaches the Running state. Autoscaling continues to evaluate
performance metrics during the cooldown period. When the cooldown period ends, autoscaling adjusts the instance
pool's size again if needed.
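
For orientation, the following CLI sketch shows how a metric-based (threshold) policy like the one described above
might be supplied when creating an autoscaling configuration. The command name, parameter names, and JSON field
names are assumptions based on the Autoscaling API's threshold policy model rather than text from this guide, so
check the Autoscaling API reference and oci autoscaling configuration create --help before using them.

# Sketch only: verify the field names against the Autoscaling API reference.
cat > policies.json <<'EOF'
[
  {
    "policyType": "threshold",
    "displayName": "cpu-threshold-policy",
    "capacity": { "initial": 4, "min": 2, "max": 10 },
    "rules": [
      { "displayName": "scale-out",
        "action": { "type": "CHANGE_COUNT_BY", "value": 2 },
        "metric": { "metricType": "CPU_UTILIZATION",
                    "threshold": { "operator": "GT", "value": 90 } } },
      { "displayName": "scale-in",
        "action": { "type": "CHANGE_COUNT_BY", "value": -2 },
        "metric": { "metricType": "CPU_UTILIZATION",
                    "threshold": { "operator": "LT", "value": 20 } } }
    ]
  }
]
EOF
oci autoscaling configuration create \
  --compartment-id <compartment_ID> \
  --resource '{"id": "<instancePool_OCID>", "type": "instancePool"}' \
  --cool-down-in-seconds 300 \
  --policies file://policies.json

In this sketch, the pool starts at 4 instances, never shrinks below 2 or grows beyond 10, and adds or removes 2
instances when average CPU utilization stays above 90% or below 20% for three consecutive minutes.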

Schedule-Based Autoscaling
Schedule-based autoscaling is ideal for instance pools where demand behaves predictably based on a schedule, such as
a month, date, or time of day. Schedules can be recurring or one-time. For example:
• An instance pool has heavy use during business hours. The pool has lighter use on evenings and weekends. You
can schedule the pool to scale out on weekday mornings, and to scale in on weekday evenings.
• An instance pool has high demand on New Year's Eve. You can schedule the pool to scale out every year on
December 30, and to scale in on January 2.
• You're releasing a new application that runs in the instance pool, and expect that many people will start using
the application after the public announcement. In advance, you can schedule the pool to scale out on the day of
release.
A schedule-based autoscaling configuration can have multiple autoscaling policies, each with a different schedule and
target pool size. If you want to configure scale-in and scale-out events, you must create at least two separate policies.
One policy defines the target pool size and schedule for scaling in. The other policy defines the target pool size and
schedule for scaling out.
After a schedule-based autoscaling policy is executed, the instance pool stays at the target pool size until something
else changes the pool size, such as a different autoscaling policy. However, if you manually change the pool size,
schedule-based autoscaling does not readjust the pool size until the next scheduled autoscaling policy is executed.
You define autoscaling schedules using cron expressions. Autoscaling uses the Quartz cron implementation. You can
use an online cron expression generator to verify your cron expressions; one example is FREEFORMATTER.
Provide all times in UTC.
Note:

Schedule-based autoscaling configurations include an attribute for cooldown
period, which you'll see in the Console and when using the API, SDKs, and
CLI. However, the cooldown period does not have an effect on schedule-
based autoscaling configurations.

About Cron Expressions


A cron expression is a string composed of six or seven fields that represent the different parts of a schedule, such as
hours or days of the week. Cron expressions use this format:
<second> <minute> <hour> <day of month> <month> <day of week> <year>
The following list shows the values and special characters that are allowed for each field.

• Second: Allowed values: 0. Allowed special characters: none.
Note: When using the API, CLI, or SDKs for autoscaling, you must specify 0 as the value for seconds, even though
other values will create a valid cron expression. You don't need to provide any value for seconds when using the
Console.
• Minute: Allowed values: 0-59. Allowed special characters: * - , /
• Hour: Allowed values: 0-23. Allowed special characters: * - , /
• Day of the month: Allowed values: 1-31. Allowed special characters: * - , ? / L W
• Month: Allowed values: 1-12 or JAN-DEC. Allowed special characters: * - , /
• Day of the week: Allowed values: 1-7 or SUN-SAT. Allowed special characters: * - , ? / L #
• Year: Allowed values: 1970-2099. Allowed special characters: * - , /

The special characters are described in the following list.

• * : Indicates all values for a field. For example, * in the month field means every month.
• - : Indicates a range of values. For example, 8-17 in the hour field means hours 8 through 17, or 8 a.m. through
5 p.m.
• , : Indicates multiple values. For example, 3,5 in the day-of-the-week field means Tuesday and Thursday.
• ? : Indicates no specific values. For example, 0 0 10 ? * MON * means 10 a.m. on every Monday. When you want to
specify a day of the month, use ? in the day-of-the-week field. When you want to specify a day of the week, use ?
in the day-of-the-month field.
• / : Use n/m to indicate increments. The value before the slash is the start time, and the number after the slash
is the value to increment by. For example, 0/20 in the minute field means the minutes 0, 20, and 40.
• L : Last day of the week or last day of the month. For example, L in the day-of-the-month field means January 31,
February 28 in non-leap years, and so on. Use xL in the day-of-the-week field to indicate the last x day of the
month; for example, 6L in the day-of-the-week field means the last Friday of the month. Use L-n in the
day-of-the-month field to indicate an offset of n days from the last day of the month; for example, L-5 means
5 days before the last day of the month. Do not use L with multiple values or a range of values.
• W : The weekday (Monday through Friday) that is nearest to the given day. The value does not cross months. For
example, 10W means the nearest weekday to the 10th of the month. If the 10th is a Saturday, it means Friday the
9th. If the 10th is a Sunday, it means Monday the 11th. If the 10th is a Wednesday, it means Wednesday the 10th.
Do not use W with multiple values or a range of values.
• # : Use x#n to indicate the nth x day of the month. For example, 5#2 means the second Thursday of the month.

Example Cron Expressions


Use these example cron expressions as a starting point to create your own autoscaling schedules. Combine each cron
expression with a target pool size to create an autoscaling policy. Then, include one or more autoscaling policies in an
autoscaling configuration.
Goal: A one-time schedule with only one scaling event. At 11:00 p.m. on December 31, 2020, scale an instance pool
to 100 instances. You'll need one autoscaling policy.
• Policy 1:
• Target pool size: 100 instances
• Execution time: 11:00 p.m. on the 31st day of December, in 2020
• Cron expression: 0 0 23 31 12 ? 2020
Goal: A one-time schedule with a scale-out event and a scale-in event. At 10:00 a.m. on March 1, 2021, scale out to
75 instances. At 4 p.m. on March 7, 2021, scale in to 30 instances. You'll need two autoscaling policies.
• Policy 1 - scale out:
• Target pool size: 75 instances
• Execution time: 10:00 a.m. on the 1st day of March, in 2021
• Cron expression: 0 0 10 1 3 ? 2021
• Policy 2 - scale in:
• Target pool size: 30 instances
• Execution time: 4:00 p.m. on the 7th day of March, in 2021
• Cron expression: 0 0 16 7 3 ? 2021
Goal: A recurring daily schedule. On weekday mornings at 8:30 a.m., scale out to 10 instances. On weekday evenings
at 6 p.m., scale in to two instances. You'll need two autoscaling policies.
• Policy 1 - morning scale out:
• Target pool size: 10 instances
• Execution time: 8:30 a.m. on every Monday through Friday, in every month, in every year
• Cron expression: 0 30 8 ? * MON-FRI *
• Policy 2 - evening scale in:
• Target pool size: 2 instances
• Execution time: 6:00 p.m. on every Monday through Friday, in every month, in every year
• Cron expression: 0 0 18 ? * MON-FRI *
Goal: A recurring weekly schedule. On Tuesdays and Thursdays, scale the pool to 30 instances. On all other days of
the week, scale the pool to 20 instances. You'll need two autoscaling policies.
• Policy 1 - Tuesday and Thursday:
• Target pool size: 30 instances
• Execution time: 1 a.m. on every Tuesday and Thursday, in every month, in every year
• Cron expression: 0 0 1 ? * TUE,THU *
• Policy 2 - all other days:
• Target pool size: 20 instances
• Execution time: 1 a.m. on Sunday through Monday, Wednesday, and Friday through Saturday, in every month,
in every year
• Cron expression: 0 0 1 ? * SUN-MON,WED,FRI-SAT *
Goal: A recurring monthly schedule. On all days of the month, set the pool size to 20 instances. On the 15th day of
the month, scale out to 40 instances. You'll need two autoscaling policies.
• Policy 1 - daily pool size:
• Target pool size: 20 instances
• Execution time: Midnight on every day, in every month, in every year
• Cron expression: 0 0 0 * * ? *
• Policy 2 - scale out:
• Target pool size: 40 instances
• Execution time: 12:05 a.m. on the 15th day of the month, in every month, in every year
• Cron expression: 0 5 0 15 * ? *

Required IAM Policy


To use Oracle Cloud Infrastructure, you must be granted security access in a policy by an administrator. This access
is required whether you're using the Console or the REST API with an SDK, CLI, or other tool. If you get a message
that you don’t have permission or are unauthorized, verify with your administrator what type of access you have and
which compartment you should work in.
For administrators: For a typical policy that gives access to autoscaling configurations, see Let users manage
Compute autoscaling configurations on page 2153.

Tagging Resources
You can apply tags to your resources to help you organize them according to your business needs. You can apply tags
at the time you create a resource, or you can update the resource later with the wanted tags. For general information
about applying tags, see Resource Tags on page 213.

Prerequisites
• You have an instance pool. Optionally, you can attach a load balancer to the instance pool.
• For metric-based autoscaling, monitoring is enabled on the instances in the instance pool, and the Monitoring
service is receiving metrics that are emitted by the instance. When you initially create an instance pool using
instances that support monitoring, monitoring is enabled by default, regardless of the settings in the pool's instance
configuration.
• You have sufficient service limits to create the maximum number of instances that you want to scale to.

Using the Console


To create a metric-based autoscaling configuration
1. Open the navigation menu. Under Core Infrastructure, go to Compute and click Autoscaling Configurations.
2. Click Create Autoscaling Configuration.
3. On the Add Basic Details page, do the following:
a. Enter a name for the autoscaling configuration. Avoid entering confidential information.
b. Select the compartment to create the autoscaling configuration in.
c. Select the Instance pool to apply the autoscaling configuration to.
d. Show Tagging Options: If you have permissions to create a resource, then you also have permissions to apply
free-form tags to that resource. To apply a defined tag, you must have permissions to use the tag namespace.
For more information about tagging, see Resource Tags on page 213. If you are not sure whether to apply tags,
then skip this option (you can apply tags later) or ask your administrator.
4. Click Next.

5. On the Configure Autoscaling Policy page, select Metric-Based Autoscaling. Then, do the following:
a. Enter a name for the autoscaling policy. Avoid entering confidential information.
b. In the Cooldown in seconds box, enter the minimum amount of time to wait between scaling events. The
cooldown period gives the system time to stabilize before rescaling. The minimum value is 300 seconds,
which is also the default.
c. Select the Performance metric that triggers an increase or decrease in the number of instances in the instance
pool.
d. In the Scale-out rule area, specify the threshold that the performance metric must reach to increase the pool
size. Select a Scale-out operator and Threshold percentage. Then, enter the Number of instances to add to
the pool.
For example, when CPU utilization is greater than 90%, add 10 instances to the pool.
e. In the Scale-in rule area, specify the threshold that the performance metric must reach to decrease the pool
size. Select a Scale-in operator and Threshold percentage. Then, enter the Number of instances to remove
from the pool.
For example, when CPU utilization is less than 20%, remove 5 instances from the pool.
f. In the Scaling limits area, specify the number of instances in the instance pool:
• Minimum number of instances: The minimum number of instances that the pool is allowed to decrease
to.
• Maximum number of instances: The maximum number of instances that the pool is allowed to increase
to.
Important:

The number of instances that can be provisioned is also limited by
your tenancy's service limits.
• Initial number of instances: The number of instances to launch in the instance pool immediately after
autoscaling is enabled. After autoscaling retrieves performance metrics, the number of instances is
automatically adjusted from this initial number to a number that is based on the scaling limits that you set.
6. Click Next.
7. Review the autoscaling configuration, and then click Create.
Autoscaling runs. The cooldown period starts when the instance pool's state changes from Scaling to Running.
To create a schedule-based autoscaling configuration
1. Open the navigation menu. Under Core Infrastructure, go to Compute and click Autoscaling Configurations.
2. Click Create Autoscaling Configuration.
3. On the Add Basic Details page, do the following:
a. Enter a name for the autoscaling configuration. Avoid entering confidential information.
b. Select the compartment to create the autoscaling configuration in.
c. Select the Instance pool to apply the autoscaling configuration to.
d. Show Tagging Options: If you have permissions to create a resource, then you also have permissions to apply
free-form tags to that resource. To apply a defined tag, you must have permissions to use the tag namespace.
For more information about tagging, see Resource Tags on page 213. If you are not sure whether to apply tags,
then skip this option (you can apply tags later) or ask your administrator.
4. Click Next.

5. On the Configure Autoscaling Policy page, select Schedule-Based Autoscaling. Then, do the following:
a. Enter a name for the autoscaling policy. Avoid entering confidential information.
b. In the Target pool size box, enter the number of instances that the pool should scale to at the scheduled time.
Important:

The number of instances that can be provisioned is also limited by your
tenancy's service limits.
c. In the Execution schedule area, define the schedule for implementing this autoscaling policy in UTC. Use a
Quartz cron expression. For more information about cron expressions, see About Cron Expressions on page
726.
d. To schedule additional scaling events, click + Another Policy and then repeat the previous steps.
6. When you're finished, click Next.
7. Review the autoscaling configuration, and then click Create.
Autoscaling runs at the scheduled time.
To edit an autoscaling configuration
You can change these characteristics of an autoscaling configuration:
• Name
• For metric-based autoscaling, the cooldown period between autoscaling actions
1. Open the navigation menu. Under Core Infrastructure, go to Compute and click Autoscaling Configurations.
2. Click the autoscaling configuration that you're interested in.
3. Click Edit.
4. Make your updates. Avoid entering confidential information.
5. Click Save Changes.
To edit an autoscaling policy
You can change these characteristics of an autoscaling policy:
• Name
• For metric-based autoscaling:
• Which performance metric triggers an autoscaling action
• The minimum and maximum number of instances
• The initial number of instances that the pool should have immediately after you update the autoscaling policy
Caution:

If you specify a smaller initial number of instances than the current pool
size, instances will be terminated.
• Scale-out and scale-in operators and thresholds
• The number of instances to add or remove
• For schedule-based autoscaling, you can edit the target pool size or schedule for an existing policy, delete an
existing policy, or add a new policy
1. Open the navigation menu. Under Core Infrastructure, go to Compute and click Autoscaling Configurations.
2. Click the autoscaling configuration that you're interested in.
3. In the Autoscaling Policies area, click Edit.
4. Make your updates. Avoid entering confidential information.
5. Click Save Changes.
To enable or disable a schedule-based autoscaling policy
1. Open the navigation menu. Under Core Infrastructure, go to Compute and click Autoscaling Configurations.
2. Click the autoscaling configuration that you're interested in.

3. In the Autoscaling Policies area, under Status, toggle the Enabled or Disabled switch.
To disable an autoscaling configuration
1. Open the navigation menu. Under Core Infrastructure, go to Compute and click Autoscaling Configurations.
2. Click the autoscaling configuration that you're interested in.
3. Click Disable, and then confirm when prompted.
To delete an autoscaling configuration
When you delete an autoscaling configuration, the instance pool remains in its most recent state.
1. Open the navigation menu. Under Core Infrastructure, go to Compute and click Autoscaling Configurations.
2. Click the autoscaling configuration that you're interested in.
3. Click Delete, and then confirm when prompted.
To manage tags for an autoscaling configuration
1. Open the navigation menu. Under Core Infrastructure, go to Compute and click Autoscaling Configurations.
2. Click the autoscaling configuration that you're interested in.
3. Click the Tags tab to view or edit the existing tags. Or click Add Tags to add new ones.
For more information, see Resource Tags on page 213.

Using the API


For information about using the API and signing requests, see REST APIs on page 4409 and Security Credentials on
page 181. For information about SDKs, see Software Development Kits and Command Line Interface on page 4262.
Use the Autoscaling API to manage autoscaling configurations and policies.
To update the autoscaling configuration with a new instance pool, create a new instance configuration and then point
the instance pool to the new configuration:
• First, create a new instance configuration with the desired settings. You can do this using the Console.
For steps, see Creating an Instance Configuration on page 714. To do this using the API, use the
CreateInstanceConfiguration operation.
• Next, update the instance pool used in the autoscaling configuration to point to the new instance configuration. To
do this using the API, use the UpdateInstancePool operation to change the instanceConfigurationId. You
cannot use the Console to update the instance configuration used by the instance pool.
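A minimal CLI sketch of the second step follows. It assumes that the instance-pool update command accepts an
--instance-configuration-id parameter (mapping to instanceConfigurationId); verify with
oci compute-management instance-pool update --help.

oci compute-management instance-pool update \
  --instance-pool-id <instancePool_ID> \
  --instance-configuration-id <newInstanceConfiguration_ID>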

Managing Cluster Networks


A cluster network is a pool of high performance computing (HPC) instances or GPU instances that are connected
with a high-bandwidth, ultra low-latency network. Each node in the cluster is a bare metal machine located in close
physical proximity to the other nodes. A remote direct memory access (RDMA) network between nodes provides
latency as low as single-digit microseconds, comparable to on-premises HPC clusters.
Cluster networks are designed for highly demanding parallel computing workloads. For example:
• Computational fluid dynamics simulations for automotive or aerospace modeling
• Financial modeling and risk analysis
• Biomedical simulations
• Trajectory analysis and design for space exploration
• Artificial intelligence and big data workloads

Cluster networks are built on top of the instance pools feature. Most operations in the instance pool are managed
directly by the cluster network, though you can monitor and add tags to the underlying instance pool.

For more information about how to access and store the data that you want to process in your cluster networks, see
FastConnect Overview on page 3201, Overview of File Storage on page 1928, Overview of Object Storage on
page 3420, and Overview of Block Volume on page 504.

Supported Shapes
The following shapes support cluster networks:
• BM.HPC2.36
• BM.GPU4.8
Typically, to be able to create the multiple HPC or GPU instances that are contained in a cluster network, you must
request a service limit increase.

Supported Regions and Availability Domains


Cluster networks are supported in the following regions:
• Regions in the Oracle Cloud Infrastructure commercial realm:
• Australia East (Sydney)
• Australia Southeast (Melbourne)
• Germany Central (Frankfurt)
• Japan Central (Osaka)
• Japan East (Tokyo)
• Netherlands Northwest (Amsterdam)
• South Korea Central (Seoul)
• UK South (London)
• US East (Ashburn)
• US West (Phoenix)
• US West (San Jose)
• Regions in the Government Cloud realms:
• UK Gov South (London)
• US Gov East (Ashburn)
The availability domain that you create the cluster network in must have cluster network-capable hardware.

Required IAM Policy


To use Oracle Cloud Infrastructure, you must be granted security access in a policy by an administrator. This access
is required whether you're using the Console or the REST API with an SDK, CLI, or other tool. If you get a message
that you don’t have permission or are unauthorized, verify with your administrator what type of access you have and
which compartment you should work in.
For administrators: For a typical policy that gives access to cluster networks, see Let users manage Compute instance
configurations, instance pools, and cluster networks on page 2152.

Tagging Resources
You can apply tags to your resources to help you organize them according to your business needs. You can apply tags
at the time you create a resource, or you can update the resource later with the wanted tags. For general information
about applying tags, see Resource Tags on page 213.

Prerequisites
Create an instance configuration for the instance pool that is managed by the cluster network. To do this:
1. Create an instance with the following settings:
• Image or operating system: Click Change Image, and then click Oracle Images. Select the Oracle HPC
cluster networking image.
• Shape: Click Change Shape. Select Bare Metal Machine. Then, select either the BM.HPC2.36 shape or the
BM.GPU4.8 shape.
For more information about these shapes, see Compute Shapes on page 659.
2. Create an instance configuration using the instance that you created in the previous step as a template.
Optionally, you can delete the instance after you create the instance configuration.

Using the Console


To create a cluster network
1. Open the navigation menu. Under Core Infrastructure, go to Compute and click Cluster Networks.
2. Click Create Cluster Network.
3. Enter a name for the cluster network. It doesn't have to be unique, and you can change it later. Avoid entering
confidential information.
4. Select the compartment to create the cluster network in.
5. Select the Availability Domain to run the cluster network in. Only the availability domains with cluster network-
capable hardware can be selected.
6. In the Configure networking section, specify the network that you want to use to administer the cluster network.
This network is separate from the closed RDMA network between nodes within the cluster. Enter the following
information:
• Virtual cloud network: The virtual cloud network (VCN) for the cluster network.
• Subnet: The subnet for the cluster network.
7. In the Configure instance pool section, enter the following:
• Instance pool name: A name for the instance pool that is managed by the cluster network. Avoid entering
confidential information.
• Number of instances: The number of instances in the pool.
• Instance configuration: Select the instance configuration to use when creating the instances in the cluster
network's instance pool, as described in the prerequisites.
8. Show Tagging Options: Optionally, you can add tags. If you have permissions to create a resource, you also have
permissions to add free-form tags to that resource. To add a defined tag, you must have permissions to use the tag
namespace. For more information about tagging, see Resource Tags on page 213. If you are not sure if you should
add tags, skip this option (you can add tags later) or ask your administrator.
9. Click Create Cluster Network.
To track the progress of the operation, you can monitor the associated work request.
For cluster networks with 10 or more instances, the cluster network is created if the required number of instances
is available and at least 95% of the instances in the pool launch successfully. For cluster networks with fewer than
10 instances, all instances in the pool must launch successfully. If the cluster network fails to launch, wait a few
minutes, and then try creating it again.
To edit the name of a cluster network
1. Open the navigation menu. Under Core Infrastructure, go to Compute and click Cluster Networks.
2. Click the cluster network that you're interested in.
3. Click Edit Name.
4. Enter a new name. Avoid entering confidential information.
5. Click Save Changes.

To manage tags for a cluster network


1. Open the navigation menu. Under Core Infrastructure, go to Compute and click Cluster Networks.
2. Click the cluster network that you're interested in.
3. Click the Tags tab to view or edit the existing tags. Or click Add Tags to add new ones.
For more information, see Resource Tags on page 213.
To delete a cluster network
Caution:

When you delete a cluster network, all of its resources are permanently
deleted, including associated instances, attached boot volumes, and block
volumes.
1. Open the navigation menu. Under Core Infrastructure, go to Compute and click Cluster Networks.
2. Click the cluster network that you're interested in.
3. Click Terminate, and then confirm when prompted.
To track the progress of the operation, you can monitor the associated work request.

Using the API


For information about using the API and signing requests, see REST APIs on page 4409 and Security Credentials on
page 181. For information about SDKs, see Software Development Kits and Command Line Interface on page 4262.
Use these API operations to work with cluster networks:
• CreateClusterNetwork
• GetClusterNetwork
• ListClusterNetworks
• ListClusterNetworkInstances
• UpdateClusterNetwork
• ChangeClusterNetworkCompartment
• TerminateClusterNetwork
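
If you work from the CLI rather than calling the REST operations directly, the cluster-network command group covers
the same operations. The commands below are a sketch; the group and parameter names are assumptions based on the
usual CLI naming, so confirm them with oci compute-management cluster-network --help.

oci compute-management cluster-network list --compartment-id <compartment_ID>
oci compute-management cluster-network get --cluster-network-id <clusterNetwork_ID>
oci compute-management cluster-network terminate --cluster-network-id <clusterNetwork_ID>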

Dedicated Virtual Machine Hosts


The Oracle Cloud Infrastructure Compute service's dedicated virtual machine host feature gives you the ability to
run Compute virtual machine (VM) instances on dedicated servers that are a single tenant and not shared with other
customers. This feature lets you meet compliance and regulatory requirements for isolation that prevent you from
using shared infrastructure. You can also use this feature to meet node-based or host-based licensing requirements
that require you to license an entire server.

Support and Limitations


When you create a dedicated virtual machine host, you select a shape for the host. For the available shapes and shape
details for dedicated virtual machine hosts, see Dedicated Virtual Machine Host Shapes on page 665. Note that
there is a difference between the number listed for billed OCPUs compared to available OCPUs. This is because four
OCPUs are reserved for virtual machine management.
You are billed for the dedicated virtual machine host as soon as you create it, but you are not billed for any of the
individual VM instances you place on it. You will still be billed for image licensing costs if they apply to the image
you are using for the VM instances.
For instances launched on a dedicated virtual machine host, all of the VM.Standard2 shapes are supported. For details
about these shapes, see VM Shapes on page 663. Most of the Compute service features for VM instances are
supported for instances running on dedicated virtual machine hosts, however the following features are not supported:
• Instance configurations
• Instance pools
• Autoscaling
Reboot migration is also not supported for dedicated virtual machine hosts. In this scenario, you need to manually
migrate the instance. See Moving an Instance with Manual Migration on page 781 for this process.
You can mix VM instances with different shapes on the same dedicated virtual machine host. This might impact
the maximum number of instances you can place on the dedicated virtual machine host. For more information, see
Optimizing Capacity on your Dedicated Virtual Machine Host on page 739.

Managing Dedicated Virtual Machine Hosts


Required IAM Policy
To use Oracle Cloud Infrastructure, you must be granted security access in a policy by an administrator. This access
is required whether you're using the Console or the REST API with an SDK, CLI, or other tool. If you get a message
that you don’t have permission or are unauthorized, verify with your administrator what type of access you have and
which compartment you should work in.
For administrators: The simplest policy to enable users to work with dedicated virtual machine hosts is listed in Let
users manage Compute dedicated virtual machine hosts on page 2154. It gives the specified group access to launch
instances on dedicated virtual machine hosts and manage dedicated virtual machine hosts.
See Let users launch Compute instances on dedicated virtual machine hosts on page 2154 for an example of a policy
that allows users to launch instances on dedicated virtual machine hosts without giving them full administrator access
to dedicated virtual machine hosts.
Creating a Dedicated Virtual Machine Host
You must create a dedicated virtual machine host before you can place any instances on it. When creating the
dedicated virtual machine host, you select an availability domain and fault domain to launch it in. All the VM
instances that you place on the host will subsequently be created in this availability domain and fault domain. You
also select a compartment when you create the dedicated virtual machine host, but you can move the host to a new
compartment later without impacting any of the instances placed on it. You can also create the instances in a different
compartment than the dedicated virtual machine host, or move them to different compartments after they have been
launched.
To create a dedicated virtual machine host using the Console
1. Open the navigation menu. Under Core Infrastructure, go to Compute and click Dedicated Virtual Machine
Hosts.
2. Click Create Dedicated Virtual Machine Host.
3. Enter the following information:
• Compartment: The compartment for the dedicated virtual machine host.
• Name: A user-friendly name or description. Avoid entering confidential information.
• Availability Domain: The availability domain for the dedicated virtual machine host.
• Shape: The shape to use for the dedicated virtual machine host.
4. Optionally, click Show Advanced Options. Then enter the following information:
• Fault Domain: The fault domain for the dedicated virtual machine host.
• Tags: Optionally, you can add tags. If you have permissions to create a resource, you also have permissions to
add free-form tags to that resource. To add a defined tag, you must have permissions to use the tag namespace.
For more information about tagging, see Resource Tags on page 213. If you are not sure if you should add
tags, skip this option (you can add tags later) or ask your administrator.
5. Click Create.
To create a dedicated virtual machine host using the CLI

Open a command prompt and run:

oci compute dedicated-vm-host create --dedicated-vm-host-shape DVH.Standard2.52 \
  --wait-for-state ACTIVE --display-name <display_name> \
  --availability-domain <availability_domain> --compartment-id <compartment_ID>

It can take up to 15 minutes for the dedicated virtual machine host to be fully created. It must be in the ACTIVE state
before you can launch an instance on it.
To query the current state of a dedicated virtual machine host using the CLI, run the following command:

oci compute dedicated-vm-host get --dedicated-vm-host-id <dedicatedVMhost_ID>

Deleting a Dedicated Virtual Machine Host


To delete a dedicated virtual machine host using the Console
1. Open the navigation menu. Under Core Infrastructure, go to Compute and click Dedicated Virtual Machine
Hosts.
2. Click the dedicated virtual machine host that you want to delete.
3. Click Delete, and then confirm when prompted.
If you try to delete a dedicated virtual machine host that still has running instances hosted on it, the delete operation
will fail. You need to ensure that all of the instances hosted on it have been terminated. To check if there are any
instances still running on the dedicated virtual machine host, go to the Details page for the dedicated virtual machine
host, and click Hosted Instances in the Resources section. Perform this step for each compartment in your tenancy
that has instances running on the dedicated virtual machine host. To change the compartment for the Hosted Instances
list, select a different compartment from the Table Scope drop-down list.
To delete a dedicated virtual machine host using the CLI
Open a command prompt and run:

oci compute dedicated-vm-host delete --dedicated-vm-host-id <dedicated_VM_host_ID>

Before you can delete a dedicated virtual machine host, all of the instances running on it must be terminated.
To list the instances running on a dedicated virtual machine host using the CLI, run the following command:

oci compute dedicated-vm-host list --compartment-id <compartment_ID> \
  --dedicated-vm-host-id <dedicatedVMhost_ID>

Run this command for every compartment in your tenancy that has instances running on the dedicated virtual machine
host that you want to delete.
Using the API
For information about using the API and signing requests, see REST APIs on page 4409 and Security Credentials on
page 181. For information about SDKs, see Software Development Kits and Command Line Interface on page 4262.
Use the following operations for working with dedicated virtual machine hosts:
• CreateDedicatedVmHost
• DeleteDedicatedVmHost
• ListDedicatedVmHosts
• ListDedicatedVmHostShapes
• ListDedicatedVmHostInstances
• ListDedicatedVmHostInstanceShapes
• UpdateDedicatedVmHost
• ChangeDedicatedVmHostCompartment
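
Earlier parts of this section show CLI examples for creating, getting, listing, and deleting dedicated virtual
machine hosts. For completeness, the following sketch renames a host with the update operation; the --display-name
parameter name is an assumption, so verify it with oci compute dedicated-vm-host update --help.

oci compute dedicated-vm-host update --dedicated-vm-host-id <dedicatedVMhost_ID> \
  --display-name <new_display_name>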

Instances on Dedicated Virtual Machine Hosts


Placing an Instance on a Dedicated Virtual Machine Host
You place an instance on a dedicated virtual machine host at the time that you create the instance. The steps are
the same as creating a regular instance; you just need to specify that you want to create the instance on a dedicated
virtual machine host when you create the instance. See Creating an Instance on page 700 for the steps to create an
instance. Once you get to the Advanced Options section of the form, use the following steps to place the instance on
a dedicated virtual machine host.
To place an instance on a dedicated virtual machine host using the Console
1. Perform the initial steps to create an instance based on an image and shape type that support placement on a
dedicated virtual machine host, up to the Advanced Options section.
2. Click Show Advanced Options, and then click the Placement tab.
3. Select the Dedicated host option.
4. Select the dedicated virtual machine host that you want to place the instance on.
Note:

Only dedicated virtual machine hosts with sufficient capacity to launch an
instance based on the shape you have specified are displayed in the list.
If you have a dedicated virtual machine host and it does not appear in the
list, you must do one of the following things to place the instance on that
dedicated virtual machine host:
• Terminate instances you no longer need on the dedicated virtual
machine host to make capacity available.
• Choose another smaller shape for the instance you are trying to place
on the dedicated virtual machine host.
• Create a new dedicated virtual machine host to place the instance on.
For more information, see Optimizing Capacity on your Dedicated Virtual
Machine Host on page 739.
5. Click Create.
If you're using the CLI or REST API to create the instance, pass the dedicated virtual machine host OCID in the
optional parameter dedicatedVmHostId when you use the LaunchInstance operation. If you try to launch an
instance with a shape that requires more capacity than what is available on the dedicated virtual machine host you are
trying to place it on, the launch operation will fail. To avoid this, you can use the ListDedicatedVmHosts operation
and pass the shape you want to use when launching the instance in the InstanceShapeNameQueryParam
parameter. This will return all the dedicated virtual machine hosts that you can place the instance on.
The following example demonstrates how to call this operation in the CLI to return all the dedicated virtual machine
hosts with sufficient capacity for you to place an instance launched using the VM.Standard2.16 shape:

oci compute dedicated-vm-host list --compartment-id <compartment_ID> \
  --instance-shape-name VM.Standard2.16
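
To complete the picture, the following sketch launches an instance onto a specific dedicated virtual machine host
from the CLI. The --dedicated-vm-host-id parameter name is an assumption that mirrors the dedicatedVmHostId
attribute mentioned above, and the other values are placeholders; verify the parameters with
oci compute instance launch --help.

oci compute instance launch \
  --compartment-id <compartment_ID> \
  --availability-domain <availability_domain> \
  --shape VM.Standard2.16 \
  --image-id <image_OCID> \
  --subnet-id <subnet_OCID> \
  --dedicated-vm-host-id <dedicatedVMhost_OCID>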

Auditing your Dedicated Virtual Machine Host


To fully meet requirements for some compliance scenarios, you might be required to validate that your instances
are running on a dedicated virtual machine host and not using shared infrastructure. The Oracle Cloud Infrastructure
Audit service provides you with the functionality to do this. Use the steps described in Viewing Audit Log Events on
page 498 to access the log events for the dedicated virtual machine host.
The steps described in the To search log events section walk you through how to retrieve the log events with the data
you need to verify that your instances are running on a dedicated virtual machine host. For this procedure:

• Ensure that you select the dedicated virtual machine host's compartment and not the compartment for the instances
that are hosted on it.
• Use the dedicated virtual machine host's OCID as the search keyword.
After you have retrieved the log events for the dedicated virtual machine host, view the log event lower-level details,
and check the contents of the responsePayload property. This property should contain the OCIDs for the
instances that are running on the dedicated virtual machine host.
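If you prefer to pull the audit records from the CLI rather than the Console, the following sketch lists audit
events for the dedicated virtual machine host's compartment over a time window. The command and parameter names are
assumptions about the Audit CLI; verify them with oci audit event list --help, and adjust the RFC 3339 time stamps
to the period you need.

oci audit event list \
  --compartment-id <dedicatedVMhostCompartment_ID> \
  --start-time 2021-03-01T00:00:00Z \
  --end-time 2021-03-02T00:00:00Z

Search the returned events for the dedicated virtual machine host's OCID, and then inspect the responsePayload of
the matching events for the instance OCIDs.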
Optimizing Capacity on your Dedicated Virtual Machine Host
When you place an instance on a dedicated virtual machine host using the Console, only dedicated virtual machine
hosts with sufficient capacity to launch an instance based on the shape that you have specified are displayed in the
Dedicated Virtual Machine Host drop-down list. If you don't see your dedicated virtual machine host in the list, to
understand why, it can help to understand how instances are launched in this scenario.
When you place instances on a dedicated virtual machine host, Oracle Cloud Infrastructure launches the instances
in a manner to optimize performance. For example, a dedicated virtual machine host created based on the
DVH.Standard2.52 shape has two sockets with 24 cores configured per socket. Instances are placed so that each
instance will only use resources local to a single physical socket. In scenarios where you are creating and terminating
instances with a mix of shapes, this can result in an inefficient distribution of resources, meaning that not all OCPUs
on a dedicated virtual machine host are available to be used. In this scenario, it might appear that a dedicated virtual
machine host has enough OCPUs to launch an additional instance on it, but the instance will fail to launch because of the
distribution of existing instances.
In this example, if you are launching instances using a shape with 16 OCPUs on a dedicated virtual machine host, you
can only launch a maximum of two instances using that shape. You cannot launch a third instance with 16 OCPUs,
even though the remaining number of OCPUs showing for the dedicated virtual machine host is 16. You can launch
additional instances using shapes with a smaller number of OCPUs.
When designing your cloud footprint, we recommend that you plan to always launch the largest instance first.

Connecting to an Instance
You can connect to a running instance by using a Secure Shell (SSH) or Remote Desktop connection. Most UNIX-
style systems include an SSH client by default. Windows 10 and Windows Server 2019 systems should include
the OpenSSH client, which you'll need if you created your instance using the SSH keys generated by Oracle
Cloud Infrastructure. For other Windows versions, you can download a free SSH client called PuTTY from http://
www.putty.org.

Required IAM Policy


To connect to a running instance with SSH, you don't need an IAM policy to grant you access. However, to SSH you
need the public IP address of the instance (see Prerequisites on page 740 below). If there's a policy that lets you
launch an instance, that policy probably also lets you get the instance's IP address. The simplest policy that does both
is listed in Let users launch compute instances on page 2151.
For administrators: Here's a more restrictive policy that lets the specified group get the IP address of existing
instances and use power actions on the instances (e.g., stop, start, etc.), but not launch or terminate instances. The
policy assumes the instances and the cloud network are together in a single compartment (XYZ):

Allow group InstanceUsers to read virtual-network-family in compartment XYZ
Allow group InstanceUsers to use instance-family in compartment XYZ

If you're new to policies, see Getting Started with Policies on page 2143 and Common Policies on page 2150. For
reference material about writing policies for instances, cloud networks, or other Core Services API resources, see
Details for the Core Services on page 2192.

Prerequisites
You'll need the following information to connect to the instance:
• The public IP address of the instance. You can get the address from the Instance Details page in the Console.
Open the navigation menu. Under Core Infrastructure, go to Compute and click Instances. Then, select your
instance. Alternatively, you can use the Core Services API ListVnicAttachments and GetVnic operations, or the
CLI sketch that follows this list.
• The default username for the instance. If you used an Oracle-provided Linux, CentOS, or Windows image to
launch the instance, the username is opc. If you used the Ubuntu image to launch the instance, the username is
ubuntu.
• For Linux instances: The full path to the private key portion of the SSH key pair that you used when you launched
the instance. For more information about key pairs, see Managing Key Pairs on Linux Instances on page 698.
• For Windows instances: If you're connecting to the instance for the first time, you will need the initial password
for the instance. You can get the password from the Instance Details page in the Console.
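
The following CLI sketch retrieves the VNIC information for an instance, which includes the public IP address if
one is assigned. It assumes the list-vnics convenience command; if your CLI version does not have it, use the
ListVnicAttachments and GetVnic operations noted above.

oci compute instance list-vnics --instance-id <instance_OCID>

The public IP address appears in the output for the instance's primary VNIC.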

Connecting to a Linux Instance


You connect to a Linux instance using SSH.
To connect to a Linux instance from a Unix-style system
1. Use the following command to set the file permissions so that only you can read the file:

chmod 400 <private_key_file>

<private_key_file> is the full path and name of the file that contains the private key associated with the instance
you want to access.
2. Use the following SSH command to access the instance.

ssh -i <private_key_file> <username>@<public-ip-address>

<private_key_file> is the full path and name of the file that contains the private key associated with the instance
you want to access.
<username> is the default username for the instance. For Oracle Linux and CentOS images, the default username
is opc. For Ubuntu images, the default username is ubuntu.
<public-ip-address> is your instance IP address that you retrieved from the Console.
To connect to a Linux instance from a Windows system using OpenSSH
If the instance uses a key pair that was generated by Oracle Cloud Infrastructure, use the following procedure.
1. If this is the first time you are using this key pair, you must set the file permissions so that only you can read the
file. Do the following:
a. In Windows Explorer, navigate to the private key file, right-click the file, and then click Properties.
b. On the Security tab, click Advanced.
c. Ensure that the Owner is your user account.
d. Click Disable Inheritance, and then select Convert inherited permissions into explicit permissions on this
object.
e. Select each permission entry that is not your user account and click Remove.
f. Ensure that the access permission for your user account is Full control.
g. Save your changes.

2. To connect to the instance, open Windows PowerShell and run the following command:

ssh -i <private_key_file> <username>@<public-ip-address>

<private_key_file> is the full path and name of the file that contains the private key associated with the instance
you want to access.
<username> is the default username for the instance. For Oracle Linux and CentOS images, the default username
is opc. For Ubuntu images, the default username is ubuntu.
<public-ip-address> is your instance IP address that you retrieved from the Console.
To connect to a Linux instance from a Windows system using PuTTY
SSH private key files generated by Oracle Cloud Infrastructure are not compatible with PuTTY. If you are using a
private key file generated during the instance creation process you need to convert the file to a .ppk file before you
can use it with PuTTY to connect to the instance.
Convert a generated .key private key file:
1. Open PuTTYgen.
2. Click Load, and select the private key generated when you created the instance. The extension for the key file is
.key.
3. Click Save private key.
4. Specify a name for the key. The extension for the new private key is .ppk.
5. Click Save.
Connect to the Linux instance using a .ppk private key file:
If the instance uses a key pair that you created using PuTTY Key Generator, use the following procedure.
1. Open PuTTY.
2. In the Category pane, select Session and enter the following:
• Host Name (or IP address):
<username>@<public-ip-address>
<username> is the default username for the instance. For Oracle Linux and CentOS images, the default
username is opc. For Ubuntu images, the default username is ubuntu.
<public-ip-address> is your instance public IP address that you retrieved from the Console.
• Port: 22
• Connection type: SSH
3. In the Category pane, expand Window, and then select Translation.
4. In the Remote character set drop-down list, select UTF-8. The default locale setting on Linux-based instances is
UTF-8, and this configures PuTTY to use the same locale.
5. In the Category pane, expand Connection, expand SSH, and then click Auth.
6. Click Browse, and then select your .ppk private key file.
7. Click Open to start the session.
If this is your first time connecting to the instance, you might see a message that the server's host key is not cached
in the registry. Click Yes to continue the connection.
Tip:

If the connection fails, you may need to update your PuTTY proxy
configuration.

Connecting to a Windows Instance


You can connect to a Windows instance using a Remote Desktop connection. Most Windows systems include a
Remote Desktop client by default.

To enable Remote Desktop Protocol (RDP) access to the Windows instance, you need to add a stateful ingress
security rule for TCP traffic on destination port 3389 from source 0.0.0.0/0 and any source port. You can implement
this security rule in either a network security group that the Windows instance belongs to, or a security list that is used
by the instance's subnet.
To enable RDP access
1. Open the navigation menu. Under Core Infrastructure, go to Networking and click Virtual Cloud Networks.
2. Choose a compartment you have permission to work in (on the left side of the page). The page updates to display
only the resources in that compartment. If you're not sure which compartment to use, contact an administrator.
3. Click the cloud network that you're interested in.
4. To add the rule to a network security group that the instance belongs to:
a. Under Resources, click Network Security Groups. Then click the network security group that you're
interested in.
b. Click Add Rules.
c. Enter the following values for the rule:
• Stateless: Leave the check box cleared.
• Source Type: CIDR
• Source CIDR: 0.0.0.0/0
• IP Protocol: RDP (TCP/3389)
• Source Port Range: All
• Destination Port Range: 3389
• Description: An optional description of the rule.
d. When done, click Add.
5. Or, to add the rule to a security list that is used by the instance's subnet:
a. Under Resources, click Security Lists. Then click the security list you're interested in.
b. Click Add Ingress Rules.
c. Enter the following values for the rule:
• Stateless: Leave the check box cleared.
• Source Type: CIDR
• Source CIDR: 0.0.0.0/0
• IP Protocol: RDP (TCP/3389)
• Source Port Range: All
• Destination Port Range: 3389
• Description: An optional description of the rule.
d. When done, click Add Ingress Rules.
Connecting to a Windows Instance from a Remote Desktop Client
1. Open the Remote Desktop client.
2. In the Computer field, enter the public IP address of the instance. You can retrieve the public IP address from the
Console.
3. The User name is opc. Depending on the Remote Desktop client you are using, you might have to connect to the
instance before you can enter this credential.
4. Click Connect to start the session.
5. Accept the certificate if you are prompted to do so.
6. If you are connecting to the instance for the first time, enter the initial password that was provided to you by Oracle Cloud Infrastructure when you launched the instance. You will be prompted to change the password as soon as you log in. Your new password must be at least 12 characters long and must comply with Microsoft's password policy.
Otherwise, enter the password that you created. If you are using a custom image, you might need to know the
password for the instance that the image was created from. For details about Windows custom images, see
Creating Windows Custom Images on page 673.
7. Press Enter.

Troubleshooting the SSH Connection


If you're unable to connect to your instance using SSH, follow these troubleshooting steps to identify common
problems.
• Verify your connection: In your terminal window, run nc <public ip> 22.
• If the SSH banner displays: You successfully connected to your instance using SSH. The underlying
problem might be related to permissions. As a next step, verify your credentials. If the credentials you're using
to SSH to the instance are incorrect, the connection fails.
For Linux instances, you need the full path to the private key portion of the SSH key pair that you used when
you launched the instance. For more information about key pairs, see Managing Key Pairs on Linux Instances
on page 698. For Windows instances, if you're connecting to the instance for the first time, you need the
initial password for the instance. You can get the password from the Instance Details page in the Console.
• If the SSH banner does not display: A network issue might be preventing the SSH connection from
succeeding. Review the following suggestions.
• Add a public IP address: If your connection is routed over the internet, your instance must have a public IP
address in order for you to connect to the instance. Without a public IP address, the instance is not reachable.
For more information about how to manage public IPv4 addresses on instances, see Public IP Addresses on page
2901.
• Verify the network security lists: Oracle Cloud Infrastructure provisions each cloud network with a default
set of security lists to permit SSH traffic. If the security list that permits SSH connections is removed, you can't
access your instance. Ensure a security list that opens port 22 is present. You can use the Console to view and
manage your security lists. For more information about security lists, see Security Lists on page 2876.
• Confirm that SSH is running on the instance: The steps for confirming that SSH is running vary depending on
the operating system. Review the documentation for your operating system to find information explaining how to
confirm that SSH is running.
• Capture serial console history: To capture your instance's serial console data history, use the console-
history resource in the CLI. This information can help determine the cause of connectivity problems. For more
information, see console-history and Command Line Interface (CLI) on page 4228.
When using the CLI to capture the instance's serial console data history, include the --length 10000000 option to ensure that the full history is captured; without this option, the data might be truncated (see the CLI sketch after this list).
• Connect to the serial console: Serial console connections allow you to remotely troubleshoot malfunctioning
instances. For more information, see Troubleshooting Instances Using Instance Console Connections on page
819.
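For example, a minimal CLI sketch of capturing and then downloading the console history (the OCIDs and output file name are placeholders):

oci compute console-history capture --instance-id ocid1.instance.oc1.phx.exampleuniqueID
oci compute console-history get-content --instance-console-history-id ocid1.consolehistory.oc1.phx.exampleuniqueID --file console-history.txt --length 10000000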

Adding Users on an Instance


You can add additional users to a Compute instance.
If you created your instance using an Oracle-provided Linux or CentOS image, you can use SSH to access your
instance from a remote host as the opc user. If you created your instance using the Ubuntu image, you can use SSH
to access your instance from a remote host as the ubuntu user. After signing in, you can add users to the instance.
If you created your instance using an Oracle-provided Windows image, you can create new users after you sign in to
the instance through a Remote Desktop client.

Creating Additional Users on a Linux Instance


If you do not want to share your SSH key, you can create additional SSH-enabled users for a Linux instance. At a
high level, you do the following things:
• Generate SSH key pairs for the users offline.
• Add the new users.
• Append a public key to the ~/.ssh/authorized_keys file for each new user.
Tip:
If you re-create an instance from an Oracle-provided image, users and SSH public keys that you added or edited manually (that is, users that weren't defined in the machine image) must be added again.
If you need to edit the ~/.ssh/authorized_keys file of a user on
your instance, start a second SSH session before you make any changes to
the file and ensure that it remains connected while you edit the file. If the
~/.ssh/authorized_keys file becomes corrupted or you inadvertently
make changes that lock you out of the instance, you can use the backup SSH
session to fix or revert the changes. Before closing the backup SSH session,
test all changes you made by logging in with the new or updated SSH key.
The new users then can SSH to the instance using the appropriate private keys.
To create an additional SSH-enabled user:
1. Generate an SSH key pair for the new user.
2. Copy the public key value to a text file for use later in this procedure.
3. Log in to the instance.

4. Become the root user:

sudo su
5. Create the new user:

useradd <new_user>
6. Create a .ssh directory in the new user’s home directory:

mkdir /home/<new_user>/.ssh
7. Copy the SSH public key that you saved to a text file into the /home/<new_user>/.ssh/authorized_keys file:

echo <public_key> > /home/<new_user>/.ssh/authorized_keys


8. Change the owner and group of the /home/<new_user>/.ssh directory to the new user:

chown -R <new_user>:<group> /home/<new_user>/.ssh


9. To enable sudo privileges for the new user, run the visudo command and edit the /etc/sudoers file as
follows:
a. In /etc/sudoers, look for:

%<username> ALL=(ALL) NOPASSWD: ALL


b. Add the following line immediately after the preceding line:

%<group> ALL=(ALL) NOPASSWD: ALL

The new user can now sign in to the instance.
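For reference, the steps above can be run as one sequence. The following is a minimal sketch that assumes a new user named alice in a group named alice; the chmod lines are not part of the steps above, but sshd commonly rejects keys stored in group- or world-writable files, so they are included here:

sudo su
useradd alice
mkdir /home/alice/.ssh
echo "<public_key>" > /home/alice/.ssh/authorized_keys
chmod 700 /home/alice/.ssh
chmod 600 /home/alice/.ssh/authorized_keys
chown -R alice:alice /home/alice/.ssh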

Creating Additional Users on a Windows Instance


1. Log in to the instance using a Remote Desktop client.
2. On the Start menu, click Control Panel.
3. Click User Accounts, and then click User Accounts again.
4. Click Manage User Accounts.
5. Click Manage Another Account.
6. Click Add User Account.
7. Enter a User name and Password.
8. Confirm the password, and then create a Password hint.
9. Click Next.
10. Verify the account, and then click Finish.
The new user can now sign in to the instance.

Displaying the Console for an Instance


You can capture and display the serial console data for an instance. The data includes configuration messages that
occur when the instance boots, such as kernel and BIOS messages, and is useful for checking the status of the
instance or diagnosing problems. Note that the raw console data, including multi-byte characters, is captured.

Required IAM Policy


To use Oracle Cloud Infrastructure, you must be granted security access in a policy by an administrator. This access
is required whether you're using the Console or the REST API with an SDK, CLI, or other tool. If you get a message that you don't have permission or are unauthorized, verify with your administrator what type of access you have and
which compartment you should work in.
For administrators: The policy in Let users launch compute instances on page 2151 includes the ability to manage
console history data. If the specified group doesn't need to launch instances or attach volumes, you could simplify that
policy to include only manage instance-family, and remove the statements involving volume-family and
virtual-network-family.

Using the API


For information about using the API and signing requests, see REST APIs on page 4409.
Use these API operations to manage the serial console logs:
• CaptureConsoleHistory
• DeleteConsoleHistory
• GetConsoleHistory
• GetConsoleHistoryContent
• ListConsoleHistories

Managing Plugins with Oracle Cloud Agent


Oracle Cloud Agent is a lightweight process that manages plugins running on compute instances. Plugins collect
performance metrics, install OS updates, and perform other instance management tasks.
To use plugins on an instance, the Oracle Cloud Agent software must be installed on the instance, the plugins must be
enabled, and the plugins must be running. You might need to perform additional configuration tasks before you can
use certain plugins.

Supported Images
Oracle Cloud Agent: Oracle Cloud Agent is supported on current Oracle-provided images and on custom images
that are based on current Oracle-provided images. Oracle Cloud Agent is installed by default on current Oracle-
provided images.
If you use an older Oracle-provided image, you must manually install the Oracle Cloud Agent software. Select an
image dated after November 15, 2018 (except Ubuntu, which must be dated after February 28, 2019).
You might have success manually installing Oracle Cloud Agent on other images, though it has not been tested on
other operating systems and there is no guarantee that it will work. Oracle Cloud Agent is not supported on Windows
Server 2008 R2 custom images.
Plugins: Plugins are installed as part of Oracle Cloud Agent. The plugins that are supported for an instance
depend on the version of Oracle Cloud Agent and on the image that you use to create the instance. To
determine which plugins are supported for a particular image, use the Console to create an instance. Or, use the
ListInstanceagentAvailablePlugins API operation, providing the OS name and OS version of the image.

Available Plugins
Each Oracle Cloud Agent plugin provides functionality related to compute instances. This functionality can enable
features that are part of the Compute service, and features that are part of other services.
The following Oracle Cloud Agent plugins are available.

• Compute Instance Monitoring: Emits metrics about the instance's health, capacity, and performance. These metrics are consumed by the Monitoring service. See Enabling Monitoring for Compute Instances on page 790 and Compute Instance Metrics on page 794.
• Compute Instance Run Command: Runs scripts within the instance to remotely configure, manage, and troubleshoot the instance. See Running Commands on an Instance on page 758.
• Custom Logs Monitoring: Ingests custom logs into the Logging service. See Custom Logs on page 2647.
• OS Management Service Agent: Manages updates and patches for the operating system environment on the instance. See OS Management.

Required IAM Policy


To use Oracle Cloud Infrastructure, you must be granted security access in a policy by an administrator. This access
is required whether you're using the Console or the REST API with an SDK, CLI, or other tool. If you get a message
that you don’t have permission or are unauthorized, verify with your administrator what type of access you have and
which compartment you should work in.
For administrators: The policy in Let users launch compute instances on page 2151 includes the ability to enable
and disable individual plugins, as well as start and stop all plugins on an instance. If the specified group doesn't
need to launch instances or attach volumes, you could simplify that policy to include only manage instance-family, and remove the statements involving volume-family and virtual-network-family. In addition,
you must use the following policy to allow users to access the available plugins:

Allow group PluginUsers to read instance-agent-plugins in compartment ABC

If you're new to policies, see Getting Started with Policies on page 2143 and Common Policies on page 2150. For
reference material about writing policies for instances, cloud networks, or other Core Services API resources, see
Details for the Core Services on page 2192.

Installing the Oracle Cloud Agent Software


If you create an instance using a current Oracle-provided image or a custom image that's based on a current Oracle-
provided image, then Oracle Cloud Agent is installed by default. No action is needed.
If you want to manually install the Oracle Cloud Agent software on an instance that uses another supported image,
use the following steps.
To manually install Oracle Cloud Agent on a legacy Linux instance
1. Connect to the instance.
2. To download the Oracle Cloud Agent software, run one of the following scripts.
Oracle Autonomous Linux, Oracle Linux
a. To determine whether the Oracle Cloud Agent software is installed, run the following command:

sudo yum info oracle-cloud-agent

The command returns the Oracle Cloud Agent version that is currently installed.
b. If Oracle Cloud Agent isn't installed, or if the installed version is not the latest version, install the latest version
by running the following command:

sudo yum install -y oracle-cloud-agent

Note:

If you don't have access to the yum repository that has Oracle Cloud
Agent, run one of the following scripts.

Oracle Linux 6.x

#!/bin/sh
cd ~
curl -O https://objectstorage.us-phoenix-1.oraclecloud.com/p/uKbXkuUa6TVKkRdQhxpg9C4bvnaxr0dqatbtuzq2k2V0XZrQw2cKFuz1a_9jRNQc/n/imagegen/b/agents/o/oracle-cloud-agent-1.8.2-3749.el6.x86_64.rpm -v

Oracle Autonomous Linux 7.x, Oracle Linux 7.x

#!/bin/sh
cd ~
curl -O https://objectstorage.us-phoenix-1.oraclecloud.com/p/U_9mEBnaTqVKu_HVI26djsZ8N4QxG0OkDGTA3izpxKTc5OI44TuukvKz9VrlJP8B/n/imagegen/b/agents/o/oracle-cloud-agent-1.8.2-3843.el7.x86_64.rpm -v

Oracle Linux 8.x

#!/bin/sh
cd ~
curl -O https://objectstorage.us-phoenix-1.oraclecloud.com/p/tbPX4bN0SLrp3rIe2DTpJP6F8n5iHFKaSS81LHkEjzAu35tNuhxxfgyzUm0QKCL9/n/imagegen/b/agents/o/oracle-cloud-agent-1.8.2-3843.el8.x86_64.rpm -v

CentOS 7.x

#!/bin/sh
cd ~
curl -O https://objectstorage.us-phoenix-1.oraclecloud.com/p/U_9mEBnaTqVKu_HVI26djsZ8N4QxG0OkDGTA3izpxKTc5OI44TuukvKz9VrlJP8B/n/imagegen/b/agents/o/oracle-cloud-agent-1.8.2-3843.el7.x86_64.rpm -v

CentOS 8.x

#!/bin/sh
cd ~
curl -O https://objectstorage.us-phoenix-1.oraclecloud.com/p/tbPX4bN0SLrp3rIe2DTpJP6F8n5iHFKaSS81LHkEjzAu35tNuhxxfgyzUm0QKCL9/n/imagegen/b/agents/o/oracle-cloud-agent-1.8.2-3843.el8.x86_64.rpm -v

Ubuntu 16.04, Ubuntu 18.04


Note:
To install Oracle Cloud Agent on instances that use Ubuntu images, Snapcraft must be installed on the instance. Install Snapcraft by running the following commands, in sequence:

sudo apt update
sudo apt install snapd
sudo snap install oracle-cloud-agent --classic

This command installs and runs the Oracle Cloud Agent software.
3. To run the Oracle Cloud Agent software on the instance, enter one of the following commands.
Oracle Linux

sudo yum install -y <instance-agent-filename>

CentOS

sudo yum install -y <instance-agent-filename>

Ubuntu
No further action is needed. The command in the previous step installs and runs the software.
To manually install Oracle Cloud Agent on a legacy Windows instance
1. Connect to the instance.
2. Download the Oracle Cloud Agent software from the following URL:
https://objectstorage.us-phoenix-1.oraclecloud.com/p/GvqT-zkjDRzemw0FP740k-uQHzAl130EEyidJbeEw_BezvpzIMGczIffTCpnjm0p/n/imagegen/b/agents/o/OracleCloudAgentSetup_v1.8.0.msi
3. As a user with administrative privileges, enter the following command to run the Oracle Cloud Agent software on
the instance.

msiexec /qb /i <instance-agent-filename>

To install Oracle Cloud Agent using cloud-init when creating an instance using a legacy image
If you want to install Oracle Cloud Agent on an instance that uses an older image as part of the instance launch, you
can provide a cloud-init script (cloudbase-init on Windows instances) when you create the instance.
1. Follow the steps to create an instance, until the advanced options.
2. Click Show Advanced Options.

3. On the Management tab, in the Initialization Script section, select Paste cloud-init script. Then, copy and paste
one of the following scripts, depending on the image.
Oracle Linux

sudo yum install -y oracle-cloud-agent

Note:

If you don't have access to the yum repository that has Oracle Cloud
Agent, copy and paste one of the following scripts.
Oracle Linux 6.x

#!/bin/sh
curl -O https://objectstorage.us-phoenix-1.oraclecloud.com/p/uKbXkuUa6TVKkRdQhxpg9C4bvnaxr0dqatbtuzq2k2V0XZrQw2cKFuz1a_9jRNQc/n/imagegen/b/agents/o/oracle-cloud-agent-1.8.2-3749.el6.x86_64.rpm
yum install -y ~/oracle-cloud-agent-1.8.2-3749.el6.x86_64.rpm -v

Oracle Linux 7.x

#!/bin/sh
curl -O https://objectstorage.us-phoenix-1.oraclecloud.com/p/U_9mEBnaTqVKu_HVI26djsZ8N4QxG0OkDGTA3izpxKTc5OI44TuukvKz9VrlJP8B/n/imagegen/b/agents/o/oracle-cloud-agent-1.8.2-3843.el7.x86_64.rpm -v
yum install -y ~/oracle-cloud-agent-1.8.2-3843.el7.x86_64.rpm -v

Oracle Linux 8.x

#!/bin/sh
curl -O https://objectstorage.us-phoenix-1.oraclecloud.com/p/tbPX4bN0SLrp3rIe2DTpJP6F8n5iHFKaSS81LHkEjzAu35tNuhxxfgyzUm0QKCL9/n/imagegen/b/agents/o/oracle-cloud-agent-1.8.2-3843.el8.x86_64.rpm -v
yum install -y ~/oracle-cloud-agent-1.8.2-3843.el8.x86_64.rpm -v

CentOS 7.x

#!/bin/sh
curl -O https://objectstorage.us-phoenix-1.oraclecloud.com/p/U_9mEBnaTqVKu_HVI26djsZ8N4QxG0OkDGTA3izpxKTc5OI44TuukvKz9VrlJP8B/n/imagegen/b/agents/o/oracle-cloud-agent-1.8.2-3843.el7.x86_64.rpm -v
yum install -y ~/oracle-cloud-agent-1.8.2-3843.el7.x86_64.rpm -v

CentOS 8.x

#!/bin/sh
curl -O https://objectstorage.us-phoenix-1.oraclecloud.com/p/tbPX4bN0SLrp3rIe2DTpJP6F8n5iHFKaSS81LHkEjzAu35tNuhxxfgyzUm0QKCL9/n/imagegen/b/agents/o/oracle-cloud-agent-1.8.2-3843.el8.x86_64.rpm -v
yum install -y ~/oracle-cloud-agent-1.8.2-3843.el8.x86_64.rpm -v

Ubuntu 16.04, Ubuntu 18.04


Note:
To install Oracle Cloud Agent on instances that use Ubuntu images, Snapcraft must be installed on the instance. Install Snapcraft by running the following commands, in sequence:

sudo apt update
sudo apt install snapd
sudo snap install oracle-cloud-agent --classic

Windows Server 2012 R2, Windows Server 2016, and Windows Server 2019
Note:
For legacy versions of Windows images, ensure that cloudbase-init is supported. See WinRM and cloudbase-init on Windows images.

#ps1_sysnative
cd \Users\opc\Desktop
Start-BitsTransfer -Source "https://objectstorage.us-phoenix-1.oraclecloud.com/p/GvqT-zkjDRzemw0FP740k-uQHzAl130EEyidJbeEw_BezvpzIMGczIffTCpnjm0p/n/imagegen/b/agents/o/OracleCloudAgentSetup_v1.8.0.msi" -Destination "c:\Users\opc\Desktop\OracleCloudAgentSetup.msi"
msiexec /i "c:\Users\opc\Desktop\OracleCloudAgentSetup.msi" /quiet /L*V "c:\Users\opc\Desktop\OracleCloudAgentSetup.log"
4. Click Create.

Managing Plugins Using the Console


To see which plugins are enabled for an instance
1. Open the navigation menu. Under Core Infrastructure, go to Compute and click Instances.
2. Click the instance that you're interested in.
3. Click the Oracle Cloud Agent tab.
The list of plugins is displayed. Enabled plugins can have the following statuses:
• RUNNING: The plugin is running.
• STOPPED: The plugin is stopped.
• NOT_SUPPORTED: The plugin is not supported on this platform.
• INVALID: The plugin status is not recognizable by the service.
To enable or disable a plugin
1. Open the navigation menu. Under Core Infrastructure, go to Compute and click Instances.
2. Click the instance that you're interested in.
3. Click the Oracle Cloud Agent tab.

4. Toggle the Enabled or Disabled switch for the plugin.


Caution:
Functionality that depends on the plugin, such as monitoring, autoscaling, or OS management, will not work when the plugin is disabled.
It takes up to 10 minutes for the change to take effect.
5. If you enabled a plugin, if necessary, perform any configuration tasks that are required before you can use the
plugin. For information about how to configure each plugin, see the documentation for each plugin in Available
Plugins on page 746.
To stop all plugins on an instance
You can stop all of the plugins that are running on an instance. Any individual plugins that are enabled on the
instance remain enabled, but the plugin processes stop running. The plugin processes will only start running again
after you restart all plugins.
For example, if you want to troubleshoot plugins, you can stop all plugins and then disable the plugins that you
think might have an error. Reenable the plugins one-by-one, restarting the plugins after you enable each plugin,
to determine which plugin has an issue. For more information about troubleshooting plugins, see Generating a
Diagnostic File for Oracle Cloud Agent on page 757.
To stop all plugins on an instance:
1. Open the navigation menu. Under Core Infrastructure, go to Compute and click Instances.
2. Click the instance that you're interested in.
3. Click the Oracle Cloud Agent tab.
4. Click Stop Plugins.
Caution:
Functionality that depends on plugins, such as monitoring, autoscaling, and OS management, will not work when all plugins are stopped.
5. Click Stop Plugins.
It might take several minutes for all plugins to stop. Oracle Cloud Agent continues to run when plugins are
stopped.
To start all plugins on an instance
1. Open the navigation menu. Under Core Infrastructure, go to Compute and click Instances.
2. Click the instance that you're interested in.
3. Click the Oracle Cloud Agent tab.
4. Click Start Plugins.
It takes up to 10 minutes for the plugins to restart.

Managing Plugins Using the API


For information about using the API and signing requests, see REST APIs on page 4409 and Security Credentials on
page 181. For information about SDKs, see Software Development Kits and Command Line Interface on page 4262.
Use these API operations to manage Oracle Cloud Agent plugins:
• In the Core Services API:
• LaunchInstance - enables or disables plugins, or stops all plugins, when you create an instance.
• GetInstance and ListInstances - gets information about which plugins are enabled on an instance (or a list of
instances).
• UpdateInstance - enables or disables individual plugins, and stops or starts all plugins, for an existing instance (a request body sketch follows this list).
• In the Oracle Cloud Agent API:


• ListInstanceagentAvailablePlugins - lists the plugins that are available for all instances. You can filter the
results based on the image that you plan to use to launch an instance.
• ListInstanceAgentPlugins - gets information about the plugins that are available on an existing compute
instance.
• GetInstanceAgentPlugin - gets information about a specific plugin on an existing compute instance.
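As a hedged illustration of the UpdateInstance call mentioned above, the request body can carry an agentConfig object that sets the desired state of individual plugins. The pluginsConfig structure and the plugin name below are assumptions based on the plugin names in Available Plugins; confirm them against the API reference before use:

{
  "agentConfig": {
    "pluginsConfig": [
      {
        "name": "Compute Instance Run Command",
        "desiredState": "ENABLED"
      }
    ]
  }
}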

Updating the Oracle Cloud Agent Software


We recommend always running the latest version of the Oracle Cloud Agent software.
If the instance can access the internet, then no action is needed. Oracle Cloud Agent periodically checks for newer
versions and installs the latest version when an update is available.
If the instance does not have access to the internet, then you must manually update the Oracle Cloud Agent software.
For example, a compute instance cannot access the internet if it does not have a public IP address, internet gateway,
or service gateway. In this situation, Oracle Cloud Agent cannot complete its checks for newer versions.
To see which version of Oracle Cloud Agent is installed
Connect to the instance and then do one of the following things:
• For Oracle Linux and CentOS, run the following command:

sudo yum info oracle-cloud-agent


• For Ubuntu, run the following command:

snap info oracle-cloud-agent


• For Windows, do one of the following things:
• In Control Panel, select Programs and Features and then find the version number provided for "Oracle Cloud
Agent."
• In PowerShell, run the following command:

Get-WmiObject -Class Win32_Product | Where-Object { $_.Name -eq "Oracle Cloud Agent" }

Example output:

IdentifyingNumber : {exampleuniqueidentifer}
Name : Oracle Cloud Agent
Vendor : Oracle Corporation
Version : 0.0.10.0
Caption : Oracle Cloud Agent

To manually update Oracle Cloud Agent on a compute instance


Do one of the following things:
• Temporarily allow the instance to access the internet so that Oracle Cloud Agent can update itself.
• Redo the installation steps, using the latest version.
Oracle Cloud Agent Release Notes
Linux versions

Version 1.8.3 (January 13, 2021), Ubuntu instances only:
• Adds support to enable or disable individual plugins.
• Adds two new metrics for monitoring.
• Fix for updater start in new images.
• Updater fix for signature verification on packages.
• Adds support for reattachable plugins so that Oracle Cloud Agent can be upgraded without stopping plugins.

Version 1.8.2 (January 13, 2021), Compute Instance Monitoring:
• Improve filtering of UNIX disk devices.

Version 1.8.1 (January 13, 2021), OS Management Service Agent:
• Disabled by default.

Version 1.8.0 (January 13, 2021):
• Adds support to enable or disable individual plugins.
• Adds two new metrics for monitoring.
• OS Management Service Agent: Enabled by default.

Version 1.7.1 (December 17, 2020):
• Fix for updater start in new images.
• OS Management Service Agent disabled in US Government Cloud.

Version 1.7.0 (December 7, 2020):
• Updater fix for signature verification on packages.
• Custom Logs Monitoring: Bug fix for signature verification. Add default bucket namespace for non-commercial realms.

Version 1.6.0 (November 6, 2020):
• Adds support for reattachable plugins so that Oracle Cloud Agent can be upgraded without stopping plugins.
• Compute Instance Run Command: Includes support for the run command feature in all regions in the Oracle Cloud Infrastructure commercial realm.
• Custom Logs Monitoring: Enables package signature verification in CentOS.
• OS Management Service Agent: Fixes the plugin to stop its process when it is requested to stop rather than staying up idle. Fixes an upgrade kill cycle bug where OS Management upgrades Oracle Cloud Agent using yum, which then stops Oracle Cloud Agent, which stops the plugin.

Version 1.5.1 (October 27, 2020):
• Includes support for the run command feature.

Version 1.4.1 (October 21, 2020):
• Hotfix for agent termination of orphaned processes.

Version 1.4.0 (October 2, 2020):
• Fixes in updater daemon and plugins to make them more resilient.

Version 1.3.2 (September 9, 2020):
• Fix auto update download directory permissions.
• Minor enhancements to the Compute Instance Monitoring plugin. Enable additional plugins.
• Create grpc sockets in /var/lib/oracle-cloud-agent/tmp.

Version 1.2.0 (August 3, 2020):
• Upgrade the agent to support plugins.

Version 0.0.19 (May 28, 2020):
• Fix updater failing to run on images that mount a filesystem, with the noexec flag set, to /tmp.
• Use instance metadata to generate client side URLs.
• Includes support for the instance metadata service (IMDS) v2.

Version 0.0.18 (May 11, 2020):
• Miscellaneous updates.

Version 0.0.15 (January 15, 2020):
• Migrate from Python 2.7.15 to Python 3.6.9.

Version 0.0.13 (November 4, 2019):
• Fix a bug in handling monitoring service internal server errors.

Version 0.0.11 (September 13, 2019):
• Fix retry strategy for sending metrics and refresh security tokens.

Version 0.0.10 (July 15, 2019):
• Fix for correct handling of forced termination of the oracle-cloud-agent-updater.

Windows versions

Version 1.8.0 (January 13, 2021):
• Adds support to enable or disable individual plugins.
• Adds two new metrics for monitoring.
• OS Management Service Agent: Enabled by default.

Version 1.7.1 (December 17, 2020):
• All plugins disabled in US Government Cloud.

Version 1.7.0 (December 7, 2020):
• Updater fix for signature verification on packages.
• Compute Instance Run Command: Enabled for Windows.
• Custom Logs Monitoring: Add default bucket namespace for non-commercial realms.
• OS Management Service Agent: Clean up leftover OS Management temporary directories due to OS Management being terminated on system reboot.

Version 1.5.0.0 (November 6, 2020):
• Adds support for reattachable plugins so that Oracle Cloud Agent can be upgraded without stopping plugins.
• Custom Logs Monitoring plugin enabled in US Government Cloud realms.

Version 1.4.1.0 (October 2, 2020):
• Fixes in updater daemon and plugins to make them more resilient.

Version 1.3.0.0 (August 7, 2020):
• Minor enhancements to the Compute Instance Monitoring plugin.

Version 1.2.0.0 (June 26, 2020):
• Miscellaneous updates.

Version 1.0.0.0 (April 28, 2020):
• Includes all Microsoft patches as of April 24, 2020.
• Includes a new version of the Oracle Cloud Agent with a plugin for Windows for the OS Management service.
• Includes support for the instance metadata service (IMDS) v2.

Version 0.0.13.0 (January 15, 2020):
• Fixed: Migrate from Python 2.7.15 to Python 3.6.9.

Version 0.0.11.0 (November 5, 2019):
• Fixed: Fix a bug in handling monitoring service internal server errors.

Version 0.0.10.0 (September 13, 2019):
• Fixed: Fix retry strategy for sending metrics and refresh security tokens.
• Fixed: Fix for correct handling of forced termination of the oracle-cloud-agent-update.

Version 0.0.9.0 (June 6, 2019):
• Fixed: Bug fix where agent restarts when telemetry or auth service returns 5xx.

Generating a Diagnostic File for Oracle Cloud Agent


To make it easier for Oracle support to help you troubleshoot issues with the Oracle Cloud Agent software, you
can install the Oracle Cloud Agent diagnostic tool on your compute instances. When you run the diagnostic tool,
it generates a file that contains debugging information and logs for the plugins that are managed by Oracle Cloud
Agent.
To generate a diagnostic file on a Linux instance
1. Connect to the instance.
2. Download the diagnostic tool by running the following command:

curl https://objectstorage.us-phoenix-1.oraclecloud.com/p/nBzj8SEG2UHheMQ7XDKHFWjFOeCuqfnHHe0UdifZC90DgmXihFvJ42xw81qV5Kh7/n/imagegen/b/agents/o/oca-diagnostic-util-linux-01-12-21 > oca-diag-01-12-21
3. Change the permissions on the diagnostic tool to make it an executable:

chmod 744 ./oca-diag-01-12-21

4. Run the diagnostic tool:

./oca-diag-01-12-21

The tool generates a TAR file with a name in the format oca-diag-<date>.<identifier>.tar.gz.
Provide the file when you open your support request.
To generate a diagnostic file on a Windows instance
1. Connect to the instance.
2. Open PowerShell as an administrator. Then, download the diagnostic tool by running the following command:

$wc = New-Object System.Net.WebClient
$url = "https://objectstorage.us-phoenix-1.oraclecloud.com/p/P57bTyJYBYq0U2zSfwW6jChEd70nUY0q0pTtEN9fv_2nXP4OasMoAFaEePftEh_B/n/imagegen/b/agents/o/oca-diagnostic-util-win-11-01-20.ps1"
$output_file = "C:\Users\opc\oca-diag-11-01-20.ps1"
$wc.DownloadFile($url, $output_file)

Note:

If you get an error when you try to download the diagnostic tool, it might
be due to the Transport Layer Security (TLS) version. Run the following
command, and then try downloading the diagnostic tool again.

[Net.ServicePointManager]::SecurityProtocol =
[Net.SecurityProtocolType]::Tls12
3. Change directories to the folder where the diagnostic tool is saved:

cd C:\Users\opc
4. Run the diagnostic tool:

.\oca-diag-11-01-20.ps1

The tool generates a ZIP file and saves it to C:\Users\opc\Desktop\. Provide the file when you open your
support request.

Running Commands on an Instance


You can remotely configure, manage, and troubleshoot compute instances by running scripts within the instance using
the run command feature.
For example, the run command feature can help you automate tasks such as configuring secondary virtual network
interface cards (VNICs), joining instances to an identity provider, troubleshooting SSH connectivity, or responding to
cross-region disaster recovery scenarios.
You can run commands on an instance even when the instance does not have SSH access or open inbound ports.
The run command feature uses the Compute Instance Run Command plugin that is managed by the Oracle Cloud
Agent software.
Caution:

Do not use the run command feature to provide or retrieve passwords, secrets,
or other confidential information in plain text. To securely provide and
retrieve confidential information, use an Object Storage location to store the
script file and response. Use Oracle Cloud Infrastructure Vault to manage
keys and secret credentials.

Supported Images
The run command feature is supported on compute instances that use the following Oracle-provided images:
• Oracle Autonomous Linux
• Oracle Linux
• CentOS
• Windows Server
Custom images that are based on a supported Oracle-provided image also support the run command feature.

Supported Regions
The run command feature is supported in all regions in the Oracle Cloud Infrastructure commercial realm.

Limitations and Considerations


• The maximum size for a script file that you upload directly to an instance in plain text is 4 KB. To provide a larger
file, save the file in an Object Storage location.
• The output of a script when returned as plain text is limited to the last 1 KB. To save a larger response, save the
output to an Object Storage location.
• When you use an Object Storage location to save the script file or response, the instance must have outbound
connectivity such as a Network Address Translation (NAT) gateway, service gateway, or internet gateway. The
instance must also allow egress traffic on port 443 for the Oracle Cloud Agent software, Object Storage, and IAM.
• Two scripts can run at a time by default. To change the default, edit the run command configuration file:

/etc/oracle-cloud-agent/plugins/runcommand/config.yml

Set the following parameters:

logDir: /var/log/oracle-cloud-agent/plugins/runcommand
commandExecutionMaxWorkers: <number-of-parallel-scripts>
• A maximum of five scripts can be in flight at a time. A script is considered to be in flight if it has been received by
the Compute Instance Run Command plugin, but not yet deleted from the queue.
• To perform long-running tasks, you should use the run command feature to schedule a cron job on the instance.
Command orchestration is not supported.
• Each script runs once. If you want a script to run multiple times, use cron to configure a schedule for the script (see the example after this list).
• Scripts that prompt for information are not supported. However, you can use the instance metadata service
(IMDS) to programmatically retrieve information about the instance that the script runs on.
• The exit codes that are returned are standard Linux error codes. An exit code of 0 indicates success.
• If you apply an optional timeout for a script, the default is 1 hour. The maximum is 24 hours.
• The maximum time that a script can run is 1 day.
• To monitor the resources that scripts consume, such as CPU utilization, use metrics.
• Canceling a script is a best-effort attempt. Commands can't be canceled after they have finished running or if they
have expired.
• Script files and responses that are saved in plain text are retained for one month. Script files and responses that are
saved in an Object Storage location are retained until you delete them.
• Do not run a script that causes the Oracle Cloud Agent software or the Compute Instance Run Command plugin to
stop.
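As an illustration of the cron approach mentioned in the list above, a run command script can register a recurring job instead of doing long-running work itself. The schedule, path, and script name below are hypothetical:

#!/bin/bash
# Append a recurring job to the current user's crontab (runs every 30 minutes).
(crontab -l 2>/dev/null; echo "*/30 * * * * /home/opc/scripts/collect-stats.sh") | crontab -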

Running Commands with Administrator Privileges


If a command requires sudo permissions, you must grant sudo permissions to the Compute Instance Run Command
plugin to be able to run the command. The plugin runs as the ocarun user.

You can use cloud-init to configure permissions at instance launch (a cloud-init sketch follows these steps), or connect to an instance after it has launched and configure permissions manually. Do the following:
1. On the instance, create a sudoers configuration file for the Compute Instance Run Command plugin:

vi ./101-oracle-cloud-agent-run-command
2. Allow the ocarun user to run all commands as sudo by adding the following line to the configuration file:

ocarun ALL=(ALL) NOPASSWD:ALL

You can optionally list specific commands. See the Linux man page for sudoers for more information.
3. Validate that the syntax in the configuration file is correct:

visudo -cf ./101-oracle-cloud-agent-run-command

If the syntax is correct, the following message is returned:

./101-oracle-cloud-agent-run-command: parsed OK
4. Add the configuration file to /etc/sudoers.d:

sudo cp ./101-oracle-cloud-agent-run-command /etc/sudoers.d/
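The same configuration can be applied at launch with cloud-init, as mentioned at the start of this section. The following is a minimal cloud-config sketch, not the documented procedure; it writes the sudoers file used in the manual steps above:

#cloud-config
write_files:
  - path: /etc/sudoers.d/101-oracle-cloud-agent-run-command
    permissions: '0440'
    content: |
      ocarun ALL=(ALL) NOPASSWD:ALL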

Required IAM Policy


To use Oracle Cloud Infrastructure, you must be granted security access in a policy by an administrator. This access
is required whether you're using the Console or the REST API with an SDK, CLI, or other tool. If you get a message
that you don’t have permission or are unauthorized, verify with your administrator what type of access you have and
which compartment you should work in.
For administrators: To write policy for the run command feature, do the following:
1. Create a group that includes the users who you want to allow to issue commands, cancel commands, and view
the command output for the instances in a compartment. Then, write the following policy to grant access for the
group:

Allow group RunCommandUsers to manage instance-agent-command-family in compartment ABC
2. Create a dynamic group that includes the instances that you want to allow commands to run on. For example, a
rule inside the dynamic group can state:

any { instance.id = 'ocid1.instance.oc1.phx.<unique_ID_1>', instance.id = 'ocid1.instance.oc1.phx.<unique_ID_2>' }
3. Write the following policy to grant access for the dynamic group:
Note:
If you create an instance and then add it to a dynamic group, it takes up to 30 minutes for the instance to start to poll for commands. If you create the dynamic group first and then create the instance, the instance starts to poll for commands as soon as the instance is created.

Allow dynamic-group RunCommandDynamicGroup to use instance-agent-command-execution-family in compartment ABC where request.instance.id=target.instance.id

4. To allow the dynamic group to access the script file from an Object Storage bucket and save the response to an
Object Storage bucket, write the following policies:

Allow dynamic-group RunCommandDynamicGroup to read objects in compartment ABC where all {target.bucket.name = '<bucket_with_script_file>'}
Allow dynamic-group RunCommandDynamicGroup to manage objects in compartment ABC where all {target.bucket.name = '<bucket_for_command_output>'}

If you're new to policies, see Getting Started with Policies on page 2143 and Common Policies on page 2150. For
reference material about writing policies for instances, cloud networks, or other Core Services API resources, see
Details for the Core Services on page 2192.

Prerequisites
• The Compute Instance Run Command plugin must be enabled on the instance, and plugins must be running. For
more information about how to enable and run plugins, see Managing Plugins with Oracle Cloud Agent on page
746.
• For Oracle-provided images that were released before October 2020, the Oracle Cloud Agent software must be
updated to a version that supports the Compute Instance Run Command plugin (version 1.5.1 or later).
• You have prepared the script that you want to run. On Linux instances, the script runs in a Bash shell. On
Windows instances, the script runs in a batch shell. We recommend that you test the command in a non-
production environment before deploying it on instances that run production workflows.
• To provide the script file from an Object Storage location, upload the file to an Object Storage bucket in the same
region as the target instance. Note the bucket and file name, or the Object Storage URL for the file. To use the
same command across tenancies, create a pre-authenticated request that points to the file.
• To save the command output to an Object Storage location, create a bucket to save it in, in the same region as
the target instance. Note the bucket name or the Object Storage URL for the bucket. You can optionally save the
command output using a pre-authenticated request that points to an Object Storage location.

Using the Console

To create a command to run on an instance


1. Open the navigation menu. Under Core Infrastructure, go to Compute and click Instances.
2. Click the instance that you're interested in.
3. Under Resources, click Run Command.
4. Click Create Command.
5. Enter a name for the command. Avoid entering confidential information.
6. In the Timeout in seconds box, enter the amount of time to give the Compute Instance Run Command plugin to
run the command on the instance before timing out. The timer starts when the plugin starts the command. For no
timeout, enter 0.
7. In the Add script section, upload the script that you want the Compute Instance Run Command plugin to run on
the instance. Select one of the following options:
• Paste script: Paste the command in the box.
• Select a file: Upload the script as a text (.txt) file. Either browse to the file that you want to upload, or drag
and drop the file into the box.
• Import from an Object Storage bucket: Select the bucket that contains the script file. In the Object name
box, enter the file name.
• Import from an Object Storage URL: Enter the Object Storage URL for the script file.

8. In the Output type section, select the location to save the output of the command:
• Output as text: The output is saved as plain text. You can review the output on the Instance Details page.
• Output to an Object Storage bucket: The output is saved to an Object Storage bucket. Select a bucket. In the Object name box, enter a name for the output file. Avoid entering confidential information.
• Output to an Object Storage URL: The output is saved to an Object Storage URL. Enter the URL.
9. Click Create Command.
To view the output of a command
If the command output was saved to an Object Storage location, either download the response object from the bucket
where it was saved or navigate to the Object Storage pre-authenticated request URL.
If the command output was saved as a plain text file, do the following:
1. Open the navigation menu. Under Core Infrastructure, go to Compute and click Instances.
2. Click the instance that you're interested in.
3. Under Resources, click Run Command.
4. Find the command in the list, click the Actions icon (three dots), and then click View Command Details.
To cancel a command
1. Open the navigation menu. Under Core Infrastructure, go to Compute and click Instances.
2. Click the instance that you're interested in.
3. Under Resources, click Run Command.
4. Find the command in the list, click the Actions icon (three dots), and then click Cancel Command. Confirm when
prompted.

Using the API


For information about using the API and signing requests, see REST APIs on page 4409 and Security Credentials on
page 181. For information about SDKs, see Software Development Kits and Command Line Interface on page 4262.
Use these API operations to work with the run command feature:
• CreateInstanceAgentCommand
• GetInstanceAgentCommand
• GetInstanceAgentCommandExecution
• ListInstanceAgentCommands
• ListInstanceAgentCommandExecutions
• CancelInstanceAgentCommand

Troubleshooting the Compute Instance Run Command Plugin


To troubleshoot the Compute Instance Run Command plugin, you can view the logs that the plugin generates.
Connect to the instance and then use the following:

tail -f /var/log/oracle-cloud-agent/plugins/runcommand/runcommand.log

For easier visibility into the plugin's operations without having to connect to the instance, you can create custom logs
using the Oracle Cloud Infrastructure Logging service.
Log Errors
This section describes how to resolve errors that appear in the log file.

Failure to Poll
If the Compute Instance Run Command plugin is failing to poll for commands, you might see the following error in
the log file:

poll command err: circuitbreaker:[pollCommand] is open, last err:Service error:NotAuthorizedOrNotFound. Authorization failed or requested resource not found. http status code: 404.
This error can occur when the dynamic group policy for the run command feature is not enabled or if the instance was
recently added to the dynamic group. Instances don't belong to tenancy administrator groups by default, so you need
to explicitly set dynamic group permissions for the instance. For instructions explaining how to enable the dynamic
group policy, see Required IAM Policy on page 760.
When you create an instance and then add it to a dynamic group, it takes up to 30 minutes for the instance to start to
poll for commands. If you create the dynamic group first and then create the instance, the instance starts to poll for
commands as soon as the instance is created.
To test the dynamic group policy as soon as you add the instance to a dynamic group, restart the service manually
using one of the following commands:
Oracle Linux 6.x

sudo initctl restart oracle-cloud-agent

Oracle Linux 7.x and Oracle Linux 8.x

sudo systemctl restart oracle-cloud-agent

Windows Server

net stop ocarun
net start ocarun

Getting Instance Metadata


The instance metadata service (IMDS) provides information about a running instance, including a variety of details
about the instance, its attached virtual network interface cards (VNICs), and any custom metadata that you define.
IMDS also provides information to cloud-init that you can use for various system initialization tasks.
You can find some of this information in the Console on the Instance Details page, or you can get all of it by logging
in to the instance and using the metadata service. The service runs on every instance and is an HTTP endpoint
listening on 169.254.169.254. If an instance has multiple VNICs, you must send the request using the primary VNIC.
Important:
To increase the security of metadata requests, we strongly recommend that you update all applications to use the IMDS version 2 endpoint, if supported by the image. Then, disable requests to IMDS version 1.

Upgrading to the Instance Metadata Service v2


The instance metadata service is available in two versions, version 1 and version 2. IMDSv2 offers increased security
compared to v1.
When you disable IMDSv1 and allow requests only to IMDSv2, the following things change:
• All requests must be made to the v2 endpoints (/opc/v2). Requests to the v1 endpoints (/opc/v1 and /
openstack) are rejected with a 404 not found error.
• All requests to the v2 endpoints must include an authorization header. Requests that do not include the
authorization header are rejected.
• Requests that are forwarded using the HTTP headers Forwarded, X-Forwarded-For, or X-Forwarded-
Host are rejected.
To upgrade the instance metadata service on a compute instance, use the following high-level steps:
1. Verify that the instance uses an image that supports IMDSv2.
2. Identify requests to the legacy v1 endpoints.
3. Migrate all applications to support the v2 endpoints.
4. Disable all requests to the legacy v1 endpoints.

Supported Images for IMDSv2


IMDSv2 is supported on the following Oracle-provided platform images:
• Oracle Autonomous Linux 7.x images released in June 2020 or later
• Oracle Linux 8.x, Oracle Linux 7.x, and Oracle Linux 6.x images released in July 2020 or later
Other Oracle-provided platform images, most custom images, and most Marketplace images do not support
IMDSv2. Custom Linux images might support IMDSv2 if cloud-init is updated to version 20.03 or later and Oracle
Cloud Agent is updated to version 0.0.19 or later. Custom Windows images might support IMDSv2 if Oracle Cloud
Agent is updated to version 1.0.0.0 or later; cloudbase-init does not support IMDSv2.

Identifying Requests to the Legacy IMDSv1 Endpoints


To identify the specific IMDS endpoints that requests are being made to, and the agents that are making the requests,
use the InstanceMetadataRequests metric.
To identify which versions of IMDS are enabled for an instance, do either of the following things:
• Using the Console:
1. Open the navigation menu. Under Core Infrastructure, go to Compute and click Instances.
2. Click the instance that you're interested in.
3. In the Instance Details section, next to Instance Metadata Service, note the version numbers.
• Using the API: Use the GetInstance operation or the ListInstances operation. In the response, the
areLegacyImdsEndpointsDisabled attribute in the InstanceOptions object returns false if both
IMDSv1 and IMDSv2 are enabled for the instance. It returns true if IMDSv1 is disabled.
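For example, a GetInstance response for an instance that still allows both versions would include a fragment like the following (only the relevant attribute is shown):

"instanceOptions": {
  "areLegacyImdsEndpointsDisabled": false
}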

Disabling Requests to the Legacy IMDSv1 Endpoints


After you migrate all applications so that they make requests only to the IMDSv2 endpoints, you should disable all
requests to the legacy IMDSv1 endpoints.
Important:

Verify that the instance does not use the IMDSv1 endpoints before you
disable requests to IMDSv1. If the instance still relies on IMDSv1 when you
disable requests to it, you might lose some functionality.
Do either of the following things:
• Using the Console:
1. Open the navigation menu. Under Core Infrastructure, go to Compute and click Instances.
2. Click the instance that you're interested in.
3. In the Instance Details section, next to Instance Metadata Service, click Edit.
4. For Allowed IMDS version, select the Version 2 only option.
5. Click Save Changes.
• Using the API: Use the UpdateInstance operation. In the request body, in the InstanceOptions object, pass the value true for the areLegacyImdsEndpointsDisabled attribute (a request body sketch follows the note below).
Note:
If you disable IMDSv1 on an instance that does not support IMDSv2, you might not be able to connect to the instance when you launch it. To reenable IMDSv1: using the Console, on the Instance Details page, next to Instance Metadata Service, click Edit. Select the Version 1 and version 2 option, save your changes, and then restart the instance. Using the API, use the UpdateInstance operation.
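For reference, a minimal sketch of the UpdateInstance request body that disables the legacy IMDSv1 endpoints (pass false instead to allow both versions again); only the relevant object is shown:

{
  "instanceOptions": {
    "areLegacyImdsEndpointsDisabled": true
  }
}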

Required IAM Policy


No IAM policy is required if you're logged in to the instance and using cURL to get the metadata.
For administrators: Users can also get instance metadata through the Compute API (for example, with GetInstance).
The policy in Let users launch compute instances on page 2151 covers that ability. If the specified group doesn't
need to launch instances or attach volumes, you could simplify that policy to include only manage instance-family, and remove the statements involving volume-family and virtual-network-family.
To require that legacy IMDSv1 endpoints are disabled on any new instances that are created, use the following policy:

Allow group InstanceLaunchers to manage instances in compartment ABC where request.instanceOptions.areLegacyEndpointsDisabled = 'true'

If you're new to policies, see Getting Started with Policies on page 2143 and Common Policies on page 2150. For
reference material about writing policies for instances, cloud networks, or other Core Services API resources, see
Details for the Core Services on page 2192.

Getting Instance Metadata on Oracle-Provided Images


You can get instance metadata for Oracle-provided images by using cURL on Linux instances. On Windows
instances, you can use cURL (if supported by the Windows version) or an internet browser.
All requests to the instance metadata service v2 must include the following header:

Authorization: Bearer Oracle

Instance metadata accessed using IMDSv2 is available at the following root URLs:
• All of the instance information:

http://169.254.169.254/opc/v2/instance/
• Information about the VNICs that are attached to the instance:

http://169.254.169.254/opc/v2/vnics/

Instance metadata accessed using IMDSv1 is available at the following root URLs. No header is necessary.
• All of the instance information:

http://169.254.169.254/opc/v1/instance/
• Information about the VNICs that are attached to the instance:

http://169.254.169.254/opc/v1/vnics/

The values for specific metadata keys are available as sub-paths below the root URL.
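For example, a request for a single default key (ssh_authorized_keys is one of the keys described later in this section):

curl -H "Authorization: Bearer Oracle" -L http://169.254.169.254/opc/v2/instance/metadata/ssh_authorized_keys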

To get instance metadata for Linux instances


1. Connect to a Linux instance using SSH.
2. Use cURL to issue a GET request to the instance metadata URL that you're interested in. For example:

curl -H "Authorization: Bearer Oracle" -L http://169.254.169.254/opc/v2/


instance/

To get instance metadata for Windows instances


The steps to get metadata on a Windows instance depend on which version of the instance metadata service you're
requesting metadata from.
To get Windows instance metadata using IMDSv2:
1. Connect to a Windows instance by using a Remote Desktop connection.
2. Depending on whether your Windows version includes cURL, do either of the following:
• If your Windows version includes cURL, use cURL to issue a GET request to the instance metadata URL that
you're interested in. For example:

curl -H "Authorization: Bearer Oracle" -L http://169.254.169.254/opc/v2/


instance/
• If your Windows version does not include cURL, you can get instance metadata in your internet browser.
Navigate to the instance metadata URL that you're interested in, and pass a request that includes the
authorization header. See the instructions for your browser for more information about including headers in a
request. You might need to install a third-party browser extension that lets you include request headers.
To get Windows instance metadata using IMDSv1:
1. Connect to a Windows instance by using a Remote Desktop connection.
2. Open an internet browser and then navigate to the instance metadata URL that you're interested in.

Metadata Keys
The instance metadata includes default metadata keys that are defined by Compute and cannot be edited, as well as
custom metadata keys that you create.
Some metadata entries are directories that contain additional metadata keys. In the following tables, entries with a
trailing slash indicate a directory. For example, regionInfo/ is a directory that contains other metadata keys.

Metadata Keys for an Instance


The following metadata is available about an instance. The paths are relative to http://169.254.169.254/opc/v2/instance/.

Metadata Entry Description


availabilityDomain The availability domain the instance is running in.
This name includes the tenancy-specific prefix for the
availability domain name.
Example: Uocm:PHX-AD-1

faultDomain The name of the fault domain the instance is running in.
Example: FAULT-DOMAIN-1

compartmentId The OCID of the compartment that contains the instance.

displayName The user-friendly name of the instance.


hostname The hostname of the instance.
id The OCID of the instance.
image The OCID of the image used to boot the instance.

Oracle Cloud Infrastructure User Guide 766


Compute

Metadata Entry Description


metadata/ A directory containing any custom metadata that you
provide for the instance.
To query the metadata for a specific custom metadata
key, use metadata/<key-name>, where <key-
name> is the name of the key that you defined when
creating the instance.

metadata/ssh_authorized_keys For Linux instances, the public SSH key that was
provided when creating the instance.
metadata/user_data User data to be used by cloud-init or cloudbase-init to
run custom scripts or provide custom configuration.
region The region that contains the availability domain the
instance is running in.
For the us-phoenix-1 and us-ashburn-1 regions, phx and
iad are returned, respectively. For all other regions, the
full region identifier is returned.
Examples: phx, eu-frankfurt-1

canonicalRegionName The region identifier for the region that contains the
availability domain the instance is running in.
Example: us-phoenix-1

ociAdName The availability domain the instance is running in.
This name is used internally and corresponds to the data
center label.
Example: phx-ad-1

regionInfo/ A directory containing information about the region
that contains the availability domain the instance is
running in.

regionInfo/realmKey The key for the realm that the region is in.
Example: oc1

regionInfo/realmDomainComponent The domain for the realm.
Example: oraclecloud.com

regionInfo/regionKey The 3-letter key for the region.
Example: PHX

regionInfo/regionIdentifier The region identifier.
Example: us-phoenix-1

shape The shape of the instance. The shape determines the
number of CPUs and the amount of memory allocated to
the instance. You can enumerate all available shapes by
calling the ListShapes operation.

state The current lifecycle state of the instance. For a list of
allowed values, see Instance.
Example: Running

timeCreated The date and time the instance was created, in the format
defined by RFC3339.
agentConfig/ A directory containing information about the Oracle
Cloud Agent software running on the instance.
agentConfig/monitoringDisabled A Boolean value indicating whether the Oracle Cloud
Agent software can gather performance metrics and
monitor the instance.
agentConfig/managementDisabled A Boolean value indicating whether the Oracle Cloud
Agent software can run all the available management
plugins.
freeformTags/ A directory containing any free-form tags that are added
to the instance.
definedTags/ A directory containing any defined tags that are added to
the instance.

Here's an example response that shows all of the information for an instance:

{
  "availabilityDomain" : "EMIr:PHX-AD-1",
  "faultDomain" : "FAULT-DOMAIN-3",
  "compartmentId" : "ocid1.tenancy.oc1..exampleuniqueID",
  "displayName" : "my-example-instance",
  "hostname" : "my-hostname",
  "id" : "ocid1.instance.oc1.phx.exampleuniqueID",
  "image" : "ocid1.image.oc1.phx.exampleuniqueID",
  "metadata" : {
    "ssh_authorized_keys" : "example-ssh-key"
  },
  "region" : "phx",
  "canonicalRegionName" : "us-phoenix-1",
  "ociAdName" : "phx-ad-1",
  "regionInfo" : {
    "realmKey" : "oc1",
    "realmDomainComponent" : "oraclecloud.com",
    "regionKey" : "PHX",
    "regionIdentifier" : "us-phoenix-1"
  },
  "shape" : "VM.Standard.E3.Flex",
  "state" : "Running",
  "timeCreated" : 1600381928581,
  "agentConfig" : {
    "monitoringDisabled" : false,
    "managementDisabled" : false
  },
  "freeformTags": {
    "Department": "Finance"
  },
  "definedTags": {
    "Operations": {
      "CostCenter": "42"
    }
  }
}

Metadata Keys for Attached VNICs


The following metadata is available about the VNICs that are attached to the instance. The paths are relative to
http://169.254.169.254/opc/v2/vnics/.

Metadata Entry Description


vnicId The OCID of the VNIC.
privateIp The private IP address of the primary privateIp
object on the VNIC. The address is within the CIDR of
the VNIC's subnet.
vlanTag The Oracle-assigned VLAN tag of the attached VNIC.
If the VNIC belongs to a VLAN as part of the Oracle
Cloud VMware Solution, the vlanTag value is instead
the value of the vlanTag attribute for the VLAN. See
Vlan.

macAddr The MAC address of the VNIC.
If the VNIC belongs to a VLAN as part of the Oracle
Cloud VMware Solution, the MAC address is learned.
If the VNIC belongs to a subnet, the MAC address is a
static, Oracle-provided value.

virtualRouterIp The IP address of the virtual router.


subnetCidrBlock The subnet's CIDR block.
nicIndex Which physical network interface card (NIC) the VNIC
uses. Certain bare metal instance shapes have two active
physical NICs (0 and 1). If you add a secondary VNIC
to one of these instances, you can specify which NIC
the VNIC will use. For more information, see Virtual
Network Interface Cards (VNICs) on page 2881.

Here's an example response that shows the VNICs that are attached to an instance:

[ {
  "vnicId" : "ocid1.vnic.oc1.phx.exampleuniqueID",
  "privateIp" : "10.0.3.6",
  "vlanTag" : 11,
  "macAddr" : "00:00:00:00:00:01",
  "virtualRouterIp" : "10.0.3.1",
  "subnetCidrBlock" : "10.0.3.0/24",
  "nicIndex" : 0
}, {
  "vnicId" : "ocid1.vnic.oc1.phx.exampleuniqueID",
  "privateIp" : "10.0.4.3",
  "vlanTag" : 12,
  "macAddr" : "00:00:00:00:00:02",
  "virtualRouterIp" : "10.0.4.1",
  "subnetCidrBlock" : "10.0.4.0/24",
  "nicIndex" : 0
} ]
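You can retrieve the same VNIC metadata from within the instance by using the IMDSv2 request pattern shown earlier, for example:

curl -H "Authorization: Bearer Oracle" -L http://169.254.169.254/opc/v2/vnics/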

Updating Instance Metadata


You can add and update custom metadata for a compute instance using the Command Line Interface (CLI) on page
4228 or REST APIs on page 4409.
When you create an instance using the LaunchInstance operation, you can specify custom metadata for the instance in
the LaunchInstanceDetails datatype's metadata or extendedMetadata attributes.
To update an instance's metadata, use the UpdateInstance operation, specifying the custom metadata in the
UpdateInstanceDetails datatype's metadata or extendedMetadata attributes.
The metadata attribute supports key/value string pairs. The extendedMetadata attribute supports nested JSON
objects. The combined size of these two attributes can be a maximum of 32,000 bytes.
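As an illustration, the following CLI sketch adds nested custom metadata through the extendedMetadata attribute. Treat the flag behavior as something to verify with oci compute instance update --help for your CLI version; the OCID and key names are placeholders:

# Sketch: replaces the instance's extendedMetadata contents with this JSON object.
# The CLI may prompt for confirmation before overwriting existing metadata; review
# the prompt (or check --help for a flag that skips it) before proceeding.
oci compute instance update \
  --instance-id ocid1.instance.oc1.phx.exampleuniqueID \
  --extended-metadata '{"databaseConfig": {"role": "primary", "port": "1521"}}'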

Required IAM Policy


To use Oracle Cloud Infrastructure, you must be granted security access in a policy by an administrator. This access
is required whether you're using the Console or the REST API with an SDK, CLI, or other tool. If you get a message
that you don’t have permission or are unauthorized, verify with your administrator what type of access you have and
which compartment you should work in.
For administrators: The policy in Let users launch compute instances on page 2151 includes the ability to update
instance metadata. If the specified group doesn't need to launch instances or attach volumes, you could simplify that
policy to include only manage instance-family, and remove the statements involving volume-family and
virtual-network-family.
If you're new to policies, see Getting Started with Policies on page 2143 and Common Policies on page 2150. For
reference material about writing policies for instances, cloud networks, or other Core Services API resources, see
Details for the Core Services on page 2192.

Using the API


When you use the UpdateInstance operation, the instance's metadata will be the combination of the values
specified in the UpdateInstanceDetails datatype's metadata or extendedMetadata attributes. Any set of
key/value pairs specified for these attributes in the UpdateInstance operation will replace the existing values
for these attributes, so you need to include all the metadata values for the instance in each call, not just the ones
you want to add. If you leave the attribute empty when calling UpdateInstance, the existing metadata values
in that attribute will be used. You cannot specify a value for the same metadata key twice, because duplicate keys
cause the UpdateInstance operation to fail.
To understand this, consider the example scenario where you created an instance using the LaunchInstance
operation and specified the following key/value pair for the metadata attribute:

"myCustomMetadataKey" : "myCustomMetadataValue"

If you then call the UpdateInstance operation and add new metadata by specifying additional key/value pairs in
the extendedMetadata attribute, while leaving the metadata attribute empty, do not include the
myCustomMetadataKey key/value pair in the extendedMetadata attribute; because that key already exists, the
operation fails. If you do specify values for the metadata attribute, you must include the myCustomMetadataKey
key/value pair to keep it in the instance's metadata. In this case, you can specify it in either of the attributes.
Two reserved keys, user_data and ssh_authorized_keys, can only be set for an instance at launch time; they
cannot be updated later. If you use the metadata attribute to add or update metadata for an instance, ensure that you
include the values specified at launch time for both of these keys; otherwise, the UpdateInstance operation fails.

Best Practices for Updating an Instance's Metadata


When using the UpdateInstance operation, we recommend the following:

• Use the GetInstance operation to retrieve the existing custom metadata for the instance to ensure that you include
the values you want to maintain in the appropriate attributes when you call UpdateInstance. The metadata
values are returned in the metadata and extendedMetadata attributes for the Instance . For a code example
demonstrating this, see the UpdateInstanceExample in the SDK for Java on page 4262.
• Unless you are updating custom metadata that was added using the metadata attribute, use the
extendedMetadata attribute to add custom metadata. Otherwise you need to include the launch time
values for the user_data and ssh_authorized_keys reserved keys. If you use the metadata attribute
to add values and you leave out the values for these reserved keys or specify different values for them, the
UpdateInstance call will fail.
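A sketch of that flow with the CLI, under the assumption that the CLI renders the attribute as extended-metadata in its output (verify against your CLI version); OCIDs and keys are placeholders:

# 1. Retrieve the existing custom metadata so it can be preserved.
oci compute instance get \
  --instance-id ocid1.instance.oc1.phx.exampleuniqueID \
  --query 'data."extended-metadata"'

# 2. Update, passing back the existing key/value pairs plus the new ones.
oci compute instance update \
  --instance-id ocid1.instance.oc1.phx.exampleuniqueID \
  --extended-metadata '{"existingKey": "existingValue", "newKey": "newValue"}'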

Editing an Instance
You can edit the properties of a compute instance without having to rebuild the instance or redeploy your
applications.
On supported instances, you can edit the following properties:
• Name. See Renaming an Instance on page 771.
• The shape that's used for the instance. Changing the shape affects the processor, number of OCPUs, memory, and
other resources that are allocated to the instance. This lets you scale up your Compute resources for increased
performance, or scale down to reduce cost. See Changing the Shape of an Instance on page 772.
• The fault domain where the instance is placed. You can distribute your instances across fault domains to protect
against unexpected hardware failures or maintenance outages. See Editing the Fault Domain for an Instance on
page 774.
• The launch options for the instance, which include the following properties:
• The networking type for the virtual network interface card (VNIC). The networking interface handles
functions such as disk input/output and network communication. Choose between paravirtualized and
hardware-assisted (SR-IOV) networking. Paravirtualized networking provides more management flexibility.
SR-IOV networking offers better performance.
• The boot volume attachment type. Choose between paravirtualized and iSCSI attachment types.
See Editing the Launch Options for an Instance on page 775.
• Whether in-transit encryption is used for the boot volume or block volume. See Enabling In-Transit Encryption
Between an Instance and Boot Volumes or Block Volumes on page 779.
• Whether a running instance is recovered to the same lifecycle state or stopped after a maintenance event or an
infrastructure failure affects the underlying infrastructure. See Setting Instance Availability During Maintenance
Events on page 780.
When you edit an instance, the instance's OCID remains the same.

Renaming an Instance
You can rename an instance without changing its Oracle Cloud Identifier (OCID).
Required IAM Policy
To use Oracle Cloud Infrastructure, you must be granted security access in a policy by an administrator. This access
is required whether you're using the Console or the REST API with an SDK, CLI, or other tool. If you get a message
that you don’t have permission or are unauthorized, verify with your administrator what type of access you have and
which compartment you should work in.
For administrators: The policy in Let users launch compute instances on page 2151 includes the ability to rename
an instance. If the specified group doesn't need to launch instances or attach volumes, you could simplify that policy
to include only manage instance-family, and remove the statements involving volume-family and
virtual-network-family.
If you're new to policies, see Getting Started with Policies on page 2143 and Common Policies on page 2150. For
reference material about writing policies for instances, cloud networks, or other Core Services API resources, see
Details for the Core Services on page 2192.

Using the Console


1. Open the navigation menu. Under Core Infrastructure, go to Compute and click Instances.
2. Click the instance that you're interested in.
3. Click Edit.
4. Enter a new name. Avoid entering confidential information.
5. Click Save Changes.
Using the API
For information about using the API and signing requests, see REST APIs on page 4409 and Security Credentials on
page 181. For information about SDKs, see Software Development Kits and Command Line Interface on page 4262.
Use this API operation to rename an instance:
• UpdateInstance
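For example, a CLI sketch of the rename (the OCID and the new name are placeholders):

oci compute instance update \
  --instance-id ocid1.instance.oc1.phx.exampleuniqueID \
  --display-name "my-renamed-instance"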

Changing the Shape of an Instance


You can change the shape of a virtual machine (VM) instance without having to rebuild your instances or redeploy
your applications. This lets you scale up your Compute resources for increased performance, or scale down to reduce
cost.
When you change the shape of an instance, it affects the number of OCPUs, amount of memory, network bandwidth,
and maximum number of VNICs for the instance. Optionally, you can select a shape that uses a different processor.
The instance's public and private IP addresses, volume attachments, and VNIC attachments remain the same.
Supported Shapes
The shape series and image of the original shape determine which shapes you can select as a target for the new shape.
You can resize instances that use these shapes:
• VM Standard shapes:
Includes shapes in the VM.Standard.E4, VM.Standard.E3, VM.Standard.E2, VM.Standard2, VM.Standard.B1,
and VM.Standard1 series.
• Linux images: You can change the number of OCPUs and the amount of memory allocated to a flexible
shape. You can also change a Standard shape in one series to a Standard shape in another series. For example,
you can change a fixed shape to a flexible shape.
• Windows images: For flexible shapes, you can change the number of OCPUs and the amount of memory
allocated. You can also change a VM.Standard.E4 shape to a VM.Standard.E3 shape, and vice versa. For
fixed shapes, you can change the shape to a new shape only within the same series. For example, you can
change a VM.Standard2.1 shape to a VM.Standard2.2 shape, but you can't change a VM.Standard2.1 shape to
a VM.Standard1.1 shape.
Important:

• For Windows Server 2019 instances, resize a VM.Standard.E3 shape to
a maximum of 32 OCPUs. See this known issue for more information.
• Instances that use the VM.Standard.E3.Flex shape or the
VM.Standard.E4.Flex shape, and that also use hardware-assisted (SR-
IOV) networking, can be allocated a maximum of 1010 GB of memory.
See this known issue for more information.
• VM.GPU3 series: Can be changed to any shape in the VM.GPU3 series.
These shapes cannot be changed:
• VM.Standard.E2.1.Micro series
• VM.GPU2 series
• VM instances that run on dedicated virtual machine hosts
• Bare metal shapes

Limitations and Considerations


Be aware of the following information:
• The image that's used to launch the instance must be compatible with the new shape. To see which shapes are
compatible, do either of the following things:
• In the Console, on the Instance Details page, click the name of the image. Then, refer to the list of compatible
shapes.
• Using the API, call the ListShapes operation and pass the image OCID as a parameter (see the sketch after this list).
• Some Marketplace images cannot be resized because of licensing constraints. If you want to resize a Microsoft
SQL Server image, contact support.
• You must have sufficient service limits for the new shape. If you don't have service limits, the instance will
remain with the original shape.
• Different shapes are billed at different rates. When you change the shape of an instance, you are billed to the
nearest second of usage for each shape that you use. For more information, see Compute Pricing and Resource
Billing for Stopped Instances on page 786.
• If the instance has secondary VNICs configured, you might need to reconfigure them after the instance is
rebooted. For more information, see Virtual Network Interface Cards (VNICs) on page 2881.
• If the instance is running when you change the shape, it is rebooted as part of the change shape operation. If the
applications that run on the instance take a long time to shut down, they could be improperly stopped, resulting in
data corruption. To avoid this, shut down the instance using the commands available in the OS before you change
the shape.
• When you change the shape from one hardware series to a different series, some hardware details such as the
network interface name might change. This might cause problems for some guest OSs, particularly if the OS has
been customized. If the OS fails to boot after you change the shape, then you should change the instance back to
the original shape.
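As referenced in the list above, the API compatibility check is a ListShapes call filtered by image. A CLI sketch, assuming the shape list command accepts an --image-id parameter (verify with oci compute shape list --help); OCIDs are placeholders:

oci compute shape list \
  --compartment-id ocid1.compartment.oc1..exampleuniqueID \
  --image-id ocid1.image.oc1.phx.exampleuniqueID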
Required IAM Policy
To use Oracle Cloud Infrastructure, you must be granted security access in a policy by an administrator. This access
is required whether you're using the Console or the REST API with an SDK, CLI, or other tool. If you get a message
that you don’t have permission or are unauthorized, verify with your administrator what type of access you have and
which compartment you should work in.
For administrators: The policy in Let users launch compute instances on page 2151 includes the ability to change
the shape of an instance. If the specified group doesn't need to launch instances or attach volumes, you could simplify
that policy to include only manage instance-family, and remove the statements involving volume-family
and virtual-network-family.
If you're new to policies, see Getting Started with Policies on page 2143 and Common Policies on page 2150. For
reference material about writing policies for instances, cloud networks, or other Core Services API resources, see
Details for the Core Services on page 2192.
Prerequisites
• If you want to change the instance to a smaller shape that supports fewer VNICs, detach the extra VNICs.
Using the Console
1. Open the navigation menu. Under Core Infrastructure, go to Compute and click Instances.
2. Click the instance that you're interested in.
3. Click Edit.

4. Click Edit Shape. Then, select the shape that you want to scale to. The following options are available:
• AMD Rome: The flexible shapes, which use current generation AMD processors and have a customizable
number of OCPUs and amount of memory.
• For Number of OCPUs, choose the number of OCPUs that you want to allocate to this instance by
dragging the slider. You can select from 1 to 64 OCPUs.
• For Amount of memory (GB), choose the amount of memory that you want to allocate to this instance by
dragging the slider. The amount of memory allowed is based on the number of OCPUs selected. For each
OCPU, you can select up to 64 GB of memory, with a maximum of 1024 GB total. The minimum amount
of memory allowed is either 1 GB or a value matching the number of OCPUs, whichever is greater. For
example, if you select 25 OCPUs, the minimum amount of memory allowed is 25 GB.
The other resources scale proportionately.
• Intel Skylake: Standard shapes that use the current generation Intel processor and have a fixed number of
OCPUs.
• Specialty and Previous Generation: Standard shapes with previous generation Intel and AMD processors.
5. Click Change Shape.
If the instance is running, it is rebooted.
Using the API
For information about using the API and signing requests, see REST APIs on page 4409 and Security Credentials on
page 181. For information about SDKs, see Software Development Kits and Command Line Interface on page 4262.
Use this API operation to change the shape of an instance:
• UpdateInstance
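For example, a CLI sketch that resizes an instance to a flexible shape. The --shape and --shape-config parameters and the shape-config keys are assumptions to verify against your CLI and API versions; the OCID is a placeholder. If the instance is running, it is rebooted as part of the change:

oci compute instance update \
  --instance-id ocid1.instance.oc1.phx.exampleuniqueID \
  --shape VM.Standard.E3.Flex \
  --shape-config '{"ocpus": 2, "memoryInGBs": 32}'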

Editing the Fault Domain for an Instance


You can change the fault domain where a virtual machine (VM) instance is placed.
A fault domain is a grouping of hardware and infrastructure that is distinct from other fault domains in the same
availability domain. By properly leveraging fault domains you can increase the availability of applications running on
Oracle Cloud Infrastructure. For more information and best practices, see Fault Domains on page 599.
Supported Shapes
You can change the fault domain for instances that use these shapes:
• VM.Standard.E4 series
• VM.Standard.E3 series
• VM.Standard.E2 series
• VM.Standard2 series
• VM.Standard.B1 series
• VM.Standard1 series
• VM.DenseIO1 series
• VM.DenseIO2 series
• VM.GPU3 series
These shapes cannot be edited:
• VM.Standard.E2.1.Micro series
• VM.GPU2 series
• VM instances that run on dedicated virtual machine hosts
• Bare metal shapes

Required IAM Policy


To use Oracle Cloud Infrastructure, you must be granted security access in a policy by an administrator. This access
is required whether you're using the Console or the REST API with an SDK, CLI, or other tool. If you get a message
that you don’t have permission or are unauthorized, verify with your administrator what type of access you have and
which compartment you should work in.
For administrators: The policy in Let users launch compute instances on page 2151 includes the ability to change
the fault domain for an instance. If the specified group doesn't need to launch instances or attach volumes, you could
simplify that policy to include only manage instance-family, and remove the statements involving volume-
family and virtual-network-family.
If you're new to policies, see Getting Started with Policies on page 2143 and Common Policies on page 2150. For
reference material about writing policies for instances, cloud networks, or other Core Services API resources, see
Details for the Core Services on page 2192.
Using the Console
1. Open the navigation menu. Under Core Infrastructure, go to Compute and click Instances.
2. Click the instance that you're interested in.
3. Click Edit.
4. Click Edit Fault Domain. Then, select a new fault domain.
5. Click Save Changes.
If the instance is running, it is rebooted.
Using the API
For information about using the API and signing requests, see REST APIs on page 4409 and Security Credentials on
page 181. For information about SDKs, see Software Development Kits and Command Line Interface on page 4262.
Use this API operation to change the fault domain for an instance:
• UpdateInstance
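For example, a CLI sketch, assuming the update command accepts a --fault-domain parameter (verify with oci compute instance update --help); the OCID is a placeholder:

oci compute instance update \
  --instance-id ocid1.instance.oc1.phx.exampleuniqueID \
  --fault-domain FAULT-DOMAIN-2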

Editing the Launch Options for an Instance


You can tune the compatibility and performance of virtual machine (VM) instances by changing the networking type
or the boot volume attachment type.

Networking Launch Types


The networking interface handles functions such as disk input/output and network communication. The following
networking types are available:
• Paravirtualized networking: For general purpose workloads such as enterprise applications, microservices,
and small databases. Paravirtualized networking also provides increased flexibility to use the same image across
different hardware platforms. Linux images with paravirtualized networking support live migration during
infrastructure maintenance.
• Hardware-assisted (SR-IOV) networking: Single root input/output virtualization. For low-latency workloads
such as video streaming, real-time applications, and large or clustered databases. Hardware-assisted (SR-IOV)
networking uses the VFIO driver framework.
Important:

To use a particular networking type, both the shape and the image must
support that networking type.
Shapes: The following table lists the default and supported networking types for VM shapes.

Shape series Default Networking Type Supported Networking Types

VM.Standard1 SR-IOV Paravirtualized, SR-IOV
VM.Standard2 Paravirtualized Paravirtualized, SR-IOV
VM.Standard.E2 Paravirtualized Paravirtualized only
VM.Standard.E3 SR-IOV Paravirtualized, SR-IOV
VM.Standard.E4 SR-IOV Paravirtualized, SR-IOV
VM.DenseIO1 SR-IOV Paravirtualized, SR-IOV
VM.DenseIO2 Paravirtualized Paravirtualized, SR-IOV
VM.GPU2 SR-IOV Paravirtualized, SR-IOV
VM.GPU3 SR-IOV Paravirtualized, SR-IOV

Images: Paravirtualized networking is supported on these Oracle-provided images:


• Oracle Linux 8: All images.
• Oracle Linux 7, Oracle Linux 6: Images published in March 2019 or later.
• CentOS 8: All images.
• CentOS 7: Images published in July 2019 or later.
• Ubuntu 18.04, Ubuntu 16.04: Images published in March 2019 or later.
• Windows Server 2019: All images.
• Windows Server 2016: Images published in August 2019 or later.
SR-IOV networking is supported on all Oracle-provided images, with the following exceptions: On Windows Server
2019, when launched using a VM.Standard2 shape, SR-IOV networking is not supported. On Windows Server 2012
R2, SR-IOV networking is only supported on the VM.Standard2 and VM.DenseIO2 shapes.

Boot Volume Attachment Types


The following boot volume attachment types are available:
• iSCSI: A TCP/IP-based standard used for communication between a volume and attached instance.
• Paravirtualized: A virtualized attachment available for VMs. This is the default for boot volumes and remote
block storage volumes on Oracle-provided platform images.
Supported Shapes
You can edit the launch options for instances that use these shapes:
• VM.Standard.E4 series
• VM.Standard.E3 series
• VM.Standard.E2 series
• VM.Standard2 series
• VM.Standard.B1 series
• VM.Standard1 series
• VM.DenseIO1 series
• VM.DenseIO2 series
• VM.GPU3 series
These shapes cannot be edited:
• VM.Standard.E2.1.Micro series
• VM.GPU2 series

• VM instances that run on dedicated virtual machine hosts
• Bare metal shapes
Limitations and Considerations
Caution:

Some instances might not function properly if you change the networking
type or the boot volume attachment type. This happens due to shape and
image compatibility and driver support. After the instance reboots and is
running, connect to it. If the connection fails or the OS doesn't behave as
expected, the changes are not supported. Revert the instance to the original
settings.
Before you change the networking type or the boot volume attachment type, you must ensure that paravirtualized
drivers are installed on the image. The steps depend on the image:
Oracle Linux 7.x, CentOS 8.x, CentOS 7.x, Ubuntu 20, Ubuntu 18, Ubuntu 16
Paravirtualized drivers are installed on Oracle-provided platform images.
Windows Server 2019, Windows Server 2016, Windows Server 2012 R2
The Oracle VirtIO Drivers for Microsoft Windows release 1.1.5 must be installed on Oracle-provided platform
images.
1. To determine whether the VirtIO drivers are installed, connect to the instance using a Remote Desktop connection.
Then, do either of the following things:
• Open Control Panel > Programs and Features. If Oracle Windows VirtIO Drivers is installed, note the
version number.
• In Registry Editor, go to HKEY_LOCAL_MACHINE\Software\Wow6432Node\Oracle Corporation\Oracle
Windows VirtIO Drivers. If the VirtIO drivers are installed, note the version number.
2. If the drivers are not installed, or a version other than 1.1.5 is installed, download Oracle VirtIO Drivers release
1.1.5:
a. Sign in to the Oracle Software Delivery Cloud site.
b. In the All Categories list, select Release.
c. Type Oracle Linux 7.7 in the search box and click Search.
d. Add REL: Oracle Linux 7.7.x to your cart, and then click Continue.
e. In the Platforms/Languages list, select x86 64 bit. Click Continue.
f. Accept the license agreement and then click Continue.
g. Select the check box next to Oracle VirtIO Drivers Version for Microsoft Windows 1.1.5. Clear the other
check boxes.
h. Click Download and then follow the prompts.
3. Install the drivers and then restart the instance. For steps, see Installing the Oracle VirtIO Drivers for Microsoft
Windows on Existing Microsoft Windows Guests.
Oracle Linux 6.x
For Oracle-provided platform images, connect to the instance using a Secure Shell (SSH) connection. Then, run the
following commands:

sudo bash
cd /boot/efi
echo "fs0:\EFI\redhat\grub.efi"> startup.nsh
chmod 500startup.nsh
sync

Images that are not Oracle-provided platform images

To verify that your system has paravirtualized drivers installed, run the following command:

lsinitrd | grep virtio

• If paravirtualized drivers are installed, you will see multiple files listed with paths similar to lib/
modules/4.4.21-69-default/kernel/drivers/block/virtio_blk.ko.
• If no files are listed, your system either does not support paravirtualized drivers, or does not have paravirtualized
drivers installed. Refer to the documentation for your operating system for more information.
Required IAM Policy
To use Oracle Cloud Infrastructure, you must be granted security access in a policy by an administrator. This access
is required whether you're using the Console or the REST API with an SDK, CLI, or other tool. If you get a message
that you don’t have permission or are unauthorized, verify with your administrator what type of access you have and
which compartment you should work in.
For administrators: The policy in Let users launch compute instances on page 2151 includes the ability to edit the
launch options for an instance. If the specified group doesn't need to launch instances or attach volumes, you could
simplify that policy to include only manage instance-family, and remove the statements involving volume-
family and virtual-network-family.
If you're new to policies, see Getting Started with Policies on page 2143 and Common Policies on page 2150. For
reference material about writing policies for instances, cloud networks, or other Core Services API resources, see
Details for the Core Services on page 2192.
Prerequisites
• Detach (delete) all secondary VNICs and detach all block volumes. The primary VNIC and boot volume should
remain attached.
Using the Console
1. Open the navigation menu. Under Core Infrastructure, go to Compute and click Instances.
2. Click the instance that you're interested in.
3. Click Edit.
4. Click Show Advanced Options.
5. To change the networking type, in the Networking type section, select from the following options:
• Hardware-assisted (SR-IOV) networking: Single root input/output virtualization. For low-latency
workloads such as video streaming, real-time applications, and large or clustered databases.
• Paravirtualized networking: For general purpose workloads such as enterprise applications, microservices,
and small databases. The image must have paravirtualized drivers, as described in Limitations and
Considerations on page 777.
For more information, see Recommended Networking Launch Types on page 701.
6. To change the boot volume attachment type, in the Boot volume attachment type section, select from the
following options:
• iSCSI: A TCP/IP-based standard used for communication between a volume and attached instance.
• Paravirtualized: A virtualized attachment available for VMs. This is the default for boot volumes and remote
block storage volumes on Oracle-provided platform images.
7. Click Save Changes.
If the instance is running, it is rebooted.
8. Connect to the instance after it reboots and is running. If the connection fails or the OS doesn't behave as
expected, the changes are not supported. Revert the instance to the original settings.
9. If necessary, reattach any secondary VNICs and block volumes.
Using the API
For information about using the API and signing requests, see REST APIs on page 4409 and Security Credentials on
page 181. For information about SDKs, see Software Development Kits and Command Line Interface on page 4262.

Use this API operation to edit the launch options for an instance:
• UpdateInstance
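A hypothetical CLI sketch follows. Whether the CLI exposes the UpdateInstance launch options as a --launch-options JSON parameter, and the exact attribute names and enum values, are assumptions to verify against the UpdateInstance API reference and oci compute instance update --help; the OCID is a placeholder:

# Hypothetical: switch both the networking type and the boot volume attachment
# type to paravirtualized. Verify the parameter and value names before use.
oci compute instance update \
  --instance-id ocid1.instance.oc1.phx.exampleuniqueID \
  --launch-options '{"networkType": "PARAVIRTUALIZED", "bootVolumeType": "PARAVIRTUALIZED"}'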

Enabling In-Transit Encryption Between an Instance and Boot Volumes or Block Volumes
After you create a virtual machine (VM) instance, you can enable or disable in-transit encryption between the
instance and its paravirtualized boot volume and block volume attachments.
All boot volume and block volume data at rest is always encrypted by the Oracle Cloud Infrastructure Block Volume
service using the Advanced Encryption Standard (AES) algorithm with 256-bit encryption. For more information, see
Block Volume Encryption on page 508.
Important:

See this known issue for more information about editing the in-transit
encryption settings.
Supported Shapes and Images
You can enable or disable in-transit encryption for existing instances that use these shapes:
• VM.Standard.E4 series
• VM.Standard.E3 series
• VM.Standard.E2 series
• VM.Standard2 series
• VM.Standard.B1 series
• VM.Standard1 series
• VM.DenseIO1 series
• VM.DenseIO2 series
• VM.GPU3 series
In-transit encryption for boot volumes and block volumes is available for Oracle-provided images. It is not supported
in most cases for instances launched from custom images imported for "bring your own image" (BYOI) scenarios. To
confirm support for certain Linux-based custom images, contact support.
These shapes cannot be edited:
• VM.Standard.E2.1.Micro series
• VM.GPU2 series
• VM instances that run on dedicated virtual machine hosts
• Bare metal shapes
Required IAM Policy
To use Oracle Cloud Infrastructure, you must be granted security access in a policy by an administrator. This access
is required whether you're using the Console or the REST API with an SDK, CLI, or other tool. If you get a message
that you don’t have permission or are unauthorized, verify with your administrator what type of access you have and
which compartment you should work in.
For administrators: The policy in Let users launch compute instances on page 2151 includes the ability to change
the shape of an instance. If the specified group doesn't need to launch instances or attach volumes, you could simplify
that policy to include only manage instance-family, and remove the statements involving volume-family
and virtual-network-family.
If you're new to policies, see Getting Started with Policies on page 2143 and Common Policies on page 2150. For
reference material about writing policies for instances, cloud networks, or other Core Services API resources, see
Details for the Core Services on page 2192.

Using the Console


1. Open the navigation menu. Under Core Infrastructure, go to Compute and click Instances.
2. Click the instance that you're interested in.
3. Click Edit.
4. Click Show Advanced Options.
5. Select the Use in-transit encryption check box.
6. Click Save Changes.
If the instance is running, it is rebooted.
Using the API
For information about using the API and signing requests, see REST APIs on page 4409 and Security Credentials on
page 181. For information about SDKs, see Software Development Kits and Command Line Interface on page 4262.
Use this API operation to enable or disable in-transit encryption between an instance and its paravirtualized boot
volume attachments:
• UpdateInstance

Setting Instance Availability During Maintenance Events


When the underlying infrastructure for a virtual machine (VM) instance needs to undergo planned maintenance
or recover from an unexpected failure, Oracle Cloud Infrastructure automatically attempts to recover the instance
by migrating it to healthy hardware. By default, the instance is recovered to the same lifecycle state as before
the maintenance event. If you have an alternate process to recover the instance after a reboot migration, you can
optionally configure the instance to remain stopped after it is migrated to healthy hardware. You can then restart the
instance on your own schedule.
Required IAM Policy
To use Oracle Cloud Infrastructure, you must be granted security access in a policy by an administrator. This access
is required whether you're using the Console or the REST API with an SDK, CLI, or other tool. If you get a message
that you don’t have permission or are unauthorized, verify with your administrator what type of access you have and
which compartment you should work in.
For administrators: The policy in Let users launch compute instances on page 2151 includes the ability to edit
the maintenance recovery action for an instance. If the specified group doesn't need to launch instances or attach
volumes, you could simplify that policy to include only manage instance-family, and remove the statements
involving volume-family and virtual-network-family.
If you're new to policies, see Getting Started with Policies on page 2143 and Common Policies on page 2150. For
reference material about writing policies for instances, cloud networks, or other Core Services API resources, see
Details for the Core Services on page 2192.
Using the Console
1. Open the navigation menu. Under Core Infrastructure, go to Compute and click Instances.
2. Click the instance that you're interested in.
3. Click Edit.
4. Click Show Advanced Options.
5. For the Restore instance lifecycle state after infrastructure maintenance check box, select an option:
• To reboot a running instance after it is recovered, select the check box.
• To recover the instance in the stopped state, clear the check box.
6. Click Save Changes.
Using the API
For information about using the API and signing requests, see REST APIs on page 4409 and Security Credentials on
page 181. For information about SDKs, see Software Development Kits and Command Line Interface on page 4262.

Use this API operation to edit the maintenance recovery action for an instance:
• UpdateInstance
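A CLI sketch, assuming the recovery action is exposed through an --availability-config parameter with a recoveryAction value of RESTORE_INSTANCE or STOP_INSTANCE (verify against the UpdateInstance API reference and your CLI version); the OCID is a placeholder:

# Assumption: keep the instance stopped after it is recovered from a maintenance event.
oci compute instance update \
  --instance-id ocid1.instance.oc1.phx.exampleuniqueID \
  --availability-config '{"recoveryAction": "STOP_INSTANCE"}'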

Moving a Compute Instance to a New Host


This topic covers how to relocate a virtual machine or a bare metal instance by using reboot migration or a manual
process.
Note:

• Dedicated virtual machine hosts: Dedicated virtual machine hosts do
not support reboot migration. To relocate these instances, use the process
described in Moving an Instance with Manual Migration on page 781.
• Oracle Platform Services:
For instances that were created with Oracle Platform Services and located
in the compartment ManagedCompartmentForPaaS, you must use
the interface for the specific Platform Service to reboot the instances.

Live Migration
During an infrastructure maintenance event, where applicable, Oracle Cloud Infrastructure live migrates Standard
VM instances from the physical VM host that needs maintenance to a healthy VM host without disrupting running
instances. If a VM cannot be live migrated, there might be a short downtime while the instance is reboot migrated.
You can avoid this downtime by proactively reboot migrating the instance.

Reboot Migration
For instances with a date in the Reboot Maintenance field (available in the Console, CLI, and SDKs), you can
reboot your instance to move it to new infrastructure. After you reboot the instance, the Reboot Maintenance field is
cleared. This change indicates that the instance was moved successfully.

Prerequisites for Reboot Migration


1. Prepare the instance for reboot migration:
• Ensure that any remote block volumes defined in /etc/fstab use the recommended options.
• Ensure that any File Storage service (NFS) mounts use the nofail option.
• If you use the Oracle-provided script to configure secondary VNICs, ensure it runs automatically at startup.

Moving an Instance with Reboot Migration


After you complete the prerequisites:
1. Stop any running applications.
2. Reboot the instance.
3. Confirm that the Reboot Maintenance field no longer has a date.
4. Start and test any applications on the instance.
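The reboot in step 2 can be performed from the Console, or from the CLI with the InstanceAction operation. A sketch (the SOFTRESET action gracefully shuts down the OS and then powers the instance back on; the OCID is a placeholder):

oci compute instance action \
  --instance-id ocid1.instance.oc1.phx.exampleuniqueID \
  --action SOFTRESET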

Moving an Instance with Manual Migration


For instances without a date in the Reboot Maintenance field (available in the Console, CLI, and SDKs), you must
move the instance manually. This method requires that you terminate the instance, and then launch a new instance
from the retained boot volume. Instances that have additional VNICs, secondary IP addresses, remote attached block
volumes, or that belong to a backend set of a load balancer require additional steps.

Limitations and Warnings for Manual Migration


Be aware of the following limitations and warnings when performing a manual migration:

• Any public IP addresses assigned to your instance from a reserved public pool are retained. Any that were not
assigned from a reserved public IP address pool will change. Private IP addresses do not change.
• MAC addresses, CPUIDs, and other unique hardware identifiers do change during the move. If any applications
running on the instance use these identifiers for licensing or other purposes, be sure to take note of this
information before moving the instance to help you manage the change.

Prerequisites for Manual Migration


1. Before moving the instance, document all critical details:
• The instance's region, availability domain, and fault domain.
• The instance's display name.
• All private IP addresses, names, and subnets. Note that the instance can have multiple VNICs, and each VNIC
can have multiple secondary IP addresses.
• All private DNS names. The instance can have multiple VNICs, and each VNIC can have multiple secondary
IP addresses. Each private IP address can have a DNS name.
• Any public IP addresses assigned from a reserved public pool. Note that the instance can have multiple
VNICs, and each VNIC can have multiple secondary private IP addresses. Each VNIC and secondary private
IP address can have an attached public IP address.
• Any remote block volumes attached to the instance.
• Any tags on the instance or attached resources.
2. Prepare the instance for manual migration:
• Ensure that any remote block volumes defined in /etc/fstab use the recommended options.
• Ensure that any File Storage service (NFS) mounts use the nofail option.
• If you have statically defined any network interfaces belonging to secondary VNICs using their MAC
addresses, such as those defined in /etc/sysconfig/network-scripts/ifcfg*, those interfaces
will not start due to the change in the MAC address. Remove the static mapping.
• If you use the Oracle-provided script to configure secondary VNICs, ensure it runs automatically at startup.

Moving an Instance Manually


After you complete the prerequisites:
1. Stop any running applications.
2. Ensure that those applications will not start automatically.
Caution:

When the relocated instance starts for the first time, remote block volumes,
secondary VNICs, or any resource that relies on them, will not be attached.
The absence of these resources can cause application issues.
3. If your instance has local NVMe storage (dense instances), you must back up this data:
a. Create and attach one or more remote block volumes to the instance.
b. Copy the data from the NVMe devices to the remote block volumes.
4. Unmount any remote block volumes or File Storage service (NFS) mounts.
5. Back up all remote block volumes. See Overview of Block Volume Backups on page 544 for more information.
6. Create a backup of the root volume.
Important:

Do not generalize or specialize Windows instances.

7. Terminate the instance:


Using the Console
To terminate the instance, follow the steps in Terminating an Instance on page 789, ensuring that the
Permanently delete the attached boot volume check box is cleared. This preserves the boot volume that is
associated with the instance.
Using the API
To terminate the instance, use the TerminateInstance operation and pass the preserveBootVolume parameter
set to true in the request.
Using the CLI
To terminate the instance, use the terminate operation and set the preserve-boot-volume option to true (see the sketch after these steps).
8. Create a new instance using the boot volume from the terminated instance.
9. In the launch instance flow, specify the private IP address that was attached to the primary VNIC. If the public IP
address was assigned from a reserved IP address pool, be sure to assign the same IP address.
10. When the instance state changes to RUNNING, stop the instance.
11. Recreate any secondary VNICs and secondary IP addresses.
12. Attach any remote block volumes.
Note:

This step includes any volumes used to back up local NVMe devices. Copy
the data onto the NVMe storage on the new instance, and then detach the
volumes.
13. Start the instance.
14. Start and test any applications on the instance.
15. Configure the applications to start automatically, as required.
16. Recreate the required tags.
17. (Optional) After you confirm that the instance and applications are healthy, you can delete the volume backups.
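As referenced in step 7, a CLI sketch of the terminate call that preserves the boot volume (the OCID is a placeholder; the CLI may ask for confirmation before terminating):

oci compute instance terminate \
  --instance-id ocid1.instance.oc1.phx.exampleuniqueID \
  --preserve-boot-volume true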

Moving Compute Resources to a Different Compartment


You can move Compute resources such as instances, instance pools, and custom images from one compartment to
another.
When you move a Compute resource to a new compartment, associated resources such as boot volumes and VNICs
are not moved.
After you move the resource to the new compartment, inherent policies apply immediately and affect access to the
resource through the Console. For more information, see Managing Compartments on page 2450.

Required IAM Policy


To use Oracle Cloud Infrastructure, you must be granted security access in a policy by an administrator. This access
is required whether you're using the Console or the REST API with an SDK, CLI, or other tool. If you get a message
that you don’t have permission or are unauthorized, verify with your administrator what type of access you have and
which compartment you should work in.
For administrators: The following policies allow users to move Compute resources to a different compartment:

Allow group ComputeCompartmentMovers to manage instance-family in tenancy

Allow group ComputeCompartmentMovers to manage compute-management-family in tenancy

Allow group ComputeCompartmentMovers to manage auto-scaling-configurations in tenancy

If you're new to policies, see Getting Started with Policies on page 2143 and Common Policies on page 2150.

Security Zones
Security Zones ensure that your cloud resources comply with Oracle security principles. If any operation on a
resource in a security zone compartment violates a policy for that security zone, then the operation is denied.
The following security zone policies affect your ability to move Compute resources from one compartment to
another:
• You can't move a compute instance from a security zone to a standard compartment.
• You can't move a compute instance from a standard compartment to a compartment that is in a security zone.

Using the Console


To move an instance to a different compartment
1. Open the navigation menu. Under Core Infrastructure, go to Compute and click Instances.
2. In the List Scope section, select a compartment.
3. Click the instance that you're interested in.
4. Click More Actions, and then click Move Resource.
5. Choose the destination compartment from the list.
6. Click Move Resource.
To track the progress of the operation, you can monitor the associated work request.
7. If there are alarms monitoring the instance, update the alarms to reference the new compartment. See To update an
alarm after moving a resource on page 2763 for more information.
8. Optionally, move the resources that are attached to the instance to the new compartment.
To move an instance configuration to a different compartment
Note:

Most of the properties for an existing instance configuration, including
the compartment, cannot be modified after you create the instance
configuration. Although you can move an instance configuration to a
different compartment, you will not be able to use the instance configuration
to manage instance pools in the new compartment. If you want to update an
instance configuration to point to a different compartment, you should instead
create a new instance configuration in the target compartment. For steps, see
Creating an Instance Configuration on page 714.
1. Open the navigation menu. Under Core Infrastructure, go to Compute and click Instance Configurations.
2. In the List Scope section, select a compartment.
3. Click the instance configuration that you're interested in.
4. Click Move Resource.
5. Choose the destination compartment from the list.
6. Click Move Resource.
To move an instance pool to a different compartment
1. Open the navigation menu. Under Core Infrastructure, go to Compute and click Instance Pools.
2. In the List Scope section, select a compartment.
3. Click the instance pool that you're interested in.
4. Click More Actions, and then click Move Resource.
5. Choose the destination compartment from the list.
6. Click Move Resource.

7. Optionally, update the instance pool with an instance configuration that points to the new compartment. Do the
following:
a. Create a new instance configuration in the new compartment. You can do this using the Console or the API.
For steps, see Creating an Instance Configuration on page 714.
b. Update the instance pool with the new instance configuration. You can do this using the API. For steps, see
Updating an Instance Pool on page 718.
8. Optionally, move the instances and other resources that are associated with the instance pool to the new
compartment.
To move an autoscaling configuration to a different compartment
1. Open the navigation menu. Under Core Infrastructure, go to Compute and click Autoscaling Configurations.
2. In the List Scope section, select a compartment.
3. Click the autoscaling configuration that you're interested in.
4. Click Move Resource.
5. Choose the destination compartment from the list.
6. Click Move Resource.
To move a custom image to a different compartment
1. Open the navigation menu. Under Core Infrastructure, go to Compute and click Custom Images.
2. In the List Scope section, select a compartment.
3. Click the custom image that you're interested in.
4. Click Move Resource.
5. Choose the destination compartment from the list.
6. Click Move Resource.
To move a cluster network to a different compartment
1. Open the navigation menu. Under Core Infrastructure, go to Compute and click Cluster Networks.
2. In the List Scope section, select a compartment.
3. Click the cluster network that you're interested in.
4. Click Move Resource.
5. Choose the destination compartment from the list.
6. Click Move Resource.
7. Optionally, move the instances and other resources that are associated with the cluster network to the new
compartment.

Using the API


For information about using the API and signing requests, see REST APIs on page 4409 and Security Credentials on
page 181. For information about SDKs, see Software Development Kits and Command Line Interface on page 4262.
Use these API operations to move Compute resources to different compartments:
• ChangeInstanceCompartment
• ChangeInstanceConfigurationCompartment
• ChangeInstancePoolCompartment
• ChangeAutoScalingConfigurationCompartment
• ChangeImageCompartment
• ChangeClusterNetworkCompartment
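For example, a sketch of the instance move with the CLI. The change-compartment subcommand name is an assumption; confirm the exact command with oci compute instance --help. OCIDs are placeholders:

oci compute instance change-compartment \
  --instance-id ocid1.instance.oc1.phx.exampleuniqueID \
  --compartment-id ocid1.compartment.oc1..exampleuniqueID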

Stopping and Starting an Instance


You can stop and start an instance as needed to update software or resolve error conditions.
For steps to manage the lifecycle state of instances in an instance pool, see Stopping and Starting the Instances in an
Instance Pool on page 721.

Stopping or Restarting an Instance Using the Instance's OS


In addition to using the API and Console, you can stop and restart instances using the commands available in the
operating system when you are logged in to the instance. Stopping an instance using the instance's OS does not stop
billing for that instance. If you stop an instance this way, be sure to also stop it from the Console or API.
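To make sure the stop is recorded by the service (so that billing is paused for eligible shapes), stop and start the instance through the Console, API, or CLI. A CLI sketch using the InstanceAction operation (the OCID is a placeholder):

# Gracefully stop: sends a shutdown command to the OS, then powers the instance off.
oci compute instance action \
  --instance-id ocid1.instance.oc1.phx.exampleuniqueID \
  --action SOFTSTOP

# Start the instance again later.
oci compute instance action \
  --instance-id ocid1.instance.oc1.phx.exampleuniqueID \
  --action START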

Required IAM Policy


To use Oracle Cloud Infrastructure, you must be granted security access in a policy by an administrator. This access
is required whether you're using the Console or the REST API with an SDK, CLI, or other tool. If you get a message
that you don’t have permission or are unauthorized, verify with your administrator what type of access you have and
which compartment you should work in.
For administrators: The policy in Let users launch compute instances on page 2151 includes the ability to stop
or start an existing instance. If the specified group doesn't need to launch instances or attach volumes, you could
simplify that policy to include only manage instance-family, and remove the statements involving volume-
family and virtual-network-family.
If you're new to policies, see Getting Started with Policies on page 2143 and Common Policies on page 2150. For
reference material about writing policies for instances, cloud networks, or other Core Services API resources, see
Details for the Core Services on page 2192.

Resource Billing for Stopped Instances


For both VM and bare metal instances, billing depends on the shape that you use to create the instance:
• Standard shapes: Stopping an instance pauses billing. However, stopped instances continue to count toward your
service limits.
• Dense I/O shapes: Billing continues for stopped instances because the NVMe storage resources are preserved.
Related resources continue to count toward your service limits. To halt billing and remove related resources from
your service limits, you must terminate the instance.
• GPU shapes: Billing continues for stopped instances because GPU resources are preserved. Related resources
continue to count toward your service limits. To halt billing and remove related resources from your service
limits, you must terminate the instance.
• HPC shapes: Billing continues for stopped instances because the NVMe storage resources are preserved. Related
resources continue to count toward your service limits. To halt billing and remove related resources from your
service limits, you must terminate the instance.
Stopping an instance using the instance's OS does not stop billing for that instance. If you stop an instance this way,
be sure to also stop it from the Console or API.
For more information about Compute pricing, see Compute Pricing. For more information about how instances
running Microsoft Windows Server are billed when they are stopped, see How am I charged for Windows Server on
Oracle Cloud Infrastructure? on page 811.

Recovering a Virtual Machine (VM) During Planned Maintenance


Oracle Cloud Infrastructure performs routine data center maintenance on the physical infrastructure for VM instances.
This maintenance includes tasks such as upgrading and replacing hardware or performing maintenance that halts
power to the host. When an underlying infrastructure component needs to undergo maintenance, you are notified in
advance before the impact to your instances.
During an infrastructure maintenance event, where applicable, Oracle Cloud Infrastructure live migrates Standard
VM instances to a healthy physical VM host without disrupting running instances. If a VM instance cannot be live
migrated, then the instance is stopped on the physical VM host that needs maintenance, reboot migrated to a healthy
VM host, and then restarted. If you have an alternate process to recover the instances, you can optionally configure
the instances to remain stopped after they are reboot migrated to healthy hardware.
If VM instances are scheduled for a maintenance reboot, you can proactively reboot (or stop and start) the instances
at any time before the scheduled reboot. Proactively rebooting lets you control how and when your applications
experience downtime. Customer-managed VM maintenance is supported on Standard and GPU instance shapes,
including Oracle-provided platform images and custom images that were imported from outside of Oracle Cloud
Infrastructure.
To identify the VM instances that you can proactively reboot, do any of the following things:
Using the Console: To see which instances in the current compartment are scheduled for a maintenance reboot
1. Open the navigation menu. Under Core Infrastructure, go to Compute and click Instances.
If the instance has a maintenance reboot scheduled and can be proactively rebooted, a warning icon appears next
to the instance name.
2. Click the instance that you're interested in, and then check the Maintenance Reboot field for the instance. This
field displays the date and start time for the maintenance reboot.
Using the API: To see which instances in a compartment are scheduled for a maintenance reboot
Use the ListInstances operation. The timeMaintenanceRebootDue field for the Instance returns the date and
start time for the maintenance reboot.
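A CLI sketch that lists only the instances with a pending maintenance reboot. The kebab-case field name in the JMESPath --query is an assumption about how the CLI renders timeMaintenanceRebootDue; verify against your CLI output. The compartment OCID is a placeholder:

oci compute instance list \
  --compartment-id ocid1.compartment.oc1..exampleuniqueID \
  --query 'data[?"time-maintenance-reboot-due" != `null`].id'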
Using Search: To find all instances that are scheduled for a maintenance reboot
1. In the top navigation bar, click Search for resources, services, and documentation, and then click Advanced
Resource Query.
2. Click Select Sample Query, and then click Query for all instances which have an upcoming scheduled
maintenance reboot.
3. Click Search.
If you choose not to reboot before the scheduled time, then Oracle Cloud Infrastructure will migrate your instances
within a 24-hour period after the scheduled time.
An instance is no longer impacted by a maintenance event when the Maintenance Reboot field for the instance is
blank.

VM Recovery Due to Infrastructure Failure


When the underlying infrastructure of a VM instance fails due to software or hardware issues, Oracle Cloud
Infrastructure automatically attempts to recover the instance.
Standard and GPU VM instances are recovered using a reboot migration, which automatically restores the VM on a
healthy host, whether that's the original physical host or a different physical host. The VM failure is detected within
one minute of occurrence. If the host cannot be recovered immediately, a healthy move occurs, whereby the VM is
moved to a different host. In this scenario, the process of migrating to and restarting on a healthy host automatically
begins within five minutes. During the reboot, instance properties such as private and ephemeral public IP addresses,
attached block volumes, and VNICs are preserved.
Dense I/O VM instances are recovered by rebooting the instance on the same physical host. If recovering a Dense I/
O instance on the same physical host isn't possible, Oracle Cloud Infrastructure notifies you to terminate the instance
within 14 days. If you don't terminate the instance before the deadline, Oracle Cloud Infrastructure disables the
instance on the deadline and terminates it within the next seven days. The boot volume and remote attached data
volume are preserved.
Oracle Cloud Infrastructure notifies you by email or announcements of any VM infrastructure failure events, with
the status of the recovery action that was taken. You can also monitor the instance status metric to stay aware of any
unexpected reboots.
You can choose not to have your VMs automatically restart by configuring your instances to remain stopped after
they are recovered.

Hardware Reclamation for Stopped Bare Metal Instances


When a bare metal instance remains in the stopped state for longer than 48 hours, the instance is taken offline and the
physical hardware is reclaimed. The next time that you restart the instance, it starts on different physical hardware.
There are no changes to the block volumes, boot volumes, and instance metadata, including the ephemeral and public
IP addresses.
However, the following properties do change when a bare metal instance restarts on different physical hardware:
the MAC addresses and the host serial number. You might also notice changes in the BIOS firmware version, BIOS
settings, and CPU microcode. If you want to keep the same physical hardware, do not stop the instance using the
Console or the API, SDKs, or CLI. Instead, shut down the instance using the instance's OS. When you want to restart
the instance, use the Console or the API, SDKs, or CLI.
This behavior applies to Linux instances that use the following shapes:
• BM.Standard1.36
• BM.Standard.B1.44
• BM.Standard2.52
• BM.Standard.E2.64

Using the Console

To start an instance
1. Open the navigation menu. Under Core Infrastructure, go to Compute and click Instances.
2. Click the instance that you're interested in.
3. Click Start.

To stop an instance
1. Open the navigation menu. Under Core Infrastructure, go to Compute and click Instances.
2. Click the instance that you're interested in.
3. Click Stop.
4. By default, the Console gracefully stops the instance by sending a shutdown command to the operating system.
After waiting 15 minutes for the OS to shut down, the instance is powered off.
Note:

If the applications that run on the instance take more than 15 minutes to
shut down, they could be improperly stopped, resulting in data corruption.
To avoid this, shut down the instance using the commands available in the
OS before you stop the instance using the Console.
If you want to stop the instance immediately, without waiting for the OS to respond, select the Force stop the
instance by immediately powering off check box.
5. Click Stop Instance.

To reboot an instance
1. Open the navigation menu. Under Core Infrastructure, go to Compute and click Instances.
2. Click the instance that you're interested in.
3. Click Reboot.
4. By default, the Console gracefully restarts the instance by sending a shutdown command to the operating system.
After waiting 15 minutes for the OS to shut down, the instance is powered off and then powered back on.
Note:

If the applications that run on the instance take more than 15 minutes to
shut down, they could be improperly stopped, resulting in data corruption.
To avoid this, shut down the instance using the commands available in the
OS before you restart the instance using the Console.
If you want to reboot the instance immediately, without waiting for the OS to respond, select the Force reboot the
instance by immediately powering off, then powering back on check box.
5. Click Reboot Instance.

Using the API


For information about using the API and signing requests, see REST APIs on page 4409 and Security Credentials on
page 181. For information about SDKs, see Software Development Kits and Command Line Interface on page 4262.
Use the InstanceAction operation to stop, start, or reboot an instance.
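For example, the following sketch uses the OCI SDK for Python (assuming a default configuration file and a placeholder instance OCID) to gracefully stop an instance:

import oci

compute = oci.core.ComputeClient(oci.config.from_file())
instance_id = "ocid1.instance.oc1..exampleuniqueID"  # placeholder

# SOFTSTOP sends a shutdown request to the OS before powering off;
# use STOP to power off immediately, and SOFTRESET or RESET to reboot.
response = compute.instance_action(instance_id, "SOFTSTOP")
print(response.data.lifecycle_state)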

Terminating an Instance
You can permanently terminate (delete) instances that you no longer need. Any attached VNICs and volumes are
automatically detached when the instance terminates. Eventually, the instance's public and private IP addresses are
released and become available for other instances.
By default, the instance's boot volume is preserved when you terminate the instance. You can attach the boot volume
to a different instance as a data volume, or use it to launch a new instance. If you no longer need the boot volume, you
can permanently delete it at the same time that you terminate the instance.
Caution:

If your instance has NVMe storage, terminating the instance securely
erases the NVMe drives. Any data that was on the NVMe drives becomes
unrecoverable. Ensure that you back up any important data before you
terminate an instance. For more information, see Protecting Data on NVMe
Devices.

Required IAM Policy


To use Oracle Cloud Infrastructure, you must be granted security access in a policy by an administrator. This access
is required whether you're using the Console or the REST API with an SDK, CLI, or other tool. If you get a message
that you don’t have permission or are unauthorized, verify with your administrator what type of access you have and
which compartment you should work in.
For administrators: The policy in Let users launch compute instances on page 2151 includes the ability to terminate
an instance (with or without an attached block volume).
If you're new to policies, see Getting Started with Policies on page 2143 and Common Policies on page 2150. For
reference material about writing policies for instances, cloud networks, or other Core Services API resources, see
Details for the Core Services on page 2192.

Using the Console


1. Open the navigation menu. Under Core Infrastructure, go to Compute and click Instances.
2. Click the instance that you're interested in.
3. Click More Actions, and then click Terminate.
4. If you want to delete the boot volume that is associated with the instance, select the Permanently delete the
attached boot volume check box.
5. Click Terminate Instance.
Terminated instances temporarily remain in the list of instances with the state Terminated.

Using the API


For information about using the API and signing requests, see REST APIs on page 4409 and Security Credentials on
page 181. For information about SDKs, see Software Development Kits and Command Line Interface on page 4262.
Use the TerminateInstance operation to terminate an instance.
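For example, the following sketch uses the OCI SDK for Python (assuming a default configuration file and a placeholder instance OCID) to terminate an instance while preserving its boot volume:

import oci

compute = oci.core.ComputeClient(oci.config.from_file())
instance_id = "ocid1.instance.oc1..exampleuniqueID"  # placeholder

# preserve_boot_volume=True keeps the boot volume so that it can be reused;
# pass False to delete the boot volume together with the instance.
compute.terminate_instance(instance_id, preserve_boot_volume=True)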

Enabling Monitoring for Compute Instances


This topic describes how to enable monitoring, specifically for the compute instance metrics, on compute instances.
The compute instance metrics provide data about the activity level and throughput of the instance. These metrics are
required to use features such as autoscaling, metrics, alarms, and notifications with compute instances. A compute
instance emits these metrics only when the Compute Instance Monitoring plugin is enabled and running on the
instance.
The Compute Instance Monitoring plugin is managed by the Oracle Cloud Agent software.

Supported Images
Compute instance metrics are supported on current Oracle-provided images and on custom images that are based on
current Oracle-provided images.
If you use an older Oracle-provided image, you must manually install the Oracle Cloud Agent software before you
can use the Compute Instance Monitoring plugin. Select an image dated after November 15, 2018 (except Ubuntu,
which must be dated after February 28, 2019).
You might have success enabling compute instance metrics on other images that support the Oracle Cloud Agent
software, though the Compute Instance Monitoring plugin has not been tested on other operating systems and there
is no guarantee that it will work. Compute instance metrics are not supported on Windows Server 2008 R2 custom
images.

Required IAM Policy


To use Oracle Cloud Infrastructure, you must be granted security access in a policy by an administrator. This access
is required whether you're using the Console or the REST API with an SDK, CLI, or other tool. If you get a message
that you don’t have permission or are unauthorized, verify with your administrator what type of access you have and
which compartment you should work in.
For administrators: For more information about the IAM policies that are needed to create and update a compute
instance, see Creating an Instance on page 700.

Prerequisites
• Service gateways or public IP addresses: The compute instance must have either a public IP address or a service
gateway to be able to send compute instance metrics to the Monitoring service.
If the instance does not have a public IP address, set up a service gateway on the virtual cloud network (VCN).
The service gateway lets the instance send compute instance metrics to the Monitoring service without the traffic
going over the internet. Here are special notes for setting up the service gateway to access the Monitoring service:
• When creating the service gateway, enable the service label called All <region> Services in Oracle Services
Network. It includes the Monitoring service.
• When setting up routing for the subnet that contains the instance, set up a route rule with Target Type set to
Service Gateway, and the Destination Service set to All <region> Services in Oracle Services Network.
For detailed instructions, see Access to Oracle Services: Service Gateway on page 3284.
• Oracle Cloud Agent: The Oracle Cloud Agent software must be installed on the instance. Oracle Cloud Agent is
installed by default on current Oracle-provided images. For steps to manually install Oracle Cloud Agent on older
images, see Installing the Oracle Cloud Agent Software on page 747.
• Compute Instance Monitoring plugin: For the instance to emit the compute instance metrics, the Compute Instance
Monitoring plugin must be enabled on the instance and plugins must be running. For more information about how
to enable and run plugins, see Managing Plugins with Oracle Cloud Agent on page 746.
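If you prefer to script the service gateway setup described in the first prerequisite, the following sketch uses the OCI SDK for Python (the compartment and VCN OCIDs are placeholders, and the required route rule is not shown) to create a service gateway that enables the All <region> Services in Oracle Services Network label:

import oci

network = oci.core.VirtualNetworkClient(oci.config.from_file())
compartment_id = "ocid1.compartment.oc1..exampleuniqueID"  # placeholder
vcn_id = "ocid1.vcn.oc1..exampleuniqueID"                  # placeholder

# Find the "All <region> Services in Oracle Services Network" service label.
all_services = next(
    s for s in network.list_services().data
    if s.name.lower().startswith("all "))

gateway = network.create_service_gateway(
    oci.core.models.CreateServiceGatewayDetails(
        compartment_id=compartment_id,
        vcn_id=vcn_id,
        services=[oci.core.models.ServiceIdRequestDetails(
            service_id=all_services.id)])).data
print(gateway.id)
# A route rule with this gateway as the target still needs to be added to the
# subnet's route table (see Access to Oracle Services: Service Gateway).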

Enabling Monitoring for a New Compute Instance


To configure a new compute instance to emit the compute instance metrics, use the following steps.

Creating a Monitoring-Enabled Instance Using the Console


1. Follow the steps to create an instance, until the advanced options. Ensure that the instance has either a public IP
address or a service gateway, as described in the prerequisites.
2. Click Show Advanced Options.
3. On the Oracle Cloud Agent tab, select the Compute Instance Monitoring check box.
Note:

If you're using an older Oracle-provided image or a custom image that is
not based on a recent Oracle-provided image, you must manually install
the Oracle Cloud Agent software. You can do this by providing a cloud-
init script. For more information, see Installing the Oracle Cloud Agent
Software on page 747. Compare the date of the image to the date listed
in Supported Images.
4. Click Create.
The newly created, monitoring-enabled instance emits compute instance metrics to the Monitoring service.

Creating a Monitoring-Enabled Instance Using the API


Use the LaunchInstance operation. Include the following parameters:

{
  "agentConfig": {
    "isMonitoringDisabled": false,
    "areAllPluginsDisabled": false,
    "pluginsConfig": [
      {
        "name": "Compute Instance Monitoring",
        "desiredState": "ENABLED"
      }
    ]
  }
}

Ensure that the instance has either a public IP address or a service gateway, as described in the prerequisites.
Note:

If you're using an older Oracle-provided image or a custom image that is
not based on a recent Oracle-provided image, you must manually install
the Oracle Cloud Agent software. You can do this by providing a cloud-init
script. For more information, see Installing the Oracle Cloud Agent Software
on page 747. Compare the date of the image to the date listed in Supported
Images.
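With the OCI SDK for Python, the same agentConfig payload can be expressed through LaunchInstanceDetails. The following sketch assumes a default configuration file, placeholder OCIDs and shape, and an SDK version recent enough to support pluginsConfig:

import oci

compute = oci.core.ComputeClient(oci.config.from_file())

details = oci.core.models.LaunchInstanceDetails(
    availability_domain="Uocm:PHX-AD-1",                       # placeholder
    compartment_id="ocid1.compartment.oc1..exampleuniqueID",   # placeholder
    shape="VM.Standard2.1",
    display_name="monitoring-enabled-instance",
    create_vnic_details=oci.core.models.CreateVnicDetails(
        subnet_id="ocid1.subnet.oc1..exampleuniqueID"),         # placeholder
    source_details=oci.core.models.InstanceSourceViaImageDetails(
        image_id="ocid1.image.oc1..exampleuniqueID"),           # placeholder
    agent_config=oci.core.models.LaunchInstanceAgentConfigDetails(
        is_monitoring_disabled=False,
        are_all_plugins_disabled=False,
        plugins_config=[oci.core.models.InstanceAgentPluginConfigDetails(
            name="Compute Instance Monitoring",
            desired_state="ENABLED")]))

instance = compute.launch_instance(details).data
print(instance.id, instance.lifecycle_state)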

Enabling Monitoring for an Existing Compute Instance


To configure an existing Compute instance to emit the compute instance metrics, use the following steps.
To enable monitoring on an existing compute instance using the Console
1. Install the Oracle Cloud Agent software, if it is not already installed.
2. Enable the Compute Instance Monitoring plugin.
3. Confirm that plugins are running on the instance.
4. Ensure that the instance has either a public IP address or a service gateway, as described in the prerequisites.
5. To confirm that monitoring is enabled:


a. Go to the Metrics page for the instance:
1. Open the navigation menu. Under Core Infrastructure, go to Compute and click Instances.
2. Click the instance that you're interested in.
3. Under Resources, click Metrics.
4. In the Metric Namespace list, select oci_computeagent.
b. If you see metric charts with data, then the Monitoring service is receiving compute instance metrics from this
instance. For more information about these metrics, see Compute Instance Metrics on page 794.
If monitoring is not enabled (and the instance uses a supported image), then a button is available to enable
monitoring. Click Enable monitoring.
To enable monitoring on an existing compute instance using the API
1. Install the Oracle Cloud Agent software, if it is not already installed.
2. Use the UpdateInstance operation. Include the following parameters:

{
  "agentConfig": {
    "isMonitoringDisabled": false,
    "areAllPluginsDisabled": false,
    "pluginsConfig": [
      {
        "name": "Compute Instance Monitoring",
        "desiredState": "ENABLED"
      }
    ]
  }
}
3. Ensure that the instance has either a public IP address or a service gateway, as described in the prerequisites.
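The following sketch shows the equivalent UpdateInstance call with the OCI SDK for Python (assuming a default configuration file, a placeholder instance OCID, and an SDK version recent enough to support pluginsConfig):

import oci

compute = oci.core.ComputeClient(oci.config.from_file())
instance_id = "ocid1.instance.oc1..exampleuniqueID"  # placeholder

compute.update_instance(
    instance_id,
    oci.core.models.UpdateInstanceDetails(
        agent_config=oci.core.models.UpdateInstanceAgentConfigDetails(
            is_monitoring_disabled=False,
            are_all_plugins_disabled=False,
            plugins_config=[oci.core.models.InstanceAgentPluginConfigDetails(
                name="Compute Instance Monitoring",
                desired_state="ENABLED")])))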

Managing the Compute Instance Monitoring Plugin


For an instance to emit the compute instance metrics, the Compute Instance Monitoring plugin must be enabled on the
instance and plugins must be running.
If you want to temporarily prevent the instance from emitting compute instance metrics, you can disable the Compute
Instance Monitoring plugin. You can also stop all of the plugins that run on the instance, including the Compute
Instance Monitoring plugin.
Caution:

Functionality that depends on the plugin, such as monitoring and autoscaling,
does not work when the plugin is disabled or stopped.
For more information about how to enable and run plugins, see Managing Plugins with Oracle Cloud Agent on page
746.

Finding Out if Monitoring Has Your Metrics


To determine whether Monitoring is receiving the compute instance metrics, you can either query the instance
metrics, or view the instance properties to confirm that the Compute Instance Monitoring plugin is enabled and
running.
Using the Console: To find out whether Monitoring is receiving metrics by querying instance metrics
1. Open the navigation menu. Under Core Infrastructure, go to Compute and click Instances.
2. Click the instance that you're interested in.
3. Under Resources, click Metrics.
4. In the Metric Namespace list, select oci_computeagent.


If you see metric charts with data, then the Monitoring service is receiving metrics from this instance. For more
information about these metrics, see Compute Instance Metrics on page 794.
If you see a message that monitoring is not enabled, or that the Oracle Cloud Agent software needs to be installed,
then complete those tasks.
Using the Console: To find out whether the Compute Instance Monitoring plugin is enabled and running
1. Open the navigation menu. Under Core Infrastructure, go to Compute and click Instances.
2. Click the instance that you're interested in.
3. Click the Oracle Cloud Agent tab.
4. Confirm that the Compute Instance Monitoring plugin is enabled, and all plugins are running.
Using the API: To find out whether Monitoring is receiving metrics by querying instance metrics
Use the SummarizeMetricsData API operation. If metrics are returned, it indicates that the Monitoring service is
receiving metrics from the instance.
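For example, the following sketch uses the OCI SDK for Python (assuming a default configuration file and a placeholder compartment OCID) to query an oci_computeagent metric; any returned datapoints indicate that Monitoring is receiving metrics:

import oci

monitoring = oci.monitoring.MonitoringClient(oci.config.from_file())
compartment_id = "ocid1.compartment.oc1..exampleuniqueID"  # placeholder

results = monitoring.summarize_metrics_data(
    compartment_id=compartment_id,
    summarize_metrics_data_details=oci.monitoring.models.SummarizeMetricsDataDetails(
        namespace="oci_computeagent",
        query="CpuUtilization[1m].mean()")).data

# Each returned item is one metric stream; any datapoints mean that the
# Monitoring service is receiving metrics from that instance.
for item in results:
    print(item.dimensions.get("resourceDisplayName"), len(item.datapoints))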
Using the API: To find out whether the Compute Instance Monitoring plugin is enabled and running
Use the GetInstance operation (or ListInstances operation, for multiple instances).
In the response, if the agentConfig object returns the following values, it indicates that the Compute Instance
Monitoring plugin is enabled and all plugins are running.

{
  "agentConfig": {
    "isMonitoringDisabled": false,
    "areAllPluginsDisabled": false,
    "pluginsConfig": [
      {
        "name": "Compute Instance Monitoring",
        "desiredState": "ENABLED"
      }
    ]
  }
}
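The following sketch shows the equivalent check with the OCI SDK for Python (assuming a default configuration file and a placeholder instance OCID):

import oci

compute = oci.core.ComputeClient(oci.config.from_file())
instance_id = "ocid1.instance.oc1..exampleuniqueID"  # placeholder

agent_config = compute.get_instance(instance_id).data.agent_config
print("isMonitoringDisabled:", agent_config.is_monitoring_disabled)
print("areAllPluginsDisabled:", agent_config.are_all_plugins_disabled)
for plugin in (agent_config.plugins_config or []):
    print(plugin.name, plugin.desired_state)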

Not seeing metrics for your instance?


If you don't see any metric charts, the instance might not be emitting metrics. See the following possible causes and
resolutions.

Possible cause: The Compute Instance Monitoring plugin is disabled on the instance, or plugins are stopped.
How to check: Review the instance properties.
Resolution: Enable the Compute Instance Monitoring plugin and start all plugins.

Possible cause: The instance cannot access the Monitoring service because its VCN does not use the internet.
How to check: Review the instance's IP address. If it's not public, then a service gateway is needed.
Resolution: Set up a service gateway.

Possible cause: The instance does not use a supported image.
How to check: Review the supported images.
Resolution: Create an instance with a supported image.

Possible cause: Older images and custom images: No Oracle Cloud Agent software exists on the instance.
How to check: Connect to the instance and look for the software.
Resolution: Install the Oracle Cloud Agent software.

Possible cause: New instance in a new compartment: The IAM policies required for the instance to publish metrics to Monitoring are not yet initialized.
How to check: Not applicable.
Resolution: Check back after 10 or 20 minutes.
More information: IAM policies are automatically created for new instances and are immediately available, unless the instances are in a new compartment. For a new instance in a new compartment, the policies can take up to 20 minutes to initialize, which delays the emission of metrics.

Compute Metrics
You can monitor the health, capacity, and performance of your Oracle Cloud Infrastructure resources by using
metrics, alarms, and notifications. For more information, see Monitoring Overview on page 2686 and Notifications
Overview on page 3378.
There are multiple Monitoring service metric namespaces related to Compute resources:
• oci_computeagent: Metrics related to the activity level and throughput of compute instances, as emitted by the
Compute Instance Monitoring plugin. See Compute Instance Metrics on page 794.
• oci_instancepools: Metrics related to the lifecycle state of instances in instance pools. See Instance Pool Metrics
on page 799.
• oci_compute_infrastructure_health: Metrics related to the up/down status, health, and maintenance status of
compute instances. See Infrastructure Health Metrics on page 801.
• oci_compute: Metrics related to the instance metadata service (IMDS) that provides information about running
compute instances. See Compute Management Metrics on page 804.

Compute Instance Metrics


You can monitor the health, capacity, and performance of your Compute instances by using metrics, alarms, and
notifications.
This topic describes the metrics emitted by the metric namespace oci_computeagent (the Compute Instance
Monitoring plugin on Compute instances).
You can view these metrics for individual Compute instances, and for all the instances in an instance pool.
Resources: Monitoring-enabled Compute instances.
Overview of Metrics for an Instance and Related Resources
This section gives an overall picture of the different types of metrics available for an instance and its storage and
network devices. See the following diagram and table for a summary.

oci_computeagent
Resource ID: Instance OCID
Where measured: On the instance. The metrics in this namespace are aggregated across all the related resources on the instance. For example, DiskBytesRead is aggregated across all the instance's attached storage volumes, and NetworkBytesIn is aggregated across all the instance's attached VNICs.
Available metrics: See Available Metrics: oci_computeagent on page 796.

oci_blockstore
Resource ID: Boot or block volume OCID
Where measured: By the Block Volume service. The metrics are for an individual volume (either boot volume or block volume).
Available metrics: See Block Volume Metrics on page 589.

oci_vcn
Resource ID: VNIC OCID
Where measured: By the Networking service. The metrics are for an individual VNIC.
Available metrics: See VNIC Metrics on page 3349.

Prerequisites
• IAM policies: To monitor resources, you must be given the required type of access in a policy written by an
administrator, whether you're using the Console or the REST API with an SDK, CLI, or other tool. The policy
must give you access to the monitoring services as well as the resources being monitored. If you try to perform
an action and get a message that you don’t have permission or are unauthorized, confirm with your administrator
the type of access you've been granted and which compartment you should work in. For more information on user
authorizations for monitoring, see the Authentication and Authorization section for the related service: Monitoring
or Notifications.
• Metrics exist in Monitoring: The resources that you want to monitor must emit metrics to the Monitoring service.
• Compute instances: To emit metrics, the Compute Instance Monitoring plugin must be enabled on the instance,
and plugins must be running. The instance must also have either a service gateway or a public IP address to send
metrics to the Monitoring service. For more information, see Enabling Monitoring for Compute Instances on page
790.
Available Metrics: oci_computeagent
The compute instance metrics help you measure activity level and throughput of compute instances. The metrics
listed in the following table are available for any monitoring-enabled compute instance. You must enable monitoring
on the instance to get these metrics.
The metrics in this namespace are aggregated across all the related resources on the instance. For example,
DiskBytesRead is aggregated across all the instance's attached storage volumes, and NetworkBytesIn is
aggregated across all the instance's attached VNICs.
You also can use the Monitoring service to create custom queries.
Each metric includes the following dimensions:
availabilityDomain
The availability domain where the instance resides.
faultDomain
The fault domain where the instance resides.
imageId
The OCID of the image for the instance.
instancePoolId
The instance pool that the instance belongs to.
region
The region where the instance resides.
resourceDisplayName
The friendly name of the instance.
resourceId
The OCID of the instance.
shape
The shape of the instance.

Each of the following metrics includes all of the dimensions listed above.

CpuUtilization (CPU Utilization)
Unit: percent
Description: Activity level from CPU. Expressed as a percentage of total time. For instance pools, the value is averaged across all instances in the pool.

DiskBytesRead (Disk Read Bytes) [1, 3]
Unit: bytes
Description: Read throughput. Expressed as bytes read per interval.

DiskBytesWritten (Disk Write Bytes) [1, 3]
Unit: bytes
Description: Write throughput. Expressed as bytes written per interval.

DiskIopsRead (Disk Read I/O) [1, 3]
Unit: operations
Description: Activity level from I/O reads. Expressed as reads per interval.

DiskIopsWritten (Disk Write I/O) [1, 3]
Unit: operations
Description: Activity level from I/O writes. Expressed as writes per interval.

LoadAverage (Load Average)
Unit: number of processes
Description: Average system load calculated over a 1-minute period.

MemoryAllocationStalls (Memory Allocation Stalls)
Unit: number of stalls
Description: Number of times page reclaim was called directly.

MemoryUtilization (Memory Utilization) [1]
Unit: percent
Description: Space currently in use. Measured by pages. Expressed as a percentage of used pages. For instance pools, the value is averaged across all instances in the pool.

NetworksBytesIn (Network Receive Bytes) [1, 2]
Unit: bytes
Description: Network receipt throughput. Expressed as bytes received.

NetworksBytesOut (Network Transmit Bytes) [1, 2]
Unit: bytes
Description: Network transmission throughput. Expressed as bytes transmitted.

[1] This metric is a cumulative counter that shows monotonically increasing behavior for each session of the Oracle Cloud Agent software, resetting when the operating system is restarted.
[2] The Networking service provides additional metrics (in the oci_vcn metric namespace) for each VNIC on the instance. For more information, see Networking Metrics on page 3348.
[3] The Block Volume service provides additional metrics (in the oci_blockstore metric namespace) for each volume attached to the instance. For more information, see Block Volume Metrics on page 589.

Using the Console


To view default metric charts for a single Compute instance
1. Open the navigation menu. Under Core Infrastructure, go to Compute and click Instances.
2. Click the instance that you're interested in.
3. Under Resources, click Metrics.
4. In the Metric Namespace list, select oci_computeagent.


The Metrics page displays a default set of charts for the current instance.
Not seeing any metric charts for the instance?
If you don't see any metric charts, the instance might not be emitting metrics. See the following possible causes
and resolutions.

Possible cause: The Compute Instance Monitoring plugin is disabled on the instance, or plugins are stopped.
How to check: Review the instance properties.
Resolution: Enable the Compute Instance Monitoring plugin and start all plugins.

Possible cause: The instance cannot access the Monitoring service because its VCN does not use the internet.
How to check: Review the instance's IP address. If it's not public, then a service gateway is needed.
Resolution: Set up a service gateway.

Possible cause: The instance does not use a supported image.
How to check: Review the supported images.
Resolution: Create an instance with a supported image.

Possible cause: Older images and custom images: No Oracle Cloud Agent software exists on the instance.
How to check: Connect to the instance and look for the software.
Resolution: Install the Oracle Cloud Agent software.

Possible cause: New instance in a new compartment: The IAM policies required for the instance to publish metrics to Monitoring are not yet initialized.
How to check: Not applicable.
Resolution: Check back after 10 or 20 minutes.
More information: IAM policies are automatically created for new instances and are immediately available, unless the instances are in a new compartment. For a new instance in a new compartment, the policies can take up to 20 minutes to initialize, which delays the emission of metrics.

For more information about monitoring metrics and using alarms, see Monitoring Overview on page 2686. For
information about notifications for alarms, see Notifications Overview on page 3378.
To view default metric charts for resources related to a Compute instance
• For an attached block volume: While viewing the instance's details, under Resources, click Attached Block
Volumes, and then click the volume that you're interested in. Click Metrics to see the volume's charts. For more
information about the emitted metrics, see Block Volume Metrics on page 589.
• For the attached boot volume: While viewing the instance's details, under Resources, click Boot Volume, and
then click the volume that you're interested in. Click Metrics to see the volume's charts. For more information
about the emitted metrics, see Block Volume Metrics on page 589.
• For an attached VNIC: While viewing the instance's details, under Resources, click Attached VNICs, and then
click the VNIC that you're interested in. Click Metrics to see the charts for the VNIC. For more information about
the emitted metrics, see Networking Metrics on page 3348.
To view default metric charts for all Compute instances in a compartment
1. Open the navigation menu. Under Solutions and Platform, go to Monitoring and click Service Metrics.
2. Select a compartment.
3. For Metric Namespace, select oci_computeagent.


The Service Metrics page dynamically updates the page to show charts for each metric that is emitted by the
selected metric namespace.
For more information about monitoring metrics and using alarms, see Monitoring Overview on page 2686. For
information about notifications for alarms, see Notifications Overview on page 3378.
To view default metric charts for the instances in an instance pool
1. Open the navigation menu. Under Core Infrastructure, go to Compute and click Instance Pools.
2. Click the instance pool that you're interested in.
3. Under Resources, click Metrics.
4. In the Metric Namespace list, select oci_computeagent.
The Metrics page displays a default set of charts for the current instance pool.
For more information about monitoring metrics and using alarms, see Monitoring Overview on page 2686. For
information about notifications for alarms, see Notifications Overview on page 3378.
Using the API
For information about using the API and signing requests, see REST APIs on page 4409 and Security Credentials on
page 181. For information about SDKs, see Software Development Kits and Command Line Interface on page 4262.
Use the following APIs for monitoring:
• Monitoring API for metrics and alarms
• Notifications API for notifications (used with alarms)

Instance Pool Metrics


You can monitor the health, capacity, and performance of your Compute instance pools by using metrics, alarms, and
notifications.
This topic describes the metrics emitted by the metric namespace oci_instancepools.
Resources: Compute instance pools.
Overview of Metrics: oci_instancepools
The instance pool metrics help you monitor the lifecycle state of instances in your instance pools.

Required IAM Policy


To monitor resources, you must be given the required type of access in a policy written by an administrator, whether
you're using the Console or the REST API with an SDK, CLI, or other tool. The policy must give you access to
the monitoring services as well as the resources being monitored. If you try to perform an action and get a message
that you don’t have permission or are unauthorized, confirm with your administrator the type of access you've been
granted and which compartment you should work in. For more information on user authorizations for monitoring, see
the Authentication and Authorization section for the related service: Monitoring or Notifications.
Available Metrics: oci_instancepools
The metrics listed in the following table are automatically available for each instance pool that you create. You do not
need to enable monitoring on the instances in the pool to get these metrics.
You also can use the Monitoring service to create custom queries.
Depending on the metric, the following dimensions are available:
AvailabilityDomain
The availability domain where the instance pool resides.
DisplayName
The friendly name of the instance pool.
FaultDomain
The fault domain where the instance pool resides.
region
The region where the instance pool resides.
resourceId
The OCID of the instance pool.

InstancePoolSize (Instance Pool Size)
Unit: instances
Description: Number of instances in the pool.
Dimensions: DisplayName, region, resourceId

ProvisioningInstances (Instances Provisioning)
Unit: instances
Description: Number of instances in the pool that are provisioning.
Dimensions: AvailabilityDomain, DisplayName, FaultDomain, region, resourceId

RunningInstances (Instances Running)
Unit: instances
Description: Number of running instances in the pool.
Dimensions: AvailabilityDomain, DisplayName, FaultDomain, region, resourceId

TerminatedInstances (Instances Terminated)
Unit: instances
Description: Number of instances in the pool that are terminating or terminated.
Dimensions: AvailabilityDomain, DisplayName, FaultDomain, region, resourceId

Using the Console


To view default metric charts for a single instance pool
1. Open the navigation menu. Under Core Infrastructure, go to Compute and click Instance Pools.
2. Click the instance pool that you're interested in.
3. Under Resources, click Metrics.
4. In the Metric Namespace list, select oci_instancepools.
The Metrics page displays a default set of charts for the current instance pool.
For more information about monitoring metrics and using alarms, see Monitoring Overview on page 2686. For
information about notifications for alarms, see Notifications Overview on page 3378.

To view default metric charts for all instance pools in a compartment


1. Open the navigation menu. Under Solutions and Platform, go to Monitoring and click Service Metrics.
2. Select a compartment.
3. For Metric Namespace, select oci_instancepools.
The Service Metrics page dynamically updates the page to show charts for each metric that is emitted by the
selected metric namespace.
For more information about monitoring metrics and using alarms, see Monitoring Overview on page 2686. For
information about notifications for alarms, see Notifications Overview on page 3378.
Using the API
For information about using the API and signing requests, see REST APIs on page 4409 and Security Credentials on
page 181. For information about SDKs, see Software Development Kits and Command Line Interface on page 4262.
Use the following APIs for monitoring:
• Monitoring API for metrics and alarms
• Notifications API for notifications (used with alarms)

Infrastructure Health Metrics


You can monitor the health, capacity, and performance of the infrastructure for your Compute virtual machine (VM)
and bare metal instances by using metrics, alarms, and notifications.
This topic describes the metrics emitted by the metric namespace oci_compute_infrastructure_health.
Resources: Compute instances.
Overview of Metrics: oci_compute_infrastructure_health
The Compute infrastructure health metrics help you monitor the status and health of Compute instances.
• Instance health (up/down) status: The instance_status metric lets you check whether a VM or bare metal
instance is available (up) or unavailable (down) when in the running state.
• Instance maintenance status: The maintenance_status metric lets you monitor whether a VM instance is
scheduled for planned infrastructure maintenance.
• Bare metal infrastructure health status: The health_status metric helps you monitor the health of the
infrastructure for bare metal instances, including hardware components such as the CPU and memory.
Based on the value of the metrics, you can proactively move affected instances to healthy hardware and thereby
minimize the impact on your applications.
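As one example of acting on these metrics, the following sketch uses the OCI SDK for Python (the compartment and Notifications topic OCIDs are placeholders) to create an alarm that fires when instance_status reports an instance as down:

import oci

monitoring = oci.monitoring.MonitoringClient(oci.config.from_file())
compartment_id = "ocid1.compartment.oc1..exampleuniqueID"  # placeholder
topic_id = "ocid1.onstopic.oc1..exampleuniqueID"           # placeholder Notifications topic

# instance_status is 1 when a running instance is unavailable, so alarm on > 0.
alarm = monitoring.create_alarm(oci.monitoring.models.CreateAlarmDetails(
    display_name="instance-down",
    compartment_id=compartment_id,
    metric_compartment_id=compartment_id,
    namespace="oci_compute_infrastructure_health",
    query="instance_status[1m].max() > 0",
    severity="CRITICAL",
    destinations=[topic_id],
    is_enabled=True)).data
print(alarm.id)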

Required IAM Policy


To monitor resources, you must be given the required type of access in a policy written by an administrator, whether
you're using the Console or the REST API with an SDK, CLI, or other tool. The policy must give you access to
the monitoring services as well as the resources being monitored. If you try to perform an action and get a message
that you don’t have permission or are unauthorized, confirm with your administrator the type of access you've been
granted and which compartment you should work in. For more information on user authorizations for monitoring, see
the Authentication and Authorization section for the related service: Monitoring or Notifications.
Available Metrics: oci_compute_infrastructure_health
The metrics listed in the following table are automatically available for your instances. The instance_status
metric is available for both VM and bare metal instances, the maintenance_status metric is available only
for VM instances, and the health_status metric is available only for bare metal instances. You do not need to
enable monitoring on the instance to get these metrics.
You also can use the Monitoring service to create custom queries.
Depending on the metric, the following dimensions are available:
faultClass
The type of hardware issue:
• CPU: A fault has been detected in one or more CPUs.
• MEM-BOOT: A fault in the memory subsystem was detected during instance launch or a recent reboot.
• MEM-RUNTIME: A fault in the memory subsystem was detected.
• MGMT-CONTROLLER: A fault in the instance management controller has been detected.
• PCI: A fault in the PCI subsystem has been detected.
• PCI-NIC: A fault in the instance network interface card (NIC) has been detected.
• SDN-INTERFACE: A fault in the instance software defined network interface has been detected.
For troubleshooting suggestions and more information about these hardware issues, see Compute Health
Monitoring for Bare Metal Instances on page 807.
resourceDisplayName
The friendly name of the instance.
resourceId
The OCID of the instance.
maintenanceDueTime
The scheduled start time of the 24-hour maintenance window, in the format defined by RFC3339.
computeMaintenanceAction
The action that Oracle Cloud Infrastructure will perform on an instance during a scheduled maintenance
event:
• REBOOT: The instance is migrated from the physical VM host that needs maintenance to a healthy VM
host. If live migration is not possible, then the instance is reboot migrated.
recommendedAction
The action that you can take before the scheduled maintenance event, so that you can control how and when
your applications experience downtime.
• REBOOT: You can proactively reboot the instance before the scheduled maintenance time. When
you reboot an instance for maintenance, the instance is stopped on the physical VM host that needs
maintenance, and then restarted on a healthy VM host. For more information, see Moving a Compute
Instance to a New Host on page 781.

health_status (Infrastructure Health Status)
Unit: Issues
Description: The number of health issues for a bare metal instance. Any non-zero value indicates a health defect.
Dimensions: faultClass, resourceDisplayName, resourceId

instance_status (Instance Status)
Unit: Count
Description: The status of a running VM or bare metal instance. A value of 0 indicates that the instance is available (up). A value of 1 indicates that the instance is not available (down) due to an infrastructure issue. If the instance is stopped, then the metric does not have a value.
Dimensions: resourceDisplayName, resourceId

maintenance_status (Maintenance Status)
Unit: Count
Description: The maintenance status of a VM instance. A value of 0 indicates that the instance is not scheduled for an infrastructure maintenance event. A value of 1 indicates that the instance is scheduled for an infrastructure maintenance event.
Dimensions: maintenanceDueTime, computeMaintenanceAction, recommendedAction, resourceDisplayName, resourceId

Using the Console


To view infrastructure health metrics for a single Compute instance
1. Open the navigation menu. Under Core Infrastructure, go to Compute and click Instances.
2. Click the instance that you're interested in.
3. Under Resources, click Metrics.
4. In the Metric Namespace list, select oci_compute_infrastructure_health.
The Metrics page displays a default set of charts for the current instance.
For more information about monitoring metrics and using alarms, see Monitoring Overview on page 2686. For
information about notifications for alarms, see Notifications Overview on page 3378.
To view infrastructure health metrics for all Compute instances in a compartment
1. Open the navigation menu. Under Solutions and Platform, go to Monitoring and click Service Metrics.
2. Select a compartment.
3. For Metric Namespace, select oci_compute_infrastructure_health.
The Service Metrics page dynamically updates to show charts for each metric that is emitted by the selected
metric namespace.
For more information about monitoring metrics and using alarms, see Monitoring Overview on page 2686. For
information about notifications for alarms, see Notifications Overview on page 3378.

Using the API


For information about using the API and signing requests, see REST APIs on page 4409 and Security Credentials on
page 181. For information about SDKs, see Software Development Kits and Command Line Interface on page 4262.
Use the following APIs for monitoring:
• Monitoring API for metrics and alarms
• Notifications API for notifications (used with alarms)

Compute Management Metrics


You can monitor requests to the instance metadata service on compute virtual machine (VM) and bare metal instances
by using metrics, alarms, and notifications.
This topic describes the metrics emitted by the metric namespace oci_compute.
Resources: compute instances.
Overview of Metrics: oci_compute
The instance metadata service (IMDS) provides metadata about an instance, its attached VNICs, and custom metadata
that you supply. IMDS is available in two versions, version 1 and version 2. IMDSv2 offers increased security
compared to the legacy v1.
Use the Compute management metric to identify requests to the legacy v1 endpoints. After you migrate any
applications to support the v2 endpoints, you can disable all requests to the legacy v1 endpoints.
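For example, the following sketch uses the OCI SDK for Python (assuming a default configuration file and a placeholder compartment OCID) to sum the requests that are still reaching the legacy v1 endpoints:

import oci

monitoring = oci.monitoring.MonitoringClient(oci.config.from_file())
compartment_id = "ocid1.compartment.oc1..exampleuniqueID"  # placeholder

results = monitoring.summarize_metrics_data(
    compartment_id=compartment_id,
    summarize_metrics_data_details=oci.monitoring.models.SummarizeMetricsDataDetails(
        namespace="oci_compute",
        query='InstanceMetadataRequests[1h]{metadataVersion = "v1"}.sum()')).data

# Any datapoints here are requests that still hit the legacy v1 endpoints.
for item in results:
    print(item.dimensions.get("resourceId"),
          sum(point.value for point in item.datapoints))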

Required IAM Policy


To monitor resources, you must be given the required type of access in a policy written by an administrator, whether
you're using the Console or the REST API with an SDK, CLI, or other tool. The policy must give you access to
the monitoring services as well as the resources being monitored. If you try to perform an action and get a message
that you don’t have permission or are unauthorized, confirm with your administrator the type of access you've been
granted and which compartment you should work in. For more information on user authorizations for monitoring, see
the Authentication and Authorization section for the related service: Monitoring or Notifications.
Available Metrics: oci_compute
The metric listed in the following table is automatically available for your instances. You do not need to enable
monitoring on the instance to get this metric.
You also can use the Monitoring service to create custom queries.
The metric includes the following dimensions:
metadataVersion
The version of the instance metadata service that requests were made to.
path
The URL path that instance metadata requests were made to.
resourceId
The OCID of the instance.
userAgent
The source of the instance metadata request.
status
The HTTP response code for requests to the instance metadata service.

InstanceMetadataRequests (Instance Metadata Requests V1 Versus V2)
Unit: Sum
Description: The number of requests to the instance metadata service, comparing the version 1 and version 2 endpoints.
Dimensions: metadataVersion, path, resourceId, userAgent, status

Using the Console


To view management metrics for a single compute instance
1. Open the navigation menu. Under Core Infrastructure, go to Compute and click Instances.
2. Click the instance that you're interested in.
3. Under Resources, click Metrics.
4. In the Metric Namespace list, select oci_compute.
The Metrics page displays a default set of charts for the current instance.
For more information about monitoring metrics and using alarms, see Monitoring Overview on page 2686. For
information about notifications for alarms, see Notifications Overview on page 3378.
To view management metrics for all compute instances in a compartment
1. Open the navigation menu. Under Solutions and Platform, go to Monitoring and click Service Metrics.
2. Select a compartment.
3. For Metric Namespace, select oci_compute.
The Service Metrics page dynamically updates to show charts for each metric that is emitted by the selected
metric namespace.
For more information about monitoring metrics and using alarms, see Monitoring Overview on page 2686. For
information about notifications for alarms, see Notifications Overview on page 3378.
Using the API
For information about using the API and signing requests, see REST APIs on page 4409 and Security Credentials on
page 181. For information about SDKs, see Software Development Kits and Command Line Interface on page 4262.
Use the following APIs for monitoring:
• Monitoring API for metrics and alarms
• Notifications API for notifications (used with alarms)

Compute NVMe Performance


The content in the sections below applies to Category 7 and Section 3.a of the Oracle PaaS and IaaS Public Cloud
Services Pillar documentation.
Oracle Cloud Infrastructure provides a variety of instance configurations in both bare metal and virtual machine (VM)
shapes. Each shape varies on multiple dimensions including memory, CPU cores, network bandwidth, and the option
of local NVMe SSD storage found in DenseIO and HPC shapes.
Oracle Cloud Infrastructure provides a service-level agreement (SLA) for NVMe performance. Measuring
performance is complex and open to variability.
An NVMe drive also has non-uniform drive performance over the period of drive usage. An NVMe drive performs
differently when tested brand new compared to when tested in a steady state after some duration of usage. New drives
have not incurred many write/erase cycles and the inline garbage collection has not had a significant impact on IOPS
performance. To achieve the goal of reproducibility and reduced variability, our testing focuses on the steady state
duration of the NVMe drive’s operation.

Testing Methodology
Caution:

Before running any tests, protect your data by making a backup of your
data and operating system environment to prevent any data loss. The tests
described in this document will overwrite the data on the disk, and cause data
corruption.
Summary: To capture the IOPS measure, first provision a shape such as the BM.DenseIO2.52 shape, and then use
the Gartner Cloud Harmony test suite to run tests on an instance running the latest supported Oracle Linux 7 image
for each NVMe drive target.
Instructions:
1. Launch an instance based on the latest supported Oracle Linux 7 image and select a shape such as the
BM.DenseIO2.52 shape. For instructions, see Creating an Instance on page 700.
2. Run the Gartner Cloud Harmony test suite tests on the instance for each NVMe drive target. The following is an
example of a command that will work for all shapes and drives on the shape:

sudo ./run.sh `ls /dev/nvme[0-9]n1 | sed -e 's/\//\--target=\//'` \
  --nopurge --noprecondition --fio_direct=1 --fio_size=10g --test=iops \
  --skip_blocksize=512b --skip_blocksize=8k --skip_blocksize=16k \
  --skip_blocksize=32k --skip_blocksize=64k --skip_blocksize=128k \
  --skip_blocksize=1m

The SLA for NVMe drive performance is measured against 4k block sizes with 100% random write workload on
DenseIO shapes where the drive is in a steady state of operation.

Performance Benchmarks
The following table lists the minimum IOPS for the specified shape to meet the SLA, given the testing methodology
with 4k block sizes for 100% random write tests using the tests described in the previous section.

Shape Minimum Supported IOPS

VM.DenseIO1.4 200k
VM.DenseIO1.8 250k
VM.DenseIO1.16 400k
BM.DenseIO1.36 2.5MM
VM.DenseIO2.8 250k
VM.DenseIO2.16 400k
VM.DenseIO2.24 800k
BM.DenseIO2.52 3.0MM
BM.HPC2.36 250k

Although the NVMe drives are capable of higher IOPS, Oracle Cloud Infrastructure currently guarantees this
minimum level of IOPS performance.

Frequently Asked Questions


Q: I suspect a slowdown in my NVMe drive performance. Is there a SLA violation?
A: We test hosts on a regular basis to ensure that our low-level software updates do not regress performance. If
you have reproduced the testing methodology and your drive’s performance does not meet the terms in the SLA,
please contact your Oracle sales team.
Q: Why does the testing methodology not represent a diversity of IO workloads such as random reads and
writes to reflect real world IO?
A: We focused on reproducibility and we believe the tests provide a significant indicator of overall drive
performance.
Q: Will Oracle Cloud Infrastructure change the tests in this document?
A: We will make changes to provide greater customer value through better guarantees and improved reproducibility.

Compute Health Monitoring for Bare Metal Instances


Compute health monitoring for bare metal instances is a feature that provides notifications about hardware issues with
your bare metal instances. With the health monitoring feature, you can monitor the health of the hardware for your
bare metal instances, including components such as the CPU, motherboard, DIMM, and NVMe drives. You can use
the notifications to identify problems, letting you proactively redeploy your instances to improve availability.
Health monitoring notifications are emailed to the tenant administrator within one business day of the error occurring.
This warning helps you to take action before any potential hardware failure and redeploy your instances to healthy
hardware to minimize the impact on your applications.
You can also use the infrastructure health metrics available in the Monitoring service to create alarms and
notifications based on hardware issues.

Error Messages and Troubleshooting


This section contains information about the most common health monitoring error messages and provides
troubleshooting suggestions for you to try for your bare metal instance.
A fault has been detected in one or more CPUs
Fault class: CPU
Details: This error indicates that a processor or one or more cores have failed in your instance. Your instance might
be inaccessible or there might be fewer available cores than expected.
Troubleshooting steps:
• If the instance is inaccessible, you must replace it using the steps in Moving a Compute Instance to a New Host on
page 781.
• If your instance is available, check for the expected number of cores:
• On Linux-based systems, run the following command:

nproc --all
• On Windows-based systems, open Resource Monitor.
Compare the core count to the expected values documented in Compute Shapes on page 659. If the number
of cores is less than expected and this reduction impacts your application, we recommend that you replace the
instance using the steps in Moving a Compute Instance to a New Host on page 781.
A fault in the memory subsystem was detected during instance launch or a recent reboot
Fault class: MEM-BOOT
Details: This error indicates that one or more failed DIMMs were detected in your instance while the instance was
being launched or rebooted. Any failed DIMMs have been disabled.
Troubleshooting steps: The total amount of memory in the instance will be lower than expected. If this impacts your
application, we recommend that you replace the instance using the steps in Moving a Compute Instance to a New
Host on page 781.
To check for the amount of memory in the instance:
• On Linux-based systems, run the following command:

awk '$3=="kB"{$2=$2/1024**2;$3="GB";} 1' /proc/meminfo | column -t | grep MemTotal
• On Windows-based systems, open Resource Monitor.
The expected values are documented in Compute Shapes on page 659.
A fault in the memory subsystem was detected
Fault class: MEM-RUNTIME
Details: This error indicates that one or more non-critical errors were detected on a DIMM in your instance. The
instance might have unexpectedly rebooted in the last 72 hours.
Troubleshooting steps:
• If the instance has unexpectedly rebooted in the last 72 hours, one or more DIMMs might have been disabled. To
check for the total amount of memory in the instance:
• On Linux-based systems, run the following command:

awk '$3=="kB"{$2=$2/1024**2;$3="GB";} 1' /proc/meminfo | column -t | grep MemTotal
• On Windows-based systems, open Resource Monitor.
If the total memory in the instance is lower than expected, then one or more DIMMs have failed. If this impacts
your application, we recommend that you replace the instance using the steps in Moving a Compute Instance to a
New Host on page 781.
• If the instance has not unexpectedly rebooted, it is at increased risk of doing so. During the next reboot, one
or more DIMMs might be disabled. We recommend that you replace the instance using the steps in Moving a
Compute Instance to a New Host on page 781.
A fault in the instance management controller has been detected
Fault class: MGMT-CONTROLLER
Details: This error indicates that a device used to manage your instance might have failed. You might not be able to
use the Console, CLI, SDKs, or APIs to stop, start, or reboot your instance. This functionality will still be available
from within the instance using the standard operating system commands. You also might not be able to create a
console connection to your instance. You will still be able to terminate your instance.
Troubleshooting steps: If this loss of control impacts your application, we recommend that you replace the instance
using the steps in Moving a Compute Instance to a New Host on page 781.
A fault in the PCI subsystem has been detected
Fault class: PCI
Details: This error indicates that one or more of the PCI devices in your instance have failed or are not operating at
peak performance.
Important:

The PCI fault class will be deprecated in the future. You should migrate to
the PCI-NIC fault class for similar functionality.
Troubleshooting steps:
• If you cannot connect to the instance over the network, the NIC might have failed. Use the Console or CLI to stop
the instance and then start the instance. For steps, see Stopping and Starting an Instance on page 785.
If you're still unable to connect to the instance over the network, you might be able to connect to it using a console
connection. Follow the steps in Connecting to the Serial Console on page 823 or Connecting to the VNC
Console on page 828 to establish a console connection and then reboot the instance. If the instance remains
inaccessible, you must replace it using the steps in Moving a Compute Instance to a New Host on page 781.
• An NVMe device may have failed.
On Linux-based systems, run the command sudo lsblk to get a list of the attached NVMe devices.
On Windows-based systems, open Disk Manager. Check the count of NVMe devices against the expected number
of devices in Compute Shapes on page 659.
If you determine that an NVMe device is missing from the list of devices for your instance, we recommend that
you replace the instance using the steps in Moving a Compute Instance to a New Host on page 781.
A fault in the instance network interface card (NIC) has been detected
Fault class: PCI-NIC
Details: This error indicates that one or more of the instance network interface card (NIC) devices in your instance
have failed or are not operating at peak performance.
Troubleshooting steps: If you cannot connect to the instance over the network, the NIC might have failed. Use the
Console or CLI to stop the instance and then start the instance. For steps, see Stopping and Starting an Instance on
page 785.
If you're still unable to connect to the instance over the network, you might be able to connect to it using a console
connection. Follow the steps in Connecting to the Serial Console on page 823 or Connecting to the VNC Console
on page 828 to establish a console connection and then reboot the instance. If the instance remains inaccessible,
you must replace it using the steps in Moving a Compute Instance to a New Host on page 781.
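If you prefer the CLI for the stop and start cycle described above, the following is a minimal sketch using the OCI CLI; the instance OCID is a placeholder, and a SOFTSTOP action is also available if you want a graceful shutdown first.

# Stop the instance, then start it again (replace the OCID with your instance's OCID).
oci compute instance action --instance-id ocid1.instance.oc1..exampleuniqueID --action STOP
oci compute instance action --instance-id ocid1.instance.oc1..exampleuniqueID --action START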
A fault in the instance software defined network interface has been detected
Fault class: SDN-INTERFACE
Details: If you cannot connect to the instance or if you're experiencing networking issues, the software-defined
network interface device might have a fault.
Troubleshooting steps: Although restarting the instance might temporarily resolve the issue, we recommend that you
replace the instance using the steps in Moving a Compute Instance to a New Host on page 781.

Microsoft Licensing on Oracle Cloud Infrastructure


This topic provides information about the licensing requirements to use Microsoft products on Oracle Cloud
Infrastructure.
For more information about how to bring your own Microsoft licenses to Oracle Cloud Infrastructure, see Licensing
Options for Microsoft Windows on page 815.
For information about how to move eligible Microsoft server application licenses to Oracle Cloud Infrastructure by
enrolling in the License Mobility through Microsoft Software Assurance benefit, see Moving Microsoft Licenses to
Oracle Cloud Infrastructure: Microsoft License Mobility on page 817.

Using Microsoft Windows on Oracle Cloud Infrastructure: FAQ


Oracle Cloud Infrastructure is licensed to provide Microsoft software offerings. Oracle is a member of the Microsoft
Partner Network, licensed to sell Microsoft software under the Service Provider License Agreement (SPLA). Oracle
is also an authorized Microsoft Authorized Mobility Partner with an active Premier Support for Partners agreement
with Microsoft.
For the latest Microsoft licensing requirements, refer to the Microsoft Product Terms.


If you can't find the answer to your question here, or you need more assistance running Microsoft products on Oracle
Cloud Infrastructure, contact Oracle Support.
General Questions
What OS editions of Microsoft Windows Server are supported?
Oracle-provided images
These Windows versions are available for Oracle-provided images:
• Windows Server 2012 R2 Standard, Datacenter
• Windows Server 2016 Standard, Datacenter
• Windows Server 2019 Standard, Datacenter
Bring Your Own Image (BYOI)
These Windows versions support custom image import:
• Windows Server 2008 R2 Standard, Enterprise, Datacenter
• Windows Server 2012 Standard, Datacenter
• Windows Server 2012 R2 Standard, Datacenter
• Windows Server 2016 Standard, Datacenter
• Windows Server 2019 Standard, Datacenter
If you don't need to migrate your Windows OS licenses, you can use the Bring Your Own Image process to migrate
your Windows image to Oracle Cloud Infrastructure.
Is Windows Server 2019 available as an Oracle-provided image?
Yes, Windows Server 2019 is available as an Oracle-provided image.
Is Windows Server 2019 available as a Bring Your Own Image (BYOI) image?
Yes, you can import your own Windows Server 2019 image for virtual machines only. For source image requirements
and steps to import an image, see Importing Custom Windows Images on page 682.
What VM and bare metal options are available for Windows Server operating systems?
The following describes support for Microsoft Windows Server operating systems on Oracle Cloud Infrastructure for each use case.
• Use an Oracle-provided Windows Server operating system image for Windows Server 2012 R2 and later versions.
Bare metal machines: Supported. Virtual machines (VMs): Supported. License: SPLA volume license issued by Oracle Cloud Infrastructure.
• Bring your own virtual machine image. You can import your own custom virtual machine Windows Server OS image.
Bare metal machines: Not supported. Virtual machines (VMs): Supported. License: SPLA volume license issued by Oracle Cloud Infrastructure.
• Bring your own Windows Server ISO image.
Bare metal machines: Not supported. Virtual machines (VMs): Not supported. License: Customer-owned license.
• Bring your own hypervisor. You can use a Windows Server 2016 or Windows Server 2019 Datacenter hypervisor host provided by Oracle Cloud Infrastructure and import your own VM images.
Bare metal machines: Supported. Virtual machines (VMs): Not supported. License: SPLA volume license issued by Oracle Cloud Infrastructure.

Does Oracle Cloud Infrastructure support Bring Your Own Image (BYOI) for Windows Server?
Yes, you are permitted to import your own generalized custom image of Windows Server.
When you create an instance with an imported image on a VM or a shared bare metal machine, Oracle Cloud
Infrastructure licenses the instance. For more information about imported images, see Creating Windows Custom
Images on page 673.
If you want to use your own license, BYOI is supported only for bare metal machines on a dedicated host.
How am I charged for Windows Server on Oracle Cloud Infrastructure?
The cost of a Microsoft Windows Server license is an additional cost, on top of the underlying Compute instance
price. You pay separately for the Compute instance and the Windows Server license. For more information about
Microsoft Windows Server pricing, see Compute Pricing.
Billing for the Windows Server license is based on per-OCPU, per-second usage. Billing starts when an instance is in
the "running" state and ends when you terminate (delete) the instance.
When an instance is stopped, billing for the Windows Server license depends on the shape that was used to create the
instance. Billing pauses for instances that use a Standard shape. Billing continues for instances that use a Dense I/
O shape, GPU shape, or HPC shape. Depending on the shape, you might also be billed for the underlying Compute
instance when the instance is stopped.
How does Windows Server get updated with the latest patches?
You must update your VCN's security list to enable egress traffic for port 80 (HTTP) and port 443 (HTTPS) to install
patches from Microsoft. Oracle Cloud Infrastructure enables automatic updates for Microsoft Windows Server and
uses the default settings for applying Windows Server patches.
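As a hedged illustration of that security list change, the sketch below uses the OCI CLI to set egress rules that allow TCP traffic out on ports 80 and 443. The OCID and file name are placeholders, and note that this update replaces the security list's entire egress rule list, so include any existing rules that you want to keep.

# Write the desired egress rules to a JSON file (protocol "6" is TCP).
cat > egress-rules.json <<'EOF'
[
  {"destination": "0.0.0.0/0", "protocol": "6", "isStateless": false,
   "tcpOptions": {"destinationPortRange": {"min": 80, "max": 80}}},
  {"destination": "0.0.0.0/0", "protocol": "6", "isStateless": false,
   "tcpOptions": {"destinationPortRange": {"min": 443, "max": 443}}}
]
EOF
# Replace the security list's egress rules (the CLI may prompt for confirmation because existing rules are overwritten).
oci network security-list update --security-list-id ocid1.securitylist.oc1..exampleuniqueID --egress-security-rules file://egress-rules.json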
Can I take a snapshot image after customizing a running Windows Server instance?
Yes, there are several options available on both bare metal and virtual machines:
• Create a custom image: Creates a custom image that you can use to launch other instances. Instances that you
launch from your image include the customizations, configuration, and software installed when you created the
image.
• Clone a boot volume: Makes a copy of an existing boot volume without needing to go through the backup and
restore process. A boot volume clone is a point-in-time direct disk-to-disk deep copy of the source boot volume,
so all the data that is in the source boot volume when the clone is created is copied to the boot volume clone.
• Back up a block volume: Makes a point-in-time backup of data on a block volume. You can restore a backup to a
new volume either immediately after a backup or at a later time that you choose.
• Back up a boot volume: Makes a backup of a boot volume. Boot volume backup capabilities are the same as
block volume backup capabilities and are in-region only. Windows boot volume backups cannot be copied across
regions.
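For the first option in the list above (creating a custom image), a minimal OCI CLI sketch looks like the following; the OCIDs and display name are placeholders.

# Create a custom image from an existing instance.
oci compute image create --compartment-id ocid1.compartment.oc1..exampleuniqueID --instance-id ocid1.instance.oc1..exampleuniqueID --display-name my-windows-custom-image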
Can I export a custom Windows Server image?
Yes, exporting custom Windows Server operating system images is supported.


When exporting Windows-based images, you are responsible for complying with the Microsoft Product Terms and all
product use conditions, as well as verifying your compliance with Microsoft.
For steps to export an image, see Image Import/Export on page 675.
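For reference, one way to export an image from the OCI CLI is to export it to an Object Storage pre-authenticated request (PAR) URL. This is a hedged sketch only; the image OCID and URL are placeholders, and you should confirm the parameters against the CLI help for your version.

# Export a custom image to an Object Storage pre-authenticated request (PAR) URL (placeholders shown).
oci compute image export to-object-uri --image-id ocid1.image.oc1..exampleuniqueID --uri https://objectstorage.us-ashburn-1.oraclecloud.com/p/examplePAR/n/examplenamespace/b/examplebucket/o/exported-image.oci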
What support is available for Microsoft Windows Server on Oracle Cloud Infrastructure?
Oracle Support provides limited assistance for Oracle-provided Microsoft Windows Server images if the Windows
Server version has not reached end of support with Microsoft. All other Microsoft software is supported directly by
Microsoft Support.
Oracle Support can help verify that the operating system boots, that the operating system connects to the network,
and that attached storage connects and performs as expected. If you encounter other issues with Microsoft Windows
Server, work directly with Microsoft Support to resolve the issue. For more information, see Support Options for
Microsoft Windows on page 818.
How do I upgrade to a newer version of Windows Server?
To upgrade to a newer version of Windows Server, you can do either of the following things:
• Obtain the installation media from Microsoft or your Microsoft reseller, and then upgrade the existing Compute
instance. The license issued by Oracle Cloud Infrastructure remains in effect.
• Create a new Compute instance using the desired version of the Windows Server platform image, and then
migrate your applications and data to the new instance.
Licensing - Windows Server
What is BYOL?
BYOL stands for "bring your own license." BYOL lets you use software licenses that you already own to deploy
software on Oracle Cloud Infrastructure, without any additional licensing fees. This process uses the License Mobility
through Microsoft Software Assurance benefit provided by Microsoft. You must have active Software Assurance with
Microsoft to bring your licenses to Oracle Cloud Infrastructure.
What is Microsoft License Mobility?
License Mobility through Software Assurance is a Microsoft benefit that permits you to move your eligible Microsoft
licenses to cloud services providers such as Oracle Cloud Infrastructure. Oracle is an Authorized Mobility Partner for
License Mobility.
With License Mobility through Software Assurance, you can deploy eligible application servers on bare metal hosts
or virtual shared hardware in Oracle Cloud Infrastructure. An example of an application eligible for License Mobility
through Software Assurance is Microsoft SQL. Windows Server operating systems are not eligible.
You may move Microsoft licenses from on-premises or another Authorized Mobility Partner only after more than 90
days have passed since the last license move.
For more information about this Microsoft benefit, see License Mobility through Microsoft Software Assurance. For
steps to move your Microsoft licenses to Oracle Cloud Infrastructure, see Moving Microsoft Licenses to Oracle Cloud
Infrastructure: Microsoft License Mobility on page 817.
Is Oracle a Microsoft Authorized Mobility Partner?
Yes, Oracle is an Authorized Mobility Partner for the Microsoft License Mobility through Software Assurance
benefit.
Can I bring my own license for Microsoft Windows Server to Oracle Cloud Infrastructure?
Yes. You can bring your own license (BYOL) for Microsoft Windows Server on a dedicated bare metal or dedicated
virtual machine host, subject to the Microsoft Product Terms. You are responsible for managing your own licenses
to maintain compliance with Microsoft licensing terms. For more information, see Licensing Options for Microsoft
Windows on page 815.
The following describes the BYOL requirements for Microsoft licenses on Oracle Cloud Infrastructure.

• Windows Server
Bare metal machines and dedicated virtual machine hosts: BYOL on a bare metal dedicated host is only eligible when using a KVM hypervisor. BYOL is not eligible for Microsoft Windows Server using Oracle-provided images or when importing your own Microsoft Windows Server image.
Virtual machines (multi-tenant shared host): Not eligible. Shared hosts must use Oracle-provided images that include the Microsoft license.
• SQL Server (subject to the Microsoft Product Terms)
Bare metal machines and dedicated virtual machine hosts: Eligible. You must have License Mobility through Software Assurance.
Virtual machines (multi-tenant shared host): Eligible. You must have License Mobility through Software Assurance.
• Visual Studio (MSDN)
Bare metal machines and dedicated virtual machine hosts: Eligible. Non-production use only.
Virtual machines (multi-tenant shared host): Eligible. Non-production use only.
• Microsoft 365 Apps for enterprise (Office 365 ProPlus) and Office Professional Plus
Bare metal machines and dedicated virtual machine hosts: Eligible.
Virtual machines (multi-tenant shared host): Not eligible.
• Windows 7, Windows 8, and Windows 10
Bare metal machines and dedicated virtual machine hosts: Eligible. You must have an Enterprise Agreement license with Software Assurance or a Windows Virtual Desktop Access (VDA) license.
Virtual machines (multi-tenant shared host): Not eligible.
• Other Microsoft applications
Bare metal machines and dedicated virtual machine hosts: Eligible. Subject to the Microsoft Product Terms.
Virtual machines (multi-tenant shared host): Eligible. You must have License Mobility through Software Assurance.

Application licenses such as SQL Server or System Center require License Mobility through Software Assurance
when running on Oracle Cloud Infrastructure VM instances. License Mobility is not used for Microsoft Office,
Windows clients, or Windows Server BYOL. Review the Microsoft Product Terms to validate which applications
support License Mobility.
Direct questions about your licensing rights to Microsoft or your Microsoft reseller.
Can I use virtual machines and bring my own license for Microsoft Windows Server to Oracle Cloud
Infrastructure?
You cannot migrate your Windows Server OS licenses when using Oracle Cloud Infrastructure virtual machines.
However, you can bring your own hypervisor (KVM) to run a Windows Server VM with your own Windows Server
OS license.
The following restrictions apply:
• You can use VMs with their own license only if you use bring your own hypervisor on a dedicated bare metal
host.
• BYOL of Microsoft Windows Server is not supported for VMs running on a shared host. Oracle Cloud
Infrastructure-provided VMs offer Windows Server.
• You can use a bare metal instance under bring your own hypervisor.
• You must install and manage a hypervisor (KVM or Hyper-V) and launch your own VMs. This ensures isolation, because all of your VMs run on a dedicated bare metal server.


• BYOL on a dedicated host (KVM hypervisor only) is permitted for Microsoft Windows Server.
• VMs can run Windows Server with a Visual Studio (MSDN) subscription license when used for development use
only.
Licensing - Other Microsoft Software
What other Microsoft applications can I bring to Oracle Cloud Infrastructure?
Any Microsoft Server licenses permitted on Oracle Cloud Infrastructure must be eligible according to the latest
Microsoft Product Terms. It is your responsibility to verify that your licensing agreements with Microsoft permit you to bring on-premises perpetual Microsoft licenses to Oracle Cloud Infrastructure and that those licenses are eligible licensed products according to the latest Microsoft Product Terms. Microsoft application products that are currently eligible for License
Mobility require an active Software Assurance benefit to move your license. For more information, see Moving
Microsoft Licenses to Oracle Cloud Infrastructure: Microsoft License Mobility on page 817.
Can I bring my own SQL Server license to Oracle Cloud Infrastructure?
Yes, you can bring your own SQL Server license using License Mobility through Active Software Assurance. The
following restrictions apply:
• When you move your Microsoft SQL license using the license mobility process, the Microsoft Windows Server
license is not included. Microsoft Windows Server licenses are not permitted to be moved under License Mobility.
Windows Server operating systems must use the license issued by Oracle Cloud Infrastructure.
• Perpetual licenses can be moved from on-premises or other cloud providers only after more than 90 days have
passed since the last license move.
• End-of-support versions are not supported on shared host virtual machines on Oracle Cloud Infrastructure.
Follow the license mobility process to move your SQL Server license to Oracle Cloud Infrastructure.
Can I use my Visual Studio (MSDN) license on Microsoft Windows Server on Oracle Cloud
Infrastructure?
Yes, you can use your Visual Studio (MSDN) subscription license for non-production purposes on Oracle Cloud
Infrastructure on either bare metal or virtual machine instances. You are responsible for complying with the Visual
Studio subscription terms.
Can I buy a Visual Studio (MSDN) subscription from Oracle Cloud Infrastructure?
No, Oracle does not sell Visual Studio (MSDN) subscriptions. Contact Microsoft or your Microsoft reseller.
Can I use a Visual Studio (MSDN) license for a production environment?
No, Visual Studio (MSDN) subscription licenses are for development, testing, or demonstration purposes only.
How can I remotely access a Windows Server instance on Oracle Cloud Infrastructure?
Follow the steps to connect to a Windows instance. Windows operating systems permit remote access for a maximum
of two users using Remote Desktop Services (RDS) for Administration purposes.
RDS Client Access Licenses (CALs) are required for each user or device using Remote Desktop.
Does Oracle Cloud Infrastructure offer additional Remote Desktop Services licenses for
applications running on Windows VMs?
No, Oracle Cloud Infrastructure does not offer Microsoft RDS (Remote Desktop Services) Subscriber Access Licenses
(SALs). You can bring your own license (BYOL) and use your RDS Client Access Licenses (CALs) on Oracle Cloud
Infrastructure bare metal or virtual machines only if you have active Software Assurance coverage and move those
licenses using the license mobility process.
Can I bring my own RDS CALs if I want more than two users to access my Windows Server
instance?
Yes, you can use your Remote Desktop Services (RDS) Client Access Licenses (CALs) on Oracle Cloud
Infrastructure if you use the Oracle Cloud Infrastructure bare metal offering. In addition, you can use virtual machines
with their own Visual Studio (MSDN) subscription license if you bring your own hypervisor (KVM).


You can use your RDS CAL licenses on Oracle Cloud Infrastructure virtual machines only if you have active
Software Assurance coverage and move your CALs using the license mobility process.
Can I bring my own System Center Management Licenses to Oracle Cloud Infrastructure?
You can bring Microsoft System Center server Management Licenses (server MLs) using the license mobility
process. There are minimums to take into consideration with System Center Management License 2-core licenses and
16-core licenses. A virtual machine requires a minimum of 16 core licenses to be assigned to it, and more if the VM
has more than 16 virtual cores.
System Center client Management Licenses (client MLs) are not eligible for license mobility and cannot be moved to
Oracle Cloud Infrastructure.
Other Windows Server Questions
Are there user data capabilities when launching Windows Server images?
Yes, Oracle-provided Windows images include cloudbase-init installed by default. You can use cloudbase-init to run
PowerShell scripts, batch scripts, or other user data content on instance launch. Cloudbase-init is the equivalent of
cloud-init on Linux-based images.
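As a hedged sketch of passing user data at launch with the OCI CLI, the example below base64-encodes a hypothetical PowerShell script named startup.ps1 (beginning with a #ps1_sysnative header so that cloudbase-init treats it as PowerShell) and supplies it under the user_data metadata key. All OCIDs, names, and the availability domain are placeholders, and GNU base64 is assumed.

# Base64-encode the startup script and pass it as the user_data metadata key at launch.
USER_DATA=$(base64 -w0 startup.ps1)
oci compute instance launch \
  --availability-domain "Uocm:US-ASHBURN-AD-1" \
  --compartment-id ocid1.compartment.oc1..exampleuniqueID \
  --shape VM.Standard2.1 \
  --image-id ocid1.image.oc1..exampleuniqueID \
  --subnet-id ocid1.subnet.oc1..exampleuniqueID \
  --metadata "{\"user_data\": \"$USER_DATA\"}"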
Can I use Windows Remote Management on Oracle Cloud Infrastructure?
Yes, Microsoft Windows Remote Management (WinRM) is enabled by default on Oracle-provided Windows images.
WinRM enables you to remotely manage the operating system.
What is Microsoft end of support?
Microsoft establishes the support lifecycle policy for its products. When a product reaches the end of its support
lifecycle, Microsoft no longer provides security updates for the product. You should upgrade to the latest version to
remain secure.
Can I use Windows Server 2008 R2 even though it's past the end-of-support date?
Windows Server 2008 R2 reached the end of its support lifecycle on January 14, 2020. Although you can continue
to import your own Windows 2008 R2 images and run your existing instances, you are at a higher risk of security
issues, incompatibility, or failures. Oracle does not provide any operating system support for end-of-support operating
systems.
Oracle Cloud Infrastructure does not provide platform images after the end-of-support date. However, you can import
your own image and launch it on a shared host VM.
There are no restrictions to running end-of-support operating systems on bare metal machines on a dedicated host.
You may bring your own image (BYOI) of a Windows Server 2008 R2 image, but you must import a custom OS
image and run the image on a dedicated host.
Can I purchase Microsoft Extended Security Updates for end-of-support Windows OSs?
Yes, you can purchase Extended Security Updates (ESUs) from Microsoft for use on Oracle Cloud Infrastructure.
For VMs on shared infrastructure, you must have an enterprise agreement in place with Microsoft. With that
agreement in place, you can purchase ESUs per virtual core matching the number of OCPUs per VM instance, with a
minimum requirement of 16 virtual core licenses per VM instance.
For bare metal machines, you must have an enterprise agreement in place with Microsoft. With that agreement in
place, you can purchase ESUs per physical core of the dedicated bare metal host.
Oracle Cloud Infrastructure cannot purchase ESUs on your behalf.
You are fully responsible for purchasing the correct number of ESUs for your instances. Oracle Cloud Infrastructure
does not keep track of whether you have enough ESUs.

Licensing Options for Microsoft Windows


You can choose to bring your own license (BYOL) for Microsoft applications that you want to run on Oracle Cloud
Infrastructure, or use a license that is issued by Oracle.


The following describes the licensing models that are available for using Microsoft Windows images on Oracle Cloud Infrastructure.

• Oracle-provided image. License: Issued by Oracle. Additional requirements: VM instances on a shared host, bare metal instances, and VM instances launched on a dedicated host are all permitted. BYOL for Oracle-provided images is not available.
• Bring your own image (BYOI). License: Issued by Oracle. Additional requirements: VM instances on a shared host, bare metal instances, and VM instances launched on a dedicated host are all permitted. Import the image; Oracle will issue a license when an instance is created using the imported image.
• Bring your own image (BYOI). License: BYOL. Additional requirements: Instances must be launched on a dedicated host. Create a bare metal instance using the KVM image from Marketplace, and then copy your Windows guest OS to the hypervisor.
• Bring your own Hyper-V. License: Issued by Oracle. Additional requirements: Instances must be launched on a dedicated host. Create a bare metal instance using the Oracle-provided Windows Server Datacenter platform image, and then copy your on-premises guest OS to Hyper-V on the bare metal instance.

Bringing Your Own Microsoft License


You can BYOL for Microsoft application licenses that are eligible for License Mobility with active Software
Assurance onto virtual machines and dedicated hosts. According to the Microsoft Product Terms, some of the
applications that are permitted for BYOL include the following:
• Microsoft SQL Server
• Microsoft Exchange Server
• Microsoft SharePoint Server
• Microsoft System Center
• Microsoft Dynamics products
Any Microsoft Server licenses permitted on Oracle Cloud Infrastructure must be eligible according to the latest
Microsoft Product Terms. It is your responsibility to verify that your licensing agreements with Microsoft permit you to bring on-premises perpetual Microsoft licenses to Oracle Cloud Infrastructure and that those licenses are eligible licensed products according to the latest Microsoft Product Terms. Microsoft application products that are currently eligible for License
Mobility require an active Software Assurance benefit to move your license. For more information, see Moving
Microsoft Licenses to Oracle Cloud Infrastructure: Microsoft License Mobility on page 817.


For Microsoft Windows Server OS, there are restrictions. BYOL is only permitted for eligible Windows OS licenses
with active Software Assurance running on a dedicated bare metal host or a dedicated VM host using a KVM
hypervisor.
For Windows clients (Windows 8 and Windows 10), BYOL is permitted for Enterprise Agreement licenses with an
active Software Assurance benefit or with Windows Virtual Desktop Access (VDA) licenses.
You can activate licenses either by scripting activation on startup using the slmgr.vbs command-line tool or by using your own Microsoft Key Management Service (KMS) server.
Activating Licenses on a Dedicated Host
When you bring your own Microsoft licenses, you are responsible for validating that your licensing agreements with
Microsoft permit you to move your own licenses to Oracle Cloud Infrastructure. It is your responsibility to ensure that
you comply with all Microsoft product terms and product use conditions.

Moving Microsoft Licenses to Oracle Cloud Infrastructure: Microsoft License Mobility

Microsoft Volume Licensing customers can move eligible Microsoft server application licenses purchased under a
Volume Licensing agreement to Oracle Cloud Infrastructure. To do this, you must enroll in the License Mobility
through Microsoft Software Assurance benefit. This benefit is included with an active Software Assurance contract.
You don't need to purchase additional Microsoft software licenses, and there are no associated mobility fees.
For more information about this Microsoft benefit, see License Mobility through Microsoft Software Assurance.
Eligibility Requirements
To enroll in Microsoft License Mobility through Software Assurance, you must be a Microsoft Volume License
customer with eligible server application products. The following are key requirements:
• Windows Server operating systems, desktop client operating systems, and desktop applications such as Microsoft
Office are not eligible under License Mobility through Software Assurance. You can bring your own license
(BYOL) outside of License Mobility onto a dedicated host. For more information, see Can I bring my own license
for Microsoft Windows Server to Oracle Cloud Infrastructure? on page 812.
• Active Software Assurance coverage is required on eligible licenses migrated to Oracle Cloud Infrastructure.
• All licenses that are used to run and access your licensed software require active Software Assurance coverage.
This includes server licenses, processor licenses, Client Access Licenses (CALs), External Connector (EC)
licenses, and server management licenses. Your rights to run licensed software and manage instances on Oracle
Cloud Infrastructure expire with the expiration of the Software Assurance coverage on those licenses.
• Eligible Volume Licensing programs include the Microsoft Enterprise Agreement, Microsoft Enterprise
Subscription Agreement, and Microsoft Open Value agreement, where Software Assurance is included, and
other Volume Licensing programs where Software Assurance is an option, such as the Microsoft Open License
agreement and the Microsoft Select Plus agreement.
• You may move Microsoft licenses from on-premises or another cloud services provider only after more than 90
days have passed since the last license move.
• Eligible Microsoft licenses on Oracle Cloud Infrastructure must be maintained for a minimum period of 90 days
in a specific Oracle Cloud Infrastructure region. After the 90-day period, you may move the licensed software to a
shared host in another Oracle Cloud Infrastructure region.
• Any Microsoft Server licenses permitted on Oracle Cloud Infrastructure must be eligible according to the
latest Microsoft Product Terms. It is your responsibility to verify that the licenses you bring to Oracle Cloud
Infrastructure are eligible according to the latest Microsoft Product Terms.
Enrolling in License Mobility through Software Assurance
All customers using License Mobility through Software Assurance must complete a license verification process.
Microsoft verifies that you have eligible licenses with active Software Assurance and sends confirmation when the
verification process is complete.
You can deploy your application server software before completing the verification process, but you must submit the
license verification form within 10 days of deployment.


You are responsible for managing true ups and renewals as required under your Volume Licensing agreement.
You must submit a new form each time that you deploy additional licenses, when you renew your agreement, and
when you deploy any previously unverified products.
To enroll in License Mobility through Software Assurance:
1. Verify that you are a Microsoft Volume Licensing customer with eligible application server licenses that are
covered by active Software Assurance.
2. Download the license verification form:
a. Go to the Microsoft Product Licensing search page.
b. In the Document Type area, select License Verification.
c. Filter the results by language, region, and business sector. Note that the verification form is not available in the
WW (World Wide) region.
d. Download the LicenseMobilityVerif document.
3. Complete the license verification form. To specify Oracle as the Authorized Mobility Partner, provide the
following information:
• Authorized Mobility Partner Name: Oracle America, Inc.
• Authorized Mobility Partner Website URL: http://www.oracle.com/
• Authorized Mobility Partner Email Address: [email protected]
For instructions to complete the form, see the Microsoft License Mobility Verification Guide (PDF).
4. Submit the completed verification form to both Microsoft and Oracle:
• Microsoft: Submit the form through your Microsoft reseller or directly to the email address in the form.
• Oracle: Send the form to [email protected].
Microsoft and Oracle verify that the product licenses for the workloads you deploy to Oracle Cloud Infrastructure
are eligible according to the terms of your License Mobility through Software Assurance benefit. Microsoft will
communicate your verification status to you and to Oracle as an Authorized Mobility Partner.

Support Options for Microsoft Windows


Oracle Support provides limited assistance for Microsoft Windows Server operating systems running on Oracle Cloud
Infrastructure and for SQL Server images provided by Oracle Cloud Marketplace. For product issues, work directly
with Microsoft Support.
Microsoft Windows Server
Oracle Support provides limited assistance for Microsoft Windows Server operating systems running on Oracle Cloud
Infrastructure as long as the Windows Server version has not reached end of support with Microsoft. For details about
Microsoft Windows lifecycles, see Lifecycle FAQ - Windows.
In order for Oracle to provide support for Bring Your Own Image (BYOI) images, the images must follow the
guidelines outlined in Importing Custom Windows Images on page 682 and must meet all Microsoft licensing
requirements.
Under these conditions, Oracle Support can help verify that:
• the operating system boots
• the operating system connects to the network
• attached storage connects and performs as expected
If you encounter other issues with Microsoft Windows Server, work directly with Microsoft Support to resolve the
issue. Microsoft Support provides assistance when you have an existing Premier Support agreement or when you pay
for professional support.
SQL Server
Oracle Support provides limited assistance with SQL Server images provided by Oracle Cloud Marketplace. For
product issues, work directly with Microsoft Support.


For SQL Server images provided by Oracle Cloud Marketplace, Oracle Support can help verify that the services start
and that they allow local connections.
The following tasks fall outside the scope of Oracle Support and should be addressed directly with Microsoft Support.
Microsoft Support provides assistance when you have an existing Premier Support agreement or when you pay for
professional support.
• Query optimization
• Failover clustering
• Issues with third-party applications and with Microsoft applications that are not included in the Oracle Cloud Marketplace image. For example, errors such as software failing to install.
• Image configurations that diverge from Oracle's standard configurations. For example, requests such as "Roaming Profiles go into read-only mode when users log in via terminal server/RDS" are out of scope.
• Activities that breach the Microsoft license terms of use

Troubleshooting Compute Instances


For information about how to troubleshoot issues with compute instances, see the following topics:
• Troubleshooting Instances Using Instance Console Connections on page 819. To debug issues that happen
during instance launch or during the OS boot sequence, use a serial console connection or a VNC console
connection.
• Sending a Diagnostic Interrupt on page 832. To debug a running virtual machine (VM) instance that becomes
unresponsive, you can generate a copy of the system memory (also called a crash dump) by sending a diagnostic
interrupt to the instance.

Troubleshooting Instances Using Instance Console Connections


The Oracle Cloud Infrastructure Compute service provides console connections that enable you to remotely
troubleshoot malfunctioning instances, such as:
• An imported or customized image that does not complete a successful boot
• A previously working instance that stops responding
Two types of instance console connections exist: serial console connections and VNC console connections. Instance
console connections are for troubleshooting purposes only. To connect to a running instance for administration and
general use with Secure Shell (SSH) or Remote Desktop connection, see Connecting to an Instance on page 739.
To configure your console connection, follow these steps:
1. Make sure you have the correct permissions.
2. Complete the prerequisites, including creating your SSH key pair.
3. Create the instance console connection.
4. Connect to the serial console or connect to the VNC console.
5. If you're trying to connect to the serial console and you think the connection isn't working, test your connection to
the serial console using Cloud Shell.
Required IAM Policies
To use Oracle Cloud Infrastructure, you must be granted security access in a policy by an administrator. This access
is required whether you're using the Console or the REST API with an SDK, CLI, or other tool. If you get a message
that you don’t have permission or are unauthorized, verify with your administrator what type of access you have and
which compartment you should work in.
To create instance console connections, an administrator needs to grant user access to manage instance console
connections and to read instances through an IAM policy. The resource name for instance console connections is
instance-console-connection. The resource name for instances is instance. The following policies
grant users the ability to create instance console connections:

Allow group <group_name> to manage instance-console-connection in tenancy


Allow group <group_name> to read instance in tenancy

Instance console connections also support network sources. The following policies grant users the ability to create
instance console connections with a network source:

Allow group <group_name> to manage instance-console-connection in tenancy where request.networkSource.name='example-network-source'
Allow group <group_name> to read instance in tenancy where request.networkSource.name='example-network-source'

If you're new to policies, see Getting Started with Policies and Common Policies.
Prerequisites
Complete these prerequisites before creating the instance console connection.
Installing an SSH Client and a Command-line Shell (Windows)
Windows does not include an SSH client by default. If you are connecting from a Windows client, you need to
install an SSH client. You can use PuTTY plink.exe with Windows PowerShell or software that includes a version of
OpenSSH such as:
• Git for Windows
• Windows Subsystem for Linux
The instructions in this topic frequently use PuTTY and Windows PowerShell.
If you want to make the console connection from Windows with Windows PowerShell, PowerShell might already
be installed on your Windows operating system. If not, follow the steps at the link. If you are connecting to the
instance from a Windows client using PowerShell, plink.exe is required. plink.exe is the command-line connection
tool included with PuTTY. You can install PuTTY or install plink.exe separately. For installation information, see
http://www.putty.org.
Creating SSH Key Pairs
To create the secure console connection, you need an SSH key pair. The method to use for creating key pairs depends
on your operating system. When connecting to the serial console, you must use an RSA key. The instructions in this
section show how to create an RSA SSH key pair.

Creating the SSH key pair for Linux


For detailed instructions about creating an SSH key pair to use on Linux, see Managing Key Pairs on Linux Instances
on page 698.
To create an SSH key pair on the command line
If you're using a UNIX-style system, you probably already have the ssh-keygen utility installed. To determine
whether the utility is installed, type ssh-keygen on the command line. If the utility isn't installed, you can
download OpenSSH for UNIX from http://www.openssh.com/portable.html and install it.
1. Open a shell or terminal for entering the commands.
2. At the prompt, enter ssh-keygen and provide a name for the key when prompted. Optionally, include a
passphrase.
The keys will be created with the default values: RSA keys of 2048 bits.
Alternatively, you can type a complete ssh-keygen command, for example:

ssh-keygen -t rsa -N "" -b 2048 -C "<key_name>" -f <path/root_name>

The command arguments are as follows:
• -t rsa: Use the RSA algorithm.
• -N "<passphrase>": A passphrase to protect the use of the key (like a password). If you don't want to set a passphrase, don't enter anything between the quotes. A passphrase is not required. You can specify one as a security measure to protect the private key from unauthorized use. If you specify a passphrase, you must provide it when you connect to the instance, which typically makes it harder to automate connecting to an instance.
• -b 2048: Generate a 2048-bit key. You don't have to set this if 2048 is acceptable, as 2048 is the default. A minimum of 2048 bits is recommended for SSH-2 RSA.
• -C "<key_name>": A name to identify the key.
• -f <path/root_name>: The location where the key pair will be saved and the root name for the files.

Creating the SSH key pair for Windows using PuTTY


If you are using a Windows client to connect to the instance console connection, use an SSH key pair generated by
PuTTY.
To create the SSH key pair using PuTTY
Important:

Ensure that you are using the latest version of PuTTY. See http://www.putty.org.
1. Find puttygen.exe in the PuTTY folder on your computer, for example, C:\Program Files
(x86)\PuTTY. Double-click puttygen.exe to open it.
2. Specify a key type of SSH-2 RSA and a key size of 2048 bits:
• In the Key menu, confirm that the default value of SSH-2 RSA key is selected.
• For the Type of key to generate, accept the default key type of RSA.
• Set the Number of bits in a generated key to 2048 if not already set.
3. Click Generate.
4. To generate random data in the key, move your mouse around the blank area in the PuTTY window.
When the key is generated, it appears under Public key for pasting into OpenSSH authorized_keys file.
5. A Key comment is generated for you, including the date and timestamp. You can keep the default comment or
replace it with your own more descriptive comment.
6. Leave the Key passphrase field blank.


7. Click Save private key, and then click Yes in the prompt about saving the key without a passphrase.
The key pair is saved in the PuTTY Private Key (PPK) format, which is a proprietary format that works only with
the PuTTY tool set.
You can name the key anything you want, but use the ppk file extension. For example, mykey.ppk.
8. Select all of the generated key that appears under Public key for pasting into OpenSSH authorized_keys file,
copy it using Ctrl + C, paste it into a text file, and then save the file in the same location as the private key.
(Do not use Save public key because it does not save the key in the OpenSSH format.)
You can name the key anything you want, but for consistency, use the same name as the private key and a file
extension of pub. For example, mykey.pub.
9. Write down the names and location of your public and private key files. You need the public key when creating
an instance console connection. You need the private key to connect to the instance console connection using
PuTTY.
Signing in to an instance from the serial console (optional)
To troubleshoot instances and see serial output using the serial console, you don't need to sign in. To connect to a
running instance for administration and general use with Secure Shell (SSH) or Remote Desktop connection, see
Connecting to an Instance on page 739.
If you want to sign in to an instance using an instance console connection, you can use Secure Shell (SSH) or Remote
Desktop connection to sign in. If you want to sign in with a username and password, you need a user account with a
password. Oracle Cloud Infrastructure does not set a default password for the opc user. Therefore, if you want to sign
in as the opc user, you need to create a password for the opc user. Otherwise, add a different user with a password
and sign in as that user.
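For example, after connecting to the instance over SSH, you could set a password for the opc user with the following command (a minimal sketch):

# Set a password for the opc user so that you can sign in from the console connection.
sudo passwd opc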
Connecting Through Firewalls
If your system is behind a firewall, the system must be able to reach the console servers. The client system
connecting to the serial console must be able to reach the serial or VNC console server (for example, instance-
console.us-ashburn-1.oraclecloud.com) over SSH using port 443, directly or through a proxy.
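To quickly confirm that the console endpoint is reachable from your client system, a minimal sketch (assuming the netcat utility is installed; substitute your region's endpoint) is:

# Test TCP connectivity to the console server on port 443.
nc -zv instance-console.us-ashburn-1.oraclecloud.com 443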
Supported Instance Types
Serial console connections are supported on the following types of instances:
• Virtual machine (VM) instances launched in September 2017 or later
• Bare metal instances launched in November 2017 or later
VNC console connections are supported on the following types of instances:
• VM instances launched on October 13, 2017 or later
• Bare metal instances that use one of the following shapes:
• BM.Standard2.52 - launched on February 21, 2019 or later
• BM.Standard.E2.64 - launched after September 17, 2020 in the Oracle Cloud Infrastructure commercial realm
• BM.Standard.E3.128 - launched on February 21, 2019 or later
• BM.DenseIO2.52 - launched on February 21, 2019 or later
• BM.GPU2.2 - launched on February 21, 2019 or later
• BM.GPU3.8 - launched on February 21, 2019 or later
• BM.GPU.4.8 - launched on February 21, 2019 or later
• BM.HPC2.36 - launched on February 21, 2019 or later
Creating the Instance Console Connection
Before you can connect to the serial console or VNC console, you need to create the instance console connection.
Note:

Instance console connections are limited to one client at a time. If the client
fails, the connection remains active for approximately five minutes. During
this time, no other client can connect. After five minutes, the connection is
closed, and a new client can connect. During the five-minute timeout, any
attempt to connect a new client fails with the following message:

channel 0: open failed: administratively prohibited: console access is limited to one connection at a time
Connection to <instance and OCID information> closed.

1. Open the navigation menu. Under Core Infrastructure, go to Compute and click Instances.
2. Click the instance that you're interested in.
3. Under Resources, click Console Connection.
4. Click Create Console Connection.
5. Upload the public key portion for the SSH key. You have three options for adding the SSH key.
• Generate SSH key pair: You can have Oracle Cloud Infrastructure generate an SSH key pair to use. If
you are using PowerShell or PuTTY to connect to the instance from a Windows client, you cannot use the
generated SSH key pair without first converting it to a .ppk file.
To convert a generated .key private key file
a. Open PuTTYgen.
b. Click Load, and select the private key generated when you created the instance. The extension for the key
file is .key.
c. Click Save private key.
d. Specify a name for the key. The extension for new private key is .ppk.
e. Click Save.
• Choose public key file: Browse to a public key file on your computer. If you followed the steps in Creating
SSH Key Pairs on page 820 in the Prerequisites section to create a key pair, use this option to navigate to
the .pub file.
• Paste public key: Paste the content of your public key file into the text box.
6. Click Create Console Connection.
When the console connection has been created and is available, the state changes to Active.
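If you prefer the CLI, the console connection can also be created with a command along the following lines; this is a sketch, and the instance OCID and public key path are placeholders.

# Create an instance console connection using an existing RSA public key.
oci compute instance-console-connection create --instance-id ocid1.instance.oc1..exampleuniqueID --ssh-public-key-file ~/.ssh/id_rsa.pub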
Connecting to the Serial Console
After you create the console connection for the instance, you can connect to the serial console using a Secure Shell
(SSH) connection. When connecting to the serial console, you must use an RSA key. You can use the same SSH key
for the serial console that was used when you launched the instance, or you can use a different SSH key.
When you are finished with the serial console and have terminated the SSH connection, you should delete the serial
console connection. If you do not disconnect from the session, Oracle Cloud Infrastructure terminates the serial
console session after 24 hours and you must reauthenticate to connect again.

Connecting from Mac OS X and Linux Operating Systems


Use an SSH client to connect to the serial console. Mac OS X and most Linux and UNIX-like operating systems
include the SSH client OpenSSH by default.
To connect to the serial console for an instance using OpenSSH on Mac OS X or Linux
1. On the instance details page in the Oracle Cloud Infrastructure Console, under Resources, click Console
Connection.
2. Click the Actions icon (three dots), and then click Copy Serial Console Connection for Linux/Mac.


3. Paste the connection string into a terminal window on a Mac OS X or Linux system, and then press Enter to
connect to the console.
If you are not using the default SSH key or ssh-agent, modify the serial console connection string to include the
identity file flag, -i, to specify the private key portion for the SSH key to use, for example id_rsa. Specify this
flag for both the SSH connection and the SSH ProxyCommand, as shown in the following line:

ssh -i /<path>/<ssh_key> -o ProxyCommand='ssh -i /<path>/<ssh_key> -W %h:%p -p 443...
4. Press Enter again to activate the console. If the connection is active, a message appears in the console:
IMPORTANT: Use a console connection to troubleshoot a malfunctioning
instance.
5. In the Oracle Cloud Infrastructure Console, reboot your instance. If the instance is functional and the connection
is active, the serial output appears in your console. If serial output does not appear in the console, the instance
operating system is not booting.

Connecting from Windows Operating Systems


The steps to connect to the serial console from Windows Powershell are different from the steps for OpenSSH. The
following steps do not work in the Windows terminal.
Important:

If you are connecting to the instance from a Windows client using


PowerShell, plink.exe is required. plink.exe is the command-line connection
tool included with PuTTY. You can install PuTTY or install plink.exe
separately. For more information, see Installing an SSH Client and a
Command-line Shell (Windows) on page 820.
To connect to the serial console for an instance on Microsoft Windows
1. On the instance details page in the Oracle Cloud Infrastructure Console, under Resources, click Console
Connection.
2. Click the Actions icon (three dots). Depending on which SSH client you are using, do one of the following:
• If you are using Windows PowerShell, click Copy Serial Console Connection for Windows.
• If you are using OpenSSH, click Copy Serial Console Connection for Linux/Mac.
Tip:

The copied connection string for Windows contains the parameter -i specifying the location of the private key file. The default value for this
parameter in the connection string references an environment variable
which might not be configured on your Windows client, or it might not
represent the location where the private key file is saved. Verify the value
specified for the -i parameter and make any required changes before
proceeding to the next step.
3. Paste the connection string copied from the previous step into a text file so that you can add the file path to the
private key file.
4. In the text file, replace $env:homedrive$env:homepath\oci\console.ppk with the file path to the
.ppk file on your computer. This file path appears twice in the string. Replace it in both locations.
5. Paste the modified connection string into the PowerShell window or your OpenSSH client, and then press Enter
to connect to the console.
6. Press Enter again to activate the console.
7. In the Oracle Cloud Infrastructure Console, reboot your instance. If the instance is functional and the connection is
active, the serial output appears in your client. If serial output does not appear in the client, the instance operating
system is not booting.


Connecting from Cloud Shell


If you encounter issues when connecting to your instance's serial console using the steps for connection from Mac OS
X, Linux, or Windows, test connecting to the serial console using Cloud Shell. Cloud Shell is a web browser-based
terminal accessible from the Console, see Cloud Shell for more information. This procedure includes steps to access
Cloud Shell. For an introductory walkthrough of using Cloud Shell, see Using Cloud Shell.
Note:

You cannot use Cloud Shell for VNC console connections. You can only use
it for serial console connections.
To connect to the serial console for an instance using Cloud Shell
1. Sign in to the Console.


2. Click the Cloud Shell icon in the Console header.
This action displays the Cloud Shell in a "drawer" at the bottom of the console.


3. Run the following command in Cloud Shell to generate an SSH key pair:

ssh-keygen -t rsa
4. At the prompt to enter the file in which to save the key, press Enter to use the default location.
5. At the passphrase prompt, press Enter for no passphrase, and then press Enter again to confirm.
6. Run the following command to display the public key, and then copy the output:

cat $HOME/.ssh/id_rsa.pub
7. Open the navigation menu. Under Core Infrastructure, go to Compute and click Instances.
8. Click the instance that you're interested in.
9. Under Resources, click Console Connection.
10. Click Create Console Connection.
11. Select Paste SSH Key and paste the public key contents you copied in step 6.
12. Click Create Console Connection.
After the console connection state changes to Active proceed to the next step.
13. Click the Actions icon (three dots), and then click Copy Serial Console Connection for Linux/Mac.
14. Paste the connection string copied from the previous step to Cloud Shell, and then press Enter to connect to the
console.
15. Press Enter again to activate the console.
Connecting to the VNC Console
After you create the console connection for the instance, you need to set up a secure tunnel to the VNC server on the
instance, and then you can connect with a VNC client.
Caution:

The VNC console connection uses SSH port forwarding to create a secure
connection from your local system to the VNC server attached to your
instance's console. Although this method is a secure way to use VNC over
the internet, owners of multiuser systems should know that opening a port on
the local system makes it available to all users on that system until a VNC
client connects. For this reason, we don't recommend using this product on
a multiuser system unless you take proper actions to secure the port or you
isolate the VNC client by running it in a virtual environment, such as Oracle
VM VirtualBox.
To set up a secure tunnel to the VNC server on the instance using OpenSSH on Mac OS X or Linux
1. On the instance details page in the Oracle Cloud Infrastructure Console, under Resources, click Console
Connection.
2. Click the Actions icon (three dots), and then click Copy VNC Connection for Linux/Mac.
3. Paste the connection string copied from the previous step to a terminal window on a Mac OS X or Linux system,
and then press Enter to set up the secure connection.
4. After the connection is established, open your VNC client and specify localhost as the host to connect to and
5900 as the port to use.
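For reference, the copied VNC connection string generally has the shape sketched below: an SSH command that tunnels local port 5900 to the instance's VNC server through the console endpoint on port 443. This is illustrative only; always use the exact string copied from the Console, because the OCIDs, region endpoint, and other details vary.

# Illustrative shape of the VNC tunnel; use the string copied from the Console (OCIDs and region are placeholders).
ssh -o ProxyCommand='ssh -W %h:%p -p 443 <console_connection_OCID>@instance-console.us-ashburn-1.oraclecloud.com' -N -L localhost:5900:<instance_OCID>:5900 <instance_OCID>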
Note:

Mac OS X Screen Sharing.app Not Compatible with VNC Console Connections
The Mac OS X built-in VNC client, Screen Sharing.app, does not work with VNC console connections in Oracle Cloud Infrastructure. Use another VNC client, such as Real VNC Viewer or Chicken.
To set up a secure tunnel to the VNC server on the instance using PowerShell on Windows


Important:

If you are connecting to the VNC server on the instance from a Windows
client using PowerShell, plink.exe is required. plink.exe is the command-line connection tool included with PuTTY. You can install PuTTY or install
plink.exe separately. For installation information, see http://www.putty.org.
1. On the instance details page in the Oracle Cloud Infrastructure Console, under Resources, click Console
Connection.
2. Click the Actions icon (three dots), and then click Copy VNC Connection for Windows.
Tip:

The copied connection string for Windows contains the parameter -i specifying the location of the private key file. The default value for this
parameter in the connection string references an environment variable
which might not be configured on your Windows client, or it might not
represent the location where the private key file is saved. Verify the value
specified for the -i parameter and make any required changes before
proceeding to the next step.
3. Paste the connection string copied from the previous step to Windows Powershell, and then press Enter to set up
the secure connection.
4. After the connection is established, open your VNC client and specify localhost as the host to connect to and
5900 as the port to use.
Note:

Secure Connection Warning


When you connect, you might see a warning from the VNC client that the
connection is not encrypted. Because you are connecting through SSH, the
connection is secure, so this warning is not an issue.
Troubleshooting Instances from Instance Console Connections on Linux
The following tasks describe steps specific to instances running Oracle Autonomous Linux 7.x, Oracle Linux 8.x,
and Oracle Linux 7.x, connecting from OpenSSH. Other operating system versions and SSH clients might require
different steps.
After you are connected with an instance console connection, you can perform various tasks, such as:
• Edit system configuration files.
• Add or reset the SSH keys for the opc user.
• Reset the password for the opc user.
These tasks require you to boot into a bash shell in maintenance mode.
To boot into maintenance mode
1. Reboot the instance from the Console.
2. Depending on the version of Linux you're using, do one of the following.
• For instances running Oracle Linux 8.x, follow these steps.
a. When the reboot process starts, immediately switch back to the terminal window and press Esc or F5
repeatedly until a menu appears.
b. In the menu that appears, select Boot Manager, and press Enter.
c. In the Boot Manager menu, select UEFI Oracle BlockVolume, and press Enter. Immediately press the
up/down arrow key and continue pressing it until the boot menu appears. If Console messages start to
appear in the window, the opportunity to access the boot menu passed, and you need to start the reboot
process again.
• For instances running Oracle Autonomous Linux 7.x or Oracle Linux 7.x, when the reboot process starts,
switch back to the terminal window, and you see Console messages start to appear in the window. As soon as
the GRUB boot menu appears, use the up/down arrow key to stop the automatic boot process, enabling you to
use the boot menu.
3. In the boot menu, highlight the top item in the menu, and press e to edit the boot entry.
4. In edit mode, use the down arrow key to scroll down through the entries until you reach the line that starts with
linuxefi for instances running Oracle Autonomous Linux 7.x, Oracle Linux 8.x, and Oracle Linux 7.x.
5. At the end of that line, add the following:

init=/bin/bash
6. Reboot the instance from the terminal window by entering the keyboard shortcut CTRL+X.
When the instance has rebooted, you see the Bash shell command line prompt, and you can proceed with the
following procedures.
To edit the system configuration files
1. From the Bash shell, run the following command to load the SELinux policies to preserve the context of the files
you are modifying:

/usr/sbin/load_policy -i
2. Run the following command to remount the root partition with read/write permissions:

/bin/mount -o remount,rw /
3. Edit the configuration files as needed to try to recover the instance.
4. After you have finished editing the configuration files, to start the instance from the existing shell, run the
following command:

exec /usr/lib/systemd/systemd

Alternatively, to reboot the instance, run the following command:

/usr/sbin/reboot -f

To add or reset the SSH key for the opc user


1. From the Bash shell, run the following command to load the SELinux policies to preserve the context of the files
you are modifying:

/usr/sbin/load_policy -i
2. Run the following command to remount the root partition with read/write permissions:

/bin/mount -o remount,rw /
3. From the Bash shell, run the following command to change to the SSH key directory for the opc user:

cd ~opc/.ssh
4. Rename the existing authorized keys file with the following command:

mv authorized_keys authorized_keys.old
5. Replace the contents of the public key file with the new public key file with the following command:

echo '<contents of public key file>' >> authorized_keys

6. Restart the instance by running the following command:

/usr/sbin/reboot -f

To reset the password for the opc user


1. From the Bash shell, run the following command to load the SELinux policies to preserve the context of the files
you are modifying. This step is necessary to sign in to your instance using SSH and the Console.

/usr/sbin/load_policy -i
2. Run the following command to remount the root partition with read/write permissions:

/bin/mount -o remount,rw /
3. Run the following command to reset the password for the opc user:

sudo passwd opc


4. Restart the instance by running the following command:

sudo reboot -f

Exiting the Instance Console Connection


To exit the serial console connection
When using SSH, the ~ character at the beginning of a new line is used as an escape character.
• To exit the serial console, enter:

~.
• To suspend the SSH session, enter:

~^z

The ^ character represents the CTRL key


• To see all the SSH escape commands, enter:

~?

To exit the VNC console connection


1. Close the VNC client.
2. In the Terminal or PowerShell window, press CTRL+C.
When you are finished using the console connection, delete the connection for the instance.
To delete the console connection for an instance
1. Open the navigation menu. Under Core Infrastructure, go to Compute and click Instances.
2. Click the instance that you're interested in.
3. Under Resources, click Console Connection.
4. Click the Actions icon (three dots), and then click Delete. Confirm when prompted.
Tagging Resources
You can add tags to your resources to help you organize them according to your business needs. You can add tags
at the time you create a resource, or you can update the resource later with the desired tags. For general information
about applying tags, see Resource Tags on page 213.
To manage tags for an instance console connection

1. Open the navigation menu. Under Core Infrastructure, go to Compute and click Instances.
2. Click the instance that you're interested in.
3. Under Resources, click Console Connection.
4. For the console connection that you're interested in, click the Actions icon (three dots) and then click Add Tags.
To view existing tags, click View Tags.

Sending a Diagnostic Interrupt


Caution:

This feature is for advanced users. Sending a diagnostic interrupt to a live


system can cause data corruption or system failure.
You can send a diagnostic interrupt to debug an unresponsive or unreachable compute virtual machine (VM) instance.
A diagnostic interrupt causes the instance's OS to crash and reboot. Before you send a diagnostic interrupt, you
must configure the OS to generate a crash dump (also called a memory dump file) when it crashes. The crash dump
captures information about the state of the OS at the time of the crash. After the OS restarts, you can analyze the crash
dump to identify and debug the issue.
Required IAM Policy
To use Oracle Cloud Infrastructure, you must be granted security access in a policy by an administrator. This access
is required whether you're using the Console or the REST API with an SDK, CLI, or other tool. If you get a message
that you don’t have permission or are unauthorized, verify with your administrator what type of access you have and
which compartment you should work in.
For administrators: The policy in Let users launch compute instances on page 2151 includes the ability to send a
diagnostic interrupt to an instance. If the specified group doesn't need to launch instances or attach volumes, you
could simplify that policy to include only manage instance-family, and remove the statements involving
volume-family and virtual-network-family.
If you're new to policies, see Getting Started with Policies on page 2143 and Common Policies on page 2150. For
reference material about writing policies for instances, cloud networks, or other Core Services API resources, see
Details for the Core Services on page 2192.
Prerequisites
• The instance's OS must be configured to generate a crash dump file.
• The instance must be in the Running state. For more information, see Stopping and Starting an Instance on page
785.
• The instance must not have any in-progress actions affecting it, such as block volumes or secondary VNICs in the
process of being attached or detached.
Configuring the OS to Generate a Crash Dump
Before you send a diagnostic interrupt to an instance, you must configure the OS to generate a crash dump when it
crashes. The diagnostic interrupt is received as a non-maskable interrupt (NMI) on the target instance.
The steps depend on the OS.

Linux
Note:

On Oracle-provided images for Oracle Linux, the OS is either fully


configured or partially configured to generate a crash dump, depending on the
image release date.
Oracle Linux 8
• Images released in August 2020 or later: The image is fully configured
to generate a crash dump.

• Earlier images: The dump-capture kernel is installed and configured, but


you must perform the other configuration steps.
Oracle Linux 7
• Images released in August 2020 or later: The image is fully configured
to generate a crash dump.
• Earlier images: The dump-capture kernel is installed and configured, but
you must perform the other configuration steps.
Oracle Linux 6
• Images released in September 2020 or later: The image is fully
configured to generate a crash dump.
• Earlier images: The dump-capture kernel is installed and configured, but
you must perform the other configuration steps.
1. Connect to the instance.
2. Install and configure the dump-capture kernel:
a. Install kdump and kexec by running the following command:

sudo yum install kexec-tools


b. Reserve memory on the kernel to save the crash dump. Do the following:
1. Open the /etc/default/grub file in a text editor.
2. In the line that starts with GRUB_CMDLINE_LINUX_DEFAULT, add the parameter
crashkernel=<memory-to-reserve>. For example, to reserve 100 MB, add
crashkernel=100M (see the example at the end of this procedure).
3. Save the changes and close the file.
4. Rebuild the GRUB file by running the following command:

sudo grub2-mkconfig -o /boot/grub2/grub.cfg


3. Configure the kernel to crash when it receives a diagnostic interrupt. To do this, open the /etc/sysctl.conf
file in a text editor and add the following line:

kernel.unknown_nmi_panic=1
4. Apply the change to /etc/sysctl.conf by running the following command:

sysctl -p
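As a quick check after completing these steps, the relevant configuration lines might look similar to the following. This is a hypothetical sketch: the other kernel parameters in your GRUB configuration vary by image, and 100M is only the example reservation used above.

# /etc/default/grub (step 2b): reserve memory for the dump-capture kernel
GRUB_CMDLINE_LINUX_DEFAULT="console=tty0 console=ttyS0,9600 crashkernel=100M"

# /etc/sysctl.conf (step 3): crash (and generate a dump) when an unknown NMI is received
kernel.unknown_nmi_panic=1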

Windows Server - Oracle-Provided Image


If you use an Oracle-provided image for Windows Server that was released in April 2020 or later, the image is
already configured to generate a crash dump.
If you use an image that was released before April 2020, do the following:
1. Connect to the instance.

2. Download the Oracle Windows VirtIO drivers:


a. Sign in to the Oracle Software Delivery Cloud site.
b. In the All Categories list, select Release.
c. Type Oracle Linux 7.7 in the search box and click Search.
d. Add REL: Oracle Linux 7.7.x to your cart, and then click Continue.
e. In the Platforms/Languages list, select x86 64 bit. Click Continue.
f. Accept the license agreement and then click Continue.
g. Select the check box next to Oracle VirtIO Drivers Version for Microsoft Windows 1.1.5. Clear the other
check boxes.
h. Click Download and then follow the prompts.
3. Install the drivers and then restart the instance. For steps, see Installing the Oracle VirtIO Drivers for Microsoft
Windows on Existing Microsoft Windows Guests.

Windows Server - Customer-Provided Image


Refer to the third-party documentation for your operating system for more information.
Sending a Diagnostic Interrupt
After you configure the instance's OS to generate a crash dump when it crashes, use the following procedures to send
a diagnostic interrupt.

To send a diagnostic interrupt using the Console


1. Open the navigation menu. Under Core Infrastructure, go to Compute and click Instances.
2. Click More Actions, and then click Send Diagnostic Interrupt.
Caution:

Sending a diagnostic interrupt to a live system can cause data corruption or


system failure.
3. Review the confirmation message and then click Send Diagnostic Interrupt.
The lifecycle state that appears in the Console remains Running while the instance's OS crashes and restarts. Do
not send multiple diagnostic interrupts.
4. Wait several minutes for the instance's OS to restart, and then connect to the instance. You can now retrieve and
analyze the crash dump.

To send a diagnostic interrupt using the API


Use the InstanceAction operation, passing the value SENDDIAGNOSTICINTERRUPT as the action to perform.
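For example, using the OCI CLI, the equivalent call is a hedged sketch like the following; the instance OCID is a placeholder, and you should confirm the supported action values with your CLI version's help:

oci compute instance action --instance-id ocid1.instance.oc1..<unique_ID> --action SENDDIAGNOSTICINTERRUPT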
Analyzing a Crash Dump
The crash dump is saved locally on the instance's OS.
• Linux instances: The default location where the crash dump is saved depends on the Linux or UNIX-like
operating system.
• Oracle Linux 8, Oracle Linux 7: saved in /var/oled/crash
• Other Linux and UNIX-like operating systems: saved in /var/crash/
To change the location, modify the /etc/kdump.conf file.
• Windows instances: The crash dump is saved in %SystemRoot%\memory.dmp. On most Windows systems,
this is C:\Windows\memory.dmp.
To analyze the crash dump, use a third-party tool such as the crash utility on Linux instances or WinDbg on Windows
instances.
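For example, on a Linux instance the save location is controlled by the path directive in /etc/kdump.conf, and a saved vmcore can be opened with the crash utility. This is a hedged sketch; the dump directory name and the kernel debuginfo path depend on your distribution and kernel version:

# /etc/kdump.conf: directory where the crash dump (vmcore) is written
path /var/crash

# Open a saved dump with the crash utility (requires the matching kernel debuginfo package)
crash /usr/lib/debug/lib/modules/$(uname -r)/vmlinux /var/crash/<dump-directory>/vmcore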

Updating the Linux iSCSI Service to Restart Automatically


Oracle Cloud Infrastructure supports iSCSI-attached remote boot and block volumes for Compute instances. These
iSCSI-attached volumes are managed by the Linux iSCSI initiator service, iscsid. If this service stops for any
reason, such as a crash or a system administrator inadvertently stopping it, it's important that the service is
restarted automatically and immediately.
The following platform images distributed by Oracle Cloud Infrastructure are configured so that the iscsid service
restarts automatically:
• Oracle Autonomous Linux 7 images.
• Oracle Linux 8 images.
• Oracle Linux 7 images released February 26, 2019 and later. See the release notes for Oracle-Linux-7.6-Gen2-
GPU-2019.02.20-0 and Oracle-Linux-7.6-2019.02.20-0.
• Oracle Linux 6 images released February 26, 2019 and later. See the release notes for Oracle-
Linux-6.10-2019.02.22-0.
• CentOS 7 images released February 25, 2019 and later. See the release notes for CentOS-7-2019.02.23-0.
Instances created from earlier versions of CentOS 7.x and Oracle Linux platform images, or any versions of
Ubuntu platform images, do not have this configuration. You should update these existing instances and custom
images created from these images so that the iscsid service restarts automatically. You should also check this
configuration on your imported paravirtualized custom images and any instances launched from these images and
update the configuration as needed.
This topic describes how to update the iscsid service on an instance so that it will restart automatically.
Note:

Configuring an instance to automatically restart the iscsid service does not


require a reboot and will increase the stability of your infrastructure.

Oracle Linux 7
Run the following command to update the iscsid service on your Oracle Linux 7 instances:

sudo yum update -y iscsi-initiator-utils

After running this command, the version of the iscsid service should be 6.2.0.874 or newer.
Run the following command to check the version:

yum info iscsi-initiator-utils

This update does not require a system reboot and will not make any changes to your instances beyond configuring
iscsid to restart automatically.

Oracle Linux 6
Run the following command to update the iscsid service on your Oracle Linux 6 instances:

sudo yum update -y iscsi-initiator-utils

After running this command, the version of the iscsid service should be 6.2.0.873 or newer.
Run the following command to check the version:

yum info iscsi-initiator-utils

This update does not require a system reboot and will not make any changes to your instances beyond configuring
iscsid to restart automatically.

CentOS 7.x
Important:

Do not directly edit the systemd iscsid.service file. You should instead
create an override to ensure that the restart option isn't overwritten the
next time the iscsid service is updated.
On your CentOS 7 instances run the following command to create an override file:

sudo systemctl edit iscsid.service

Paste and save the following into the file:

[Service]
Restart=always

Run the following commands to reload systemd and restart the iscsid service:

sudo systemctl daemon-reload


sudo systemctl restart iscsid

Ubuntu 18
Important:

Do not directly edit the systemd iscsid.service file. Instead, create an
override to ensure that the restart option isn't overwritten the next time
the iscsid service is updated.
On your Ubuntu 18 instances run the following command to create an override file:

sudo systemctl edit iscsid.service

Paste and save the following into the file:

[Service]
Restart=
Restart=always

Run the following commands to reload systemd and restart the iscsid service:

sudo systemctl daemon-reload


sudo systemctl restart iscsid

Ubuntu 16
Important:

Do not directly edit the systemd iscsid.service file. Instead, create an
override to ensure that the restart option isn't overwritten the next time
the iscsid service is updated.
On your Ubuntu 16 instances run the following command to create an override file:

sudo systemctl edit iscsid.service

Paste and save the following into the file:

[Service]
Restart=
Restart=always

Run the following commands to reload systemd and restart the iscsid service:

sudo systemctl daemon-reload


sudo systemctl restart iscsid

Ubuntu 14
On your Ubuntu 14 instances run the following command to install the monit package:

sudo apt-get install monit

Create the /etc/monit/conf.d/iscsid.conf file and include the following commands:

check process iscsid with pidfile /run/iscsid.pid
start program = "/etc/init.d/open-iscsi start" with timeout 60 seconds
stop program = "/etc/init.d/open-iscsi stop"

Run the following command to start the monit service:

/etc/init.d/monit start

Testing the iscsid Service Update


Perform these steps to verify that the iscsid service has been updated successfully, and that it restarts
automatically.
Caution:

Do not perform these steps on a production instance. If the iscsid service


fails to restart, the instance may become unresponsive.
1. Run the following command to confirm that the iscsid service is running:

ps -ef | grep iscsid


2. Run the following command to stop the iscsid service:

sudo pkill -9 iscsid


3. Wait 60 seconds and then run the following command to verify that the iscsid service has restarted:

ps -ef | grep iscsid

Windows Generalized Image Support Files


To generalize a Windows custom image, download the appropriate file for your instance based on the shape of the
instance. Then, follow the instructions in Creating Windows Custom Images on page 673.
The files apply to the following Windows versions:
• Windows Server 2019
• Windows Server 2016
• Windows Server 2012 R2

VM Instances
Use this download for all VM instances.
Download: oracle-cloud_windows-server_generalize_2019-02-06.SED.EXE

Bare Metal Instances - AMD Shapes


Use this download for AMD-based bare metal instances.
Download: oracle-cloud_windows-server-bm-gen2_generalize_2019-01-31.SED.EXE

Bare Metal Instances - X7 Shapes


Use this download for X7-based bare metal instances.
Download: oracle-cloud_windows-server-bm-gen2_generalize_2019-01-31.SED.EXE

Bare Metal Instances - X5 Shapes


Use this download for X5-based bare metal instances.
Download: oracle-cloud_windows-server_generalize_2019-02-06.SED.EXE

Chapter 12
Container Engine for Kubernetes
This chapter explains how to define and create Kubernetes clusters to enable the deployment, scaling, and
management of containerized applications.

Overview of Container Engine for Kubernetes


Oracle Cloud Infrastructure Container Engine for Kubernetes is a fully-managed, scalable, and highly available
service that you can use to deploy your containerized applications to the cloud. Use Container Engine for Kubernetes
(sometimes abbreviated to just OKE) when your development team wants to reliably build, deploy, and manage
cloud-native applications. You specify the compute resources that your applications require, and Container Engine for
Kubernetes provisions them on Oracle Cloud Infrastructure in an existing OCI tenancy.
Container Engine for Kubernetes uses Kubernetes - the open-source system for automating deployment, scaling, and
management of containerized applications across clusters of hosts. Kubernetes groups the containers that make up
an application into logical units (called pods) for easy management and discovery. Container Engine for Kubernetes
uses versions of Kubernetes certified as conformant by the Cloud Native Computing Foundation (CNCF). Container
Engine for Kubernetes is itself ISO-compliant (ISO-IEC 27001, 27017, 27018).
You can access Container Engine for Kubernetes to define and create Kubernetes clusters using the Console and the
REST API. You can access the clusters you create using the Kubernetes command line (kubectl), the Kubernetes
Dashboard, and the Kubernetes API.
Container Engine for Kubernetes is integrated with Oracle Cloud Infrastructure Identity and Access Management
(IAM), which provides easy authentication with native Oracle Cloud Infrastructure identity functionality.
For an introductory tutorial, see Creating a Cluster with Oracle Cloud Infrastructure Container Engine for Kubernetes.
A number of related Developer Tutorials are also available.

Ways to Access Oracle Cloud Infrastructure


You can access Oracle Cloud Infrastructure using the Console (a browser-based interface) or the REST API.
Instructions for the Console and API are included in topics throughout this guide. For a list of available SDKs, see
Software Development Kits and Command Line Interface on page 4262.
To access the Console, you must use a supported browser.
Oracle Cloud Infrastructure supports the following browsers and versions:
• Google Chrome 69 or later
• Safari 12.1 or later
• Firefox 62 or later
For general information about using the API, see REST APIs on page 4409.

Creating Automation with Events


You can create automation based on state changes for your Oracle Cloud Infrastructure resources by using event
types, rules, and actions. For more information, see Overview of Events on page 1788.

See Container Engine for Kubernetes on page 1846 for details about Container Engine for Kubernetes resources that
emit events.

Resource Identifiers
Most types of Oracle Cloud Infrastructure resources have a unique, Oracle-assigned identifier called an Oracle
Cloud ID (OCID). For information about the OCID format and other ways to identify your resources, see Resource
Identifiers on page 199.

Authentication and Authorization


Each service in Oracle Cloud Infrastructure integrates with IAM for authentication and authorization, for all
interfaces (the Console, SDK or CLI, and REST API).
An administrator in your organization needs to set up groups, compartments, and policies that control which users
can access which services, which resources, and the type of access. For example, the policies control who can create
new users, create and manage the cloud network, launch instances, create buckets, download objects, etc. For more
information, see Getting Started with Policies on page 2143. For specific details about writing policies for each of
the different services, see Policy Reference on page 2176.
If you’re a regular user (not an administrator) who needs to use the Oracle Cloud Infrastructure resources that
your company owns, contact your administrator to set up a user ID for you. The administrator can confirm which
compartment or compartments you should be using.
Note that to perform certain operations on clusters created by Container Engine for Kubernetes, you might require
additional permissions granted via a Kubernetes RBAC role or clusterrole. See About Access Control and Container
Engine for Kubernetes on page 919.

Container Engine for Kubernetes Capabilities and Limits


In each region that is enabled for your tenancy, you can create three clusters (Monthly Universal Credits) or one
cluster (Pay-as-You-Go or Promo) by default. Each cluster you create can have a maximum of 1000 nodes. A
maximum of 110 pods can run on each node. See Service Limits on page 217.

Required IAM Service Policy


To use Oracle Cloud Infrastructure, you must be granted security access in a policy by an administrator. This access
is required whether you're using the Console or the REST API with an SDK, CLI, or other tool. If you get a message
that you don’t have permission or are unauthorized, verify with your administrator what type of access you have and
which compartment you should work in.
If you're new to policies, see Getting Started with Policies and Common Policies.
For more details about policies for Container Engine for Kubernetes, see:
• Policy Configuration for Cluster Creation and Deployment on page 865
• Details for Container Engine for Kubernetes on page 2188

Container Engine and Kubernetes Concepts


This topic describes key concepts you need to understand when using Container Engine for Kubernetes.

Kubernetes Clusters
A Kubernetes cluster is a group of nodes (machines running applications). Each node can be a physical machine or
a virtual machine. The node's capacity (its number of CPUs and amount of memory) is defined when the node is
created. A cluster comprises:
• Control plane nodes (previously referred to as 'master nodes'). Typically, there will be three control plane nodes
for high availability.
• Worker nodes, organized into node pools.

Kubernetes Cluster Control Plane and Kubernetes API


The Kubernetes cluster control plane implements core Kubernetes functionality. It runs on compute instances (known
as 'control plane nodes') in the Container Engine for Kubernetes service tenancy. The cluster control plane is fully
managed by Oracle.
The cluster control plane runs a number of processes, including:
• kube-apiserver to support Kubernetes API operations requested from the Kubernetes command line tool (kubectl)
and other command line tools, as well as from direct REST calls. The kube-apiserver includes admissions
controllers required for advanced Kubernetes operations.
• kube-controller-manager to manage different Kubernetes components (for example, replication controller,
endpoints controller, namespace controller, and serviceaccounts controller)
• kube-scheduler to control where in the cluster to run jobs
• etcd to store the cluster's configuration data
The Kubernetes API enables end users to query and manipulate Kubernetes resources (such as pods, namespaces,
configmaps, and events).
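For example, after setting up cluster access (see Setting Up Cluster Access on page 875), typical kubectl commands that call the Kubernetes API include the following; the resource names shown are illustrative:

kubectl get nodes                        # list the worker nodes in the cluster
kubectl get pods --all-namespaces        # list pods across all namespaces
kubectl describe configmap my-config     # show details of a configmap named my-config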
You access the Kubernetes API on the cluster control plane through an endpoint hosted in a subnet of your VCN. This
Kubernetes API endpoint subnet can be a private or public subnet. If you specify a public subnet for the Kubernetes
API endpoint, you can optionally assign a public IP address to the Kubernetes API endpoint (in addition to the private
IP address). You control access to the Kubernetes API endpoint subnet using security rules defined for security lists
or network security groups.
Note:

In earlier releases, clusters were provisioned with the public Kubernetes API
endpoint in the Oracle-managed tenancy.
You can continue to create such clusters using the CLI or API, but not the
Console.

Kubernetes Worker Nodes and Node Pools


Worker nodes constitute the cluster data plane. Worker nodes are where you run the applications that you deploy in a
cluster.
Each worker node runs a number of processes, including:
• kubelet to communicate with the cluster control plane
• kube-proxy to maintain networking rules
The cluster control plane processes monitor and record the state of the worker nodes and distribute requested
operations between them.
A node pool is a subset of worker nodes within a cluster that all have the same configuration. Node pools enable you
to create pools of machines within a cluster that have different configurations. For example, you might create one
pool of nodes in a cluster as virtual machines, and another pool of nodes as bare metal machines. A cluster must have
a minimum of one node pool, but a node pool need not contain any worker nodes.
Worker nodes in a node pool are connected to a worker node subnet in your VCN.

Pods
Where an application running on a worker node comprises multiple containers, Kubernetes groups the containers
into a single logical unit called a pod for easy management and discovery. The containers in the pod share the same
networking namespace and the same storage space, and can be managed as a single object by the cluster control
plane. A number of pods providing the same functionality can be grouped into a single logical set known as a service.
For more information about pods, see the Kubernetes documentation.

Services
In Kubernetes, a service is an abstraction that defines a logical set of pods and a policy by which to access them. The
set of pods targeted by a service is usually determined by a selector.
For some parts of an application (for example, frontends), you might want to expose a service on an external IP
address outside of a cluster.
Kubernetes ServiceTypes enable you to specify the kind of service you want to expose. A LoadBalancer
ServiceType creates an Oracle Cloud Infrastructure load balancer on load balancer subnets in your VCN.
For more information about services in general, see the Kubernetes documentation. For more information about
creating load balancer services with Container Engine for Kubernetes, see Creating Load Balancers to Distribute
Traffic Between Cluster Nodes on page 937.
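For example, the following hedged sketch exposes an existing deployment (the deployment and service names are hypothetical) as a service of type LoadBalancer, causing Container Engine for Kubernetes to provision an Oracle Cloud Infrastructure load balancer for it:

kubectl expose deployment my-app --type=LoadBalancer --name=my-app-svc --port=80 --target-port=8080
kubectl get service my-app-svc           # the EXTERNAL-IP column shows the load balancer address once provisioned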

Manifest Files (or Pod Specs)


A Kubernetes manifest file comprises instructions in a yaml or json file that specify how to deploy an application to
the node or nodes in a Kubernetes cluster. The instructions include information about the Kubernetes deployment, the
Kubernetes service, and other Kubernetes objects to be created on the cluster. The manifest is commonly also referred
to as a pod spec, or as a deployment.yaml file (although other filenames are allowed). The parameters to include in a
Kubernetes manifest file are described in the Kubernetes documentation.
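For example, assuming a manifest file named deployment.yaml (an illustrative filename), you apply it to the cluster with kubectl:

kubectl apply -f deployment.yaml         # create or update the objects described in the manifest
kubectl get deployments                  # verify that the deployment was created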

Admission Controllers
A Kubernetes admission controller intercepts authenticated and authorized requests to the Kubernetes API server
before admitting an object (such as a pod) to the cluster. An admission controller can validate an object, or modify it,
or both. Many advanced features in Kubernetes require an enabled admission controller. For more information, see
the Kubernetes documentation.
The Kubernetes version you select when you create a cluster using Container Engine for Kubernetes determines
the admission controllers supported by that cluster. To find out the supported admission controllers, the order in
which they run in the Kubernetes API server, and the Kubernetes versions in which they are supported, see Supported
Admission Controllers on page 924.

Namespaces
A Kubernetes cluster can be organized into namespaces, to divide the cluster's resources between multiple users.
Initially, a cluster has the following namespaces:
• default, for resources with no other namespace
• kube-system, for resources created by the Kubernetes system
• kube-node-lease, for one lease object per node to help determine node availability
• kube-public, usually used for resources that have to be accessible across the cluster
For more information about namespaces, see the Kubernetes documentation.
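For example, the following kubectl commands list the namespaces in a cluster, create a new one, and work within it (the namespace name is illustrative):

kubectl get namespaces
kubectl create namespace my-team
kubectl get pods --namespace my-team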

Preparing for Container Engine for Kubernetes


Before you can use Container Engine for Kubernetes to create a Kubernetes cluster:
• You must have access to an Oracle Cloud Infrastructure tenancy. The tenancy must be subscribed to one or more
of the regions in which Container Engine for Kubernetes is available (see Availability by Region on page 844).
• Your tenancy must have sufficient quota on different types of resource (see Service Limits on page 217). More
specifically:
• Compute instance quota: To create a Kubernetes cluster, at least one compute instance (node) must be
available in the tenancy. However, you'll probably want more than this minimum. For example, to create a

highly available cluster in a region with three availability domains (ADs), at least three compute instances
must be available (one in each availability domain).
• Block volume quota: If you intend to create Kubernetes persistent volumes, sufficient block volume quota
must be available in each availability domain to meet the persistent volume claim. Persistent volume claims
must request a minimum of 50 gigabytes. See Creating a Persistent Volume Claim on page 947.
• Load balancer quota: If you intend to create a load balancer to distribute traffic between the nodes running a
service in a Kubernetes cluster, sufficient load balancer quota must be available in the region. See Creating
Load Balancers to Distribute Traffic Between Cluster Nodes on page 937.
• Within your tenancy, there must already be a compartment to contain the necessary network resources (such as
a VCN, subnets, internet gateway, route table, security lists). If such a compartment does not exist already, you
will have to create it. Note that the network resources can reside in the root compartment. However, if you expect
multiple teams to create clusters, best practice is to create a separate compartment for each team.
• Within the compartment, network resources (such as a VCN, subnets, internet gateway, route table, security lists)
must be appropriately configured in each region in which you want to create and deploy clusters. For example, to
create a highly available cluster in a region with three availability domains, the VCN must include:
• For worker nodes: a regional subnet (recommended), or three AD-specific subnets (one in each of the
availability domains).
• For load balancers: optionally (but usually) an additional regional subnet (recommended), or an additional two
AD-specific subnets (each in a different availability domain).
Best practice is to use regional subnets to make failover across availability domains simpler to implement.
When creating a new cluster, you can have Container Engine for Kubernetes automatically create and configure
new network resources for the new cluster, or you can specify existing network resources. If you specify existing
network resources, you or somebody else must have already configured those resources appropriately. See
Network Resource Configuration for Cluster Creation and Deployment on page 845.
• To create and/or manage clusters, you must belong to one of the following:
• The tenancy's Administrators group
• A group to which a policy grants the appropriate Container Engine for Kubernetes permissions. See Create
Required Policy for Groups on page 865.
• To perform Kubernetes operations on a cluster:
• You must be able to run the Kubernetes command line tool kubectl. You can use the kubectl installation
included in Cloud Shell, or you can use a local installation of kubectl (see Accessing a Cluster Using Kubectl
on page 889).
• You must have set up your own copy of the cluster's kubeconfig configuration file (see Setting Up Cluster
Access on page 875, and the sketch after this list). You cannot access a cluster using a kubeconfig file that a
different user set up.
• You must have appropriate permissions to access the cluster (see About Access Control and Container Engine
for Kubernetes on page 919).
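A hedged sketch of generating your own kubeconfig file with the OCI CLI follows; the cluster OCID is a placeholder, and Setting Up Cluster Access on page 875 gives the exact command and options for your cluster:

oci ce cluster create-kubeconfig --cluster-id ocid1.cluster.oc1..<unique_ID> --file $HOME/.kube/config
kubectl get nodes                        # confirm that kubectl can reach the cluster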

Availability by Region
Container Engine for Kubernetes is available in all the Oracle Cloud Infrastructure regions listed at Regions and
Availability Domains on page 182. Refer to that topic to see region identifiers, region keys, and availability domain
names.
In some cases, you might have to use shortened versions of availability domain names. For example, when defining
a persistent volume claim (PVC), you request storage in a particular availability domain by specifying the value of the
failure-domain.beta.kubernetes.io/zone Kubernetes label. To find out how to construct shortened
versions of availability domain names, see failure-domain.beta.kubernetes.io/zone on page 898.

Network Resource Configuration for Cluster Creation and Deployment


Before you can use Container Engine for Kubernetes to create and deploy clusters in the regions in a tenancy:
• Within the tenancy, there must already be a compartment to contain the necessary network resources (such as
a VCN, subnets, internet gateway, route table, security lists). If such a compartment does not exist already, you
will have to create it. Note that the network resources can reside in the root compartment. However, if you expect
multiple teams to create clusters, best practice is to create a separate compartment for each team.
• Within the compartment, network resources (such as a VCN, subnets, internet gateway, route table, security lists)
must be appropriately configured in each region in which you want to create and deploy clusters. When creating
a new cluster, you can have Container Engine for Kubernetes automatically create and configure new network
resources in the 'Quick Create' workflow. Alternatively, you can explicitly specify the existing network resources
to use in the 'Custom Create' workflow. If you specify existing network resources, you or somebody else must
have already configured those resources appropriately, as described in this topic.
This topic describes the necessary configuration for each network resource. To see details of a typical configuration,
see Example Network Resource Configurations on page 849.
For an introductory tutorial, see Creating a Cluster with Oracle Cloud Infrastructure Container Engine for Kubernetes.
A number of related Developer Tutorials are also available.
VCN Configuration
The VCN in which you want to create and deploy clusters must be configured as follows:
• The VCN must have a CIDR block defined that is large enough for the number of subnets you specify for
the clusters you create. For example, creating a highly available cluster in a region with three availability
domains typically requires two regional subnets (recommended) or five AD-specific subnets to support the
necessary number of worker nodes and load balancers. However, you can create clusters with fewer subnets.
A /16 CIDR block would be large enough for almost all use cases (10.0.0.0/16 for example). The CIDR block you
specify for the VCN must not overlap with the CIDR block you specify for pods and for the Kubernetes services
(see CIDR Blocks and Container Engine for Kubernetes on page 864).
• The VCN must have an appropriate number of subnets defined for worker nodes, load balancers and the
Kubernetes API endpoint. Best practice is to use regional subnets to make failover across availability domains
simpler to implement. See Subnet Configuration on page 848.
• The VCN must have security lists defined for each subnet. See Security List Configuration on page 847.
• Oracle recommends DNS Resolution is selected for the VCN.
• If you are using public subnets, the VCN must have an internet gateway. See Internet Gateway Configuration on
page 845.
• If you are using private subnets, the VCN must have a NAT gateway and a service gateway. See NAT Gateway
Configuration on page 845 and Service Gateway Configuration on page 846.
• If the VCN has a NAT gateway, service gateway, or internet gateway, it must have a route table with appropriate
rules defined. See Route Table Configuration on page 846.
See VCNs and Subnets on page 2847 and Example Network Resource Configurations on page 849.
Internet Gateway Configuration
If you intend to use public subnets (for worker nodes, load balancers, or the Kubernetes API endpoint) and the
subnets require access to/from the internet, the VCN must have an internet gateway. The internet gateway must be
specified as the target for the destination CIDR block 0.0.0.0/0 as a route rule in a route table.
See VCNs and Subnets on page 2847 and Example Network Resource Configurations on page 849.
NAT Gateway Configuration
If you intend to use private subnets (for worker nodes or the Kubernetes API endpoint) and the subnets require access
to the internet, the VCN must have a NAT gateway. For example, if you expect deployed applications to download
software or to access third party services.
The NAT gateway must be specified as the target for the destination CIDR block 0.0.0.0/0 as a route rule in a route
table.

See NAT Gateway on page 3275 and Example Network Resource Configurations on page 849.
Service Gateway Configuration
If you intend to use private subnets for worker nodes or the Kubernetes API endpoint, the VCN must have a service
gateway.
Create the service gateway in the same VCN and compartment as the worker nodes subnet and the Kubernetes API
endpoint subnet, and select the All <region> Services in Oracle Services Network option.
The service gateway must be specified as the target for All <region> Services in Oracle Services Network as a
route rule in a route table.
See Access to Oracle Services: Service Gateway on page 3284 and Example Network Resource Configurations on
page 849.
Route Table Configuration

Route Table for Worker Nodes Subnets


If you intend to deploy worker nodes in a public subnet, the subnet route table must have a route rule that specifies the
internet gateway as the target for the destination CIDR block 0.0.0.0/0.
If you intend to deploy worker nodes in a private subnet, the subnet route table must have:
• a route rule that specifies the service gateway as the target for All <region> Services in Oracle Services
Network
• a route rule that specifies the NAT gateway as the target for the destination CIDR block 0.0.0.0/0

Route Table for Kubernetes API Endpoint Subnets


If you intend to deploy the Kubernetes API endpoint in a public subnet, the subnet route table must have a route rule
that specifies the internet gateway as the target for the destination CIDR block 0.0.0.0/0.
If you intend to deploy the Kubernetes API endpoint in a private subnet, the subnet route table must have:
• a route rule that specifies the service gateway as the target for All <region> Services in Oracle Services
Network
• a route rule that specifies the NAT gateway as the target for the destination CIDR block 0.0.0.0/0

Route Table for Load Balancer Subnets


If you intend to deploy load balancers in public subnets, the subnet route table must have a route rule that specifies
the internet gateway as the target for the destination CIDR block 0.0.0.0/0.
If you intend to deploy load balancers in private subnets, the subnet route table can be empty.

Notes about Route Table Configuration


• To avoid the possibility of asymmetric routing, a route table for a public subnet cannot contain both a route rule
that targets an internet gateway as well as a route rule that targets a service gateway (see Issues with access from
Oracle services through a service gateway to your public instances).
• For more information about setting up route tables, see:
• Internet Gateway on page 3271
• NAT Gateway on page 3275
• Access to Oracle Services: Service Gateway on page 3284
• Example Network Resource Configurations on page 849
DHCP Options Configuration
The VCN in which you want to create and deploy clusters must have DHCP options configured. The default DNS
Type value of Internet and VCN Resolver is acceptable.

See DHCP Options on page 2943 and Example Network Resource Configurations on page 849.
Security List Configuration
The VCN in which you want to create and deploy clusters must have security lists defined for worker node subnets,
for the Kubernetes API endpoint subnet, and for load balancer subnets (if specified). The worker node subnets,
Kubernetes API endpoint subnet, and load balancer subnets have different security rule requirements, as described in
this topic. You can add additional rules to meet your specific needs.
See Security Lists on page 2876 and Example Network Resource Configurations on page 849.

Security List for Kubernetes API Endpoint Subnet


The security list for the Kubernetes API endpoint subnet must have the following required ingress rules:

State      Source              Protocol/Dest. Port   Description
Stateful   Worker Nodes CIDR   TCP/6443              Kubernetes worker to Kubernetes API endpoint communication.
Stateful   Worker Nodes CIDR   TCP/12250             Kubernetes worker to control plane communication.
Stateful   Worker Nodes CIDR   ICMP 3,4              Path Discovery.

The security list for the Kubernetes API endpoint subnet can have the following optional ingress rules:

State      Source                          Protocol/Dest. Port   Description
Stateful   0.0.0.0/0 or specific subnets   TCP/6443              Client access to Kubernetes API endpoint.

The security list for the Kubernetes API endpoint subnet must have the following egress rules:

State      Destination                                        Protocol/Dest. Port   Description
Stateful   All <region> Services in Oracle Services Network   TCP/443               Allow Kubernetes control plane to communicate with OKE.
Stateful   Worker Nodes CIDR                                  TCP/ALL               All traffic to worker nodes.
Stateful   Worker Nodes CIDR                                  ICMP 3,4              Path Discovery.

Security List for Worker Node Subnets


Worker nodes are created with public or private IP addresses, according to whether you specify public or private
subnets when defining the node pools in a cluster. Container Engine for Kubernetes must be able to access worker
nodes.
Security lists for worker node subnets must have the following required ingress rules:

State      Source                         Protocol/Dest. Port   Description
Stateful   Worker Nodes CIDR              ALL/ALL               Allow pods on one worker node to communicate with pods on other worker nodes.
Stateful   Kubernetes API Endpoint CIDR   TCP/ALL               Allow Kubernetes control plane to communicate with worker nodes.
Stateful   0.0.0.0/0                      ICMP 3,4              Path Discovery.

Security lists for worker node subnets can have the following optional ingress rules:

State      Source                     Protocol/Dest. Port   Description
Stateful   0.0.0.0/0 or subnet CIDR   TCP/22                (optional) Allow inbound SSH traffic to worker nodes.

Security lists for worker node subnets must have the following required egress rules:

State      Destination                                        Protocol/Dest. Port   Description
Stateful   Worker Nodes CIDR                                  ALL/ALL               Allow pods on one worker node to communicate with pods on other worker nodes.
Stateful   0.0.0.0/0                                          ICMP 3,4              Path Discovery.
Stateful   All <region> Services in Oracle Services Network   TCP/ALL               Allow nodes to communicate with OKE.
Stateful   Kubernetes API Endpoint CIDR                       TCP/6443              Kubernetes worker to Kubernetes API endpoint communication.
Stateful   Kubernetes API Endpoint CIDR                       TCP/12250             Kubernetes worker to control plane communication.

Security lists for worker node subnets can have the following optional egress rules:

State      Destination   Protocol/Dest. Port   Description
Stateful   0.0.0.0/0     TCP/ALL               (optional) Allow worker nodes to communicate with internet.

Security List for Load Balancers


Security lists for load balancers do not require any security rules. The Kubernetes Cloud Controller Manager sets
security rules automatically when Kubernetes services are deployed.
You can use the security list management feature in Kubernetes to manage security list rules yourself. See Specifying
Load Balancer Security List Management Options on page 944.
Subnet Configuration
The characteristics of the cluster you want to create will determine the number of subnets to configure. Best practice
is to use regional subnets to make failover across availability domains simpler to implement.
The VCN in which you want to create and deploy clusters must have at least two (optionally, three) different subnets:
• a Kubernetes API endpoint subnet

• a worker nodes subnet


• (optionally) one or two load balancer subnets
Although you can choose to combine the subnets, and also to combine security rules, this approach is not
recommended because it makes security harder to manage.
The subnet CIDR blocks must not overlap with CIDR blocks you specify for pods running in the cluster (see CIDR
Blocks and Container Engine for Kubernetes on page 864).
The subnets must be configured with the following properties:
• Route Table: see Route Table Configuration on page 846
• DHCP Options: see DHCP Options Configuration on page 846
• Security List: see Security List Configuration on page 847
See VCNs and Subnets on page 2847 and Example Network Resource Configurations on page 849.

Kubernetes API Endpoint Subnet Configuration


The Kubernetes API endpoint subnet hosts an endpoint that provides access to the Kubernetes API on the cluster
control plane.
The Kubernetes API endpoint subnet must be a regional subnet, and can be a private or a public subnet:
• If you specify a public subnet for the Kubernetes API endpoint, you can optionally assign a public IP address to
the Kubernetes API endpoint subnet. The public IP address enables third party cloud services (such as SaaS CI/
CD services) to access the Kubernetes API endpoint.
• If you specify a private subnet for the Kubernetes API endpoint, traffic can access the Kubernetes API endpoint
subnet from another subnet in your VCN, from another VCN, or from your on-premise network (peered with
FastConnect). You can also set up a bastion host on a public subnet to reach the Kubernetes API endpoint.

Worker Node Subnet Configuration


A worker node subnet hosts the worker nodes in a node pool.
The worker node subnet can be either a single regional subnet or multiple AD-specific subnets (one in each of the
availability domains).
The worker node subnet can be either a public subnet or a private subnet. However, we recommend the worker node
subnet is a private subnet for maximum security.

Load Balancer Subnet Configuration


Load balancer subnets host the Oracle Cloud Infrastructure load balancers created by Kubernetes services of type
LoadBalancer.
The load balancer subnets can be single regional subnets or AD-specific subnets (one in each of the availability
domains). However, we recommend regional subnets.
The load balancer subnets can be either public or private subnets, depending on how applications deployed on the
cluster will be accessed. If applications will be accessed internally from within your VCN, use private subnets for
the load balancer subnets. If applications will be accessed from the internet, use public subnets for the load balancer
subnets.

Example Network Resource Configurations


When creating a new cluster, you can use the 'Quick Create' workflow to create new network resources automatically.
Alternatively, you can use the 'Custom Create' workflow to explicitly specify existing network resources. For more
information about the required network resources, see Network Resource Configuration for Cluster Creation and
Deployment on page 845.
This topic gives examples of how you might configure network resources when using the 'Custom Create' workflow
to create highly available clusters in a region with three availability domains:

• Example 1: Cluster with Public Kubernetes API Endpoint, Public Worker Nodes, and Public Load Balancers on
page 850
• Example 2: Cluster with Public Kubernetes API Endpoint, Private Worker Nodes, and Public Load Balancers on
page 853
• Example 3: Cluster with Private Kubernetes API Endpoint, Private Worker Nodes, and Public Load Balancers on
page 857
• Example 4: Cluster with Private Kubernetes API Endpoint, Private Worker Nodes, and Private Load Balancers on
page 861
For an introductory tutorial, see Creating a Cluster with Oracle Cloud Infrastructure Container Engine for Kubernetes.
A number of related Developer Tutorials are also available.
Example 1: Cluster with Public Kubernetes API Endpoint, Public Worker Nodes, and Public Load
Balancers
This example assumes you want the Kubernetes API endpoint, worker nodes, and load balancers accessible directly
from the internet.

VCN

Resource Example
VCN • Name: acme-dev-vcn
• CIDR Block: 10.0.0.0/16
• DNS Resolution: Selected

Internet Gateway • Name: internet-gateway-0

DHCP Options • DNS Type set to Internet and VCN Resolver

Subnets

Resource Example
Public Subnet for Kubernetes API Endpoint Name: KubernetesAPIendpoint with the following
properties:
• Type: Regional
• CIDR Block: 10.0.0.0/30
• Route Table: routetable-KubernetesAPIendpoint
• Subnet access: Public
• DNS Resolution: Selected
• DHCP Options: Default
• Security List: seclist-KubernetesAPIendpoint

Public Subnet for Worker Nodes Name: workernodes with the following properties:
• Type: Regional
• CIDR Block: 10.0.1.0/24
• Route Table: routetable-workernodes
• Subnet access: Public
• DNS Resolution: Selected
• DHCP Options: Default
• Security List: seclist-workernodes

Public Subnet for Load Balancers Name: loadbalancers with the following properties:
• Type: Regional
• CIDR Block: 10.0.2.0/24
• Route Table: routetable-serviceloadbalancers
• Subnet access: Public
• DNS Resolution: Selected
• DHCP Options: Default
• Security List: seclist-loadbalancers

Route Tables

Resource Example
Route Table for Public Kubernetes API Endpoint Name: routetable-KubernetesAPIendpoint, with one
Subnet route rule defined as follows:
• Destination CIDR block: 0.0.0.0/0
• Target Type: Internet Gateway
• Target Internet Gateway: internet-gateway-0

Route Table for Public Worker Nodes Subnet Name: routetable-workernodes, with one route rule
defined as follows:
• Destination CIDR block: 0.0.0.0/0
• Target Type: Internet Gateway
• Target Internet Gateway: internet-gateway-0

Route Table for Public Load Balancers Subnet Name: routetable-serviceloadbalancers, with one route
rule defined as follows:
• Destination CIDR block: 0.0.0.0/0
• Target Type: Internet Gateway
• Target Internet Gateway: internet-gateway-0

Security List Rules for Public Kubernetes API Endpoint Subnet


The seclist-KubernetesAPIendpoint security list has the ingress and egress rules shown here.
Ingress Rules:

State Source Protocol/Dest. Port Description


Stateful 10.0.1.0/24 (Worker Nodes TCP/6443 Kubernetes worker to
CIDR) Kubernetes API endpoint
communication.
Stateful 10.0.1.0/24 (Worker Nodes TCP/12250 Kubernetes worker
CIDR) to control plane
communication.
Stateful 10.0.1.0/24 (Worker Nodes ICMP 3,4 Path Discovery.
CIDR)
Stateful 0.0.0.0/0 or specific CIDR TCP/6443 (optional) External
access to Kubernetes API
endpoint.

Egress Rules:

State: Destination Protocol / Dest. Port Description:


Stateful All <region> Services in TCP/443 Allow Kubernetes control
Oracle Services Network plane to communicate with
OKE.
Stateful 10.0.1.0/24 (Worker Nodes TCP/ALL All traffic to worker nodes.
CIDR)
Stateful 10.0.1.0/24 (Worker Nodes ICMP 3,4 Path Discovery.
CIDR)

Security List Rules for Public Worker Nodes Subnet


The seclist-workernodes security list has the ingress and egress rules shown here.
Ingress Rules:

State: Source Protocol / Dest. Port Description:


Stateful 10.0.1.0/24 (Worker Nodes ALL/ALL Allow pods on one worker
CIDR) node to communicate
with pods on other worker
nodes.

Stateful 10.0.0.0/30 (Kubernetes TCP/ALL Allow Kubernetes control
API Endpoint CIDR) plane to communicate with
worker nodes.
Stateful 0.0.0.0/0 ICMP 3,4 Path Discovery.
Stateful 0.0.0.0/0 or subnet CIDR TCP/22 (optional) Allow inbound
SSH traffic to worker
nodes.

Egress Rules:

State: Destination Protocol / Dest. Port Description:


Stateful 10.0.1.0/24 (Worker Nodes ALL/ALL Allow pods on one worker
CIDR) node to communicate
with pods on other worker
nodes.
Stateful 0.0.0.0/0 ICMP 3,4 Path Discovery.
Stateful All <region> Services in TCP/ALL Allow worker nodes to
Oracle Services Network communicate with OKE.
Stateful 10.0.0.0/30 (Kubernetes TCP/6443 Kubernetes worker to
API Endpoint CIDR) Kubernetes API endpoint
communication.
Stateful 10.0.0.0/30 (Kubernetes TCP/12250 Kubernetes worker
API Endpoint CIDR) to control plane
communication.
Stateful 0.0.0.0/0 TCP/ALL (optional) Allow worker
nodes to communicate with
internet.

Security List Rules for Public Load Balancer Subnet


The seclist-loadbalancers security list has the ingress and egress rules shown here.
Ingress Rules: None
Egress Rules: None
Example 2: Cluster with Public Kubernetes API Endpoint, Private Worker Nodes, and Public Load
Balancers
This example assumes you want the Kubernetes API endpoint and load balancers accessible directly from the internet.
The worker nodes are accessible within the VCN.

VCN

Resource Example
VCN • Name: acme-dev-vcn
• CIDR Block: 10.0.0.0/16
• DNS Resolution: Selected

Internet Gateway • Name: internet-gateway-0

NAT Gateway • Name:nat-gateway-0

Service Gateway • Name: service-gateway-0


• Services: All <region> Services in Oracle Services
Network

DHCP Options • DNS Type set to Internet and VCN Resolver

Subnets

Resource Example
Public Subnet for Kubernetes API Endpoint Name: KubernetesAPIendpoint with the following
properties:
• Type: Regional
• CIDR Block: 10.0.0.0/30
• Route Table: routetable-KubernetesAPIendpoint
• Subnet access: Public
• DNS Resolution: Selected
• DHCP Options: Default
• Security List: seclist-KubernetesAPIendpoint

Private Subnet for Worker Nodes Name: workernodes with the following properties:
• Type: Regional
• CIDR Block: 10.0.1.0/24
• Route Table: routetable-workernodes
• Subnet access: Private
• DNS Resolution: Selected
• DHCP Options: Default
• Security List: seclist-workernodes

Public Subnet for Service Load Balancers Name: loadbalancers with the following properties:
• Type: Regional
• CIDR Block: 10.0.2.0/24
• Route Table: routetable-serviceloadbalancers
• Subnet access: Public
• DNS Resolution: Selected
• DHCP Options: Default
• Security List: seclist-loadbalancers

Route Tables

Resource Example
Route Table for Public Kubernetes API Endpoint Subnet   Name: routetable-KubernetesAPIendpoint, with one route rule defined as follows:
• Destination CIDR block: 0.0.0.0/0
• Target Type: Internet Gateway
• Target: internet-gateway-0

Route Table for Private Worker Nodes Subnet Name: routetable-workernodes, with two route rules
defined as follows:
• Rule for traffic to internet:
• Destination CIDR block: 0.0.0.0/0
• Target Type: NAT Gateway
• Target: nat-gateway-0
• Rule for traffic to OCI services:
• Destination: All <region> Services in Oracle
Services Network
• Target Type: Service Gateway
• Target: service-gateway-0

Route Table for Public Load Balancers Subnet Name: routetable-serviceloadbalancers, with one route
rule defined as follows:
• Destination CIDR block: 0.0.0.0/0
• Target Type: Internet Gateway
• Target: internet-gateway-0


Security List Rules for Public Kubernetes API Endpoint Subnet


The seclist-KubernetesAPIendpoint security list has the ingress and egress rules shown here.
Ingress Rules:

State      Source                                              Protocol/Dest. Port   Description
Stateful   10.0.1.0/24 (Worker Nodes CIDR)                     TCP/6443              Kubernetes worker to Kubernetes API endpoint communication.
Stateful   10.0.1.0/24 (Worker Nodes CIDR)                     TCP/12250             Kubernetes worker to control plane communication.
Stateful   10.0.1.0/24 (Worker Nodes CIDR)                     ICMP 3,4              Path Discovery.
Stateful   0.0.0.0/0 or specific CIDR                          TCP/6443              (optional) External access to Kubernetes API endpoint.

Egress Rules:

State      Destination                                         Protocol/Dest. Port   Description
Stateful   All <region> Services in Oracle Services Network    TCP/443               Allow Kubernetes control plane to communicate with OKE.
Stateful   10.0.1.0/24 (Worker Nodes CIDR)                     TCP/ALL               All traffic to worker nodes.
Stateful   10.0.1.0/24 (Worker Nodes CIDR)                     ICMP 3,4              Path Discovery.

Security List Rules for Private Worker Nodes Subnet


The seclist-workernodes security list has the ingress and egress rules shown here.
Ingress Rules:

State      Source                                              Protocol/Dest. Port   Description
Stateful   10.0.1.0/24 (Worker Nodes CIDR)                     ALL/ALL               Allow pods on one worker node to communicate with pods on other worker nodes.
Stateful   10.0.0.0/30 (Kubernetes API Endpoint CIDR)          TCP/ALL               Allow Kubernetes control plane to communicate with worker nodes.
Stateful   0.0.0.0/0                                           ICMP 3,4              Path Discovery.
Stateful   0.0.0.0/0 or subnet CIDR                            TCP/22                (optional) Allow inbound SSH traffic to worker nodes.

Egress Rules:

State      Destination                                         Protocol/Dest. Port   Description
Stateful   10.0.1.0/24 (Worker Nodes CIDR)                     ALL/ALL               Allow pods on one worker node to communicate with pods on other worker nodes.
Stateful   0.0.0.0/0                                           ICMP 3,4              Path Discovery.
Stateful   All <region> Services in Oracle Services Network    TCP/ALL               Allow worker nodes to communicate with OKE.
Stateful   10.0.0.0/30 (Kubernetes API Endpoint CIDR)          TCP/6443              Kubernetes worker to Kubernetes API endpoint communication.
Stateful   10.0.0.0/30 (Kubernetes API Endpoint CIDR)          TCP/12250             Kubernetes worker to control plane communication.
Stateful   0.0.0.0/0                                           TCP/ALL               (optional) Allow worker nodes to communicate with internet.

Security List Rules for Public Load Balancer Subnet


The seclist-loadbalancers security list has the ingress and egress rules shown here.
Ingress Rules: None
Egress Rules: None
Example 3: Cluster with Private Kubernetes API Endpoint, Private Worker Nodes, and Public Load
Balancers
This example assumes you want only load balancers accessible directly from the internet. The Kubernetes API
endpoint and the worker nodes are accessible within the VCN.


VCN

Resource Example
VCN • Name: acme-dev-vcn
• CIDR Block: 10.0.0.0/16
• DNS Resolution: Selected

Internet Gateway • Name: internet-gateway-0

NAT Gateway • Name: nat-gateway-0

Service Gateway • Name: service-gateway-0


• Services: All <region> Services in Oracle Services
Network

DHCP Options • DNS Type set to Internet and VCN Resolver

Subnets

Resource Example
Private Subnet for Kubernetes API Endpoint Name: KubernetesAPIendpoint with the following
properties:
• Type: Regional
• CIDR Block: 10.0.0.0/30
• Route Table: routetable-KubernetesAPIendpoint
• Subnet access: Private
• DNS Resolution: Selected
• DHCP Options: Default
• Security List: seclist-KubernetesAPIendpoint

Private Subnet for Worker Nodes Name: workernodes with the following properties:
• Type: Regional
• CIDR Block: 10.0.1.0/24
• Route Table: routetable-workernodes
• Subnet access: Private
• DNS Resolution: Selected
• DHCP Options: Default
• Security List: seclist-workernodes

Public Subnet for Service Load Balancers Name: loadbalancers with the following properties:
• Type: Regional
• CIDR Block: 10.0.2.0/24
• Route Table: routetable-serviceloadbalancers
• Subnet access: Public
• DNS Resolution: Selected
• DHCP Options: Default
• Security List: seclist-loadbalancers


Route Tables

Resource Example
Route Table for Private Kubernetes API Endpoint Subnet   Name: routetable-KubernetesAPIendpoint, with two route rules defined as follows:
• Rule for traffic to internet:
• Destination CIDR block: 0.0.0.0/0
• Target Type: NAT Gateway
• Target: nat-gateway-0
• Rule for traffic to OCI services:
• Destination: All <region> Services in Oracle
Services Network
• Target Type: Service Gateway
• Target: service-gateway-0

Route Table for Private Worker Nodes Subnet Name: routetable-workernodes, with two route rules
defined as follows:
• Rule for traffic to internet:
• Destination CIDR block: 0.0.0.0/0
• Target Type: NAT Gateway
• Target: nat-gateway-0
• Rule for traffic to OCI services:
• Destination: All <region> Services in Oracle
Services Network
• Target Type: Service Gateway
• Target: service-gateway-0

Route Table for Public Load Balancers Subnet Name: routetable-serviceloadbalancers, with one route
rule defined as follows:
• Destination CIDR block: 0.0.0.0/0
• Target Type: Internet Gateway
• Target: internet-gateway-0

Security List Rules for Private Kubernetes API Endpoint Subnet


The seclist-KubernetesAPIendpoint security list has the ingress and egress rules shown here.
Ingress Rules:

State      Source                                              Protocol/Dest. Port   Description
Stateful   10.0.1.0/24 (Worker Nodes CIDR)                     TCP/6443              Kubernetes worker to Kubernetes API endpoint communication.
Stateful   10.0.1.0/24 (Worker Nodes CIDR)                     TCP/12250             Kubernetes worker to control plane communication.
Stateful   10.0.1.0/24 (Worker Nodes CIDR)                     ICMP 3,4              Path Discovery.
Stateful   0.0.0.0/0 or specific CIDR                          TCP/6443              (optional) External access to Kubernetes API endpoint.

Egress Rules:

State      Destination                                         Protocol/Dest. Port   Description
Stateful   All <region> Services in Oracle Services Network    TCP/443               Allow Kubernetes control plane to communicate with OKE.
Stateful   10.0.1.0/24 (Worker Nodes CIDR)                     TCP/ALL               All traffic to worker nodes.
Stateful   10.0.1.0/24 (Worker Nodes CIDR)                     ICMP 3,4              Path Discovery.

Security List Rules for Private Worker Nodes Subnet


The seclist-workernodes security list has the ingress and egress rules shown here.
Ingress Rules:

State      Source                                              Protocol/Dest. Port   Description
Stateful   10.0.1.0/24 (Worker Nodes CIDR)                     ALL/ALL               Allow pods on one worker node to communicate with pods on other worker nodes.
Stateful   10.0.0.0/30 (Kubernetes API Endpoint CIDR)          TCP/ALL               Allow Kubernetes control plane to communicate with worker nodes.
Stateful   0.0.0.0/0                                           ICMP 3,4              Path Discovery.
Stateful   0.0.0.0/0 or subnet CIDR                            TCP/22                (optional) Allow inbound SSH traffic to worker nodes.

Egress Rules:

State      Destination                                         Protocol/Dest. Port   Description
Stateful   10.0.1.0/24 (Worker Nodes CIDR)                     ALL/ALL               Allow pods on one worker node to communicate with pods on other worker nodes.
Stateful   0.0.0.0/0                                           ICMP 3,4              Path Discovery.
Stateful   All <region> Services in Oracle Services Network    TCP/ALL               Allow worker nodes to communicate with OKE.
Stateful   10.0.0.0/30 (Kubernetes API Endpoint CIDR)          TCP/6443              Kubernetes worker to Kubernetes API endpoint communication.
Stateful   10.0.0.0/30 (Kubernetes API Endpoint CIDR)          TCP/12250             Kubernetes worker to control plane communication.
Stateful   0.0.0.0/0                                           TCP/ALL               (optional) Allow worker nodes to communicate with internet.

Security List Rules for Public Load Balancer Subnet


The seclist-loadbalancers security list has the ingress and egress rules shown here.
Ingress Rules: None
Egress Rules: None
Example 4: Cluster with Private Kubernetes API Endpoint, Private Worker Nodes, and Private Load
Balancers
This example assumes you want no cluster resources accessible directly from the internet. The Kubernetes API
endpoint, the worker nodes, and the load balancers are accessible within the VCN.

VCN

Resource Example
VCN • Name: acme-dev-vcn
• CIDR Block: 10.0.0.0/16
• DNS Resolution: Selected

Internet Gateway • Name: internet-gateway-0

NAT Gateway • Name: nat-gateway-0

Service Gateway • Name: service-gateway-0


• Services: All <region> Services in Oracle Services
Network

DHCP Options • DNS Type set to Internet and VCN Resolver


Subnets

Resource Example
Private Subnet for Kubernetes API Endpoint Name: KubernetesAPIendpoint with the following
properties:
• Type: Regional
• CIDR Block: 10.0.0.0/30
• Route Table: routetable-KubernetesAPIendpoint
• Subnet access: Private
• DNS Resolution: Selected
• DHCP Options: Default
• Security List: seclist-KubernetesAPIendpoint

Private Subnet for Worker Nodes Name: workernodes with the following properties:
• Type: Regional
• CIDR Block: 10.0.1.0/24
• Route Table: routetable-workernodes
• Subnet access: Private
• DNS Resolution: Selected
• DHCP Options: Default
• Security List: seclist-workernodes

Private Subnet for Service Load Balancers Name: loadbalancers with the following properties:
• Type: Regional
• CIDR Block: 10.0.2.0/24
• Route Table:
• Subnet access: Private
• DNS Resolution: Selected
• DHCP Options: Default
• Security List: seclist-loadbalancers

Route Tables

Resource Example
Route Table for Private Kubernetes API Endpoint Subnet   Name: routetable-KubernetesAPIendpoint, with two route rules defined as follows:
• Rule for traffic to internet:
• Destination CIDR block: 0.0.0.0/0
• Target Type: NAT Gateway
• Target: nat-gateway-0
• Rule for traffic to OCI services:
• Destination: All <region> Services in Oracle
Services Network
• Target Type: Service Gateway
• Target: service-gateway-0

Route Table for Private Worker Nodes Subnet Name: routetable-workernodes, with two route rules
defined as follows:
• Rule for traffic to internet:
• Destination CIDR block: 0.0.0.0/0
• Target Type: NAT Gateway
• Target: nat-gateway-0
• Rule for traffic to OCI services:
• Destination: All <region> Services in Oracle
Services Network
• Target Type: Service Gateway
• Target: service-gateway-0

Route Table for Private Load Balancers Subnet None

Security List Rules for Private Kubernetes API Endpoint Subnet


The seclist-KubernetesAPIendpoint security list has the ingress and egress rules shown here.
Ingress Rules:

State      Source                                              Protocol/Dest. Port   Description
Stateful   10.0.1.0/24 (Worker Nodes CIDR)                     TCP/6443              Kubernetes worker to Kubernetes API endpoint communication.
Stateful   10.0.1.0/24 (Worker Nodes CIDR)                     TCP/12250             Kubernetes worker to control plane communication.
Stateful   10.0.1.0/24 (Worker Nodes CIDR)                     ICMP 3,4              Path Discovery.
Stateful   0.0.0.0/0 or specific CIDR                          TCP/6443              (optional) External access to Kubernetes API endpoint.

Egress Rules:

State      Destination                                         Protocol/Dest. Port   Description
Stateful   All <region> Services in Oracle Services Network    TCP/443               Allow Kubernetes control plane to communicate with OKE.
Stateful   10.0.1.0/24 (Worker Nodes CIDR)                     TCP/ALL               All traffic to worker nodes.
Stateful   10.0.1.0/24 (Worker Nodes CIDR)                     ICMP 3,4              Path Discovery.

Security List Rules for Private Worker Nodes Subnet


The seclist-workernodes security list has the ingress and egress rules shown here.
Ingress Rules:

State      Source                                              Protocol/Dest. Port   Description
Stateful   10.0.1.0/24 (Worker Nodes CIDR)                     ALL/ALL               Allow pods on one worker node to communicate with pods on other worker nodes.
Stateful   10.0.0.0/30 (Kubernetes API Endpoint CIDR)          TCP/ALL               Allow Kubernetes control plane to communicate with worker nodes.
Stateful   0.0.0.0/0                                           ICMP 3,4              Path Discovery.
Stateful   0.0.0.0/0 or subnet CIDR                            TCP/22                (optional) Allow inbound SSH traffic to worker nodes.

Egress Rules:

State      Destination                                         Protocol/Dest. Port   Description
Stateful   10.0.1.0/24 (Worker Nodes CIDR)                     ALL/ALL               Allow pods on one worker node to communicate with pods on other worker nodes.
Stateful   0.0.0.0/0                                           ICMP 3,4              Path Discovery.
Stateful   All <region> Services in Oracle Services Network    TCP/ALL               Allow worker nodes to communicate with OKE.
Stateful   10.0.0.0/30 (Kubernetes API Endpoint CIDR)          TCP/6443              Kubernetes worker to Kubernetes API endpoint communication.
Stateful   10.0.0.0/30 (Kubernetes API Endpoint CIDR)          TCP/12250             Kubernetes worker to control plane communication.
Stateful   0.0.0.0/0                                           TCP/ALL               (optional) Allow worker nodes to communicate with internet.

Security List Rules for Private Load Balancer Subnet


The seclist-loadbalancers security list has the ingress and egress rules shown here.
Ingress Rules: None
Egress Rules: None
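If you prefer to script the security lists in these examples rather than define the rules in the Console, you can pass the rules to the Oracle Cloud Infrastructure CLI as JSON. The following is a minimal sketch only: the compartment and VCN OCIDs are placeholders, and only the first ingress rule of the Kubernetes API endpoint security list is shown, so you would extend the JSON arrays with the remaining rules from the tables above.

oci network security-list create \
  --compartment-id ocid1.compartment.oc1..aaaa... \
  --vcn-id ocid1.vcn.oc1..aaaa... \
  --display-name seclist-KubernetesAPIendpoint \
  --ingress-security-rules '[{"protocol": "6", "source": "10.0.1.0/24", "isStateless": false, "tcpOptions": {"destinationPortRange": {"min": 6443, "max": 6443}}}]' \
  --egress-security-rules '[]'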

CIDR Blocks and Container Engine for Kubernetes


When configuring the VCN and subnets (for the Kubernetes API endpoint, worker nodes and load balancers) to
use with Container Engine for Kubernetes, you specify CIDR blocks to indicate the network addresses that can be
allocated to the resources. See Network Resource Configuration for Cluster Creation and Deployment on page 845.
When creating a cluster with Container Engine for Kubernetes, you specify:
• CIDR blocks for the Kubernetes services
• CIDR blocks that can be allocated to pods running in the cluster (see Creating a Kubernetes Cluster on page
869)


Note the following:


• The CIDR block you specify for the VCN must not overlap with the CIDR block you specify for the Kubernetes
services.
• The Kubernetes API endpoint subnet only requires a small CIDR block, since the cluster only requires one IP
address in this subnet. A /30 CIDR block of network addresses is sufficient for the Kubernetes API endpoint
subnet.
• The CIDR blocks you specify for pods running in the cluster must not overlap with CIDR blocks you specify for
the Kubernetes API endpoint, worker node, and load balancer subnets.
• Each pod running on a worker node is assigned its own network address. Container Engine for Kubernetes
allocates a /25 CIDR block of network addresses for each worker node in a cluster, to assign to pods running on
that node. A /25 CIDR block equates to 128 distinct IP addresses, of which one is reserved. So a maximum of 127
network addresses are available to assign to pods running on each worker node (more than sufficient, given that
the number of pods per node is capped at 110).
• When you create a cluster, you specify a value for the cluster's Pods CIDR Block property, either implicitly in
the case of the 'Quick Create' workflow or explicitly in the case of the 'Custom Create' workflow. You cannot
change the cluster's Pods CIDR Block property after the cluster has been created. The cluster's Pods CIDR Block
property constrains the maximum total number of network addresses available for allocation to pods running on
all the nodes in the cluster, and therefore effectively limits the number of nodes in the cluster. By default, the
cluster's Pods CIDR Block property is set to a /16 CIDR block, making 65,536 network addresses available for
all the nodes in the cluster. Since 128 network addresses are allocated for each node, specifying a /16 CIDR block
for the cluster's Pods CIDR Block property limits the number of nodes in the cluster to 512. This is generally
sufficient. To support more than 512 nodes in a cluster, create a cluster in the 'Custom Create' workflow and
specify a larger value for the cluster's Pods CIDR Block property. For example, specify a /14 CIDR block for the
cluster's Pods CIDR Block property to create a cluster with 262,144 network addresses available for the nodes in
the cluster (more than sufficient, given that the number of nodes per cluster is capped at 1000).
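As a quick sanity check of the arithmetic above, you can reproduce the node-count limit implied by a Pods CIDR Block prefix length in any shell. This is only an illustration of the numbers in this section, not a Container Engine for Kubernetes command:

# Addresses in the Pods CIDR Block divided by the /25 block allocated to each worker node
PODS_PREFIX=16
PER_NODE_PREFIX=25
echo $(( (1 << (32 - PODS_PREFIX)) / (1 << (32 - PER_NODE_PREFIX)) ))   # prints 512
# Setting PODS_PREFIX=14 prints 2048, comfortably above the 1000-node-per-cluster cap.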

Policy Configuration for Cluster Creation and Deployment


When a tenancy is created, an Administrators group is automatically created for the tenancy. Users that are members
of the Administrators group can perform any operation on resources in the tenancy. If all the users that will be
working with Container Engine for Kubernetes are already members of the Administrators group, there's no need to
create additional policies. However, if you want to enable users that are not members of the Administrators group to
use Container Engine for Kubernetes, you must create policies to enable the groups to which those users do belong
to perform operations on resources in the tenancy or in individual compartments. Some policies are required, some
are optional. See Create Required Policy for Groups on page 865 and Create One or More Additional Policies for
Groups on page 867.
Note that in addition to the above policies managed by IAM, you can also use the Kubernetes RBAC Authorizer
to enforce additional fine-grained access control for users on specific clusters via Kubernetes RBAC roles and
clusterroles. See About Access Control and Container Engine for Kubernetes on page 919.
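For example, a cluster administrator who is already authorized on a cluster could grant a specific user Kubernetes-level administrative rights on that cluster by binding the user to the built-in cluster-admin clusterrole. This is only an illustrative sketch; the binding name is arbitrary and the user OCID is a placeholder:

kubectl create clusterrolebinding <my-cluster-admin-binding> --clusterrole=cluster-admin --user=<user-OCID>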
Create Required Policy for Groups
To create, update, and delete clusters and node pools, users that are not members of the Administrators group must
have permissions to work with cluster-related resources. To give users the necessary access, you must create a policy
with a number of required policy statements for the groups to which those users do belong:
1. In the Console, open the navigation menu. Under Governance and Administration, go to Identity and click
Policies. A list of the policies in the compartment you're viewing is displayed.
2. Select the tenancy's root compartment or an individual compartment containing cluster-related resources from the
list on the left.
3. Click Create Policy.
4. Enter the following:
• Name: A name for the policy (for example, acme-dev-team-oke-required-policy) that is unique
within the compartment. If you are creating the policy in the tenancy's root compartment, the name must be unique across all policies in your tenancy. You cannot change this later. Avoid entering confidential information.
• Description: A friendly description. You can change this later if you want to.
• Statement: The following required policy statements to enable users to use Container Engine for Kubernetes
to create, update, and delete clusters and node pools:

Allow group <group-name> to manage instance-family in <location>

Allow group <group-name> to use subnets in <location>

Allow group <group-name> to read virtual-network-family in <location>

Allow group <group-name> to inspect compartments in <location>

Allow group <group-name> to use vnics in <location>

Allow group <group-name> to use network-security-groups in <location>

Allow group <group-name> to use private-ips in <location>

Allow group <group-name> to manage public-ips in <location>

The following required policy statement to enable users to perform any operation on cluster-related resources (this 'catch-all' statement effectively makes users in the group administrators insofar as cluster-related resources are concerned):

Allow group <group-name> to manage cluster-family in <location>

In the above policy statements, replace <location> with either tenancy (if you are creating the policy in
the tenancy's root compartment) or compartment <compartment-name> (if you are creating the policy
in an individual compartment).
Note:
Depending on the type of cluster, some required policy statements might not be necessary:
• To work with "VCN-native" clusters (where the Kubernetes API endpoint is fully integrated with your VCN), the use network-security-groups and use public-ips policy statements are only necessary if the clusters' network security group and public IP address options are selected.
• To work with clusters where the public Kubernetes API endpoint is in an Oracle-managed tenancy, the use network-security-groups, use private-ips, and use public-ips policy statements are unnecessary.
For more information about VCN-native clusters, see Kubernetes Cluster Control Plane and Kubernetes API on page 842.
• Tags: If you have permissions to create a resource, then you also have permissions to apply free-form tags
to that resource. To apply a defined tag, you must have permissions to use the tag namespace. For more
information about tagging, see Resource Tags on page 213. If you are not sure whether to apply tags, then skip
this option (you can apply tags later) or ask your administrator.


5. Click Create.
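Alternatively, if you prefer to create the required policy with the Oracle Cloud Infrastructure CLI rather than the Console, you can pass the policy statements as a JSON array. The following is a minimal sketch only: the compartment OCID and the compartment name acme-dev-compartment are placeholders, the group and policy names reuse the examples above, and only two of the required statements are shown.

oci iam policy create \
  --compartment-id ocid1.compartment.oc1..aaaa... \
  --name acme-dev-team-oke-required-policy \
  --description "Required policy for Container Engine for Kubernetes" \
  --statements '["Allow group acme-dev-team to manage cluster-family in compartment acme-dev-compartment", "Allow group acme-dev-team to use subnets in compartment acme-dev-compartment"]'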
Create One or More Additional Policies for Groups
To enable users that are not members of the Administrators group to use Container Engine for Kubernetes, create
additional policies to enable the groups to which those users do belong to perform operations on cluster-related
resources as follows:
1. In the Console, open the navigation menu. Under Governance and Administration, go to Identity and click
Policies. A list of the policies in the compartment you're viewing is displayed.
2. Select the tenancy's root compartment or an individual compartment containing cluster-related resources from the
list on the left.
3. Click Create Policy.
4. Enter the following:
• Name: A name for the policy (for example, acme-dev-team-oke-additional-policy) that is
unique within the compartment. If you are creating the policy in the tenancy's root compartment, the name
must be unique across all policies in your tenancy. You cannot change this later. Avoid entering confidential
information.
• Description: A friendly description. You can change this later if you want to.
• Statement: A suitable policy statement to allow existing groups to perform operations on cluster-related
resources. In the example policy statements below, replace <location> with either tenancy (if you are creating the policy in the tenancy's root compartment) or compartment <compartment-name> (if you are creating the policy in an individual compartment):
• To enable users in the acme-dev-team group to automatically create and configure associated new network
resources when creating new clusters in the 'Quick Create' workflow, policies must also grant the group:
• VCN_READ and VCN_CREATE permissions. Enter a policy statement like:

Allow group acme-dev-team to manage vcns in <location>


• SUBNET_READ and SUBNET_CREATE permissions. Enter a policy statement like:

Allow group acme-dev-team to manage subnets in <location>


• INTERNET_GATEWAY_CREATE permission. Enter a policy statement like:

Allow group acme-dev-team to manage internet-gateways in <location>


• NAT_GATEWAY_CREATE permission. Enter a policy statement like:

Allow group acme-dev-team to manage nat-gateways in <location>


• ROUTE_TABLE_UPDATE permission. Enter a policy statement like:

Allow group acme-dev-team to manage route-tables in <location>


• SECURITY_LIST_CREATE permission. Enter a policy statement like:

Allow group acme-dev-team to manage security-lists in <location>


• To enable users in the acme-dev-team-cluster-viewers group to simply list the clusters, enter a policy
statement like:

Allow group acme-dev-team-cluster-viewers to inspect clusters in <location>
• To enable users in the acme-dev-team-pool-admins group to list, create, update, and delete node pools,
enter a policy statement like:

Allow group acme-dev-team-pool-admins to use cluster-node-pools in <location>
• To enable users in the acme-dev-team-auditors group to see details of operations performed on clusters,
enter a policy statement like:

Allow group acme-dev-team-auditors to read cluster-work-requests in <location>
• To enable users in the acme-dev-team-sgw group to create a service gateway to enable worker nodes
to access other resources in the same region without exposing data to the public internet, enter a policy
statement like:

Allow group acme-dev-team-sgw to manage service-gateways in <location>
• To enable users in the acme-dev-team group to access clusters using Cloud Shell, enter a policy statement
like:

Allow group acme-dev-team to use cloud-shell in <location>

Note that to access clusters using Cloud Shell, you'll also need to set up the kubeconfig file appropriately
(see Setting Up Cloud Shell Access to Clusters on page 875). For more information about Cloud Shell,
see Cloud Shell.


• Tags: If you have permissions to create a resource, then you also have permissions to apply free-form tags
to that resource. To apply a defined tag, you must have permissions to use the tag namespace. For more
information about tagging, see Resource Tags on page 213. If you are not sure whether to apply tags, then skip
this option (you can apply tags later) or ask your administrator.
5. Click Create.

Creating a Kubernetes Cluster


You can use Container Engine for Kubernetes to create new Kubernetes clusters. To create a cluster, you
must either belong to the tenancy's Administrators group, or belong to a group to which a policy grants the
CLUSTER_MANAGE permission. See Policy Configuration for Cluster Creation and Deployment on page 865.
You first specify basic details for the new cluster (the cluster name, and the Kubernetes version to install on control
plane nodes). You can then create the cluster in one of two ways:
• Using default settings in the 'Quick Create' workflow to create a cluster with new network resources as required.
This approach is the fastest way to create a new cluster. If you accept all the default values, you can create a new
cluster in just a few clicks. New network resources for the cluster are created automatically, including regional
subnets for the Kubernetes API endpoint, for worker nodes, and for load balancers. The regional subnet for load
balancers is public, but you specify whether the regional subnets for the Kubernetes API endpoint and for worker
nodes are public or private. To create a cluster in the 'Quick Create' workflow, you must belong to a group to
which a policy grants the necessary permissions to create the new network resources (see Create One or More
Additional Policies for Groups on page 867).
• Using custom settings in the 'Custom Create' workflow. This approach gives you the most control over the new
cluster. You can explicitly define the new cluster's properties. And you can explicitly specify which existing
network resources to use, including the existing public or private subnets in which to create the Kubernetes API
endpoint, worker nodes, and load balancers.
Note that although you will usually define node pools immediately when defining a new cluster in the 'Custom
Create' workflow, you don't have to. You can create a cluster with no node pools, and add node pools later.
One reason to create a cluster that initially has no node pools is if you intend to install and configure a CNI
network provider like Calico to support Kubernetes NetworkPolicy resources. If you install Calico on a cluster
that has existing node pools in which pods are already running, you'll have to recreate the pods when the Calico
installation is complete. For example, by running the kubectl rollout restart command. If you install
Calico on a cluster before creating any node pools in the cluster (recommended), you can be sure that there will be
no pods to recreate. See Example: Installing Calico and Setting Up Network Policies on page 960.
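For example, if you did install Calico onto a cluster that already had running pods, you could recreate the pods managed by a particular deployment afterwards with a command along the following lines (the deployment name and namespace are placeholders):

kubectl rollout restart deployment <deployment-name> --namespace <namespace>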
Regardless of how you create a cluster, Container Engine for Kubernetes gives names to worker nodes in the
following format:
oke-c<part-of-cluster-OCID>-n<part-of-node-pool-OCID>-s<part-of-subnet-OCID>-<slot>
where:
• oke is the standard prefix for all worker nodes created by Container Engine for Kubernetes
• c<part-of-cluster-OCID> is a portion of the cluster's OCID, prefixed with the letter c
• n<part-of-node-pool-OCID> is a portion of the node pool's OCID, prefixed with the letter n
• s<part-of-subnet-OCID> is a portion of the subnet's OCID, prefixed with the letter s
• <slot> is an ordinal number of the node in the subnet (for example, 0, 1)
For example, if you specified a cluster is to have two nodes in a node pool, the two nodes might be named:
• oke-cywiqripuyg-nsgagklgnst-st2qczvnmba-0
• oke-cywiqripuyg-nsgagklgnst-st2qczvnmba-1
Do not change the auto-generated names that Container Engine for Kubernetes gives to worker nodes.
To ensure high availability, Container Engine for Kubernetes:


• creates the Kubernetes Control Plane on multiple Oracle-managed control plane nodes (distributing the control
plane nodes across different availability domains in a region, where supported)
• creates worker nodes in each of the fault domains in an availability domain (distributing the worker nodes as
evenly as possible across the fault domains, subject to any other infrastructure restrictions)

Using the Console to create a Cluster with Default Settings in the 'Quick Create'
workflow
To create a cluster with default settings and new network resources in the 'Quick Create' workflow using Container
Engine for Kubernetes:
1. In the Console, open the navigation menu. Under Solutions and Platform, go to Developer Services and click
Kubernetes Clusters.
2. Choose a Compartment you have permission to work in.
3. On the Cluster List page, click Create Cluster.
4. In the Create Cluster dialog, select Quick Create and click Launch Workflow.
5. On the Create Cluster page, either just accept the default configuration details for the new cluster, or specify
alternatives as follows:
• Name: The name of the new cluster. Either accept the default name or enter a name of your choice. Avoid
entering confidential information.
• Compartment: The compartment in which to create the new cluster and the associated network resources.
• Kubernetes Version: The version of Kubernetes to run on the control plane nodes and worker nodes of
the cluster. Either accept the default version or select a version of your choice. Amongst other things, the
Kubernetes version you select determines the default set of admission controllers that are turned on in the
created cluster (see Supported Admission Controllers on page 924).
• Kubernetes API Endpoint: The type of access to the cluster's Kubernetes API endpoint. The Kubernetes API
endpoint is either private (accessible by other subnets in the VCN) or public (accessible directly from internet):
• Private Endpoint: A private regional subnet is created and the Kubernetes API endpoint is hosted in that
subnet. The Kubernetes API endpoint is assigned a private IP address.
• Public Endpoint: A public regional subnet is created and the Kubernetes API endpoint is hosted in that
subnet. The Kubernetes API endpoint is assigned a public IP address as well as a private IP address.
Private and public endpoints are assigned a network security group with a security rule that grants access to the
Kubernetes API endpoint (TCP/6443).
For more information, see Kubernetes Cluster Control Plane and Kubernetes API on page 842.
• Kubernetes Worker Nodes: The type of access to the cluster's worker nodes. The worker nodes are either
private (accessible through other VCN subnets) or public (accessible directly from internet):
• Private: A private regional subnet is created to host worker nodes. The worker nodes are assigned a private
IP address.
• Public: A public regional subnet is created to host worker nodes. The worker nodes are assigned a public
IP address as well as a private IP address.
Note that a public regional subnet is always created to host load balancers in clusters created in the 'Quick
Create' workflow, regardless of your selection here.
• Shape: The shape to use for each node in the node pool. The shape determines the number of CPUs and the
amount of memory allocated to each node. If you select a flexible shape, you can explicitly specify the number
of CPUs and the amount of memory. The list shows only those shapes available in your tenancy that are
supported by Container Engine for Kubernetes. See Supported Images (Including Custom Images) and Shapes
for Worker Nodes on page 922.
• Number of Nodes: The number of worker nodes to create in the node pool, placed in the regional subnet
created for the cluster. The nodes are distributed as evenly as possible across the availability domains in a
region (or in the case of a region with a single availability domain, across the fault domains in that availability
domain).


6. Either accept the default size of worker node boot volumes (as determined from the default image used for
worker nodes) or click Specify a Custom Boot Volume Size and specify an alternative size for worker node
boot volumes in Boot Volume Size in GB:. If you do specify a custom boot volume size, it must be larger than
the image's default boot volume size. The minimum and maximum sizes you can specify are 50 GB and 32 TB
respectively. See Custom Boot Volume Sizes on page 614.
7. Either accept the defaults for advanced cluster options, or click Show Advanced Options and specify alternatives
as follows:
• Public SSH Key: (Optional) The public key portion of the key pair you want to use for SSH access to each
node in the node pool. The public key is installed on all worker nodes in the cluster. Note that if you don't
specify a public SSH key, Container Engine for Kubernetes will provide one. However, since you won't have
the corresponding private key, you will not have SSH access to the worker nodes. Note that if you specify
that you want the worker nodes in the cluster to be hosted in a private regional subnet, you cannot use SSH to
access them directly (see Connecting to Worker Nodes in Private Subnets Using SSH on page 904).
• Kubernetes Labels: (Optional) One or more labels (in addition to a default label) to add to worker nodes in
the node pool to enable the targeting of workloads at specific node pools.
8. Click Next to review the details you entered for the new cluster.
9. Click Create Cluster to create the new network resources and the new cluster.
Container Engine for Kubernetes starts creating resources (as shown in the Creating cluster and associated
network resources dialog):
• the network resources (such as the VCN, internet gateway, NAT gateway, route tables, security lists, a regional
subnet for worker nodes and another regional subnet for load balancers), with auto-generated names in the
format oke-<resource-type>-quick-<cluster-name>-<creation-date>
• the cluster, with the name you specified
• the node pool, named pool1
• worker nodes, with auto-generated names in the format oke-c<part-of-cluster-OCID>-n<part-of-node-pool-OCID>-s<part-of-subnet-OCID>-<slot>
Do not change the resource names that Container Engine for Kubernetes has auto-generated. Note that if the
cluster is not created successfully for some reason (for example, if you have insufficient permissions or if you've
exceeded the cluster limit for the tenancy), any network resources created during the cluster creation process are
not deleted automatically. You will have to manually delete any such unused network resources.
10. Click Close to return to the Console.
Initially, the new cluster appears in the Console with a status of Creating. When the cluster has been created, it has a
status of Active.
Container Engine for Kubernetes also creates a Kubernetes kubeconfig configuration file that you use to access the
cluster using kubectl.
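If you want to confirm the new cluster's status from the command line rather than the Console, you can list the clusters in a compartment with the Oracle Cloud Infrastructure CLI. A minimal sketch, with a placeholder compartment OCID:

oci ce cluster list --compartment-id ocid1.compartment.oc1..aaaa...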

Using the Console to create a Cluster with Explicitly Defined Settings in the
'Custom Create' workflow
To create a cluster with explicitly defined settings and existing network resources in the 'Custom Create' workflow
using Container Engine for Kubernetes:
1. In the Console, open the navigation menu. Under Solutions and Platform, go to Developer Services and click
Kubernetes Clusters.
2. Choose a Compartment you have permission to work in.
3. On the Cluster List page, click Create Cluster.
4. In the Create Cluster dialog, select Custom Create and click Launch Workflow.


5. On the Create Cluster page, either just accept the default configuration details for the new cluster, or specify
alternatives as follows:
• Name: The name of the new cluster. Either accept the default name or enter a name of your choice. Avoid
entering confidential information.
• Compartment: The compartment in which to create the new cluster.
• Kubernetes Version: The version of Kubernetes to run on the cluster's control plane nodes and worker nodes.
Either accept the default version or select a version of your choice. Amongst other things, the Kubernetes
version you select determines the default set of admission controllers that are turned on in the created cluster
(see Supported Admission Controllers on page 924).
6. Either accept the defaults for advanced cluster options, or click Show Advanced Options and set the options as
follows:
a. Specify whether to encrypt Kubernetes secrets at rest in the etcd key-value store for the cluster using the Vault
service:
• No Encryption: Kubernetes secrets at rest in the etcd key-value store are not encrypted.
• Encrypt Using Customer-Managed Keys: Encrypt Kubernetes secrets in the etcd key-value store and
specify:
• Choose a Vault in <compartment-name>: The vault that contains the master encryption key, from
the list of vaults in the specified compartment. By default, <compartment-name> is the compartment
in which you are creating the cluster, but you can select a different compartment by clicking Change
Compartment.
• Choose a Key in <compartment-name>: The name of the master encryption key, from the list of keys
in the specified compartment. By default, <compartment-name> is the compartment in which you are
creating the cluster, but you can select a different compartment by clicking Change Compartment.
Note that you cannot change the master encryption key after the cluster has been created.
Note that if you do want to use encryption, a suitable master encryption key, dynamic group, and policy must
already exist before you can create the cluster. For more information, see Encrypting Kubernetes Secrets at
Rest in Etcd on page 900.
b. Specify whether to control the operations that pods are allowed to perform on the cluster by enforcing pod
security policies:
• Not Enforced: Do not enforce pod security policies.
• Enforced: Do enforce pod security policies, by enabling the PodSecurityPolicy admission controller. Only
pods that meet the conditions in a pod security policy are accepted by the cluster. For more information,
see Using Pod Security Policies with Container Engine for Kubernetes on page 904.
Caution:
It is very important to note that when you enable a cluster's PodSecurityPolicy admission controller, no application pods can start on the cluster unless suitable pod security policies exist, along with roles (or clusterroles) and rolebindings (or clusterrolebindings) to associate pods with policies. You will not be able to run application pods on a cluster with an enabled PodSecurityPolicy admission controller unless these prerequisites are met.
We strongly recommend you use PodSecurityPolicy admission controllers as follows:
• Whenever you create a new cluster, enable the PodSecurityPolicy admission controller.
• Immediately after creating a new cluster, create pod security policies, along with roles (or clusterroles) and rolebindings (or clusterrolebindings).


7. Click Next and specify the existing network resources to use for the new cluster on the Network Setup page:
• VCN in <compartment-name>: The existing virtual cloud network that has been configured for cluster
creation and deployment. By default, <compartment-name> is the compartment in which you are creating
the cluster, but you can select a different compartment by clicking Change Compartment. See VCN
Configuration on page 845.
• Kubernetes Service LB Subnets: Optionally, the existing subnets that have been configured to host load
balancers. Load balancer subnets must be different from worker node subnets, can be public or private,
and can be regional (recommended) or AD-specific. You don't have to specify any load balancer subnets.
However, if you do specify load balancer subnets, the number of load balancer subnets to specify depends on
the region in which you are creating the cluster and whether the subnets are regional or AD-specific.
If you are creating a cluster in a region with three availability domains, you can specify:
• Zero or one load balancer regional subnet (recommended).
• Zero or two load balancer AD-specific subnets. If you specify two AD-specific subnets, the two subnets
must be in different availability domains.
If you are creating a cluster in a region with a single availability domain, you can specify:
• Zero or one load balancer regional subnet (recommended).
• Zero or one load balancer AD-specific subnet.
See Subnet Configuration on page 848.
• Kubernetes API Endpoint Subnet: A regional subnet to host the cluster's Kubernetes API endpoint. The
Kubernetes API endpoint is assigned a private IP address. The subnet you specify can be public or private.
To simplify access management, Oracle recommends the Kubernetes API endpoint is in a different subnet to
worker nodes and load balancers. For more information, see Kubernetes Cluster Control Plane and Kubernetes
API on page 842.
• Use network security groups to control traffic: Restrict the access to the cluster's Kubernetes API endpoint
using one or more network security groups that you specify. For more information about the security rules to
specify for the network security group, see Security Rules on page 2870.
• Assign a public IP address to the API endpoint: If you selected a public subnet for the Kubernetes API
endpoint, you can assign a public IP to the Kubernetes API endpoint (as well as the private IP address).
8. Either accept the defaults for advanced cluster options, or click Show Advanced Options and specify alternatives
as follows:
• Kubernetes Service CIDR Block: The available group of network addresses that can be exposed as
Kubernetes services (ClusterIPs), expressed as a single, contiguous IPv4 CIDR block. For example,
10.96.0.0/16. The CIDR block you specify must not overlap with the CIDR block for the VCN. See CIDR
Blocks and Container Engine for Kubernetes on page 864.
• Pods CIDR Block: The available group of network addresses that can be allocated to pods running in the
cluster, expressed as a single, contiguous IPv4 CIDR block. For example, 10.244.0.0/16. The CIDR block
you specify must not overlap with the CIDR blocks for subnets in the VCN, and can be outside the
VCN CIDR block. See CIDR Blocks and Container Engine for Kubernetes on page 864.
9. Click Next and specify configuration details for the first node pool in the cluster on the Node Pools page:
• Name: A name of your choice for the new node pool. Avoid entering confidential information.
• Version: The version of Kubernetes to run on each worker node in the node pool. By default, the version of
Kubernetes specified for the control plane nodes is selected. The Kubernetes version on worker nodes must
be either the same version as that on the control plane nodes, or an earlier version that is still compatible. See
Kubernetes Versions and Container Engine for Kubernetes on page 925.
• Image: The image to use on each node in the node pool. An image is a template of a virtual hard drive that
determines the operating system and other software for the node. See Supported Images (Including Custom
Images) and Shapes for Worker Nodes on page 922.
• Shape: The shape to use for each node in the node pool. The shape determines the number of CPUs and the
amount of memory allocated to each node. If you select a flexible shape, you can explicitly specify the number
of CPUs and the amount of memory. The list shows only those shapes available in your tenancy that are supported by Container Engine for Kubernetes. See Supported Images (Including Custom Images) and Shapes for Worker Nodes on page 922.
• Number of Nodes: The number of worker nodes to create in the node pool, placed in the availability domains
you select, and in the regional subnet (recommended) or AD-specific subnet you specify for each availability
domain.
• Availability Domain 1:
• Availability Domain: An availability domain in which to place worker nodes.
• Subnet: A regional subnet (recommended) or AD-specific subnet configured to host worker nodes. If you
specified load balancer subnets, the worker node subnets must be different. The subnets you specify can be
public or private, and can be regional (recommended) or AD-specific. See Subnet Configuration on page
848.
Optionally click Add Availability Domain to select additional domains and subnets in which to place worker
nodes.
When they are created, the worker nodes are distributed as evenly as possible across the availability domains
you select (or in the case of a single availability domain, across the fault domains in that availability domain).
• Specify a Custom Boot Volume Size: Either accept the default size of worker node boot volumes (as
determined from the image used for worker nodes) or click Specify a Custom Boot Volume Size and specify an alternative size for worker node
boot volumes in Boot Volume Size in GB:. If you do specify a custom boot volume size, it must be larger
than the image's default boot volume size. The minimum and maximum sizes you can specify are 50 GB and
32 TB respectively. See Custom Boot Volume Sizes on page 614.
• Public SSH Key: (Optional) The public key portion of the key pair you want to use for SSH access to each
node in the node pool. The public key is installed on all worker nodes in the cluster. Note that if you don't
specify a public SSH key, Container Engine for Kubernetes will provide one. However, since you won't have
the corresponding private key, you will not have SSH access to the worker nodes. Note that you cannot use
SSH to access directly any worker nodes in private subnets (see Connecting to Worker Nodes in Private
Subnets Using SSH on page 904).
• Kubernetes Labels: (Optional) One or more labels (in addition to a default label) to add to worker nodes in
the node pool to enable the targeting of workloads at specific node pools.
10. (Optional) Click Another node pool and specify configuration details for a second and subsequent node pools in
the cluster.
If you define multiple node pools in a cluster, you can host all of them on a single AD-specific subnet. However,
it's best practice to host different node pools for a cluster on a regional subnet (recommended) or on different AD-
specific subnets (one in each availability domain in the region).
11. Click Next to review the details you entered for the new cluster.
12. Click Create Cluster to create the new cluster.
Container Engine for Kubernetes starts creating the cluster with the name you specified.
If you specified details for one or more node pools, Container Engine for Kubernetes creates:
• node pools with the names you specified
• worker nodes with auto-generated names in the format oke-c<part-of-cluster-OCID>-n<part-of-node-pool-OCID>-s<part-of-subnet-OCID>-<slot>
Do not change the auto-generated names of worker nodes.
13. Click Close to return to the Console.
Initially, the new cluster appears in the Console with a status of Creating. When the cluster has been created, it has a
status of Active.
Container Engine for Kubernetes also creates a Kubernetes kubeconfig configuration file that you use to access the
cluster using kubectl.


Using the API


For information about using the API and signing requests, see REST APIs on page 4409 and Security Credentials on
page 181. For information about SDKs, see Software Development Kits and Command Line Interface on page 4262.
Use the CreateCluster operation to create a cluster.
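From the CLI, the equivalent of the CreateCluster operation is oci ce cluster create. Because the operation takes a large number of inputs, one convenient pattern is to generate a JSON skeleton, fill it in, and submit it. This is only a sketch, and the file name is a placeholder:

oci ce cluster create --generate-full-command-json-input > create-cluster.json
# Edit create-cluster.json to set the compartment OCID, cluster name, Kubernetes version,
# VCN OCID, and subnet OCIDs described earlier in this chapter, then run:
oci ce cluster create --from-json file://create-cluster.json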

Setting Up Cluster Access


To access a cluster using kubectl, you have to set up a Kubernetes configuration file (commonly known as a
'kubeconfig' file) for the cluster. The kubeconfig file (by default named config and stored in the $HOME/.kube
directory) provides the necessary details to access the cluster. Having set up the kubeconfig file, you can start using
kubectl to manage the cluster.
The steps to follow when setting up the kubeconfig file depend on how you want to access the cluster:
• To access the cluster using kubectl in Cloud Shell, run an Oracle Cloud Infrastructure CLI command in the Cloud
Shell window to set up the kubeconfig file. Note that Cloud Shell access is currently only available to clusters that
have a Kubernetes API endpoint with a public IP address.
See Setting Up Cloud Shell Access to Clusters on page 875.
• To access the cluster using a local installation of kubectl:
• Generate an API signing key pair (if you don't already have one).
• Upload the public key of the API signing key pair.
• Install and configure the Oracle Cloud Infrastructure CLI.
• Set up the kubeconfig file.
See Setting Up Local Access to Clusters on page 876.

Setting Up Cloud Shell Access to Clusters


When a cluster's Kubernetes API endpoint has a public IP address, you can access the cluster in Cloud Shell by
setting up a kubeconfig file.
To set up the kubeconfig file:
Step 1: Set up the kubeconfig file
1. In the Console, open the navigation menu. Under Solutions and Platform, go to Developer Services and click
Kubernetes Clusters.
2. Choose a Compartment you have permission to work in.
3. On the Cluster List page, click the name of the cluster you want to access using kubectl. The Cluster page shows
details of the cluster.
4. Click the Access Cluster button to display the Access Your Cluster dialog box.
5. Click Cloud Shell Access.
6. Click Launch Cloud Shell to display the Cloud Shell window. For more information about Cloud Shell
(including the required IAM policy), see Cloud Shell.


7. Run the Oracle Cloud Infrastructure CLI command to set up the kubeconfig file and save it in a location
accessible to kubectl.
For example, enter the following command (or copy and paste it from the Access Your Cluster dialog box) in the
Cloud Shell window:

oci ce cluster create-kubeconfig --cluster-id ocid1.cluster.oc1.phx.aaaaaaaaae... --file $HOME/.kube/config --region us-phoenix-1 --token-version 2.0.0

where ocid1.cluster.oc1.phx.aaaaaaaaae... is the OCID of the current cluster. For convenience, the command in the
Access Your Cluster dialog box already includes the cluster's OCID.
Note that if a kubeconfig file already exists in the location you specify, details about the cluster will be added as a
new context to the existing kubeconfig file. The current-context: element in the kubeconfig file will be set
to point to the newly-added context.
Tip:

For clipboard operations in the Cloud Shell window, Windows users can
use Ctrl-C or Ctrl-Insert to copy, and Shift-Insert to paste. For Mac OS
users, use Cmd-C to copy and Cmd-V to paste.
8. If you don't save the kubeconfig file in the default location ($HOME/.kube) or with the default name (config),
set the value of the KUBECONFIG environment variable to point to the name and location of the kubeconfig file.
For example, enter the following command in the Cloud Shell window:

export KUBECONFIG=$HOME/.kube/config

Step 2: Verify that kubectl can access the cluster


Verify that kubectl can connect to the cluster by entering the following command in the Cloud Shell window:

$ kubectl get nodes

Information about the nodes in the cluster is shown.


You can now use kubectl to perform operations on the cluster.

Setting Up Local Access to Clusters


When a cluster's Kubernetes API endpoint does not have a public IP address, you can access the cluster from a
local workstation if your network is peered with the cluster's VCN. If there is a bastion host on a public subnet of
the cluster's VCN, you can optionally complete an additional step to set up an SSH tunnel to the Kubernetes API
endpoint.
To set up the kubeconfig file:
Step 1: Generate an API signing key pair
If you already have an API signing key pair, go straight to the next step. If not:
1. Use OpenSSL commands to generate the key pair in the required PEM format. If you're using Windows, you'll
need to install Git Bash for Windows and run the commands with that tool. See How to Generate an API Signing
Key on page 4216.
2. Copy the contents of the public key to the clipboard (you'll need to paste the value into the Console later).
Step 2: Upload the public key of the API signing key pair
1. In the top-right corner of the Console, open the Profile menu and then click User Settings to view the details.
2. Click Add Public Key.


3. Paste the public key's value into the window and click Add.
The key is uploaded and its fingerprint is displayed (for example,
d1:b2:32:53:d3:5f:cf:68:2d:6f:8b:5f:77:8f:07:13).
Step 3: Install and configure the Oracle Cloud Infrastructure CLI
1. Install the Oracle Cloud Infrastructure CLI version 2.6.4 (or later). See Quickstart on page 4231.
2. Configure the Oracle Cloud Infrastructure CLI. See Configuring the CLI on page 4238.
Step 4: Set up the kubeconfig file
1. In the Console, open the navigation menu. Under Solutions and Platform, go to Developer Services and click
Kubernetes Clusters.
2. Choose a Compartment you have permission to work in.
3. On the Cluster List page, click the name of the cluster you want to access using kubectl. The Cluster page shows
details of the cluster.
4. Click the Access Cluster button to display the Access Your Cluster dialog box.
5. Click Local Access.
6. Create a directory to contain the kubeconfig file. By default, the expected directory name is $HOME/.kube.
For example, on Linux, enter the following command (or copy and paste it from the Access Your Cluster dialog
box) in a local terminal window:

mkdir -p $HOME/.kube
7. Run the Oracle Cloud Infrastructure CLI command to set up the kubeconfig file and save it in a location
accessible to kubectl.
For example, on Linux, enter the following command (or copy and paste it from the Access Your Cluster dialog
box) in a local terminal window:

oci ce cluster create-kubeconfig --cluster-id ocid1.cluster.oc1.phx.aaaaaaaaae... --file $HOME/.kube/config --region us-phoenix-1 --token-version 2.0.0

where ocid1.cluster.oc1.phx.aaaaaaaaae... is the OCID of the current cluster. For convenience, the command in the
Access Your Cluster dialog box already includes the cluster's OCID.
Note that if a kubeconfig file already exists in the location you specify, details about the cluster will be added as a
new context to the existing kubeconfig file. The current-context: element in the kubeconfig file will be set
to point to the newly-added context.
8. If you don't save the kubeconfig file in the default location ($HOME/.kube) or with the default name (config),
set the value of the KUBECONFIG environment variable to point to the name and location of the kubeconfig file.
For example, on Linux, enter the following command (or copy and paste it from the Access Your Cluster dialog
box) in a local terminal window:

export KUBECONFIG=$HOME/.kube/config

Step 5: (Optional, for bastion host access) Set up an SSH tunnel


If there is a bastion host on a public subnet of the cluster's VCN, you can optionally set up an SSH tunnel between the
local workstation and the cluster's Kubernetes API endpoint:
1. Open the kubeconfig file you saved in the previous step.


2. Change the IP address specified for server in the kubeconfig file:


a. Locate the line:

server: https://x.x.x.x:6443
b. Change the line to:

server: https://127.0.0.1:6443
3. Set up an SSH tunnel to the cluster's Kubernetes API endpoint by entering:

ssh -fNT -L 6443:<k8s-api-endpoint-ip>:6443 -i <bastion-ssh-private-key> opc@<bastion-public-ip>

where:
• <k8s-api-endpoint-ip> is the private IP address of the cluster's Kubernetes API endpoint.
• <bastion-ssh-private-key> is the path to your private ssh key to access the bastion host.
• <bastion-public-ip> is the public IP address of the bastion host.
For more information about bastion hosts, see Bastion Hosts: Protected Access for Virtual Cloud Networks.
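
For example, with hypothetical values in place of the placeholders (10.0.0.10 for the API endpoint's private IP address, ~/.ssh/bastion_key for the private key, and 203.0.113.5 for the bastion host's public IP address), the tunnel command might look like this:

ssh -fNT -L 6443:10.0.0.10:6443 -i ~/.ssh/bastion_key opc@203.0.113.5

# Optionally, on Linux, confirm that the tunnel is listening on local port 6443.
ss -ltn | grep 6443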
Step 6: Verify that kubectl can access the cluster
1. Verify that kubectl is available by entering the following command in a local terminal window:

kubectl version

The response shows:


• the version of kubectl installed and running locally
• the version of Kubernetes (strictly speaking, the version of the kube-apiserver) running on the cluster's control
plane nodes
Note that the kubectl version must be within one minor version (older or newer) of the Kubernetes version running
on the control plane nodes. If kubectl is more than one minor version older or newer, install an appropriate version
of kubectl. See Kubernetes version and version skew support policy in the Kubernetes documentation.
If the command returns an error indicating that kubectl is not available, install kubectl (see the kubectl
documentation), and repeat this step.
2. Verify that kubectl can connect to the cluster by entering the following command in a local terminal window:

kubectl get nodes

Information about the nodes in the cluster is shown.


You can now use kubectl to perform operations on the cluster.

Notes about Kubeconfig Files


Note the following about kubeconfig files:
• A single kubeconfig file can include the details for multiple clusters, as multiple contexts. The cluster on which
operations will be performed is specified by the current-context: element in the kubeconfig file (see the example
after these notes).
• A kubeconfig file includes an Oracle Cloud Infrastructure CLI command that dynamically generates an
authentication token and inserts it when you run a kubectl command. The Oracle Cloud Infrastructure CLI must be
available on your shell's executable path (for example, $PATH on Linux).
• The authentication tokens generated by the Oracle Cloud Infrastructure CLI command in the kubeconfig file are
short-lived, cluster-scoped, and specific to individual users. As a result, you cannot share kubeconfig files between
users to access Kubernetes clusters.


• The Oracle Cloud Infrastructure CLI command in the kubeconfig file uses your current CLI profile when
generating an authentication token. If you have defined multiple profiles in different tenancies in the
CLI configuration file (for example, in ~/.oci/config), specify which profile to use when generating the
authentication token as follows. In both cases, <profile-name> is the name of the profile defined in the
CLI configuration file:
• Add --profile to the args: section of the kubeconfig file as follows:

user:
  exec:
    apiVersion: client.authentication.k8s.io/v1beta1
    args:
    - ce
    - cluster
    - generate-token
    - --cluster-id
    - <cluster ocid>
    - --profile
    - <profile-name>
    command: oci
    env: []
• Set the OCI_CLI_PROFILE environment variable to the name of the profile defined in the CLI configuration
file before running kubectl commands. For example:

export OCI_CLI_PROFILE=<profile-name>

kubectl get nodes


• The authentication tokens generated by the Oracle Cloud Infrastructure CLI command in the kubeconfig file
are appropriate to authenticate individual users accessing the cluster using kubectl. However, the generated
authentication tokens are unsuitable if you want other processes and tools to access the cluster, such as continuous
integration and continuous delivery (CI/CD) tools. In this case, consider creating a Kubernetes service account
and adding its associated authentication token to the kubeconfig file. For more information, see Adding a Service
Account Authentication Token to a Kubeconfig File on page 893.
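
For example, when a single kubeconfig file contains several clusters as separate contexts, you can list the contexts and switch the cluster that kubectl operates on (a minimal sketch):

# List all contexts in the kubeconfig file; the current context is marked with an asterisk.
kubectl config get-contexts

# Switch kubectl to a different cluster by selecting its context.
kubectl config use-context <context-name>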

Upgrading Kubeconfig Files from Version 1.0.0 to Version 2.0.0


Container Engine for Kubernetes currently supports kubeconfig version 2.0.0 files, and no longer supports kubeconfig
version 1.0.0 files.
Enhancements in kubeconfig version 2.0.0 files provide security improvements for your Kubernetes environment,
including short-lived cluster-scoped tokens with automated refreshing, and support for instance principals to access
Kubernetes clusters. Additionally, authentication tokens are generated on-demand for each cluster, so kubeconfig
version 2.0.0 files cannot be shared between users to access Kubernetes clusters (unlike kubeconfig version 1.0.0
files).
Note that kubeconfig version 2.0.0 files are not compatible with kubectl versions prior to version 1.11.9. If you are
currently running kubectl version 1.10.x or older, upgrade kubectl to version 1.11.9 or later. For more information
about compatibility between different versions of Kubernetes and kubectl, see the Kubernetes documentation.
Follow the instructions below to determine the current version of kubeconfig files, and how to upgrade any remaining
kubeconfig version 1.0.0 files to version 2.0.0.

Determine the kubeconfig file version


To determine the version of a cluster's kubeconfig file:


1. In a terminal window (the Cloud Shell window or a local terminal window as appropriate), enter the following
command to see the format of the kubeconfig file currently pointed at by the KUBECONFIG environment variable:

kubectl config view

2. If the kubeconfig file is version 1.0.0, you see a response in the following format:

users:
- name: <username>
  user:
    token: <token-value>

If you see a response in the above format, you have to upgrade the kubeconfig file. See Upgrading Kubeconfig Files
from Version 1.0.0 to Version 2.0.0 on page 879.
3. If the kubeconfig file is version 2.0.0, you see a response in the following format:

user:
  exec:
    apiVersion: client.authentication.k8s.io/v1beta1
    args:
    - ce
    - cluster
    - generate-token
    - --cluster-id
    - <cluster ocid>
    command: oci
    env: []

If you see a response in the above format, no further action is required.

Upgrade a kubeconfig version 1.0.0 file to version 2.0.0


To upgrade a kubeconfig version 1.0.0 file:
1. In the case of a local installation of kubectl, confirm that Oracle Cloud Infrastructure CLI version 2.6.4 (or later) is
installed by entering:

oci --version

If the Oracle Cloud Infrastructure CLI version is earlier than version 2.6.4, upgrade the CLI to a later version. See
Upgrading the CLI on page 4257.
2. Follow the appropriate instructions to set up the kubeconfig file for use in Cloud Shell or locally (see Setting Up
Cloud Shell Access to Clusters on page 875 or Setting Up Local Access to Clusters on page 876). Running
the oci ce cluster create-kubeconfig command shown in the Access Your Cluster dialog box
upgrades the existing kubeconfig version 1.0.0 file. If you change the name or location of the kubeconfig file, set
the KUBECONFIG environment variable to point to the new name and location of the file.
3. Confirm the kubeconfig file is now version 2.0.0:
a. In a terminal window (the Cloud Shell window or a local terminal window as appropriate), enter:

kubectl config view


b. Confirm that the response is in the following format:

user:
  exec:
    apiVersion: client.authentication.k8s.io/v1beta1
    args:
    - ce
    - cluster
    - generate-token
    - --cluster-id
    - <cluster ocid>
    command: oci
    env: []

Modifying Kubernetes Cluster Properties


You can use Container Engine for Kubernetes to modify the properties of existing Kubernetes clusters.
You can change:
• the name of a cluster
• the number of node pools in a cluster by adding new node pools, or deleting existing node pools
• the version of Kubernetes to run on control plane nodes
• the enforcement of pod security policies
• access details for a cluster's Kubernetes API endpoint
• some properties of node pools and worker nodes (see Modifying Node Pool and Worker Node Properties on page
882)
However, note that you cannot change the master encryption key (if specified when the cluster was created).
Also note that you must not change the auto-generated names of resources that Container Engine for Kubernetes has
created (such as the names of worker nodes).

Using the Console


To modify an existing Kubernetes cluster:
1. In the Console, open the navigation menu. Under Solutions and Platform, go to Developer Services and click
Kubernetes Clusters.
2. Choose a Compartment you have permission to work in.
3. On the Cluster List page, click the name of the cluster you want to modify.
4. Click Edit Cluster to:
• Change the name of the cluster.
• Change whether pod security policies are being enforced (by enabling the cluster's PodSecurityPolicy
admission controller). Note that you must create pod security policies before enabling the PodSecurityPolicy
admission controller of an existing cluster that is already in production. We also strongly recommend you first
verify the cluster's pod security policies in a development or test environment. That way, you can be sure the
pod security policies work as you expect and correctly allow (or refuse) pods to start on the cluster. Also note
that if you disable a cluster's PodSecurityPolicy admission controller, any pod security policies (along with
roles, rolebindings, clusterroles, and clusterrolebindings) you've defined are not deleted, they are simply not
enforced. See Using Pod Security Policies with Container Engine for Kubernetes on page 904.
• Change access details for the Kubernetes API endpoint, including the use of network security groups and
whether to assign a public IP address to the Kubernetes API endpoint subnet. See Kubernetes Cluster Control
Plane and Kubernetes API on page 842.
Note that if you change the cluster's name or whether pod security policies are being enforced, save those changes
before changing access details for the Kubernetes API endpoint.


5. If a newer version of Kubernetes is available than the one running on the control plane nodes in the cluster, the
Upgrade Available button is enabled. If you want to upgrade the control plane nodes to a newer version, click
Upgrade Available (see Upgrading the Kubernetes Version on Control Plane Nodes in a Cluster on page 930).
6. Use the Cluster Details tab to see information about the cluster, including:
• The status of the cluster, and of the node pools in the cluster.
• The cluster's OCID.
• The Kubernetes version running on the control plane nodes in the cluster.
• The address of the Kubernetes API endpoint.
• Whether pod security policies are being enforced.
7. Use the Node Pools tab to:
• View information about each of the node pools in the cluster, including:
• The status of the node pool.
• The node pool's OCID.
• The configuration currently used when starting new worker nodes in the node pool, including the
Kubernetes version, the shape, and the image.
• The availability domains, and different regional subnets (recommended) or AD-specific subnets hosting
worker nodes.
Note that you can change some of these node pool and worker node properties (see Modifying Node Pool and
Worker Node Properties on page 882).
• Add a new node pool to the cluster by clicking the Add Node Pool button and entering details for the new
node pool.
• Delete a node pool by selecting Delete Node Pool from the Actions menu.
8. Use the Quick Start tab to:
• Set up access to the cluster (see Setting Up Cluster Access on page 875).
• Download and deploy a sample Nginx application using the Kubernetes command line tool kubectl from
the instructions in a manifest file (see Deploying a Sample Nginx App on a Cluster Using Kubectl on page
896).

Using the API


For information about using the API and signing requests, see REST APIs on page 4409 and Security Credentials on
page 181. For information about SDKs, see Software Development Kits and Command Line Interface on page 4262.
Use the Update Cluster operation to modify an existing Kubernetes cluster.
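
If you use the Oracle Cloud Infrastructure CLI rather than calling the REST API directly, the equivalent command is oci ce cluster update. A minimal sketch, renaming a cluster (parameter availability depends on your CLI version):

oci ce cluster update --cluster-id <cluster-ocid> --name <new-cluster-name>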

Modifying Node Pool and Worker Node Properties


You can use Container Engine for Kubernetes to modify the properties of node pools and worker nodes in existing
Kubernetes clusters.
You can change:
• the name of a node pool
• the version of Kubernetes to run on new worker nodes
• the number of worker nodes in a node pool, and the availability domains and subnets in which to place them
• the image to use for new worker nodes
• the shape to use for new worker nodes
• the boot volume size to use for new worker nodes
• the public SSH key to use to access new worker nodes
Note that you must not change the auto-generated names of resources that Container Engine for Kubernetes has
created (such as the names of worker nodes).


Also note the following:


• Any changes you make to worker node properties will only apply to new worker nodes. You cannot change the
properties of existing worker nodes.
• In some situations, you might want to update properties of all the worker nodes in a node pool simultaneously,
rather than just the properties of new worker nodes that start in the node pool. For example, to upgrade all worker
nodes to a new version of Oracle Linux. In this case, you can create a new node pool with worker nodes that have
the required properties, and shift work from the original node pool to the new node pool using the kubectl
drain command and pod disruption budgets. For more information, see Updating Worker Nodes by Creating a
New Node Pool on page 884.
• If you use the UpdateNodePool API operation to modify properties of an existing node pool, be aware of the
Worker node properties out-of-sync with updated node pool properties known issue and its workarounds.
• Do not use the kubectl delete node command to scale down or terminate worker nodes in a cluster that
was created by Container Engine for Kubernetes. Instead, reduce the number of worker nodes by changing the
corresponding node pool properties using the Console or the API. The kubectl delete node command
does not change a node pool's properties, which determine the desired state (including the number of worker
nodes). Also, although the kubectl delete node command removes the worker node from the cluster's etcd
key-value store, the command does not delete the underlying compute instance.
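
For example, rather than running kubectl delete node, you can scale the node pool down through the Oracle Cloud Infrastructure CLI, which changes the desired state that Container Engine for Kubernetes reconciles. A minimal sketch, assuming a CLI version that supports the --size parameter:

# Reduce the desired number of worker nodes in the node pool to 2.
oci ce node-pool update --node-pool-id <node-pool-ocid> --size 2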

Using the Console


To modify the properties of node pools and worker nodes of existing Kubernetes clusters:
1. In the Console, open the navigation menu. Under Solutions and Platform, go to Developer Services and click
Kubernetes Clusters.
2. Choose a Compartment you have permission to work in.
3. On the Cluster List page, click the name of the cluster you want to modify.
4. On the Cluster page, click the name of the node pool that you want to modify.
5. Use the Node Pool Details tab to view information about the node pool, including:
• The status of the node pool.
• The node pool's OCID.
• The configuration currently used when starting new worker nodes in the node pool, including:
• the version of Kubernetes to run on worker nodes
• the shape to use for worker nodes
• the image to use on worker nodes
• The availability domains, and different regional subnets (recommended) or AD-specific subnets hosting
worker nodes.
6. (optional) Change properties of the node pool and worker nodes by clicking Edit and specifying:
• Name: A different name for the node pool. Avoid entering confidential information.
• Version: A different version of Kubernetes to run on new worker nodes in the node pool when performing
an in-place upgrade. The Kubernetes version on worker nodes must be either the same version as that on the
control plane nodes, or an earlier version that is still compatible (see Kubernetes Versions and Container
Engine for Kubernetes on page 925). To start new worker nodes running the Kubernetes version you
specify, 'drain' existing worker nodes in the node pool (to prevent new pods starting and to delete existing
pods) and then terminate each of the existing worker nodes in turn.
You can also specify a different version of Kubernetes to run on new worker nodes by performing an out-of-
place upgrade. For more information about upgrading worker nodes, see Upgrading the Kubernetes Version on
Worker Nodes in a Cluster on page 930.
• Image: A different image to use on the nodes in the node pool. An image is a template of a virtual hard
drive that determines the operating system and other software for the node. See Supported Images (Including
Custom Images) and Shapes for Worker Nodes on page 922.
• Shape: A different shape to use for the nodes in the node pool. The shape determines the number of CPUs
and the amount of memory allocated to each node. The list shows only those shapes available in your tenancy
that are supported by Container Engine for Kubernetes. See Supported Images (Including Custom Images) and
Shapes for Worker Nodes on page 922.
• Boot Volume Size in GB: A different boot volume size for worker nodes. The default size of worker node
boot volumes is determined from the image specified for worker nodes, but you can specify a custom boot
volume size. If you do specify a custom boot volume size, it must be larger than the image's default boot
volume size. The minimum and maximum sizes you can specify are 50 GB and 32 TB respectively (see
Custom Boot Volume Sizes on page 614). If you change the boot volume size for worker nodes, consider
extending the partition for the boot volume to take advantage of the larger size (see Extending the Partition for
a Boot Volume on page 615).
• Public SSH Key: (Optional) A different public key portion of the key pair you want to use for SSH access
to the nodes in the node pool. The public key is installed on all worker nodes in the cluster. Note that if you
don't specify a public SSH key, Container Engine for Kubernetes will provide one. However, since you won't
have the corresponding private key, you will not have SSH access to the worker nodes. Note that you cannot
use SSH to access directly any worker nodes in private subnets (see Connecting to Worker Nodes in Private
Subnets Using SSH on page 904).
7. (optional) Change the number and placement of worker nodes in the node pool by clicking Scale and specifying:
• the number of worker nodes you want in the node pool after the scale operation is complete
• the availability domains in which to place the worker nodes
• the regional subnets (recommended) or AD-specific subnets to host the worker nodes
8. Use the Nodes tab to see information about specific worker nodes in the node pool. Optionally edit the
configuration details of a specific worker node by clicking the worker node's name.

Using the API


For information about using the API and signing requests, see REST APIs on page 4409 and Security Credentials on
page 181. For information about SDKs, see Software Development Kits and Command Line Interface on page 4262.
Use the UpdateNodePool operation to modify an existing node pool.
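
The Oracle Cloud Infrastructure CLI exposes the same operation as oci ce node-pool update. A minimal sketch, changing a node pool's name and the Kubernetes version used for new worker nodes (flag names assume a current CLI version):

# <kubernetes-version> must be the control plane version, or a compatible earlier version.
oci ce node-pool update --node-pool-id <node-pool-ocid> --name <new-node-pool-name> --kubernetes-version <kubernetes-version>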

Updating Worker Nodes by Creating a New Node Pool


You can modify the properties of new worker nodes that start in an existing node pool (see Modifying Node Pool and
Worker Node Properties on page 882). However, in some situations, you might want to update properties of all the
worker nodes in a node pool simultaneously, rather than just the properties of new worker nodes that start in the node
pool. For example, to upgrade all worker nodes to a new version of Oracle Linux.
In this case, you can create a new node pool with worker nodes that have the required properties, and shift work
from the original node pool to the new node pool. Having 'drained' existing worker nodes in the original node pool to
prevent new pods starting and to delete existing pods, you can then delete the original node pool.
To update the properties of all worker nodes in a node pool by creating a new node pool:
1. In the Console, open the navigation menu. Under Solutions and Platform, go to Developer Services and click
Kubernetes Clusters.
2. Choose a Compartment you have permission to work in.
3. On the Cluster List page, click the name of the cluster where you want to update worker node properties.
4. On the Cluster page, display the Node Pools tab, and then click Add Node Pool to create a new node pool and
specify the required properties for its worker nodes.
5. If there are labels attached to worker nodes in the original node pool and those labels are used by selectors (for
example, to determine the nodes on which to run pods), then use the kubectl label nodes command
to attach the same labels to the new worker nodes in the new node pool. See Assigning Pods to Nodes in the
Kubernetes documentation.


6. For each worker node in the original node pool, prevent new pods from starting and delete existing pods by
entering kubectl drain <node_name> for each worker node.
For more information:
• about using kubectl, see Accessing a Cluster Using Kubectl on page 889
• about the drain command, see drain in the Kubernetes documentation
Recommended: Leverage pod disruption budgets as appropriate for your application to ensure that there's a
sufficient number of replica pods running throughout the drain operation (see the sketch after this procedure).
After all the worker nodes have been drained from the original node pool and pods are running on worker nodes in
the new node pool, you can delete the original node pool.
7. On the Cluster page, display the Node Pools tab, and then select Delete Node Pool from the Actions menu
beside the original node pool.
The original node pool and all its worker nodes are deleted.
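
A minimal sketch of steps 5 and 6, assuming the original worker nodes carry a hypothetical app=backend label and that the workload is an nginx deployment selected by app=nginx; adapt the labels, selector, and replica counts to your own application:

# Attach the same label to each new worker node in the new node pool.
kubectl label nodes <new-node-name> app=backend

# Keep at least 2 replica pods available while nodes are being drained.
kubectl create poddisruptionbudget nginx-pdb --selector=app=nginx --min-available=2

# Drain each worker node in the original node pool in turn.
kubectl drain <node-name> --ignore-daemonsets --delete-local-data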

Deleting a Kubernetes Cluster


You can delete a cluster along with its control plane nodes, worker nodes, and node pools.
Note the following:
• When you delete a cluster, no other resources created during the cluster creation process or associated with the
cluster (such as VCNs, internet gateways, NAT gateways, route tables, security lists, load balancers, and block
volumes) are deleted automatically. If you want to delete these resources, you have to do so manually.
• Container Engine for Kubernetes creates the worker nodes (compute instances) in a cluster with auto-generated
names in the format oke-c<part-of-cluster-OCID>-n<part-of-node-pool-OCID>-s<part-
of-subnet-OCID>-<slot>. Do not change the auto-generated names of worker nodes. If you do change the
auto-generated name of a worker node and then delete the cluster, the renamed worker node is not deleted. You
would have to delete the renamed worker node manually.

Using the Console


To delete a Kubernetes cluster using Container Engine for Kubernetes:
1. In the Console, open the navigation menu. Under Solutions and Platform, go to Developer Services and click
Kubernetes Clusters.
2. Choose a Compartment you have permission to work in.
3. On the Cluster List page, click the Delete icon beside the cluster to delete, and confirm that you want to delete it.
You can also delete a cluster using the Delete Cluster button on the Cluster page.

Using the API


For information about using the API and signing requests, see REST APIs on page 4409 and Security Credentials on
page 181. For information about SDKs, see Software Development Kits and Command Line Interface on page 4262.
Use the DeleteCluster operation to delete a cluster.
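
The equivalent Oracle Cloud Infrastructure CLI command is a single line (a minimal sketch; you are prompted to confirm the deletion unless you add --force):

oci ce cluster delete --cluster-id <cluster-ocid>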

Monitoring Clusters
Having created a cluster, you can monitor the overall status of the cluster itself, and the nodes and node pools within
it.
In addition to monitoring the overall status of clusters, node pools, and nodes, you can monitor their health, capacity,
and performance at a more granular level using metrics, alarms, and notifications. See Container Engine for
Kubernetes Metrics on page 962.


Using the Console


To monitor a Kubernetes cluster:
1. In the Console, open the navigation menu. Under Solutions and Platform, go to Developer Services and click
Kubernetes Clusters.
2. Choose a Compartment you have permission to work in.
The Status column on the Cluster List page shows a summary status for each individual cluster and its control
plane nodes. Clusters can have one of the following statuses:

• Creating: The cluster is in the process of being created. Possible reason: an application is being deployed.
• Active: The cluster is running normally. Possible reason: the control plane nodes are running normally.
• Failed: The cluster is not running due to an unrecoverable error. Possible reasons: a problem setting up load balancers, or conflicts in networking ranges.
• Deleting: The cluster is in the process of being deleted. Possible reason: the application is no longer required, so resources are in the process of being released.
• Deleted: The cluster has been deleted. Possible reason: the application is no longer required, so resources have been released.
• Updating: The version of Kubernetes on the control plane nodes is in the process of being upgraded. Possible reason: a newly supported version of Kubernetes has become available.

Note that the cluster's summary status is not necessarily directly related to the status of node pools and nodes
within the cluster.
3. On the Cluster List page, click the name of the cluster for which you want to see detailed status.
4. Display the cluster's Metrics tab to see more granular information about the health, capacity, and performance of
the cluster. See Container Engine for Kubernetes Metrics on page 962.
5. Display the Node Pools tab to see the summary status of each node pool in the cluster.
6. On the Node Pools tab, click the name of a node pool for which you want to see detailed status.
7. Display the node pool's Metrics tab to see more granular information about the health, capacity, and performance
of the node pool. See Container Engine for Kubernetes Metrics on page 962.
8. Display the Nodes tab to see the summary status of each worker node in the node pool.
Worker nodes can have one of the following statuses:

• Creating: The node is being created. Possible reason: the compute instance is in the process of being created.
• Active: The node is running normally.
• Updating: The node is in the process of being updated. Possible reason: Container Engine for Kubernetes is performing an operation on the node.
• Deleting: The node is in the process of being deleted. Possible reason: the application is no longer required, so resources are in the process of being released.
• Deleted: The node has been deleted. Possible reason: the application is no longer required, so resources have been released.
• Inactive: The node still exists, but is not running. Possible reason: the compute resource has a status of Stopped, Stopping, or Down For Maintenance.
9. Click View Metrics beside a worker node to see more granular information about the health, capacity, and
performance of that node. See Container Engine for Kubernetes Metrics on page 962.

Using the API


For information about using the API and signing requests, see REST APIs on page 4409 and Security Credentials on
page 181. For information about SDKs, see Software Development Kits and Command Line Interface on page 4262.
Use the GetCluster and GetNodePool operations to monitor the status of Kubernetes clusters.
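
A minimal CLI sketch of checking lifecycle states from a terminal (the --query expression uses JMESPath against the default JSON output, so treat the exact field names as assumptions for your CLI version):

# Show the overall lifecycle state of a cluster.
oci ce cluster get --cluster-id <cluster-ocid> --query 'data."lifecycle-state"'

# Show details, including node states, for a node pool.
oci ce node-pool get --node-pool-id <node-pool-ocid>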

Viewing Work Requests and Kubernetes API Server Audit Logs


It's often useful to understand the context behind activities happening in a cluster. For example, to perform
compliance checks, to identify security anomalies, and to troubleshoot errors by identifying who did what and when.
You can view operations performed by Container Engine for Kubernetes and the Kubernetes API server as follows:
• You can use the Work Requests tab of the cluster's Summary page to view and manage operations performed on
a particular cluster by Container Engine for Kubernetes.
• You can use the Oracle Cloud Infrastructure Audit service to view all operations performed by:
• Container Engine for Kubernetes, which emits audit events whenever you perform actions on a cluster, such as
create and delete.
• The Kubernetes API server, which emits audit events whenever you use tools like kubectl to make
administrative changes to a cluster, such as creating a service. Kubernetes API server audit events are shown
in the Oracle Cloud Infrastructure Audit service for clusters running Kubernetes version 1.13.x (or later). Note
that events are only shown from 15 July, 2020 onward.
Note that in addition to viewing operations as described in this topic, you can also monitor the health, capacity, and
performance of Kubernetes clusters themselves using metrics, alarms, and notifications. See Container Engine for
Kubernetes Metrics on page 962.

Using the Console


To view and manage operations performed by Container Engine for Kubernetes on a particular cluster:
1. In the Console, open the navigation menu. Under Solutions and Platform, go to Developer Services and click
Kubernetes Clusters.
2. Choose a Compartment you have permission to work in.
3. On the Cluster List page, click the name of the cluster for which you want to view and manage operations.
The Cluster page shows information about the cluster.
4. Display the Work Requests tab, showing the recent operations performed on the cluster.


To view operations performed by Container Engine for Kubernetes and the Kubernetes API server as log events in the
Oracle Cloud Infrastructure Audit service:
1. In the Console, open the navigation menu. Under Governance and Administration, go to Governance and click
Audit.
2. Choose a Compartment you have permission to work in.
3. Search and filter to show the operations you're interested in:
• To view operations performed by Container Engine for Kubernetes, enter ClustersAPI in the Keywords
field and click Search.
• To view operations performed by the Kubernetes API server, enter OKE API Server Admin Access in
the Keywords field and click Search.
For more information about using the Oracle Cloud Infrastructure Audit service, see Viewing Audit Log Events
on page 498.

Using the API


For information about using the API and signing requests, see REST APIs on page 4409 and Security Credentials on
page 181. For information about SDKs, see Software Development Kits and Command Line Interface on page 4262.
Use the GetWorkRequest, DeleteWorkRequest, ListWorkRequestErrors, ListWorkRequestLogs, and
ListWorkRequests operations to view and manage operations performed by Container Engine for Kubernetes.
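
A minimal CLI sketch of listing recent work requests for a particular cluster (the parameter names here are assumptions based on the ListWorkRequests operation; adjust them to your CLI version):

oci ce work-request list --compartment-id <compartment-ocid> --cluster-id <cluster-ocid>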

Viewing Application Logs on Worker Nodes


Having created a cluster using Container Engine for Kubernetes, you can use Oracle Cloud Infrastructure Logging to
view and search the logs of applications running on worker node compute instances in the cluster.
Before you can collect and parse the application logs using Oracle Cloud Infrastructure Logging:
• You must have already:
• Enabled monitoring for worker node compute instances (see Enabling Monitoring for Compute Instances on
page 790).
• Installed the Oracle Cloud Agent software on worker node compute instances. The agent enables you to
specify which logs to collect and how to parse them. The agent is installed by default on worker node compute
instances. To confirm that the agent is already installed, see Verify Agent Installation on page 2651.
• You must have already:
• Created a dynamic group with a rule that includes worker nodes in the cluster's node pools as target hosts (see
About Dynamic Groups on page 2441 and Selecting Target Hosts with Dynamic Groups on page 2653).
For example:

instance.compartment.id = 'ocid1.tenancy.oc1..<unique-id>'
• Created a policy for the dynamic group with a policy statement to allow the target hosts in the dynamic group
to push logs to Oracle Cloud Infrastructure Logging (see Selecting Target Hosts with Dynamic Groups on
page 2653). For example:

allow dynamic group <dynamic-group-name> to use log-content in tenancy

Having completed the above prerequisites, you can then define custom logs and associated agent configurations
to view application logs on worker node compute instances. For more information about custom logs and agent
configurations, see Custom Logs on page 2647.
Note that in addition to viewing application logs on worker node compute instances, you can also:
• Monitor the overall status of the cluster itself, node pools, and nodes. See Monitoring Clusters on page 885.
• Monitor the health, capacity, and performance of clusters, node pools, and nodes at a more granular level using
metrics, alarms, and notifications. See Container Engine for Kubernetes Metrics on page 962.


Using the Console


To define a new custom log object and an associated agent configuration to enable you to view and search the logs of
applications running on a cluster's worker node compute instances:
1. Open the navigation menu. Under Solutions and Platform, go to Logging, and then click Logs.
2. Choose a Compartment you have permission to work in.
3. Click Create custom log to create a new custom log.
4. On the Create custom log page, specify:
• Custom Log Name: A name of your choosing for the new custom log. Avoid entering confidential
information.
• Compartment: The compartment in which to create the new custom log.
• Log Group: The log group in which to place the custom log. Optionally, click Create New Group to create a
new log group (see Managing Logs and Log Groups on page 2605).
5. Click Create custom log.
A new custom log is created, and the Create agent configuration page is displayed.
For convenience, these instructions now describe how to create a new agent configuration associated with the new
custom log (although you can create a new agent configuration later if you prefer).
6. On the Create agent configuration page, select Create new configuration and specify:
• Configuration Name: A name of your choosing for the new agent configuration. Avoid entering confidential
information.
• Compartment: The compartment in which to create the new agent configuration.
7. In the Agent configuration panel on the Create agent configuration page, specify:
• Choose host groups: One or more host groups, as follows:
• Group type: Select Dynamic group.
• Group: An existing dynamic group that includes worker nodes in the cluster's node pools as target hosts.
The dynamic group you select must have permission to access the compartment you specified for the agent
configuration, and must also allow target hosts to push logs to Oracle Cloud Infrastructure Logging.
• Configure log inputs: One or more locations from which to obtain application logs as inputs to the custom
log, as follows:
• Input type: Select Log path.
• Input name: A name of your choosing for the new log input.
• File paths: The path to application logs on the worker node compute instance. For example, /var/log/containers/*
The Select log destination options are pre-populated with the custom log details you specified previously.
8. Click Create custom log to create the agent configuration associated with the custom log.
To view and search the contents of a custom log created for an application running on a cluster's worker node
compute instances:
1. Open the navigation menu. Under Solutions and Platform, go to Logging, and then click Logs.
2. Click the name of the custom log that you want to view. You can sort log entries by age, and filter by time.
3. (Optional) Click Explore with Log Search to open the central logging Search page. You can apply filters, and
explore and visualize the log data in different ways (see Viewing Custom Logs in a Compute Instance on page
2655).

Accessing a Cluster Using Kubectl


You can use the Kubernetes command line tool kubectl to perform operations on a cluster you've created with
Container Engine for Kubernetes. You can use the kubectl installation included in Cloud Shell, or you can use a local
installation of kubectl. In both cases, before you can use kubectl to access a cluster, you have to specify the cluster on
which to perform operations by setting up the cluster's kubeconfig file.


Note the following:


• An Oracle Cloud Infrastructure CLI command in the kubeconfig file generates authentication tokens that are
short-lived, cluster-scoped, and specific to individual users. As a result, you cannot share kubeconfig files between
users to access Kubernetes clusters. The generated authentication tokens are also unsuitable if you want other
processes and tools to access the cluster, such as continuous integration and continuous delivery (CI/CD) tools.
In this case, consider creating a Kubernetes service account and adding its associated authentication token to the
kubeconfig file. For more information, see Adding a Service Account Authentication Token to a Kubeconfig File
on page 893.
• The version of kubectl you use must be compatible with the version of Kubernetes running on clusters created
by Container Engine for Kubernetes. In the case of Cloud Shell, kubectl is regularly updated so it is always
compatible with the versions of Kubernetes currently supported by Container Engine for Kubernetes. In the case
of a local installation of kubectl, it is your responsibility to update kubectl regularly. For more information about
compatibility between different versions of Kubernetes and kubectl, see the Kubernetes documentation.
• Currently, to access a cluster using kubectl in Cloud Shell, the Kubernetes API endpoint must have a public IP
address.

Accessing a Cluster Using kubectl in Cloud Shell


To access a cluster using kubectl in Cloud Shell:
1. If you haven't already done so, follow the steps to set up the cluster's kubeconfig configuration file for use in
Cloud Shell, and (if necessary) set the KUBECONFIG environment variable to point to the file. Note that you
must set up your own kubeconfig file. You cannot access a cluster using a kubeconfig file that a different user set
up. See Setting Up Cloud Shell Access to Clusters on page 875.
2. In the Cloud Shell window, enter kubectl followed by the command for the operation you want to perform on
the cluster. For a list of available commands and options, see the kubectl documentation.
Note that you must have the appropriate permissions to run the command you enter. See About Access Control
and Container Engine for Kubernetes on page 919.

Accessing a Cluster Using kubectl Installed Locally


To access a cluster using kubectl installed locally:
1. If you haven't already done so, install kubectl (see the kubectl documentation).
2. If you haven't already done so, follow the steps to set up the cluster's kubeconfig configuration file for use locally,
and (if necessary) set the KUBECONFIG environment variable to point to the file. Note that you must set up your
own kubeconfig file. You cannot access a cluster using a kubeconfig file that a different user set up. See Setting
Up Local Access to Clusters on page 876.
3. In a local terminal window, enter kubectl followed by the command for the operation you want to perform on
the cluster. For a list of available commands and options, see the kubectl documentation.
Note that you must have the appropriate permissions to run the command you enter. See About Access Control
and Container Engine for Kubernetes on page 919.

Accessing a Cluster Using the Kubernetes Dashboard


The Kubernetes Dashboard is a web-based management interface that enables you to:
• deploy and edit containerized applications
• assess the status of containerized applications
• troubleshoot containerized applications
For more information about the Kubernetes Dashboard (sometimes called the Web UI or the Dashboard UI), see the
Web UI (Dashboard) topic in the Kubernetes documentation.
The Kubernetes Dashboard is not deployed in clusters by default. However, you can deploy the Kubernetes
Dashboard in clusters you create with Container Engine for Kubernetes in the following ways:


• To manually deploy the Kubernetes Dashboard on an existing cluster, see the Kubernetes documentation.
When you follow the instructions to manually deploy the Kubernetes Dashboard, it is deployed in the kube-
dashboard namespace (not the kube-system namespace). The URL to display a manually deployed
Kubernetes Dashboard is:

http://localhost:8001/api/v1/namespaces/kube-dashboard/services/
https:kubernetes-dashboard:/proxy/#!/login
• To have Container Engine for Kubernetes automatically deploy the Kubernetes Dashboard during cluster creation,
create the cluster using the API and set the isKubernetesDashboardEnabled attribute to true. When Container
Engine for Kubernetes automatically deploys the Kubernetes Dashboard, it is deployed in the kube-system
namespace. The URL to display an automatically deployed Kubernetes Dashboard is:

http://localhost:8001/api/v1/namespaces/kube-system/services/
https:kubernetes-dashboard:/proxy/#!/login

Note the following:


• You cannot run the Kubernetes Dashboard in Cloud Shell.
• You cannot use Container Engine for Kubernetes to deploy the Kubernetes Dashboard on existing clusters. You
have to manually deploy the Kubernetes Dashboard on existing clusters.
• The commands to use to delete the Kubernetes Dashboard from a cluster will depend on the version of Kubernetes
running on the cluster. See Notes about Deleting the Kubernetes Dashboard on page 893.
• An Oracle Cloud Infrastructure CLI command in the kubeconfig file generates authentication tokens that are
short-lived, cluster-scoped, and specific to individual users. As a result, you cannot share kubeconfig files between
users to access Kubernetes clusters. The generated authentication tokens are also unsuitable if you want other
processes and tools to access the cluster, such as continuous integration and continuous delivery (CI/CD) tools.
In this case, consider creating a Kubernetes service account and adding its associated authentication token to the
kubeconfig file. For more information, see Adding a Service Account Authentication Token to a Kubeconfig File
on page 893.

Accessing a Cluster using the Kubernetes Dashboard

To access a cluster using the Kubernetes Dashboard:


1. If you haven't already done so, follow the steps to set up the cluster's kubeconfig configuration file and (if
necessary) set the KUBECONFIG environment variable to point to the file. Note that you must set up your own
kubeconfig file. You cannot access a cluster using a kubeconfig file that a different user set up. See Setting Up
Cluster Access on page 875.
2. In a text editor, create a file (for example, called oke-admin-service-account.yaml) with the following content:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: oke-admin
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: oke-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: oke-admin
  namespace: kube-system

The file defines an administrator service account and a clusterrolebinding, both called oke-admin.
3. Create the service account and the clusterrolebinding in the cluster by entering:

kubectl apply -f <filename>

where <filename> is the name of the file you created earlier. For example:

kubectl apply -f oke-admin-service-account.yaml

The output from the above command confirms the creation of the service account and the clusterrolebinding:

serviceaccount "oke-admin" created


clusterrolebinding.rbac.authorization.k8s.io "oke-admin" created

You can now use the oke-admin service account to view and control the cluster, and to connect to the Kubernetes
dashboard.
4. Obtain an authentication token for the oke-admin service account by entering:

kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep oke-admin | awk '{print $1}')

The output from the above command includes an authentication token (a long alphanumeric string) as the value of
the token: element, as shown below:

Name: oke-admin-token-gwbp2
Namespace: kube-system
Labels: <none>
Annotations: kubernetes.io/service-account.name: oke-admin
kubernetes.io/service-account.uid: 3a7fcd8e-e123-11e9-81ca-0a580aed8570
Type: kubernetes.io/service-account-token
Data
====
ca.crt: 1289 bytes
namespace: 11 bytes
token: eyJh______px1Q

In the example above, eyJh______px1Q (abbreviated for readability) is the authentication token.
5. Copy the value of the token: element from the output. You will use this token to connect to the dashboard.
6. In a terminal window, enter kubectl proxy to make the Kubernetes Dashboard available.
7. Open a browser and go to the following URL to display the Kubernetes Dashboard that was deployed when
the cluster was created:

http://localhost:8001/api/v1/namespaces/kube-system/services/
https:kubernetes-dashboard:/proxy/#!/login

Note that if you followed the instructions in the Kubernetes documentation to manually deploy the Kubernetes
Dashboard on an existing cluster, it is deployed in the kube-dashboard namespace rather than the kube-
system namespace. As a result, the URL to display the manually deployed Kubernetes Dashboard is:

http://localhost:8001/api/v1/namespaces/kube-dashboard/services/
https:kubernetes-dashboard:/proxy/#!/login.
8. In the Kubernetes Dashboard, select Token and paste the value of the token: element you copied earlier into the
Token field.


9. In the Kubernetes Dashboard, click Sign In, and then click Overview to see the applications deployed on the
cluster.

Notes about Deleting the Kubernetes Dashboard


If you want to delete the Kubernetes Dashboard from a cluster, the commands to use will depend on the version of
Kubernetes running on the cluster:
• For clusters running Kubernetes versions prior to version 1.16.8, run the following kubectl commands to delete
the Kubernetes Dashboard:

kubectl delete deployment kubernetes-dashboard -n kube-system
kubectl delete sa -n kube-system kubernetes-dashboard
kubectl delete svc -n kube-system kubernetes-dashboard
kubectl delete secret -n kube-system kubernetes-dashboard-certs
kubectl delete secret -n kube-system kubernetes-dashboard-key-holder
kubectl delete cm -n kube-system kubernetes-dashboard-settings
kubectl delete role -n kube-system kubernetes-dashboard-minimal
kubectl delete rolebinding -n kube-system kubernetes-dashboard-minimal
kubectl delete deploy -n kube-system kubernetes-dashboard
• For clusters running Kubernetes version 1.16.8 (or later), run the following kubectl commands to delete the
Kubernetes Dashboard:

kubectl delete deployment kubernetes-dashboard -n kube-system
kubectl delete sa -n kube-system kubernetes-dashboard
kubectl delete svc -n kube-system kubernetes-dashboard
kubectl delete secret -n kube-system kubernetes-dashboard-certs
kubectl delete secret -n kube-system kubernetes-dashboard-csrf
kubectl delete secret -n kube-system kubernetes-dashboard-key-holder
kubectl delete cm -n kube-system kubernetes-dashboard-settings
kubectl delete role -n kube-system kubernetes-dashboard
kubectl delete rolebinding -n kube-system kubernetes-dashboard
kubectl delete clusterrole -n kube-system kubernetes-dashboard
kubectl delete clusterrolebinding -n kube-system kubernetes-dashboard
kubectl delete deploy -n kube-system kubernetes-dashboard

Adding a Service Account Authentication Token to a Kubeconfig File


When you set up the kubeconfig file for a cluster, by default it contains an Oracle Cloud Infrastructure CLI command
to generate a short-lived, cluster-scoped, user-specific authentication token. The authentication token generated by the
CLI command is appropriate to authenticate individual users accessing the cluster using kubectl and the Kubernetes
Dashboard.
However, the generated authentication token is not appropriate to authenticate processes and tools accessing the
cluster, such as continuous integration and continuous delivery (CI/CD) tools. To ensure access to the cluster, such
tools require long-lived, non-user-specific authentication tokens.
One solution is to use a Kubernetes service account, as described in this topic. A service account has an associated
service account authentication token, which is stored as a Kubernetes secret. Having created a service account, you
bind it to a clusterrolebinding that has cluster administration permissions. You can then add the service account (and
its service account authentication token) as a user definition in the kubeconfig file itself. Other tools can then use the
service account authentication token when accessing the cluster.
Note that to run the commands in this topic, you must have the appropriate permissions. See About Access Control
and Container Engine for Kubernetes on page 919.
To add a service account authentication token to a kubeconfig file:
1. If you haven't already done so, follow the steps to set up the cluster's kubeconfig configuration file and (if
necessary) set the KUBECONFIG environment variable to point to the file. Note that you must set up your own
kubeconfig file. You cannot access a cluster using a kubeconfig file that a different user set up. See Setting Up
Cluster Access on page 875.
2. In a terminal window, create a new service account in the kube-system namespace by entering the following
kubectl command:

kubectl -n kube-system create serviceaccount <service-account-name>

For example, to create a service account called kubeconfig-sa, enter:

kubectl -n kube-system create serviceaccount kubeconfig-sa

The output from the above command confirms the creation of the service account. For example:

serviceaccount/kubeconfig-sa created

Note that creating the service account in the kube-system namespace is recommended good practice, and is
assumed in the instructions in this topic. However, if you prefer, you can create the service account in another
namespace to which you have access.
3. Create a new clusterrolebinding with cluster administration permissions and bind it to the service account you just
created by entering the following kubectl command:

kubectl create clusterrolebinding <binding-name> --clusterrole=cluster-admin --serviceaccount=kube-system:<service-account-name>

For example, to create a clusterrolebinding called add-on-cluster-admin and bind it to the kubeconfig-sa service
account, enter:

kubectl create clusterrolebinding add-on-cluster-admin --clusterrole=cluster-admin --serviceaccount=kube-system:kubeconfig-sa

The output from the above command confirms the creation of the clusterrolebinding. For example:

clusterrolebinding.rbac.authorization.k8s.io/add-on-cluster-admin created
4. Obtain the name of the service account authentication token and assign its value to an environment variable
by entering the following command (these instructions assume you specify TOKENNAME as the name of the
environment variable):

TOKENNAME=`kubectl -n kube-system get serviceaccount/<service-account-name> -o jsonpath='{.secrets[0].name}'`

For example:

TOKENNAME=`kubectl -n kube-system get serviceaccount/kubeconfig-sa -o jsonpath='{.secrets[0].name}'`


5. Obtain the value of the service account authentication token and assign its value (decoded from base64) to an
environment variable. These instructions assume you specify TOKEN as the name of the environment variable.
The commands to enter depend on the operating system:
• To obtain the value of the service account authentication token in a MacOS, Linux, or Unix environment, enter
the following command:

TOKEN=`kubectl -n kube-system get secret $TOKENNAME -o jsonpath='{.data.token}' | base64 --decode`
• To obtain the value of the service account authentication token in a Windows environment:
a. Enter the following command:

kubectl -n kube-system get secret $TOKENNAME -o jsonpath='{.data.token}'
b. Copy the output from the above command and paste it into a base64 decoder (for example, https://
www.base64decode.org, https://www.base64decode.net, or similar).
c. Copy the output from the base64 decoder.
d. Enter the following command:

TOKEN=<base64-decoded-output>

where <base64-decoded-output> is the output you copied from the base64 decoder.
6. Add the service account (and its authentication token) as a new user definition in the kubeconfig file by entering
the following kubectl command:

kubectl config set-credentials <service-account-name> --token=$TOKEN

The service account (and its authentication token) is added to the list of users defined in the kubeconfig file.
For example, to add the kubeconfig-sa service account and its authentication token to the kubeconfig file, enter:

kubectl config set-credentials kubeconfig-sa --token=$TOKEN

The output from the above command confirms the service account has been added to the kubeconfig file. For
example:

User "kubeconfig-sa" set.


7. Set the user specified in the kubeconfig file for the current context to be the new service account user you created,
by entering the following kubectl command:

kubectl config set-context --current --user=<service-account-name>

For example:

kubectl config set-context --current --user=kubeconfig-sa

The output from the above command confirms the current context has been changed. For example:

Context "context-ctdiztdhezd" modified.


8. (Optional) To verify that authentication works as expected, run a kubectl command to confirm that the service
account user can be successfully authenticated using the service account authentication token.
For example, if you have previously deployed a sample Nginx application on the cluster (see Deploying a Sample
Nginx App on a Cluster Using Kubectl on page 896), enter the following command:

kubectl get pods

The output from the above command shows the pods running on the cluster. If the command runs successfully,
the service account user in the kubeconfig file has been successfully authenticated using the service account
authentication token.
9. Distribute the kubeconfig file as necessary to enable other processes and tools (such as continuous integration and
continuous delivery (CI/CD) tools) to access the cluster.
Note:

If you subsequently want to remove access to the cluster from the service
account, delete the Kubernetes secret containing the service account
authentication token by entering the following command:

kubectl -n kube-system delete secret $TOKENNAME

Deploying a Sample Nginx App on a Cluster Using Kubectl


Having created a Kubernetes cluster using Container Engine for Kubernetes, you'll typically want to try it out by
deploying an application on the nodes in the cluster. For convenience, the Quick Start tab (available from the
Cluster page) makes it easy to view and copy the commands to:
• set up access to the cluster
• download and deploy a sample Nginx application using the Kubernetes command line tool kubectl from the
instructions in a manifest file
To deploy the sample nginx application:
1. If you haven't already done so, follow the steps to set up the cluster's kubeconfig configuration file and (if
necessary) set the KUBECONFIG environment variable to point to the file. Note that you must set up your own
kubeconfig file. You cannot access a cluster using a kubeconfig file that a different user set up. See Setting Up
Cluster Access on page 875.
2. In a terminal window, deploy the sample Nginx application by entering:

kubectl create -f https://k8s.io/examples/application/deployment.yaml

Tip:

If the command fails to connect to https://k8s.io/examples/application/deployment.yaml, go to the URL in a browser and download the manifest file deployment.yaml to a local directory. Repeat the kubectl create command and specify the local location of the deployment.yaml file.
3. Confirm that the sample application has been deployed successfully by entering:

kubectl get pods

You can see the Nginx sample application has been deployed as two pods, on two nodes in the cluster.
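To see which worker nodes the pods have been scheduled on, you can add the standard kubectl -o wide flag:

kubectl get pods -o wide

The NODE column in the output shows the worker node running each pod.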

Pulling Images from Registry during Deployment


During the deployment of an application to a Kubernetes cluster, you'll typically want one or more images to be
pulled from a Docker registry. In the application's manifest file you specify the images to pull, the registry to pull them from, and the credentials to use when pulling the images. The manifest file is commonly also referred to as a
pod spec, or as a deployment.yaml file (although other filenames are allowed).
If you want the application to pull images that reside in Oracle Cloud Infrastructure Registry, you have to perform
two steps:
• You have to use kubectl to create a Docker registry secret. The secret contains the Oracle Cloud Infrastructure
credentials to use when pulling the image. When creating secrets, Oracle strongly recommends you use the latest
version of kubectl (see the kubectl documentation).
• You have to specify the image to pull from Oracle Cloud Infrastructure Registry, including the repository location
and the Docker registry secret to use, in the application's manifest file.
To create a Docker registry secret:
1. If you haven't already done so, follow the steps to set up the cluster's kubeconfig configuration file and (if
necessary) set the KUBECONFIG environment variable to point to the file. Note that you must set up your own
kubeconfig file. You cannot access a cluster using a kubeconfig file that a different user set up. See Setting Up
Cluster Access on page 875.
2. In a terminal window, enter:

kubectl create secret docker-registry <secret-name> --docker-server=<region-key>.ocir.io --docker-username='<tenancy-namespace>/<oci-username>' --docker-password='<oci-auth-token>' --docker-email='<email-address>'

where:
• <secret-name> is a name of your choice that you will use in the manifest file to refer to the secret. For example, ocirsecret
• <region-key> is the key for the Oracle Cloud Infrastructure Registry region you're using. For example,
iad. See Availability by Region on page 3542.
• ocir.io is the Oracle Cloud Infrastructure Registry name.
• <tenancy-namespace> is the auto-generated Object Storage namespace string of the tenancy containing
the repository from which the application is to pull the image (as shown on the Tenancy Information page).
For example, the namespace of the acme-dev tenancy might be ansh81vru1zp. Note that for some older
tenancies, the namespace string might be the same as the tenancy name in all lower-case letters (for example,
acme-dev).
• <oci-username> is the username to use when pulling the image. The username must have access to the tenancy specified by <tenancy-namespace>. For example, [email protected]. If your tenancy is federated with Oracle Identity Cloud Service, use the format oracleidentitycloudservice/<username>
• <oci-auth-token> is the auth token of the user specified by <oci-username>. For example,
k]j64r{1sJSSF-;)K8
• <email-address> is an email address. An email address is required, but it doesn't matter what you
specify. For example, [email protected]
Note the use of single quotes around strings containing special characters.
For example, combining the previous examples, you might enter:

kubectl create secret docker-registry ocirsecret --docker-server=phx.ocir.io --docker-username='ansh81vru1zp/[email protected]' --docker-password='k]j64r{1sJSSF-;)K8' --docker-email='[email protected]'

Having created the Docker secret, you can now refer to it in the application manifest file.
To specify the image to pull from Oracle Cloud Infrastructure Registry, along with the Docker secret to use, during
deployment of an application to a cluster:
1. Open the application's manifest file in a text editor.

2. Add the following sections to the manifest file:


a. Add a containers section that specifies the name and location of the container you want to pull from
Oracle Cloud Infrastructure Registry, along with other deployment details.
b. Add an imagePullSecrets section to the manifest file that specifies the name of the Docker secret you
created to access the Oracle Cloud Infrastructure Registry.
Here's an example of what the manifest might look like when you've added the containers and
imagePullSecrets sections:

apiVersion: v1
kind: Pod
metadata:
  name: nginx-image
spec:
  containers:
  - name: nginx
    image: phx.ocir.io/ansh81vru1zp/project01/nginx-lb:latest
    imagePullPolicy: Always
    ports:
    - name: nginx
      containerPort: 8080
      protocol: TCP
  imagePullSecrets:
  - name: ocirsecret
3. Save and close the manifest file.
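Before (or after) deploying, you might want to confirm that the secret exists and that the image can be pulled. The following commands are a minimal sketch that assumes the example names used above, and that you saved the manifest locally in a file named pod.yaml (the filename is illustrative):

kubectl get secret ocirsecret
kubectl create -f pod.yaml
kubectl describe pod nginx-image

The Events section at the end of the kubectl describe output shows whether the image was pulled successfully from Oracle Cloud Infrastructure Registry using the Docker registry secret.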

Supported Labels for Different Usecases


Container Engine for Kubernetes uses a number of different labels when creating and managing clusters, including:
• failure-domain.beta.kubernetes.io/zone on page 898
• oci.oraclecloud.com/fault-domain on page 899
• node.kubernetes.io/exclude-from-external-load-balancers on page 899
For more information about Kubernetes labels, see the Kubernetes documentation.

failure-domain.beta.kubernetes.io/zone
Container Engine for Kubernetes automatically adds the failure-domain.beta.kubernetes.io/zone
label to each worker node (compute instance) in a cluster, according to the availability domain in which it is placed.
An availability domain is one or more data centers located within a region. A region is composed of one or more
availability domains. Availability domains are isolated from each other, fault tolerant, and very unlikely to fail
simultaneously. See Regions and Availability Domains on page 182.
You can use the failure-domain.beta.kubernetes.io/zone label in different ways:
• You can use the failure-domain.beta.kubernetes.io/zone label (in conjunction with the
oci.oraclecloud.com/fault-domain label) to constrain the worker nodes on which to run a
pod, in the case of a cluster with worker nodes in multiple availability domains. Include the failure-
domain.beta.kubernetes.io/zone label in the pod specification to specify the availability domain in
which worker nodes must have been placed.
• You can use the failure-domain.beta.kubernetes.io/zone label to specify the availability domain
and region to provision persistent volume claims on the Block Volume service when using the FlexVolume
volume plugin. See Creating a Persistent Volume Claim on page 947.
When you specify a value for the failure-domain.beta.kubernetes.io/zone label, you must use the
correct shortened version of the availability domain name in an Oracle Cloud Infrastructure region.

In most cases, the shortened versions of availability domain names are in the format <region-identifier>-1-
AD-<availability-domain-number>. For example, UK-LONDON-1-AD-1, UK-LONDON-1-AD-2, UK-
LONDON-1-AD-3, AP-MELBOURNE-1-AD-1, ME-JEDDAH-1-AD-1. To find out the region identifiers and
availability domains to use, see Availability by Region on page 844.
Note that the shortened versions of availability domain names in the Ashburn and Phoenix regions are exceptions, as
shown below:
• For the Phoenix region, shortened versions of availability domain names are in the format PHX-AD-
<availability-domain-number>. For example, PHX-AD-1, PHX-AD-2, PHX-AD-3.
• For the Ashburn region, shortened versions of availability domain names are in the format US-ASHBURN-AD-
<availability-domain-number>. For example, US-ASHBURN-AD-1, US-ASHBURN-AD-2, US-
ASHBURN-AD-3.

oci.oraclecloud.com/fault-domain
Container Engine for Kubernetes automatically adds the oci.oraclecloud.com/fault-domain label to each
worker node (compute instance) in a cluster, according to the fault domain in which it is placed.
A fault domain is a grouping of hardware and infrastructure that is distinct from other fault domains in the same
availability domain. Each availability domain has three fault domains (named FAULT-DOMAIN-1, FAULT-
DOMAIN-2, FAULT-DOMAIN-3). Every compute instance is placed in a fault domain. See Fault Domains on page
184.
You can constrain the worker nodes on which to run a pod by including the oci.oraclecloud.com/fault-
domain label in the pod specification. Use the oci.oraclecloud.com/fault-domain label to specify the
fault domain in which worker nodes must have been placed.
You'll typically use the oci.oraclecloud.com/fault-domain label to achieve high availability when a
cluster is located in a region with a single availability domain.
For example:

apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx:1.14.2
    ports:
    - containerPort: 80
  nodeSelector:
    oci.oraclecloud.com/fault-domain: FAULT-DOMAIN-3

If you apply the above example pod spec to a cluster, an nginx pod is only created if the cluster has worker nodes
in FAULT-DOMAIN-3 in the availability domain. If the cluster only has worker nodes in FAULT-DOMAIN-1 or
FAULT-DOMAIN-2, the pod is not created and remains in a pending status.
If a cluster has worker nodes in multiple availability domains, include both the failure-
domain.beta.kubernetes.io/zone label and the oci.oraclecloud.com/fault-domain label in a
pod specification to specify both the availability domain and the fault domain of the worker nodes on which to run the
pod.
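For example, the following sketch pins a pod to worker nodes that are in both a particular availability domain and a particular fault domain. The availability domain value, fault domain value, and pod name are illustrative; substitute values that match your own cluster:

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: nginx-ad-fd
spec:
  containers:
  - name: nginx
    image: nginx:1.14.2
    ports:
    - containerPort: 80
  nodeSelector:
    failure-domain.beta.kubernetes.io/zone: UK-LONDON-1-AD-1
    oci.oraclecloud.com/fault-domain: FAULT-DOMAIN-2
EOF

Because both labels appear in the nodeSelector, the pod is only scheduled on worker nodes that carry both values.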

node.kubernetes.io/exclude-from-external-load-balancers
Container Engine for Kubernetes automatically enables the ServiceNodeExclusion feature gate on the clusters
it creates. With the ServiceNodeExclusion feature gate enabled on a cluster, you can add a label to particular
worker nodes to exclude them from the list of backend servers in an Oracle Cloud Infrastructure load balancer
backend set. The fewer worker nodes included in a backend set, the faster the load balancer can be updated.

To exclude a worker node from the list of backend servers in a backend set, add the node.kubernetes.io/
exclude-from-external-load-balancers label to the node by entering:

kubectl label nodes <node-name> node.kubernetes.io/exclude-from-external-load-balancers=true

For example:

kubectl label nodes 10.0.1.2 node.kubernetes.io/exclude-from-external-load-balancers=true

Note that having added the label to a node, the node is excluded from the list of backend servers regardless of the value of the label. For example, even if you set node.kubernetes.io/exclude-from-external-load-balancers=false on the worker node, it is still excluded from the list of backend servers.
To remove the label from the node, enter:

kubectl label nodes <node-name> node.kubernetes.io/exclude-from-external-load-balancers-
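Whether you are adding or removing the label, you can check its current value on the worker nodes by listing the label as a column (this uses the standard kubectl -L option):

kubectl get nodes -L node.kubernetes.io/exclude-from-external-load-balancers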

Encrypting Kubernetes Secrets at Rest in Etcd


The control plane nodes in a Kubernetes cluster store sensitive configuration data (such as authentication tokens,
passwords, and SSH keys) as Kubernetes secret objects in etcd. Etcd is an open source distributed key-value
store that Kubernetes uses for cluster coordination and state management. In the Kubernetes clusters created by
Container Engine for Kubernetes, etcd writes and reads data to and from block storage volumes in the Oracle Cloud
Infrastructure Block Volume service. Although the data in block storage volumes is encrypted, Kubernetes secrets at
rest in etcd itself are not encrypted by default.
For additional security, when you create a new cluster you can specify that Kubernetes secrets at rest in etcd are to be
encrypted using the Oracle Cloud Infrastructure Vault service (see Overview of Vault on page 3988). Before you
can create a cluster where Kubernetes secrets are encrypted in the etcd key-value store, you have to:
• know the name and OCID of a suitable master encryption key in Vault
• create a dynamic group that includes all clusters in the compartment in which you are going to create the new
cluster
• create a policy authorizing the dynamic group to use the master encryption key
Having created the cluster and specified that you want Kubernetes secrets at rest in the etcd key-value store to be
encrypted, you can optionally restrict the use of the master encryption key by modifying the dynamic group to include
just that cluster.
Note the following:
• You can only select the option to encrypt the Kubernetes secrets in the cluster's etcd key-value store when creating
a new cluster in the 'Custom Create' workflow. You cannot encrypt Kubernetes secrets in the cluster's etcd key-
value store when creating a new cluster in the 'Quick Create' workflow. And you cannot encrypt Kubernetes
secrets in the etcd key-value stores of clusters you previously created in the 'Custom Create' workflow.
• You can only select the option to encrypt Kubernetes secrets in the cluster's etcd key-value store if you specify
Kubernetes version 1.13.x or later as the version of Kubernetes to run on the control plane nodes of the cluster.
• Policies must have been defined to authorize Container Engine for Kubernetes to use the master encryption key,
and to authorize users to delegate key usage to Container Engine for Kubernetes in the first place. For more
information, see Let a user group delegate key usage in a compartment and Let Block Volume, Object Storage,
File Storage, and Container Engine for Kubernetes services encrypt and decrypt volumes, buckets, file systems,
and Kubernetes secrets in Common Policies on page 2150.
• After you've specified a master encryption key for a new cluster and created the cluster, do not subsequently
delete the master encryption key in the Vault service. As soon as you schedule a key for deletion in Vault, the
Kubernetes secrets stored for the cluster in etcd become inaccessible. If you have already scheduled the key
for deletion, it might still be in the Pending Deletion state. If that is the case, cancel the scheduled key deletion (see To cancel the deletion of a key on page 4002) to restore access to the Kubernetes secrets. If you allow the
scheduled key deletion operation to complete and the master encryption key to be deleted, the Kubernetes secrets
stored for the cluster in etcd are permanently inaccessible. As a result, cluster upgrades will fail. In this situation,
you have no choice but to delete and recreate the cluster.
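If you are unsure whether an existing cluster was created with a master encryption key (for example, before scheduling any key deletions in Vault), one way to check is to retrieve the cluster details with the CLI and look for an associated key OCID. This is a sketch; the exact shape of the output depends on your CLI version:

oci ce cluster get --cluster-id <cluster-ocid> | grep -i kms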

Master Encryption Keys in Other Tenancies


You can create a cluster in one tenancy that uses a master encryption key in a different tenancy. In this case, you have
to write cross-tenancy policies to enable the cluster in its tenancy to access the master encryption key in the Vault
service's tenancy. Note that if you want to create a cluster and specify a master encryption key that's in a different
tenancy, you cannot use the Console to create the cluster.
For example, assume the cluster is in the ClusterTenancy, and the master encryption key is in the KeyTenancy. Users
belonging to a group (OKEAdminGroup) in the ClusterTenancy have permissions to create clusters. A dynamic group
(OKEAdminDynGroup) has been created in the ClusterTenancy, with the rule ALL {resource.type = 'cluster',
resource.compartment.id = 'ocid1.compartment.oc1..<unique_ID>'}, so all clusters created
in the ClusterTenancy belong to the dynamic group.
In the root compartment of the KeyTenancy, the following policies:
• use the ClusterTenancy's OCID to map ClusterTenancy to the alias OKE_Tenancy
• use the OCIDs of OKEAdminGroup and OKEAdminDynGroup to map them to the aliases
RemoteOKEAdminGroup and RemoteOKEClusterDynGroup respectively
• give RemoteOKEAdminGroup and RemoteOKEClusterDynGroup the ability to list, view, and perform
cryptographic operations with a particular master key in the KeyTenancy

Define tenancy OKE_Tenancy as ocid1.tenancy.oc1..<unique_ID>
Define dynamic-group RemoteOKEClusterDynGroup as ocid1.dynamicgroup.oc1..<unique_ID>
Define group RemoteOKEAdminGroup as ocid1.group.oc1..<unique_ID>
Admit dynamic-group RemoteOKEClusterDynGroup of tenancy ClusterTenancy to use keys in tenancy where target.key.id = 'ocid1.key.oc1..<unique_ID>'
Admit group RemoteOKEAdminGroup of tenancy ClusterTenancy to use keys in tenancy where target.key.id = 'ocid1.key.oc1..<unique_ID>'

In the root compartment of the ClusterTenancy, the following policies:


• use the KeyTenancy's OCID to map KeyTenancy to the alias KMS_Tenancy
• give OKEAdminGroup and OKEAdminDynGroup the ability to use master keys in the KeyTenancy
• allow OKEAdminDynGroup to use a specific master key obtained from the KeyTenancy in the ClusterTenancy

Define tenancy KMS_Tenancy as ocid1.tenancy.oc1..<unique_ID>
Endorse group OKEAdminGroup to use keys in tenancy KMS_Tenancy
Endorse dynamic-group OKEAdminDynGroup to use keys in tenancy KMS_Tenancy
Allow dynamic-group OKEAdminDynGroup to use keys in tenancy where target.key.id = 'ocid1.key.oc1..<unique_ID>'

See Accessing Object Storage Resources Across Tenancies on page 3536 for more examples of writing cross-
tenancy policies.
Having entered the policies, you can now run a command similar to the following to create a cluster in the
ClusterTenancy that uses the master key obtained from the KeyTenancy:

oci ce cluster create --name oke-with-cross-kms --kubernetes-version v1.16.8 --vcn-id ocid1.vcn.oc1.iad.<unique_ID> --service-lb-subnet-ids '["ocid1.subnet.oc1.iad.<unique_ID>"]' --compartment-id ocid1.compartment.oc1..<unique_ID> --kms-key-id ocid1.key.oc1.iad.<unique_ID>

Using the Console


To create a new cluster in the 'Custom Create' workflow where Kubernetes secrets are encrypted in the cluster's etcd
key-value store:
1. Log in to the Console.
2. If you know the OCID of the master encryption key to use to encrypt Kubernetes secrets, go straight to the next
step. Otherwise:
• If a suitable master encryption key already exists in Vault but you're not sure of its OCID, follow the
instructions in To view key details on page 3999 and make a note of the master encryption key's OCID.
• If a suitable master encryption key does not already exist in Vault, follow the instructions in To create a new
master encryption key on page 3999 to create one. Having created a new master encryption key, make a note
of its OCID.
3. Create a new dynamic group containing all the clusters in the compartment in which you intend to create the new
cluster:
a. Open the navigation menu. Under Governance and Administration, go to Identity and click Dynamic
Groups.
b. Follow the instructions in To create a dynamic group on page 2442, and give the dynamic group a name (for
example, acme-oke-kms-dyn-grp).
c. Enter a rule that includes all clusters in the compartment in the format:

ALL {resource.type = 'cluster', resource.compartment.id = '<compartment-ocid>'}

where <compartment-ocid> is the OCID of the compartment in which you intend to create the new
cluster.
For example:

ALL {resource.type = 'cluster', resource.compartment.id = 'ocid1.compartment.oc1..aaaaaaaa23______smwa'}
d. Click Create Dynamic Group.
Having created a dynamic group that includes all clusters in the compartment, you can now create a policy to give
the dynamic group access to the master encryption key in Vault.
4. Create a new policy to enable use of the master encryption key:
a. Open the navigation menu. Under Governance and Administration, go to Identity and click Policies.
b. Follow the instructions in To create a policy on page 2473, and give the policy a name (for example, acme-
oke-kms-dyn-grp-policy).
c. Enter a policy statement to give the dynamic group access to the master encryption key, in the format:

Allow dynamic-group <dynamic-group-name> to use keys in compartment <compartment-name> where target.key.id = '<key-OCID>'

where:
• <dynamic-group-name> is the name of the dynamic group you created earlier.
• <compartment-name> is the name of the compartment containing the master encryption key.
• <key-OCID> is the OCID of the master encryption key in Vault.
For example:

Allow dynamic-group acme-oke-kms-dyn-grp to use keys in compartment acme-kms-key-compartment where target.key.id = 'ocid1.key.oc1.iad.annrl______trfg'
d. Click Create to create the new policy.

5. Follow the instructions to create a new cluster in Using the Console to create a Cluster with Explicitly Defined
Settings in the 'Custom Create' workflow on page 871, select the Encrypt Using Customer-Managed Keys
option, and select:
• Choose a Vault in <compartment-name>: The vault that contains the master encryption key, from the list of
vaults in the specified compartment. By default, <compartment-name> is the compartment in which you are
creating the cluster, but you can select a different compartment by clicking Change Compartment.
• Choose a Key in <compartment-name>: The name of the master encryption key, from the list of keys in the
specified compartment. By default, <compartment-name> is the compartment in which you are creating the
cluster, but you can select a different compartment by clicking Change Compartment. Note that you cannot
change the master encryption key after the cluster has been created.
6. (Optional) Having created the cluster, for additional security:
a. Make a note of the OCID of the new cluster you just created.
b. Restrict the use of the master encryption key by modifying the dynamic group rule you created earlier to
explicitly specify the OCID of the new cluster, rather than all clusters in the compartment. For example:

resource.id = 'ocid1.cluster.oc1.iad.aaaaaaaaaf______yg5q'
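As a sketch, you could also apply the narrower rule with the CLI instead of the Console. The command below assumes you know the dynamic group's OCID and that your CLI version supports updating the matching rule:

oci iam dynamic-group update --dynamic-group-id <dynamic-group-ocid> --matching-rule "ALL {resource.type = 'cluster', resource.id = 'ocid1.cluster.oc1.iad.aaaaaaaaaf______yg5q'}"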

Using the API


For information about using the API and signing requests, see REST APIs on page 4409 and Security Credentials on
page 181. For information about SDKs, see Software Development Kits and Command Line Interface on page 4262.
Use the CreateCluster operation to create a cluster.

Connecting to Worker Nodes Using SSH


If you provided a public SSH key when creating the node pool in a cluster, the public key is installed on all worker
nodes in the cluster. On UNIX and UNIX-like platforms (including Solaris and Linux), you can then connect through
SSH to the worker nodes using the ssh utility (an SSH client) to perform administrative tasks.
Note the following instructions assume the UNIX machine you use to connect to the worker node:
• Has the ssh utility installed.
• Has access to the SSH private key file paired with the SSH public key that was specified when the cluster was
created.
How to connect to worker nodes using SSH depends on whether you specified public or private subnets for the
worker nodes when defining the node pools in the cluster.

Connecting to Worker Nodes in Public Subnets Using SSH


Before you can connect to a worker node in a public subnet using SSH, you must define an ingress rule in the subnet's
security list to allow SSH access. The ingress rule must allow access to port 22 on worker nodes from source 0.0.0.0/0
and any source port, as follows:

• Type: Stateful
• Source CIDR: 0.0.0.0/0
• IP Protocol: TCP
• Source Port Range: All
• Dest. Port Range: 22
• Type and Code: n/a
• Allows: TCP traffic for ports: 22 SSH Remote Login Protocol
• Description: Enables SSH access.

To connect to a worker node in a public subnet through SSH from a UNIX machine using the ssh utility:
1. Find out the IP address of the worker node to which you want to connect. You can do this in a number of ways:
• Using kubectl. If you haven't already done so, follow the steps to set up the cluster's kubeconfig configuration
file and (if necessary) set the KUBECONFIG environment variable to point to the file. Note that you must set
up your own kubeconfig file. You cannot access a cluster using a kubeconfig file that a different user set up.
See Setting Up Cluster Access on page 875. Then in a terminal window, enter kubectl get nodes to
see the public IP addresses of worker nodes in node pools in the cluster.
• Using the Console. In the Console, display the Cluster List page and then select the cluster to which the
worker node belongs. On the Node Pools tab, click the name of the node pool to which the worker node
belongs. On the Nodes tab, you see the public IP address of every worker node in the node pool.
• Using the REST API. Use the ListNodePools operation to see the public IP addresses of worker nodes in a
node pool.
2. In the terminal window, enter ssh opc@<node_ip_address> to connect to the worker node, where
<node_ip_address> is the IP address of the worker node that you made a note of earlier. For example, you
might enter ssh opc@192.0.2.254.
Note that if the SSH private key is not stored in the file or in the path that the ssh utility expects (for example, the
ssh utility might expect the private key to be stored in ~/.ssh/id_rsa), you must explicitly specify the private key
filename and location in one of two ways:
• Use the -i option to specify the filename and location of the private key. For example, ssh -i ~/.ssh/
my_keys/my_host_key_filename opc@192.0.2.254
• Add the private key filename and location to an SSH configuration file, either the client configuration file
(~/.ssh/config) if it exists, or the system-wide client configuration file (/etc/ssh/ssh_config). For example, you
might add the following:

Host 192.0.2.254
  IdentityFile ~/.ssh/my_keys/my_host_key_filename

For more about the ssh utility’s configuration file, enter man ssh_config
Note also that permissions on the private key file must allow you read/write access, but prevent other
users from accessing the file. For example, to set appropriate permissions, you might enter chmod 600
~/.ssh/my_keys/my_host_key_filename. If permissions are not set correctly and the private key file is
accessible to other users, the ssh utility will simply ignore the private key file.

Connecting to Worker Nodes in Private Subnets Using SSH


Worker nodes in private subnets have private IP addresses only (they do not have public IP addresses). They can only
be accessed by other resources inside the VCN. Oracle recommends using bastion hosts to control external access
(such as SSH) to worker nodes in private subnets. A bastion host is in a public subnet, has a public IP address, and is
accessible from the internet. For more information about bastion hosts, see the white paper Bastion Hosts: Protected
Access for Virtual Cloud Networks.
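As a sketch, if your ssh utility supports the -J (ProxyJump) option and the same SSH public key is authorized on both the bastion host and the worker node, you might connect through the bastion in a single command similar to the following (the key filename and addresses are placeholders):

ssh -i ~/.ssh/my_keys/my_host_key_filename -J opc@<bastion-public-ip> opc@<worker-node-private-ip>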

Using Pod Security Policies with Container Engine for Kubernetes


You can control the operations that pods are allowed to perform on a cluster you have created with Container Engine
for Kubernetes by setting up pod security policies for the cluster. Pod security policies are a way to ensure that pods
meet security-related conditions before they can be accepted by a cluster. For example, you can use pod security
policies to:
• limit the storage choices available to pods
• restrict the host networking and ports that pods can access
• prevent pods from running as the root user
• prevent pods from running in privileged mode
You can also use pod security policies to provide default values for pods, by 'mutating' the pod.

Having defined a pod security policy for a cluster, you have to authorize the requesting user or target pod's service
account to use the policy. You do this by creating a role (or clusterrole) to access the pod security policy, and then
creating a rolebinding (or clusterrolebinding) between the role (or clusterrole) and the requesting user or target
pod's service account. For more information about roles, clusterroles, and bindings, see About Access Control and
Container Engine for Kubernetes on page 919.
You specify whether a cluster enforces the pod security policies defined for it by enabling the cluster's
PodSecurityPolicy admission controller. The PodSecurityPolicy admission controller acts on creation and
modification of a pod and determines if the pod should be admitted to the cluster based on the requested
security context in the pod spec and the cluster's pod security policies. If multiple pod security policies exist,
the PodSecurityPolicy admission controller first compares the pod against non-mutating policies in alphabetical
order and uses the first policy that successfully validates the pod. If no non-mutating policies validate the pod, the
PodSecurityPolicy admission controller compares the pod against mutating policies in alphabetical order and uses the
first policy that successfully validates the pod.
Caution:

Warning 1:
It is very important to note that when you enable a cluster's
PodSecurityPolicy admission controller, no application pods can start on
the cluster unless suitable pod security policies exist, along with roles (or
clusterroles) and rolebindings (or clusterrolebindings) to associate pods with
policies. You will not be able to run application pods on a cluster with an
enabled PodSecurityPolicy admission controller unless these prerequisites are
met.
We strongly recommend you use PodSecurityPolicy admission controllers as
follows:
• Whenever you create a new cluster, enable the PodSecurityPolicy admission controller.
• Immediately after creating a new cluster, create pod security
policies, along with roles (or clusterroles) and rolebindings (or
clusterrolebindings).
Warning 2:
You must create pod security policies before enabling the PodSecurityPolicy
admission controller of an existing cluster that is already in production
(that is, some time after you created it). If you decide to enable an existing
cluster's PodSecurityPolicy admission controller, we strongly recommend
you first verify the cluster's pod security policies in a development or test
environment. That way, you can be sure the pod security policies work as you
expect and correctly allow (or refuse) pods to start on the cluster.
When you enable the PodSecurityPolicy admission controller of a cluster you've created with Container Engine for
Kubernetes, a pod security policy for Kubernetes system privileged pods is automatically created (along with the
associated clusterrole and clusterrolebinding). This pod security policy, and the clusterrole and clusterrolebinding,
enable the Kubernetes system pods to run. The pod security policy, clusterrole, and clusterrolebinding are defined in
the kube-system.yaml file (see kube-system.yaml Reference on page 908).
Note that you can create pod security policies for a cluster before enabling the cluster's PodSecurityPolicy admission
controller. Also note that you can disable a cluster's PodSecurityPolicy admission controller that was previously
enabled. In this case, any previously created pod security policies, roles (or clusterroles), and rolebindings (or
clusterrolebindings) are not deleted. The pod security policies are simply not enforced. Any application pod will be
able to run on the cluster.
For more information about pod security policies and the PodSecurityPolicy admission controller, see the Kubernetes
documentation.

Creating a Pod Security Policy for Application Pods


To create a pod security policy for application pods, create a role to access the pod security policy, and create a
rolebinding to enable the application pods to use the pod security policy:
1. Create the pod security policy for application pods:
a. Define and save the pod security policy in a file (for example, in acme-app-psp.yaml).
For example, this policy (taken from the Kubernetes documentation) simply prevents the creation of privileged
pods:

apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: acme-app-psp
spec:
  privileged: false  # Don't allow privileged pods!
  # The rest fills in some required fields.
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  runAsUser:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  volumes:
  - '*'
b. Enter the following command to create the pod security policy:

kubectl create -f <filename>.yaml

For example:

kubectl create -f acme-app-psp.yaml


2. Create the role (or clusterrole) to access the pod security policy:
a. Define and save a role (or clusterrole) in a file. For example, in acme-app-psp-crole.yaml.
For example:

# Cluster role which grants access to the app pod security policy
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: acme-app-psp-crole
rules:
- apiGroups:
  - policy
  resourceNames:
  - acme-app-psp
  resources:
  - podsecuritypolicies
  verbs:
  - use
b. Enter the following command to create the role (or clusterrole):

kubectl create -f <filename>.yaml

For example:

kubectl create -f acme-app-psp-crole.yaml


3. Create the rolebinding (or clusterrolebinding) to authorize the application pods to use the pod security policy:
a. Define and save the rolebinding (or clusterrolebinding) in a file. For example, in acme-app-psp-crole-
bind.yaml.
For example:

# Role binding which grants access to the app pod security policy
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: acme-app-psp-binding
  namespace: acme-namespace
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: acme-app-psp-crole
subjects:
# For all service accounts in acme-namespace
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:serviceaccounts:acme-namespace
b. Enter the following command to create the rolebinding (or clusterrolebinding):

kubectl create -f <filename>.yaml

For example:

kubectl create -f acme-app-psp-crole-bind.yaml

Having defined a pod security policy and authorized application pods to use it by creating a role and rolebinding (or
a clusterrole and clusterrolebinding), enable the cluster's PodSecurityPolicy admission controller to enforce the pod
security policy (if it's not enabled already).
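Before enabling the admission controller, you can check that the rolebinding gives the service accounts access to the pod security policy by impersonating one of them. This sketch uses the example names above and the default service account in acme-namespace:

kubectl auth can-i use podsecuritypolicy/acme-app-psp --as=system:serviceaccount:acme-namespace:default --namespace acme-namespace

The command returns yes if the rolebinding is in place.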

Using the Console to Enable the PodSecurityPolicy Admission Controller


To enable the PodSecurityPolicy admission controller when creating new clusters using the Console:
1. Log in to the Console.
2. Follow the instructions to create a new cluster in Using the Console to create a Cluster with Explicitly Defined
Settings in the 'Custom Create' workflow on page 871, click Show Advanced Options, and select the Pod
Security Policies - Enforced option. This option enables the PodSecurityPolicy admission controller.
No application pods will be accepted into the new cluster unless suitable pod security policies exist, along with
roles (or clusterroles) and rolebindings (or clusterrolebindings) to associate pods with policies.
3. Follow the instructions to set the remaining cluster details, and click Create Cluster to create the new cluster.
4. Follow the instructions in Creating a Pod Security Policy for Application Pods on page 906 to create pod
security policies for the PodSecurityPolicy admission controller to enforce when accepting pods into the new
cluster.

To enable the PodSecurityPolicy admission controller in existing clusters using the Console:
1. Follow the instructions in Creating a Pod Security Policy for Application Pods on page 906 to create pod
security policies for the PodSecurityPolicy admission controller to enforce when accepting pods into the existing
cluster.
We strongly recommend you first verify the pod security policies in a development or test environment. That way,
you can be sure the pod security policies work as you expect and correctly allow (or refuse) pods to start on the
cluster.
2. In the Console, open the navigation menu. Under Solutions and Platform, go to Developer Services and click
Kubernetes Clusters.
3. Choose a Compartment you have permission to work in.
4. On the Cluster List page, click the name of the cluster you want to modify.
5. On the Cluster Details tab, click Not Enforced beside Pod Security Policies.
6. In the Pod Security Policies window, select Enforced.
From now on, no new application pods will be accepted into the cluster unless suitable pod security policies exist,
along with roles (or clusterroles) and rolebindings (or clusterrolebindings) to associate pods with policies. Note
that any currently running pods will continue to run regardless.
7. Click Save Changes.

Using the API


For information about using the API and signing requests, see REST APIs on page 4409 and Security Credentials on
page 181. For information about SDKs, see Software Development Kits and Command Line Interface on page 4262.
To enable the PodSecurityPolicy admission controller, use the:
• CreateCluster operation when creating new clusters
• Update Cluster operation when modifying existing clusters

kube-system.yaml Reference
The pod security policy, and the associated clusterrole and clusterrolebinding, for Kubernetes system privileged pods
are automatically created when you enable a cluster's PodSecurityPolicy admission controller. These allow any pod in
the kube-system namespace to run. They are created from definitions in the kube-system.yaml shown below:

apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  annotations:
    # See https://kubernetes.io/docs/concepts/policy/pod-security-policy/#seccomp
    seccomp.security.alpha.kubernetes.io/allowedProfileNames: '*'
  name: oke-privileged
spec:
  allowedCapabilities:
  - '*'
  allowPrivilegeEscalation: true
  fsGroup:
    rule: 'RunAsAny'
  hostIPC: true
  hostNetwork: true
  hostPID: true
  hostPorts:
  - min: 0
    max: 65535
  privileged: true
  readOnlyRootFilesystem: false
  runAsUser:
    rule: 'RunAsAny'
  seLinux:
    rule: 'RunAsAny'
  supplementalGroups:
    rule: 'RunAsAny'
  volumes:
  - '*'

---

# Cluster role which grants access to the privileged pod security policy
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: oke-privileged-psp
rules:
- apiGroups:
  - policy
  resourceNames:
  - oke-privileged
  resources:
  - podsecuritypolicies
  verbs:
  - use

---

# Role binding for kube-system - allow kube-system service accounts - should
# take care of CNI i.e. flannel running in the kube-system namespace
# Assumes access to the kube-system is restricted
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: kube-system-psp
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: oke-privileged-psp
subjects:
# For all service accounts in the kube-system namespace
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:serviceaccounts:kube-system

Autoscaling Kubernetes Clusters


You can automatically scale the clusters you create using Container Engine for Kubernetes to optimize resource
usage.
To enable autoscaling, you can deploy the Kubernetes Metrics Server to collect resource metrics from each worker
node in the cluster (see Deploying the Kubernetes Metrics Server on a Cluster Using Kubectl on page 910).
Having deployed the Kubernetes Metrics Server, you can then use:
• the Kubernetes Horizontal Pod Autoscaler to adjust the number of pods in a deployment (see Using the
Kubernetes Horizontal Pod Autoscaler on page 911)
• the Kubernetes Vertical Pod Autoscaler to adjust the resource requests and limits for containers running in a
deployment's pods (see Using the Kubernetes Vertical Pod Autoscaler on page 914)

Deploying the Kubernetes Metrics Server on a Cluster Using Kubectl


You can deploy the Kubernetes Metrics Server on clusters you create using Container Engine for Kubernetes to
enable autoscaling.
The Kubernetes Metrics Server is a cluster-wide aggregator of resource usage data. The Kubernetes Metrics Server
collects resource metrics from the kubelet running on each worker node and exposes them in the Kubernetes API
server through the Kubernetes Metrics API. Other Kubernetes add-ons require the Kubernetes Metrics Server,
including:
• the Horizontal Pod Autoscaler (see Using the Kubernetes Horizontal Pod Autoscaler on page 911)
• the Vertical Pod Autoscaler (see Using the Kubernetes Vertical Pod Autoscaler on page 914)
Note that the Kubernetes Metrics Server is not intended to be used for anything other than autoscaling. For example,
it is not recommended that you use the Kubernetes Metrics Server to forward metrics to monitoring solutions, nor as a
source of monitoring solution metrics. For more information, see the Kubernetes Metrics Server documentation.
To deploy the Kubernetes Metrics Server on a cluster you've created with Container Engine for Kubernetes:
1. If you haven't already done so, follow the steps to set up the cluster's kubeconfig configuration file and (if
necessary) set the KUBECONFIG environment variable to point to the file. Note that you must set up your own
kubeconfig file. You cannot access a cluster using a kubeconfig file that a different user set up. See Setting Up
Cluster Access on page 875.
2. If your Oracle Cloud Infrastructure user is a tenancy administrator or cluster administrator, skip the next step and go straight to step 4.
3. If your Oracle Cloud Infrastructure user is not a tenancy administrator or cluster administrator, ask a tenancy
administrator or cluster administrator to grant your user the Kubernetes RBAC cluster-admin clusterrole on the
cluster by entering:

kubectl create clusterrolebinding <my-cluster-admin-binding> --clusterrole=cluster-admin --user=<user-OCID>

For more information, see About Access Control and Container Engine for Kubernetes on page 919.
4. In a terminal window, deploy the Kubernetes Metrics Server by entering:

kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/download/<version-number>/components.yaml

where <version-number> is the Kubernetes Metrics Server version that you want to deploy. For example,
v0.3.6.
Note that the Kubernetes Metrics Server is being actively developed, so the version number to specify will change
over time. To find out the currently available versions, see the Kubernetes Metrics Server documentation.
Tip:

If the command fails to connect to https://github.com/kubernetes-sigs/metrics-server/releases/download/<version-number>/components.yaml, go to the URL in a browser and download the manifest file components.yaml to a local directory. Repeat the kubectl apply command and specify the local location of the components.yaml file.
5. Confirm that the Kubernetes Metrics Server has been deployed successfully and is available by entering:

kubectl get deployment metrics-server -n kube-system
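Optionally, once the deployment reports as available (it can take a minute or two for the first metrics to be collected), you can confirm that the Metrics API is responding by entering:

kubectl top nodes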

Using the Kubernetes Horizontal Pod Autoscaler


You can use the Kubernetes Horizontal Pod Autoscaler to automatically scale the number of pods in a deployment,
replication controller, replica set, or stateful set, based on that resource's CPU or memory utilization, or on other
metrics. The Horizontal Pod Autoscaler can help applications scale out to meet increased demand, or scale in when
resources are no longer needed. You can set a target metric percentage for the Horizontal Pod Autoscaler to meet
when scaling applications. For more information, see Horizontal Pod Autoscaler in the Kubernetes documentation.
The Horizontal Pod Autoscaler is a standard API resource in Kubernetes that requires the installation of a metrics
source, such as the Kubernetes Metrics Server, in the cluster. For more information, see Deploying the Kubernetes
Metrics Server on a Cluster Using Kubectl on page 910.

Notes about the Horizontal Pod Autoscaler


Note the following:
• The Kubernetes Metrics Server only supports scaling based on CPU and memory utilization. Other
implementations of the Kubernetes Metrics API support scaling based on custom metrics. Refer to the Kubernetes
documentation for a list of alternative implementations (see Implementations), and for more information about
scaling based on custom metrics (see Autoscaling on multiple metrics and custom metrics).
• You can scale applications manually by updating manifest files, without using the Horizontal Pod Autoscaler.
Working with the Horizontal Pod Autoscaler
The instructions below are based on the Horizontal Pod Autoscaler Walkthrough topic in the Kubernetes
documentation. They describe how to:
• Verify that the Kubernetes Metrics Server has been installed on a cluster.
• Deploy a sample php-apache web server.
• Create a Horizontal Pod Autoscaler resource that will scale based on CPU utilization.
• Start generation of a sample load.
• View the scaling operation in action.
• Stop the sample load generation.
• Clean up, by removing the php-apache web server and the Horizontal Pod Autoscaler.

Step 1: Verify the Kubernetes Metrics Server Installation


1. If you haven't already done so, follow the steps to set up the cluster's kubeconfig configuration file and (if
necessary) set the KUBECONFIG environment variable to point to the file. Note that you must set up your own
kubeconfig file. You cannot access a cluster using a kubeconfig file that a different user set up. See Setting Up
Cluster Access on page 875.
2. Confirm that the Kubernetes Metrics Server has been deployed successfully on the cluster and is available by
entering:

kubectl -n kube-system get deployment/metrics-server

If the command returns a `Not Found` error, then you must deploy the Kubernetes Metrics Server on the cluster
before proceeding. See Deploying the Kubernetes Metrics Server on a Cluster Using Kubectl on page 910.

Step 2: Deploy a Sample Application


Deploy a simple Apache web server application by entering:

kubectl apply -f https://k8s.io/examples/application/php-apache.yaml

The output from the above command confirms the deployment:

deployment.apps/php-apache created
service/php-apache created

The Apache web server pod that is created from the manifest file:
• Has a 500m CPU limit, which ensures the container will never use more than 500 millicores, or 1/2 of a core.
• Has a 200m CPU request, which reserves 200 millicores of CPU resources (1/5 of a core) for the container.
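If you want to confirm these values on the deployed resource, you can print the container's resources section with a standard kubectl JSONPath query:

kubectl get deployment php-apache -o jsonpath='{.spec.template.spec.containers[0].resources}'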

Step 3: Create a Horizontal Pod Autoscaler Resource


1. Create a Horizontal Pod Autoscaler resource to maintain a minimum of 1 and a maximum of 10 replicas, and an
average CPU utilization of 50%, by entering:

kubectl autoscale deployment php-apache --cpu-percent=50 --min=1 --max=10

The output from the above command confirms the Horizontal Pod Autoscaler has been created.

horizontalpodautoscaler.autoscaling/php-apache autoscaled

The command creates a Horizontal Pod Autoscaler for the Apache web server deployment that:
• Maintains a minimum of 1 and a maximum of 10 replicas of the previously created pods controlled by the
Apache web server deployment.
• Increases and decreases the number of replicas of the deployment to maintain an average CPU utilization of
50% across all pods.
If the average CPU utilization falls below 50%, the Horizontal Pod Autoscaler tries to reduce the number of pods
in the deployment to the minimum (in this case, 1). If the average CPU utilization goes above 50 percent, the
Horizontal Pod Autoscaler tries to increase the number of pods in the deployment to the maximum (in this case,
10). For more information, see How does the Horizontal Pod Autoscaler work? in the Kubernetes documentation.
For more information about the kubectl autoscale command, see autoscale in the Kubernetes documentation. If you prefer to define the Horizontal Pod Autoscaler declaratively rather than with kubectl autoscale, see the sketch after this list.
2. After a minute, confirm the current status of the Horizontal Pod Autoscaler by entering:

kubectl get hpa

The output from the above command shows the current status:

NAME         REFERENCE               TARGETS   MINPODS   MAXPODS   REPLICAS   AGE
php-apache   Deployment/php-apache   0%/50%    1         10        1          10s

The TARGETS column shows the average CPU utilization across all pods controlled by the Apache web server
deployment. Since no requests are being sent to the server, the above example shows the current CPU utilization is
0%, compared to the target utilization of 50%. Note that you might see different numbers, depending on how long
you wait before running the command.
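As an alternative to the kubectl autoscale command used above, you can create the same Horizontal Pod Autoscaler declaratively. The following is a minimal sketch using the autoscaling/v1 API; it assumes the php-apache deployment created earlier:

kubectl apply -f - <<EOF
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: php-apache
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: php-apache
  minReplicas: 1
  maxReplicas: 10
  targetCPUUtilizationPercentage: 50
EOF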

Step 4: Start Sample Load Generation


1. Run a container with a busybox image to create a load for the Apache web server by entering:

kubectl run -it --rm load-generator --image=busybox /bin/sh --generator=run-pod/v1
2. Generate a load for the Apache web server that will cause the Horizontal Pod Autoscaler to scale out the
deployment by entering:

while true; do wget -q -O- http://php-apache; done

Step 5: View the Scaling Operation


1. After a minute, open a new terminal window and confirm the current status of the Horizontal Pod Autoscaler by
entering:

kubectl get hpa

The output from the above command shows the current status:

NAME         REFERENCE               TARGETS    MINPODS   MAXPODS   REPLICAS   AGE
php-apache   Deployment/php-apache   250%/50%   1         10        1          1m

In the above example, you can see the current CPU utilization has increased to 250%, compared to the target
utilization of 50%. Note that you might see different numbers, depending on how long you wait before running the
command.
To achieve the utilization target of 50%, the Horizontal Pod Autoscaler will have to increase the number of
replicas to scale out the deployment.
2. After another few minutes, view the increased number of replicas by re-entering:

kubectl get hpa

The output from the above command shows the current status:

NAME         REFERENCE               TARGETS   MINPODS   MAXPODS   REPLICAS   AGE
php-apache   Deployment/php-apache   50%/50%   1         10        5          5m

In the above example, you can see that the Horizontal Pod Autoscaler has resized the deployment to 5 replicas,
and the utilization target of 50% has been achieved. Note that you might see different numbers, depending on how
long you wait before running the command.
3. Verify the deployment has been scaled out by entering:

kubectl get deployment php-apache

The output from the above command shows the current status:

NAME         READY   UP-TO-DATE   AVAILABLE   AGE
php-apache   5/5     5            5           5m

Note that you might see different numbers, depending on how long you wait before running the command.

Step 6: Stop Sample Load Generation


1. In the terminal window where you created the container with the busybox image and generated the load for the
Apache web server:
a. Terminate the load generation by pressing Ctrl+C
b. Close the command prompt by entering:

exit

The output from the above command confirms the session has ended:

Session ended, resume using 'kubectl attach load-generator -c load-generator -i -t' command when the pod is running
pod "load-generator" deleted

2. After a minute, confirm the current status of the Horizontal Pod Autoscaler by entering:

kubectl get hpa

The output from the above command shows the current status:

NAME         REFERENCE               TARGETS   MINPODS   MAXPODS   REPLICAS   AGE
php-apache   Deployment/php-apache   0%/50%    1         10        5          10m

In the above example, you can see the current CPU utilization has reduced to 0%, compared to the target
utilization of 50%. However, there are still 5 replicas in the deployment. Note that you might see different
numbers, depending on how long you wait before running the command.
3. After another few minutes, view the reduced number of replicas by re-entering:

kubectl get hpa

The output from the above command shows the current status:

NAME         REFERENCE               TARGETS   MINPODS   MAXPODS   REPLICAS   AGE
php-apache   Deployment/php-apache   0%/50%    1         10        1          15m

In the above example, you can see that the Horizontal Pod Autoscaler has resized the deployment to 1 replica.
Be aware that it will take time for the replica count to reduce to 1. Five minutes is the default timeframe for
scaling in. Note that you might see different numbers, depending on how long you wait before running the
command.
4. Verify the deployment has been scaled in by entering:

kubectl get deployment php-apache

The output from the above command shows the current status:

NAME         READY   UP-TO-DATE   AVAILABLE   AGE
php-apache   1/1     1            1           15m

Note that you might see different numbers, depending on how long you wait before running the command.

Step 7: Clean Up
1. Delete the Horizontal Pod Autoscaler by entering:

kubectl delete horizontalpodautoscaler.autoscaling/php-apache

Note that when you delete a Horizontal Pod Autoscaler, the number of replicas remains the same. A deployment
does not automatically revert back to the state it had prior to the creation of the Horizontal Pod Autoscaler.
2. Delete the Apache web server deployment by entering:

kubectl delete deployment.apps/php-apache service/php-apache

Using the Kubernetes Vertical Pod Autoscaler


You can use the Kubernetes Vertical Pod Autoscaler to automatically adjust the resource requests and limits for
containers running in a deployment's pods. The Vertical Pod Autoscaler can improve cluster resource utilization by:
• Setting the requests automatically based on usage to make sure the appropriate resource amount is available for
each pod.
• Maintaining ratios between limits and requests that were specified in containers' initial configurations.
• Scaling down pods that are over-requesting resources, based on their usage over time.
• Scaling up pods that are under-requesting resources, based on their usage over time.
The Vertical Pod Autoscaler has three components:
• Recommender: Monitors the current and past resource consumption and provides recommended CPU and
memory request values for a container.
• Updater: Checks for pods with incorrect resources and deletes them, so that the pods can be recreated with the
updated request values.
• Admission Plugin: Sets the correct resource requests on new pods (that is, pods just created or recreated by their
controller due to changes made by the Updater).
For more information, see Vertical Pod Autoscaler and Managing Resources for Containers in the Kubernetes
documentation.
You configure the Vertical Pod Autoscaler using the VerticalPodAutoscaler custom resource definition
object. The VerticalPodAutoscaler object enables you to specify the pods to vertically autoscale, and which
resource recommendations to apply (if any). For more information, see VerticalPodAutoscaler and Custom Resource
Definition object in the Kubernetes documentation.
The Vertical Pod Autoscaler requires the installation of a metrics source, such as the Kubernetes Metrics Server, in
the cluster. For more information, see Deploying the Kubernetes Metrics Server on a Cluster Using Kubectl on page
910.

Overriding Limit Ranges


The Vertical Pod Autoscaler attempts to make recommendations within the minimum and maximum values specified
by a limit range, if one has been defined. However, if the applicable limit range conflicts with the values specified in
the resourcePolicy section of the VerticalPodAutoscaler manifest, the Vertical Pod Autoscaler gives
priority to the resource policy and makes recommendations accordingly (even if the values fall outside the limit
range). For more information, see Limit Ranges and Resource Policy Overriding Limit Range in the Kubernetes
documentation.

Creating Recommendations without Applying them


You can use the Vertical Pod Autoscaler to create and apply recommendations, or simply to create recommendations
(without updating pods). To simply create recommendations without applying them, set updateMode: "Off" in
the updatePolicy section of the VerticalPodAutoscaler manifest.
When pods are created, the Vertical Pod Autoscaler analyzes the CPU and memory needs of the containers and
records those recommendations in its Status field. The Vertical Pod Autoscaler does not take any action to update
the resource requests for the running containers.
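
For example, the relevant fragment of the VerticalPodAutoscaler manifest might look like this (a sketch; the other
fields of the manifest are omitted):

spec:
  updatePolicy:
    # "Off" means the Recommender still records recommendations in the Status
    # field, but the Updater never evicts or recreates pods.
    updateMode: "Off"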

Excluding Specific Containers


You can use the Vertical Pod Autoscaler to create and apply recommendations to all the containers in a pod, or
you can selectively exclude particular containers. To turn off recommendations for a particular container, in the
resourcePolicy section of the VerticalPodAutoscaler manifest, specify a containerName and set
mode: "Off" in the containerPolicies section.

Notes about the Vertical Pod Autoscaler


Note the following:
• Currently, Oracle recommends that you do not use the Vertical Pod Autoscaler with the Horizontal Pod Autoscaler
on CPU or memory utilization metrics. However, you can use the Vertical Pod Autoscaler with the Horizontal Pod
Autoscaler on custom and external metrics. See Support for custom metrics in the Kubernetes documentation.
• The Vertical Pod Autoscaler recommendations might exceed available resources (for example, node size,
available size, available quota). Note that applying the recommendations might cause pods to go into a pending
status.

• Whenever the Vertical Pod Autoscaler updates pod resources, the pod is recreated, which causes all running
containers to be restarted. Note that the pod might be recreated on a different node.
Working with the Vertical Pod Autoscaler
The instructions below walk you through deploying the Vertical Pod Autoscaler on a cluster. They describe how to:
• Verify that the Kubernetes Metrics Server has been installed on a cluster.
• Download and deploy the Vertical Pod Autoscaler.
• Deploy a sample application.
• View the scaling operation in action.
• View the recommendation.
• Clean up, by removing the sample application and the Vertical Pod Autoscaler.

Step 1: Verify the Kubernetes Metrics Server Installation


1. If you haven't already done so, follow the steps to set up the cluster's kubeconfig configuration file and (if
necessary) set the KUBECONFIG environment variable to point to the file. Note that you must set up your own
kubeconfig file. You cannot access a cluster using a kubeconfig file that a different user set up. See Setting Up
Cluster Access on page 875.
2. Confirm that the Kubernetes Metrics Server has been deployed successfully on the cluster and is available by
entering:

kubectl -n kube-system get deployment/metrics-server

If the command returns a `Not Found` error, then you must deploy the Kubernetes Metrics Server on the cluster
before proceeding. See Deploying the Kubernetes Metrics Server on a Cluster Using Kubectl on page 910.

Step 2: Download and Deploy the Vertical Pod Autoscaler


1. Download the Vertical Pod Autoscaler source code from GitHub. For example, by entering:

git clone -b vpa-release-0.8 https://github.com/kubernetes/autoscaler.git


2. Change to the vertical-pod-autoscaler directory:

cd autoscaler/vertical-pod-autoscaler
3. If you have previously deployed the Vertical Pod Autoscaler, delete it by entering:

./hack/vpa-down.sh
4. Deploy the Vertical Pod Autoscaler by entering:

./hack/vpa-up.sh
5. Verify that the Vertical Pod Autoscaler pods have been created successfully by entering:

kubectl get pods -n kube-system

The output from the above command shows the pods:

vpa-admission-controller-59d9965cfb-bzs8l   1/1   Running   0   6m34s
vpa-recommender-5bcb58569-mqdds             1/1   Running   0   6m43s
vpa-updater-5979cbf757-scw2d                1/1   Running   0   6m46s

Note that you will probably see different names and numbers.

Step 3: Deploy the Sample Application


1. Deploy the sample hamster application to create a deployment and a corresponding Vertical Pod Autoscaler by
entering:

kubectl apply -f examples/hamster.yaml

The output from the above command confirms the deployment and creation:

verticalpodautoscaler.autoscaling.k8s.io/hamster-vpa created
deployment.apps/hamster created

Deploying the hamster application creates a deployment with two pods and a Vertical Pod Autoscaler pointing at
the deployment.
2. Verify that the hamster pods have been created successfully by entering:

kubectl get pods -l app=hamster

The output from the above command confirms the creation:

NAME                       READY   STATUS    RESTARTS   AGE
hamster-7cbfd64f57-mqqnk   1/1     Running   0          54s
hamster-7cbfd64f57-rq6wv   1/1     Running   0          55s

Note that you will probably see different names for the hamster pods.
3. View the CPU and memory reservations using the kubectl describe pod command and one of the hamster
pod names returned in the previous step. For example:

kubectl describe pod hamster-7cbfd64f57-rq6wv

Note that the above command is an example only. You must use one of the hamster pod names that was returned
when you ran the kubectl get pods -l app=hamster command in the previous step.
In the requests section of the output, you can see the pod's current CPU and memory reservations. For example:

Requests:
cpu: 100m
memory: 50Mi

The Vertical Pod Autoscaler (specifically, the Recommender) analyzes the pods and observes their behavior to
determine whether these CPU and memory reservations are appropriate. Note that you might see different CPU
and memory reservations.
The reservations are not sufficient because the sample hamster application is deliberately under-resourced. Each
pod runs a single container that:
• requests 100 millicores, but tries to utilize more than 500 millicores
• reserves much less memory than it needs to run

Step 4: View the Scaling Operation


Having analyzed the original pods in the sample hamster application and determined that the CPU and memory
reservations are inadequate, the Vertical Pod Autoscaler (specifically the Updater) relaunches the pods with different
values as proposed by the Recommender. Note that the Vertical Pod Autoscaler does not modify the template in the
deployment, but updates the actual requests of the pods.

1. Monitor the pods in the sample hamster application, and wait for the Updater to start a new hamster pod with a
new name, by entering:

kubectl get --watch pods -l app=hamster


2. When you see that a new hamster pod has started, view its CPU and memory reservations using the kubectl
describe pod command and the pod's name. For example:

kubectl describe pod hamster-7cbfd64f57-wmg4

In the requests section of the output, you can see the new pod's CPU and memory reservations:

Requests:
cpu: 587m
memory: 262144k

In the above example, notice that the CPU reservation has increased to 587 millicores and the memory reservation
has increased to 262,144 Kilobytes. The original pod was under-resourced and the Vertical Pod Autoscaler has
corrected the original reservations with more appropriate values. Note that you might see different CPU and
memory reservations.

Step 5: View the Recommendation


View the recommendations made by the Vertical Pod Autoscaler (specifically, by the Recommender) by entering:

kubectl describe vpa/hamster-vpa

The output from the above command shows the recommendations:

Name: hamster-vpa
Namespace: default
Labels: <none>
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"autoscaling.k8s.io/
v1beta2","kind":"VerticalPodAutoscaler","metadata":{"annotations":
{},"name":"hamster-vpa","namespace":"d...
API Version: autoscaling.k8s.io/v1
Kind: VerticalPodAutoscaler
Metadata:
Creation Timestamp: 2020-09-22T18:08:09Z
Generation: 27
Resource Version: 19466955
Self Link: /apis/autoscaling.k8s.io/v1/namespaces/default/
verticalpodautoscalers/hamster-vpa
UID: 689cee90-6fed-404d-adf9-b6fa8c1da660
Spec:
Resource Policy:
Container Policies:
Container Name: *
Controlled Resources:
cpu
memory
Max Allowed:
Cpu: 1
Memory: 500Mi
Min Allowed:
Cpu: 100m
Memory: 50Mi
Target Ref:
API Version: apps/v1
Kind: Deployment
Name: hamster
Update Policy:
Update Mode: Auto
Status:
Conditions:
Last Transition Time: 2020-09-22T18:10:10Z
Status: True
Type: RecommendationProvided
Recommendation:
Container Recommendations:
Container Name: hamster
Lower Bound:
Cpu: 519m
Memory: 262144k
Target:
Cpu: 587m
Memory: 262144k
Uncapped Target:
Cpu: 587m
Memory: 262144k
Upper Bound:
Cpu: 1
Memory: 500Mi
Events: <none>

Note that you might see different recommendations.

Step 6: Clean Up
1. Remove the sample application by entering:

kubectl delete -f examples/hamster.yaml


2. In the vertical-pod-autoscaler directory, delete the Vertical Pod Autoscaler deployment by entering:

./hack/vpa-down.sh

About Access Control and Container Engine for Kubernetes


To perform operations on a Kubernetes cluster, you must have appropriate permissions to access the cluster.
For most operations on Kubernetes clusters created and managed by Container Engine for Kubernetes, Oracle Cloud
Infrastructure Identity and Access Management (IAM) provides access control. A user's permissions to access
clusters comes from the groups to which they belong. The permissions for a group are defined by policies. Policies
define what actions members of a group can perform, and in which compartments. Users can then access clusters and
perform operations based on the policies set for the groups they are members of.
IAM provides control over:
• whether a user can create or delete clusters
• whether a user can add, remove, or modify node pools
• which Kubernetes object create/delete/view operations a user can perform on all clusters within a compartment or
tenancy
See Policy Configuration for Cluster Creation and Deployment on page 865.
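
For example, a policy statement along the following lines (a sketch; the group name and compartment name are
placeholders, and cluster-family is the aggregate resource-type that covers clusters and node pools) lets a group
create, update, and delete clusters and node pools in a particular compartment:

Allow group acme-k8s-admins to manage cluster-family in compartment acme-dev-compartment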
In addition to IAM, the Kubernetes RBAC Authorizer can enforce additional fine-grained access control for users
on specific clusters via Kubernetes RBAC roles and clusterroles. A Kubernetes RBAC role is a collection of
permissions. For example, a role might include read permission on pods and list permission for pods. A Kubernetes
RBAC clusterrole is just like a role, but can be used anywhere in the cluster. A Kubernetes RBAC rolebinding maps a
role to a user or set of users, granting that role's permissions to those users for resources in that namespace. Similarly,
a Kubernetes RBAC clusterrolebinding maps a clusterrole to a user or set of users, granting that clusterrole's
permissions to those users across the entire cluster.
IAM and the Kubernetes RBAC Authorizer work together to enable users who have been successfully authorized by
at least one of them to complete the requested Kubernetes operation.
When a user attempts to perform any operation on a cluster (except for create role and create clusterrole operations),
IAM first determines whether the group to which the user belongs has the appropriate and sufficient permissions. If
so, the operation succeeds. If the attempted operation also requires additional permissions granted via a Kubernetes
RBAC role or clusterrole, the Kubernetes RBAC Authorizer then determines whether the user has been granted the
appropriate Kubernetes role or clusterrole.
Typically, you’ll want to define your own Kubernetes RBAC roles and clusterroles when deploying a Kubernetes
cluster to provide additional fine-grained control. When you attempt to perform a create role or create clusterrole
operation, the Kubernetes RBAC Authorizer first determines whether you have sufficient Kubernetes privileges. To
create a role or clusterrole, you must have been assigned an existing Kubernetes RBAC role (or clusterrole) that has at
least the same or higher privileges as the new role (or clusterrole) you’re attempting to create.
By default, users are not assigned any Kubernetes RBAC roles (or clusterroles). So before attempting to
create a new role (or clusterrole), you must be assigned an appropriately privileged role (or clusterrole). A number of
such roles and clusterroles are always created by default, including the cluster-admin clusterrole (for a full list, see
Default Roles and Role Bindings in the Kubernetes documentation). The cluster-admin clusterrole essentially confers
super-user privileges. A user granted the cluster-admin clusterrole can perform any operation across all namespaces in
a given cluster.
Note that Oracle Cloud Infrastructure tenancy administrators already have sufficient privileges, and do not require the
cluster-admin clusterrole.

Example: Granting the Kubernetes RBAC cluster-admin clusterrole


Note:

The following instructions assume:


• You have the required access to create Kubernetes RBAC roles and
clusterroles, either because you're in the tenancy's Administrators group,
or because you have the Kubernetes RBAC cluster-admin clusterrole.
• The user to which you want to grant the RBAC cluster-admin clusterrole
is not an OCI tenancy administrator. If they are an OCI tenancy
administrator, they do not require the Kubernetes RBAC cluster-admin
clusterrole.
Follow these steps to grant a user who is not a tenancy administrator the Kubernetes RBAC cluster-admin clusterrole
on a cluster deployed on Oracle Cloud Infrastructure:
1. If you haven't already done so, follow the steps to set up the cluster's kubeconfig configuration file and (if
necessary) set the KUBECONFIG environment variable to point to the file. Note that you must set up your own
kubeconfig file. You cannot access a cluster using a kubeconfig file that a different user set up. See Setting Up
Cluster Access on page 875.
2. In a terminal window, grant the Kubernetes RBAC cluster-admin clusterrole to the user by entering:

kubectl create clusterrolebinding <my-cluster-admin-binding> --clusterrole=cluster-admin --user=<user_OCID>

where:
• <my-cluster-admin-binding> is a string of your choice to be used as the name for the binding
between the user and the Kubernetes RBAC cluster-admin clusterrole. For example, jdoe_clst_adm
• <user_OCID> is the user's OCID (obtained from the Console ). For example,
ocid1.user.oc1..aaaaa...zutq (abbreviated for readability).
For example:

kubectl create clusterrolebinding jdoe_clst_adm --clusterrole=cluster-admin --user=ocid1.user.oc1..aaaaa...zutq
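
If you prefer to manage the binding declaratively, the following manifest is a rough YAML equivalent of the command
above (a sketch; it reuses the same example binding name and abbreviated user OCID):

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: jdoe_clst_adm
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
# The user's OCID is used as the Kubernetes user name.
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: ocid1.user.oc1..aaaaa...zutq

You can apply such a manifest with kubectl apply -f <filename>.yaml instead of running the kubectl create
clusterrolebinding command.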

Example: Giving a developer user the ability to read pods in a new cluster
Note:

The following instructions assume you're in the tenancy's Administrators


group, and therefore have:
• the required permissions to create clusters, and to manage users and
groups
• the required access to create Kubernetes RBAC roles and clusterroles
Follow these steps to give a developer the necessary Oracle Cloud Infrastructure and Kubernetes RBAC permissions
to use kubectl to view pods running on a cluster deployed on Oracle Cloud Infrastructure:
1. Create a new Oracle Cloud Infrastructure user for the developer to use (for example, called [email protected]),
and make a note of the new user's OCID (for example, ocid1.user.oc1..aaaaa...tx5a, abbreviated for
readability). See To create a user on page 2436.
2. Create a new Oracle Cloud Infrastructure group and add the new user to the group (for example, called acme-dev-
pod-vwr). See To create a group on page 2439.
3. Create a new Oracle Cloud Infrastructure policy that grants the new group the CLUSTER_USE permission on
clusters, with a policy statement like:

Allow group acme-dev-pod-vwr to use clusters in <location>

In the above policy statement, replace <location> with either tenancy (if you are creating the policy in the
tenancy's root compartment) or compartment <compartment-name> (if you are creating the policy in an
individual compartment).
See To create a policy on page 2473.
4. Create a new cluster in the Console. See Creating a Kubernetes Cluster on page 869.
5. Follow the steps to set up the cluster's kubeconfig configuration file and (if necessary) set the KUBECONFIG
environment variable to point to the file. Note that you must set up your own kubeconfig file. You cannot access a
cluster using a kubeconfig file that a different user set up. See Setting Up Cluster Access on page 875.
6. In a text editor, create a file (for example, called role-pod-reader.yaml) with the following content. This file
defines a Kubernetes RBAC role that enables users to read pod details.

kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: default
  name: pod-reader
rules:
- apiGroups: [""] # "" indicates the core API group
  resources: ["pods"]
  verbs: ["get", "watch", "list"]


7. In a terminal window, create the new role in the cluster using kubectl. For example, if you gave the yaml file that
defines the new role the name role-pod-reader.yaml, enter the following:

kubectl create -f role-pod-reader.yaml


8. In a terminal window, bind the Kubernetes RBAC role you just created to the Oracle Cloud Infrastructure user
account you created earlier by entering the following to create a new rolebinding (in this case, called pod-reader-
binding):

kubectl create rolebinding pod-reader-binding --role=pod-reader --user=ocid1.user.oc1..aaaaa...tx5a
9. Give the developer the credentials of the new Oracle Cloud Infrastructure user you created earlier, and tell the
developer they can now see details of pods running on the cluster deployed on Oracle Cloud Infrastructure by:
• Signing in to the Console using the new user's credentials.
• Following the instructions in Setting Up Cluster Access on page 875 to set up their own copy of the
cluster's kubeconfig file. If the file does not have the expected default name and location of $HOME/.kube/
config, the developer will also have to set the KUBECONFIG environment variable to point to the file. Note
that the developer must set up their own kubeconfig file. They cannot access a cluster using a kubeconfig file
that you (or a different user) set up.
• Using kubectl to see details of the pods by entering:

kubectl get pods

Supported Images (Including Custom Images) and Shapes for Worker Nodes
When creating a cluster using Container Engine for Kubernetes, you can customize the worker nodes in the cluster by
specifying:
• The operating system image to use for worker nodes. The image is a template of a virtual hard drive that
determines the operating system and other software for the worker node.
• The shape to use for worker nodes. The shape is the number of CPUs and the amount of memory to allocate to
each newly created compute instance to be used as a worker node.
This topic includes information about the images and shapes provided by Oracle Cloud Infrastructure that are
supported by Container Engine for Kubernetes for use in node pools. Note that some of the shapes might not be
available in your particular tenancy.
To see a list of the supported images and the shapes available in your tenancy, enter:

oci ce node-pool-options get --node-pool-option-id all

Supported Images
Container Engine for Kubernetes supports the provisioning of worker nodes using some, but not all, of the latest
Oracle Linux images provided by Oracle Cloud Infrastructure.
To see the images supported by Container Engine for Kubernetes:
• When using the Console to create a cluster in the 'Custom Create' workflow, view the list of values in the Image
drop-down menu to see the list of supported images.
• When using the CLI, view the supported images (in the data: sources: section of the response) by entering:

oci ce node-pool-options get --node-pool-option-id all

Custom Images
When specifying the image that Container Engine for Kubernetes uses to provision worker nodes in a node pool,
you can specify your own custom image rather than one of the explicitly supported Oracle Linux images returned
by the oci ce node-pool-options get --node-pool-option-id all command. Worker nodes
provisioned from a custom image include the customizations, configuration, and software that were present when the
image was created. Note that Container Engine for Kubernetes only supports custom images that are based on one of
the Oracle Linux images returned by the oci ce node-pool-options get command.
To provision worker nodes from a custom image, you must use the CLI or API and specify the custom image’s OCID
when creating the node pool. For example, by running the oci ce node-pool create command and using the
--node-image-id parameter to specify a custom image's OCID, as follows:

oci ce node-pool create \
  --cluster-id ocid1.cluster.oc1.iad.aaaaaaaaaf______jrd \
  --name my-custom-linux-image \
  --node-image-id ocid1.image.oc1.iad.aaaaaaaa6______nha \
  --compartment-id ocid1.compartment.oc1..aaaaaaaay______t6q \
  --kubernetes-version v1.15.7 \
  --node-shape VM.Standard2.1 \
  --placement-configs "[ { \"availabilityDomain\": \"nFuS:US-ASHBURN-AD-1\", \"subnetId\": \"ocid1.subnet.oc1.iad.aaaaaaaa3______a6q\" } ]" \
  --size 1 \
  --region=us-ashburn-1

Note the following additional considerations when using custom images:


• Container Engine for Kubernetes installs Kubernetes on top of a custom image, and Kubernetes or the installation
software might change certain kernel configurations.
• Custom images must have access to a yum repository (public or internal).
• Custom images must not use a customized cloud-init. You can perform post-provisioning customization using
SSH or Daemonset.
• For the best support, ensure you create a custom image from the most up-to-date base image.
For more information about custom images and Oracle Cloud Infrastructure, see Managing Custom Images on page
670.

Supported Shapes
Container Engine for Kubernetes supports the provisioning of worker nodes using many, but not all, of the shapes
provided by Oracle Cloud Infrastructure. More specifically:
• Supported: Flexible shapes; Bare Metal shapes, including standard shapes; HPC shapes, except in
RDMA networks; VM shapes, including standard shapes; Dense I/O shapes.
• Not Supported: Dedicated VM host shapes; GPU shapes on VMs and Bare Metal instances; Micro VM shapes;
HPC shapes on Bare Metal instances in RDMA networks.
Note that you might be unable to select some shapes in your particular tenancy due to service limits and compartment
quotas, even though those shapes are supported by Container Engine for Kubernetes.
To see the shapes that are supported by Container Engine for Kubernetes and available in your tenancy:
• When using the Console to create a cluster in the 'Custom Create' workflow, view the list of values in the Shape
drop-down menu to see the list of supported shapes.
• When using the CLI, view the supported shapes (in the data: shapes: section of the response) by entering:

oci ce node-pool-options get --node-pool-option-id all

You might be able to use the Compute service's Console pages (or the Compute service's CLI or API) to subsequently
change the shape of a worker node after it has been created. However, bear in mind that Container Engine for
Kubernetes only supports those shapes shown in the Shape drop-down menu or returned by the oci ce node-
pool-options get --node-pool-option-id all command.
For more information about all the shapes provided by Oracle Cloud Infrastructure, see Compute Shapes on page 659.

Supported Admission Controllers


The Kubernetes version you select when you create a cluster using Container Engine for Kubernetes determines the
default set of admission controllers that are turned on in the created cluster. The set follows the recommendation
given in the Kubernetes documentation for that version. This topic shows the supported admission controllers, the
Kubernetes versions in which they are supported, and the order in which they run in the Kubernetes API server.

Admission Controllers (sorted alphabetically)


The table lists, in alphabetical order, the admission controllers that are turned on in the Kubernetes clusters you create
using Container Engine for Kubernetes. For each admission controller, the table shows the Kubernetes version in
which it is supported.

Admission Controllers          Supported in 1.16?   Supported in 1.17?   Supported in 1.18?   Supported in 1.19?
(in alphabetical order)

DefaultIngressClass            No                   No                   Yes                  Yes
DefaultStorageClass            Yes                  Yes                  Yes                  Yes
DefaultTolerationSeconds       Yes                  Yes                  Yes                  Yes
ExtendedResourceToleration     Yes                  Yes                  Yes                  Yes
LimitRanger                    Yes                  Yes                  Yes                  Yes
MutatingAdmissionWebhook       Yes                  Yes                  Yes                  Yes
NamespaceLifecycle             Yes                  Yes                  Yes                  Yes
NodeRestriction                Yes                  Yes                  Yes                  Yes
PersistentVolumeClaimResize    No                   No                   No                   No
PodSecurityPolicy              Yes                  Yes                  Yes                  Yes
(optional, see Using Pod Security Polices with Container Engine for Kubernetes)
Priority                       Yes                  Yes                  Yes                  Yes
ResourceQuota                  No                   No                   No                   No
RuntimeClass                   Yes                  Yes                  Yes                  Yes
ServiceAccount                 Yes                  Yes                  Yes                  Yes
StorageObjectInUseProtection   Yes                  Yes                  Yes                  Yes
TaintNodesByCondition          Yes                  Yes                  Yes                  Yes
ValidatingAdmissionWebhook     No                   No                   No                   No

Supported Admission Controllers (sorted by run order)


The table lists the admission controllers that are turned on in the Kubernetes clusters you create using Container
Engine for Kubernetes. The table shows the order in which supported admission controllers run in the Kubernetes
API server. Note that the run order is different in different Kubernetes versions.

Run order in Kubernetes       Run order in Kubernetes       Run order in Kubernetes       Run order in Kubernetes
1.16 clusters:                1.17 clusters:                1.18 clusters:                1.19 clusters:

NamespaceLifecycle            NamespaceLifecycle            NamespaceLifecycle            NamespaceLifecycle
LimitRanger                   LimitRanger                   LimitRanger                   LimitRanger
ServiceAccount                ServiceAccount                ServiceAccount                ServiceAccount
NodeRestriction               NodeRestriction               NodeRestriction               NodeRestriction
TaintNodesByCondition         TaintNodesByCondition         TaintNodesByCondition         TaintNodesByCondition
PodSecurityPolicy (optional,  PodSecurityPolicy (optional,  PodSecurityPolicy (optional,  PodSecurityPolicy (optional,
see Using Pod Security        see Using Pod Security        see Using Pod Security        see Using Pod Security
Polices with Container        Polices with Container        Polices with Container        Polices with Container
Engine for Kubernetes on      Engine for Kubernetes on      Engine for Kubernetes on      Engine for Kubernetes on
page 904)                     page 904)                     page 904)                     page 904)
Priority                      Priority                      Priority                      Priority
DefaultTolerationSeconds      DefaultTolerationSeconds      DefaultTolerationSeconds      DefaultTolerationSeconds
ExtendedResourceToleration    ExtendedResourceToleration    ExtendedResourceToleration    ExtendedResourceToleration
DefaultStorageClass           DefaultStorageClass           DefaultStorageClass           DefaultStorageClass
StorageObjectInUseProtection  StorageObjectInUseProtection  StorageObjectInUseProtection  StorageObjectInUseProtection
MutatingAdmissionWebhook      MutatingAdmissionWebhook      RuntimeClass                  RuntimeClass
RuntimeClass                  RuntimeClass                  DefaultIngressClass           DefaultIngressClass
                                                            MutatingAdmissionWebhook      MutatingAdmissionWebhook

Kubernetes Versions and Container Engine for Kubernetes


When you create a new Kubernetes cluster using Container Engine for Kubernetes, you specify:
• The version of Kubernetes to run on the control plane nodes in the cluster.
• The version of Kubernetes to run on the worker nodes in each node pool. Different worker nodes in the same
node pool can run different versions of Kubernetes. Different node pools in a cluster can run different versions of
Kubernetes.
The version of Kubernetes that you specify for the worker nodes in a node pool must be either the same Kubernetes
version as that running on the control plane nodes, or an earlier Kubernetes version that is still compatible. In other
words:
• The control plane nodes in a new cluster must run either the same Kubernetes version as the worker nodes, or a
version that is no more than two minor versions ahead of the worker node version.
• The worker nodes in a node pool must not run a more recent version of Kubernetes than the associated control
plane nodes.

New Versions of Kubernetes


New Kubernetes versions are released periodically that contain new features and bug fixes.

Kubernetes version numbers have the format x.y.z where x is a major release, y is a minor release, and z is a patch
release. For example, 1.19.7.
Kubernetes itself is supported for three minor versions at a time (the current release version and two previous
versions).
As described in the Kubernetes documentation, a certain amount of version variation is permissible between control
plane nodes and worker nodes in a cluster:
• The Kubernetes version on worker nodes can lag behind the version on the control plane nodes by up to two
versions, but no more. If the version on the worker nodes is more than two versions behind the version on the
control plane nodes, the Kubernetes versions on the worker nodes and the control plane nodes are incompatible.
• The Kubernetes version on worker nodes must never be more recent than the version on the control plane nodes.
For the Kubernetes versions currently and previously supported by Container Engine for Kubernetes, see Supported
Versions of Kubernetes on page 926.

Supported Versions of Kubernetes


When Container Engine for Kubernetes support for a new version of Kubernetes is announced, an older Kubernetes
version ceases to be supported.
This topic lists:
• Kubernetes Versions Supported by Container Engine for Kubernetes on page 926
• Kubernetes Versions Previously Supported by Container Engine for Kubernetes on page 928

Kubernetes Versions Supported by Container Engine for Kubernetes


Container Engine for Kubernetes supports three versions of Kubernetes for new clusters. For a minimum of 30 days
after the release of a new Kubernetes version, Container Engine for Kubernetes continues to support the fourth, oldest
available version.
Container Engine for Kubernetes supports the following versions of Kubernetes for new clusters:

Kubernetes Version   Supported by Container           Notes
                     Engine for Kubernetes?

1.19.7               Yes                              Support introduced: 17 March 2021
1.18.10              Yes                              Support introduced: 1 December, 2020
1.17.13              Yes                              Support for 1.17.x versions (initially 1.17.9) introduced: 3 November, 2020
1.16.15              Yes                              Support for 1.16.x versions (initially 1.16.8) introduced: 22 June, 2020.
                                                      See Notes about Container Engine for Kubernetes Support for Kubernetes
                                                      Version 1.16 on page 927

Notes about Container Engine for Kubernetes Support for Kubernetes Version 1.19
Kubernetes version 1.19 is built with golang version 1.15. Golang no longer supports x509 certificates that contain
only CommonName. Before upgrading to Kubernetes version 1.19, Oracle recommends you check whether any
clusters have admission webhooks that use an x509 certificate containing only CommonName. If there is such a
cluster, update the admission webhook to use a new x509 certificate that contains a Subject Alternative Name (SAN).

If you don't update the admission webhook, kube-apiserver cannot call it. As a result, any deployment dependent on
the admission webhook will not be deployed in the cluster.

Notes about Container Engine for Kubernetes Support for Kubernetes Version 1.16
Note that Kubernetes version 1.16 deprecates:
• A number of versions of the following Kubernetes APIs, in favor of more stable versions (as described in this
kubernetes.io blog post):
• NetworkPolicy
• PodSecurityPolicy
• DaemonSet
• Deployment
• StatefulSet
• ReplicaSet
If a deprecated API version is used, workloads running on Kubernetes version 1.16 clusters are subject to
disruption.
• Any labels in the k8s.io and kubernetes.io namespaces, except for the following:
• kubernetes.io/hostname
• kubernetes.io/instance-type
• kubernetes.io/os
• kubernetes.io/arch
• beta.kubernetes.io/instance-type
• beta.kubernetes.io/os
• beta.kubernetes.io/arch
• failure-domain.beta.kubernetes.io/zone
• failure-domain.beta.kubernetes.io/region
• failure-domain.kubernetes.io/zone
• failure-domain.kubernetes.io/region
• [*.]kubelet.kubernetes.io/*
• [*.]node.kubernetes.io/*
If a disallowed label is used, errors occur when creating or updating node pools in Kubernetes version 1.16
clusters.
Before upgrading clusters to Kubernetes version 1.16, Oracle strongly recommends you prepare as follows:
• Migrate to the stable API versions as soon as possible. Container Engine for Kubernetes already supports
Kubernetes versions that support the stable API versions, so you can do this immediately. Depending on your use
of the Kubernetes APIs, your migration tasks might include:
• changing manifest files to reference the stable API versions
• updating custom integrations and controllers to call the stable API versions
• updating third party tools (ingress controllers, continuous delivery systems) to call stable API versions
• verifying your version of kubectl adheres to the Kubernetes version skew support policy described in the
Kubernetes documentation
• making sure any references to documented Kubernetes examples are using stable API versions

• Update the Kubernetes labels in the k8s.io and kubernetes.io namespaces to just the following:
• kubernetes.io/hostname
• kubernetes.io/instance-type
• kubernetes.io/os
• kubernetes.io/arch
• beta.kubernetes.io/instance-type
• beta.kubernetes.io/os
• beta.kubernetes.io/arch
• failure-domain.beta.kubernetes.io/zone
• failure-domain.beta.kubernetes.io/region
• failure-domain.kubernetes.io/zone
• failure-domain.kubernetes.io/region
• [*.]kubelet.kubernetes.io/*
• [*.]node.kubernetes.io/*

Kubernetes Versions Previously Supported by Container Engine for Kubernetes


Container Engine for Kubernetes previously supported the following versions of Kubernetes:

Kubernetes Version                Supported by Container           Support Ended
                                  Engine for Kubernetes?

1.15.12                           No                               2 February, 2021
1.15.7                            No                               2 February, 2021
1.14.8                            No                               15 December, 2020
1.13.x                            No                               21 March, 2020
1.12.7                            No                               29 January, 2020
1.12.6                            No                               15 April, 2019
1.11.9                            No                               9 September, 2019
1.11.8                            No                               15 April, 2019
1.11.x versions prior to 1.11.8   No                               13 March, 2019
1.10.x                            No                               12 April, 2019
1.9.x                             No                               11 December, 2019
1.8.x                             No                               7 September, 2018

Upgrading Clusters to Newer Kubernetes Versions


After a new version of Kubernetes has been released and when Container Engine for Kubernetes supports the new
version, you can upgrade the Kubernetes version running on control plane nodes and worker nodes in a cluster.
The control plane nodes and worker nodes that comprise the cluster can run different versions of Kubernetes,
provided you follow the Kubernetes version skew support policy described in the Kubernetes documentation.
You upgrade control plane nodes and worker nodes differently:
• You upgrade control plane nodes by upgrading the cluster and specifying a more recent Kubernetes version for
the cluster. Control plane nodes running older versions of Kubernetes are upgraded. Because Container Engine for
Kubernetes distributes the Kubernetes Control Plane on multiple Oracle-managed control plane nodes to ensure
high availability (distributed across different availability domains in a region where supported), you're able to
upgrade the Kubernetes version running on control plane nodes with zero downtime.
Having upgraded control plane nodes to a new version of Kubernetes, you can subsequently create new node
pools with worker nodes running the newer version. Alternatively, you can continue to create new node pools
with worker nodes running older versions of Kubernetes (providing those older versions are compatible with the
Kubernetes version running on the control plane nodes).
For more information about control plane node upgrade, see Upgrading the Kubernetes Version on Control Plane
Nodes in a Cluster on page 930.
• You upgrade worker nodes in one of two ways:
• By performing an 'in-place' upgrade of a node pool in the cluster, specifying a more recent Kubernetes version
for the existing node pool.
• By performing an 'out-of-place' upgrade of a node pool in the cluster, replacing the original node pool with a
new node pool for which you've specified a more recent Kubernetes version.
For more information about worker node upgrade, see Upgrading the Kubernetes Version on Worker Nodes in a
Cluster on page 930.
To find out more about the Kubernetes versions currently and previously supported by Container Engine for
Kubernetes, see Supported Versions of Kubernetes on page 926.

Notes about Upgrading Clusters


Note the following when upgrading clusters:
• Container Engine for Kubernetes only upgrades the Kubernetes version running on control plane nodes when you
explicitly initiate the upgrade operation.
• After upgrading control plane nodes to a newer version of Kubernetes, you cannot downgrade the control plane
nodes to an earlier Kubernetes version.
• Before you upgrade the version of Kubernetes running on the control plane nodes, it is your responsibility to test
that applications deployed on the cluster are compatible with the new Kubernetes version. For example, before
upgrading the existing cluster, you might create a new separate cluster with the new Kubernetes version to test
your applications.
• The versions of Kubernetes running on the control plane nodes and the worker nodes must be compatible (that
is, the Kubernetes version on the control plane nodes must be no more than two minor versions ahead of the
Kubernetes version on the worker nodes). See the Kubernetes version skew support policy described in the
Kubernetes documentation.
• If the version of Kubernetes currently running on the control plane nodes is more than one version behind the
most recent supported version, you are given a choice of versions to upgrade to. If you want to upgrade to a
version of Kubernetes that is more than one version ahead of the version currently running on the control plane
nodes, you must upgrade to each intermediate version in sequence without skipping versions (as described in the
Kubernetes documentation).
• To successfully upgrade control plane nodes in a cluster, the Kubernetes Dashboard service must be of type
ClusterIP. If the Kubernetes Dashboard service is not of type ClusterIP (for example, if the service is of type
NodePort), the upgrade will fail. In this case, change the type of the Kubernetes Dashboard service back to
ClusterIP (for example, by entering kubectl -n kube-system edit service kubernetes-
dashboard and changing the type).
• Prior to Kubernetes version 1.14, Container Engine for Kubernetes created clusters with kube-dns as the DNS
server. However, from Kubernetes version 1.14 onwards, Container Engine for Kubernetes creates clusters with
CoreDNS as the DNS server. When you upgrade a cluster created by Container Engine for Kubernetes from
an earlier version to Kubernetes 1.14 or later, the cluster's kube-dns server is automatically replaced with the
CoreDNS server. Note that if you customized kube-dns behavior using the original kube-dns ConfigMap, those
customizations are not carried forward to the CoreDNS ConfigMap. You will have to create and apply a new
ConfigMap containing the customizations to override settings in the CoreDNS Corefile. For more information
about upgrading to CoreDNS, see Configuring DNS Servers for Kubernetes Clusters on page 932.

Upgrading the Kubernetes Version on Control Plane Nodes in a Cluster


When Container Engine for Kubernetes supports a newer version of Kubernetes than the version currently running on
the control plane nodes in a cluster, you can upgrade the Kubernetes version running on the control plane nodes.
Important:

After you’ve upgraded control plane nodes to a newer Kubernetes version,


you can’t downgrade the control plane nodes to an earlier Kubernetes
version. It’s therefore important that before you upgrade the Kubernetes
version running on the control plane nodes, you test that applications
deployed on the cluster are compatible with the new Kubernetes version.
Using the Console
To upgrade the version of Kubernetes running on the control plane nodes:
1. In the Console, open the navigation menu. Under Solutions and Platform, go to Developer Services and click
Kubernetes Clusters.
2. Choose a Compartment you have permission to work in.
3. On the Cluster List page, click the name of the cluster where you want to upgrade the Kubernetes version running
on the control plane nodes.
If a newer Kubernetes version is available than the one running on the control plane nodes in the cluster, the
Upgrade Available button is enabled at the top of the Cluster page.
4. Click Upgrade Available to upgrade the control plane nodes to a newer version.
5. In the Upgrade Cluster Master dialog box, select the Kubernetes version to which to upgrade the control plane
nodes, and click Upgrade.
The Kubernetes version running on the control plane nodes is upgraded. From now on, the new Kubernetes version
will appear as an option when you’re defining new node pools for the cluster.
Using the API
For information about using the API and signing requests, see REST APIs on page 4409 and Security Credentials on
page 181. For information about SDKs, see Software Development Kits and Command Line Interface on page 4262.
Use the UpdateCluster operation to upgrade the version of Kubernetes running on the control plane nodes.
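
If you use the CLI rather than calling the API directly, the same operation is exposed through the oci ce cluster
update command; for example (a sketch; the cluster OCID is a placeholder, and the version you specify must be one
that Container Engine for Kubernetes supports and that is newer than the current control plane version):

oci ce cluster update \
  --cluster-id ocid1.cluster.oc1.iad.aaaaaaaaaf______jrd \
  --kubernetes-version v1.19.7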

Upgrading the Kubernetes Version on Worker Nodes in a Cluster


You can upgrade the version of Kubernetes running on the worker nodes in a cluster in two ways:
• Perform an 'in-place' upgrade of a node pool in the cluster, by specifying a more recent Kubernetes version for
new worker nodes starting in the existing node pool. First, you modify the existing node pool's properties to
specify the more recent Kubernetes version. Then, you 'drain' existing worker nodes in the node pool to prevent
new pods starting, and to delete existing pods. Finally, you terminate each of the worker nodes in turn. When new
worker nodes are started in the existing node pool, they run the more recent Kubernetes version you specified. See
Performing an In-Place Worker Node Upgrade by Updating an Existing Node Pool on page 931.
• Perform an 'out-of-place' upgrade of a node pool in the cluster, by replacing the original node pool with a new
node pool. First, you create a new node pool with a more recent Kubernetes version. Then, you 'drain' existing
worker nodes in the original node pool to prevent new pods starting, and to delete existing pods. Finally, you
delete the original node pool. When new worker nodes are started in the new node pool, they run the more recent
Kubernetes version you specified. See Performing an Out-of-Place Worker Node Upgrade by Replacing an
Existing Node Pool with a New Node Pool on page 931.
Note that in both cases:
• The more recent Kubernetes version you specify for the worker nodes in the node pool must be compatible
with the Kubernetes version running on the control plane nodes in the cluster. See Upgrading Clusters to Newer
Kubernetes Versions on page 928).

• You must drain existing worker nodes in the original node pool. If you don't drain the worker nodes, workloads
running on the cluster are subject to disruption.
Performing an In-Place Worker Node Upgrade by Updating an Existing Node Pool
You can upgrade the version of Kubernetes running on worker nodes in a node pool by specifying a more recent
Kubernetes version for the existing node pool. For each worker node, you first drain it to prevent new pods starting
and to delete existing pods. You then terminate the worker node so that a new worker node is started, running the
more recent Kubernetes version you specified. When new worker nodes are started in the existing node pool, they run
the more recent Kubernetes version you specified.
To perform an in-place upgrade of a node pool in a cluster, by specifying a more recent Kubernetes version for the
existing node pool:
1. In the Console, open the navigation menu. Under Solutions and Platform, go to Developer Services and click
Kubernetes Clusters.
2. Choose a Compartment you have permission to work in.
3. On the Cluster List page, click the name of the cluster where you want to change the Kubernetes version running
on worker nodes.
4. On the Cluster page, display the Node Pools tab, and click the name of the node pool where you want to upgrade
the Kubernetes version running on the worker nodes.
5. On the Node Pool page, click Edit and in the Version field, specify the required Kubernetes version for worker
nodes.
The Kubernetes version you specify must be compatible with the version that is running on the control plane
nodes.
6. Click Edit to save the change.
You now have to terminate existing worker nodes so that new worker nodes are started, running the Kubernetes
version you specified.
7. For the first worker node in the node pool:
a. Prevent new pods from starting and delete existing pods by entering:

kubectl drain <node_name>

For more information:


• about using kubectl, see Accessing a Cluster Using Kubectl on page 889
• about the drain command, see drain in the Kubernetes documentation
Recommended: Leverage pod disruption budgets as appropriate for your application to ensure that there's a
sufficient number of replica pods running throughout the drain operation (a minimal example follows these steps).
b. On the Node Pool page, display the Nodes tab and click the worker node's name in the Node Name field.
c. On the Instances page, select Terminate from the More Actions menu.
The worker node is terminated and a new worker node is started in its place, running the Kubernetes version you
specified.
8. Repeat the previous step for each remaining worker node in the node pool, until all worker nodes in the node pool
are running the Kubernetes version you specified.
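
As recommended in the drain step above, a pod disruption budget limits how many replica pods can be unavailable
while worker nodes are drained. A minimal sketch (the name, label selector, and replica count are placeholders for
your own application) might look like this:

apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: my-app-pdb
spec:
  # Keep at least two replicas of the application running during voluntary
  # disruptions such as kubectl drain.
  minAvailable: 2
  selector:
    matchLabels:
      app: my-app

Create the pod disruption budget with kubectl apply before you start draining worker nodes.
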
Performing an Out-of-Place Worker Node Upgrade by Replacing an Existing Node Pool with a New
Node Pool
You can 'upgrade' the version of Kubernetes running on worker nodes in a node pool by replacing the original node
pool with a new node pool that has new worker nodes running the appropriate Kubernetes version. Having drained
existing worker nodes in the original node pool to prevent new pods starting and to delete existing pods, you can then
delete the original node pool. When new worker nodes are started in the new node pool, they run the more recent
Kubernetes version you specified.

To perform an 'out-of-place' upgrade of a node pool in a cluster, by creating a new node pool to 'upgrade' the
Kubernetes version on worker nodes:
1. In the Console, open the navigation menu. Under Solutions and Platform, go to Developer Services and click
Kubernetes Clusters.
2. Choose a Compartment you have permission to work in.
3. On the Cluster List page, click the name of the cluster where you want to change the Kubernetes version running
on worker nodes.
4. On the Cluster page, display the Node Pools tab, and then click Add Node Pool to create a new node pool and
specify the required Kubernetes version for its worker nodes.
The Kubernetes version you specify must be compatible with the version that is running on the control plane
nodes.
5. If there are labels attached to worker nodes in the original node pool and those labels are used by selectors (for
example, to determine the nodes on which to run pods), then use the kubectl label nodes command to attach the
same labels to the new worker nodes in the new node pool (see the example after these steps). See Assigning Pods
to Nodes in the Kubernetes documentation.
6. For the first worker node in the original node pool, prevent new pods from starting and delete existing pods by
entering:

kubectl drain <node_name>

For more information:


• about using kubectl, see Accessing a Cluster Using Kubectl on page 889
• about the drain command, see drain in the Kubernetes documentation
Recommended: Leverage pod disruption budgets as appropriate for your application to ensure that there's a
sufficient number of replica pods running throughout the drain operation.
7. Repeat the previous step for each remaining worker node in the node pool, until all the worker nodes have been
drained from the original node pool.
When you have drained all the worker nodes from the original node pool and pods are running on worker nodes in
the new node pool, you can delete the original node pool.
8. On the Cluster page, display the Node Pools tab, and then select Delete Node Pool from the Actions menu
beside the original node pool.
The original node pool and all its worker nodes are deleted.
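
As mentioned in step 5 above, you can copy labels to the new worker nodes with the kubectl label nodes command;
for example (a sketch; the node name and label are placeholders):

kubectl label nodes 10.0.10.2 environment=production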

Configuring DNS Servers for Kubernetes Clusters

Configuring Built-in DNS Servers (kube-dns, CoreDNS)


Clusters created by Container Engine for Kubernetes include a DNS server as a built-in Kubernetes service that is
launched automatically. The kubelet process on each worker node directs individual containers to the DNS server to
translate DNS names to IP addresses.
Prior to Kubernetes version 1.14, Container Engine for Kubernetes created clusters with kube-dns as the DNS server.
However, from Kubernetes version 1.14 onwards, Container Engine for Kubernetes creates clusters with CoreDNS as
the DNS server. CoreDNS is a general-purpose authoritative DNS server that is modular and pluggable.
Default CoreDNS behavior is controlled by a configuration file referred to as a Corefile. The Corefile is a Kubernetes
ConfigMap, with a Corefile section that defines CoreDNS behavior. You cannot modify the Corefile directly. If
you need to customize CoreDNS behavior, you create and apply your own ConfigMap to override settings in the
Corefile (as described in this topic). Note that if you do customize CoreDNS default behavior, the customizations are
periodically deleted during internal updates to the cluster.
When you upgrade a cluster created by Container Engine for Kubernetes from an earlier version to Kubernetes 1.14
or later, the cluster's kube-dns server is automatically replaced with the CoreDNS server. Note that if you customized
kube-dns behavior using the original kube-dns ConfigMap, those customizations are not carried forward to the
CoreDNS ConfigMap. You will have to create and apply a new ConfigMap containing the customizations to override
settings in the CoreDNS Corefile.
For more information about CoreDNS customization and Kubernetes, see the Kubernetes documentation and the
CoreDNS documentation.
To create a ConfigMap to override the settings in the CoreDNS Corefile:
1. Define a ConfigMap in a yaml file, in the format:

apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns-custom
  namespace: kube-system
data:
  <customization-options>

For example:

apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns-custom
  namespace: kube-system
data:
  example.server: | # All custom server files must have a ".server" file extension.
    # Change example.com to the domain you wish to forward.
    example.com {
      # Change 1.1.1.1 to your customer DNS resolver.
      forward . 1.1.1.1
    }

For more information about the ConfigMap options to use to customize CoreDNS behavior, see the Kubernetes
documentation and the CoreDNS documentation.
2. Create the ConfigMap by entering:

kubectl apply -f <filename>.yaml


3. Verify the customizations have been applied by entering:

kubectl get configmaps --namespace=kube-system coredns-custom -o yaml


4. Force CoreDNS to reload the ConfigMap by entering:

kubectl delete pod --namespace kube-system -l k8s-app=kube-dns

Configuring ExternalDNS to use Oracle Cloud Infrastructure DNS


ExternalDNS is an add-on to Kubernetes that can create DNS records for services in DNS providers external to
Kubernetes. It sets up DNS records in an external DNS provider to make Kubernetes services discoverable via that
DNS provider, and enables you to control DNS records dynamically. See ExternalDNS for more information.
Having deployed ExternalDNS on a cluster, you can expose a service running on the cluster by adding the
external-dns.alpha.kubernetes.io/hostname annotation to the service. ExternalDNS creates a DNS
record for the service in the external DNS provider you've configured for the cluster.
ExternalDNS is not itself a DNS server like CoreDNS, but a way to configure other external DNS providers. Oracle
Cloud Infrastructure DNS is one such external DNS provider. See Overview of the DNS Service on page 1676.

For convenience, instructions are included below to set up ExternalDNS on a cluster and configure it to use Oracle
Cloud Infrastructure DNS. These instructions are a summary based on the Setting up ExternalDNS for Oracle Cloud
Infrastructure (OCI) tutorial, which is available on GitHub.
To set up ExternalDNS on a cluster and configure it to use Oracle Cloud Infrastructure DNS:
1. Create a new DNS zone in Oracle Cloud Infrastructure DNS to contain the DNS records that ExternalDNS will
create for the cluster. See Creating a Zone on page 1679.
2. If you haven't already done so, follow the steps to set up the cluster's kubeconfig configuration file and (if
necessary) set the KUBECONFIG environment variable to point to the file. Note that you must set up your own
kubeconfig file. You cannot access a cluster using a kubeconfig file that a different user set up. See Setting Up
Cluster Access on page 875.
3. Create a Kubernetes secret containing the Oracle Cloud Infrastructure user authentication details for ExternalDNS
to use when connecting to the Oracle Cloud Infrastructure API to insert and update DNS records in the DNS zone
you just created.
a. In a text editor, create a credentials file containing the Oracle Cloud Infrastructure user credentials to use to
access the DNS zone:

auth:
  region: <region-identifier>
  tenancy: <tenancy-ocid>
  user: <user-ocid>
  key: |
    -----BEGIN RSA PRIVATE KEY-----
    <private-key>
    -----END RSA PRIVATE KEY-----
  fingerprint: <fingerprint>
  # Omit if there is not a password for the key
  passphrase: <passphrase>
compartment: <compartment-ocid>

where:
• <region-identifier> identifies the user's region. For example, us-phoenix-1
• <tenancy-ocid> is the OCID of the user's tenancy. For example,
ocid1.tenancy.oc1..aaaaaaaap...keq (abbreviated for readability).
• <user-ocid> is the OCID of the user. For example, ocid1.user.oc1..aaaaa...zutq
(abbreviated for readability).
• <private-key> is an RSA key, starting with -----BEGIN RSA PRIVATE KEY----- and ending
with -----END RSA PRIVATE KEY-----
• passphrase: <passphrase> optionally provides the passphrase for the key, if one exists
• <compartment-ocid> is the OCID of the compartment to which the DNS zone belongs
For example:

auth:
  region: us-phoenix-1
  tenancy: ocid1.tenancy.oc1..aaaaaaaap...keq
  user: ocid1.user.oc1..aaaaa...zutq
  key: |
    -----BEGIN RSA PRIVATE KEY-----
    this-is-not-a-secret_Ef8aiAk7+I0...
    -----END RSA PRIVATE KEY-----
  fingerprint: bg:92:82:9f...
  # Omit if there is not a password for the key
  passphrase: Uy2kSl...
compartment: ocid1.compartment.oc1..aaaaaaaa7______ysq
b. Save the credentials file with a name of your choosing (for example, oci-creds.yaml).
c. Create a Kubernetes secret from the credentials file you just created, by entering:

kubectl create secret generic <secret-name> --from-file=<credential-filename>

For example:

kubectl create secret generic external-dns-config --from-file=oci-creds.yaml
4. Deploy ExternalDNS on the cluster.
a. In a text editor, create a configuration file (for example, called external-dns-deployment.yaml)
to create the ExternalDNS deployment, and specify the name of the Kubernetes secret you just created. For
example:

apiVersion: v1
kind: ServiceAccount
metadata:
name: external-dns
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: external-dns
rules:
- apiGroups: [""]
resources: ["services","endpoints","pods"]
verbs: ["get","watch","list"]
- apiGroups: ["extensions","networking.k8s.io"]
resources: ["ingresses"]
verbs: ["get","watch","list"]
- apiGroups: [""]
resources: ["nodes"]
verbs: ["list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: external-dns-viewer
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: external-dns
subjects:
- kind: ServiceAccount
name: external-dns
namespace: default
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: external-dns
spec:
strategy:
type: Recreate
selector:
matchLabels:
app: external-dns
template:

metadata:
labels:
app: external-dns
spec:
serviceAccountName: external-dns
containers:
- name: external-dns
image: k8s.gcr.io/external-dns/external-dns:v0.7.3
args:
- --source=service
- --source=ingress
- --provider=oci
- --policy=upsert-only # prevent ExternalDNS from deleting any records, omit to enable full synchronization
- --txt-owner-id=my-identifier
volumeMounts:
- name: config
mountPath: /etc/kubernetes/
volumes:
- name: config
secret:
secretName: external-dns-config
b. Save and close the configuration file.
c. Apply the configuration file to deploy ExternalDNS by entering:

kubectl apply -f <filename>

where <filename> is the name of the file you created earlier. For example:

kubectl apply -f external-dns-deployment.yaml

The output from the above command confirms the deployment:

serviceaccount/external-dns created
clusterrole.rbac.authorization.k8s.io/external-dns created
clusterrolebinding.rbac.authorization.k8s.io/external-dns-viewer created
deployment.apps/external-dns created
5. Verify that ExternalDNS has been deployed successfully and can insert records in the DNS zone you created
earlier in Oracle Cloud Infrastructure by creating an nginx deployment and an nginx service:
a. In a text editor, create a configuration file (for example, called nginx-externaldns.yaml) to create an
nginx deployment and an nginx service that includes the external-dns.alpha.kubernetes.io/hostname
annotation. For example:

apiVersion: v1
kind: Service
metadata:
name: nginx
annotations:
external-dns.alpha.kubernetes.io/hostname: example.com
spec:
type: LoadBalancer
ports:
- port: 80
name: http
targetPort: 80
selector:
app: nginx
---
apiVersion: apps/v1

kind: Deployment
metadata:
name: nginx
spec:
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
containers:
- image: nginx
name: nginx
ports:
- containerPort: 80
name: http
b. Apply the configuration file to create the nginx service and deployment by entering:

kubectl apply -f <filename>

where <filename> is the name of the file you created earlier. For example:

kubectl apply -f nginx-externaldns.yaml

The output from the above command confirms the deployment:

service/nginx created
deployment.apps/nginx created
c. Wait a couple of minutes, and then verify that a DNS record was created for the nginx service in the Oracle
Cloud Infrastructure DNS zone (see Managing DNS Service Zones on page 1681).
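
You can also check the ExternalDNS logs to confirm that records are being inserted and updated in the zone (a quick
check; the exact log wording varies between ExternalDNS versions):

kubectl logs deployment/external-dns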

Creating Load Balancers to Distribute Traffic Between Cluster Nodes


When you create a deployment, you can optionally create a load balancer service in the same compartment as the
cluster to distribute traffic between the nodes assigned to the deployment. The key fields in the configuration of a load
balancer service are the type of service being created and the ports that the load balancer will listen to.
Note:

Load balancer services you create appear in the Console. However, do not
use the Console (or the Oracle Cloud Infrastructure CLI or API) to modify
load balancer services. Any modifications you make will either be reverted
by Container Engine for Kubernetes or will conflict with its operation and
possibly result in service interruption.

Creating Load Balancers to Distribute HTTP Traffic


Consider the following configuration file, nginx_lb.yaml. It defines a deployment (kind: Deployment)
for the nginx app, followed by a service definition with a type of LoadBalancer (type: LoadBalancer) that
balances http traffic on port 80 for the nginx app.

apiVersion: apps/v1
kind: Deployment
metadata:
name: my-nginx
labels:
app: nginx

spec:
replicas: 3
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: nginx:1.7.9
ports:
- containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
name: my-nginx-svc
labels:
app: nginx
spec:
type: LoadBalancer
ports:
- port: 80
selector:
app: nginx

The first part of the configuration file defines an Nginx deployment, requesting that it be hosted on 3 pods running the
nginx:1.7.9 image, and that the containers accept traffic on port 80.
The second part of the configuration file defines the Nginx service, which uses type LoadBalancer to balance Nginx
traffic on port 80 amongst the available pods.
To create the deployment and service defined in nginx_lb.yaml while connected to your Kubernetes cluster,
enter the command:

kubectl apply -f nginx_lb.yaml

This command outputs the following upon successful creation of the deployment and the load balancer:

deployment "my-nginx" created


service "my-nginx-svc" created

The load balancer may take a few minutes to go from a pending state to being fully operational. You can view the
current state of your cluster by entering:

kubectl get all

The output from the above command shows the current state:

NAME                          READY     STATUS    RESTARTS   AGE
po/my-nginx-431080787-0m4m8   1/1       Running   0          3m
po/my-nginx-431080787-hqqcr   1/1       Running   0          3m
po/my-nginx-431080787-n8125   1/1       Running   0          3m

NAME               CLUSTER-IP    EXTERNAL-IP   PORT(S)        AGE
svc/kubernetes     203.0.113.1   <NONE>        443/TCP        3d
svc/my-nginx-svc   203.0.113.7   192.0.2.22    80:30269/TCP   3m

NAME              DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deploy/my-nginx   3         3         3            3           3m

NAME                    DESIRED   CURRENT   READY   AGE
rs/my-nginx-431080787   3         3         3       3m

The output shows that the my-nginx deployment is running on 3 pods (the po/my-nginx entries), that the load
balancer is running (svc/my-nginx-svc) and has an external IP (192.0.2.22) that clients can use to connect to the app
that's deployed on the pods.
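
For example, you could confirm that the load balancer is distributing traffic to the app by sending a request to the
external IP address (this assumes, as in the example above, that the nginx pods are serving HTTP on port 80):

curl -I http://192.0.2.22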

Creating Load Balancers with SSL Support to Distribute HTTPS Traffic


You can create a load balancer with SSL termination, allowing https traffic to an app to be distributed among the
nodes in a cluster. This example provides a walkthrough of the configuration and creation of a load balancer with SSL
support.
Consider the following configuration file, nginx-demo-svc-ssl.yaml, which defines an Nginx deployment
and exposes it via a load balancer that serves http on port 80, and https on port 443. This sample creates an Oracle
Cloud Infrastructure load balancer, by defining a service with a type of LoadBalancer (type: LoadBalancer).

apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx-deployment
spec:
replicas: 2
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: nginx
ports:
- containerPort: 80
---
kind: Service
apiVersion: v1
metadata:
name: nginx-service
annotations:
service.beta.kubernetes.io/oci-load-balancer-ssl-ports: "443"
service.beta.kubernetes.io/oci-load-balancer-tls-secret: ssl-certificate-secret
spec:
selector:
app: nginx
type: LoadBalancer
ports:
- name: http
port: 80
targetPort: 80
- name: https
port: 443
targetPort: 80

The Load Balancer's annotations are of particular importance. The ports on which to support https traffic are defined
by the value of service.beta.kubernetes.io/oci-load-balancer-ssl-ports. You can declare multiple SSL ports by
using a comma-separated list for the annotation's value. For example, you could set the annotation's value to
"443, 3000" to support SSL on ports 443 and 3000.
The required TLS secret, ssl-certificate-secret, needs to be created in Kubernetes. This example creates and uses a
self-signed certificate. However, in a production environment, the most common scenario is to use a public certificate
that's been signed by a certificate authority.
The following command creates a self-signed certificate, tls.crt, with its corresponding key, tls.key:

openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout tls.key -out
tls.crt -subj "/CN=nginxsvc/O=nginxsvc"

Now that you created the certificate, you need to store both it and its key as a secret in Kubernetes. The name of the
secret must match the name from the service.beta.kubernetes.io/oci-load-balancer-tls-secret annotation of the load
balancer's definition. Use the following command to create a TLS secret in Kubernetes, whose key and certificate
values are set by --key and --cert, respectively.

kubectl create secret tls ssl-certificate-secret --key tls.key --cert tls.crt

You must create the Kubernetes secret before you can create the service, since the service references the secret in its
definition. Create the service using the following command:

kubectl create -f manifests/demo/nginx-demo-svc-ssl.yaml

Watch the service and wait for a public IP address (EXTERNAL-IP) to be assigned to the Nginx service
(nginx-service) by entering:

kubectl get svc --watch

The output from the above command shows the load balancer IP to use to connect to the service.

NAME            CLUSTER-IP   EXTERNAL-IP    PORT(S)        AGE
nginx-service   192.0.2.1    198.51.100.1   80:30274/TCP   5m

The load balancer is now running, which means the service can now be accessed as follows:
• using http, by entering:

curl http://198.51.100.1
• using https, by entering:

curl --insecure https://198.51.100.1

The "--insecure" flag is used to access the service using https due to the use of self-signed certificates in this
example. Do not use this flag in a production environment where the public certificate was signed by a certificate
authority.
Note: When a cluster is deleted, a load balancer that's dynamically created when a service is created will not be
removed. Before deleting a cluster, delete the service, which in turn will result in the cloud provider removing the
load balancer. The syntax for this command is:

kubectl delete svc SERVICE_NAME

For example, to delete the service from the previous example, enter:

kubectl delete svc nginx-service

Updating the TLS Certificates of Existing Load Balancers


To update the TLS certificate of an existing load balancer:
1. Obtain a new TLS certificate. In a production environment, the most common scenario is to use a public
certificate that's been signed by a certificate authority.
2. Create a new Kubernetes secret. For example, by entering:

kubectl create secret tls new-ssl-certificate-secret --key new-tls.key --cert new-tls.crt
3. Modify the service definition to reference the new Kubernetes secret by changing the
service.beta.kubernetes.io/oci-load-balancer-tls-secret annotation in the service
configuration. For example:

apiVersion: v1
kind: Service
metadata:
name: nginx-service
annotations:
service.beta.kubernetes.io/oci-load-balancer-ssl-ports: "443"
service.beta.kubernetes.io/oci-load-balancer-tls-secret: new-ssl-certificate-secret
spec:
selector:
app: nginx
type: LoadBalancer
ports:
- name: http
port: 80
targetPort: 80
- name: https
port: 443
targetPort: 80
4. Update the service. For example, by entering:

kubectl apply -f new-nginx-demo-svc-ssl.yaml

Creating Internal Load Balancers in Public and Private Subnets


You can create Oracle Cloud Infrastructure load balancers to control access to services running on a cluster:
• When you create a cluster in the 'Custom Create' workflow you select an existing VCN that contains the network
resources to be used by the new cluster. If you want to use load balancers to control traffic into the VCN, you
select existing public or private subnets in that VCN to host the load balancers.
• When you create a cluster in the 'Quick Create' workflow, the VCN that's automatically created contains a public
regional subnet to host a load balancer. If you want to host load balancers in private subnets, you can add private
subnets to the VCN later.
Alternatively, you can create an internal load balancer service (often referred to simply as an 'internal load balancer')
in a cluster to enable other programs running in the same VCN as the cluster to access services in the cluster. An
internal load balancer is an Oracle Cloud Infrastructure private load balancer. A private load balancer has a private
IP address assigned by the Load Balancing service, which serves as the entry point for incoming traffic. For more
information about Oracle Cloud Infrastructure private load balancers, see Private Load Balancer on page 2499.
You can host internal load balancers in public subnets and private subnets.

To create an internal load balancer hosted on a public subnet, add the following annotation in the metadata section of
the manifest file:

service.beta.kubernetes.io/oci-load-balancer-internal: "true"

To create an internal load balancer hosted on a private subnet, add both following annotations in the metadata section
of the manifest file:

service.beta.kubernetes.io/oci-load-balancer-internal: "true"
service.beta.kubernetes.io/oci-load-balancer-subnet1: "ocid1.subnet.oc1..aaaaaa....vdfw"

where ocid1.subnet.oc1..aaaaaa....vdfw is the OCID of the private subnet.


For example:

apiVersion: v1
kind: Service
metadata:
name: my-nginx-svc
labels:
app: nginx
annotations:
service.beta.kubernetes.io/oci-load-balancer-internal: "true"
service.beta.kubernetes.io/oci-load-balancer-subnet1: "ocid1.subnet.oc1..aaaaaa....vdfw"
spec:
type: LoadBalancer
ports:
- port: 8100
selector:
app: nginx
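
After the internal load balancer service is created, you can confirm that it has been assigned a private IP address from
the specified subnet by checking the EXTERNAL-IP column shown for the service (a quick check):

kubectl get svc my-nginx-svc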

Specifying Alternative Load Balancer Shapes


The shape of an Oracle Cloud Infrastructure load balancer specifies its maximum total bandwidth (that is, ingress
plus egress). By default, load balancers are created with a shape of 100Mbps. Other shapes are available, including
400Mbps and 8000Mbps.
To specify an alternative shape for a load balancer, add the following annotation in the metadata section of the
manifest file:

service.beta.kubernetes.io/oci-load-balancer-shape: <value>

where <value> is the bandwidth of the shape (for example, 100Mbps, 400Mbps, 8000Mbps).
For example:

apiVersion: v1
kind: Service
metadata:
name: my-nginx-svc
labels:
app: nginx
annotations:
service.beta.kubernetes.io/oci-load-balancer-shape: 400Mbps
spec:
type: LoadBalancer

ports:
- port: 80
selector:
app: nginx

Note: Sufficient load balancer quota must be available in the region for the shape you specify. Enter the following
kubectl command to confirm that load balancer creation did not fail due to lack of quota:

kubectl describe service <service-name>

Specifying Flexible Load Balancer Shapes


The shape of an Oracle Cloud Infrastructure load balancer specifies its maximum total bandwidth (that is, ingress plus
egress). As described in Specifying Alternative Load Balancer Shapes on page 942, you can specify different load
balancer shapes.
In addition, you can also specify a flexible shape for an Oracle Cloud Infrastructure load balancer, by defining a
minimum and a maximum bandwidth for the load balancer.
To specify a flexible shape for a load balancer, add the following annotations in the metadata section of the manifest
file:

service.beta.kubernetes.io/oci-load-balancer-shape: "flexible"
service.beta.kubernetes.io/oci-load-balancer-shape-flex-min: <min-value>
service.beta.kubernetes.io/oci-load-balancer-shape-flex-max: <max-value>

where:
• <min-value> is the minimum bandwidth for the load balancer, in Mbps (for example, 10)
• <max-value> is the maximum bandwidth for the load balancer, in Mbps (for example, 100)
Note that you do not include a unit of measurement when specifying bandwidth values for flexible load balancer
shapes (unlike for pre-defined shapes). For example, specify the minimum bandwidth as 10 rather than as 10Mbps.
For example:

apiVersion: v1
kind: Service
metadata:
name: my-nginx-svc
labels:
app: nginx
annotations:
service.beta.kubernetes.io/oci-load-balancer-shape: "flexible"
service.beta.kubernetes.io/oci-load-balancer-shape-flex-min: "10"
service.beta.kubernetes.io/oci-load-balancer-shape-flex-max: "100"
spec:
type: LoadBalancer
ports:
- port: 80
selector:
app: nginx

Specifying Load Balancer Connection Timeout


You can specify the maximum idle time (in seconds) allowed between two successive receive or two successive send
operations between the client and backend servers.

To explicitly specify a maximum idle time, add the following annotation in the metadata section of the manifest file:

service.beta.kubernetes.io/oci-load-balancer-connection-idle-timeout: <value>

where <value> is the number of seconds.


For example:

apiVersion: v1
kind: Service
metadata:
name: my-nginx-svc
labels:
app: nginx
annotations:
service.beta.kubernetes.io/oci-load-balancer-connection-idle-timeout: "100"
spec:
type: LoadBalancer
ports:
- port: 80
selector:
app: nginx

Note that if you don't explicitly specify a maximum idle time, a default value is used. The default value depends on
the type of listener:
• for TCP listeners, the default maximum idle time is 300 seconds
• for HTTP listeners, the default maximum idle time is 60 seconds

Specifying Load Balancer Security List Management Options


You can use the security list management feature in Kubernetes to manage security list rules. This feature is useful if
you are new to Kubernetes, or for basic deployments.
Note:

You might encounter scalability and other issues if you use the Kubernetes
security list management feature in complex deployments, and with tools
like Terraform. For these reasons, Oracle does not recommend using the
Kubernetes security list management feature in production environments.
To specify how the Kubernetes security list management feature manages security lists, add the following annotation
in the metadata section of the manifest file:

service.beta.kubernetes.io/oci-load-balancer-security-list-management-mode: <value>

where <value> is one of:


• "All": All required security list rules for load balancer services are managed.
• "Frontend": Only security list rules for ingress to load balancer services are managed. You have to set up a
rule that allows inbound traffic to the appropriate ports for node port ranges, the kube-proxy health port, and the
health check port ranges.
• "None": No security list management is enabled. You have to set up a rule that allows inbound traffic to the
appropriate ports for node port ranges, the kube-proxy health port, and the health check port ranges. Additionally,
you have to set up rules to allow inbound traffic to load balancers.

For example:

apiVersion: v1
kind: Service
metadata:
name: my-nginx-svc
labels:
app: nginx
annotations:
service.beta.kubernetes.io/oci-load-balancer-security-list-management-mode: "Frontend"
spec:
type: LoadBalancer
ports:
- port: 80
selector:
app: nginx

Note that if you specify an invalid value for oci-load-balancer-security-list-management-mode,
the value "All" is used instead.

Specifying Load Balancer Listener Protocol


You can define the type of traffic accepted by the load balancer listener by specifying the protocol on which the
listener accepts connection requests.
To explicitly specify the load balancer listener protocol, add the following annotation in the metadata section of the
manifest file:

service.beta.kubernetes.io/oci-load-balancer-backend-protocol: <value>

where <value> is the protocol that defines the type of traffic accepted by the listener. For example, "HTTP". To get
a list of valid protocols, use the ListProtocols operation.
For example:

apiVersion: v1
kind: Service
metadata:
name: my-nginx-svc
labels:
app: nginx
annotations:
service.beta.kubernetes.io/oci-load-balancer-backend-protocol: "HTTP"
spec:
type: LoadBalancer
ports:
- port: 80
selector:
app: nginx

Note that if you don't explicitly specify a protocol, "TCP" is used as the default value.

Specifying Load Balancer Health Check Parameters


An Oracle Cloud Infrastructure load balancer applies a health check policy to continuously monitor backend servers.
A health check is a test to confirm backend server availability, and can be a request or a connection attempt. If a
server fails the health check, the load balancer takes the server temporarily out of rotation. If the server subsequently
passes the health check, the load balancer returns it to the rotation.

Health check policies include a number of parameters, which have default values. When you create a load balancer,
you can override health check parameter default values by including annotations in the metadata section of the
manifest file. Having created a load balancer, you can later add, modify, and delete those annotations. If you delete
an annotation that specified a value for a health check parameter, the load balancer uses the parameter's default value
instead.
To specify how many unsuccessful health check requests to attempt before a backend server is considered unhealthy,
add the following annotation in the metadata section of the manifest file:

service.beta.kubernetes.io/oci-load-balancer-health-check-retries: <value>

where <value> is the number of unsuccessful health check requests.


To specify the interval between health check requests, add the following annotation in the metadata section of the
manifest file:

service.beta.kubernetes.io/oci-load-balancer-health-check-interval: <value>

where <value> is a numeric value in milliseconds. The minimum is 1000.


To specify the maximum time to wait for a response to a health check request, add the following annotation in the
metadata section of the manifest file:

service.beta.kubernetes.io/oci-load-balancer-health-check-timeout: <value>

where <value> is a numeric value in milliseconds. A health check is successful only if the load balancer receives a
response within this timeout period.
For example:

apiVersion: v1
kind: Service
metadata:
name: my-nginx-svc
labels:
app: nginx
annotations:
service.beta.kubernetes.io/oci-load-balancer-health-check-retries: "5"
service.beta.kubernetes.io/oci-load-balancer-health-check-interval: "15000"
service.beta.kubernetes.io/oci-load-balancer-health-check-timeout: "4000"
spec:
type: LoadBalancer
ports:
- port: 80
selector:
app: nginx

Note that if you don't explicitly specify health check parameter values by including annotations in the metadata
section of the manifest file, the following defaults are used:

Annotation Not Included                                               Default Value Used
service.beta.kubernetes.io/oci-load-balancer-health-check-retries    3
service.beta.kubernetes.io/oci-load-balancer-health-check-interval   10000
service.beta.kubernetes.io/oci-load-balancer-health-check-timeout    3000

For more information about Oracle Cloud Infrastructure load balancer health check policies, see Working with Health
Check Policies on page 2587.

Preventing Nodes from Handling Load Balancer Traffic


You can exclude particular worker nodes from the list of backend servers in an Oracle Cloud Infrastructure load
balancer backend set. For more information, see node.kubernetes.io/exclude-from-external-load-balancers on page
899.
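
For example, assuming a worker node named 10.0.10.2 (the node name here is illustrative only), you could apply the
label to the node by entering:

kubectl label node 10.0.10.2 node.kubernetes.io/exclude-from-external-load-balancers=true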

Creating a Persistent Volume Claim


Container storage via a container's root file system is ephemeral, and can disappear upon container deletion and
creation. To provide a durable location to prevent data from being lost, you can create and use persistent volumes to
store data outside of containers.
A persistent volume offers persistent storage that enables your data to remain intact, regardless of whether the
containers to which the storage is connected are terminated.
A persistent volume claim (PVC) is a request for storage, which is met by binding the PVC to a persistent volume
(PV). A PVC provides an abstraction layer to the underlying storage. For example, an administrator could create a
number of static persistent volumes that can later be bound to one or more persistent volume claims. If none of the
static persistent volumes match the user's PVC request, the cluster may attempt to dynamically create a new PV that
matches the PVC request.
With Oracle Cloud Infrastructure as the underlying IaaS provider, you can provision persistent volume claims by
attaching volumes from the Oracle Cloud Infrastructure Block Volume service. The volumes are connected to clusters
created by Container Engine for Kubernetes using FlexVolume and CSI (Container Storage Interface) volume plugins
deployed on the clusters.
The minimum amount of persistent storage that a PVC can request is 50 gigabytes. If the request is for less than 50
gigabytes, the request is rounded up to 50 gigabytes.
For more information about persistent volumes, persistent volume claims, and volume plugins, see the Kubernetes
documentation.

Provisioning Persistent Volume Claims on the Block Volume Service


The Oracle Cloud Infrastructure Block Volume service (the Block Volume service) provides persistent, durable, and
high-performance block storage for your data. You can use the CSI volume plugin or the FlexVolume volume plugin
to connect clusters to volumes from the Block Volume service. Using the CSI volume plugin has several advantages:
• In future, new functionality will only be added to the CSI volume plugin, not to the FlexVolume volume plugin
(although Kubernetes developers will continue to maintain the FlexVolume volume plugin).
• The CSI volume plugin does not require access to underlying operating system and root file system dependencies.
The StorageClass specified for a PVC controls which volume plugin to use to connect to Block Volume service
volumes. If you don't explicitly specify a value for storageClassName in the yaml file that defines the PVC, the
cluster's default StorageClass is used. In clusters created by Container Engine for Kubernetes, the oci StorageClass
is initially set up as the default. The oci StorageClass is used by the FlexVolume volume plugin.
In the case of the CSI volume plugin, the CSI topology feature ensures that worker nodes and volumes are located
in the same availability domain. In the case of the FlexVolume volume plugin, you can use the matchLabels
element to select the availability domain in which a persistent volume claim is provisioned. Note that you do not use
the matchLabels element with the CSI volume plugin.

Regardless of the volume plugin you choose to use, if a cluster is in a different compartment from its worker nodes, you
must create an additional policy to enable access to Block Volume service volumes. This situation arises when the
subnet specified for a node pool belongs to a different compartment from the cluster. To enable the worker nodes to
access Block Volume service volumes, create the additional policy with both of the following policy statements (a CLI
example follows the list):
• ALLOW any-user to manage volumes in TENANCY where request.principal.type =
'cluster'
• ALLOW any-user to manage volume-attachments in TENANCY where
request.principal.type = 'cluster'
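
For example, you could create such a policy in the root compartment using the Oracle Cloud Infrastructure CLI (a
sketch only; the policy name, description, and statements file name are illustrative, and you must have permission to
manage policies in the tenancy):

oci iam policy create --compartment-id <tenancy-ocid> --name oke-block-volume-access --description "Allow clusters to manage Block Volume volumes and attachments" --statements file://statements.json

where statements.json contains a JSON array of the two policy statements shown above.
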
Note:

In the FlexVolume examples in this topic, the PVCs request storage in availability domains in the Ashburn region
using the failure-domain.beta.kubernetes.io/zone label. For more information about using this label (and the
shortened versions of availability domain names to specify), see failure-domain.beta.kubernetes.io/zone on page 898.

Specifying the Volume plugin used by a Persistent Volume Claim


To explicitly specify the volume plugin to use to connect to the Block Volume service when provisioning a persistent
volume claim, specify a value for storageClassName in the yaml file that defines the PVC:
• to use the CSI volume plugin, specify storageClassName: "oci-bv"
• to use the FlexVolume volume plugin, specify storageClassName: "oci"
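
You can list the StorageClasses available on the cluster, and see which one is currently set as the default, by entering
(the output varies from cluster to cluster):

kubectl get storageclass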

Example 1: Dynamically Creating a Persistent Volume on the Block Volume Service for Use by the
CSI Volume Plugin
In this example, the cluster administrator has not created any suitable PVs that match the PVC request. As a result,
a block volume is dynamically provisioned using the CSI plugin specified by the oci-bv StorageClass's definition
(provisioner: blockvolume.csi.oraclecloud.com).
You define a PVC in a file called csi-bvs-pvc.yaml. For example:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: mynginxclaim
spec:
storageClassName: "oci-bv"
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 50Gi

Enter the following command to create the PVC from the csi-bvs-pvc.yaml file:

kubectl create -f csi-bvs-pvc.yaml

The output from the above command confirms the creation of the PVC:

persistentvolumeclaim "mynginxclaim" created

Verify that the PVC has been created by running kubectl get pvc:

kubectl get pvc

The output from the above command shows the current status of the PVC:

NAME           STATUS    VOLUME   CAPACITY   ACCESSMODES   STORAGECLASS   AGE
mynginxclaim   Pending                                     oci-bv         4m

The PVC has a status of Pending because the oci-bv StorageClass's definition includes
volumeBindingMode: WaitForFirstConsumer.
You can use this PVC when creating other objects, such as pods. For example, you could create a new pod from the
following pod definition, which instructs the system to use the mynginxclaim PVC as the pod's data volume, mounted
by the nginx container at /usr/share/nginx/html.

apiVersion: v1
kind: Pod
metadata:
name: nginx
spec:
containers:
- name: nginx
image: nginx:latest
ports:
- name: http
containerPort: 80
volumeMounts:
- name: data
mountPath: /usr/share/nginx/html
volumes:
- name: data
persistentVolumeClaim:
claimName: mynginxclaim

Having created the new pod, you can verify that the PVC has been bound to a new persistent volume by entering:

kubectl get pvc

The output from the above command confirms that the PVC has been bound:

NAME           STATUS   VOLUME                             CAPACITY   ACCESSMODES   STORAGECLASS   AGE
mynginxclaim   Bound    ocid1.volume.oc1.iad.<unique_ID>   50Gi       RWO           oci-bv         4m

You can verify that the pod is using the new persistent volume claim by entering:

kubectl describe pod nginx

Example 2: Dynamically Creating a Persistent Volume on the Block Volume Service for Use by the
FlexVolume Volume Plugin
In this example, the cluster administrator has not created any suitable PVs that match the PVC request. As a result, a
block volume is dynamically provisioned using the FlexVolume volume plugin specified by the oci StorageClass's
definition (provisioner: oracle.com/oci).
You define a PVC in a file called flex-bvs-pvc.yaml. For example:

apiVersion: v1
kind: PersistentVolumeClaim


metadata:
name: mynginxclaim
spec:
storageClassName: "oci"
selector:
matchLabels:
failure-domain.beta.kubernetes.io/zone: "US-ASHBURN-AD-1"
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 50Gi

Note that the flex-bvs-pvc.yaml file includes the matchLabels element, which is only applicable in the case of the
FlexVolume volume plugin.
Enter the following command to create the PVC from the flex-bvs-pvc.yaml file:

kubectl create -f flex-bvs-pvc.yaml

The output from the above command confirms the creation of the PVC:

persistentvolumeclaim "mynginxclaim" created

Verify that the PVC has been created and bound to a new persistent volume by entering:

kubectl get pvc

The output from the above command shows the current status of the PVC:

NAME           STATUS   VOLUME                             CAPACITY   ACCESSMODES   STORAGECLASS   AGE
mynginxclaim   Bound    ocid1.volume.oc1.iad.<unique_ID>   50Gi       RWO           oci            4m

The PVC already has a status of Bound because the oci StorageClass's definition includes
volumeBindingMode: Immediate.
You can use this PVC when creating other objects, such as pods. For example, the following pod definition instructs
the system to use the mynginxclaim PVC as the pod's data volume, mounted by the nginx container at /usr/share/nginx/html.

apiVersion: v1
kind: Pod
metadata:
name: nginx
spec:
containers:
- name: nginx
image: nginx:latest
ports:
- name: http
containerPort: 80
volumeMounts:
- name: data
mountPath: /usr/share/nginx/html
volumes:
- name: data
persistentVolumeClaim:
claimName: mynginxclaim

Having created the new pod, you can verify that it is running and using the new persistent volume claim by entering:

kubectl describe pod nginx

Example 3: Creating a Persistent Volume from a Backup on the Block Volume Service for Use by
the FlexVolume Volume Plugin
In this example, the cluster administrator has created a block volume backup for you to use when provisioning a new
persistent volume claim. The block volume backup comes with data ready for use by other objects such as pods.
You define a PVC in a file called flex-pvcfrombackup.yaml. You use the volume.beta.kubernetes.io/oci-volume-source
annotation to specify the source of the block volume to use when provisioning a new persistent volume claim using
the FlexVolume volume plugin. You can specify the OCID of either a block volume or a block volume backup as the
value of the annotation. In this example, you specify the OCID of the block volume backup created by the cluster
administrator. For example:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: myvolume
annotations:
volume.beta.kubernetes.io/oci-volume-source: ocid1.volumebackup.oc1.iad.abuw...
spec:
selector:
matchLabels:
failure-domain.beta.kubernetes.io/zone: US-ASHBURN-AD-1
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 50Gi

Note that the flex-pvcfrombackup.yaml file includes the matchLabels element, which is only applicable in the
case of the FlexVolume volume plugin.
Enter the following command to create the PVC from the flex-pvcfrombackup.yaml file:

kubectl create -f flex-pvcfrombackup.yaml

The output from the above command confirms the creation of the PVC:

persistentvolumeclaim "myvolume" created

Verify that the PVC has been created and bound to a new persistent volume created from the volume backup by
entering:

kubectl get pvc

The output from the above command shows the current status of the PVC:

NAME       STATUS   VOLUME                             CAPACITY   ACCESSMODES   STORAGECLASS   AGE
myvolume   Bound    ocid1.volume.oc1.iad.<unique_ID>   50Gi       RWO           oci            4m

You can use the new persistent volume created from the volume backup when defining other objects, such as pods.
For example, the following pod definition instructs the system to use the myvolume PVC as the pod's data volume,
mounted by the nginx container at /usr/share/nginx/html.

apiVersion: v1
kind: Pod
metadata:
name: nginx
spec:
containers:
- name: nginx
image: nginx:latest
ports:
- name: http
containerPort: 80
volumeMounts:
- name: data
mountPath: /usr/share/nginx/html
volumes:
- name: data
persistentVolumeClaim:
claimName: myvolume

Having created the new pod, you can verify that it is running and using the new persistent volume claim by entering:

kubectl describe pod nginx

Adding OCI Service Broker for Kubernetes to Clusters


Service brokers offer a catalog of backing services to workloads running on cloud native platforms. The Open Service
Broker API is a commonly-used standard for interactions between service brokers and platforms. The Open Service
Broker API specification describes a simple set of API endpoints that platforms use to provision, gain access to, and
manage service offerings. For more information about the Open Service Broker API, see resources available online
including those at openservicebrokerapi.org.
OCI Service Broker for Kubernetes is an implementation of the Open Service Broker API. OCI Service Broker for
Kubernetes is specifically for interacting with Oracle Cloud Infrastructure services from Kubernetes clusters. It
includes service broker adapters to bind to the following Oracle Cloud Infrastructure services:
• Object Storage
• Autonomous Transaction Processing
• Autonomous Data Warehouse
• Streaming
You can add OCI Service Broker for Kubernetes to clusters you've created with Oracle Cloud Infrastructure
Container Engine for Kubernetes to interact with the Oracle Cloud Infrastructure services listed above. Having added
OCI Service Broker for Kubernetes to a cluster, you don't have to manually provision and de-provision the Oracle
Cloud Infrastructure services each time you deploy or un-deploy an application on the cluster. Instead, you interact
with the Oracle Cloud Infrastructure services by using kubectl to call the Open Service Broker APIs implemented by
OCI Service Broker for Kubernetes.
OCI Service Broker for Kubernetes is available as a Helm chart, a Docker container, and as source code from Github.
For more information about OCI Service Broker for Kubernetes, see the OCI Service Broker for Kubernetes
documentation in the Github repository.

Adding OCI Service Broker for Kubernetes to a Cluster


To add OCI Service Broker for Kubernetes to a cluster, follow the detailed instructions in the Github repository.
For convenience, here's a high-level summary of the steps involved:

1. Install OCI Service Broker for Kubernetes. During this step, you will typically:
• Install the Service Catalog.
• Install the svcat tool.
• Deploy OCI Service Broker for Kubernetes.
• Grant RBAC permissions and roles.
• Register OCI Service Broker for Kubernetes.
For more information about installation, see the OCI Service Broker for Kubernetes documentation in the Github
repository.
2. Secure OCI Service Broker for Kubernetes. During this step, you will typically:
• Restrict access to Service Catalog resources using RBAC permissions and roles.
• Configure TLS for OCI Service Broker for Kubernetes.
• Set up an Oracle Cloud Infrastructure user for use by OCI Service Broker for Kubernetes.
• Set up appropriate policies to control access to resources (according to the Oracle Cloud Infrastructure services
to be used).
• Limit access to the OCI Service Broker for Kubernetes endpoint using NetworkPolicy.
• Stand up an etcd cluster for Service Catalog and OCI Service Broker for Kubernetes.
• Protect sensitive values by creating secrets.
The security configuration to choose will depend on your particular requirements. For more information, see the
OCI Service Broker for Kubernetes documentation in the Github repository.
3. Provision and bind to the required Oracle Cloud Infrastructure services. During this step, you will typically:
• Provide service provision request parameters.
• Provide service binding request parameters.
• Provide service binding response credentials.
The details to provide will depend on the Oracle Cloud Infrastructure service to bind to. For more information, see
the OCI Service Broker for Kubernetes documentation in the Github repository.
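
As a quick check after installation and registration, you could use the svcat tool to confirm that the Service Catalog
can see the broker and the service classes it offers (a hedged example; the names shown in the output depend on how
you registered the broker):

svcat get brokers
svcat get classes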

Example: Setting Up an Ingress Controller on a Cluster


You can set up different open source ingress controllers on clusters you have created with Container Engine for
Kubernetes.
This topic explains how to set up an example ingress controller along with corresponding access control on an
existing cluster. Having set up the ingress controller, this topic describes how to use the ingress controller with an
example hello-world backend, and how to verify the ingress controller is working as expected.

Example Components
The example includes an ingress controller and a hello-world backend.

Ingress Controller Components


The ingress controller comprises:
• An ingress controller deployment called nginx-ingress-controller. The deployment deploys an image
that contains the binary for the ingress controller and Nginx. The binary manipulates and reloads the
/etc/nginx/nginx.conf configuration file when an ingress is created in Kubernetes. Nginx upstreams point to
services that match specified selectors.
• An ingress controller service called ingress-nginx. The service exposes the ingress controller deployment
as a LoadBalancer type service. Because Container Engine for Kubernetes uses an Oracle Cloud Infrastructure
integration/cloud-provider, a load balancer will be dynamically created with the correct nodes configured as a
backend set.

Backend Components
The hello-world backend comprises:
• A backend deployment called docker-hello-world. The deployment handles default routes for health
checks and 404 responses. This is done by using a stock hello-world image that serves the minimum required
routes for a default backend.
• A backend service called docker-hello-world-svc. The service exposes the backend deployment for
consumption by the ingress controller deployment.

Setting Up the Example Ingress Controller


In this section, you create the access rules for ingress. You then create the example ingress controller components,
and confirm they are running.

Creating the Access Rules for the Ingress Controller


1. If you haven't already done so, follow the steps to set up the cluster's kubeconfig configuration file and (if
necessary) set the KUBECONFIG environment variable to point to the file. Note that you must set up your own
kubeconfig file. You cannot access a cluster using a kubeconfig file that a different user set up. See Setting Up
Cluster Access on page 875.
2. If your Oracle Cloud Infrastructure user is a tenancy administrator, skip the next step and go straight to Creating
the Service Account, and the Ingress Controller on page 954.
3. If your Oracle Cloud Infrastructure user is not a tenancy administrator, in a terminal window, grant the user the
Kubernetes RBAC cluster-admin clusterrole on the cluster by entering:

kubectl create clusterrolebinding <my-cluster-admin-binding> --clusterrole=cluster-admin --user=<user-OCID>

where:
• <my-cluster-admin-binding> is a string of your choice to be used as the name for the binding
between the user and the Kubernetes RBAC cluster-admin clusterrole. For example, jdoe_clst_adm
• <user-OCID> is the user's OCID (obtained from the Console ). For example,
ocid1.user.oc1..aaaaa...zutq (abbreviated for readability).
For example:

kubectl create clusterrolebinding jdoe_clst_adm --clusterrole=cluster-admin --user=ocid1.user.oc1..aaaaa...zutq

Creating the Service Account, and the Ingress Controller


1. Run the following command to create the nginx-ingress-controller ingress controller deployment,
along with the Kubernetes RBAC roles and bindings:

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/nginx-0.30.0/deploy/static/mandatory.yaml
2. Create and save the file cloud-generic.yaml containing the following code to define the
ingress-nginx ingress controller service as a load balancer service:

kind: Service
apiVersion: v1
metadata:
name: ingress-nginx
namespace: ingress-nginx
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx

spec:
type: LoadBalancer
selector:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
ports:
- name: http
port: 80
targetPort: http
- name: https
port: 443
targetPort: https
3. Using the file you just saved, create the ingress-nginx ingress controller service by running the following
command:

kubectl apply -f cloud-generic.yaml

Verifying the ingress-nginx Ingress Controller Service is Running as a Load Balancer Service
1. View the list of running services by entering:

kubectl get svc -n ingress-nginx

The output from the above command shows the services that are running:

NAME            TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)                      AGE
ingress-nginx   LoadBalancer   10.96.229.38   <pending>     80:30756/TCP,443:30118/TCP   1h

The EXTERNAL-IP for the ingress-nginx ingress controller service is shown as <pending> until the load
balancer has been fully created in Oracle Cloud Infrastructure.
2. Repeat the kubectl get svc command until an EXTERNAL-IP is shown for the ingress-nginx ingress
controller service:

kubectl get svc -n ingress-nginx

The output from the above command shows the EXTERNAL-IP for the ingress-nginx ingress controller
service:

NAME            TYPE           CLUSTER-IP     EXTERNAL-IP       PORT(S)                      AGE
ingress-nginx   LoadBalancer   10.96.229.38   129.146.214.219   80:30756/TCP,443:30118/TCP   1h

Creating the TLS Secret


A TLS secret is used for SSL termination on the ingress controller.
1. Output a new key to a file. For example, by entering:

openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout tls.key -out
tls.crt -subj "/CN=nginxsvc/O=nginxsvc"

To generate the secret for this example, a self-signed certificate is used. While this is okay for testing, for
production, use a certificate signed by a Certificate Authority.
Note:

Under Windows, you may need to replace "/CN=nginxsvc/O=nginxsvc" with "//CN=nginxsvc\O=nginxsvc". For
example, this is necessary if you run the openssl command from a Git Bash shell.
2. Create the TLS secret by entering:

kubectl create secret tls tls-secret --key tls.key --cert tls.crt
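
You can confirm that the secret was created by entering:

kubectl get secret tls-secret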

Setting Up the Example Backend


In this section, you define a hello-world backend service and deployment.

Creating the docker-hello-world Service Definition


1. Create the file hello-world-ingress.yaml containing the following code. This code uses a publicly
available hello-world image from Docker Hub. You can substitute another image of your choice that can be run in
a similar manner.

apiVersion: apps/v1
kind: Deployment
metadata:
name: docker-hello-world
labels:
app: docker-hello-world
spec:
selector:
matchLabels:
app: docker-hello-world
replicas: 3
template:
metadata:
labels:
app: docker-hello-world
spec:
containers:
- name: docker-hello-world
image: scottsbaldwin/docker-hello-world:latest
ports:
- containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
name: docker-hello-world-svc
spec:
selector:

app: docker-hello-world
ports:
- port: 8088
targetPort: 80
type: ClusterIP

Note the docker-hello-world service's type is ClusterIP, rather than LoadBalancer, because this service will be
proxied by the ingress-nginx ingress controller service. The docker-hello-world service does not need public
access directly to it. Instead, the public access will be routed from the load balancer to the ingress controller, and
from the ingress controller to the upstream service.
2. Create the new hello-world deployment and service on nodes in the cluster by running the following command:

kubectl create -f hello-world-ingress.yaml

Using the Example Ingress Controller to Access the Example Backend


In this section you create an ingress to access the backend using the ingress controller.

Creating the Ingress Resource


1. Create the file ingress.yaml and populate it with this code:

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
name: hello-world-ing
annotations:
kubernetes.io/ingress.class: "nginx"
spec:
tls:
- secretName: tls-secret
rules:
- http:
paths:
- backend:
serviceName: docker-hello-world-svc
servicePort: 8088
2. Create the resource by entering:

kubectl create -f ingress.yaml
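
Note: The ingress definition above uses the networking.k8s.io/v1beta1 API, which has been removed in more recent
Kubernetes versions. On clusters that serve the networking.k8s.io/v1 Ingress API, an equivalent definition might look
like the following sketch (check the API versions supported by your cluster before using it):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hello-world-ing
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  tls:
  - secretName: tls-secret
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: docker-hello-world-svc
            port:
              number: 8088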

Verifying that the Example Components are Working as Expected


In this section, you confirm that all of the example components have been successfully created and are operating as
expected. The docker-hello-world-svc service should be running as a ClusterIP service, and the ingress-
nginx service should be running as a LoadBalancer service. Requests sent to the ingress controller should be routed
to nodes in the cluster.

Obtaining the External IP Address of the Load Balancer


To confirm the ingress-nginx service is running as a LoadBalancer service, obtain its external IP address by
entering:

kubectl get svc --all-namespaces

The output from the above command shows the services that are running:

NAMESPACE       NAME                     TYPE           CLUSTER-IP     EXTERNAL-IP       PORT(S)                      AGE
default         docker-hello-world-svc   ClusterIP      10.96.83.247   <none>            8088/TCP                     16s
default         kubernetes               ClusterIP      10.96.0.1      <none>            443/TCP                      1h
ingress-nginx   ingress-nginx            LoadBalancer   10.96.229.38   129.146.214.219   80:30756/TCP,443:30118/TCP   5m
kube-system     kube-dns                 ClusterIP      10.96.5.5      <none>            53/UDP,53/TCP                1h

Sending cURL Requests to the Load Balancer


1. Use the external IP address of the ingress-nginx service (for example, 129.146.214.219) to curl an http
request by entering:

curl -I http://129.146.214.219

Example output from the above command:

HTTP/1.1 301 Moved Permanently
Via: 1.1 10.68.69.10 (McAfee Web Gateway 7.6.2.10.0.23236)
Date: Thu, 07 Sep 2017 15:20:16 GMT
Server: nginx/1.13.2
Location: https://129.146.214.219/
Content-Type: text/html
Content-Length: 185
Proxy-Connection: Keep-Alive
Strict-Transport-Security: max-age=15724800; includeSubDomains;

The output shows a 301 redirect and a Location header that suggest that http traffic is being redirected to https.
2. Either cURL against the https url or add the -L option to automatically follow the location header. The -k option
instructs cURL to not verify the SSL certificates. For example, by entering:

curl -ikL http://129.146.214.219

Example output from the above command:

HTTP/1.1 301 Moved Permanently
Via: 1.1 10.68.69.10 (McAfee Web Gateway 7.6.2.10.0.23236)
Date: Thu, 07 Sep 2017 15:22:29 GMT
Server: nginx/1.13.2
Location: https://129.146.214.219/
Content-Type: text/html
Content-Length: 185
Proxy-Connection: Keep-Alive
Strict-Transport-Security: max-age=15724800; includeSubDomains;

HTTP/1.0 200 Connection established

HTTP/1.1 200 OK
Server: nginx/1.13.2
Date: Thu, 07 Sep 2017 15:22:30 GMT
Content-Type: text/html
Content-Length: 71
Connection: keep-alive
Last-Modified: Thu, 07 Sep 2017 15:17:24 GMT
ETag: "59b16304-47"

Accept-Ranges: bytes
Strict-Transport-Security: max-age=15724800; includeSubDomains;

<h1>Hello webhook world from: docker-hello-world-1732906117-0ztkm</h1>

The last line of the output shows the HTML that is returned from the pod whose hostname is
docker-hello-world-1732906117-0ztkm.
3. Issue the cURL request several times to see the hostname in the HTML output change, demonstrating that load
balancing is occurring:

$ curl -k https://129.146.214.219

<h1>Hello webhook world from: docker-hello-world-1732906117-6115l</h1>

$ curl -k https://129.146.214.219

<h1>Hello webhook world from: docker-hello-world-1732906117-7r89v</h1>

$ curl -k https://129.146.214.219

<h1>Hello webhook world from: docker-hello-world-1732906117-0ztkm</h1>

Inspecting nginx.conf
The nginx-ingress-controller ingress controller deployment manipulates the nginx.conf file in the pod
within which it is running.
1. Find the name of the pod running the nginx-ingress-controller ingress controller deployment by
entering:

kubectl get po -n ingress-nginx

The output from the above command shows the name of the pod running the nginx-ingress-controller
ingress controller:

NAME                                       READY     STATUS    RESTARTS   AGE
nginx-ingress-controller-110676328-h86xg   1/1       Running   0          1h
2. Use the name of the pod running the nginx-ingress-controller ingress controller deployment to show
the contents of nginx.conf by entering the following kubectl exec command:

kubectl exec -n ingress-nginx -it nginx-ingress-controller-110676328-h86xg -- cat /etc/nginx/nginx.conf
3. Look for proxy_pass in the output. There will be one for the default backend and another that looks similar to:

proxy_pass http://upstream_balancer;

This shows that Nginx is proxying requests to an upstream called upstream_balancer.


4. Locate the upstream definition in the output. It will look similar to:

upstream upstream_balancer {
server 0.0.0.1:1234; # placeholder

balancer_by_lua_block {
tcp_udp_balancer.balance()
}
}

The upstream is proxying via Lua.

Example: Installing Calico and Setting Up Network Policies


The Kubernetes networking model assumes containers (pods) have unique and routable IP addresses within a cluster.
In the Kubernetes networking model, containers communicate with each other using those IP addresses, regardless of
whether the containers are deployed on the same node in a cluster or on a different node. The Container Networking
Interface (CNI) is the API that enables containers to communicate with the network using IP addresses.
By default, pods accept traffic from any source. To enhance cluster security, pods can be 'isolated' by selecting them
in a network policy (the Kubernetes NetworkPolicy resource). A network policy is a specification of how groups of
pods are allowed to communicate with each other and other network endpoints. NetworkPolicy resources use labels to
select pods and to define rules that specify what traffic is allowed to the selected pods. If a NetworkPolicy in a cluster
namespace selects a particular pod, that pod will reject any connections that are not allowed by any NetworkPolicy.
Other pods in the namespace that are not selected by a NetworkPolicy will continue to accept all traffic. For more
information about network policies, see the Kubernetes documentation.
Network policies are implemented by the CNI network provider. Simply creating the NetworkPolicy resource without
a CNI network provider to implement it will have no effect. Note that not all CNI network providers implement the
NetworkPolicy resource.
Clusters you create with Container Engine for Kubernetes have flannel installed as the default CNI network provider.
flannel is a simple overlay virtual network that satisfies the requirements of the Kubernetes networking model by
attaching IP addresses to containers. For more information about flannel, see the flannel documentation.
Although flannel satisfies the requirements of the Kubernetes networking model, it does not support NetworkPolicy
resources. If you want to enhance the security of clusters you create with Container Engine for Kubernetes by
implementing network policies, you have to install and configure a network provider that does support NetworkPolicy
resources. One such provider is Calico (refer to the Kubernetes documentation for a list of other network providers).
Calico is an open source networking and network security solution for containers, virtual machines, and native host-
based workloads. For more information about Calico, see the Calico documentation.
You can manually install Calico alongside flannel in clusters you have created using Container Engine for
Kubernetes.
Note:

• Only the use of open source Calico is supported. Use of Calico Enterprise
is not supported.
• If you install Calico on a cluster that has existing node pools in which
pods are already running, you will have to recreate the pods when the
Calico installation is complete. For example, by running the kubectl
rollout restart command. If you install Calico on a cluster before
creating any node pools in the cluster (recommended), you can be sure
that there will be no pods to recreate.

Installing Calico manually


Having created a cluster using Container Engine for Kubernetes (using either the Console or the API), you can
subsequently install Calico on the cluster (alongside flannel) to support network policies.
For convenience, Calico installation instructions are included below, based on Calico version 3.10. Note that Calico
installation instructions vary between Calico versions. For information about installing different versions of Calico,
always refer to the Calico documentation for installing Calico for network policy enforcement only.
1. If you haven't already done so, follow the steps to set up the cluster's kubeconfig configuration file and (if
necessary) set the KUBECONFIG environment variable to point to the file. Note that you must set up your own

kubeconfig file. You cannot access a cluster using a kubeconfig file that a different user set up. See Setting Up
Cluster Access on page 875.
2. In a terminal window, download the Calico policy-only manifest for the Kubernetes API datastore by entering:

curl https://docs.projectcalico.org/v3.10/manifests/calico-policy-only.yaml -o calico.yaml

Note that the url differs, according to the version of Calico that you want to install. Refer to the Calico
documentation for instructions to install a particular version of Calico.
3. The calico.yaml file includes multiple references to the pod CIDR block value. In the downloaded calico.yaml
file, the pod CIDR block value is initially set to 192.168.0.0/16. If the pod CIDR block value of the cluster
created by Container Engine for Kubernetes is 192.168.0.0/16, skip this step. However, if the pod CIDR block
value of the cluster created by Container Engine for Kubernetes is a different value (such as the default value of
10.244.0.0/16), you have to change the initial value in the calico.yaml file. The steps below show one way to do
that:
a. Set the value of an environment variable to the pod CIDR block value. For example, by entering a command
like:

export POD_CIDR="10.244.0.0/16"
b. Replace the default value 192.168.0.0/16 in the calico.yaml file with the actual pod CIDR block value of the
cluster created by Container Engine for Kubernetes. For example, by entering a command like:

sed -i -e "s?192.168.0.0/16?$POD_CIDR?g" calico.yaml
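A quick way to confirm that the substitution took effect is to count the old and new values in the file. This is just a sanity check; the POD_CIDR variable follows the example above:

grep -c "192.168.0.0/16" calico.yaml   # should now be 0
grep -c "$POD_CIDR" calico.yaml        # should be greater than 0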


4. The calico.yaml file defines a deployment named calico-typha, which has a replica count of 1 by default. You
might want to consider changing this default replica count for large clusters or production environments. Calico
recommends:
• At least one replica for every 200 nodes, up to a maximum of 20.
• A minimum of three replicas in production environments to reduce the impact of rolling upgrades and failures
(the number of replicas should always be less than the number of nodes, otherwise rolling upgrades will stall).
To change the replica count, open the calico.yaml file in a text editor and change the value of the replicas
setting:

apiVersion: apps/v1
kind: Deployment
metadata:
name: calico-typha
...
spec:
...
replicas: <number-of-replicas>

Note that the way to set the replica count differs, according to the Calico version you've installed. Refer to the
Calico documentation to find out how to set the replica count for the version you've installed.
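Alternatively, if you prefer to adjust the replica count after installation rather than by editing the manifest, you can scale the deployment. This is a sketch that assumes the manifest deploys calico-typha into the kube-system namespace, as the standard Calico manifests do:

kubectl --namespace kube-system scale deployment calico-typha --replicas=3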
5. Install and configure Calico by entering the following command:

kubectl apply -f calico.yaml
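Once the apply command completes, you can check that the Calico components have started before creating any network policies. This is a sketch that assumes the labels and namespace used by the standard Calico manifests:

kubectl --namespace kube-system get pods --selector k8s-app=calico-node
kubectl --namespace kube-system get deployment calico-typha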

Setting up Network Policies


Having installed Calico on a cluster you've created with Container Engine for Kubernetes, you can create Kubernetes
NetworkPolicy resources to isolate pods as required.
For NetworkPolicy examples and how to use them, see the Calico documentation and specifically:


• Kubernetes policy, demo


• Kubernetes policy, basic tutorial
• Kubernetes policy, advanced tutorial
Note that the examples vary, according to the Calico version you've installed.

Frequently Asked Questions About Container Engine for Kubernetes


This topic provides answers to some frequently asked questions about Container Engine for Kubernetes.

Does Container Engine for Kubernetes Support Alpha and Beta Features in
Kubernetes?
Periodically, Kubernetes releases new features. New Kubernetes features are introduced in the following stages, as
described in the Kubernetes documentation and summarized below:
• Alpha stage: An Alpha feature is disabled by default, might contain bugs, and might change or be dropped at any
time. The feature is recommended for short-lived testing clusters only.
• Beta stage: A Beta feature is usually enabled by default, has been well-tested, and will not be dropped. However,
details of the feature might change in incompatible ways, and the feature is recommended for non-business-critical use only.
• General Availability stage: A Generally Available (or Stable) feature is always enabled, and will appear in
released software for many subsequent versions.
Container Engine for Kubernetes supports the use of Kubernetes Beta features that are enabled by default in
Kubernetes. Container Engine for Kubernetes does not support Alpha features, nor Beta features that are disabled by
default.
For more information about Kubernetes Alpha and Beta features, see the Kubernetes documentation.

What Are VCN-Native Clusters?


Container Engine for Kubernetes creates Kubernetes clusters that are completely integrated with your Oracle Cloud
Infrastructure Virtual Cloud Network (VCN). Worker nodes, load balancers, and the Kubernetes API endpoint are
part of your VCN, and you can configure them as public or private. Such clusters that are fully integrated with your
VCN are known as "VCN-native clusters".
Note:

In earlier releases, clusters were provisioned with the public Kubernetes API
endpoint in the Oracle-managed tenancy.
You can continue to create such clusters using the CLI or API, but not the
Console.

Container Engine for Kubernetes Metrics


You can monitor the health, capacity, and performance of Kubernetes clusters managed by Container Engine for
Kubernetes using metrics, alarms, and notifications.
This topic describes the metrics emitted by Container Engine for Kubernetes in the oci_oke metric namespace.
Resources: clusters, worker nodes

Overview of the Container Engine for Kubernetes Service Metrics


Container Engine for Kubernetes metrics help you monitor Kubernetes clusters, along with node pools and individual
worker nodes. You can use metrics data to diagnose and troubleshoot cluster and node pool issues.
To view a default set of metrics charts in the Console, navigate to the cluster you're interested in, and then click
Metrics. You also can use the Monitoring service to create custom queries.
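For example, the following is a sketch of a custom query run from the command line against the Monitoring service (the compartment OCID is a placeholder, and APIServerRequestCount is one of the metrics listed later in this topic):

oci monitoring metric-data summarize-metrics-data \
  --compartment-id ocid1.compartment.oc1..exampleuniqueID \
  --namespace oci_oke \
  --query-text 'APIServerRequestCount[1m].sum()'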


Prerequisites
IAM policies: To monitor resources, you must be given the required type of access in a policy written by an
administrator, whether you're using the Console or the REST API with an SDK, CLI, or other tool. The policy must
give you access to the monitoring services as well as the resources being monitored. If you try to perform an action
and get a message that you don’t have permission or are unauthorized, confirm with your administrator the type of
access you've been granted and which compartment you should work in. For more information on user authorizations
for monitoring, see the Authentication and Authorization section for the related service: Monitoring or Notifications.

Available Metrics: oci_oke


The metrics listed in the following tables are automatically available for any Kubernetes clusters you create. You do
not need to enable monitoring on the resource to get these metrics.
Container Engine for Kubernetes metrics include the following dimensions:
RESOURCEID
The OCID of the resource to which the metric applies.
RESOURCEDISPLAYNAME
The name of the resource to which the metric applies.
RESPONSECODE
The response code sent from the Kubernetes API server.
RESPONSEGROUP
The response code group, based on the response code's first digit (for example, 2xx, 3xx, 4xx, 5xx).
CLUSTERID
The OCID of the cluster to which the metric applies.
NODEPOOLID
The OCID of the node pool to which the metric applies.
NODESTATE
The state of the compute instance hosting the worker node. For example, ACTIVE, CREATING,
DELETING, DELETED, FAILED, UPDATING, INACTIVE.
NODECONDITION
The condition of the worker node, as indicated by the Kubernetes API server. For example, Ready,
MemoryPressure, PIDPressure, DiskPressure, NetworkUnavailable.
AVAILABILITYDOMAIN
The availability domain where the compute instance resides.
FAULTDOMAIN
The fault domain where the compute instance resides.

Metric: APIServerRequestCount
Metric Display Name: API Server Requests
Unit: count
Description: Number of requests received by the Kubernetes API server.
Dimensions: resourceId, resourceDisplayName

Metric: APIServerResponseCount
Metric Display Name: API Server Response Count
Unit: count
Description: Number of different non-200 responses (that is, error responses) sent from the Kubernetes API server.
Dimensions: resourceId, resourceDisplayName, responseCode, responseGroup

Metric: UnschedulablePods
Metric Display Name: Unschedulable Pods
Unit: count
Description: Number of pods that the Kubernetes scheduler is unable to schedule. Not available in clusters running versions of Kubernetes prior to version 1.15.x.
Dimensions: resourceId, resourceDisplayName

Metric: NodeState
Metric Display Name: Node State
Unit: count
Description: Number of compute nodes in different states.
Dimensions: resourceId, clusterId, nodepoolId, resourceDisplayName, nodeState, nodeCondition, availabilityDomain, faultDomain

Metric: KubernetesNodeCondition
Metric Display Name: Kubernetes Node Condition
Unit: count
Description: Number of worker nodes in different conditions, as indicated by the Kubernetes API server.
Dimensions: resourceId, clusterId, nodepoolId, resourceDisplayName, nodeCondition

Using the Console


To view default metric charts for a single cluster
1. In the Console, open the navigation menu. Under Solutions and Platform, go to Developer Services and click
Kubernetes Clusters.
2. Select the region you are using with Container Engine for Kubernetes.
3. Select the compartment containing the cluster for which you want to view metrics.
The Clusters page shows all the clusters in the compartment you selected.
4. Click the name of the cluster for which you want to view metrics.
5. Under Resources, click Metrics.
The Metrics tab displays a chart for each metric for the cluster that is emitted by the Container Engine for
Kubernetes metric namespace. To see metrics for a node pool in the cluster, display the Node Pools tab, click the
name of the node pool, and display the Metrics tab. To see metrics for a worker node in the node pool, display the


Nodes tab and click the View Metrics link beside the name of the worker node. For more information about the
emitted metrics, see Available Metrics: oci_oke on page 963.
For more information about monitoring metrics and using alarms, see Monitoring Overview on page 2686. For
information about notifications for alarms, see Notifications Overview on page 3378.
Not seeing the cluster metrics data you expect?
If you don't see the metrics data for a cluster that you expect, see the following possible causes and resolutions.

Problem: I know the Kubernetes API server returned some error responses, but the API Server Response Count chart doesn't show them.
Possible Cause: The responses might have been returned outside the time period covered by the API Server Response Count chart.
How to Check: Confirm that the Start Time and End Time cover the period when the responses were returned.
Resolution: Adjust the Start Time and End Time as necessary.

Problem: I know the Kubernetes API server returned some error responses, but the API Server Response Count chart doesn't show them, even though the responses were returned between the Start Time and End Time.
Possible Cause: Although the responses were returned between the Start Time and End Time, the x-axis (window of data display) might be excluding the responses.
How to Check: Confirm that the x-axis (window of data display) covers the period when the responses were returned.
Resolution: Adjust the x-axis (window of data display) as necessary.

Problem: I want to see data in the charts as a continuous line over time, but the line has gaps in it.
Possible Cause: This is expected behavior. If there is no metrics data to show in the selected interval, the data line is discontinuous.
How to Check: Increase the Interval (for example, from 1 minute to 5 minutes, or from 1 minute to 1 hour).
Resolution: Adjust the Interval as necessary.


Using the API


For information about using the API and signing requests, see REST APIs on page 4409 and Security Credentials on
page 181. For information about SDKs, see Software Development Kits and Command Line Interface on page 4262.
Use the following APIs for monitoring:
• Monitoring API for metrics and alarms
• Notifications API for notifications (used with alarms)
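If you work from the command line rather than calling the REST APIs directly, the corresponding CLI commands are one option. This is a sketch; the compartment OCID is a placeholder:

oci monitoring alarm list --compartment-id ocid1.compartment.oc1..exampleuniqueID
oci ons topic list --compartment-id ocid1.compartment.oc1..exampleuniqueID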


Chapter 13
Data Transfer
This chapter explains how to migrate data to Oracle Cloud Infrastructure using Disk-Based Data Import and Data
Transfer Appliance.

Overview of Data Transfer Service


Oracle offers offline data transfer solutions that let you migrate data to Oracle Cloud Infrastructure. You can also
export data currently residing in Oracle Cloud Infrastructure to your data center offline. Moving data over the public
internet is not always feasible because of high network costs, unreliable network connectivity, long transfer times,
and security concerns. Our transfer solutions address these pain points, are easy to use, and provide faster data upload
compared to over-the-wire data transfer.
Note:

To simplify this Data Transfer documentation, we generically refer to Object


Storage to mean that you can transfer data into a bucket in either the Object
Storage tier or Archive Storage tier.
DISK-BASED DATA TRANSFER
You send your data as files on encrypted commodity disk to an Oracle transfer site. Operators at the Oracle
transfer site upload the files into your designated Object Storage bucket in your tenancy.
This transfer solution requires you to source and purchase the disk used to transfer data to Oracle Cloud
Infrastructure. The disk is shipped back to you after the data is successfully uploaded.
See Data Import - Disk on page 970 for details.
APPLIANCE-BASED DATA TRANSFER
You send your data as files on secure, high-capacity, Oracle-supplied storage appliances to an Oracle transfer
site. Operators at the Oracle transfer site upload the data into your designated Object Storage bucket in your
tenancy.
This solution supports data transfer when you are migrating a large volume of data and when using a transfer
disk is not a practical alternative. You do not need to write any code or purchase any hardware. Oracle
supplies the transfer appliance and software required to manage the transfer.
See Data Import - Appliance on page 1015 for details.
APPLIANCE-BASED DATA EXPORT
You export your data from your Oracle Cloud Infrastructure Object Storage bucket to your data center using
an Oracle-provided appliance.
This solution is useful if you have media content or processed datasets you need to share with a customer or
business partner.
See Appliance Data Export on page 1082 for details.

Supported Regions
Learn about the supported regions for Data Transfer.


Data transfer and export are supported in the following regions:


• US East (Ashburn)
• US West (Phoenix)
• US DoD East (Ashburn)
• US DoD North (Chicago)
• US DoD West (Phoenix)
• Germany Central (Frankfurt)
• UK South (London)
• Japan East (Tokyo)
• Japan Central (Osaka)

Limits on Data Transfer Service Resources


Learn about how to determine your limits on Data Transfer service resources.
When you sign up for Oracle Cloud Infrastructure, a set of service limits is configured for your tenancy. The service
limit is the quota or allowance set on a resource. Verify that your service limits are set appropriately before you begin
the data transfer process.
See Service Limits on page 217 for a list of applicable limits and instructions for requesting a limit increase. To set
compartment-specific limits on a resource or resource family, administrators can use compartment quotas.

Tagging Resources
Learn about how to use tagging on your Data Transfer resources.
Apply tags to your resources to help organize them according to your business needs. You can apply tags at the
time you create a resource, or you can update the resource later with the wanted tags. For general information about
applying tags, see Resource Tags on page 213.

Automation for Objects Using the Events Service


Learn how to use the Events service to automate certain Data Transfer events.
You can create automation based on state changes for your Oracle Cloud Infrastructure resources by using event
types, rules, and actions. For more information, see Overview of Events on page 1788.
Events for objects are handled differently than other resources. Objects do not emit events by default. Use the
Console, CLI, or API to enable a bucket to emit events for object state changes. You can enable events for object state
changes during or after bucket creation.
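For example, from the CLI, a command along the following lines enables object events on an existing bucket. This is a sketch only; the bucket name is a placeholder, and the exact parameter names can vary by CLI version, so check oci os bucket update --help before running it:

oci os bucket update --bucket-name MyBucket1 --object-events-enabled true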

Notifications
Learn how to set up notifications for certain Data Transfer events.
You can set up different types of notifications to alert you when any change happens during your data transfer. See
Notifications Overview on page 3378.
Note:

To fully utilize notifications, set up events that trigger the notifications. See
Overview of Events on page 1788 for more information.
You can also set up notifications for appliance-based import and export jobs using a CLI command. Notifications run from the CLI provide a more convenient process than using the Notifications and Events services. Instructions for setting up these CLI-based notifications are in the Preparation topics for appliance-based import and export.

Data Encryption
Learn about how Data Transfer applies encryption to data files and tasks.
Data Transfer uses the following encryption methods:


• Data at rest is encrypted with AES-256 encryption.


• Node-to-node communication is encrypted with GCM-AES-128.
• The Console and API use TLS and default to AES-256.

Inputting Text into Data Transfer


Learn what type of text you can input into Data Transfer.
You must use only ASCII text for all inputs to Data Transfer. This requirement applies to the browser-based Console
and CLIs.

What's Next
Now you are ready to prepare for your data transfer or export. See the following pages for more information on each
of the data transfer or export methods:
• Data Import - Disk on page 970
• Data Import - Appliance on page 1015
• Appliance Data Export on page 1082

Data Import - Disk


Learn about how to import your data to Oracle Cloud Infrastructure using a customer-provided commercial hard disk
drive.
Disk-Based Data Import is one of Oracle's offline data transfer solutions that lets you migrate data to Oracle Cloud
Infrastructure. You send your data as files on an encrypted disk to an Oracle transfer site. Operators at the Oracle
transfer site upload the files into the designated Object Storage bucket in your tenancy. You are then free to move the
uploaded data to other Oracle Cloud Infrastructure services as needed.
Note:

Oracle does not certify or test disks you intend to use for disk import jobs.
Calculate your disk capacity requirements and disk I/O to determine what
USB 2.0/3.0 disk works best for your data transfer needs.

Disk-Based Data Import Concepts


Learn about the concepts around disk-based data import.
IMPORT DISK
An import disk is a user-supplied storage device that is specially prepared to copy and upload data to Oracle
Cloud Infrastructure. You copy your data to the import disk and ship it in a parcel to Oracle to upload your
data.
Disk-Based Data Import supports external USB 2.0/3.0 hard disk drives.
Note:

Pin-code protected devices and physical-key protected devices are


currently not supported.
TRANSFER DISK
A transfer disk is the logical representation of an import disk that has been prepared to copy and upload data
to Oracle Cloud Infrastructure.
Note:

The terms transfer disk and import disk both represent the disk being
used to move your data to Oracle Cloud Infrastructure. Transfer disk
is used in the context of configuring the disk within the transfer job


and transfer package. Import disk is used when physically handling the
disk, such as connecting it to the Data Host or mailing it to Oracle.
TRANSFER JOB
A transfer job is the logical representation of a data migration to Oracle Cloud Infrastructure. A transfer job
consists of one or more transfer packages that each contain a single transfer disk.
DATA TRANSFER UTILITY
The Data Transfer Utility is the command line software that Oracle provides for you to prepare the transfer
disk for your data and for shipment to Oracle. In addition, you can use this software to manage transfer jobs
and packages.
Note:

You can only run Data Transfer Utility tasks for a supported Linux
machine. Windows-based machines are not supported in disk-based
transfer jobs.
DATA HOST
The host computer on your site that stores the data you intend to copy to the disk for migration to Oracle
Cloud Infrastructure.
Note:

Only Linux machines can be used as Data Hosts.


TRANSFER PACKAGE
A transfer package is the logical representation of the parcel containing the transfer disk that you ship to
Oracle to upload to Oracle Cloud Infrastructure.
BUCKET
The logical container in Oracle Cloud Infrastructure Object Storage where Oracle operators upload your data.
A bucket is associated with a single compartment in your tenancy, and that compartment's policies determine what actions a user can perform.
DATA TRANSFER ADMINISTRATOR
A new or existing IAM user that has the authorization and permissions to create and manage transfer jobs.
DATA TRANSFER UPLOAD USER
A temporary IAM user that grants Oracle personnel the authorization and permissions to upload the data
from your transfer disk to your designated Oracle Cloud Infrastructure Object Storage bucket. Delete this
temporary user after your data is uploaded to Oracle Cloud Infrastructure.

Roles and Responsibilities


Learn about the roles and responsibilities associated with disk-based data import.
Depending on your organization, the responsibilities of using and managing the data transfer may span multiple roles.
Use the following set of roles as a guideline for how you can assign the various tasks associated with the data transfer.
• Project Sponsor: Responsible for the overall success of the data transfer. Project Sponsors usually have complete
access to their organization's Oracle Cloud Infrastructure tenancy. They coordinate with the other roles in the
organization to complete the implementation of data transfer project. The Project Sponsor is also responsible for
signing legal documentation and setting up notifications for the data import.
• Infrastructure Engineer: Responsible for integrating the transfer appliance into the organization's IT
infrastructure from where the data is being transferred. Tasks associated with this role include connecting the
transfer appliance to power, placing it within the network, and setting the IP address through a serial console menu
using the provided USB-to-Serial adapter.


• Data Administrator: Responsible for identifying and preparing the data to be transferred to Oracle Cloud
Infrastructure. This person usually has access to, and expertise with, the data being migrated.
These roles correspond to the various phases of the data transfer described in the following section. A specific role
can be responsible for one or more phases.

Task Flow for Disk-Based Data Import


Learn about the role-based tasks associated with disk-based data import.
Here is a high-level overview of the tasks involved in transferring data to Oracle Cloud Infrastructure using Data
Transfer Disk. Complete one phase before proceeding to the next one. Use the roles previously described to distribute
the tasks across individuals or groups within your organization.


Secure Disk Data Transfer to Oracle Cloud Infrastructure


Learn about how security is applied to disk-based data import.
This section highlights the security details of the Data Transfer Service process.
• The Data Transfer Utility uses the standard Linux dm-crypt and LUKS utilities to encrypt block devices.
• The dm-crypt software generates a master AES-256 bit encryption key that is used for all data written to or read
from the disk. That key is protected by an encryption passphrase that the user must know to access the encrypted
data.
• When the data transfer administrator uses the Data Transfer Utility to create a disk, Oracle Cloud Infrastructure
creates a strong encryption passphrase that is displayed to the user and passed to dm-crypt. The passphrase is
displayed to standard output only once and cannot be retrieved again. Copy this passphrase to a durable, secure
location for future reference.
• For extra security, you can also encrypt your own data with your own encryption keys. Before copying your data to the transfer disk, you can encrypt your data with a tool and encryption key of your choosing (see the example after this list). After the data has been uploaded, you would need to use the same tool and encryption key to access the data.
• All network communication between the Data Transfer Utility and Oracle Cloud Infrastructure is encrypted in-
transit using Transport Layer Security (TLS).
• After copying your data to a transfer disk, generate a manifest file using the Data Transfer Utility. The manifest
contains an index of all of the copied files and generated data integrity hashes. The Data Transfer Utility copies
the config_upload_user configuration file and referenced IAM credentials to the encrypted transfer disk.
This configuration file describes the temporary IAM data transfer upload user. Oracle uses the credentials and
entries defined in the config_upload_user file when processing the transfer disk and uploading files to
Oracle Cloud Infrastructure Object Storage.
Note:

Data Transfer Service Does Not Support Passphrases on Private Keys


While we recommend encrypting a private key with a passphrase when
generating API signing keys, Data Transfer does not support passphrases
on the key file required for the config_upload_user. If you use a
passphrase, Oracle personnel cannot upload your data.
Oracle cannot upload data from a transfer disk without the correct credentials defined in this configuration file.
See Installing the Data Transfer Utility on page 974 for more information about the required configuration
files.
• When you disconnect or lock a transfer disk using the Data Transfer Utility, the original encryption passphrase
is required to once again access the disk. If the encryption passphrase is not known or lost, you cannot access the
data on the transfer disk. To reuse a transfer disk, you must reformat the disk. Reformatting a disk removes all the
data.
• Oracle retrieves the encryption passphrase for a transfer disk from Oracle Cloud Infrastructure. Oracle uses the
passphrase to decrypt, mount the transfer disk, and upload the data to the designated bucket in the tenancy.
• After processing a transfer package, Oracle returns the transfer disk attached to the transfer package using the
return shipping label you provide.
• To protect your data, we make the data on the disk unrecoverable before shipping the transfer disk back to you. To
comply with customs regulations, we wipe the disk completely before shipping it back to international shipping
addresses.
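As an example of the optional extra encryption layer mentioned in this list, the following sketch uses GnuPG symmetric encryption on an archive before it is copied to the transfer disk. The file name is hypothetical, gpg is only one of many tools you could choose, and you would need the same tool and passphrase to decrypt the data after it is uploaded:

# Encrypt the archive with a passphrase before copying it to the transfer disk.
gpg --symmetric --cipher-algo AES256 --output dataset.tar.gz.gpg dataset.tar.gz

# After the upload, decrypt it with the same passphrase.
gpg --decrypt --output dataset.tar.gz dataset.tar.gz.gpg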

Ways to Manage Disk Data Transfers


Learn about the different methods available for running disk-based data imports.
We provide two ways to manage disk-based data transfers:
• The Data Transfer Utility is a full-featured command line tool for disk-based data transfers only (appliance-based
data transfers use a different command line tool). For more information and installation instructions, see Installing
the Data Transfer Utility on page 974.


• The Console is an easy-to-use, partial-featured browser-based interface. For more information, see Signing In to
the Console on page 41.
Note:

You can perform many data transfer tasks using either the Console or the Data Transfer Utility. However, there are some tasks you can only perform using the Data Transfer Utility (for example, creating and locking the transfer disk). The task topics in this documentation describe the management tasks in detail and guide you to the appropriate management interface to use for each task.

What's Next
You are now ready to begin preparation for the Disk-Based Data Import. See Preparing for Disk Data Transfers on
page 974 for more information.

Preparing for Disk Data Transfers


Learn about how to prepare for a disk-based data import job.

This topic describes the tasks associated with preparing for the Disk-Based Data Import. The Project Sponsor role
typically performs these tasks. See Roles and Responsibilities on page 971.
Import Disk Requirements
Learn about the requirements for running a disk-based data import job.
You are responsible for performing the following tasks in order:
• Purchasing the required number of hard drives to migrate your data to Oracle Cloud Infrastructure. Use USB
2.0/3.0 external hard disk drives (HDD) with a single partitioned file system containing your data.
Note:

Oracle does not certify or test disks you intend to use for disk import jobs.
Calculate your disk capacity requirements and disk I/O to determine what
USB 2.0/3.0 disk works best for your data transfer needs.
• Copying the data to the HDDs following the procedures described in this import disk documentation.
• Shipping the disks to the specified Oracle data transfer site.
After the data is copied successfully to Oracle Cloud Infrastructure Object Storage, the hard drives are shipped back
to you in the same encrypted state that they were received.
Installing the Data Transfer Utility
Learn about the installation of the Data Transfer Utility for running disk-based data import jobs.
This topic describes how to install and configure the Data Transfer Utility for use in disk-based data transfers. In
addition, this topic describes the syntax for the Data Transfer Utility commands.
Important:

With this release, the Data Transfer Utility only supports disk-based data
transfers. Use of the Data Transfer Utility for appliance-based transfers has
been replaced with the Oracle Cloud Infrastructure command line interface
(CLI).
The Data Transfer Utility is licensed under the Universal Permissive License 1.0 and the Apache License 2.0. Third-
party content is separately licensed as described in the code.


Note:

The Data Transfer Utility must be run as the root user.


Prerequisites
Learn about prerequisites for installing the Data Transfer Utility.
To install and use the Data Transfer Utility, obtain the following:
• An Oracle Cloud Infrastructure account.
• The required Oracle Cloud Infrastructure users and groups with the required IAM policies.
See Creating the Required IAM Users, Groups, and Policies on page 980 for details.
• A Data Host machine with the following installed:
• Oracle Linux 6 or greater, Ubuntu 14.04 or greater, or SUSE 11 or greater. All Linux operating systems must
have the ability to create an EXT file system.
Note:

Windows-based machines are not supported in disk-based transfer jobs.


• Java 1.8 or Java 11
• hdparm 9.0 or later
• Cryptsetup 1.2.0 or greater
• Firewall access: If you have a restrictive firewall in the environment where you are using the Data Transfer
Utility, you may need to open your firewall configuration to the following IP address ranges: 140.91.0.0/16.
You also need to open access to the object storage IP address ranges: 134.70.0.0/17.
Installation
Learn about how to install the Data Transfer Utility on different Linux operating systems.
Download and install the Data Transfer Utility installer that corresponds to your Data Host's operating system.
To install the Data Transfer Utility on Debian or Ubuntu
Install the Data Transfer Utility on a Debian or Ubuntu Linux operating system.
1. Download the installation .deb file.
2. Issue the apt install command as the root user that has write permissions to the /opt directory.

sudo apt install ./dts-X.Y.Z.x86_64.deb

X.Y.Z represents the version numbers that match the installer you downloaded.
3. Confirm that the Data Transfer Utility installed successfully.

sudo dts --version

Your Data Transfer Utility version number is returned.


To install the Data Transfer Utility on Oracle Linux or Red Hat Linux
Install the Data Transfer Utility on an Oracle Linux or Red Hat Linux operating system.
1. Download the installation .rpm file.
2. Issue the yum install command as the root user that has write permissions to the /opt directory.

sudo yum localinstall ./dts-X.Y.Z.x86_64.rpm

X.Y.Z represents the version numbers that match the installer you downloaded.


3. Confirm that the Data Transfer Utility installed successfully.

sudo dts --version

Your Data Transfer Utility version number is returned.


Configuration
Configure the Data Transfer Utility after installing it.
Before using the Data Transfer Utility, you must create a base Oracle Cloud Infrastructure directory and two
configuration files with the required credentials. One configuration file is for the data transfer administrator, the IAM
user with the authorization and permissions to create and manage transfer jobs. The other configuration file is for the
data transfer upload user, the temporary IAM user that Oracle uses to upload your data on your behalf.

Base Data Transfer Directory


Create a base Oracle Cloud Infrastructure directory:

mkdir /root/.oci/

Configuration File for the Data Transfer Administrator


Create a data transfer administrator configuration file /root/.oci/config with the following structure:

[DEFAULT]
user=<The OCID for the data transfer administrator>
fingerprint=<The fingerprint of the above user's public key>
key_file=<The _absolute_ path to the above user's private key file on the
host machine>
tenancy=<The OCID for the tenancy that owns the data transfer job and
bucket>
region=<The region where the transfer job and bucket should exist. Valid
values are:
us-ashburn-1, us-phoenix-1, eu-frankfurt-1, and uk-london-1.>

For example:

[DEFAULT]
user=ocid1.user.oc1..unique_ID
fingerprint=4c:1a:6f:a1:5b:9e:58:45:f7:53:43:1f:51:0f:d8:45
key_file=/home/user/ocid1.user.oc1..unique_ID.pem
tenancy=ocid1.tenancy.oc1..unique_ID
region=us-phoenix-1

For the data transfer administrator, you can create a single configuration file that contains different profile sections
with the credentials for multiple users. Then use the --profile option to specify which profile to use in the
command. Here is an example of a data transfer administrator configuration file with different profile sections:

[DEFAULT]
user=ocid1.user.oc1..unique_ID
fingerprint=4c:1a:6f:a1:5b:9e:58:45:f7:53:43:1f:51:0f:d8:45
key_file=/home/user/ocid1.user.oc1..unique_ID.pem
tenancy=ocid1.tenancy.oc1..unique_ID
region=us-phoenix-1
[PROFILE1]
user=ocid1.user.oc1..unique_ID
fingerprint=4c:1a:6f:a1:5b:9e:58:45:f7:53:43:1f:51:0f:d8:45
key_file=/home/user/ocid1.user.oc1..unique_ID.pem
tenancy=ocid1.tenancy.oc1..unique_ID
region=us-ashburn-1


By default, the DEFAULT profile is used for all Data Transfer Utility commands. For example:

dts job create --compartment-id compartment_id --bucket bucket_name --display-name display_name --device-type disk

Instead, you can issue any Data Transfer Utility command with the --profile option to specify a different data
transfer administrator profile. For example:

dts job create --compartment-id compartment_id --bucket bucket_name --display-name display_name --device-type disk --profile profile_name

Using the example configuration file above, the <profile_name> would be profile1.

Configuration File for the Data Transfer Upload User


Create a data transfer upload user /root/.oci/config_upload_user configuration file with the following
structure:

[DEFAULT]
user=<The OCID for the data transfer upload user>
fingerprint=<The fingerprint of the above user's public key>
key_file=<The _absolute_ path to the above user's private key file on the
host machine>
tenancy=<The OCID for the tenancy that owns the data transfer job and
bucket>
region=<The region where the transfer job and bucket should exist. Valid
values are:
us-ashburn-1, us-phoenix-1, eu-frankfurt-1, and uk-london-1.>

For example:

[DEFAULT]
user=ocid1.user.oc1..unique_ID
fingerprint=4c:1a:6f:a1:5b:9e:58:45:f7:53:43:1f:51:0f:d8:45
key_file=/home/user/ocid1.user.oc1..unique_ID.pem
tenancy=ocid1.tenancy.oc1..unique_ID
region=us-phoenix-1

Important:

Creating an upload user configuration file with multiple profiles is not


supported.

Configuration File Entries


The following table lists the basic entries that are required for each configuration file and where to get the information
for each entry.
Note:

Data Transfer Service does not support passphrases on the key files for both
data transfer administrator and data transfer upload user.


Entry: user (Required)
Description: OCID of the data transfer administrator or the data transfer upload user, depending on which profile you are creating. To get the value, see Required Keys and OCIDs on page 4215.

Entry: fingerprint (Required)
Description: Fingerprint for the key pair being used. To get the value, see Required Keys and OCIDs on page 4215.

Entry: key_file (Required)
Description: Full path and filename of the private key. Important: The key pair must be in PEM format. For instructions on generating a key pair in PEM format, see Required Keys and OCIDs on page 4215.

Entry: tenancy (Required)
Description: OCID of your tenancy. To get the value, see Required Keys and OCIDs on page 4215.

Entry: region (Required)
Description: An Oracle Cloud Infrastructure region. See Regions and Availability Domains on page 182. Data transfer is supported in US East (Ashburn), US West (Phoenix), Germany Central (Frankfurt), and UK South (London).

You can verify the data transfer upload user credentials using the following command:

dts job verify-upload-user-credentials --bucket bucket_name

Configuration File Location


Both configuration files are located in the /root/.oci/ directory.
Using the Data Transfer Utility
This section provides an overview of the syntax for the Data Transfer Utility.
Important:

The Data Transfer Utility must be run as the root user.


You can specify Data Transfer Utility command options using the following commands:
• --option value, or
• --option=value

Syntax
The basic Data Transfer Utility syntax is:

dts resource action [options]

This syntax is applied to the following:


• dts is the shortened utility command name
• job is an example of a <resource>
• create is an example of an <action>
• Other utility strings are [options]
The following examples show typical Data Transfer Utility commands to create a transfer job.

dts job create --compartment-id ocid.compartment.oc1..exampleuniqueID --display-name "mycompany transfer1" --bucket mybucket --device-type disk

Or:

dts job create --compartment-id=ocid.compartment.oc1..exampleuniqueID --display-name="mycompany transfer1" --bucket=mybucket --device-type=disk

Note:

In the previous examples, provide a friendly name for the transfer job using
the --display-name option. Avoid entering confidential information.

Finding Out the Installed Version of the Data Transfer Utility


You can get the installed version of the Data Transfer Utility using --version or -v. For example:

dts --version

0.6.183

Accessing Data Transfer Utility Help


All Data Transfer Utility help commands have an associated help component you can access from the command line.
To view the help, enter any command followed by the --help or -h option. For example:

dts job --help

Usage: job [COMMAND]


Transfer disk or appliance job operations - {job action [options]}

Commands:
create Creates a new transfer disk or appliance
job.
show Shows the transfer disk or appliance job
details.
update Updates the transfer disk or appliance job
details.
delete Deletes the transfer disk or appliance job.


close Closes the transfer disk or appliance job.


list Lists all transfer disk or appliance jobs.
verify-upload-user-credentials Verifies the transfer disk or appliance
upload user credentials.

When you run the help option (--help or -h) for a specified command, all the subordinate commands and options
for that level of the Data Transfer Utility are displayed. If you want to access the Data Transfer Utility help for a
specific subordinate command, include it in the Data Transfer Utility string, for example:

dts job create --help

Usage: job create --bucket=<bucket> --compartment-id=<compartmentId>


[--defined-tags=<definedTags>] --device-type=<deviceType>
--display-name=<displayName>
[--freeform-tags=<freeformTags>] [--profile=<profile>]

Creates a new transfer disk or appliance job.

--bucket=<bucket> Upload bucket for the job.


--compartment-id=<compartmentId> Compartment OCID.
--defined-tags=<definedTags> Defined tags for the new transfer job in
JSON format.
--device-type=<deviceType> Device type for the job: DISK or APPLIANCE.
--display-name=<displayName> Display name for the job.
--freeform-tags=<freeformTags> Free-form tags for the new transfer job in
JSON format.
--profile=<profile> Profile.

Creating the Required IAM Users, Groups, and Policies


Each service in Oracle Cloud Infrastructure integrates with IAM for authentication and authorization.
To use Oracle Cloud Infrastructure, you must be given the required type of access in a policy written by an
administrator, whether you're using the Console or the REST API with an SDK, CLI, or other tool. If you try
to perform an action and get a message that you don’t have permission or are unauthorized, confirm with your
administrator the type of access you've been granted and which compartment you should work in.
Access to resources is provided to groups using policies and then inherited by the users that are assigned to those
groups. Data transfer requires the creation of two distinct groups:
• Data transfer administrators who can create and manage transfer jobs.
• Data transfer upload users who can upload data to Object Storage. For your data security, the permissions for
upload users allow Oracle personnel to upload standard and multi-part objects on your behalf and inspect bucket
and object metadata. The permissions do not allow Oracle personnel to inspect the actual data.
The Data Administrator is responsible for generating the required RSA keys needed for the temporary upload users.
These keys should never be shared between users.
For details on creating groups, see Managing Groups on page 2438.
An administrator creates these groups with the following policies:
• The data transfer administrator group requires an authorization policy that includes the following:

Allow group group_name to manage data-transfer-jobs in compartment compartment_name
Allow group group_name to manage objects in compartment compartment_name
Allow group group_name to manage buckets in compartment compartment_name

Alternatively, you can consolidate the manage buckets and manage objects policies into the following:

Allow group group_name to manage object-family in compartment compartment_name


• The data transfer upload user group requires an authorization policy that includes the following:

Allow group group_name to manage buckets in compartment compartment_name
  where all { request.permission='BUCKET_READ', target.bucket.name='<bucket_name>' }
Allow group group_name to manage objects in compartment compartment_name
  where all { target.bucket.name='<bucket_name>',
    any { request.permission='OBJECT_CREATE', request.permission='OBJECT_OVERWRITE', request.permission='OBJECT_INSPECT' } }

To enable notifications, add the following policies:

Allow group group_name to manage ons-topics in tenancy
Allow group group_name to manage ons-subscriptions in tenancy
Allow group group_name to manage cloudevents-rules in tenancy
Allow group group_name to inspect compartments in tenancy

See Notifications Overview on page 3378 and Overview of Events on page 1788 for more information.
The Oracle Cloud Infrastructure administrator then adds a user to each of the data transfer groups created. For details
on creating users, see Managing Users on page 2433.
Important:

For security reasons, we recommend that you create a unique IAM data
transfer upload user for each transfer job and then delete that user once your
data is uploaded to Oracle Cloud Infrastructure.
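If you prefer to create these IAM resources from the CLI rather than the Console, the general shape is sketched below. The group and user names are hypothetical, and you would still attach the policy statements shown earlier to the groups:

# Create the data transfer upload user group and a temporary upload user.
oci iam group create --name DataTransferUploadUsers --description "Upload users for data transfer jobs"
oci iam user create --name dts-upload-user-job1 --description "Temporary upload user for one transfer job"

# Add the user to the group (substitute the OCIDs returned by the previous commands).
oci iam group add-user --group-id ocid1.group.oc1..exampleuniqueID --user-id ocid1.user.oc1..exampleuniqueID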
Creating Object Storage Buckets
The Object Storage service is used to upload your data to Oracle Cloud Infrastructure. Object Storage stores objects in
a container called a bucket within a compartment in your tenancy. For details on creating the bucket to store uploaded
data, see Managing Buckets on page 3426.
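A bucket can also be created from the CLI. This is a sketch; the bucket name matches the examples used later in this chapter, and the compartment OCID is a placeholder:

oci os bucket create --name MyBucket1 --compartment-id ocid.compartment.oc1..exampleuniqueID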
Configuring Firewall Settings
Ensure that your local environment's firewall can communicate with the Data Transfer Service running on the IP
address ranges: 140.91.0.0/16. You also need to open access to the Object Storage IP address ranges: 134.70.0.0/17.
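A rough reachability check from the Data Host is sketched below. It is not a substitute for reviewing your firewall rules, and the endpoint shown is for the us-phoenix-1 region; substitute the Object Storage endpoint for your region:

curl -sI https://objectstorage.us-phoenix-1.oraclecloud.com | head -n 1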
Creating Transfer Jobs
This section describes how to create a transfer job as part of the preparation for the data transfer. See Transfer Jobs on
page 999 for complete details on all tasks related to transfer jobs.
Tip:

You can use the Console or the Data Transfer Utility to create a transfer job.
A transfer job represents the collection of files that you want to transfer and signals the intention to upload those files
to Oracle Cloud Infrastructure. A transfer job combines at least one transfer disk with a transfer package. Identify the compartment and the Object Storage bucket to which Oracle is to upload your data. Create the transfer job in the same compartment as the upload bucket and supply a human-readable name for the transfer job.
Note:

It is recommended that you create a compartment for each transfer job to minimize the required access to your tenancy.
Creating a transfer job returns a job ID that you specify in other transfer tasks. For example:

ocid1.datatransferjob.region1.phx..exampleuniqueID


To create a transfer job using the Console


1. Open the navigation menu. Under Core Infrastructure, go to Object Storage and click Data Transfer -
Imports.
2. Select the designated compartment you are to use for data transfers from the list.
A list of transfer jobs that have already been created is displayed.
3. Click Create Transfer Job.
The Create Transfer Job dialog appears.
4. Enter a Job Name. Avoid entering confidential information. Then, select the Upload Bucket from the list.
5. Select Disk for the Transfer Device Type.
6. Click Create Transfer Job.
To create a transfer job using the Data Transfer Utility

dts job create --bucket bucket --compartment-id compartment_id --display-name display_name

display_name is the name of the transfer job. Avoid entering confidential information.
For example:

oci dts job create --bucket MyBucket1 --compartment-id ocid.compartment.oc1..exampleuniqueID --display-name MyDiskImportJob

Transfer Job :
ID : ocid1.datatransferjob.oc1..exampleuniqueID
CompartmentId : ocid.compartment.oc1..exampleuniqueID
UploadBucket : MyBucket1
Name : MyDiskImportJob
Label : JZM9PAVWH
CreationDate : 2019/06/04 17:07:05 EDT
Status : PREPARING
freeformTags : *** none ***
definedTags : *** none ***
Packages :
[1] :
Label : PBNZOX9RU
TransferSiteShippingAddress : Oracle Data Transfer Service;
Job:JZM9PAVWH Package:PBNZOX9RU ; 21111 Ridgetop Circle; Dock B; Sterling,
VA 20166; USA
DeliveryVendor : FedEx
DeliveryTrackingNumber : *** none ***
ReturnDeliveryTrackingNumber : *** none ***
Status : PREPARING
Devices : [*** none ***]
UnattachedDevices : [*** none ***]
Appliances : [*** none ***]

When you display the details of a job, tagging details are also included in the output if you specified tags.

Optionally, you can specify one or more defined or free-form tags when you create a transfer job. For more
information about tagging, see Resource Tags on page 213.
Defined Tags
To specify defined tags when creating a job:

dts job create --bucket <bucket> --compartment-id <compartment_id> --display-name <display_name> --defined-tags '{ "<tag_namespace>": { "<tag_key>":"<value>" }}'


For example:

oci dts job create --bucket MyBucket1 --compartment-id ocid.compartment.oc1..exampleuniqueID --display-name MyDiskImportJob --defined-tags '{"Operations": {"CostCenter": "01"}}'

Transfer Job :
ID : ocid1.datatransferjob.oc1..exampleuniqueID
CompartmentId : ocid.compartment.oc1..exampleuniqueID
UploadBucket : MyBucket1
Name : MyDiskImportJob
Label : JZM9PAVWH
CreationDate : 2019/06/04 17:07:05 EDT
Status : PREPARING
freeformTags : *** none ***
definedTags :
Operations :
CostCenter : 01
Packages :
[1] :
Label : PBNZOX9RU
TransferSiteShippingAddress : Oracle Data Transfer Service;
Job:JZM9PAVWH Package:PBNZOX9RU ; 21111 Ridgetop Circle; Dock B; Sterling,
VA 20166; USA
DeliveryVendor : FedEx
DeliveryTrackingNumber : *** none ***
ReturnDeliveryTrackingNumber : *** none ***
Status : PREPARING
Devices : [*** none ***]
UnattachedDevices : [*** none ***]
Appliances : [*** none ***]

When you display the details of a job, tagging details are also included in the output if you specified tags.

Note:

Users create tag namespaces and tag keys with the required permissions.
These items must exist before you can specify them when creating a job. See
Working with Defined Tags on page 3950 for details.
Freeform Tags
To specify freeform tags when creating a job:

dts job create --bucket <bucket> --compartment-id compartment_id --display-name display_name --freeform-tags '{ "tag_key":"value" }'

For example:

oci dts job create --bucket MyBucket1 --compartment-id ocid.compartment.oc1..exampleuniqueID --display-name MyDiskImportJob --freeform-tags '{"Pittsburg_Team": "brochures"}'

Transfer Job :
ID : ocid1.datatransferjob.oc1..exampleuniqueID
CompartmentId : ocid.compartment.oc1..exampleuniqueID
UploadBucket : MyBucket1
Name : MyDiskImportJob
Label : JZM9PAVWH
CreationDate : 2019/06/04 17:07:05 EDT
Status : PREPARING
freeformTags :


Pittsburg_Team : brochures
definedTags : *** none ***
Packages :
[1] :
Label : PBNZOX9RU
TransferSiteShippingAddress : Oracle Data Transfer Service;
Job:JZM9PAVWH Package:PBNZOX9RU ; 21111 Ridgetop Circle; Dock B; Sterling,
VA 20166; USA
DeliveryVendor : FedEx
DeliveryTrackingNumber : *** none ***
ReturnDeliveryTrackingNumber : *** none ***
Status : PREPARING
Devices : [*** none ***]
UnattachedDevices : [*** none ***]
Appliances : [*** none ***]

When you display the details of a job, tagging details are also included in the output if you specified tags.

Multiple Tags
To specify multiple tags, include multiple comma-separated key/value pairs in the JSON:

dts job create --bucket bucket --compartment-id compartment_id --display-name display_name --device-type disk --freeform-tags '{ "tag_key1":"value1", "tag_key2":"value2" }'

Getting Transfer Job IDs


Each transfer job you create has a unique ID within Oracle Cloud Infrastructure. For example:

ocid1.datatransferjob.region1.phx..unique_ID

You will need to forward this transfer job ID to the Data Administrator.
To get the transfer job ID using the Console
1. Open the navigation menu. Under Core Infrastructure, go to Object Storage and click Data Transfer -
Imports.
2. Select the Compartment from the list.
The transfer jobs in that compartment are displayed.
3. Click the link under Transfer Jobs for the transfer job whose details you want to view.

Alternatively, you can click the Actions icon, and then click View Details.
The Details page for that transfer job appears.
4. Find the OCID field in the Details page and click Show to display it or Copy to copy it to your computer.
To get the transfer job ID using the CLI

dts job list --compartment-id compartment_id

For example:

dts job list --compartment-id ocid.compartment.oc1..exampleuniqueID

Transfer Job List :


[1] :
ID : ocid1.datatransferjob.oc1..exampleuniqueID
Name : MyDiskImportJob
Label : JVWK5YWPU


BucketName : MyBucket1
CreationDate : 2020/06/01 17:33:16 EDT
Status : INITIATED
FreeformTags : *** none ***
DefinedTags :
Financials :
key1 : nondefault

The ID for each transfer job is returned:

ID : ocid1.datatransferjob.oc1..exampleuniqueID

Tip:

When you create a transfer job using the dts job create CLI, the
transfer job ID is displayed in the CLI's return.
Creating Upload Configuration Files
The Project Sponsor is responsible for creating or obtaining configuration files that allow the uploading of user
data to the transfer appliance. Send these configuration files to the Data Administrator where they can be placed
in the Data Host. The config file is for the data transfer administrator, the IAM user with the authorization and
permissions to create and manage transfer jobs. The config_upload_user file is for the data transfer upload
user, the temporary IAM user that Oracle uses to upload your data on your behalf.
Create a base Oracle Cloud Infrastructure directory and two configuration files with the required credentials.

Creating the Data Transfer Directory


Create an Oracle Cloud Infrastructure directory (.oci) on the same Data Host where the CLI is installed. For
example:

mkdir /root/.oci/

The two configuration files (config and config_upload_user) are placed in this directory.

Creating the Data Transfer Administrator Configuration File


Create the data transfer administrator configuration file /root/.oci/config with the following structure:

[DEFAULT]
user=<The OCID for the data transfer administrator>
fingerprint=<The fingerprint of the above user's public key>
key_file=<The _absolute_ path to the above user's private key file on the
host machine>
tenancy=<The OCID for the tenancy that owns the data transfer job and
bucket>
region=<The region where the transfer job and bucket should exist. Valid
values are:
us-ashburn-1, us-phoenix-1, eu-frankfurt-1, and uk-london-1.>

For example:

[DEFAULT]
user=ocid1.user.oc1..exampleuniqueID
fingerprint=4c:1a:6f:a1:5b:9e:58:45:f7:53:43:1f:51:0f:d8:45
key_file=/home/user/ocid1.user.oc1..exampleuniqueID.pem
tenancy=ocid1.tenancy.oc1..exampleuniqueID
region=us-phoenix-1


For the data transfer administrator, you can create a single configuration file that contains different profile sections
with the credentials for multiple users. Then use the --profile option to specify which profile to use in the
command.
Here is an example of a data transfer administrator configuration file with different profile sections:

[DEFAULT]
user=ocid1.user.oc1..exampleuniqueID
fingerprint=4c:1a:6f:a1:5b:9e:58:45:f7:53:43:1f:51:0f:d8:45
key_file=/home/user/ocid1.user.oc1..exampleuniqueID.pem
tenancy=ocid1.tenancy.oc1..exampleuniqueID
region=us-phoenix-1
[PROFILE1]
user=ocid1.user.oc1..exampleuniqueID
fingerprint=4c:1a:6f:a1:5b:9e:58:45:f7:53:43:1f:51:0f:d8:45
key_file=/home/user/ocid1.user.oc1..exampleuniqueID.pem
tenancy=ocid1.tenancy.oc1..exampleuniqueID
region=us-ashburn-1

By default, the DEFAULT profile is used for all CLI commands. For example:

oci dts job create --compartment-id ocid.compartment.oc1..exampleuniqueID --bucket MyBucket --display-name MyDisplay --device-type disk

Instead, you can issue any CLI command with the --profile option to specify a different data transfer
administrator profile. For example:

oci dts job create --compartment-id ocid.compartment.oc1..exampleuniqueID --bucket MyBucket --display-name MyDisplay --device-type disk --profile MyProfile

Using the example configuration file above, the <profile_name> would be profile1.
If you created two separate configuration files, use the following command to specify the configuration file to use:

oci dts job create --compartment-id compartment_id --bucket bucket_name --display-name display_name --config-file config_file_location

Creating the Data Transfer Upload User Configuration File


The config_upload_user configuration file is for the data transfer upload user, the temporary IAM user that
Oracle uses to upload your data on your behalf. Create this configuration file with the following structure:

[DEFAULT]
user=<The OCID for the data transfer upload user>
fingerprint=<The fingerprint of the above user's public key>
key_file=<The _absolute_ path to the above user's private key file on the
host machine>
tenancy=<The OCID for the tenancy that owns the data transfer job and
bucket>
region=<The region where the transfer job and bucket should exist. Valid
values are:
us-ashburn-1, us-phoenix-1, eu-frankfurt-1, and uk-london-1.>

Adding Object Storage Endpoints


Include the line endpoint=url for the Object Storage API endpoint in the upload user configuration file.


For example:

endpoint=https://objectstorage.us-phoenix-1.oraclecloud.com

For a list of Object Storage API endpoints, see the Oracle Cloud Infrastructure API documentation.


A complete configuration including the Object Storage endpoint might look like this:

[DEFAULT]
user=ocid1.user.oc1..exampleuniqueID
fingerprint=4c:1a:6f:a1:5b:9e:58:45:f7:53:43:1f:51:0f:d8:45
key_file=/home/user/ocid1.user.oc1..exampleuniqueID.pem
tenancy=ocid1.tenancy.oc1..exampleuniqueID
region=us-phoenix-1
endpoint=https://objectstorage.us-phoenix-1.oraclecloud.com

Important:

Creating an upload user configuration file with multiple profiles is not supported.

Configuration File Entries


The following list describes the basic entries that are required for each configuration file and where to get the
information for each entry.
Note:

Data Transfer Service does not support passphrases on the key files for both
data transfer administrator and data transfer upload user.

Entry: user (Required)
Description: OCID of the data transfer administrator or the data transfer upload user, depending on which profile
you are creating. To get the value, see Required Keys and OCIDs on page 4215.

Entry: fingerprint (Required)
Description: Fingerprint for the key pair being used. To get the value, see Required Keys and OCIDs on page 4215.

Entry: key_file (Required)
Description: Full path and filename of the private key.
Important: The key pair must be in PEM format. For instructions on generating a key pair in PEM format, see
Required Keys and OCIDs on page 4215.

Entry: tenancy (Required)
Description: OCID of your tenancy. To get the value, see Required Keys and OCIDs on page 4215.

Entry: region (Required)
Description: An Oracle Cloud Infrastructure region. See Regions and Availability Domains on page 182. Data
transfer is supported in US East (Ashburn), US West (Phoenix), Germany Central (Frankfurt), and UK South
(London).

You can verify the data transfer upload user credentials using the following command:

dts job verify-upload-user-credentials --bucket bucket_name

Creating Transfer Packages


A transfer package is the virtual representation of the physical disk package that you are shipping to Oracle for upload
to Oracle Cloud Infrastructure. See Transfer Packages on page 1012 for complete details on all tasks related to
transfer packages.
Creating a transfer package requires the job ID returned from when you created the transfer job. For example:

ocid1.datatransferjob.region1.phx..exampleuniqueID

To create a transfer package using the Console


1. Open the navigation menu. Under Core Infrastructure, go to Object Storage and click Data Transfer -
Imports.
2. Find the transfer job for which you want to create a transfer package.
3. Click the Actions icon, and then click View Details.
Alternatively, click the hyperlinked name of the transfer job.
A list of transfer packages that have already been created is displayed.
4. Click Create Transfer Package.
The Create Transfer Package dialog appears.
5. Select a Vendor from the list.
6. Click Create Transfer Package.
The Data Transfer Package dialog appears displaying information such as the shipping address, the shipping vendor,
and the shipping status.
To create a transfer package using the Data Transfer Utility
At the command prompt on the Data Host, run dts package create to create a transfer package.

dts package create --job-id job_id
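
For example, a sketch that reuses the example job ID shown above (substitute your own job ID):

dts package create --job-id ocid1.datatransferjob.region1.phx..exampleuniqueID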


The following information is returned:

Transfer Package :
Label :
TransferSiteShippingAddress :
DeliveryVendor :
DeliveryTrackingNumber :
ReturnDeliveryTrackingNumber :
Status :
Devices :

Getting Transfer Package Labels


To get the transfer package label using the Console
1. Open the navigation menu. Under Core Infrastructure, go to Object Storage and click Data Transfer -
Imports.
2. Find the transfer job for which you want to see the details.
3. Click the Actions icon, and then click View Details.
Alternatively, click the hyperlinked name of the transfer job.
4. Click Transfer Packages under Resources.
A list of transfer packages associated with the transfer job is displayed.
To get the transfer package label using the Data Transfer Utility

dts job show --job-id job_id

For example:

dts job show --job-id ocid1.datatransferjob.oc1..exampleuniqueID

Transfer Job :
ID : ocid1.datatransferjob.oc1..exampleuniqueID
CompartmentId : ocid.compartment.oc1..exampleuniqueID
UploadBucket : MyBucket1
Name : MyDiskImportJob
Label : JZM9PAVWH
CreationDate : 2019/06/04 17:07:05 EDT
Status : PREPARING
freeformTags : *** none ***
definedTags : *** none ***
Packages :
[1] :
Label : PBNZOX9RU
TransferSiteShippingAddress : Oracle Data Transfer Service;
Job:JZM9PAVWH Package:PBNZOX9RU ; 21111 Ridgetop Circle; Dock B; Sterling,
VA 20166; USA
DeliveryVendor : FedEx
DeliveryTrackingNumber : *** none ***
ReturnDeliveryTrackingNumber : *** none ***
Status : PREPARING
Devices : [*** none ***]
UnattachedDevices : [*** none ***]
Appliances : [*** none ***]

The transfer package label is displayed as part of the job details.


Getting Shipping Labels
You can find the shipping address in the transfer package details. Use this information to get a shipping label for the
transfer package that is used to send the disk to Oracle.


After getting the shipping labels from the Console or Data Transfer Utility, go to the supported carrier you are using
(UPS, FedEx, or DHL) and manually create both the SHIP TO ORACLE and RETURN TO CUSTOMER labels. See
Shipping Import Disks on page 995 and Monitoring the Import Disk Shipment and Data Transfer on page 997
for information.
To get the shipping address for a transfer package using the Console
1. Open the navigation menu. Under Core Infrastructure, go to Object Storage and click Data Transfer -
Imports.
2. Find the transfer job for which you want to see the details.
3. Click the Actions icon, and then click View Details.
Alternatively, click the hyperlinked name of the transfer job.
A list of transfer packages that have already been created is displayed.
4. Find the transfer package for which you want to see the details.
5. Click the Actions icon, and then click View Details.
Alternatively, click the hyperlinked name of the transfer package.
To get the shipping address for a transfer package using the Data Transfer Utility

dts package show --job-id job_id --package-label package_label

For example:

dts package show --job-id ocid1.datatransferjob.oci1..exampleuniqueID --package-label PWA8O67MI

Transfer Package :
Label : PWA8O67MI
TransferSiteShippingAddress : Oracle Data Transfer Service;
Job:JZM9PAVWH Package:PWA8O67MI ; 21111 Ridgetop Circle; Dock B; Sterling,
VA 20166; USA
DeliveryVendor : *** none ***
DeliveryTrackingNumber : *** none ***
ReturnDeliveryTrackingNumber : *** none ***
Status : PREPARING
Devices : [*** none ***]

Notifying the Data Administrator


When you have completed all the tasks in this topic, provide the Data Administrator with the following:
• IAM login credentials
• Data Transfer Utility configuration files
• Transfer job ID
• Package label
What's Next
You are now ready to configure your system for the data transfer. See Configuring Import Disk Data Transfers on
page 990.

Configuring Import Disk Data Transfers


This topic describes the tasks associated with configuring the Disk-Based Data Import. The Infrastructure Engineer
role typically performs these tasks. See Roles and Responsibilities on page 971.
Configuration for the Disk-Based Data Import consists of the following tasks:
• Attaching the import disk to the Data Host. Remove all partitions and any file systems. To prevent the accidental
deletion of data, the Data Transfer Utility does not work with disks that already have partitions or file systems.
Disks are visible to the host as block devices and must provide a valid response to the hdparm -I <device>
Linux command (see the example checks after this list).
• Sending the block device path to the Data Administrator.
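
For example, a minimal sketch of verifying an attached disk before handing off the device path, assuming the disk appears as /dev/sdb (substitute the actual path on your Data Host):

# Confirm that the disk shows no partitions or file systems
lsblk -f /dev/sdb

# Confirm that the disk responds to an identity query
hdparm -I /dev/sdb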
What's Next
You are now ready to load your data to the transfer disk. See Copying the Data to the Import Disk on page 991.

Copying the Data to the Import Disk

This topic describes the tasks associated with running the data transfer from the Data Host to the import disk. The
Data Administrator role typically performs these tasks. See Roles and Responsibilities on page 971.
Information Prerequisites
Before performing any disk copying tasks, you must obtain the following information:
• Disk block device path. The Infrastructure Engineer typically provides this information.
• IAM login information, Data Transfer Utility configuration files, transfer job ID, and package label. The Project
Sponsor typically provides this information.
Creating the Transfer Disk
The transfer disk is the logical representation of the physical import disk that has been configured to receive
data as part of the disk-based data transfer. See Transfer Disks on page 1008 for complete details on all
tasks related to transfer disks.
Note:

You can only use the Data Transfer Utility to create a transfer disk.
When you create a transfer disk for the physical disk to which you are copying your files, the Data Transfer Utility:
• Sets up the disk for encryption using a generated passphrase.
• Creates a file system on the disk.
• Mounts the file system at /mnt/orcdts_<label>.
For example:

/mnt/orcdts_DJZNWK3ET

When you register a transfer disk, Oracle Cloud Infrastructure generates a strong encryption passphrase that is used
to encrypt the contents on the disk. The encryption passphrase is displayed to standard output to the data transfer
administrator user and cannot be retrieved again. Create a local, secure copy of the encryption passphrase, so you can
reference the passphrase again.
Creating a transfer disk requires the job ID returned from when you created the transfer job and the path to the
attached disk (for example, /dev/sdb).


To create a transfer disk using the Data Transfer Utility


At the command prompt on the host, run dts disk create to create a transfer disk.

dts disk create --job-id job_id --block-device block_device
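
For example, a sketch that reuses the example job ID from this topic and assumes the disk is attached as /dev/sdb (substitute your own values):

dts disk create --job-id ocid1.datatransferjob.region1.phx..exampleuniqueID --block-device /dev/sdb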

Copying Files to the Disk


You can only copy regular files to the disk. You cannot copy special files, such as symbolic links, device special files,
sockets, and pipes, directly to the disk. See the following section for instructions on how to prepare special files.
Important:

• Individual files being copied to the disk cannot exceed 9.76 TB.
• Do not fill up the disk to 100% capacity. There must be space available
to generate metadata and for the manifest file to perform the upload to
Object Storage. At least 1 GB of free disk space is needed for this area.
Attach the disk to the Data Host and copy files to the mount point that the Data Transfer Utility created for the disk.
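
For example, a minimal sketch that assumes the mount point from the example above and a hypothetical source directory:

# Copy regular files onto the mounted transfer disk (the source path is illustrative)
cp -r /data/to-import/. /mnt/orcdts_DJZNWK3ET/

# Confirm that at least 1 GB remains free for the manifest and metadata
df -h /mnt/orcdts_DJZNWK3ET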
Note:

Only Linux machines can be used as Data Hosts.


Note:

Copy all Files Before Disconnecting the Disk


Do not disconnect the disk until you copy all files from the Data Host and
generate the manifest file. If you accidentally disconnect the disk before
copying all files, you must unlock the disk using the encryption passphrase.
The encryption passphrase was generated and displayed when you created
the transfer disk. If the generated encryption passphrase is not available, you
must delete the transfer disk from the transfer job and re-create the transfer
disk. All data previously copied to that disk is lost.
Copying Special Files
To transfer special files, create a tar archive of these files and copy the tar archive to the transfer disk. We also
recommend using a tar archive when copying many small files. Copying a single compressed archive file typically takes
less time than running copy commands such as cp -r or rsync on many individual files.
Here are some examples of creating a tar archive and getting it onto the transfer disk:
• Running a simple tar command:

tar -cvzf /mnt/nfs-dts-1/filesystem.tgz filesystem/


• Running a command to create a file with md5sum hashes for each file in addition to the tar archive:

tar cvzf /mnt/nfs-dts-1/filesystem.tgz filesystem/ | xargs -I '{}' sh -c "test -f '{}' && md5sum '{}'" | tee tarzip_md5

The tar archive file filesystem.tgz has a base64 md5sum once it is uploaded to OCI Object Storage. Store
the tarzip_md5 file where you can retrieve it. After the compressed tar archive file is downloaded from Object
Storage and unpacked, you can compare the individual files against the hashes in the file.
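
For example, a sketch of both checks, assuming the archive and the tarzip_md5 file produced by the command above have been downloaded into the current directory:

# Compute the base64-encoded md5 of the archive for comparison with the value Object Storage reports
openssl dgst -md5 -binary filesystem.tgz | base64

# Unpack the archive and verify each file against the stored hashes
tar -xvzf filesystem.tgz
md5sum -c tarzip_md5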
Generating the Manifest File
Note:

You can only use the Data Transfer Utility to generate a manifest file. The
amount of time to generate the manifest file depends on the size of the upload
files, disk speed, and available processing power.


After copying your data to a transfer disk, generate a manifest file using the Data Transfer Utility. The manifest
contains an index of all of the copied files and generated data integrity hashes. The Data Transfer Utility copies the
config_upload_user configuration file and referenced IAM credentials to the encrypted transfer disk. This
configuration file describes the temporary IAM data transfer upload user. Oracle uses the credentials and entries
defined in the config_upload_user file when processing the transfer disk and uploading files to Oracle Cloud
Infrastructure Object Storage.
Note:

Data Transfer Service Does Not Support Passphrases on Private Keys


While we recommend encrypting a private key with a passphrase when
generating API signing keys, Data Transfer does not support passphrases
on the key file required for the config_upload_user. If you use a
passphrase, Oracle personnel cannot upload your data.
Oracle cannot upload data from a transfer disk without the correct credentials defined in this configuration file. See
Installing the Data Transfer Utility on page 974 for more information about the required configuration files.
To create a manifest file using the Data Transfer Utility
At the command prompt on the Data Host, run dts disk manifest to create a manifest file.

dts disk manifest --job-id job_id --disk-label disk_label [--object-name-prefix object_name_prefix]
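
For example, a sketch that reuses the example job ID and disk label used elsewhere in this guide (substitute your own values):

dts disk manifest --job-id ocid1.datatransferjob.oci1..exampleuniqueID --disk-label DNKZQ1XKC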

Note:

Do You Need to Regenerate the Manifest File?


If you add, remove, or modify any files on the disk after generating the
manifest file, you must regenerate the file. If the manifest file does not match
the contents of the target bucket, Oracle cannot upload the data.
Locking the Transfer Disk
Note:

You can only use the Data Transfer Utility to lock a transfer disk.
Locking a transfer disk safely unmounts the disk and removes the encryption passphrase from the Data Host.
To lock a transfer disk using the Data Transfer Utility
At the command prompt on the Data Host, run dts disk lock to lock a transfer disk.

dts disk lock --job-id job_id --disk-label disk_label --block-device block_device

Unlocking the Transfer Disk


Note:

You can only use the Data Transfer Utility to unlock a transfer disk.
When unlocking the transfer disk, you are prompted for the encryption passphrase that was generated when you
created the transfer disk.
To unlock a transfer disk using the Data Transfer Utility
At the command prompt on the Data Host, run dts disk unlock to unlock a transfer disk.

dts disk unlock --job-id job_id --disk-label disk_label --block-device block_device --encryption-passphrase encryption_passphrase


Attaching the Transfer Disk to the Transfer Package


Attach a transfer disk to a transfer package after you have performed the following tasks:
1. Copied your data onto the disk.
2. Generated the required manifest file.
3. Run and reviewed the dry-run report.
4. Locked the transfer disk in preparation for shipment.
A disk can be attached to one package, detached, and then attached to another package. If you change your mind
about shipping a disk with a particular transfer package, you can detach the disk from that package and attach it to a
different transfer package.
To attach a transfer disk to a transfer package using the Console
1. Open the navigation menu. Under Core Infrastructure, go to Object Storage and click Data Transfer -
Imports.
2. Find the transfer job associated with the transfer package for which you want to attach a disk.
3. Click the Actions icon, and then click View Details.
A list of transfer packages is displayed.
4. Find the transfer package for which you want to attach a disk.
5. Click the Actions icon, and then click View Details.
Alternatively, click the hyperlinked name of the transfer package.
A list of transfer disks is displayed.
6. Click Attach Transfer Disks.
The Attach Transfer Disks dialog appears.
7. Select the Transfer Disks that you want to attach to the transfer package.
8. Click Attach.
To attach a transfer disk to a transfer package using the Data Transfer Utility
At the command prompt on the Data Host, run dts disk attach to attach a disk to a transfer package.

dts disk attach --job-id job_id --package-label package_label --disk-label disk_label

Detaching the Transfer Disk from the Transfer Package


To detach a transfer disk from a transfer package using the Console
1. Open the navigation menu. Under Core Infrastructure, go to Object Storage and click Data Transfer -
Imports.
2. Find the transfer package from which you want to detach a transfer disk.
3. Click the Actions icon, and then click View Details.
Alternatively, click the hyperlinked name of the transfer package.
A list of transfer disks that have already been attached is displayed.
4. Find the transfer disk that you want to detach.
5. Click the Actions icon, and then click View Details.
Alternatively, click the hyperlinked name of the transfer disk.
6. Click Detach Transfer Disk.


To detach a transfer disk from a transfer package using the Data Transfer Utility
At the command prompt on the Data Host, run dts disk detach to detach a disk from a transfer package.

dts disk detach --job-id job_id --package-label package_label --disk-label disk_label

Setting Tracking Details on the Transfer Package


After delivering the transfer package to the shipping vendor, update the transfer package with the tracking
information.
Important:

Oracle cannot process a transfer package until you update the tracking
information.
To update the transfer package with tracking information using the Console
1. Open the navigation menu. Under Core Infrastructure, go to Object Storage and click Data Transfer -
Imports.
2. Find the transfer job for which you want to see the associated transfer packages.
3. Click the Actions icon, and then click View Details.
A list of transfer packages that have already been created is displayed.
4. Find the transfer package that you want to update.
5. Click the Actions icon, and then click View Details.
6. Click Edit.
7. Enter the Tracking ID and the Return Tracking ID.
8. Click Edit Transfer Package.
To update the transfer package with tracking information using the Data Transfer Utility
At the command prompt on the host, run dts package ship to update the transfer package tracking information.

dts package ship --job-id job_id --package-label package_label --package-vendor vendor_name --tracking-number tracking_number --return-tracking-number return_tracking_number
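
For example, a sketch that reuses the example job ID and package label from this guide, with illustrative tracking numbers (substitute the actual values from your shipping vendor):

dts package ship --job-id ocid1.datatransferjob.oci1..exampleuniqueID --package-label PWA8O67MI --package-vendor FedEx --tracking-number 123456789012 --return-tracking-number 210987654321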

Notifying the Infrastructure Engineer


After completing the tasks listed in this topic, notify the Infrastructure Engineer to do the following:
• Disconnect the physical disk from the Data Host.
• Package the disk for shipment.
What's Next
You are now ready to ship your disk with the copied data to Oracle. See Shipping Import Disks on page 995.

Shipping Import Disks

This topic describes the tasks associated with shipping the import disk containing the copied data to Oracle. The
Infrastructure Engineer role typically performs these tasks. See Roles and Responsibilities on page 971.


Disconnecting the Transfer Disk from the Data Host


Do not disconnect the import disk until you copy all files from the Data Host and generate the manifest file. See
Copying the Data to the Import Disk on page 991 for more information.
Printing Shipping Labels
You should receive the shipping labels electronically from the Project Sponsor. Print them on the appropriate labels
for shipping the import disk. See Getting Shipping Labels on page 989 for more information.
Packaging and Shipping the Import Disk

General
Include the required return shipping label in the box when packaging the import disk for shipment.
Note:

If you do not include the return shipping label inside the box, Oracle cannot
process the transfer package.
Ensure that the transfer job and transfer package label are clearly readable on the outside of the box containing the
import disk.
Important:

If you are shipping transfer disks to London or Frankfurt, request that the
shipping vendor require a signature upon delivery.

Listing Disk Delivery Vendors


You can view the vendors available for delivery of your import disk to Oracle Cloud Infrastructure.
Note:

• Available vendors for transfer disk delivery to Oracle Cloud Infrastructure


can vary over time. Have the latest version of the Data Transfer Utility
installed to view the current list.
• You can only use the Data Transfer Utility to list the available delivery vendors.
To list the vendors available for delivering the transfer disk to Oracle Cloud Infrastructure
At the command prompt on the Data Host, run dts package list-delivery-vendors to list the available
delivery vendors.

dts package list-delivery-vendors

Delivery Vendors :
[1] : FedEx
[2] : DHL
[3] : UPS

Shipping Import Disks Internationally


Create a commercial invoice when shipping transfer disks internationally. To ensure that packages are not held up in
customs, follow these guidelines when creating the commercial invoice:
• Show a unique reference number.


• Show the "bill-to party" as follows:


• For shipments to the European Union (Frankfurt) location:
ORACLE Deutschland B.V. & Co. KG Riesstrasse 25 Munich, 80992 GERMANY
• For shipments to the United States location:
Oracle America, Inc. 500 Oracle Parkway REDWOOD CITY CA 94065 UNITED
STATES.
• Show the "ship-to party" as the address provided in the transfer package details. See Getting Shipping Labels
on page 989 for details.
• State that "The value shown includes the value of software and data recorded onto the hard drive unit."
• State that the "Goods are free of charge - no payment required."
• State that the type of export is "Temporary."
• Ensure that the commodity code shows the correct HS code for a hard drive unit as specified in the source
country's HS code list.
• State the description as the manufacturer's description of the hard drive unit and include the words "Hard Disk
Drive."
• Ensure that the invoice is signed and includes the printed name of the signer.
What's Next
Now you can track your transfer disk shipment and review post transfer logs and summaries. See Monitoring the
Import Disk Shipment and Data Transfer on page 997.

Monitoring the Import Disk Shipment and Data Transfer

This topic describes the monitoring tasks to do after sending the import disk with the copied data to Oracle for data
transfer to Oracle Cloud Infrastructure. The Project Sponsor role typically performs these tasks. See Roles and
Responsibilities on page 971.
Tracking the Import Disk Shipment
When Oracle has processed the transfer (import) disk associated with a transfer package, the status of the transfer
package changes to Processed. When Oracle has shipped the disk, the status of the transfer package changes to
Returned.
To check the status of a transfer package using the Console
1. Open the navigation menu. Under Core Infrastructure, go to Object Storage and click Data Transfer -
Imports.
2. Choose the data transfer package for which you want to display the details.
3. Click the Actions icon, and then click View Details.
4. Look at the Status.
To check the status of a transfer package using the Data Transfer Utility
At the command prompt on the Data Host, run dts package show to show the status of a transfer package.

dts package show --job-id job_id --package-label package_label

For example:

dts package show --job-id ocid1.datatransferjob.oci1..exampleuniqueID --package-label PWA8O67MI

Transfer Package :


Label : PWA8O67MI
TransferSiteShippingAddress : Oracle Data Transfer Service; Job:JZM9PAVWH
Package:PWA8O67MI ; 21111 Ridgetop Circle; Dock B; Sterling, VA 20166; USA
DeliveryVendor : *** none ***
DeliveryTrackingNumber : *** none ***
ReturnDeliveryTrackingNumber : *** none ***
Status : PREPARING
Devices : [*** none ***]

Reviewing the Upload Summary


Oracle creates upload summary log files for each uploaded import disk. These logs are placed in the bucket where the
data was uploaded to Oracle Cloud Infrastructure. The upload summary file compares the import disk's manifest file
to the contents of the target Oracle Cloud Infrastructure Object Storage bucket after file upload.
The top of the log report summarizes the overall file processing status:

P - Present: The file is present in both the disk and the target bucket
M - Missing: The file is present in the disk but not the target bucket. It
was likely uploaded and then deleted by another user before the summary was
generated.
C - Name Collision: The file is present in the manifest but a file with the
same name but different contents is present in the target bucket.
U - Unreadable: The file is not readable from the disk
N - Name Too Long: The file name on disk is too long and could not be
uploaded

Complete file upload details follow the summary.

Viewing Data Transfer Metrics


After the import disk with your copied data is received by Oracle and the data transfer begins, you can view the
metrics associated with the transfer job in the Transfer Appliance Details page in chart or table format.
Tip:

Set up your notifications to alert you when the data transfer from the import
disk to Oracle Cloud Infrastructure is occurring. When the state changes from
ORACLE_RECEIVED to PROCESSING, you can start viewing data transfer
metrics.
Select Metrics under Resources to display each of these measures:
• Import Files Uploaded: Total number of files uploaded for import.
• Import Bytes Uploaded: Total number of bytes uploaded for import.
• Import Files Remaining: Total number of files remaining for import upload.
• Import Bytes Remaining: Total number of bytes remaining for import upload.
• Import Files in Error: Total number of files in error for import.
• Import Upload Verification Progress: Progress of verification of files that have already been uploaded for
import.
Select the Start Time and End Time for these measures, either by manually entering the days and times in their
respective fields, or by selecting the Calendar feature and picking the times that way. As an alternative to selecting


a start and end time, you can also select from a list of standard times (last hour, last 6 hours, and so forth) from the
Quick Selects list for the period measured. The time period you specify applies to all the measures.
Specify the Interval (for example, 5 minutes, 1 hour) that each measure is recorded from the list.
Specify the Statistic being recorded (for example, Sum, Mean) for each measure from the list.
Tip:

Mean is the most useful statistic for data transfer as it reflects an absolute
value of the metric.
Choose additional actions from the Options list, including viewing the query in the Metrics Explorer, capturing the
URL for the measure, and switching between chart and table view.
Click Reset Charts to delete any existing information in the charts and begin recording new metrics.
See Monitoring Overview on page 2686 for general information on monitoring your Oracle Cloud Infrastructure
services.
Closing the Transfer Job
Typically, you would close a transfer job when no further transfer job activity is required or possible. Closing a
transfer job requires that the status of all associated transfer packages be returned, canceled, or deleted. In addition,
the status of the associated transfer disk must be complete, in error, missing, canceled, or deleted.
To close a transfer job using the Console
1. Open the navigation menu. Under Core Infrastructure, go to Object Storage and click Data Transfer -
Imports.
2. Find the data transfer job that you want to close.
3. Click the Actions icon, and then click View Details.
Alternatively, click the hyperlinked name of the transfer job.
4. Click Close Transfer Job.
To close a transfer job using the Data Transfer Utility
At the command prompt on the host, run dts job close to close a transfer job.

dts job close --job-id job_id

What's Next
You have completed the process of setting up, running, and monitoring the import disk-based data transfer. After the
disk contents are successfully migrated to Oracle Cloud Infrastructure, your physical disk is erased and returned to you.
If you determine that another disk-based data transfer is required, repeat the procedure from the beginning.

Disk Import Reference


This topic provides complete task details for certain components associated with Disk-Based Data Imports. Use this
topic as a reference to learn and use commands associated with components included in the Disk-Based Data Import
procedure.
Transfer Jobs
A transfer job represents the collection of files that you want to transfer and signals the intention to upload those files
to Oracle Cloud Infrastructure. A transfer job combines at least one transfer disk with a transfer package. Identify
the compartment and the Object Storage bucket to which Oracle will upload your data.
Tip:

Create a compartment for each transfer job to minimize the required access to
your tenancy.


Creating Transfer Jobs


Create the transfer job in the same compartment as the upload bucket and supply a human-readable name for the
transfer job.
Creating a transfer job returns a job ID that you specify in other transfer tasks. For example:

ocid1.datatransferjob.region1.phx..unique_ID

To create a transfer job using the Console


1. Open the navigation menu. Under Core Infrastructure, go to Object Storage and click Data Transfer -
Imports.
2. Select the compartment designated for data transfers from the list.
A list of transfer jobs that have already been created is displayed.
3. Click Create Transfer Job.
The Create Transfer Job dialog appears.
4. Enter a Job Name. Avoid entering confidential information. Then, select the Upload Bucket from the list.
5. Select Disk for the Transfer Device Type.
6. Click Create Transfer Job.
To create a transfer job using the Data Transfer Utility

dts job create --bucket bucket --compartment-id compartment_id --display-name display_name

display_name is the name of the transfer job. Avoid entering confidential information.
For example:

oci dts job create --bucket MyBucket1 --compartment-id ocid.compartment.oc1..exampleuniqueID --display-name MyDiskImportJob

Transfer Job :
ID : ocid1.datatransferjob.oc1..exampleuniqueID
CompartmentId : ocid.compartment.oc1..exampleuniqueID
UploadBucket : MyBucket1
Name : MyDiskImportJob
Label : JZM9PAVWH
CreationDate : 2019/06/04 17:07:05 EDT
Status : PREPARING
freeformTags : *** none ***
definedTags : *** none ***
Packages :
[1] :
Label : PBNZOX9RU
TransferSiteShippingAddress : Oracle Data Transfer Service;
Job:JZM9PAVWH Package:PBNZOX9RU ; 21111 Ridgetop Circle; Dock B; Sterling,
VA 20166; USA
DeliveryVendor : FedEx
DeliveryTrackingNumber : *** none ***
ReturnDeliveryTrackingNumber : *** none ***
Status : PREPARING
Devices : [*** none ***]
UnattachedDevices : [*** none ***]
Appliances : [*** none ***]

When you use the Data Transfer Utility to display the details of a job, tagging details are also included in the output if
you specified tags.

Optionally, you can specify one or more defined or free-form tags when you create a transfer job. For more
information about tagging, see Resource Tags on page 213.


Defined Tags
To specify defined tags when creating a job:

dts job create --bucket <bucket> --compartment-id <compartment_id> --display-name <display_name> --defined-tags '{ "<tag_namespace>": { "<tag_key>":"<value>" }}'

For example:

oci dts job create --bucket MyBucket1 --compartment-id ocid.compartment.oc1..exampleuniqueID --display-name MyDiskImportJob --defined-tags '{"Operations": {"CostCenter": "01"}}'

Transfer Job :
ID : ocid1.datatransferjob.oc1..exampleuniqueID
CompartmentId : ocid.compartment.oc1..exampleuniqueID
UploadBucket : MyBucket1
Name : MyDiskImportJob
Label : JZM9PAVWH
CreationDate : 2019/06/04 17:07:05 EDT
Status : PREPARING
freeformTags : *** none ***
definedTags :
Operations :
CostCenter : 01
Packages :
[1] :
Label : PBNZOX9RU
TransferSiteShippingAddress : Oracle Data Transfer Service;
Job:JZM9PAVWH Package:PBNZOX9RU ; 21111 Ridgetop Circle; Dock B; Sterling,
VA 20166; USA
DeliveryVendor : FedEx
DeliveryTrackingNumber : *** none ***
ReturnDeliveryTrackingNumber : *** none ***
Status : PREPARING
Devices : [*** none ***]
UnattachedDevices : [*** none ***]
Appliances : [*** none ***]

When you use the Data Transfer Utility to display the details of a job, tagging details are also included in the output if
you specified tags.

Note:

Users create tag namespaces and tag keys with the required permissions.
These items must exist before you can specify them when creating a job. See
Working with Defined Tags on page 3950 for details.
Freeform Tags
To specify freeform tags when creating a job:

dts job create --bucket <bucket> --compartment-id compartment_id --display-name display_name --freeform-tags '{ "tag_key":"value" }'

For example:

oci dts job create --bucket MyBucket1 --compartment-id ocid.compartment.oc1..exampleuniqueID --display-name MyDiskImportJob --freeform-tags '{"Pittsburg_Team": "brochures"}'

Transfer Job :


ID : ocid1.datatransferjob.oc1..exampleuniqueID
CompartmentId : ocid.compartment.oc1..exampleuniqueID
UploadBucket : MyBucket1
Name : MyDiskImportJob
Label : JZM9PAVWH
CreationDate : 2019/06/04 17:07:05 EDT
Status : PREPARING
freeformTags :
Pittsburg_Team : brochures
definedTags : *** none ***
Packages :
[1] :
Label : PBNZOX9RU
TransferSiteShippingAddress : Oracle Data Transfer Service;
Job:JZM9PAVWH Package:PBNZOX9RU ; 21111 Ridgetop Circle; Dock B; Sterling,
VA 20166; USA
DeliveryVendor : FedEx
DeliveryTrackingNumber : *** none ***
ReturnDeliveryTrackingNumber : *** none ***
Status : PREPARING
Devices : [*** none ***]
UnattachedDevices : [*** none ***]
Appliances : [*** none ***]

When you use the Data Transfer Utility to display the details of a job, tagging details are also included in the output if
you specified tags.

Multiple Tags
To specify multiple tags, include multiple comma-separated key/value pairs in the JSON object:

dts job create --bucket bucket --compartment-id compartment_id --display-name display_name --device-type disk --freeform-tags '{ "tag_key1":"value1", "tag_key2":"value2" }'

Listing Transfer Jobs


To display the list of transfer jobs using the Console
Open the navigation menu. Under Core Infrastructure, go to Object Storage and click Data Transfer - Imports.
To display the list of transfer jobs using the Data Transfer Utility

dts job list --compartment-id compartment_id

For example:

dts job list --compartment-id ocid.compartment.oc1..exampleuniqueID

Transfer Job List :
[1] :
ID : ocid1.datatransferjob.oc1..exampleuniqueID
Name : MyDiskImportJob
Label : JVWK5YWPU
BucketName : MyBucket1
CreationDate : 2020/06/01 17:33:16 EDT
Status : INITIATED
FreeformTags : *** none ***
DefinedTags :
Financials :
key1 : nondefault


When you use the Data Transfer Utility to list jobs, tagging details are also included in the output if you specified
tags.
Displaying Transfer Job Details
To display the details of a transfer job using the Console
1. Open the navigation menu. Under Core Infrastructure, go to Object Storage and click Data Transfer -
Imports.
2. Find the transfer job for which you want to display the details.
3. Click the Actions icon, and then click View Details.
To display the details of a transfer job using the Data Transfer Utility

dts job show --job-id job_id

For example:

dts job show --job-id ocid1.datatransferjob.oc1..exampleuniqueID

Transfer Job :
ID : ocid1.datatransferjob.oc1..exampleuniqueID
CompartmentId : ocid.compartment.oc1..exampleuniqueID
UploadBucket : MyBucket1
Name : MyDiskImportJob
Label : JZM9PAVWH
CreationDate : 2019/06/04 17:07:05 EDT
Status : PREPARING
freeformTags : *** none ***
definedTags : *** none ***
Packages :
[1] :
Label : PBNZOX9RU
TransferSiteShippingAddress : Oracle Data Transfer Service;
Job:JZM9PAVWH Package:PBNZOX9RU ; 21111 Ridgetop Circle; Dock B; Sterling,
VA 20166; USA
DeliveryVendor : FedEx
DeliveryTrackingNumber : *** none ***
ReturnDeliveryTrackingNumber : *** none ***
Status : PREPARING
Devices : [*** none ***]
UnattachedDevices : [*** none ***]
Appliances : [*** none ***]

When you use the Data Transfer Utility to display the details of a job, tagging details are also included in the output if
you specified tags.
Editing Transfer Jobs
To edit the name of a transfer job using the Console
Note:

You can only change the name of a transfer job using the Console. If you
want to change other attributes of the transfer job, use the Data Transfer
Utility instead.
1. Open the navigation menu. Under Core Infrastructure, go to Object Storage and click Data Transfer -
Imports.
2. Find the data transfer job that you want to edit.
3. Click the Actions icon, and then click Edit.
4. Edit the name of the transfer job. Avoid entering confidential information.


5. Click Save.
To edit the name of a transfer job using the Data Transfer Utility

dts job update --job-id job_id --display-name display_name

<display_name> is the new name of the transfer job. Avoid entering confidential information.
For example:

dts job update --job-id ocid1.datatransferjob.oc1.phx.aaaaaaaa4tccxsktptbexdy6ipmfqome5acvieqthlqvts6lltqv5qxo2 --display-name MyRenamedJob

Transfer Job :
ID : ocid1.datatransferjob.oc1..exampleuniqueID
CompartmentId : ocid.compartment.oc1..exampleuniqueID
UploadBucket : MyBucket1
Name : MyRenamedJob
Label : JZM9PAVWH
CreationDate : 2019/06/04 17:07:05 EDT
Status : PREPARING
freeformTags : *** none ***
definedTags : *** none ***
Packages : [*** none ***]
UnattachedDevices : [*** none ***]
Appliances : [*** none ***]

To edit the tags associated with a transfer job using the Data Transfer Utility
The Data Transfer Utility replaces any existing tags with the new key/value pairs you specify.
To edit defined tags, provide the replacement key value pairs:

dts job update --job-id job_id --defined-tags '{ "tag_namespace": { "tag_key":"value" }}'

For example:

dts job update --job-id ocid1.datatransferjob.oc1..exampleuniqueID --defined-tags '{"Operations": {"CostCenter": "42"}}'

Transfer Job :
ID : ocid1.datatransferjob.oc1..exampleuniqueID
CompartmentId : ocid.compartment.oc1..exampleuniqueID
UploadBucket : MyBucket1
Name : MyDiskImportJob
Label : JZM9PAVWH
CreationDate : 2019/06/04 17:07:05 EDT
Status : PREPARING
freeformTags : *** none ***
definedTags :
operations :
costcenter : 42
Packages : [*** none ***]
UnattachedDevices : [*** none ***]
Appliances : [*** none ***]


To edit free-form tags, provide the replacement key/value pairs:

dts job update --job-id job_id --freeform-tags '{ "tag_key":"value" }'

For example:

dts job update --job-id ocid1.datatransferjob.oc1..exampleuniqueID --freeform-tags '{"Chicago_Team":"marketing_videos"}'

Transfer Job :
ID : ocid1.datatransferjob.oc1..exampleuniqueID
CompartmentId : ocid.compartment.oc1..exampleuniqueID
UploadBucket : MyBucket1
Name : MyDiskImportJob
Label : JZM9PAVWH
CreationDate : 2019/06/04 17:07:05 EDT
Status : PREPARING
freeform-tags :
Chicago_Team : marketing_videos
definedTags : *** none ***
Packages : [*** none ***]
UnattachedDevices : [*** none ***]
Appliances : [*** none ***]

To delete the tags associated with a transfer job using the Data Transfer Utility
The Data Transfer Utility replaces any existing tags with the new key/value pairs you specify. If you want to delete
some of the tags, specify a new tag string that does not contain the key/value pairs you want to delete.
Partial tag deletion is handled in the same way as you edit tags:
• To edit free-form tags, provide the replacement key/value pairs:

dts job update --job-id job_id --freeform-tags '{ "tag_key":"value" }'

• To edit defined tags, provide the replacement key value pairs:

dts job update --job-id job_id --defined-tags '{ "tag_namespace": { "tag_key":"value" }}'

To delete all free-form tags:

dts job update --job-id job_id --freeform-tags '{}'

To delete all defined tags:

dts job update --job-id job_id --defined-tags '{}'

Moving Transfer Jobs Between Compartments


Note:

You can only use the Console to move disk-based data transfer jobs between
compartments.
To move a transfer job to a different compartment using the Console
1. Open the navigation menu. Under Core Infrastructure, go to Object Storage and click Data Transfer -
Imports.


2. Select the Compartment from the list.


The transfer jobs in that compartment are displayed.
3. Click the link under Transfer Jobs for the transfer job that you want to move.
The Details page for that transfer job appears.

Alternatively, you can click the Actions icon, and then click Move Resource.
4. Click Move Resource in the Details page.
The Move Resource to a Different Compartment dialog appears.
5. Choose the compartment to which you want to move the transfer job from the list.
6. Click Move Resource.
You are returned to the Details page for that transfer job.
To move a transfer job to a different compartment using the Data Transfer Utility

dts job move --job-id job_id --compartment-id compartment_id [OPTIONS]

compartment_id is the compartment to which the data transfer job is being moved.
Options are:
• --if-match: The tag that must be matched for the task to occur for that entity. If set, the update is only
successful if the object's tag matches the tag specified in the request.
• --from-json: Provide input to this command as a JSON document from a file using the file://path-to/file
syntax. The --generate-full-command-json-input option can be used to generate a sample JSON file
to be used with this command option. The key names are pre-populated and match the command option names
(converted to camelCase format, e.g. compartment-id --> compartmentId), while the values of the keys need to
be populated by the user before using the sample file as an input to this command. For any command option that
accepts multiple values, the value of the key can be a JSON array. Options can still be provided on the command
line. If an option exists in both the JSON document and the command line then the command line specified value
will be used. For examples of using this option, see "Using CLI with Advanced JSON Options":
https://docs.cloud.oracle.com/iaas/Content/API/SDKDocs/cliusing.htm#AdvancedJSONOptions. A usage sketch follows this list.
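
For example, a sketch of one way to combine these options (the file name job-move.json is illustrative):

# Generate a sample JSON input file, populate jobId and compartmentId, then run the move
dts job move --generate-full-command-json-input > job-move.json
dts job move --from-json file://job-move.json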
To confirm the transfer, display the list of transfer jobs in the new compartment. See Listing Transfer Jobs on page
1002 for more information.
Verifying Upload User Credentials
Note:

You can only use the CLI command to verify upload user credentials.
You can verify the current upload user credentials to see if there are any problems or updates required. If any
configuration file is incorrect or invalid, the upload fails.
To verify the upload user credentials of a transfer job using the CLI

dts job verify-upload-user-credentials --bucket bucket

bucket is the upload bucket for the transfer job.


For example:

dts job verify-upload-user-credentials --bucket MyBucket1

created object BulkDataTransferTestObject in bucket MyBucket1
overwrote object BulkDataTransferTestObject in bucket MyBucket1
inspected object BulkDataTransferTestObject in bucket MyBucket1
read object BulkDataTransferTestObject in bucket MyBucket1


Depending on your user configuration, you may get an error message returned similar to the following:

ERROR : key_file /root/.oci/oci_api_key.pem is not a valid file

If a user credential issue is identified, fix it and rerun the verify-upload-user-credentials CLI to ensure
that all problems are addressed. Then you can proceed with transfer job activities.
Deleting Transfer Jobs
Typically, you would delete a transfer job early in the transfer process and before you create any transfer packages or
their associated disks. For example, you initiated the data transfer by creating a transfer job, but changed your mind.
If you want to delete a transfer job later in the transfer process, you must first delete all transfer packages and their
associated disks from the transfer job.
To delete a transfer job using the Console
1. Open the navigation menu. Under Core Infrastructure, go to Object Storage and click Data Transfer -
Imports.
2. Find the data transfer job that you want to delete.
3. Click the Actions icon, and then click Delete.
Alternatively, you can delete a transfer job from the View Details page.
4. Confirm the deletion when prompted.
To delete a transfer job using the Data Transfer Utility

dts job delete --job-id job_id

For example:

dts job delete --job-id ocid1.datatransferjob.oc1..exampleuniqueID

Confirm the deletion when prompted. The transfer job is deleted with no further action or return. To confirm the
deletion, display the list of transfer jobs in the compartment. See Listing Transfer Jobs on page 1002 for more
information.
Closing Transfer Jobs
Typically, you would close a transfer job when no further transfer job activity is required or possible. Closing a
transfer job requires that the status of all associated transfer packages be returned, canceled, or deleted. In addition,
the status of all associated transfer disks must be complete, in error, missing, canceled, or deleted.
To close a transfer job using the Console
1. Open the navigation menu. Under Core Infrastructure, go to Object Storage and click Data Transfer -
Imports.
2. Find the data transfer job that you want to close.
3. Click the Actions icon, and then click View Details.
Alternatively, click the hyperlinked name of the transfer job.
4. Click Close Transfer Job.
To close a transfer job using the Data Transfer Utility

dts job close --job-id job_id

For example:

dts job close --job-id ocid1.datatransferjob.oc1..exampleuniqueID


Transfer Job :
ID : ocid1.datatransferjob.oc1..exampleuniqueID
CompartmentId : ocid.compartment.oc1..exampleuniqueID
UploadBucket : MyBucket1
Name : MyDiskImportJob
Label : JZM9PAVWH
CreationDate : 2019/06/04 17:07:05 EDT
Status : CLOSED
freeformTags : *** none ***
definedTags : *** none ***
Packages : [*** none ***]
UnattachedDevices : [*** none ***]
Appliances : [*** none ***]

Transfer Disks
This section describes the creation and management of transfer disks.
Important:

Before creating a transfer disk from an attached disk, remove all partitions
and any file systems. To prevent the accidental deletion of data, the Data
Transfer Utility does not work with disks that already have partitions or file
systems. Disks are visible to the Data Host as block devices and must provide
a valid response to the hdparm -I <device> Linux command.
Creating Transfer Disks
Note:

You can only use the Data Transfer Utility to create a transfer disk.
When you create a transfer disk, the Data Transfer Utility:
• Sets up the disk for encryption using the passphrase.
• Creates a file system on the disk.
• Mounts the file system at /mnt/orcdts_<label>.
For example:

/mnt/orcdts_DJZNWK3ET

When you register a transfer disk, Oracle Cloud Infrastructure generates a strong encryption passphrase that is used to
encrypt the transfer disk. This passphrase is displayed in the return when you run the command. Create a local, secure
copy of the encryption passphrase so you can reference the passphrase again. You cannot retrieve the passphrase after
it is shown here. This passphrase is used to encrypt this disk and normally will not be needed again. However, if the
system is restarted before all files are copied to the filesystem and the disk is then finalized through this CLI, you will
need to provide the passphrase.
Creating a transfer disk requires the job ID returned from when you created the transfer job and the path to the
attached disk (for example /dev/sdb).
To create a transfer disk using the Data Transfer Utility

dts disk create --job-id job_id --block-device block_device

When you run the disk create command, the CLI displays information about the creation of the disk and prompts
you to continue in several places. When the disk is created, the following information is displayed:

dts disk create --job-id ocid1.datatransferjob.oci1..exampleuniqueID --block-device /dev/sdb
...
Transfer Disk :


Label: : DNKZQ1XKC
SerialNumber: : VB6fc3b4a1-5d90f001
Status : PREPARING
EncryptionPassphrase : passphrase

Important:

Record the passphrase in a secure, local location.


Displaying Transfer Disk Details
Note:

You can only use the Data Transfer Utility to display details for a specified
transfer disk.
To display the details of a disk using the Data Transfer Utility

dts disk show --job-id job_id --disk-label disk_label

For example:

dts disk show --job-id ocid1.datatransferjob.oci1..exampleuniqueID --disk-label DNKZQ1XKC

Transfer Disk :
Label: : DNKZQ1XKC
SerialNumber: : VB6fc3b4a1-5d90f001
UploadStatusLogUrl : JVPWPQV6U/DNKZQ1XKC/upload_summary.txt
Status : PREPARING

The path syntax for the upload status log URL is <transfer_job_label>/<disk_label>/upload_summary.txt.
Deleting Transfer Disks
Note:

You can only use the Data Transfer Utility to delete a transfer disk.
Typically, you would delete a transfer disk during the disk preparation process. You created, attached, and copied
data to the transfer disk, but have changed your mind about shipping the disk. If you want to reuse the disk, remove
all file systems and create the disk again.
To delete a transfer disk using the Data Transfer Utility

dts disk delete --job-id job_id --disk-label disk_label

For example:

dts disk delete --job-id ocid1.datatransferjob.oci1..exampleuniqueID --disk-label DNKZQ1XKC

Deleted Disk: DNKZQ1XKC

Canceling Transfer Disks


If you shipped a disk to Oracle, but have changed your mind about uploading the files, you can cancel the transfer
disk.
Oracle cannot process canceled transfer disks. Oracle returns canceled transfer disks to the sender.


Note:

You can only use the Data Transfer Utility to cancel a transfer disk.
To cancel a transfer disk using the Data Transfer Utility

dts disk cancel --job-id job_id --disk-label disk_label

For example:

dts disk cancel --job-id ocid1.datatransferjob.oci1..exampleuniqueID --disk-label DNKZQ1XKC

Canceled Disk: DNKZQ1XKC

Locking Transfer Disks


Note:

You can only use the Data Transfer Utility to lock a transfer disk.
Locking a transfer disk safely unmounts the disk and removes the encryption passphrase from the Data Host.
To lock a transfer disk using the Data Transfer Utility

dts disk lock --job-id <job_id> --disk-label disk_label --block-device block_device

For example:

dts disk lock --job-id ocid1.datatransferjob.oci1..exampleuniqueID --disk-label DNKZQ1XKC --block-device /dev/sdb

Copying upload user credentials.
created object BulkDataTransferTestObject in bucket MyBucket1
overwrote object BulkDataTransferTestObject in MyBucket1
inspected object BulkDataTransferTestObject in bucket MyBucket1
read object BulkDataTransferTestObject in bucket MyBucket1
Scanning filesystem to validate manifest. If special files are encountered,
they will be listed below.
validated manifest
/dev/sdb DNKZQ1XKC is encrypted and locked
Locked disk.

Unlocking Transfer Disks


If you need to unlock the transfer disk, you are prompted for the encryption passphrase that was generated when you
created the transfer disk.
To unlock a transfer disk using the Data Transfer Utility

dts disk unlock --job-id <job_id> --disk-label disk_label --block-device block_device --encryption-passphrase encryption_passphrase

For example:

dts disk unlock --job-id ocid1.datatransferjob.oci1..exampleuniqueID --disk-label DNKZQ1XKC --block-device /dev/sdb --encryption-passphrase passphrase

Encryption passphrase ('q' to quit):
enabled cleartext read/write on device


Unlocked and mounted disk.

Attaching Transfer Disks to Transfer Packages


Attach a transfer disk to a transfer package after you have done the following tasks in order:
• Copied your data onto the disk.
• Generated the required manifest file.
• Run and reviewed the dry-run report.
• Locked the transfer disk in preparation for shipment.
Note:

You can only attach a single disk to each transfer package.


To attach a transfer disk to a transfer package using the Console
1. Open the navigation menu. Under Core Infrastructure, go to Object Storage and click Data Transfer -
Imports.
2. Find the transfer job associated with the transfer package to which you want to attach the disk.
3. Click the Actions icon, and then click View Details.
A list of transfer packages is displayed.
4. Find the transfer package for which you want to attach a disk.
5. Click the Actions icon, and then click View Details.
Alternatively, click the hyperlinked name of the transfer package.
The transfer disk is displayed.
6. Click Attach Transfer Disks.
The Attach Transfer Disks dialog appears.
7. Select the Transfer Disks that you want to attach to the transfer package.
8. Click Attach.
To attach a transfer disk to a transfer package using the Data Transfer Utility

dts disk attach --job-id job_id --package-label package_label --disk-label disk_label

For example:

dts disk attach --job-id ocid1.datatransferjob.oci1..exampleuniqueID --package-label PWA8O67MI --disk-label DNKZQ1XKC

Attached disk: DNKZQ1XKC to package: PWA8O67MI

See Transfer Packages on page 1012 for more information, including how to obtain the package label.
Detaching Transfer Disks from Transfer Packages
To detach a transfer disk from a transfer package using the Console
1. Open the navigation menu. Under Core Infrastructure, go to Object Storage and click Data Transfer -
Imports.
2. Find the transfer package for which you want to detach a transfer disk.
3. Click the Actions icon, and then click View Details.
Alternatively, click the hyperlinked name of the transfer package.
The attached transfer disk is displayed.


4. Click the Actions icon, and then click View Details.
Alternatively, click the hyperlinked name of the transfer disk.
5. Click Detach Transfer Disk.
To detach a transfer disk from a transfer package using the Data Transfer Utility

dts disk detach --job-id job_id --package-label package_label --disk-label disk_label

For example:

dts disk detach --job-id ocid1.datatransferjob.oci1..exampleuniqueID --package-label PWA8O67MI --disk-label DNKZQ1XKC

Detached disk: DNKZQ1XKC from package: PWA8O67MI

Querying Transfer Disks


You can query a transfer disk for information about the attached physical devices, such as their block devices, sizes, and mount points.
Note:

You can only use the Data Transfer Utility to query the transfer disk.
To query the transfer disk using the Data Transfer Utility

dts disk query

For example:

dts disk query

NAME    TYPE  SIZE    UUID                                 LABEL           MOUNTPOINT
loop0   loop  140.7M                                                       /snap/gnome-3-26-1604/92
loop1   loop  4.2M                                                         /snap/gnome-calculator/501
sda     disk  40.8G
└─sda1  part  12G     xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxx                  /
sr0     rom   56.9M   2020-02-18-17-20-05-35               VBox_GAs_6.1.4  /media/user/VBox_GAs_6.1.4

Transfer Packages
A transfer package is the virtual representation of the physical disk package that you are shipping to Oracle for upload
to Oracle Cloud Infrastructure.
Creating Transfer Packages
Creating a transfer package requires the job ID returned from when you created the transfer job. For example:

ocid1.datatransferjob.region1.phx..exampleuniqueID

To create a transfer package using the Console


1. Open the navigation menu. Under Core Infrastructure, go to Object Storage and click Data Transfer -
Imports.
2. Find the transfer job for which you want to create a transfer package.

3. Click the Actions icon, and then click View Details.
Alternatively, click the hyperlinked name of the transfer job.
A list of transfer packages that have already been created is displayed.
4. Click Create Transfer Package.
The Create Transfer Package dialog appears.
5. Select a Vendor from the list.
6. Click Create Transfer Package.
The Data Transfer Package dialog appears displaying information such as the shipping address, the shipping vendor,
and the shipping status.
To create a transfer package using the Data Transfer Utility

dts package create --job-id job_id

For example:

dts package create --job-id ocid1.datatransferjob.oci1..exampleuniqueID

Transfer Package :
Label : PWA8O67MI
TransferSiteShippingAddress : Oracle Data Transfer Service;
Job:JZM9PAVWH Package:PWA8O67MI ; 21111 Ridgetop Circle; Dock B; Sterling,
VA 20166; USA
DeliveryVendor : *** none ***
DeliveryTrackingNumber : *** none ***
ReturnDeliveryTrackingNumber : *** none ***
Status : PREPARING
Devices : [*** none ***]

Displaying Transfer Package Details


To display the details of a transfer package using the Console
1. Open the navigation menu. Under Core Infrastructure, go to Object Storage and click Data Transfer -
Imports.
2. Find the transfer job for which you want to see the details.
3. Click the Actions icon, and then click View Details.
Alternatively, click the hyperlinked name of the transfer job.
A list of transfer packages that have already been created is displayed.
To display the details of a transfer package using the Data Transfer Utility

dts package show --job-id job_id --package-label package_label

For example:

dts package show --job-id ocid1.datatransferjob.oci1..exampleuniqueID --package-label PWA8O67MI

Transfer Package :
Label : PWA8O67MI
TransferSiteShippingAddress : Oracle Data Transfer Service;
Job:JZM9PAVWH Package:PWA8O67MI ; 21111 Ridgetop Circle; Dock B; Sterling,
VA 20166; USA
DeliveryVendor : *** none ***
DeliveryTrackingNumber : *** none ***
ReturnDeliveryTrackingNumber : *** none ***

Status : PREPARING
Devices : [*** none ***]

Editing Transfer Packages


Edit the transfer package and supply the tracking information when you ship the package.
To edit a transfer package using the Console
1. Open the navigation menu. Under Core Infrastructure, go to Object Storage and click Data Transfer -
Imports.
2. Find the transfer job for which you want to see the associated transfer packages.
3. Click the Actions icon, and then click View Details.
4. Find the transfer package that you want to edit.
5. Click the Actions icon, and then click View Details.
6. Click Edit.
Change the vendor and supply the tracking information as needed.
7. Click Edit Transfer Package.
Deleting Transfer Packages
Typically, you would delete a transfer package early in the transfer process and before you create its associated
transfer disk. You initiated the transfer job and package, but have changed your mind. If you delete a transfer package
later in the transfer process, you must first detach the associated transfer disk. You cannot delete a transfer package
once the package has been shipped to Oracle.
To delete a transfer package using the Console
1. Open the navigation menu. Under Core Infrastructure, go to Object Storage and click Data Transfer -
Imports.
2. Find the transfer job for which you want to see the associated transfer packages.
3. Click the Actions icon, and then click View Details.
4. Find the transfer package that you want to delete.
5. Click the Actions icon, and then click View Details.
6. Click Delete Transfer Package.
To delete a transfer package using the Data Transfer Utility

dts package delete --job-id job_id --package-label package_label

For example:

dts package delete --job-id ocid1.datatransferjob.oci1..exampleuniqueID --package-label PWA8O67MI

Deleted package with label: PWA8O67MI

Canceling Transfer Packages


If you shipped a transfer package, but have changed your mind about uploading the data, you can cancel a transfer
package. Before canceling a transfer package, you must first cancel the transfer disk associated with that transfer
package. Oracle cannot process canceled transfer packages. Oracle returns canceled transfer packages to the sender.
To cancel a transfer package using the Console

1. Open the navigation menu. Under Core Infrastructure, go to Object Storage and click Data Transfer -
Imports.
2. Find the transfer job for which you want to see associated transfer packages.
3. Click the Actions icon, and then click View Details.
4. Find the transfer package that you want to cancel.
5. Click the Actions icon, and then click View Details.
6. Click Cancel Transfer Package.
To cancel a transfer package using the Data Transfer Utility

dts package cancel --job-id job_id --package-label package_label

For example:

dts package cancel --job-id ocid1.datatransferjob.oci1..exampleuniqueID --package-label PWA8O67MI

Canceled package with label: PWA8O67MI

Data Import - Appliance


Appliance-Based Data Import is one of Oracle's offline data transfer solutions that lets you migrate petabyte-scale
datasets to Oracle Cloud Infrastructure. You send your data as files on one or more secure, high-capacity, Oracle-
supplied Data Transfer Appliances to an Oracle transfer site. Operators at the Oracle transfer site upload the files into
the designated Object Storage bucket in your tenancy. You are then free to move the uploaded data to other Oracle
Cloud Infrastructure services as needed.
Note:

Appliance-Based Data Import is not available for free trial or Pay As You Go
accounts.

Appliance-Based Data Import Concepts


TRANSFER JOB
A transfer job is the logical representation of a data migration to Oracle Cloud Infrastructure. A transfer job
is associated with one or more import appliances.
DATA TRANSFER APPLIANCE
The Data Transfer Appliance (import appliance) is a high-storage capacity device that is specially prepared
to copy and upload data to Oracle Cloud Infrastructure. You request an import appliance from Oracle, copy
your data onto it, and then ship it back to Oracle for upload.
COMMAND LINE INTERFACE
The command line interface (CLI) is a small footprint tool that you can use on its own or with the Console to
complete Oracle Cloud Infrastructure tasks, including Appliance-Based Data Import jobs.
Note:

You can only run Oracle Cloud Infrastructure CLI commands from a
Linux host. This differs from running CLI commands for other Oracle
Cloud Infrastructure Services on a variety of host operating systems.
Appliance-based commands require validation that is only available on
Linux hosts.

HOST
A physical computer at the customer site on which one or more of the logical hosts (Control, Data, Terminal
Emulation) is running. Depending on your computing environment, you can have any of the following:
• A separate physical host for each logical host
• All three logical hosts consolidated onto a single physical host
• Two logical hosts on one physical host and the third logical host on a separate physical host
All physical hosts must be on the network used for the data transfer.
CONTROL HOST
The logical representation of the host computer at your site from which you perform Data Transfer Service
tasks. Depending on your needs, you may use one or more separate hosts (Control and Data) to run your
Appliance-Based Data Import job.
Note:

You can only run Oracle Cloud Infrastructure CLI commands from a
Linux-based Control Host machine. You can run Console tasks from a
browser running on a Windows machine.
DATA HOST
The logical representation of the host computer on your site that stores the data you intend to copy to Oracle
Cloud Infrastructure.
Note:

Only Linux machines can be used as Data Hosts.


TERMINAL EMULATION HOST
The logical representation of the host computer that uses terminal emulation software to communicate with,
and allow you to command, the import appliance.
BUCKET
The logical container in Oracle Cloud Infrastructure Object Storage where Oracle operators upload your data.
A bucket is associated with a single compartment in your tenancy, and that compartment's policies determine what actions
a user can perform on the bucket and on all the objects in the bucket.
DATA TRANSFER ADMINISTRATOR
A new or existing IAM user that has the authorization and permissions to create and manage transfer jobs.
DATA TRANSFER UPLOAD USER
A temporary IAM user that grants Oracle personnel the authorization and permissions to upload the data
from the import appliance to your designated Oracle Cloud Infrastructure Object Storage bucket. Delete this
temporary user after your data is uploaded to Oracle Cloud Infrastructure.
APPLIANCE MANAGEMENT SERVICE
Software running on the import appliance that provides management functions. Users interact with this
service through the Oracle Cloud Infrastructure CLI.

Appliance Specifications
Use NFS versions 3, 4, or 4.1 to copy your data onto the appliance. Here are some details about the appliance:

Item                 Description / Specification

Storage Capacity     • US East (Ashburn), US West (Phoenix), Germany Central (Frankfurt):
                       150 TB of protected usable space.
                     • All other regions: 95 TB of protected usable space.

Network Interfaces   • 10 GbE - RJ45
                     • 10 GbE - SFP+
                     You are responsible for providing all network cables. If you want to use
                     SFP+, your transceivers must be compatible with Intel X520 NICs.

Provided Cables      • NEMA 5–15 type B to C13
                     • C13 to C14 power
                     • USB to DB9 serial

Environmental        • Operational temperature: 50–95°F (10–35°C)
                     • Operational relative humidity: 8–90% non-condensing
                     • Acoustics: < 75 dB @ 73°F (23°C)
                     • Operational altitude: -1,000 ft to 10,000 ft (approx. -300 to 3,048 m)

Power                • Consumption: 554 W
                     • Voltage: 100–240 VAC
                     • Frequency: 47–63 Hz
                     • Conversion efficiency: 89%

Weight               • Unit: 38 lbs (approx. 17 kg)
                     • Unit + Transit Case: 64 lbs (approx. 29 kg)

Height               3.5" (approx. 9 cm) (2U)

Width                17" (approx. 43 cm)

Depth                24" (approx. 61 cm)

Shipping Case        11" x 25" x 28" (approx. 28 x 63.5 x 71 cm)

Import File Constraints


All files being imported using the Data Transfer Appliance must conform to the following:
• Maximum file size - 10 TB
• Maximum file name length - 1024 characters

Roles and Responsibilities


Depending on your organization, the responsibilities of using and managing the data transfer may span multiple roles.
Use the following set of roles as a guideline for how you can assign the various tasks associated with the data transfer.
• Project Sponsor: Responsible for the overall success of the data transfer. Project Sponsors usually have complete
access to their organization's Oracle Cloud Infrastructure tenancy. They coordinate with the other roles in the
organization to complete the implementation of the data transfer project. The Project Sponsor is also responsible for
signing legal documentation and setting up notifications for the data import.
• Infrastructure Engineer: Responsible for integrating the transfer appliance into the organization's IT
infrastructure from where the data is being transferred. Tasks associated with this role include connecting the

transfer appliance to power, placing it within the network, and setting the IP address through a serial console menu
using the provided USB-to-Serial adapter.
• Data Administrator: Responsible for identifying and preparing the data to be transferred to Oracle Cloud
Infrastructure. This person usually has access to, and expertise with, the data being migrated.
These roles correspond to the various phases of the data transfer described in the following section. A specific role
can be responsible for one or more phases.

Task Flow for Appliance-Based Data Import


Here is a high-level overview of the tasks involved in the Appliance-Based Data Import to Oracle Cloud
Infrastructure. Complete one phase before proceeding to the next one. Use the roles previously described to distribute
the tasks across individuals or groups within your organization.


Secure Appliance Data Transfer to Oracle Cloud Infrastructure


This section highlights the security details of the Data Transfer Appliance process.
• Appliances are shipped from Oracle to you with a tamper-evident security tie on the transit case. A second
tamper-evident security tie is included in the import appliance transit case for you to secure the case when you
ship the case back to Oracle. The number on the physical security ties must match the numbers logged by Oracle
in the import appliance details.
• When you configure the import appliance for the first time:
• The import appliance generates a master AES-256 bit encryption key that is used for all data written to or read
from the device. The encryption key never leaves the device.
• The encryption key is protected by an encryption passphrase that you must know to access the encrypted data.
The system securely fetches a provided encryption passphrase from Oracle Cloud Infrastructure and registers
that passphrase on the import appliance.
Note:

The encryption passphrase is never stored on the import appliance.


• All data is encrypted as the data is copied to an import appliance.
• For more security, you can also encrypt your own data with your own encryption keys. Before copying your data
to the import appliance, you can encrypt your data with a tool and encryption key of your choosing (see the sketch after this list). After the data
has been uploaded, you need to use the same tool and encryption key to access the data.
• All network communication between your appliance-based data transfer environment and Oracle Cloud
Infrastructure is encrypted in-transit using Transport Layer Security (TLS).
• After copying your data to a transfer appliance, the data transfer system generates a manifest file. The manifest
contains an index of all of the copied files and generated data integrity hashes. The system also encrypts and
copies the config_upload_user configuration file to the transfer appliance. This configuration file
describes the temporary IAM data transfer upload user. Oracle uses the credentials and entries defined in the
config_upload_user file when processing the transfer appliance and uploading files to Oracle Cloud
Infrastructure Object Storage.
Note:

Data Transfer Service Does Not Support Passphrases on Private Keys


While we recommend encrypting a private key with a passphrase when
generating API signing keys, the Data Transfer Service does not support
passphrases on the key file required for the config_upload_user
configuration file. If you use a passphrase, Oracle personnel cannot upload
your data.
Oracle cannot upload data from a transfer appliance without the correct credentials defined in this configuration
file. See Preparing Upload Configuration Files on page 1031 for more information about the required
configuration files.
• Oracle erases all of your data from the import appliance after it has been processed. The erasure process follows
the NIST 800-88 standards.
• Keep possession of the security tie after you have finished unpacking and connecting the import appliance.
Include it when returning the import appliance to Oracle. Failure to include the security tie can result in a delay in
the data migration process.
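As an illustration of the optional customer-managed encryption described in the list above, the following is a minimal sketch using GnuPG; the tool, archive name, and passphrase handling are assumptions, and any encryption tool of your choosing works equally well:

gpg --symmetric --cipher-algo AES256 --output backup.tar.gz.gpg backup.tar.gz
gpg --decrypt --output backup.tar.gz backup.tar.gz.gpg

The first command encrypts the archive before it is copied to the import appliance; the second recovers it with the same tool and passphrase after the upload completes.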

What's Next
You are now ready to prepare the host for the Appliance-Based Data Import. See Preparing for Appliance Data
Transfers on page 1021 for more information.

Preparing for Appliance Data Transfers

This topic describes the tasks associated with preparing for the Appliance-Based Data Import job. The Project
Sponsor role typically performs these tasks. See Roles and Responsibilities on page 1017.
Note:

You can only run Oracle Cloud Infrastructure CLI commands from a Linux
host. This differs from running CLI commands for other Oracle Cloud
Infrastructure Services on a variety of host operating systems. Appliance-
based commands require validation that is only available on Linux hosts.
Installing and Using the Oracle Cloud Infrastructure Command Line Interface
The Oracle Cloud Infrastructure Command Line Interface (CLI) provides a set of command line-based tools for
configuring and running Appliance-Based Data Import jobs. Use the Oracle Cloud Infrastructure CLI as an alternative
to running commands from the Console. Sometimes you must use the CLI to complete certain tasks as there is no
Console equivalent.

Minimum Required CLI Version


The minimum CLI version required for Appliance-Based Data Import is 2.12.1.

Determining CLI Versions


Access the following URL to see the currently available version of the CLI:
https://github.com/oracle/oci-cli/blob/master/CHANGELOG.rst
Enter the following command at the prompt to see the version of the CLI currently installed on your machine:

oci --version

If you have a version on your machine older than the version currently available, install the latest version.
Note:

Always update to the latest version of the CLI. The CLI is not updated
automatically, and you can only access new or updated CLI features by
installing the current version.

Linux Operating System Requirements


See Requirements on page 4229 for a list of the Linux operating systems that support the CLI.

Installing the CLI


Installation and configuration of the CLI is described in detail in Command Line Interface (CLI) on page 4228.
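One commonly used installation and upgrade path, shown here only as a sketch, is the oci-cli Python package; the referenced chapter covers the supported installation methods in full:

pip install --upgrade oci-cli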

Using the CLI


You can specify CLI options using either of the following forms:
• --option value
• --option=value
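For example, the following two invocations of the job list command (shown later in this guide) are equivalent:

oci dts job list --compartment-id ocid.compartment.oc1..exampleuniqueID
oci dts job list --compartment-id=ocid.compartment.oc1..exampleuniqueID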

The basic CLI syntax is:

oci dts resource action options

This syntax is applied to the following:


• oci dts is the shortened CLI command name.
• job is an example of a resource.
• create is an example of an action.
• Other strings are options.
The following command to create a transfer job shows a typical CLI command construct.

oci dts job create --compartment-id ocid1.compartment.oc1..exampleuniqueID --bucket MyBucket --display-name MyApplianceImportJob --device-type appliance

Note:

In the previous example, provide a friendly name for the transfer job using
the --display-name option. Avoid entering confidential information.

Accessing Command Line Interface Help


All CLI commands have an associated help component that you can access from the command line. To view the help,
enter any command followed by the --help or -h option. For example:

oci dts job --help

NAME
dts_job -

DESCRIPTION
Transfer disk or appliance job operations

AVAILABLE COMMANDS
o change-compartment
o close
o create
o delete
o detach-devices-details
...

When you run the help option (--help or -h) for a specified command, all the subordinate commands and options
for that level of CLI are displayed. If you want to access the CLI help for a specific subordinate command, include it
in the CLI string, for example:

oci dts job create --help

NAME
dts_job_create -

DESCRIPTION
Creates a new transfer disk or appliance job.

USAGE
oci dts job create [OPTIONS]

REQUIRED PARAMETERS

--bucket [text]

Upload bucket name

--compartment-id, -c [text]

Compartment OCID

--device-type [text]

Creating the Required IAM Users, Groups, and Policies


Each service in Oracle Cloud Infrastructure integrates with IAM for authentication and authorization.
To use Oracle Cloud Infrastructure, you must be given the required type of access in a policy written by an
administrator, whether you're using the Console or the REST API with an SDK, CLI, or other tool. If you try
to perform an action and get a message that you don’t have permission or are unauthorized, confirm with your
administrator the type of access you've been granted and which compartment you should work in.
Access to resources is provided to groups using policies and then inherited by the users that are assigned to those
groups. Data transfer requires the creation of two distinct groups:
• Data transfer administrators who can create and manage transfer jobs.
• Data transfer upload users who can upload data to Object Storage. For your data security, the permissions for
upload users allow Oracle personnel to upload standard and multi-part objects on your behalf and inspect bucket
and object metadata. The permissions do not allow Oracle personnel to inspect the actual data.
The Data Administrator is responsible for generating the required RSA keys needed for the temporary upload users.
These keys should never be shared between users.
For details on creating groups, see Managing Groups on page 2438.
An administrator creates these groups with the following policies:
• The data transfer administrator group requires an authorization policy that includes the following:

Allow group group_name to manage data-transfer-jobs in compartment compartment_name
Allow group group_name to manage objects in compartment compartment_name
Allow group group_name to manage buckets in compartment compartment_name

Alternatively, you can consolidate the manage buckets and manage objects policies into the following:

Allow group group_name to manage object-family in compartment compartment_name

• The data transfer upload user group requires an authorization policy that includes the following:

Allow group group_name to manage buckets in compartment compartment_name
  where all { request.permission='BUCKET_READ',
              target.bucket.name='<bucket_name>' }
Allow group group_name to manage objects in compartment compartment_name
  where all { target.bucket.name='<bucket_name>',
              any { request.permission='OBJECT_CREATE',
                    request.permission='OBJECT_OVERWRITE',
                    request.permission='OBJECT_INSPECT' }}

To enable notifications, add the following policies:

Allow group group_name to manage ons-topics in tenancy
Allow group group_name to manage ons-subscriptions in tenancy
Allow group group_name to manage cloudevents-rules in tenancy
Allow group group_name to inspect compartments in tenancy

See Notifications Overview on page 3378 and Overview of Events on page 1788 for more information.
The Oracle Cloud Infrastructure administrator then adds a user to each of the data transfer groups created. For details
on creating users, see Managing Users on page 2433.
Important:

For security reasons, we recommend that you create a unique IAM data
transfer upload user for each transfer job and then delete that user once your
data is uploaded to Oracle Cloud Infrastructure.
Requesting Appliance Entitlement
If your tenancy is not entitled to use the Data Transfer Appliance, you must request the Data Transfer Appliance
Entitlement before creating an appliance-based transfer job.
Important:

The main buyer or administrator, who is at a VP level or higher, receives


an email notification and is required to e-sign a Terms and Conditions
document. After Oracle has confirmed signature of the document, you can
create an appliance-based transfer job. The email for the DocuSign does not
go to the requester unless they are the main buyer or administrator who is at
least at a VP level.
It can take up to 24 hours before the Terms and Conditions are sent.
To request the Data Transfer Appliance Entitlement using the Console
Open the Transfer Job page and click Request at the top. Otherwise, you are prompted to request the entitlement
when attempting to create your first appliance-based transfer job.
Once requested, the status of your request is visible at the top of the Transfer Job page. For example:
Data Transfer Appliance Entitlement: Granted
It can take a while to get the Data Transfer Appliance Entitlement approved. After Oracle receives your request, a
Terms and Conditions Agreement is sent to the account owner via DocuSign to use the appliance. The entitlement
request is approved once the signature is received. The Data Transfer Appliance Entitlement is a tenancy-wide
entitlement that you need to request once for each tenancy.
To request the Data Transfer Appliance Entitlement using the CLI

oci dts appliance request-entitlement --compartment-id compartment_id --name name --email email

name is the name of the requester.


email is the email address of the requester.
For example:

oci dts appliance request-entitlement --compartment-id ocid.compartment.oc1..exampleuniqueID --name "John Doe" --email [email protected]

{
"data": {
"compartment-id": "ocid.compartment.oc1..exampleuniqueID",
"creation-time": "2019-12-18T18:29:15+00:00",
"defined-tags": {},
"display-name": null,
"freeform-tags": {},
"id": "ocid1.datatransferapplianceentitlement.oc1..exampleuniqueID",
"lifecycle-state": "CREATING",

"lifecycle-state-details": "REQUESTED",
"requestor-email": "[email protected]",
"requestor-name": "John Doe",
"update-time": "2019-12-20T19:04:09+00:00"
}
}

To show the status of a Data Transfer Appliance Entitlement request using the CLI

oci dts appliance show-entitlement --compartment-id compartment_id

For example:

oci dts appliance show-entitlement --compartment-id ocid.compartment.oc1..exampleuniqueID
{
"data": {
"compartment-id": ""ocid.compartment.oc1..exampleuniqueID",
"defined-tags": null,
"display-name": null,
"freeform-tags": null,
"id": null,
"lifecycle-state": "ACTIVE",
"lifecycle-state-details": "APPROVED",
"requestor-email": "[email protected]",
"requestor-name": "John Doe"
}
}

Establishing the Data Transfer Appliance Entitlement Policy


Use the following policy to enable users in a specific group to request a Data Transfer Appliance Entitlement in your
tenancy.

Allow group <group_name> to {DTA_ENTITLEMENT_CREATE} in tenancy

Appliance Entitlement Eligibility


Your request for a Data Transfer Appliance Entitlement in your tenancy may be denied if you are a free trial
customer. If your request is denied, upgrade to a full account. You can also contact your Oracle Customer Support
Manager or Oracle Support to determine your options for obtaining the entitlement.
Creating Object Storage Buckets
The Object Storage service is used to upload your data to Oracle Cloud Infrastructure. Object Storage stores objects in
a container called a bucket within a compartment in your tenancy. For details on creating the bucket to store uploaded
data, see Managing Buckets on page 3426.
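For reference, a bucket can also be created from the CLI; this is only a sketch, with placeholder compartment and bucket values:

oci os bucket create --compartment-id ocid.compartment.oc1..exampleuniqueID --name MyBucket1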
Configuring Firewall Settings
Ensure that your local environment's firewall can communicate with the Data Transfer Service running on the IP
address range 140.91.0.0/16. You also need to open access to the Object Storage IP address range 134.70.0.0/17.
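How you open this access depends on your firewall. As a minimal sketch, assuming an iptables-based host firewall and HTTPS traffic on the Control and Data Hosts, the outbound rules might look like the following:

iptables -A OUTPUT -d 140.91.0.0/16 -p tcp --dport 443 -j ACCEPT
iptables -A OUTPUT -d 134.70.0.0/17 -p tcp --dport 443 -j ACCEPT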
Creating Transfer Jobs
This section describes how to create a transfer job as part of the preparation for the data transfer. See Transfer Jobs on
page 1059 for complete details on all tasks related to transfer jobs.
A transfer job represents the collection of files that you want to transfer and signals the intention to upload those files
to Oracle Cloud Infrastructure. Identify the compartment and Object Storage bucket to which Oracle is to upload
your data. Create the transfer job in the same compartment as the upload bucket and supply a human-readable name
for the transfer job.

Note:

It is recommended that you create a compartment for each transfer job to
minimize the required access to your tenancy.
Creating a transfer job returns a transfer job ID that you specify in other transfer tasks. For example:

ocid1.datatransferjob.region1.phx..unique_ID

To create a transfer job using the Console


1. Open the navigation menu. Under Core Infrastructure, go to Object Storage and click Data Transfer -
Imports.
2. Select the designated compartment you are to use for data transfers from the list.
A list of transfer jobs that have already been created is displayed.
3. Click Create Transfer Job.
The Create Transfer Job dialog appears.
4. Enter a Job Name. Avoid entering confidential information. Then, select the Upload Bucket from the list.
5. Select Appliance for the Transfer Device Type.
6. Click Create Transfer Job.
To create a transfer job using the CLI

oci dts job create --bucket bucket --compartment-id compartment_id --display-name display_name --device-type device_type

display_name is the name of the transfer job. Avoid entering confidential information.
device_type should always be appliance for Appliance-Based Data Import jobs.
For example:

oci dts job create --bucket MyBucket1 --compartment-id ocid.compartment.oc1..exampleuniqueID --display-name MyApplianceImportJob --device-type appliance

{
"data": {
"attached-transfer-appliance-labels": [],
"attached-transfer-device-labels": [],
"attached-transfer-package-labels": [],
"compartment-id": "ocid.compartment.oc1..exampleuniqueID",
"creation-time": "2019-12-18T19:43:58+00:00",
"defined-tags": {},
"device-type": "APPLIANCE",
"display-name": "MyApplianceImportJob",
"freeform-tags": {},
"id": "ocid1.datatransferjob.oc1..exampleuniqueID",
"label": "JAKQVAGJF",
"lifecycle-state": "INITIATED",
"upload-bucket-name": "MyBucket1"
},
"etag": "2--gzip"
}

Optionally, you can specify one or more defined or freeform tags when you create a transfer job. For more
information about tagging, see Resource Tags on page 213.
Defined Tags

To specify defined tags when creating a job:

oci dts job create --bucket bucket --compartment-id compartment_id --display-name display_name --device-type appliance --defined-tags '{ "tag_namespace": { "tag_key":"value" }}'

For example:

oci dts job create --bucket MyBucket1 --compartment-id ocid.compartment.oc1..exampleuniqueID --display-name MyApplianceImportJob --device-type appliance --defined-tags '{"Operations": {"CostCenter": "01"}}'

{
"data": {
"attached-transfer-appliance-labels": [],
"attached-transfer-device-labels": [],
"attached-transfer-package-labels": [],
"compartment-id": "ocid.compartment.oc1..exampleuniqueID",
"creation-time": "2019-12-18T19:43:58+00:00",
"defined-tags": {
"operations": {
"costcenter": "01"
}
},
"device-type": "APPLIANCE",
"display-name": "MyApplianceImportJob",
"freeform-tags": {},
"id": "ocid1.datatransferjob.oc1..exampleuniqueID",
"label": "JAKQVAGJF",
"lifecycle-state": "INITIATED",
"upload-bucket-name": "MyBucket1"
},
"etag": "2--gzip"
}

Freeform Tags
To specify freeform tags when creating a job:

oci dts job create --bucket bucket --compartment-id compartment_id --display-name display_name --device-type appliance --freeform-tags '{ "tag_key":"value" }'

For example:

oci dts job create --bucket MyBucket1 --compartment-id ocid.compartment.oc1..exampleuniqueID --display-name MyApplianceImportJob --device-type appliance --freeform-tags '{"Pittsburg_Team":"brochures"}'

{
"data": {
"attached-transfer-appliance-labels": [],
"attached-transfer-device-labels": [],
"attached-transfer-package-labels": [],
"compartment-id": "ocid.compartment.oc1..exampleuniqueID",
"creation-time": "2019-12-18T19:43:58+00:00",
"defined-tags": {},
"device-type": "APPLIANCE",
"display-name": "MyApplianceImportJob",
"freeform-tags": {
"Pittsburg_Team": "brochures"

},
"id": "ocid1.datatransferjob.oc1..exampleuniqueID",
"label": "JAKQVAGJF",
"lifecycle-state": "INITIATED",
"upload-bucket-name": "MyBucket1"
},
"etag": "2--gzip"
}

Note:

Tag namespaces and tag keys are created by users with the required permissions,
and they must exist before you can specify them when creating a job. See
Working with Defined Tags on page 3950 for details.
Multiple Tags
To specify multiple tags, include the additional JSON-formatted key/value pairs in the same object, separated by commas:

oci dts job create --bucket bucket --compartment-id compartment_id --display-name display_name --device-type appliance --freeform-tags '{ "tag_key1":"value1", "tag_key2":"value2" }'

Notifications
To include notifications, include the --setup-notifications option. See Setting Up Transfer Job
Notifications from the CLI on page 1029 for more information on this feature.
Getting Transfer Job IDs
Each transfer job you create has a unique ID within Oracle Cloud Infrastructure. For example:

ocid1.datatransferjob.oc1.phx.exampleuniqueID

You will need to forward this transfer job ID to the Data Administrator.
To get the transfer job ID using the Console
1. Open the navigation menu. Under Core Infrastructure, go to Object Storage and click Data Transfer -
Imports.
2. Select the Compartment from the list.
The transfer jobs in that compartment are displayed.
3. Click the link under Transfer Jobs for the transfer job whose details you want to view.

Alternatively, you can click the Actions icon, and then click View Details.
The Details page for that transfer job appears.
4. Find the OCID field in the Details page and click Show to display it or Copy to copy it to your computer.
To get the transfer job ID using the CLI

oci dts job list --compartment-id compartment_id

For example:

oci dts job list --compartment-id ocid.compartment.oc1..exampleuniqueID

{
"data": [
{
"creation-time": "2019-12-18T19:43:58+00:00",
"defined-tags": {},

"device-type": "APPLIANCE",
"display-name": "MyApplianceImportJob",
"freeform-tags": {},
"id": "ocid1.datatransferjob.oc1..exampleuniqueID",
"label": "JAKQVAGJF",
"lifecycle-state": "INITIATED",
"upload-bucket-name": "MyBucket1"
},
{
"creation-time": "2019-10-03T16:52:26+00:00",
"defined-tags": {},
"device-type": "DISK",
"display-name": "MyDiskImportJob",
"freeform-tags": {},
"id": "ocid1.datatransferjob.oc1..exampleuniqueID",
"label": "J2AWEOL5T",
"lifecycle-state": "INITIATED",
"upload-bucket-name": "MyBucket2"
}
]
}

The ID for each transfer job is included in the return:

"id": "ocid.compartment.oc1..exampleuniqueID"

Tip:

When you create a transfer job using the oci dts job create CLI,
the transfer job ID is displayed in the CLI's return. You can also run the oci
dts job show CLI for that specific job to get the ID.
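For instance, a sketch of the show command mentioned in the tip; the --job-id option name follows the pattern of the other oci dts commands in this guide, and the OCID is a placeholder:

oci dts job show --job-id ocid1.datatransferjob.oc1..exampleuniqueID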
Setting Up Transfer Job Notifications from the CLI
You can generate notifications that send messages regarding changes to a new or existing appliance-based transfer job
through the CLI. Using this feature creates a topic, a subscription for a list of email addresses, and a rule that notifies
you on all events related to the transfer job's activities and changes in state. This method provides a more convenient
way to generate notifications tailored to appliance-based transfer jobs.
The CLI command to set up transfer job notifications is different depending on whether you are creating a new
transfer job or updating an existing transfer job. In both cases, running the CLI command prompts you to enter the
email addresses of each notification subscriber as a comma separated list. Each recipient is sent an email with a link
to confirm they want to receive the notifications.
You are prompted to enter those email addresses you want included in the notifications, separated
by commas (","). When your list is complete, add a colon (":") followed by your own email address:
[email protected],[email protected] : [email protected].
For both of the notification commands, the following is returned:

If the commands fail to run, you can use the OCI CLI to do the setup
manually:
export ROOT_COMPARTMENT_OCID=ocidv1:tenancy:oc1:exampleuniqueID
oci ons topic create --compartment-id $ROOT_COMPARTMENT_OCID --name
DTSExportTopic --description "Topic for data transfer service export jobs"
oci ons subscription create --protocol EMAIL --compartment-id
$ROOT_COMPARTMENT_OCID --topic-id $TOPIC_OCID --endpoint $EMAIL_ID
oci events rule create --display-name DTSExportRule --is-enabled
true --compartment-id $ROOT_COMPARTMENT_OCID --actions '{"actions":
[{"actionType":"ONS","topicId":"$TOPIC_OCID","isEnabled":true}]}' --
condition '{"eventType":
["com.oraclecloud.datatransferservice.addapplianceexportjob","com.oraclecloud.datatransf

--description "Rule for data transfer service to send notifications for


export jobs"
Creating topic for export

To set up notifications when creating a transfer job using the CLI


To set up notifications when creating a transfer job, include the --setup-notifications option in the CLI:

oci dts job create --bucket bucket_name --compartment-id compartment_id --display-name display_name --device-type appliance ... --setup-notifications
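For example, reusing the placeholder values from the earlier job creation examples:

oci dts job create --bucket MyBucket1 --compartment-id ocid.compartment.oc1..exampleuniqueID --display-name MyApplianceImportJob --device-type appliance --setup-notifications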

To set up notifications for an existing transfer job using the CLI


To set up notifications for an existing transfer job:

oci dts job setup-notifications --job-id job_id

For example:

oci dts job setup-notifications --job-id ocid1.datatransferjob.oc1..exampleuniqueID

If the commands fail to run, you can use the OCI CLI to do the setup
manually:
oci ons topic create --compartment-id ocid1.tenancy.oc1..exampleuniqueID --
name MyImportJob --description "Topic for data transfer service import job
with OCID ocid1.datatransferjob.oc1..exampleuniqueID"
oci ons subscription create --protocol EMAIL --compartment-id
$ROOT_COMPARTMENT_OCID --topic-id $TOPIC_OCID --subscription_endpoint
$EMAIL_ID
oci events rule create --display-name MyImportJob --is-enabled true --
compartment-id ocid1.tenancy.oc1..exampleuniqueID --actions '{"actions":
[{"actionType":"ONS","topicId":"$TOPIC_OCID","isEnabled":true}]}' --
condition
'{"eventType":"com.oraclecloud.datatransferservice.*transferjob","data":
{"resourceId":"ocid1.datatransferjob.oc1..exampleuniqueID"}}' --description
"Rule for data transfer service to send notifications for an import job
with OCID ocid1.datatransferjob.oc1..exampleuniqueID"
Creating topic DTSImportJobTopic_2pwaqq

{
"data": {
"api-endpoint": "https://cell1.notification.us-
phoenix-1.oraclecloud.com",
"compartment-id": "ocid1.tenancy.oc1..exampleuniqueID",
"defined-tags": {},
"description": "Topic for data transfer service import job with OCID
ocid1.datatransferjob.oc1..exampleuniqueID",
"etag": null,
"freeform-tags": {},
"lifecycle-state": "ACTIVE",
"name": "DTSImportJobTopic_2pwaqq",
"time-created": "2020-07-15T18:26:07.179000+00:00",
"topic-id": "ocid1.onstopic.oc1..exampleuniqueID"
},
"etag": "2e5a567d"
}

Enter email addresses to subscribe to as a comma separated list. Example:


[email protected],[email protected] : [email protected]
Creating subscription for [email protected]
{

"data": {
"compartment-id": "ocid1.tenancy.oc1..exampleuniqueID",
"created-time": 1594837577401,
"defined-tags": {},
"deliver-policy": "{\"maxReceiveRatePerSecond\":0,\"backoffRetryPolicy
\":{\"initialDelayInFailureRetry\":60000,\"maxRetryDuration\":7200000,
\"policyType\":\"EXPONENTIAL\"}}",
"endpoint": "[email protected]",
"etag": "cac2f405",
"freeform-tags": {},
"id": "ocid1.onssubscription.oc1..exampleuniqueID",
"lifecycle-state": "PENDING",
"protocol": "EMAIL",
"topic-id": "ocid1.onstopic.oc1..exampleuniqueID"
},
"etag": "cac2f405"
}

Creating rule DTSImportJobRule_2pwaqq


{
"data": {
"actions": {
"actions": [
{
"action-type": "ONS",
"description": null,
"id": "ocid1.eventaction.oc1..exampleuniqueID",
"is-enabled": true,
"lifecycle-message": null,
"lifecycle-state": "ACTIVE",
"topic-id": "ocid1.onstopic.oc1..exampleuniqueID"
}
]
},
"compartment-id": "ocid1.tenancy.oc1..exampleuniqueID",
"condition": "{\"eventType\":
\"com.oraclecloud.datatransferservice.*transferjob\",\"data\":{\"resourceId
\":\"ocid1.datatransferjob.oc1..exampleuniqueID\"}}",
"defined-tags": {},
"description": "Rule for data transfer service to send notifications for
an import job with OCID ocid1.datatransferjob.oc1..exampleuniqueID",
"display-name": "DTSImportJobRule_2pwaqq",
"freeform-tags": {},
"id": "ocid1.eventrule.oc1..exampleuniqueID",
"is-enabled": true,
"lifecycle-message": null,
"lifecycle-state": "ACTIVE",
"time-created": "2020-07-15T18:26:18.307000+00:00"
},
"etag": "aff873bfb4015b49902b97c7a6cc40588bf89b9e3deeb27b77ecce6d7a99768a"
}

Preparing Upload Configuration Files


The Project Sponsor is responsible for creating or obtaining configuration files that allow the uploading of user data
to the import appliance. Send these configuration files to the Data Administrator where they can be placed in the
Control Host (if there are separate Control and Data Hosts). The config file is for the data transfer administrator, the
IAM user with the authorization and permissions to create and manage transfer jobs. The config_upload_user
file is for the data transfer upload user, the temporary IAM user that Oracle uses to upload your data on your behalf.
Create a base Oracle Cloud Infrastructure directory and two configuration files with the required credentials.

Creating the Data Transfer Directory


Create an Oracle Cloud Infrastructure directory (.oci) on the same Control Host machine where the Oracle Cloud
Infrastructure CLI is installed. For example:

mkdir /root/.oci/

The two configuration files (config and config_upload_user) are placed in whatever location you choose.
Note:

You can store the configuration files anywhere on your Control Host. The
root directory is only given as an example.

Creating the Data Transfer Administrator Configuration File


The data transfer administrator configuration file contains the required credentials for working with Oracle Cloud
Infrastructure. You can create this file using a setup CLI or manually using a text editor.

Using the Setup CLI


Run the oci setup config command line utility to walk through the first-time setup process. The command
prompts you for the information required for the configuration file and the API public/private keys. The setup dialog
generates an API key pair and creates the configuration file.
For more information about how to find the required information, see:
• Where to Get the Tenancy's OCID and User's OCID on page 4220
• Regions and Availability Domains on page 182
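The walkthrough starts with the bare command shown below; the prompts ask for the items listed above and offer to generate an API signing key pair:

oci setup config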

Manual Setup
If you want to set up the API public/private keys yourself and write your own configuration file, see SDK and Tool
Configuration.
Tip:

Use the oci setup keys command to generate a key pair to include in
the config file.
Create the data transfer administrator configuration file /root/.oci/config with the following structure:

[DEFAULT]
user=<The OCID for the data transfer administrator>
fingerprint=<The fingerprint of the above user's public key>
key_file=<The _absolute_ path to the above user's private key file on the
host machine>
tenancy=<The OCID for the tenancy that owns the data transfer job and
bucket>
region=<The region where the transfer job and bucket should exist. Valid
values are: us-ashburn-1, us-phoenix-1, eu-frankfurt-1, uk-london-1, and
ap-osaka-1.>

For example:

[DEFAULT]
user=ocid1.user.oc1..<unique_ID>
fingerprint=4c:1a:6f:a1:5b:9e:58:45:f7:53:43:1f:51:0f:d8:45
key_file=/home/user/ocid1.user.oc1..exampleuniqueID.pem
tenancy=ocid1.tenancy.oc1..<unique_ID>
region=us-phoenix-1

For the data transfer administrator, you can create a single configuration file that contains different profile sections
with the credentials for multiple users. Then use the --profile option to specify which profile to use in the
command. Here is an example of a data transfer administrator configuration file with different profile sections:

[DEFAULT]
user=ocid1.user.oc1..exampleuniqueID
fingerprint=4c:1a:6f:a1:5b:9e:58:45:f7:53:43:1f:51:0f:d8:45
key_file=/home/user/ocid1.user.oc1..exampleuniqueID.pem
tenancy=ocid1.tenancy.oc1..exampleuniqueID
region=us-phoenix-1
[PROFILE1]
user=ocid1.user.oc1..exampleuniqueID
fingerprint=4c:1a:6f:a1:5b:9e:58:45:f7:53:43:1f:51:0f:d8:45
key_file=/home/user/ocid1.user.oc1..exampleuniqueID.pem
tenancy=ocid1.tenancy.oc1..exampleuniqueID
region=us-ashburn-1

By default, the DEFAULT profile is used for all CLI commands. For example:

oci dts job create --compartment-id ocid.compartment.oc1..exampleuniqueID --bucket MyBucket --display-name MyDisplay --device-type appliance

Instead, you can issue any CLI command with the --profile option to specify a different data transfer
administrator profile. For example:

oci dts job create --compartment-id ocid.compartment.oc1..exampleuniqueID --bucket MyBucket --display-name MyDisplay --device-type appliance --profile MyProfile

Using the example configuration file above, the profile name you would specify is PROFILE1.
If you created two separate configuration files, use the --config-file option to specify which configuration file to use:

oci dts job create --compartment-id compartment_id --bucket bucket_name --display-name display_name --config-file config_file_location

Creating the Data Transfer Upload User Configuration File


The config_upload_user configuration file is for the data transfer upload user, the temporary IAM user that
Oracle uses to upload your data on your behalf. Create this configuration file with the following structure:

[DEFAULT]
user=<The OCID for the data transfer upload user>
fingerprint=<The fingerprint of the above user's public key>
key_file=<The _absolute_ path to the above user's private key file on the
host machine>
tenancy=<The OCID for the tenancy that owns the data transfer job and
bucket>
region=<The region where the transfer job and bucket should exist. Valid
values are: us-ashburn-1, us-phoenix-1, eu-frankfurt-1, uk-london-1, and
ap-osaka-1.>

Adding Object Storage Endpoints


Include the line endpoint=url for the Object Storage API endpoint in the upload user configuration file.

For example:

endpoint=https://objectstorage.us-phoenix-1.oraclecloud.com

Click here to view a list of endpoints.


A complete configuration including the Object Storage endpoint might look like this:

[DEFAULT]
user=ocid1.user.oc1..exampleuniqueID
fingerprint=4c:1a:6f:a1:5b:9e:58:45:f7:53:43:1f:51:0f:d8:45
key_file=/home/user/ocid1.user.oc1..exampleuniqueID.pem
tenancy=ocid1.tenancy.oc1..exampleuniqueID
region=us-phoenix-1
endpoint=https://objectstorage.us-phoenix-1.oraclecloud.com

Important:

Creating an upload user configuration file with multiple profiles is not


supported.

Configuration File Entries


The following table lists the basic entries that are required for each configuration file and where to get the information
for each entry.
Note:

Data Transfer Service does not support passphrases on the key files for both
data transfer administrator and data transfer upload user.

Entry        Description and Where to Get the Value                              Required?

user         OCID of the data transfer administrator or the data transfer        Yes
             upload user, depending on which profile you are creating. To get
             the value, see Required Keys and OCIDs on page 4215.

fingerprint  Fingerprint for the key pair being used. To get the value, see      Yes
             Required Keys and OCIDs on page 4215.

key_file     Full path and filename of the private key.                          Yes
             Important: The key pair must be in PEM format. For instructions
             on generating a key pair in PEM format, see Required Keys and
             OCIDs on page 4215.

tenancy      OCID of your tenancy. To get the value, see Required Keys and       Yes
             OCIDs on page 4215.

region       An Oracle Cloud Infrastructure region. See Regions and              Yes
             Availability Domains on page 182.
             Data transfer is supported in US East (Ashburn), US West
             (Phoenix), Germany Central (Frankfurt), and UK South (London).

You can verify the data transfer upload user credentials using the following command:

oci dts job verify-upload-user-credentials --bucket bucket_name
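For example, assuming the upload bucket used earlier in this guide:

oci dts job verify-upload-user-credentials --bucket MyBucket1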

Requesting the Import Appliance


This section describes how to request an import appliance from Oracle for copying your data to Oracle Cloud
Infrastructure. See Import Appliances on page 1068 for complete details on all tasks related to import appliances.
Oracle Cloud Infrastructure customers can use import appliances to migrate data for free. You are only charged for
Object Storage usage once the data is successfully transferred to your designated bucket. All appliance requests still
require approval from Oracle.
Tip:

To save time, identify the data you intend to upload and make data copy
preparations before requesting the import appliance.
Creating an appliance request returns an Oracle-assigned appliance label. For example:

XA8XM27EVH

To request an appliance using the Console


1. Open the navigation menu. Under Core Infrastructure, go to Object Storage and click Data Transfer -
Imports.
Choose the transfer job for which you want to request an import appliance.
2. Click Request Transfer Appliance under Transfer Appliances.
The Request Transfer Appliance dialog appears.

3. Provide the shipping address details where you want the import appliance sent.

• Company Name: Required. Specify the name of the company that owns the data being migrated to Oracle
Cloud Infrastructure.
• Recipient Name: Required. Specify the name of the recipient of the import appliance.
• Recipient Phone Number: Required. Specify the recipient's phone number.
• Recipient Email Address: Required. Specify the recipient's email address.
• Care Of: Optional intermediary party responsible for transferring the import appliance shipment from the
delivery vendor to the intended recipient.
• Address Line 1: Required. Specify the street address to where the import appliance is sent.
• Address Line 2: Optional identifying address details like building, suite, unit, or floor information.
• City/Locality: Required. Specify the city or locality.
• State/Province/Region: Required. Specify the state, province, or region.
• Zip/Postal Code: Specify the zip code or postal code.
• Country: Required. Select the country.
4. Click Request Transfer Appliance.
To request an appliance using the CLI

oci dts appliance request --job-id job_id --addressee addressee --care-of care_of --address1 address_line1 --city-or-locality city_or_locality --state-province-region state_province_region --country country --zip-postal-code zip_postal_code --phone-number phone_number --email email [OPTIONS]

<options> are:
• --address2: Optional address of the addressee (line 2).
• --address3: Optional address of the addressee (line 3).
• --address4: Optional address of the addressee (line 4).
• --from-json: Provide input to this command as a JSON document from a file using the file://path-to/file
syntax. The --generate-full-command-json-input option can be used to generate a sample JSON file
to be used with this command option. The key names are pre-populated and match the command option names
(converted to camelCase format, e.g. compartment-id --> compartmentId), while the values of the keys need to
be populated by the user before using the sample file as an input to this command. For any command option that
accepts multiple values, the value of the key can be a JSON array. Options can still be provided on the command
line. If an option exists in both the JSON document and the command line then the command line specified value
will be used. For examples on usage of this option, please see our "using CLI with advanced JSON options" link:
https://docs.cloud.oracle.com/iaas/Content/API/SDKDocs/cliusing.htm#AdvancedJSONOptions.
For example:

oci dts appliance request --job-id ocid1.datatransferjob.oc1..exampleuniqueID --addressee MyCompany --care-of "John Doe" --address1 "123 Main Street" --city-or-locality Anytown --state-province-region NY --country USA --zip-postal-code 12345 --phone-number 8005551212 --email [email protected]

{
"data": {
"appliance-delivery-tracking-number": null,
"appliance-delivery-vendor": null,
"appliance-return-delivery-tracking-number": null,
"creation-time": "2020-05-20T22:08:13+00:00",
"customer-received-time": null,
"customer-returned-time": null,
"customer-shipping-address": {
"address1": "123 Main Street",
"address2": null,
"address3": null,

"address4": null,
"addressee": "MyCompany",
"care-of": "John Doe",
"city-or-locality": "Anytown",
"country": "USA",
"email": "[email protected]",
"phone-number": "3115551212",
"state-or-region": "NY",
"zipcode": "12345"
},
"delivery-security-tie-id": null,
"label": "XAKWEGKZ5T",
"lifecycle-state": "REQUESTED",
"next-billing-time": null,
"return-security-tie-id": null,
"serial-number": null,
"transfer-job-id": "ocid1.datatransferjob.oc1..exampleuniqueID",
"upload-status-log-uri": "JAKQVAGJF/XAKWEGKZ5T/upload_summary.txt"
}
}

When you submit an appliance request, Oracle generates a unique label ("label": "XAKWEGKZ5T") to identify the
import appliance, and your request is sent to Oracle for approval and processing.
Setting Up Import Appliance Request Notifications from the CLI
You can generate notifications that send messages regarding changes to your import appliance request by using
the setup-notifications command through the CLI. Running this command creates a topic, a subscription
for a list of email addresses, and a rule that notifies you on all events related to the import appliance request's
activities and changes in state. This method provides a more convenient way to generate notifications tailored to
import appliance requests.
Running the CLI command prompts you to enter the email addresses of each notification subscriber as a comma
separated list. Each recipient is sent an email with a link to confirm they want to receive the notifications.
Note:

Setting up notifications from the CLI affects all import appliances in your
tenancy. You cannot specify notifications for individual appliances.

Setting Up Notifications for a New Import Appliance Request


To include job notifications when requesting an import appliance, include the --setup-notifications option
in the CLI:

oci dts appliance request --job-id job_id --addressee addressee --address1 address_line1 --city-or-locality city_or_locality --state-or-region state_or_region --country country --zip-code zip ... --setup-notifications

Setting up Notifications for an Existing Import Appliance Request


To set up notifications for an existing import appliance request, run the appliance setup-notifications
CLI on the appliance:

oci dts appliance setup-notifications --appliance-label appliance_label
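For example, assuming the appliance label returned by the earlier request:

oci dts appliance setup-notifications --appliance-label XAKWEGKZ5T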

Notifying the Data Administrator


When you have completed all the tasks in this topic, provide the Data Administrator with the following:
• IAM login credentials

• Oracle Cloud Infrastructure CLI configuration files


• Transfer job ID
• Appliance label
What's Next
You are now ready to configure your system for the data transfer. See Configuring Appliance Data Imports on page
1038.

Configuring Appliance Data Imports

This topic describes the tasks associated with configuring the Appliance-Based Data Import. The Infrastructure
Engineer role typically performs these tasks. See Roles and Responsibilities on page 1017.
Unpacking and Connecting the Import Appliance to the Network
When the shipping vendor delivers your import appliance, Oracle updates the status to Delivered and provides the
date and time the appliance was received in the Transfer Appliance Details.
Important:

Your import appliance arrives in a transit case with a telescoping handle and
wheels. These features make it easy to move the appliance to the location where
you intend to place it to upload your data.
Retain all packaging materials! When shipping the import appliance back
to Oracle, you must package the appliance in the same manner and packaging
in which the appliance was received.
Here are the tasks involved in unpacking and getting your import appliance ready to configure.
1. Inspect the tamper-evident security tie on the transit case.
If the appliance was tampered with during transit, the tamper-evident security tie serves to alert you.
Caution:

If the security tie is damaged or is missing, do not plug the appliance into
your network! Immediately file a Service Request (SR).
2. Remove and compare the number on the security tie with the number logged by Oracle.
To see the security tie number logged by Oracle using the Console:
a. Open the navigation menu. Under Core Infrastructure, go to Object Storage and click Data Transfer -
Imports.
b. Find the transfer job and import appliance associated with the removed security tie.
c. Click the Actions icon, and then click View Details.
d. Look at the contents of the Send Security Tie ID field in the Transfer Appliance Details and compare that
number with the number on the physical tag.
To see the security tie number logged by Oracle using the CLI:

oci dts appliance show --job-id job_id --appliance-label appliance_label

For example:

oci dts appliance show --job-id ocid1.datatransferjob.oc1..exampleuniqueID --appliance-label XAKWEGKZ5T

{
"data": {
"appliance-delivery-tracking-number": null,
"appliance-delivery-vendor": null,
"appliance-return-delivery-tracking-number": null,
"creation-time": "2020-05-20T22:08:13+00:00",
"customer-received-time": null,
"customer-returned-time": null,
"customer-shipping-address": {
"address1": "123 Main Street",
"address2": null,
"address3": null,
"address4": null,
"addressee": "MyCompany",
"care-of": "John Doe",
"city-or-locality": "Anytown",
"country": "USA",
"email": "[email protected]",
"phone-number": "3115551212",
"state-or-region": "NY",
"zipcode": "12345"
},
"delivery-security-tie-id": "exampleuniqueID",
"label": "XAKWEGKZ5T",
"lifecycle-state": "PROCESSING",
"next-billing-time": null,
"return-security-tie-id": "exampleuniqueID",
"serial-number": "exampleuniqueserialnumber",
"transfer-job-id": "ocid1.datatransferjob.oc1..exampleuniqueID",
"upload-status-log-uri": "JAKQVAGJF/XAKWEGKZ5T/upload_summary.txt"
}
}

Compare the value of the delivery-security-tie-id attribute with the number on the physical tag to
ensure they match.
Caution:

If the number on the physical security tie does not match the number
logged by Oracle, do not plug the appliance into your network!
Immediately file a Service Request (SR).
Note:

Keep possession of the security tie after you have finished unpacking
and connecting the appliance. Include it when returning the appliance to
Oracle. Failure to include the security tie can result in a delay in the data
migration process.
3. Open the transit case and ensure that the case contains the following items:
• Appliance unit and power cable (two types of power cables provided: C14 and C13 to 14)
• USB to DB-9 serial cable
• Return shipping instructions (retain these instructions)
• Return shipping label, label sleeve, tie-on tag, and zip tie
• Return shipment tamper-evident security tie (use this tie to secure the transit case for the return shipment to Oracle)

4. Compare the number on the return shipment security tie with the number logged by Oracle.
To see the security tie number logged by Oracle using the Console:
a. Open the navigation menu. Under Core Infrastructure, go to Object Storage and click Data Transfer -
Imports.
b. Find the transfer job and import appliance associated with the return shipment security tie.
c. Click the Actions icon, and then click View Details.
d. Look at the contents of the Return Security Tie ID field in the Transfer Appliance Details and compare that
number with the number on the physical tag.
To see the security tie number logged by Oracle using the CLI:

oci dts appliance show --job-id job_id --appliance-label appliance_label

For example:

oci dts appliance show --job-id ocid1.datatransferjob.oc1..exampleuniqueID --appliance-label XAKWEGKZ5T

{
"data": {
"appliance-delivery-tracking-number": null,
"appliance-delivery-vendor": null,
"appliance-return-delivery-tracking-number": null,
"creation-time": "2020-05-20T22:08:13+00:00",
"customer-received-time": null,
"customer-returned-time": null,
"customer-shipping-address": {
"address1": "123 Main Street",
"address2": null,
"address3": null,
"address4": null,
"addressee": "MyCompany",
"care-of": "John Doe",
"city-or-locality": "Anytown",
"country": "USA",
"email": "[email protected]",
"phone-number": "3115551212",
"state-or-region": "NY",
"zipcode": "12345"
},
"delivery-security-tie-id": "exampleuniqueID",
"label": "XAKWEGKZ5T",
"lifecycle-state": "PROCESSING",
"next-billing-time": null,
"return-security-tie-id": "exampleuniqueID",
"serial-number": "exampleuniqueserialnumber",
"transfer-job-id": "ocid1.datatransferjob.oc1..exampleuniqueID",
"upload-status-log-uri": "JAKQVAGJF/XAKWEGKZ5T/upload_summary.txt"
}
}

Compare the value of the return-security-tie-id attribute with the number on the physical tag to ensure
they match.
Caution:

If the number on the return security tie does not match the number logged
by Oracle, file a Service Request (SR). These security tie numbers must
match or Oracle cannot upload data from your returned appliance.

5. Remove the import appliance from the case and place the appliance on a solid surface or in a rack.
Caution:

We recommend getting assistance when lifting the appliance out of the transit case and
placing it in a rack or on a desktop. The total shipping weight
is about 64 lbs (29.0299 kg) and the appliance weight is 38 lbs (17.2365 kg).
6. Connect the appliance to your local network using one of the following:
• 10GBase-T: Standard RJ-45
• SFP+: The transceiver must be compatible with Intel X520 NICs.
7. Attach one of the provided power cords to the appliance and plug the other end into a grounded power source.
8. Turn on the appliance by flipping the power switch on the back of the appliance.
Connecting the Import Appliance to the Terminal Emulation Host
Connect the import appliance to your designated Terminal Emulation Host computer using the provided USB to DB-9
serial cable.
Note:

You might need to download the driver for this cable on your Terminal
Emulation Host: https://www.cablestogo.com/product/26887/5ft-usb-to-db9-
male-serial-rs232-adapter-cable#support

Setting Up Terminal Emulation


Appliance-based transfers require you to set up your host for terminal emulation so you can communicate with the
appliance through the appliance's serial console. This communication requires installing serial console terminal
emulator software. We recommend using the following:
• PuTTY for Windows
• ZOC for OS X
• PuTTY or Minicom for Linux
Configure the following terminal emulator software settings:
• Baud Rate: 115200
• Emulation: VT102
• Handshaking: Disabled/off
• RTS/DTS: Disabled/off
Note:

PuTTY does not allow you to configure all of these settings individually.
However, you can configure the PuTTY default settings by selecting the
Serial connection type and specifying "115200" for the Serial Line baud
speed. This configuration is sufficient to use PuTTY as a terminal emulator
for the appliance.
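As a sketch for a Linux Terminal Emulation Host, you might start Minicom against the serial device directly. The device name /dev/ttyUSB0 is an assumption; it depends on how the USB to DB-9 adapter enumerates on your host:

minicom -b 115200 -D /dev/ttyUSB0

Disable hardware and software flow control in Minicom's serial port setup (minicom -s) to match the settings listed above.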
Configuring the Import Appliance Networking
When the import appliance boots up, an appliance serial console configuration menu is displayed on the Terminal
Emulation Host to which the appliance is connected.

Oracle Cloud Data Transfer Appliance


- For use with minimum dts version: dts-0.4.140
- See "Help" for determining your dts version

1) Configure Networking
2) Show Networking
3) Reset Authentication
4) Show Authentication
5) Show Status
6) Collect Appliance Diagnostic Information
7) Generate support bundle
8) Shutdown Appliance
9) Reboot Appliance
10) Help

Select a command:

Note:

It can take up to 5 minutes for the serial console menu to display. Press Enter
if you do not see the serial console configuration menu after this amount of
time.
The appliance supports a single active network interface on any of the 10-Gbps network ports. If only one interface is
cabled and active, that interface is chosen automatically. When multiple interfaces are active, you are given the choice
to select the interface to use.
To configure your import appliance networking:
1. Access the Terminal Emulation Host and select Configure Networking from the appliance serial console menu.
2. Provide the required networking information when prompted:
• IP Address: IP address of the appliance.
• Subnet Mask Length: The number of leading 1 bits in the subnet mask. For example, if the subnet mask is
255.255.255.0, then the length is 24.
• Default Gateway: Default gateway for network communications.
For example:

Configure Networking:
^C to cancel

Configuring IP address, subnet mask length, gateway


Example:
IP Address : 10.0.0.2
Subnet Mask Length : 24
Gateway : 10.0.0.1

Address: 10.0.0.1
Subnet Mask Length: 24
Gateway: 10.0.0.1

Configuring IP address 10.0.0.1 netmask 255.255.255.0 default gateway 10.0.0.1
Enabling enp0s3
Now trying to restart the network

Network configuration is complete

New authentication material created.

Client access token : 4iH1gw1okPJO


Appliance certificate MD5 fingerprint :
BF:C6:49:9B:25:FE:9F:64:06:7E:DF:F5:F9:E5:C6:56
Press ENTER to return...

When you configure a network interface, the appliance software generates a new client access token and appliance
X.509/SSL certificate. The access token is used to authorize your Control Host to communicate with the Data
Transfer Appliance's Management Service. The X.509/SSL certificate is used to encrypt communications with the
Data Transfer Appliance's Management Service over the network. Provide the access token and SSL certificate
fingerprint values displayed here when you use the CLI commands to initialize authentication on your host machine.
You can change the selected interface, network information, and reset the authentication material at any time by
selecting Configure Networking again from the appliance serial console menu.
Notifying the Data Administrator
After completing the tasks in this topic, send the following import appliance information to the Data Administrator:
• Appliance IP address
• Access token
• SSL certificate fingerprint
What's Next
You are now ready to load your data to the disk. See Copying Data to the Import Appliance on page 1043.

Copying Data to the Import Appliance

This topic describes the tasks associated with copying data from the Data Host to the import appliance using the
Control Host. The Data Administrator role typically performs these tasks. See Roles and Responsibilities on page
1017.
Note:

You can only run Oracle Cloud Infrastructure CLI commands from a Linux
host. This differs from running CLI commands for other Oracle Cloud
Infrastructure Services on a variety of host operating systems. Appliance-
based commands require validation that is only available on Linux hosts.
Information Prerequisites
Before performing any import appliance copying tasks, you must obtain the following information:
• Appliance IP address: This is typically provided by the Infrastructure Engineer.
• IAM login information, Data Transfer Utility configuration files, transfer job ID, and appliance label: These are
typically provided by the Project Sponsor.
Setting Up an HTTP Proxy Environment
You might need to set up an HTTP proxy environment on the Control Host to allow access to the public internet. This
proxy environment allows the Oracle Cloud Infrastructure CLI to communicate with the Data Transfer Appliance
Management Service and the import appliance over a local network connection. If your environment requires
internet-aware applications to use network proxies, configure the Control Host to use your environment's network
proxies by setting the standard Linux environment variables on your Control Host.
Assume that your organization has a corporate internet proxy at http://www-proxy.myorg.com and that the
proxy is an HTTP address at port 80. You would set the following environment variable:

export HTTPS_PROXY=http://www-proxy.myorg.com:80

If you configured a proxy on the Control Host and the appliance is directly connected to that host, the Control Host
tries, unsuccessfully, to communicate with the appliance through the proxy. Set a NO_PROXY environment variable
for the appliance. For example, if the appliance is on a local network at 10.0.0.1, you would set the following
environment variable:

export NO_PROXY=10.0.0.1
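For reference, here is a minimal sketch that sets both variables for a single shell session, reusing the hypothetical proxy host and appliance address from the examples above:

# Corporate proxy for Oracle Cloud Infrastructure endpoints, bypassed for the locally attached appliance
export HTTPS_PROXY=http://www-proxy.myorg.com:80
export NO_PROXY=10.0.0.1

Add the exports to your shell profile if you need them to persist across sessions.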

Setting Firewall Access


If you have a restrictive firewall in the environment where you are using the Oracle Cloud Infrastructure CLI, you
might need to open your firewall configuration to the following IP address range: 140.91.0.0/16.
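How you open that range depends on your firewall. As a sketch only, assuming an iptables-based firewall on the Control Host and that outbound HTTPS (port 443) to the Oracle range is what must be permitted:

# Allow outbound HTTPS from the Control Host to the Oracle IP range used by the CLI
sudo iptables -A OUTPUT -p tcp -d 140.91.0.0/16 --dport 443 -j ACCEPT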
Initializing Authentication to the Import Appliance
Note:

You can only use the Oracle Cloud Infrastructure CLI to initialize
authentication.
Initialize authentication to allow the host machine to communicate with the import appliance. Use the values returned
from the Configure Networking command. See Configuring the Transfer Appliance Networking for details.
To initialize authentication using the CLI
Perform this task using the following CLI command. There is no Console equivalent.

oci dts physical-appliance initialize-authentication --job-id job_id --appliance-cert-fingerprint fingerprint --appliance-ip ip_address --appliance-label appliance_label

For example:

oci dts physical-appliance initialize-authentication --job-id ocid1.datatransferjob.oci1..exampleuniqueID --appliance-cert-fingerprint F7:1B:D0:45:DA:04:0C:07:1E:B2:23:82:E1:CA:1A:E9 --appliance-ip 10.0.0.1 --appliance-label XA8XM27EVH

When prompted, supply the access token. For example:

oci dts physical-appliance initialize-authentication --appliance-cert-fingerprint 86:CA:90:9E:AE:3F:0E:76:E8:B4:E8:41:2F:A4:2C:38 --appliance-ip 10.0.0.5 --job-id ocid1.datatransferjob.oc1..exampleuniqueID --appliance-label XAKKJAO9KT

Retrieving the Appliance serial id from Oracle Cloud Infrastructure.


Access token ('q' to quit):
Found an existing appliance. Is it OK to overwrite it? [y/n]y
Registering and initializing the authentication between the dts CLI and the
appliance
Appliance Info :
encryptionConfigured : false
lockStatus : NA
finalizeStatus : NA
totalSpace : Unknown
availableSpace : Unknown

The Control Host can now communicate with the import appliance.
To show details about the connected appliance using the CLI
At the command prompt on the host, run oci dts physical-appliance show to show the status of the
connected import appliance.

oci dts physical-appliance show

For example:

oci dts physical-appliance show

Appliance Info :
encryptionConfigured : false
lockStatus : NA
finalizeStatus : NA
totalSpace : Unknown
availableSpace : Unknown

Configuring Import Appliance Encryption


Configure the import appliance to use encryption. Oracle Cloud Infrastructure creates a strong passphrase for each
appliance. The command securely collects the strong passphrase from Oracle Cloud Infrastructure and sends that
passphrase to the Data Transfer service.
If your environment requires internet-aware applications to use network proxies, ensure that you set up the required
Linux environment variables. See Setting Up an HTTP Proxy Environment on page 1043 for more information.
Important:

If you are working with multiple appliances at the same time, be sure the job
ID and appliance label you specify in this step matches the physical appliance
you are currently working with. You can get the serial number associated
with the job ID and appliance label using the Console or the Oracle Cloud
Infrastructure CLI. You can find the serial number of the physical appliance
on the back of the device on the agency label.
Note:

You can only use the Oracle Cloud Infrastructure CLI to configure
encryption.
To configure import appliance encryption using the CLI
At the command prompt on the host, run oci dts physical-appliance configure-encryption to
configure import appliance encryption.

oci dts physical-appliance configure-encryption --job-id job_id --appliance-label appliance_label

For example:

oci dts physical-appliance configure-encryption --job-id ocid1.datatransferjob.region1.phx..exampleuniqueID --appliance-label XA8XM27EVH

Moving the state of the appliance to preparing...


Passphrase being retrieved...
Configuring encryption...
Encryption configured. Getting physical transfer appliance info...
{
"data": {
"availableSpaceInBytes": "Unknown",
"encryptionConfigured": true,
"finalizeStatus": "NA",
"lockStatus": "LOCKED",
"totalSpaceInBytes": "Unknown"
}
}

Unlocking the Import Appliance


You must unlock the appliance before you can write data to it. Unlocking the appliance requires the strong passphrase
that is created by Oracle Cloud Infrastructure for each appliance.
Unlock the appliance using one of the following ways:

• If you provide the --job-id and --appliance-label when running the unlock command, the data
transfer system retrieves the passphrase from Oracle Cloud Infrastructure and sends it to the appliance during the
unlock operation.
• You can query Oracle Cloud Infrastructure for the passphrase and provide that passphrase when prompted during
the unlock operation.
Important:

It can take up to 10 minutes to unlock an appliance the first time. Subsequent unlocks are not as time-consuming.
Note:

You can only use the Oracle Cloud Infrastructure CLI to unlock the import
appliance.
To unlock the appliance and send the passphrase to the appliance using the CLI

oci dts physical-appliance unlock --job-id job_id --appliance-label appliance_label

For example:

oci dts physical-appliance unlock --job-id ocid1.datatransferjob.oc1..exampleuniqueID --appliance-label XAKWEGKZ5T
Retrieving the passphrase from Oracle Cloud Infrastructure
{
"data": {
"availableSpaceInBytes": "64.00GB",
"encryptionConfigured": true,
"finalizeStatus": "NOT_FINALIZED",
"lockStatus": "NOT_LOCKED",
"totalSpaceInBytes": "64.00GB"
}
}

To query Oracle Cloud Infrastructure for the passphrase to unlock the import appliance using the CLI

oci dts appliance get-passphrase --job-id job_id --appliance-label appliance_label

For example:

oci dts appliance get-passphrase --job-id ocid1.datatransferjob.oc1..exampleuniqueID --appliance-label XAKWEGKZ5T

{
"data": {
"encryption-passphrase": "passphrase"
}
}

Run oci dts physical-appliance unlock without --job-id and --appliance-label and supply the
passphrase when prompted to complete the task:

oci dts physical-appliance unlock

Creating NFS Datasets


A dataset is a collection of files that are treated similarly. You can write up to 100 million files onto the import
appliance for migration to Oracle Cloud Infrastructure. We currently support one dataset per appliance. Appliance-Based Data Import supports NFS versions 3, 4, and 4.1 to write data to the appliance. In preparation for writing data,
create and configure a dataset to write to. See Datasets on page 1077 for complete details on all tasks related to
datasets.
To create a dataset using the CLI

oci dts nfs-dataset create --name dataset_name

For example:

oci dts nfs-dataset create --name nfs-ds-1

Creating dataset with NFS export details nfs-ds-1


{
"data": {
"datasetType": "NFS",
"name": "nfs-ds-1",
"nfsExportDetails": {
"exportConfigs": null
},
"state": "INITIALIZED"
}
}

Configuring Export Settings on the Dataset


To configure export settings on a dataset using the CLI

oci dts nfs-dataset set-export --name dataset_name --rw true --world true

For example:

oci dts nfs-dataset set-export --name nfs-ds-1 --rw true --world true

Settings NFS exports to dataset nfs-ds-1


{
"data": {
"datasetType": "NFS",
"name": "nfs-ds-1",
"nfsExportDetails": {
"exportConfigs": [
{
"hostname": null,
"ipAddress": null,
"readWrite": true,
"subnetMaskLength": null,
"world": true
}
]
},
"state": "INITIALIZED"
}
}

Here is another example of creating the export to give read/write access to a subnet:

oci dts nfs-dataset set-export --name nfs-ds-1 --ip 10.0.0.0 --subnet-mask-length 24 --rw true --world false

Settings NFS exports to dataset nfs-ds-1


{
"data": {
"datasetType": "NFS",
"name": "nfs-ds-1",
"nfsExportDetails": {
"exportConfigs": [
{
"hostname": null,
"ipAddress": "10.0.0.0",
"readWrite": true,
"subnetMaskLength": "24",
"world": false
}
]
},
"state": "INITIALIZED"
}
}

Activating the Dataset


Activation creates the NFS export, making the dataset accessible to NFS clients.
To activate the dataset using the CLI

oci dts nfs-dataset activate --name dataset_name

For example:

oci dts nfs-dataset activate --name nfs-ds-1

Fetching all the datasets


Activating dataset small-files
Dataset nfs-ds-1 activated

Setting Your Data Host as an NFS Client


Note:

Only Linux machines can be used as Data Hosts.


Set up your Data Host as an NFS client:
• For Debian or Ubuntu, install the nfs-common package. For example:

sudo apt-get install nfs-common


• For Oracle Linux or Red Hat Linux, install the nfs-utils package. For example:

sudo yum install nfs-utils
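After the client packages are installed, you can optionally confirm that the appliance is exporting an activated dataset before you mount it. This sketch assumes the appliance IP address from the earlier networking example (10.0.0.1) and uses the showmount utility that ships with the NFS client packages:

# List the NFS exports advertised by the import appliance
showmount -e 10.0.0.1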

Mounting the NFS Share


To mount the NFS share
At the command prompt on the Data Host, create the mountpoint directory:

mkdir -p /mnt/<mountpoint>

For example:

mkdir -p /mnt/nfs-ds-1

Next, use the mount command to mount the NFS share.

mount -t nfs <appliance_ip>:/data/<dataset_name> <mountpoint>

For example:

mount -t nfs 10.0.0.1:/data/nfs-ds-1 /mnt/nfs-ds-1

Note:

The appliance IP address in this example (10.0.0.1) might be different from the one you use for your appliance.
After the NFS share is mounted, you can write data to the share.
Copying Files to the NFS Share
You can only copy regular files to transfer appliances. You cannot copy special files, such as symbolic links, device
special files, sockets, and pipes, directly to the Data Transfer Appliance. See the following section for instructions on how
to prepare special files.
Important:

• Individual files being copied to the transfer appliance cannot exceed 9.76
TB.
• Do not fill up the transfer appliance to 100% capacity. There must be
space available to generate metadata and for the manifest file to perform
the upload to Object Storage. At least 1 GB of free disk space is needed
for this area.
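For example, here is a minimal sketch of copying a directory tree of regular files onto the share mounted earlier. The source path /data/projects is hypothetical; /mnt/nfs-ds-1 is the mount point from the previous example:

# Copy regular files to the mounted NFS share; -a preserves permissions and timestamps
rsync -av --progress /data/projects/ /mnt/nfs-ds-1/projects/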
Copying Special Files
To transfer special files, create a tar archive of these files and copy the tar archive to the Data Transfer Appliance. We
recommend copying many small files using a tar archive. Copying a single compressed archive file should also take
less time than running copy commands such as cp -r or rsync.
Here are some examples of creating a tar archive and getting it onto the Data Transfer Appliance:
• Running a simple tar command:

tar -cvzf /mnt/nfs-dts-1/filesystem.tgz filesystem/


• Running a command to create a file with md5sum hashes for each file in addition to the tar archive:

tar cvzf /mnt/nfs-dts-1/filesystem.tgz filesystem/ | xargs -I '{}' sh -c "test -f '{}' && md5sum '{}'" | tee tarzip_md5

The tar archive file filesystem.tgz has a base64 md5sum once it is uploaded to OCI Object Storage. Store
the tarzip_md5 file where you can retrieve it. After the compressed tar archive file is downloaded from Object
Storage and unpacked, you can compare the individual files against the hashes in the file.
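Later, after the compressed archive is downloaded and unpacked, a sketch of that comparison (run from the directory where the filesystem/ tree was originally created, so the relative paths recorded in tarzip_md5 resolve) is:

# Unpack the downloaded archive, then check every file against the hashes captured earlier
tar -xvzf filesystem.tgz
md5sum -c tarzip_md5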
Deactivating the Dataset
Note:

Deactivating the dataset is only required if you are running appliance commands using the Data Transfer Utility. If you are using the Oracle Cloud Infrastructure CLI to run your Appliance-Based Data Import, you can skip this step and proceed to Sealing the Dataset on page 1050.
After you are done writing data, deactivate the dataset. Deactivation removes the NFS export on the dataset,
disallowing any further writes.

To deactivate the dataset using the CLI


At the command prompt on the host, run oci dts nfs-dataset deactivate to deactivate the NFS dataset.

oci dts nfs-dataset deactivate --name <dataset_name>

For example:

oci dts nfs-dataset deactivate --name nfs-ds-1

Sealing the Dataset


Sealing a dataset stops all writes to the dataset. This process can take some time to complete, depending upon the
number of files and total amount of data copied to the import appliance.
If you issue the seal command without the --wait option, the seal operation is triggered and runs in the
background. You are returned to the command prompt and can use the seal-status command to monitor the
sealing status. Running the seal command with the --wait option results in the seal operation being triggered and
continues to provide status updates until sealing completion.
The sealing operation generates a manifest across all files in the dataset. The manifest contains an index of the copied
files and generated data integrity hashes.
To seal the dataset using the CLI

oci dts nfs-dataset seal --name dataset_name [--wait]

For example:

oci dts nfs-dataset seal --name nfs-ds-1

Seal initiated. Please use seal-status command to get progress.
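Alternatively, a sketch of the same seal run with the --wait option described above, so the command blocks and reports status until sealing completes:

oci dts nfs-dataset seal --name nfs-ds-1 --wait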

To monitor the dataset sealing process using the CLI

oci dts nfs-dataset seal-status --name dataset_name

For example:

oci dts nfs-dataset seal-status --name nfs-ds-1

{
"data": {
"bytesProcessed": 2803515612507,
"bytesToProcess": 2803515612507,
"completed": true,
"endTimeInMs": 1591990408804,
"failureReason": null,
"numFilesProcessed": 182,
"numFilesToProcess": 182,
"startTimeInMs": 1591987136180,
"success": true
}
}

Note:

If changes are necessary after sealing a dataset or finalizing an appliance, you must reopen the dataset to modify the contents. See Reopening a Dataset on page 1080.

Downloading the Dataset Seal Manifest


After sealing the dataset, you can optionally download the dataset's seal manifest to a user-specified location. The
manifest file contains the checksum details of all the files. The transfer site uploader consults the manifest file to
determine the list of files to upload to Object Storage. For every uploaded file, it validates that the checksum reported
by Object Storage matches the checksum in the manifest. This validation ensures that no files were corrupted in transit.
To download the dataset seal manifest file using the CLI

oci dts nfs-dataset get-seal-manifest --name dataset_name --output-file output_file_path

For example:

oci dts nfs-dataset get-seal-manifest --name nfs-ds-1 --output-file ~/Downloads/seal-manifest

Finalizing the Import Appliance


Note:

You can only use the CLI commands to finalize the import appliance.
Finalizing an appliance tests and copies the following to the appliance:
• Upload user configuration credentials
• Private PEM key details
• Name of the upload bucket
The credentials, API key, and bucket are required for Oracle to be able to upload your data to Oracle Cloud
Infrastructure Object Storage. When you finalize an appliance, you can no longer access the appliance for dataset
operations unless you unlock the appliance. See Reopening a Dataset on page 1080 if you need to unlock an
appliance that was finalized.
Important:

If you are working with multiple appliances at the same time, be sure the job
ID and appliance label you specify in this step matches the physical appliance
you are currently working with. You can get the serial number associated
with the job ID and appliance label using the Console or the Oracle Cloud
Infrastructure CLI. You can find the serial number of the physical appliance
on the back of the device on the agency label.
To finalize the import appliance
1. Seal the dataset before finalizing the import appliance. See Sealing the Dataset on page 1050.
2. Open a command prompt on the host and run oci dts physical-appliance finalize to finalize an
appliance.

oci dts physical-appliance finalize --job-id job_id --appliance-label appliance_label

For example:

oci dts physical-appliance finalize --job-id ocid1.datatransferjob.region1.phx..exampleuniqueID --appliance-label XAKWEGKZ5T
Retrieving the upload summary object name from Oracle Cloud Infrastructure
Retrieving the upload bucket name from Oracle Cloud Infrastructure
Validating the upload user credentials
Create object BulkDataTransferTestObject in bucket MyBucket using upload
user
Overwrite object BulkDataTransferTestObject in bucket MyBucket using upload user
Inspect object BulkDataTransferTestObject in bucket MyBucket using upload
user
Read bucket metadata MyBucket using upload user
Storing the upload user configuration and credentials on the transfer
appliance
Finalizing the transfer appliance...
The transfer appliance is locked after finalize. Hence the finalize status
will be shown as NA. Please unlock the transfer appliance again to see
the correct finalize status
Changing the state of the transfer appliance to FINALIZED
{
"data": {
"availableSpaceInBytes": "Unknown",
"encryptionConfigured": true,
"finalizeStatus": "NA",
"lockStatus": "LOCKED",
"totalSpaceInBytes": "Unknown"
}
}

Note:

If changes are necessary after sealing a dataset or finalizing an appliance, you must reopen the dataset to modify the contents. See Reopening a Dataset on page 1080.
What's Next
You are now ready to ship your import appliance with the copied data to Oracle. See Shipping the Import Appliance
on page 1052.

Shipping the Import Appliance

This topic describes the tasks associated with shipping the import appliance containing the copied data to Oracle. The
Infrastructure Engineer role typically performs these tasks. See Roles and Responsibilities on page 1017.
Note:

You can only run Oracle Cloud Infrastructure CLI commands from a Linux
host. This differs from running CLI commands for other Oracle Cloud
Infrastructure Services on a variety of host operating systems. Appliance-
based commands require validation that is only available on Linux hosts.
Shutting Down the Import Appliance
Shut down the import appliance before packing up and shipping the appliance back to Oracle.
To shut down the import appliance
Using the terminal emulator on the host machine, select Shutdown from the appliance serial console.
Important:

The shutdown does not power off the appliance. Wait 10 minutes after
issuing the shutdown, then turn the power switch off and disconnect the
power cable.

Packing and Shipping the Import Appliance to Oracle


Return the import appliance to Oracle within 30 days. If you need the appliance beyond the standard 30-day window,
you can file a service request to ask for an extension of up to 60 days.
Important:

Review and follow the instructions that were provided in the transit case with
the appliance.
To pack and ship the import appliance
1. Unplug the power cord from the power source and detach the other end of the cord from the appliance.
2. Disconnect the appliance from your network.
3. Remove the return shipment tamper-evident security tie from the transit case.
4. Place the appliance, power cord, and serial cable in the transit case.
Caution:

We recommend assistance lifting and placing the appliance back into the
transit case. The total shipping weight is about 64 lbs (29.0299 kg) and
appliance weight is 38 lbs (17.2365 kg).
5. Close and secure the transit case with the return tamper-evident security tie.
6. Loop the top of the plastic tie-on tag with return shipping label through the handle of the transit case. Remove the
protective tape from the back of the tie-on tag, exposing the adhesive area on which to secure the tag onto itself.
Use the provided zip tie to secure the tie-on tag to the handle.
7. Return the transit case:
• If a return label was included with the import appliance, attach the label and arrange with the shipping vendor
to drop off or pick up the appliance.
• If a return label was not included, open a service request with Oracle to arrange for the appliance's return. See
Creating a Service Request Using the Console on page 127.
The shipping vendor notifies Oracle when the appliance is shipped back to Oracle for upload to Oracle Cloud
Infrastructure Object Storage.
What's Next
Now you can track your return import appliance shipment to Oracle and review post transfer logs and summaries. See
Monitoring the Import Appliance and Data Transfer on page 1053.

Monitoring the Import Appliance and Data Transfer

This topic describes the monitoring tasks to do after sending the import appliance with the copied data back to Oracle
for data transfer to Oracle Cloud Infrastructure. The Project Sponsor role typically performs these tasks. See Roles
and Responsibilities on page 1017.
Note:

You can only run Oracle Cloud Infrastructure CLI commands from a Linux
host. This differs from running CLI commands for other Oracle Cloud
Infrastructure Services on a variety of host operating systems. Appliance-
based commands require validation that is only available on Linux hosts.
Monitoring the Status of Your Import Appliance Return Shipment
The shipping vendor notifies Oracle when your import appliance is picked up and shipped back for upload to Oracle
Cloud Infrastructure Object Storage.

To monitor the status of your import appliance return shipment using the Console
1. Open the navigation menu. Under Core Infrastructure, go to Object Storage and click Data Transfer -
Imports.
2. Find the transfer job and associated import appliance that you shipped back to Oracle for data upload.
3. Under Transfer Appliances, look at the Status field.
To monitor the status of your import appliance return shipment using the CLI

oci dts appliance show --job-id job_id --appliance-label appliance_label

For example:

oci dts appliance show --job-id ocid1.datatransferjob.oc1..exampleuniqueID --appliance-label XAKWEGKZ5T

{
"data": {
"appliance-delivery-tracking-number": null,
"appliance-delivery-vendor": null,
"appliance-return-delivery-tracking-number": null,
"creation-time": "2020-05-20T22:08:13+00:00",
"customer-received-time": null,
"customer-returned-time": null,
"customer-shipping-address": {
"address1": "123 Main Street",
"address2": null,
"address3": null,
"address4": null,
"addressee": "MyCompany",
"care-of": "John Doe",
"city-or-locality": "Anytown",
"country": "USA",
"email": "[email protected]",
"phone-number": "3115551212",
"state-or-region": "NY",
"zipcode": "12345"
},
"delivery-security-tie-id": "exampleuniqueID",
"label": "XAKWEGKZ5T",
"lifecycle-state": "PROCESSING",
"next-billing-time": null,
"return-security-tie-id": "exampleuniqueID",
"serial-number": "exampleuniqueserialnumber",
"transfer-job-id": "ocid1.datatransferjob.oc1..exampleuniqueID",
"upload-status-log-uri": "JAKQVAGJF/XAKWEGKZ5T/upload_summary.txt"
}
}

The import appliance status is indicated by the lifecycle-state attribute.


Import Appliance Status Values
Here are the import appliance status values, listed in alphabetical order:
CANCELLED
You can change your mind about uploading your data to Oracle Cloud Infrastructure Object Storage and
cancel your import appliance. Ship the appliance back to Oracle and then cancel the appliance. Oracle always
uses secure wipe tools on the boot and data areas whenever an appliance is returned.

COMPLETE
Oracle completed your import appliance data upload. Your data is available in your designated bucket in
Oracle Cloud Infrastructure Object Storage.
CUSTOMER LOST
You have not returned the import appliance within the required 90 days.

DELIVERED
Oracle received a delivery confirmation from the shipping vendor that your import appliance was delivered.
When the appliance is delivered, Oracle provides the date and time it was received in the appliance details.
Appliance usage tracking begins.
ERROR
Oracle encountered an unrecoverable error trying to process your import appliance. Oracle cannot upload
your data from the appliance. To protect your data, Oracle uses secure wipe tools on the boot and data areas
of any transfer appliance that cannot be processed. Complete another request for an appliance.

ORACLE PREPARING
Oracle approved your import appliance request. The status displays "Preparing" until the appliance is shipped
to you.
ORACLE RECEIVED
Oracle received your import appliance shipment. The status displays "Oracle Received" until Oracle begins
processing and uploading your data from the appliance.
ORACLE RECEIVED CANCELED
You canceled your import appliance after you shipped the appliance back to Oracle. Oracle received your
canceled appliance and does not upload any data from it.
PREPARING
You activated your import appliance. You can now copy your data onto the appliance. The status displays
"Preparing" until you ship the appliance back to Oracle.
PROCESSING
Oracle is processing and uploading the data from your import appliance. The status displays
"Processing" until Oracle completes uploading your data from the appliance.
REJECTED
Oracle denied your import appliance request.
Important:

If your appliance request is denied and you have questions, contact


your Sales Representative or file a Service Request (SR).
REQUESTED
You successfully completed your request for an import appliance. The status displays "Requested" until Oracle
approves your appliance request.
RETURN SHIPPED
Oracle received confirmation from the shipping vendor that you shipped your import appliance back to
Oracle. The status displays "Return Shipped" until Oracle receives your appliance.

RETURN SHIPPED CANCELLED


You canceled your import appliance after the appliance was delivered to you or after you shipped the
appliance back to Oracle. Oracle received confirmation from the shipping vendor that your canceled transfer
appliance is on the way back to Oracle. The status displays "Return shipped cancelled" until Oracle receives
your appliance.
SHIPPING
Oracle completed the necessary preparations and shipped your import appliance. When the appliance is
shipped, Oracle provides the serial number of the appliance, the shipping vendor, and the tracking number in
the appliance details. The status displays "Shipping" until the appliance is delivered to you.
Reviewing the Upload Summary
Oracle creates upload summary log files for each uploaded appliance. These log files are placed in the bucket where
data was uploaded to Oracle Cloud Infrastructure. The upload summary file compares the appliance's manifest file to
the contents of the target Oracle Cloud Infrastructure Object Storage bucket after file upload.
Note:

If you chose to upload your data to an Archive Storage bucket, you must first
restore the log file object before you can download that file for review.
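For example, here is a sketch of restoring an archived log object before downloading it, using the hypothetical namespace, bucket, and upload summary path that appear elsewhere in this topic:

oci os object restore --namespace MyNamespace --bucket-name MyBucket1 --name JAKQVAGJF/XAKWEGKZ5T/upload_summary.txt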
The top of the log report summarizes the overall file processing status:

P - Present: The file is present in both the device and the target bucket
M - Missing: The file is present in the device but not the target bucket. It
was likely uploaded and then deleted by another user before the summary was
generated.
C - Name Collision: The file is present in the manifest but a file with the
same name but different contents is present in the target bucket.
U - Unreadable: The file is not readable from the disk
N - Name Too Long: The file name on disk is too long and could not be
uploaded

Complete file upload details follow the summary.

If you upload more than 100,000 files, the upload details are broken into multiple pages. You can only download the
first page from the Console. Download the rest of the pages directly from the Object Storage bucket. The subsequent
pages have the same object name as the first page, but have an enumerated suffix.
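To find all of the pages, here is a sketch that lists the upload summary objects by prefix, again using the hypothetical namespace, bucket, and log path from this topic:

oci os object list --namespace MyNamespace --bucket-name MyBucket1 --prefix JAKQVAGJF/XAKWEGKZ5T/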

Verifying Uploaded File Integrity


To let you verify the data integrity of files uploaded from the Data Transfer Appliance, a cryptographic
hash using MD5 is provided for every object uploaded to Object Storage from the appliance. Oracle
Cloud Infrastructure provides the object hash value in base64 encoding.
To download files imported into Object Storage and verify their integrity, run the following CLI command:

oci os object get --namespace object_storage_namespace --bucket-name bucket_name --name object_name --file file_location

For example:

oci os object get --namespace MyNamespace --bucket-name MyBucket1 --name JLA12B3C/XAABC12EFG/upload_summary.txt --file upload_summary.txt

Downloading object [####################################] 100%

Open the file and match the file names of the uploaded files with the MD5 column:

In this example, file_1.txt has the MD5 sum of: EoN8s6dgT/9pGYA7Yx1klQ==


To download file_1.txt, run the following CLI command:

oci os object get --namespace object_storage_namespace --bucket-name bucket_name --name object_name --file file_location

For example:

oci os object get --namespace example_namespace --bucket-name bucket-1 --name file_1.txt --file file_1.txt

Downloading object [####################################] 100%

To convert the base64-encoded hash value to hexadecimal, use the following command:

python -c 'print "BASE64-ENCODED-MD5-VALUE".decode("base64").encode("hex")'

For example:

python -c 'print "EoN8s6dgT/9pGYA7Yx1klQ==".decode("base64").encode("hex")'

12837cb3a7604fff6919803b631d6495

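If your host provides only Python 3, where the decode("base64") idiom above is unavailable, the following one-liner is a sketch of the same conversion:

python3 -c 'import base64; print(base64.b64decode("EoN8s6dgT/9pGYA7Yx1klQ==").hex())'

12837cb3a7604fff6919803b631d6495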
Now generate the md5sum on Linux and verify both values match:

md5sum file_name

For example:

md5sum file_1.txt

12837cb3a7604fff6919803b631d6495 file_1.txt

Verifying Multipart Uploaded Files


Large files are split into 1 GB parts when they are uploaded from the Data Transfer Appliance to Object Storage. You
can verify the md5sum after downloading a file that was transferred in multiple parts using one of several available
scripts. See the following for more information and links to these scripts:
https://github.com/oracle/oci-cli/issues/134
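As an illustration only, the following sketch recomputes a multipart-style checksum for a local file such as file_1.txt. It assumes that the multipart checksum is the base64-encoded MD5 of the concatenated per-part MD5 digests, suffixed with the part count, and that parts are 1 GB as described above; confirm the exact format against the scripts in the linked issue before relying on it.

# Hypothetical sketch: recompute a multipart-style MD5 for file_1.txt using 1 GB parts
python3 - file_1.txt <<'EOF'
import base64, hashlib, sys

PART_SIZE = 1024 ** 3  # 1 GB part size, matching the upload part size described above
digests = b""
parts = 0
with open(sys.argv[1], "rb") as f:
    while True:
        chunk = f.read(PART_SIZE)  # reads one full part into memory at a time
        if not chunk:
            break
        digests += hashlib.md5(chunk).digest()
        parts += 1
# Base64 MD5 of the concatenated per-part digests, plus "-<number of parts>"
print(base64.b64encode(hashlib.md5(digests).digest()).decode() + "-" + str(parts))
EOF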
Viewing Data Transfer Metrics
After the import appliance with your copied data is received by Oracle and the data transfer begins, you can view the
metrics associated with the transfer job in the Transfer Appliance Details page in chart or table format.
Tip:

Set up your notifications to alert you when the data transfer from the
appliance to Oracle Cloud Infrastructure is occurring. When the state changes
from ORACLE_RECEIVED to PROCESSING, you can start viewing
data transfer metrics. If you included the --setup-notifications
option when you made your appliance request from the CLI, this alert
occurs automatically. See Notifications Overview on page 3378 for more
information.
Select Metrics under Resources to display each of these measures:
• Import Files Uploaded: Total number of files uploaded for import.
• Import Bytes Uploaded: Total number of bytes uploaded for import.
• Import Files Remaining: Total number of files remaining for import upload.
• Import Bytes Remaining: Total number of bytes remaining for import upload.
• Import Files in Error: Total number of files in error for import.
• Import Upload Verification Progress: Progress of verification of files that have already been uploaded for
import.
Select the Start Time and End Time for these measures, either by manually entering the days and times in their
respective fields, or by selecting the Calendar feature and picking the times that way. As an alternative to selecting
a start and end time, you can also select from a list of standard times (last hour, last 6 hours, and so forth) from the
Quick Selects list for the period measured. The time period you specify applies to all the measures.
Specify the Interval (for example, 5 minutes, 1 hour) that each measure is recorded from the list.
Specify the Statistic being recorded (for example, Sum, Mean) for each measure from the list.
Tip:

Mean is the most useful statistic for data transfer as it reflects an absolute
value of the metric.
Choose additional actions from the Options list, including viewing the query in the Metrics Explorer, capturing the
URL for the measure, and switching between chart and table view.
Click Reset Charts to delete any existing information in the charts and begin recording new metrics.
See Monitoring Overview on page 2686 for general information on monitoring your Oracle Cloud Infrastructure
services.
Closing the Transfer Job
Close the transfer job when no further transfer job activity is required or possible. Closing a transfer job requires that
all associated import appliances have been returned, canceled, or deleted.
To close a transfer job using the Console
1. Open the navigation menu. Under Core Infrastructure, go to Object Storage and click Data Transfer -
Imports.
2. Find the data transfer package for which you want to display the details.

3. Click the Actions icon, and then click View Details.
Alternatively, click the hyperlinked name of the transfer job.
4. Click Close Transfer Job.
To close a transfer job using the CLI

oci dts job close --job-id job_id

For example:

oci dts job close --job-id ocid1.datatransferjob.oc1..exampleuniqueID


{
"data": {
"attached-transfer-appliance-labels": [],
"attached-transfer-device-labels": [],
"attached-transfer-package-labels": [],
"compartment-id": "ocid.compartment.oc1..exampleuniqueID",
"creation-time": "2020-05-20T22:00:43+00:00",
"defined-tags": {},
"device-type": "APPLIANCE",
"display-name": "MyApplianceImportJob",
"freeform-tags": {},
"id": "ocid1.datatransferjob.oc1..exampleuniqueID",
"label": "JGX4N1XLI",
"lifecycle-state": "CLOSED",
"upload-bucket-name": "MyBucket"
},
"etag": "1"
}

The lifecycle-state attribute value is "CLOSED."


What's Next
You have completed the process of setting up, running, and monitoring the Appliance-Based Data Import. If you
determine that another appliance-based data transfer is required, repeat the procedure from the beginning.

Appliance Import Reference


This topic provides complete task details for certain components associated with Appliance-Based Data Imports. Use
this topic as a reference to learn and use commands associated with components included in the Appliance-Based
Data Import procedure.
Transfer Jobs
A transfer job is the logical representation of a data migration to Oracle Cloud Infrastructure. A transfer job is
associated with one or more import appliances.
Note:

It is recommended that you create a compartment for each transfer job to minimize the required access to your tenancy.
Creating Transfer Jobs
Create the transfer job in the same compartment as the upload bucket and supply a human-readable name for the
transfer job.
Creating a transfer job returns a job OCID that you specify in other transfer tasks. For example:

ocid1.datatransferjob.oci1..exampleuniqueID

To create a transfer job using the Console


1. Open the navigation menu. Under Core Infrastructure, go to Object Storage and click Data Transfer -
Imports.
2. Select the Compartment you are to use for data transfers from the list.
A list of transfer jobs that have already been created is displayed.
3. Click Create Transfer Job.
The Create Transfer Job dialog appears.
4. Enter a Job Name. Avoid entering confidential information. Then, select the Upload Bucket from the list.
5. Select Appliance for the Transfer Device Type.
6. Click Create Transfer Job.
To create a transfer job using the CLI

oci dts job create --bucket bucket --compartment-id compartment_id --display-name display_name --device-type device_type

display_name is the name of the transfer job. Avoid entering confidential information.
device_type should always be appliance for Appliance-Based Data Import jobs.
For example:

oci dts job create --bucket MyBucket1 --compartment-id ocid.compartment.oc1..exampleuniqueID --display-name MyApplianceImportJob --device-type appliance

{
"data": {
"attached-transfer-appliance-labels": [],
"attached-transfer-device-labels": [],
"attached-transfer-package-labels": [],
"compartment-id": "ocid.compartment.oc1..exampleuniqueID",
"creation-time": "2019-12-18T19:43:58+00:00",
"defined-tags": {},
"device-type": "APPLIANCE",
"display-name": "MyApplianceImportJob",
"freeform-tags": {},
"id": "ocid1.datatransferjob.oc1..exampleuniqueID",
"label": "JAKQVAGJF",
"lifecycle-state": "INITIATED",
"upload-bucket-name": "MyBucket1"
},
"etag": "2--gzip"
}

Optionally, you can specify one or more defined or freeform tags when you create a transfer job. For more
information about tagging, see Resource Tags on page 213.
Defined Tags
To specify defined tags when creating a job:

oci dts job create --bucket bucket --compartment-id compartment_id --display-name display_name --device-type appliance --defined-tags '{ "tag_namespace": { "tag_key":"value" }}'

For example:

oci dts job create --bucket MyBucket1 --compartment-id ocid.compartment.oc1..exampleuniqueID --display-name MyApplianceImportJob --device-type appliance --defined-tags '{"Operations": {"CostCenter": "01"}}'

{
"data": {
"attached-transfer-appliance-labels": [],
"attached-transfer-device-labels": [],
"attached-transfer-package-labels": [],
"compartment-id": "ocid.compartment.oc1..exampleuniqueID",
"creation-time": "2019-12-18T19:43:58+00:00",
"defined-tags": {
"operations": {
"costcenter": "01"
}
},
"device-type": "APPLIANCE",
"display-name": "MyApplianceImportJob",
"freeform-tags": {},
"id": "ocid1.datatransferjob.oc1..exampleuniqueID",
"label": "JAKQVAGJF",
"lifecycle-state": "INITIATED",
"upload-bucket-name": "MyBucket1"
},
"etag": "2--gzip"
}

Freeform Tags
To specify freeform tags when creating a job:

oci dts job create --bucket bucket --compartment-id compartment_id --display-name display_name --device-type appliance --freeform-tags '{ "tag_key":"value" }'

For example:

oci dts job create --bucket MyBucket1 --compartment-id ocid.compartment.oc1..exampleuniqueID --display-name MyApplianceImportJob --device-type appliance --freeform-tags '{"Pittsburg_Team":"brochures"}'

{
"data": {
"attached-transfer-appliance-labels": [],
"attached-transfer-device-labels": [],
"attached-transfer-package-labels": [],
"compartment-id": "ocid.compartment.oc1..exampleuniqueID",
"creation-time": "2019-12-18T19:43:58+00:00",
"defined-tags": {},
"device-type": "APPLIANCE",
"display-name": "MyApplianceImportJob",
"freeform-tags": {
"Pittsburg_Team": "brochures"
},
"id": "ocid1.datatransferjob.oc1..exampleuniqueID",
"label": "JAKQVAGJF",
"lifecycle-state": "INITIATED",
"upload-bucket-name": "MyBucket1"
},
"etag": "2--gzip"
}

Note:

Tag namespaces and tag keys are created by users with the required permissions. These items must exist before you can specify them when creating a job. See
Working with Defined Tags on page 3950 for details.
Multiple Tags
To specify multiple tags, comma separate the JSON-formatted key/value pairs:

oci dts job create --bucket bucket --compartment-id compartment_id --display-name display_name --device-type appliance --freeform-tags '{ "tag_key":"value" }', '{ "tag_key":"value" }'

Notifications
To include notifications, include the --setup-notifications option. See Setting Up Transfer Job
Notifications from the CLI on page 1029 for more information on this feature.
Listing Transfer Jobs
To display the list of transfer jobs using the Console
1. Open the navigation menu. Under Core Infrastructure, go to Object Storage and click Data Transfer -
Imports.
2. Select the Compartment from the list.
The transfer jobs in that compartment are displayed.
To display the list of transfer jobs using the CLI

oci dts job list --compartment-id compartment_id

For example:

oci dts job list --compartment-id ocid.compartment.oc1..exampleuniqueID

{
"data": [
{
"creation-time": "2019-12-18T19:43:58+00:00",
"defined-tags": {},
"device-type": "APPLIANCE",
"display-name": "MyApplianceImportJob",
"freeform-tags": {},
"id": "ocid1.datatransferjob.oc1..exampleuniqueID",
"label": "JAKQVAGJF",
"lifecycle-state": "INITIATED",
"upload-bucket-name": "MyBucket1"
},
{
"creation-time": "2019-10-03T16:52:26+00:00",
"defined-tags": {},
"device-type": "DISK",
"display-name": "MyDiskImportJob",
"freeform-tags": {},
"id": "ocid1.datatransferjob.oc1..exampleuniqueID",
"label": "J2AWEOL5T",
"lifecycle-state": "INITIATED",
"upload-bucket-name": "MyBucket2"
}
]
}

When you use the CLI to list jobs, tagging details are also included in the output if you specified tags.
Displaying Transfer Job Details
To display the details of a transfer job using the Console
1. Open the navigation menu. Under Core Infrastructure, go to Object Storage and click Data Transfer -
Imports.
2. Select the Compartment from the list.
The transfer jobs in that compartment are displayed.
3. Click the link under Transfer Jobs for the transfer job whose details you want to view.

Alternatively, you can click the Actions icon, and then click View Details.
The Details page for that transfer job appears.
To display the details of a transfer job using the CLI

oci dts job show --job-id job_id

For example:

oci dts job show --job-id ocid1.datatransferjob.oc1..exampleuniqueID

{
"data": {
"attached-transfer-appliance-labels": [],
"attached-transfer-device-labels": [],
"attached-transfer-package-labels": [],
"compartment-id": "ocid.compartment.oc1..exampleuniqueID",
"creation-time": "2019-12-18T19:43:58+00:00",
"defined-tags": {},
"device-type": "APPLIANCE",
"display-name": "MyApplianceImportJob",
"freeform-tags": {},
"id": "ocid1.datatransferjob.oc1..exampleuniqueID",
"label": "JAKQVAGJF",
"lifecycle-state": "INITIATED",
"upload-bucket-name": "MyBucket1"
},
"etag": "2--gzip"
}

When you use the CLI command to display the details of a job, tagging details are also included in the output if you
specified tags.
Editing Transfer Jobs
To edit the name of a transfer job using the Console
1. Open the navigation menu. Under Core Infrastructure, go to Object Storage and click Data Transfer -
Imports.
2. Select the Compartment from the list.
The transfer jobs in that compartment are displayed.
3. Click the link under Transfer Jobs for the transfer job whose name you want to edit.
The Details page for that transfer job appears.

Alternatively, you can click the Actions icon, and then click View Details.
4. Click Edit in the Details page.
The Edit Transfer Job dialog appears.
5. Edit the name of the transfer job. Avoid entering confidential information.

6. Click Edit Transfer Job.


You are returned to the Details page for that transfer job.
To edit the name of a transfer job using the CLI

oci dts job update --job-id job_id --display-name display_name

display_name is the new name of the transfer job. Avoid entering confidential information.
For example:

oci dts job update --job-id ocid1.datatransferjob.oc1..exampleuniqueID --display-name MyRenamedJob

{
"data": {
"attached-transfer-appliance-labels": [],
"attached-transfer-device-labels": [],
"attached-transfer-package-labels": [],
"compartment-id": "ocid.compartment.oc1..exampleuniqueID",
"creation-time": "2019-12-18T19:43:58+00:00",
"defined-tags": {},
"device-type": "APPLIANCE",
"display-name": "MyRenamedJob",
"freeform-tags": {},
"id": "ocid1.datatransferjob.oc1..exampleuniqueID",
"label": "JAKQVAGJF",
"lifecycle-state": "INITIATED",
"upload-bucket-name": "MyBucket1"
},
"etag": "3"
}

Editing Transfer Job Tags


To edit the tags associated with a transfer job using the CLI
The CLI command replaces any existing tags with the new key/value pairs you specify. For more information about
tagging, see Resource Tags on page 213.
To edit defined tags, provide the replacement key/value pairs:

oci dts job update --job-id job_id --defined-tags '{ "tag_namespace": { "tag_key":"value" }}'

For example:

oci dts job update --job-id ocid1.datatransferjob.oc1..exampleuniqueID --defined-tags '{"Operations": {"CostCenter": "42"}}'

{
"data": {
"attached-transfer-appliance-labels": [],
"attached-transfer-device-labels": [],
"attached-transfer-package-labels": [],
"compartment-id": "ocid.compartment.oc1..exampleuniqueID",
"creation-time": "2019-12-18T19:43:58+00:00",
"defined-tags": {
"operations": {
"costcenter": "42"
}
},
"device-type": "APPLIANCE",

Oracle Cloud Infrastructure User Guide 1064


Data Transfer

"display-name": "MyApplianceImportJob",
"freeform-tags": {},
"id": "ocid1.datatransferjob.oc1..exampleuniqueID",
"label": "JAKQVAGJF",
"lifecycle-state": "INITIATED",
"upload-bucket-name": "MyBucket1"
},
"etag": "2--gzip"
}

To edit free-form tags, provide the replacement key/value pairs:

oci dts job update --job-id job_id --freeform-tags '{ "tag_key":"value" }'

For example:

oci dts job update --job-id ocid1.datatransferjob.oc1..exampleuniqueID --freeform-tags '{"Chicago_Team":"marketing_videos"}'

{
"data": {
"attached-transfer-appliance-labels": [],
"attached-transfer-device-labels": [],
"attached-transfer-package-labels": [],
"compartment-id": "ocid.compartment.oc1..exampleuniqueID",
"creation-time": "2019-12-18T19:43:58+00:00",
"defined-tags": {},
"device-type": "APPLIANCE",
"display-name": "MyApplianceImportJob",
"freeform-tags": {
"Chicago_Team": "marketing_videos"
},
"id": "ocid1.datatransferjob.oc1..exampleuniqueID",
"label": "JAKQVAGJF",
"lifecycle-state": "INITIATED",
"upload-bucket-name": "MyBucket1"
},
"etag": "2--gzip"
}

Deleting Transfer Job Tags


To delete the tags associated with a transfer job using the CLI
The CLI command replaces any existing tags with the new key/value pairs you specify. If you want to delete some of
the tags, specify a new tag string that does not contain the unwanted key/value pairs.
To delete all free-form tags:

oci dts job update --job-id job_id --freeform-tags '{}'

To delete all defined tags:

oci dts job update --job-id job_id --defined-tags '{}'

Moving Transfer Jobs Between Compartments


To move a transfer job to a different compartment using the Console
1. Open the navigation menu. Under Core Infrastructure, go to Object Storage and click Data Transfer -
Imports.
2. Select the Compartment from the list.
The transfer jobs in that compartment are displayed.


3. Click the link under Transfer Jobs for the transfer job that you want to move.
The Details page for that transfer job appears.

Alternatively, you can click the Actions icon, and then click Move Resource.
4. Click Move Resource in the Details page.
The Move Resource to a Different Compartment dialog appears.
5. Select the compartment to which you want to move the transfer job from the list.
6. Click Move Resource.
You are returned to the Details page for that transfer job.
To move a transfer job to a different compartment using the CLI

oci dts job move --job-id job_id --compartment-id compartment_id [OPTIONS]

compartment_id is the compartment to which the data transfer job is being moved.
OPTIONS are:
• --if-match: The entity tag (etag) that must match for the operation to proceed. If set, the update is successful
only if the resource's current etag matches the etag specified in the request.
• --from-json: Provide input to this command as a JSON document from a file using the file://path-to/file
syntax. The --generate-full-command-json-input option can be used to generate a sample JSON file
to be used with this command option. The key names are pre-populated and match the command option names
(converted to camelCase format, e.g. compartment-id --> compartmentId), while the values of the keys need to
be populated by the user before using the sample file as an input to this command. For any command option that
accepts multiple values, the value of the key can be a JSON array. Options can still be provided on the command
line. If an option exists in both the JSON document and the command line then the command line specified value
will be used. For examples on usage of this option, please see our "using CLI with advanced JSON options" link:
https://docs.cloud.oracle.com/iaas/Content/API/SDKDocs/cliusing.htm#AdvancedJSONOptions.
For example:

oci dts job move --job-id ocid1.datatransferjob.oc1..exampleuniqueID --compartment-id ocid.compartment.oc1..exampleuniqueID
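If you prefer to drive the move from a JSON file, here is a minimal sketch of the --from-json workflow described in the options above (the file name move-job.json is only an illustration):

oci dts job move --generate-full-command-json-input > move-job.json

Edit move-job.json to set jobId and compartmentId, then run:

oci dts job move --from-json file://move-job.json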

To confirm the transfer, display the list of transfer jobs in the new compartment. See Listing Transfer Jobs on page
1062 for more information.
Verifying Upload User Credentials
Note:

You can only use the CLI command to verify upload user credentials.
You can verify the current upload user credentials to see whether there are any problems or updates required. If any
configuration file is incorrect or invalid, the upload fails.
To verify the upload user credentials using the CLI

oci dts job verify-upload-user-credentials --bucket bucket_name

bucket_name is the upload bucket for the transfer job.


For example:

oci dts job verify-upload-user-credentials --bucket MyBucket1

created object BulkDataTransferTestObject in bucket MyBucket1
overwrote object BulkDataTransferTestObject in bucket MyBucket1
inspected object BulkDataTransferTestObject in bucket MyBucket1
read object BulkDataTransferTestObject in bucket MyBucket1

Depending on your user configuration, you may get an error message returned similar to the following:

WARNING: Permissions on /home/user/.oci/config_upload_user are too open.
To fix this please try executing the following command:
oci setup repair-file-permissions --file /home/user/.oci/config_upload_user
Alternatively to hide this warning, you may set the environment variable,
OCI_CLI_SUPPRESS_FILE_PERMISSIONS_WARNING:
export OCI_CLI_SUPPRESS_FILE_PERMISSIONS_WARNING=True

ERROR: The config file at /home/user/.oci/config_upload_user is invalid:

+Config Errors+-----------+----------------------------------------------------------------+
| Key         | Error     | Hint                                                           |
+-------------+-----------+----------------------------------------------------------------+
| fingerprint | malformed | openssl rsa -pubout -outform DER -in path to                   |
|             |           | your private key | openssl md5 -c                              |
+-------------+-----------+----------------------------------------------------------------+

If a user credential issue is identified, fix it and rerun the verify-upload-user-credentials CLI to ensure
that all problems are addressed. Then you can proceed with transfer job activities.
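For example, to address the permissions warning shown above and then re-verify, you might run the following (the file path and bucket name are taken from the example output):

oci setup repair-file-permissions --file /home/user/.oci/config_upload_user
oci dts job verify-upload-user-credentials --bucket MyBucket1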
Deleting Transfer Jobs
You can delete transfer jobs when they are in the Initiated, Preparing, or Closed state.
To delete a transfer job using the Console
1. Open the navigation menu. Under Core Infrastructure, go to Object Storage and click Data Transfer -
Imports.
2. Select the Compartment from the list.
The transfer jobs in that compartment are displayed.
3. Find the data transfer job that you want to delete.
4. Click the Actions icon, and then click Delete.
Alternatively, you can delete a transfer job from the View Details page.
5. Confirm the deletion when prompted.
To delete a transfer job using the CLI

oci dts job delete --job-id job_id

For example:

oci dts job delete --job-id ocid1.datatransferjob.oc1..exampleuniqueID

Confirm the deletion when prompted. The transfer job is deleted with no further action or return. To confirm the
deletion, display the list of transfer jobs in the compartment. See Listing Transfer Jobs on page 1062 for more
information.


Closing Transfer Jobs


Typically, you would close a transfer job when no further transfer job activity is required or possible. Closing a
transfer job requires that the status of all associated transfer packages be returned, canceled, or deleted. In addition,
the status of all associated transfer disks must be complete, in error, missing, canceled, or deleted.
When you close the transfer job, the status changes to Closed.
To close a transfer job using the Console
1. Open the navigation menu. Under Core Infrastructure, go to Object Storage and click Data Transfer -
Imports.
2. Select the Compartment from the list.
The transfer jobs in that compartment are displayed.
3. Find the transfer job that you want to close.
4. Click the Actions icon, and then click View Details.
Alternatively, click the hyperlinked name of the transfer job.
5. Click Close Transfer Job.
To close a transfer job using the CLI

oci dts job close --job-id job_id

For example:

oci dts job close --job-id ocid1.datatransferjob.oc1..exampleuniqueID


{
"data": {
"attached-transfer-appliance-labels": [],
"attached-transfer-device-labels": [],
"attached-transfer-package-labels": [],
"compartment-id": "ocid.compartment.oc1..exampleuniqueID",
"creation-time": "2020-05-20T22:00:43+00:00",
"defined-tags": {},
"device-type": "APPLIANCE",
"display-name": "MyApplianceImportJob",
"freeform-tags": {},
"id": "ocid1.datatransferjob.oc1..exampleuniqueID",
"label": "JGX4N1XLI",
"lifecycle-state": "CLOSED",
"upload-bucket-name": "MyBucket"
},
"etag": "1"
}

Import Appliances
This section describes tasks associated with the Oracle-provided import appliance.
Requesting Appliances
Tip:

To save time, identify the data you intend to upload and make data copy
preparations before requesting the import appliance.
Creating an import appliance request returns an Oracle-assigned appliance label. For example:

XAKWEGKZ5T

To request an appliance using the Console


1. Open the navigation menu. Under Core Infrastructure, go to Object Storage and click Data Transfer -
Imports.
Choose the transfer job for which you want to request an import appliance.
2. Under Transfer Appliances, click Request Transfer Appliance.
The Request Transfer Appliance dialog appears.
3. Enter the shipping address details where you want the import appliance sent.

• Company Name: Required. Specify the name of the company that owns the data being migrated to Oracle
Cloud Infrastructure.
• Recipient Name: Required. Specify the name of the recipient who receives the import appliance.
• Recipient Phone Number: Required. Specify the recipient's phone number.
• Recipient Email Address: Required. Specify the recipient's email address.
• Care Of: Optional. Intermediary party responsible for transferring the import appliance shipment from the
delivery vendor to the intended recipient.
• Address Line 1: Required. Specify the street address where the import appliance is being sent.
• Address Line 2: Optional identifying address details like building, suite, unit, or floor information.
• City/Locality: Required. Specify the city or locality.
• State/Province/Region: Required. Specify the state, province, or region.
• Zip/Postal Code: Specify the zip code or postal code.
• Country: Required. Select the country.
4. Click Request Transfer Appliance.
To request an appliance using the CLI

oci dts appliance request --job-id job_id --addressee addressee --care-of care_of --address1 address_line1 --city-or-locality city_or_locality --state-province-region state_province_region --country country --zip-postal-code zip_postal_code --phone-number phone_number --email email [OPTIONS]

OPTIONS are:
• --address2: Optional address of the addressee (line 2).
• --address3: Optional address of the addressee (line 3).
• --address4: Optional address of the addressee (line 4).
• --from-json: Provide input to this command as a JSON document from a file using the file://path-to/file
syntax. The --generate-full-command-json-input option can be used to generate a sample JSON file
to be used with this command option. The key names are pre-populated and match the command option names
(converted to camelCase format, e.g. compartment-id --> compartmentId), while the values of the keys need to
be populated by the user before using the sample file as an input to this command. For any command option that
accepts multiple values, the value of the key can be a JSON array. Options can still be provided on the command
line. If an option exists in both the JSON document and the command line then the command line specified value
will be used. For examples on usage of this option, please see our "using CLI with advanced JSON options" link:
https://docs.cloud.oracle.com/iaas/Content/API/SDKDocs/cliusing.htm#AdvancedJSONOptions.
For example:

oci dts appliance request --job-id ocid1.datatransferjob.oc1..exampleuniqueID --addressee MyCompany --care-of "John Doe" --address1 "123 Main Street" --city-or-locality Anytown --state-province-region NY --country USA --zip-postal-code 12345 --phone-number 8005551212 --email [email protected]

{
"data": {
"appliance-delivery-tracking-number": null,
"appliance-delivery-vendor": null,
"appliance-return-delivery-tracking-number": null,

Oracle Cloud Infrastructure User Guide 1069


Data Transfer

"creation-time": "2020-05-20T22:08:13+00:00",
"customer-received-time": null,
"customer-returned-time": null,
"customer-shipping-address": {
"address1": "123 Main Street",
"address2": null,
"address3": null,
"address4": null,
"addressee": "MyCompany",
"care-of": "John Doe",
"city-or-locality": "Anytown",
"country": "USA",
"email": "[email protected]",
"phone-number": "3115551212",
"state-or-region": "NY",
"zipcode": "12345"
},
"delivery-security-tie-id": null,
"label": "XAKWEGKZ5T",
"lifecycle-state": "REQUESTED",
"next-billing-time": null,
"return-security-tie-id": null,
"serial-number": null,
"transfer-job-id": "ocid1.datatransferjob.oc1..exampleuniqueID",
"upload-status-log-uri": "JAKQVAGJF/XAKWEGKZ5T/upload_summary.txt"
}
}

When you submit an appliance request, Oracle generates a unique label ("label": "XAKWEGKZ5T" in the example
output) to identify the import appliance, and your request is sent to Oracle for approval and processing.
Monitoring the Appliance Request Status
The time it takes to approve, prepare, and ship your appliance request varies and depends on various factors,
including current available inventory. Oracle provides status updates daily throughout the appliance request and ship
process.
To monitor the status of your appliance request using the Console
1. Open the navigation menu. Under Core Infrastructure, go to Object Storage and click Data Transfer -
Imports.
2. Select the Compartment from the list.
The transfer jobs in that compartment are displayed.
3. Find and select the transfer job for which you want to monitor associated appliance requests.
4. Under Transfer Appliances, find the appliance label Oracle assigned to your appliance request and look at the
Status field.
Here are the key status values to look for when monitoring your appliance request:
• Requested: You successfully completed your request for an import appliance. The status displays Requested
until Oracle approves your appliance request.
• Rejected: Oracle denied your appliance request.
Important:

If your appliance request is denied and you have questions, contact your
Sales Representative or file a Service Request (SR).
• Oracle Preparing: Oracle approved your appliance request. The status displays Oracle Preparing until the
import appliance is shipped to you.
• Shipping: Oracle completed the necessary preparations and shipped your import appliance. When the import
appliance is shipped, Oracle provides the serial number of the import appliance, the shipping vendor, and the
tracking number in the Transfer Appliance Details. The status displays Shipping until the import appliance is
delivered to you.
• Delivered: The shipping vendor delivered your import appliance. When the import appliance is delivered, Oracle
provides the date and time the import appliance was received in the Transfer Appliance Details. The status
displays Delivered.
To monitor the status of your appliance request using the CLI

oci dts appliance show --job-id job_id --appliance-label appliance_label

For example:

oci dts appliance show --job-id ocid1.datatransferjob.oc1..exampleuniqueID --appliance-label XAKWEGKZ5T

{
"data": {
"appliance-delivery-tracking-number": null,
"appliance-delivery-vendor": null,
"appliance-return-delivery-tracking-number": null,
"creation-time": "2020-05-20T22:08:13+00:00",
"customer-received-time": null,
"customer-returned-time": null,
"customer-shipping-address": {
"address1": "123 Main Street",
"address2": null,
"address3": null,
"address4": null,
"addressee": "MyCompany",
"care-of": "John Doe",
"city-or-locality": "Anytown",
"country": "USA",
"email": "[email protected]",
"phone-number": "3115551212",
"state-or-region": "NY",
"zipcode": "12345"
},
"delivery-security-tie-id": null,
"label": "XAKWEGKZ5T",
"lifecycle-state": "REQUESTED",
"next-billing-time": null,
"return-security-tie-id": null,
"serial-number": null,
"transfer-job-id": "ocid1.datatransferjob.oc1..exampleuniqueID",
"upload-status-log-uri": "JAKQVAGJF/XAKWEGKZ5T/upload_summary.txt"
}
}

The request status is displayed as the value for lifecycle-state.
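If you only need the status value, you can filter the output with the CLI's JMESPath --query option. A sketch:

oci dts appliance show --job-id ocid1.datatransferjob.oc1..exampleuniqueID --appliance-label XAKWEGKZ5T --query 'data."lifecycle-state"' --raw-output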


Displaying the List of Appliances
To display the list of appliances using the Console
1. Open the navigation menu. Under Core Infrastructure, go to Object Storage and click Data Transfer -
Imports.
2. Select the Compartment from the list.
The transfer jobs in that compartment are displayed.
3. Choose the transfer job for which you want to display the list of associated import appliances.
The list of import appliances is displayed below the transfer job details.


To display the list of appliances using the CLI

oci dts appliance list --job-id job_id

For example:

oci dts appliance list --job-id ocid1.datatransferjob.oc1..exampleuniqueID


{
"data": {
"transfer-appliance-objects": [
{
"creation-time": "2020-05-20T22:08:13+00:00",
"label": "XAKWEGKZ5T",
"lifecycle-state": "PROCESSING",
"serial-number": null
}
]
}
}

Displaying Appliance Details


To display the details of an appliance using the Console
1. Open the navigation menu. Under Core Infrastructure, go to Object Storage and click Data Transfer -
Imports.
2. Select the Compartment from the list.
The transfer jobs in that compartment are displayed.
3. Find the transfer job for which you want to display the details of an associated import appliance.
The list of appliances is displayed below the transfer job details.
4. Find the import appliance for which you want to display the details.
5. Click the Actions icon, and then click View Details.
To display the details of an appliance using the CLI

oci dts appliance show --job-id job_id --appliance-label appliance_label

For example:

oci dts appliance show --job-id ocid1.datatransferjob.oc1..exampleuniqueID --appliance-label XAKWEGKZ5T

{
"data": {
"appliance-delivery-tracking-number": null,
"appliance-delivery-vendor": null,
"appliance-return-delivery-tracking-number": null,
"creation-time": "2020-05-20T22:08:13+00:00",
"customer-received-time": null,
"customer-returned-time": null,
"customer-shipping-address": {
"address1": "123 Main Street",
"address2": null,
"address3": null,
"address4": null,
"addressee": "MyCompany",
"care-of": "John Doe",
"city-or-locality": "Anytown",
"country": "USA",
"email": "[email protected]",

Oracle Cloud Infrastructure User Guide 1072


Data Transfer

"phone-number": "3115551212",
"state-or-region": "NY",
"zipcode": "12345"
},
"delivery-security-tie-id": "exampleuniqueID",
"label": "XAKWEGKZ5T",
"lifecycle-state": "PROCESSING",
"next-billing-time": null,
"return-security-tie-id": "exampleuniqueID",
"serial-number": "exampleuniqueserialnumber",
"transfer-job-id": "ocid1.datatransferjob.oc1..exampleuniqueID",
"upload-status-log-uri": "JAKQVAGJF/XAKWEGKZ5T/upload_summary.txt"
}
}

Editing the Appliance Request Shipping Information


You can only edit the shipping information when the status is Requested.
To edit the appliance request shipping information using the Console
1. Open the navigation menu. Under Core Infrastructure, go to Object Storage and click Data Transfer -
Imports.
2. Find the Requested import appliance for which you want to edit the shipping information.
3. Click the Actions icon, and then click Edit.
4. Edit the shipping information for the import appliance.
5. Click Save.
To edit the appliance request shipping information using the CLI

oci dts appliance update-shipping-address --job-id job_id --appliance-label appliance_label --addressee addressee changed_fields

Include the addressee field even if it has not changed.

changed_fields represents one or more of the following shipping address fields that you want to update:

--care-of care_of --address1 address --city city --state state --zip zip
--country country --phone-number phone_number --email email

You only need to include those fields that are being updated. For example:

oci dts appliance update-shipping-address --job-id ocid1.datatransferjob.oc1..exampleuniqueID --appliance-label XAKWEGKZ5T --addressee MyCompany --care-of "Richard Roe" --phone-number 3115559876 --email [email protected]

Confirm the update of the appliance request shipping information when prompted. The appliance details are displayed
with the updated information.

{
"data": {
"appliance-delivery-tracking-number": null,
"appliance-delivery-vendor": null,
"appliance-return-delivery-tracking-number": null,
"creation-time": "2020-05-20T22:08:13+00:00",
"customer-received-time": null,
"customer-returned-time": null,
"customer-shipping-address": {
"address1": "123 Main Street",

Oracle Cloud Infrastructure User Guide 1073


Data Transfer

"address2": null,
"address3": null,
"address4": null,
"addressee": "MyCompany",
"care-of": "Richard Roe",
"city-or-locality": "Anytown",
"country": "USA",
"email": "[email protected]",
"phone-number": "3115559876",
"state-or-region": "NY",
"zipcode": "12345"
},
"delivery-security-tie-id": null,
"label": "XAKWEGKZ5T",
"lifecycle-state": "REQUESTED",
"next-billing-time": null,
"return-security-tie-id": null,
"serial-number": null,
"transfer-job-id": "ocid1.datatransferjob.oc1..exampleuniqueID",
"upload-status-log-uri": "JAKQVAGJF/XAKWEGKZ5T/upload_summary.txt"
}
}

Deleting an Appliance Request


You can delete an appliance request before Oracle approves the request—the status must be Requested. For example,
you initiated the transfer by creating a transfer job and requested an appliance, but changed your mind.
To delete an appliance request using the Console
1. Open the navigation menu. Under Core Infrastructure, go to Object Storage and click Data Transfer -
Imports.
2. Find the data transfer job and appliance request that you want to delete.
3. Click the Actions icon, and then click Delete.
Alternatively, you can delete an appliance request from the Transfer Appliance Details page.
4. Confirm the deletion when prompted.
To delete an appliance request using the CLI

oci dts appliance delete --job-id job_id --appliance-label appliance_label

For example:

oci dts appliance delete --job-id ocid1.datatransferjob.oc1..exampleuniqueID --appliance-label XAKWEGKZ5T

Confirm the deletion when prompted.


Displaying Registered Appliances
You can display a list of all appliances registered through the initialize authentication command. See Initializing
Authentication to the Import Appliance on page 1044 for more information.
Note:

You can only use the CLI command to display a list of all appliances
registered through the initialize authentication command.
To display the list of all registered appliances using the CLI

oci dts physical-appliance list


For example:

oci dts physical-appliance list

{
"data": [
{
"appliance_profile": "DEFAULT",
"endpoint": "https://10.20.20.7"
}
]
}

Unregistering Appliances
You can unregister an appliance previously registered through the initialize authentication command. See Initializing
Authentication to the Import Appliance on page 1044 for more information.
To unregister an appliance using the CLI

oci dts physical-appliance unregister

Configuring Import Appliance Encryption


Configure the import appliance to use encryption. Oracle Cloud Infrastructure creates a strong passphrase for each
appliance. The command securely collects the strong passphrase from Oracle Cloud Infrastructure and sends that
passphrase to the Data Transfer service.
If your environment requires Internet-aware applications to use network proxies, ensure that you set up the required
Linux environment variables. See for more information.
Important:

If you are working with multiple appliances at the same time, be sure the job
ID and appliance label you specify in this step matches the physical appliance
you are currently working with. You can get the serial number associated
with the job ID and appliance label using the Console or the Oracle Cloud
Infrastructure CLI. You can find the serial number of the physical appliance
on the back of the device on the agency label.
Note:

You can only use the Oracle Cloud Infrastructure CLI to configure
encryption.
Configuring Appliance Encryption
Note:

You can only use the CLI command to configure encryption for appliances.
To configure appliance encryption using the CLI
At the command prompt on the host, run oci dts physical-appliance configure-encryption to
configure import appliance encryption.

oci dts physical-appliance configure-encryption --job-id job_id --appliance-label appliance_label


For example:

oci dts physical-appliance configure-encryption --job-id ocid1.datatransferjob.oc1..exampleuniqueID --appliance-label XAKWEGKZ5T

Moving the state of the appliance to preparing...
Passphrase being retrieved...
Configuring encryption...
Encryption configured. Getting physical transfer appliance info...
{
"data": {
"availableSpaceInBytes": "Unknown",
"encryptionConfigured": true,
"finalizeStatus": "NA",
"lockStatus": "LOCKED",
"totalSpaceInBytes": "Unknown"
}
}

Initializing Authentication to the Appliance


Note:

You can only use the Oracle Cloud Infrastructure CLI to initialize
authentication.
Initialize authentication to allow the host machine to communicate with the import appliance. Use the values returned
from the Configure Networking command. See Configuring the Transfer Appliance Networking for details.
Initializing Authentication
Note:

You can only use the CLI command to initialize authentication for
appliances.
To initialize authentication using the CLI

oci dts physical-appliance initialize-authentication --job-id job_id --appliance-cert-fingerprint appliance_cert_fingerprint --appliance-ip appliance_ip --appliance-label appliance_label

For example:

oci dts physical-appliance initialize-authentication --job-id ocid1.datatransferjob.oc1..exampleuniqueID --appliance-cert-fingerprint F7:1B:D0:45:DA:04:0C:07:1E:B2:23:82:E1:CA:1A:E9 --appliance-ip 10.0.0.1 --appliance-label XAKWEGKZ5T

Retrieving the Appliance serial id from Oracle Cloud Infrastructure
Access token:
Registering and initializing the authentication between the CLI and the appliance
{
"data": {
"availableSpaceInBytes": "Unknown",
"encryptionConfigured": false,
"finalizeStatus": "NA",
"lockStatus": "NA",
"totalSpaceInBytes": "Unknown"
}
}


When prompted, supply the access token and system. The Control Host can now communicate with the import
appliance.
Datasets
A dataset is a collection of files that are treated similarly. You can write up to 100 million files onto the appliance for
import. We currently support one dataset per appliance.
Note:

You can only use the CLI to run dataset tasks.


Creating the Dataset
Appliance data transfer supports NFS versions 3, 4, and 4.1 to write data to the import appliance. In preparation for
writing data, create and configure a dataset to write to.
To create a dataset using the CLI

oci dts nfs-dataset create --name dataset_name

For example:

oci dts nfs-dataset create --name nfs-ds-1

Creating dataset with NFS export details nfs-ds-1


{
"data": {
"datasetType": "NFS",
"name": "nfs-ds-1",
"nfsExportDetails": {
"exportConfigs": null
},
"state": "INITIALIZED"
}
}

Activating the Dataset


Activation creates the NFS export, making the dataset accessible to NFS clients.
To activate the dataset using the CLI

oci dts nfs-dataset activate --name dataset_name

For example:

oci dts nfs-dataset activate --name nfs-ds-1

Fetching all the datasets
Activating dataset nfs-ds-1
Dataset nfs-ds-1 activated

Configuring Export Settings on the Dataset


To configure export settings on a dataset using the CLI

oci dts nfs-dataset set-export --name dataset_name --rw true --world true

For example:

oci dts nfs-dataset set-export --name nfs-ds-1 --rw true --world true

Settings NFS exports to dataset nfs-ds-1


{
"data": {
"datasetType": "NFS",
"name": "nfs-ds-1",
"nfsExportDetails": {
"exportConfigs": [
{
"hostname": null,
"ipAddress": null,
"readWrite": true,
"subnetMaskLength": null,
"world": true
}
]
},
"state": "INITIALIZED"
}
}

Here is another example of creating the export to give read/write access to a subnet:

oci dts nfs-dataset set-export --name nfs-ds-1 --ip 10.0.0.0 --subnet-mask-length 24 --rw true --world false

Settings NFS exports to dataset nfs-ds-1


{
"data": {
"datasetType": "NFS",
"name": "nfs-ds-1",
"nfsExportDetails": {
"exportConfigs": [
{
"hostname": null,
"ipAddress": "10.0.0.0",
"readWrite": true,
"subnetMaskLength": "24",
"world": false
}
]
},
"state": "INITIALIZED"
}
}
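Once the export is configured, an NFS client on a host covered by the export can mount the dataset and copy files onto it. A minimal sketch, assuming the appliance answers at 10.0.0.1 and exposes the dataset under /nfs-ds-1 (both values are illustrative; use the address and export path for your appliance and dataset):

sudo mkdir -p /mnt/nfs-ds-1
sudo mount -t nfs 10.0.0.1:/nfs-ds-1 /mnt/nfs-ds-1
cp -r /data/to-import/. /mnt/nfs-ds-1/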

Deactivating the Dataset


Note:

Deactivating the dataset is only required if you are running appliance


commands using the Data Transfer Utility. If you are using the Oracle Cloud
Infrastructure CLI to run your Appliance-Based Data Import, you can skip
this step and proceed to Sealing the Dataset on page 1079.
After you are done writing data, deactivate the dataset. Deactivation removes the NFS export on the dataset,
disallowing any further writes.
To deactivate the dataset using the CLI

dts nfs-dataset deactivate --name dataset_name


For example:

dts nfs-dataset deactivate --name nfs-ds-1

Sealing the Dataset


Sealing a dataset stops all writes to the dataset. This process can take some time to complete, depending upon the
number of files and total amount of data copied to the import appliance.
If you issue the seal command without the --wait option, the seal operation is triggered and runs in the
background. You are returned to the command prompt and can use the seal-status command to monitor the
sealing status. Running the seal command with the --wait option triggers the seal operation and continues to
provide status updates until sealing completes.
Important:

You can only copy regular files to transfer appliances. Special files (for
example, symbolic links, device special, sockets, and pipes) cannot be copied
directly. To transfer special files, create a tar archive of these files and copy
the tar archive to the transfer appliance.
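For example, a sketch of archiving a directory that contains symbolic links before copying the archive to the mounted dataset (/data/source and /mnt/nfs-ds-1 are illustrative paths):

tar -cpzf /mnt/nfs-ds-1/special-files.tar.gz -C /data/source .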
The sealing operation generates a manifest across all files in the dataset. The manifest contains an index of the copied
files and generated data integrity hashes.
To seal the dataset using the CLI

oci dts nfs-dataset seal --name dataset_name [--wait]

For example:

oci dts nfs-dataset seal --name nfs-ds-1

Seal initiated. Please use seal-status command to get progress.

Monitoring the Dataset Sealing Process


To monitor the dataset sealing process using the CLI

oci dts nfs-dataset seal-status --name dataset_name

For example:

oci dts nfs-dataset seal-status --name nfs-ds-1

{
"data": {
"bytesProcessed": 2803515612507,
"bytesToProcess": 2803515612507,
"completed": true,
"endTimeInMs": 1591990408804,
"failureReason": null,
"numFilesProcessed": 182,
"numFilesToProcess": 182,
"startTimeInMs": 1591987136180,
"success": true
}
}
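If you started the seal without --wait, you can poll seal-status until it reports completion. A minimal shell sketch (the 60-second interval is arbitrary):

oci dts nfs-dataset seal --name nfs-ds-1
until oci dts nfs-dataset seal-status --name nfs-ds-1 | grep -q '"completed": true'; do
    sleep 60
done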

Downloading the Dataset Seal Manifest


After sealing the dataset, you can optionally download the dataset's seal manifest to a user-specified location. The
manifest file contains the checksum details of all the files. The transfer site uploader consults the manifest file to
determine the list of files to upload to object storage. For every uploaded file, it validates that the checksum reported
by object storage matches the checksum in the manifest. This validation ensures that no files were corrupted in transit.
To download the dataset seal manifest file using the CLI

oci dts nfs-dataset get-seal-manifest --name dataset_name --output-file output_file_path

For example:

oci dts nfs-dataset get-seal-manifest --name nfs-ds-1 --output-file ~/Downloads/seal-manifest

Reopening a Dataset
If changes are necessary after sealing a dataset or finalizing an import appliance, you must reopen the dataset to
modify its contents. Make the required changes and then seal the dataset again. Resealing the dataset generates a new
manifest.
Note:

If an import appliance is rebooted or power cycled, follow the instructions in


this topic to reopen the dataset.
Step 1: Unlocking the Appliance
You must unlock the appliance before you can write data to it. Unlocking the appliance requires the strong passphrase
that is created by Oracle Cloud Infrastructure for each appliance.
Unlock the appliance using one of the following ways:
• If you provide the --job-id and --appliance-label when running the unlock command, the data
transfer system retrieves the passphrase from Oracle Cloud Infrastructure and sends it to the appliance during the
unlock operation.
• You can query Oracle Cloud Infrastructure for the passphrase and provide that passphrase when prompted during
the unlock operation.
To unlock the appliance and send the passphrase to the appliance

oci dts physical-appliance unlock --job-id job_id --appliance-label appliance_label

For example:

oci dts physical-appliance unlock --job-id ocid1.datatransferjob.oc1..exampleuniqueID --appliance-label XAKWEGKZ5T

Retrieving the passphrase from Oracle Cloud Infrastructure
{
"data": {
"availableSpaceInBytes": "64.00GB",
"encryptionConfigured": true,
"finalizeStatus": "NOT_FINALIZED",
"lockStatus": "NOT_LOCKED",
"totalSpaceInBytes": "64.00GB"
}
}

To query Oracle Cloud Infrastructure for the passphrase to provide to unlock the appliance

oci dts appliance get-passphrase --job-id job_id --appliance-label appliance_label


For example:

oci dts appliance get-passphrase --job-id ocid1.datatransferjob.oc1..exampleuniqueID --appliance-label XAKWEGKZ5T

{
"data": {
"encryption-passphrase": "passphrase"
}
}

Then, run oci dts physical-appliance unlock without --job-id and --appliance-label and supply
the passphrase when prompted.

oci dts physical-appliance unlock

Step 2: Reopening the Appliance


Reopen the dataset to write data to the import appliance again.
To reopen an NFS dataset

oci dts nfs-dataset reopen --name dataset_name

Step 3: Repeat Steps to Write Data to the Appliance


Repeat the same tasks you performed when you originally wrote data to the import appliance, beginning with
activating the dataset. See Copying Files to the NFS Share on page 1049.
Displaying the List of Datasets
To display the list of datasets using the CLI

oci dts nfs-dataset list

For example:

oci dts nfs-dataset list

Listing NFS datasets


{
"data": [
{
"datasetType": "NFS",
"name": "nfs-ds-1",
"nfsExportDetails": {
"exportConfigs": [
{
"hostname": null,
"ipAddress": null,
"readWrite": true,
"subnetMaskLength": null,
"world": true
}
]
},
"state": "ACTIVE"
}
]
}


Displaying Dataset Details


To display the details of a dataset using the CLI

oci dts nfs-dataset show --name dataset_name

For example:

oci dts nfs-dataset show --name nfs-ds-1

{
"data": {
"datasetType": "NFS",
"name": "nfs-ds-1",
"nfsExportDetails": {
"exportConfigs": [
{
"hostname": null,
"ipAddress": null,
"readWrite": true,
"subnetMaskLength": null,
"world": true
}
]
},
"state": "ACTIVE"
}
}

Deleting a Dataset
To delete a dataset using the CLI

oci dts nfs-dataset delete --name dataset_name

For example:

oci dts nfs-dataset delete --name nfs-ds-1

Confirm the deletion when prompted. The dataset is deleted with no further action or return. To confirm the deletion,
display the list of datasets. See Displaying the List of Datasets on page 1081 for more information.

Appliance Data Export


Data Export is Oracle's offline data export solution that lets you migrate petabyte-scale datasets from your Oracle
Cloud Infrastructure Object Storage bucket to your data center using an Oracle-provided Data Transfer Appliance.
Use Data Export when you have stored terabytes or petabytes of data in Oracle Cloud Infrastructure and need to
retrieve it from Object Storage more quickly than using the public internet. For example, you may have media content
or processed datasets you need to share with a customer or business partner.
You cannot export data from multiple Object Storage buckets to the same export job. If you want to export data from
more than one bucket, you must create an export job for each of the buckets.
Note:

Data Export does not support exporting files from an Archive Storage bucket.
Move your data from the Archive Storage bucket to an Object Storage bucket
and then create an export job specifying the Object Storage bucket. See
Overview of Archive Storage on page 488 for more information.


Note:

Data Export is not available for free trial or Pay As You Go accounts.

Data Export Concepts


EXPORT JOB
An export job is the logical representation of an offline export of data from your Oracle Cloud Infrastructure
Object Storage bucket to your data center. Exporting your data does not remove it from the original storage
bucket on Oracle Cloud Infrastructure.
Note:

An export job is associated with a single export appliance. If your data


export needs exceed the capacity of the appliance (150 TB), you need
to create additional export jobs with their own dedicated appliances.
DATA TRANSFER APPLIANCE
The Data Transfer Appliance (export appliance) is a high-storage capacity device used to export data from
Oracle Cloud Infrastructure to your data center. You request an export appliance from Oracle, specify the
data files to be copied from your Object Storage bucket to the export appliance, and then have the export
appliance containing the data shipped to you. After you receive the export appliance, copy your data to your
data center. When you complete the copying of data, completely delete all the data from the export appliance
before sending it back to Oracle.
COMMAND LINE INTERFACE
The command line interface (CLI) is a small footprint tool that you can use on its own or with the Console
to complete Oracle Cloud Infrastructure tasks, including Appliance-Based Data Export jobs. See Command
Line Interface (CLI) on page 4228 for more information.
Note:

You can only run Oracle Cloud Infrastructure CLI commands from a
Linux host. This differs from running CLI commands for other Oracle
Cloud Infrastructure Services on a variety of host operating systems.
Appliance-based commands require validation that is only available on
Linux hosts.
HOST
A physical computer at the customer site on which one or more of the logical hosts (Control, Data, Terminal
Emulation) is running. Depending on your computing environment, you can have any of the following:
• A separate physical host for each logical host
• All three logical hosts consolidated onto a single physical host
• Two logical hosts on one physical host and the third logical host on a separate physical host
All physical hosts must be on the network used for the data transfer.
CONTROL HOST
The logical representation of the host computer at your site from which you perform data export tasks.
Depending on your needs, you may use one or more separate hosts (Control and Data) to configure your
export job.
Note:

You can only run Oracle Cloud Infrastructure CLI commands from a
Linux-based Control Host machine. You can run Console tasks from a
browser running on a Windows machine.


DATA HOST
The logical representation of the host computer on your site that receives the data exported from Oracle
Cloud Infrastructure.
Note:

Only Linux machines can be used as Data Hosts.


TERMINAL EMULATION HOST
The logical representation of the host computer that uses terminal emulation software to communicate with,
and allow you to command, the appliance.
BUCKET
The logical container in the Object Storage from where your data is copied to the appliance before it is
shipped to you. A bucket is associated with a single compartment in your tenancy whose policies determine
what actions a user can perform.
APPLIANCE MANAGEMENT SERVICE
Software running on the appliance that provides management functions. Users interact with this service
through the Oracle Cloud Infrastructure CLI.

Appliance Specifications
Use NFS versions 3, 4, or 4.1 to copy your data onto the appliance. Here are some details about the appliance:

Storage Capacity:
• US East (Ashburn), US West (Phoenix), Germany Central (Frankfurt): 150 TB of protected usable space.
• All other regions: 95 TB of protected usable space.

Network Interfaces:
• 10 GbE - RJ45
• 10 GbE - SFP+
You are responsible for providing all network cables. If you want to use SFP+, your transceivers must be
compatible with Intel X520 NICs.

Provided Cables:
• NEMA 5-15 type B to C13
• C13 to C14 power
• USB to DB9 serial

Environmental:
• Operational temperature: 50-95°F (10-35°C)
• Operational relative humidity: 8-90% non-condensing
• Acoustics: < 75 dB @ 73°F (23°C)
• Operational altitude: -1,000 ft to 10,000 ft (approx. -300 to 3,048 m)

Power:
• Consumption: 554 W
• Voltage: 100-240 VAC
• Frequency: 47-63 Hz
• Conversion efficiency: 89%

Weight:
• Unit: 38 lbs (approx. 17 kg)
• Unit + Transit Case: 64 lbs (approx. 29 kg)

Height: 3.5" (approx. 9 cm) (2U)

Width: 17" (approx. 43 cm)

Depth: 24" (approx. 61 cm)

Shipping Case: 11" x 25" x 28" (approx. 28 x 63.5 x 71 cm)

Roles and Responsibilities


Depending on your organization, the responsibilities of using and managing the data transfer may span multiple roles.
Use the following set of roles as a guideline for how you can assign the various tasks associated with the data export.
• Project Sponsor: Responsible for the overall success of the data export. Project Sponsors usually have complete
access to their organization's Oracle Cloud Infrastructure tenancy. They coordinate with the other roles in the
organization to complete the implementation of the data export. The Project Sponsor is also responsible for
signing legal documentation and setting up notifications for the data export.
• Infrastructure Engineer: Responsible for integrating the export appliance into the organization's IT
infrastructure where the data is being copied. Tasks associated with this role include connecting the export
appliance to power, placing it within the network, and setting the IP address through a serial console menu using
the provided USB-to-Serial adapter.
• Data Administrator: Responsible for identifying and preparing the data to be exported from Oracle Cloud
Infrastructure to your data center. This person usually has access to, and expertise with, the data being exported.
These roles correspond to the various phases of the data export described in the following section. A specific role can
be responsible for one or more phases.

Task Flow for Data Export


Here is a high-level overview of the tasks involved in the data export from Oracle Cloud Infrastructure to your data
center. Complete one phase before proceeding to the next one. Use the roles previously described to distribute the
tasks across individuals or groups within your organization.


Secure Appliance Data Export from Oracle Cloud Infrastructure


This section highlights the security details of the data export process.
• Appliances are shipped from Oracle to you with a tamper-evident security tie on the transit case. A second
tamper-evident security tie is included in the appliance transit case for you to secure the case when you ship the
case back to Oracle. The number on the physical security ties must match the numbers logged by Oracle in the
appliance details.
• The AES-256 encryption key is created by Oracle when the files are copied to the export appliance.
• When you configure the appliance for the first time:
• The encryption key is protected by an encryption passphrase that you must know to access the encrypted data.
The system securely fetches a provided encryption passphrase from Oracle Cloud Infrastructure and registers
that passphrase on the appliance.
Note:

The encryption passphrase is never stored on the appliance.


• All data copied to the appliance is encrypted.
• Oracle erases all of your data from the transfer appliance after it has been returned. The erasure process follows
the NIST 800-88 standards.
• Keep possession of the security tie after you have finished unpacking and connecting the appliance. Include it
when returning the appliance to Oracle. Failure to include the security tie can result in a delay in the completion of
the export job.

What's Next
You are now ready to prepare the host for the data export. See Preparing for Data Export on page 1087 for more
information.

Preparing for Data Export

This topic describes the tasks associated with preparing for the Data Export job. The Project Sponsor role typically
performs these tasks. See Roles and Responsibilities on page 1085.
Note:

You can only run Oracle Cloud Infrastructure CLI commands from a Linux
host. This differs from running CLI commands for other Oracle Cloud
Infrastructure Services on a variety of host operating systems. Appliance-
based commands require validation that is only available on Linux hosts.
Installing and Using the Oracle Cloud Infrastructure Command Line Interface
The Oracle Cloud Infrastructure Command Line Interface (CLI) provides a set of command line-based tools for
configuring and running Data Export jobs. Use the Oracle Cloud Infrastructure CLI as an alternative to running
commands from the Console. Sometimes you must use the CLI to complete certain tasks as there is no Console
equivalent.

Minimum Required CLI Version


The minimum CLI version required for Data Export is 2.12.1.

Determining CLI Versions


Access the following URL to see the currently available version of the CLI:


https://github.com/oracle/oci-cli/blob/master/CHANGELOG.rst
Enter the following command at the prompt to see the version of the CLI currently installed on your machine:

oci --version

If you have a version on your machine older than the version currently available, install the latest version.
Note:

Always update to the latest version of the CLI. The CLI is not updated
automatically, and you can only access new or updated CLI features by
installing the current version.
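For example, if you originally installed the CLI with pip, one way to update it is the following sketch (use whichever installation method you used originally):

pip install --upgrade oci-cli
oci --version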

Linux Operating System Requirements


See Requirements on page 4229 for a list of the Linux operating systems that support the CLI.

Installing the CLI


Installation and configuration of the CLIs is described in detail in Command Line Interface (CLI) on page 4228.

Using the CLI


You can specify CLI options using either of the following formats:
• --option value
• --option=value
The basic CLI syntax is:

oci dts resource action options

This syntax is applied to the following:


• oci dts is the shortened CLI command name.
• job is an example of a resource.
• create is an example of an action.
• Other strings are options.
The following command to create an export job shows a typical CLI command construct.

oci dts export create --compartment-id ocid.compartment.oc1..exampleuniqueID --bucket-name MyBucket1 --display-name MyExportJob --addressee "MyCompany Corp" --care-of "John Doe" --address1 "123 Main St." --city-or-locality Anytown --state-province-region CA --country USA --zip-postal-code 12345 --phone-number "555.555.1212" --email [email protected]

Note:

In the previous example, provide a friendly name for the export job using the
--display-name option. Avoid entering confidential information.

Accessing Command Line Interface Help


All CLI help commands have an associated help component you can access from the command line. To view the help,
enter any command followed by the --help or -h option. For example:

oci dts export --help


NAME
dts_export -

DESCRIPTION
Data Transfer Service CLI Specification

AVAILABLE COMMANDS
o change-compartment

o configure-physical-appliance

o create

o create-policy

o delete

o generate-manifest

o list

o request-appliance

o setup-notifications

o show

o update

When you run the help option (--help or -h) for a specified command, all the subordinate commands and options
for that level of CLI are displayed. If you want to access the CLI help for a specific subordinate command, include it
in the CLI string, for example:

oci dts export create --help

NAME
dts_export_create -

DESCRIPTION
Creates a new Appliance Export Job that corresponds with customer's
logical dataset

USAGE
oci dts export create [OPTIONS]

REQUIRED PARAMETERS
--address1 [text]

Address line 1.

--addressee [text]

Company or person to send the appliance to

--bucket-name [text]

Name of the object storage bucket for this export job

--care-of [text]

Place/person to direct the package to.


--city-or-locality [text]

Setting Up the Oracle Cloud Infrastructure Configuration File


Before using the command line utility, create a configuration file that contains the required credentials for working
with Oracle Cloud Infrastructure. You can create this file using a setup dialog or manually using a text editor.

Using the Setup CLI


Run the oci setup config command line utility to walk through the first-time setup process. The command
prompts you for the information required for the configuration file and the API public/private keys. The setup dialog
generates an API key pair and creates the configuration file.
For more information about how to find the required information, see:
• Where to Get the Tenancy's OCID and User's OCID on page 4220
• Regions and Availability Domains on page 182

Manual Setup
If you want to set up the API public/private keys yourself and write your own config file, see SDK and Tool
Configuration.
Tip:

Use the oci setup keys command to generate a key pair to include in
the config file.
Create the configuration file /root/.oci/config with the following structure:

[DEFAULT]
user=<The OCID for the data transfer administrator>
fingerprint=<The fingerprint of the above user's public key>
key_file=<The _absolute_ path to the above user's private key file on the
host machine>
tenancy=<The OCID for the tenancy that owns the data transfer job and
bucket>
region=<The region where the transfer job and bucket should exist. Valid
values are: us-ashburn-1, us-phoenix-1, eu-frankfurt-1, uk-london-1, and
ap-osaka-1.>

For example:

[DEFAULT]
user=ocid1.user.oc1..unique_ID
fingerprint=4c:1a:6f:a1:5b:9e:58:45:f7:53:43:1f:51:0f:d8:45
key_file=/home/user/ocid1.user.oc1..exampleuniqueID.pem
tenancy=ocid1.tenancy.oc1..unique_ID
region=us-phoenix-1

For the data transfer administrator, you can create a single configuration file that contains different profile sections
with the credentials for multiple users. Then use the --profile option to specify which profile to use in the
command. Here is an example of a data transfer administrator configuration file with different profile sections:

[DEFAULT]
user=ocid1.user.oc1..exampleuniqueID
fingerprint=4c:1a:6f:a1:5b:9e:58:45:f7:53:43:1f:51:0f:d8:45
key_file=/home/user/ocid1.user.oc1..exampleuniqueID.pem
tenancy=ocid1.tenancy.oc1..exampleuniqueID
region=us-phoenix-1
[PROFILE1]
user=ocid1.user.oc1..exampleuniqueID
fingerprint=4c:1a:6f:a1:5b:9e:58:45:f7:53:43:1f:51:0f:d8:45
key_file=/home/user/ocid1.user.oc1..exampleuniqueID.pem
tenancy=ocid1.tenancy.oc1..exampleuniqueID
region=us-ashburn-1

By default, the DEFAULT profile is used for all CLI commands. For example:

oci dts export create --compartment-id ocid.compartment.oc1..exampleuniqueID --bucket-name MyBucket --display-name MyDisplay ...

Instead, you can issue any CLI command with the --profile option to specify a different data transfer
administrator profile. For example:

oci dts export create --compartment-id ocid.compartment.oc1..exampleuniqueID --bucket-name MyBucket --display-name MyDisplay ... --profile MyProfile

Using the example configuration file above, the profile name would be PROFILE1.
If you created two separate configuration files, use the --config-file option to specify which configuration file to use:

oci dts export create --config-file config_file_path --compartment-id compartment_id --bucket-name bucket_name --display-name display_name

Creating the Required IAM Policies


Each service in Oracle Cloud Infrastructure integrates with IAM for authentication and authorization.
To use Oracle Cloud Infrastructure, you must be given the required type of access in a policy written by an
administrator, whether you're using the Console or the REST API with an SDK, CLI, or other tool. If you try
to perform an action and get a message that you don’t have permission or are unauthorized, confirm with your
administrator the type of access you've been granted and which compartment you should work in.
You provide resource access to the data export administrators group using policies. See Managing Groups on page
2438 for more information.
After creating the group, grant it access using the following policies:

Allow group group-name to manage appliance-export-jobs in compartment compartment-name
Allow group group-name to manage buckets in compartment compartment-name
Allow group group-name to manage objects in compartment compartment-name

To enable notifications, add the following policies:

Allow group group-name to manage ons-topics in tenancy
Allow group group-name to manage ons-subscriptions in tenancy
Allow group group-name to manage cloudevents-rules in tenancy
Allow group group-name to inspect compartments in tenancy

See Getting Started with Policies on page 2143 for more information.
Requesting Appliance Entitlement
If your tenancy is not entitled to use the Data Transfer Appliance, you must request the Data Transfer Appliance
Entitlement before creating an export job.
To request the Data Transfer Appliance Entitlement using the Console
Open the Transfer Job page and click Request at the top. Otherwise, you are prompted to request the entitlement
when attempting to create your first export job.


Once requested, the status of your request is visible at the top of the Transfer Job page. For example:
Data Transfer Appliance Entitlement: Granted
It can take a while to get the Data Transfer Appliance Entitlement approved. After Oracle receives your request, a
Terms and Conditions Agreement is sent to the account owner via DocuSign to use the appliance. The entitlement
request is approved once the signature is received. The Data Transfer Appliance Entitlement is a tenancy-wide
entitlement that you need to request once for each tenancy.
To request the Data Transfer Appliance Entitlement using the CLI

oci dts appliance request-entitlement --compartment-id compartment_id --name name --email email

name is the name of the requester.
email is the email address of the requester.
For example:

oci dts appliance request-entitlement --compartment-id ocid.compartment.oc1..exampleuniqueID --name "John Doe" --email [email protected]

{
"data": {
"compartment-id": "ocid.compartment.oc1..exampleuniqueID",
"creation-time": "2019-12-18T18:29:15+00:00",
"defined-tags": {},
"display-name": null,
"freeform-tags": {},
"id": "ocid1.datatransferapplianceentitlement.oc1..exampleuniqueID",
"lifecycle-state": "CREATING",
"lifecycle-state-details": "REQUESTED",
"requestor-email": "[email protected]",
"requestor-name": "John Doe",
"update-time": "2019-12-20T19:04:09+00:00"
}
}

To show the status of a Data Transfer Appliance Entitlement request using the CLI

oci dts appliance show-entitlement --compartment-id compartment_id

For example:

oci dts appliance show-entitlement --compartment-id ocid.compartment.oc1..exampleuniqueID
{
"data": {
"compartment-id": ""ocid.compartment.oc1..exampleuniqueID",
"defined-tags": null,
"display-name": null,
"freeform-tags": null,
"id": null,
"lifecycle-state": "ACTIVE",
"lifecycle-state-details": "APPROVED",
"requestor-email": null,
"requestor-name": null
}
}


Establishing the Data Transfer Appliance Entitlement Policy


Use the following policy to enable users in a specific group to request a Data Transfer Appliance Entitlement in your
tenancy.

Allow group group_name to {DTA_ENTITLEMENT_CREATE} in tenancy

Appliance Entitlement Eligibility
Your request for a Data Transfer Appliance Entitlement in your tenancy may be denied if you are a free trial
customer. If your request is denied, upgrade to a full account. You can also contact your Oracle Customer Support
Manager or Oracle Support to determine your options for obtaining the entitlement.
Setting Up Notifications
Set up rules for the Export Job notification resource that notify the appropriate tenancy administrators of events
such as when someone creates an export job or when Oracle ships the export appliance.
The types of events you can set up are:
• CREATE
• DELETE
• UPDATE
See Notifications Overview on page 3378 and Overview of Events on page 1788 for more information.
You can also set up notifications for your export job using the CLI. See Setting Up Export Job Notifications on page
1097 for more information on this feature.

Notification Policies
Set up the following policies to support notifications for export jobs:

Allow group group_name to manage ons-topics in tenancy
Allow group group_name to manage ons-subscriptions in tenancy
Allow group group_name to manage cloudevents-rules in tenancy
Allow group group_name to inspect compartments in tenancy

Configuring Firewall Settings


Ensure that your local environment's firewall allows communication with the Data Transfer service on the IP
address range 140.91.0.0/16. You also need to open access to the Object Storage IP address range 134.70.0.0/17.
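How you open this access depends on your firewall. As a minimal sketch, on a Linux host that filters outbound traffic
with iptables you might allow HTTPS to both ranges (this assumes an iptables-based firewall and that the services are
reached over port 443):

# allow outbound HTTPS to the Data Transfer service and Object Storage IP ranges
iptables -A OUTPUT -d 140.91.0.0/16 -p tcp --dport 443 -j ACCEPT
iptables -A OUTPUT -d 134.70.0.0/17 -p tcp --dport 443 -j ACCEPT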
Creating Export Jobs
This section describes how to create an export job as part of the preparation for the data export. See Data Export
Reference on page 1113 for complete details on all tasks related to export jobs.
An export job represents the collection of files that you want to export and signals the intention to copy those files
from your Oracle Cloud Infrastructure Object Storage or Archive Storage bucket to your data center using an Oracle-
provided export appliance. Create the export job in the same compartment as the bucket and supply a human-readable
name for the export job.
Note:

It is recommended that you create a compartment for each export job to
minimize the required access to your tenancy.


Creating an export job returns a job ID that you specify in other export tasks. For example:

ocid1.datatransferapplianceexportjob.oc1..exampleuniqueID

To create an export job using the Console


1. Open the navigation menu. Under Core Infrastructure, go to Object Storage and click Data Transfer -
Exports.
2. Select the Compartment you want to use for data exports from the list.
A list of export jobs that have already been created is displayed in the Export Jobs page.
3. Click Create Export Job.
The Create Export Job dialog appears.
4. Enter a Job Name. Avoid entering confidential information. Then, select the Bucket Name from the list.
5. Complete the following fields:
• Company Name
• Recipient Phone
• Recipient Email
• Care of - Optional intermediary party responsible for transferring the appliance from the delivery vendor to
the intended recipient.
• Address 1
• Address 2
• City/Locality
• State/Province/Region
• Zip/Postal Code
• Country
6. (Optional) Add any tagging information, including the tag namespace, key, and value in the associated fields.
7. Click Create.
The export job you created is added to the list of export jobs.
To create an export job using the CLI

oci dts export create --compartment-id compartment_id --bucket-name bucket_name
  --display-name display_name --addressee addressee --care-of care_of
  --address1 address1 --city-or-locality city_or_locality
  --state-province-region state_province_region --country country
  --zip-postal-code zip_postal_code --phone-number phone_number
  --email email [OPTIONS]

display_name is the name of the export job as it appears. Avoid entering confidential information.
addressee is the company or person receiving the appliance.
care_of is the contact associated with the addressee.
address1 is the required address of the addressee.
city_or_locality is the city or locality of the addressee.
state_province_region is the state, province, or region of the addressee.
country is the country of the addressee.
zip_postal_code is the zip or postal code of the addressee.
phone_number is the phone number of the addressee or contact.
email is the email address of the addressee or contact.
OPTIONS are:


• --prefix: List of objects with names matching this prefix would be part of this export job.
• --range-start: Object names returned by a list query must be greater or equal to this parameter.
• --range-end: Object names returned by a list query must be strictly less than this parameter.
• --freeform-tags: Free-form tags for this resource. Each tag is a simple key-value pair with no predefined
name, type, or namespace. Example: `{"Department": "Finance"}` This is a complex type whose value must be
valid JSON. The value can be provided as a string on the command line or passed in as a file using the file://path/
to/file syntax. The --generate-param-json-input option can be used to generate an example of the JSON which
must be provided. We recommend storing this example in a file, modifying it as needed and then passing it back
in via the file:// syntax.
• --defined-tags: Defined tags for this resource. Each key is predefined and scoped to a namespace. For
more information, see [Resource Tags]. Example: `{"Operations": {"CostCenter":"42"}}` This is a complex type
whose value must be valid JSON. The value can be provided as a string on the command line or passed in as a file
using the file://path/to/file syntax. The --generate-param-json-input option can be used to generate
an example of the JSON which must be provided. We recommend storing this example in a file, modifying it as
needed and then passing it back in via the file:// syntax.
• --wait-for-state: This operation creates, modifies or deletes a resource that has a defined lifecycle state:
CREATING, ACTIVE, IN PROGRESS, SUCCEEDED, FAILED, CANCELLED, or DELETED. Specify this
option to perform the action and then wait until the resource reaches a given lifecycle state. Multiple states can
be specified, returning on the first state. For example, --wait-for-state SUCCEEDED --wait-for-
state FAILED would return on whichever lifecycle state is reached first. If timeout is reached, a return code of
2 is returned. For any other error, a return code of 1 is returned.
• --max-wait-seconds: The maximum time in seconds to wait for the resource to reach the lifecycle state
defined by the --wait-for-state attribute. Default is 1200.
• --wait-interval-seconds: The check interval in seconds used to determine whether the resource has
reached the lifecycle state defined by --wait-for-state. Default is 30.
• --address2: Optional address line 2.
• --address3: Optional address line 3.
• --address4: Optional address line 4.
• --from-json: Provide input to this command as a JSON document from a file using the file://path-to/file
syntax. The --generate-full-command-json-input option can be used to generate a sample JSON file
to be used with this command option. The key names are pre-populated and match the command option names
(converted to camelCase format, e.g. compartment-id --> compartmentId), while the values of the keys need to
be populated by the user before using the sample file as an input to this command. For any command option that
accepts multiple values, the value of the key can be a JSON array. Options can still be provided on the command
line. If an option exists in both the JSON document and the command line then the command line specified value
will be used. For examples on usage of this option, please see our "using CLI with advanced JSON options" link:
https://docs.cloud.oracle.com/iaas/Content/API/SDKDocs/cliusing.htm#AdvancedJSONOptions.
For example:

oci dts export create --compartment-id ocid.compartment.oc1..exampleuniqueID
  --bucket-name MyBucket1 --display-name MyExportJob1 --addressee "MyCompany Corp"
  --care-of "John Doe" --address1 "123 Main St." --city-or-locality Anytown
  --state-province-region CA --country USA --zip-postal-code 12345
  --phone-number "4085551212" --email [email protected]

{
"data": {
"appliance-decryption-passphrase": "********",
"appliance-delivery-tracking-number": null,
"appliance-delivery-vendor": null,
"appliance-return-delivery-tracking-number": null,

Oracle Cloud Infrastructure User Guide 1095


Data Transfer

"appliance-serial-number": null,
"bucket-access-policies": [
"POLICIES CREATION IN PROGRESS"
],
"bucket-name": "MyExportJobs",
"compartment-id": "ocid.compartment.oc1..exampleuniqueID",
"creation-time": "2020-06-18T17:24:13+00:00",
"customer-shipping-address": {
"address1": "123 Main St.",
"address2": null,
"address3": null,
"address4": null,
"addressee": "MyCompany Corp",
"care-of": "John Doe",
"city-or-locality": "Anytown",
"country": "USA",
"email": "[email protected]",
"phone-number": "4085554321",
"state-or-region": "CA",
"zipcode": "12345"
},
"defined-tags": {},
"display-name": "MyExportJob1",
"first-object": null,
"freeform-tags": {},
"id": "ocid1.datatransferapplianceexportjob.oc1..exampleuniqueID",
"last-object": null,
"lifecycle-state": "CREATING",
"lifecycle-state-details": "PENDING_MANIFEST_GENERATION",
"manifest-file": null,
"manifest-md5": null,
"next-object": null,
"number-of-objects": null,
"prefix": null,
"range-end": null,
"range-start": null,
"receiving-security-tie": null,
"sending-security-tie": null,
"total-size-in-bytes": null
},
"etag": "4--gzip"
}

Notifications

To include notifications, include the --setup-notifications option. See Setting Up Export Job
Notifications on page 1097 for more information on this feature.
Getting Export Job IDs
Each export job you create has a unique ID within Oracle Cloud Infrastructure. For example:

ocid1.datatransferapplianceexportjob.oc1..exampleuniqueID

You will need to forward this export job ID to the Data Administrator.
To get the export job ID using the Console
1. Open the navigation menu. Under Core Infrastructure, go to Object Storage and click Data Transfer -
Exports.
2. Select the Compartment from the list.
The export jobs in that compartment are displayed.
3. Find the export job for which you want to display the details.
4. Click the Actions icon, and then click View Details.


To get the export job ID using the CLI

oci dts export list --compartment-id compartment_id

For example:

oci dts export list --compartment-id ocid.compartment.oc1..exampleuniqueID

{
"data": [
{
"bucket-name": "MyExportJobs",
"creation-time": "2020-06-18T17:24:13+00:00",
"defined-tags": {},
"display-name": "MyExportJob1",
"freeform-tags": {},
"id": "ocid1.datatransferapplianceexportjob.oc1..exampleuniqueID",
"lifecycle-state": "CREATING",
"lifecycle-state-details": "PENDING_MANIFEST_GENERATION"
},
{
"bucket-name": "MyTestExportJobs",
"creation-time": "2020-06-18T18:07:59+00:00",
"defined-tags": {},
"display-name": "MyTestExportJob",
"freeform-tags": {},
"id": "ocid1.datatransferapplianceexportjob.oc1..exampleuniqueID",
"lifecycle-state": "CREATING",
"lifecycle-state-details": "PENDING_MANIFEST_GENERATION"
}
]
}

The ID for each export job is included in the return:

"id": "ocid1.datatransferapplianceexportjob.oc1..exampleuniqueID"

Tip:

When you create an export job using the oci dts export create CLI,
the export job ID is displayed in the CLI's return. You can also run the oci
dts export show CLI for that specific job to get the ID.
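If you script this handoff, you can also trim the list output down to just the names and IDs with the CLI's JMESPath
--query option. This is a sketch; the query expression and the table output format are illustrative, not required:

oci dts export list --compartment-id ocid.compartment.oc1..exampleuniqueID \
  --query 'data[*].{name:"display-name", id:id}' --output table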
Setting Up Export Job Notifications
You can generate notifications that send messages regarding changes to a new or existing export job through the CLI.
Using this feature creates a topic, subscription for a list of email addresses, and a rule that notifies you on all events
related to the export job's activities and changes in state. This method provides a more convenient way to generate
notifications tailored to export jobs.
The CLI command to set up export job notifications is different depending on whether you are creating a new export
job or updating an existing export job. In both cases, running the CLI command prompts you to enter the email
addresses of each notification subscriber as a comma separated list. Each recipient is sent an email with a link to
confirm they want to receive the notifications.
You are prompted to enter those email addresses you want included in the notifications, separated
by commas (","). When your list is complete, add a colon (":") followed by your own email address:
[email protected],[email protected] : [email protected].


For both of the notification commands, the following is returned:

If the commands fail to run, you can use the OCI CLI to do the setup
manually:
export ROOT_COMPARTMENT_OCID=ocidv1:tenancy:oc1:exampleuniqueID
oci ons topic create --compartment-id $ROOT_COMPARTMENT_OCID --name
DTSExportTopic --description "Topic for data transfer service export jobs"
oci ons subscription create --protocol EMAIL --compartment-id
$ROOT_COMPARTMENT_OCID --topic-id $TOPIC_OCID --endpoint $EMAIL_ID
oci events rule create --display-name DTSExportRule --is-enabled
true --compartment-id $ROOT_COMPARTMENT_OCID --actions '{"actions":
[{"actionType":"ONS","topicId":"$TOPIC_OCID","isEnabled":true}]}' --
condition '{"eventType":
["com.oraclecloud.datatransferservice.addapplianceexportjob","com.oraclecloud.datatransf
--description "Rule for data transfer service to send notifications for
export jobs"
Creating topic for export

To set up notifications when creating an export job using the CLI


To set up notifications when creating an export job, include the --setup-notifications option as part of the CLI:

oci dts export create --compartment-id <compartment_id> --bucket-name <bucket_name>
  --display-name <display_name> --addressee <addressee> --care-of <care_of>
  --address1 <address1> --city-or-locality <city_or_locality>
  --state-province-region <state_province_region> --country <country>
  --zip-postal-code <zip_postal_code> --phone-number <phone_number>
  --email <email> <options> --setup-notifications

To set up notifications for an existing export job using the CLI


To set up notifications for an existing export job:

oci dts export setup-notifications

Generating the Export Manifest File


The data export job requires that you generate a manifest file for the files you want exported to you on the Data
Transfer Appliance. This manifest file is stored in your export job's bucket. Oracle uses the manifest file to download
from that bucket all the files that are listed in the manifest. When the Data Transfer Appliance is sent to you, the
download summary is available at the root of the Data Transfer Appliance's mount point, allowing you to compare the
manifest against the download summary.
Note:

You can only use the CLI command to generate the export manifest file.
To generate an export manifest file using the CLI

oci dts export generate-manifest --compartment-id compartment_id --job-id job_id
  --bucket bucket_name [OPTIONS]

Options are:
• --prefix: The subset of objects that needs to be exported whose names start with this prefix.
• --start: The subset of objects that needs to be exported starting with this object (inclusive).
• --end: The subset of objects that needs to be exported up to this object (inclusive).
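For example, to generate a manifest that covers only the objects whose names begin with backups/ (a sketch; the
OCIDs and the prefix are placeholders):

oci dts export generate-manifest --compartment-id ocid.compartment.oc1..exampleuniqueID \
  --job-id ocid1.datatransferapplianceexportjob.oc1..exampleuniqueID \
  --bucket MyBucket --prefix backups/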


Creating the Data Export Policy


Data export requires you to add the provided policy language to authorize a secure Oracle IAM user to have read-
only access to the bucket for export. You must have administrator privileges in your tenancy to create the data export
policy.
To create an export policy using the Console
1. Open the navigation menu. Under Core Infrastructure, go to Object Storage and click Data Transfer -
Exports.
2. Select the Compartment from the list.
The export jobs in that compartment are displayed.
3. Click the export job link whose data export policies you want to access.
The Details page for that export job appears.
4. Find the export policies under Policy Language in the Details page.
5. Copy and use these policy statements in the policy you create in the Console.
See Getting Started with Policies on page 2143 for more information.
To create an export policy using the CLI
The export create-policy CLI automates the process of generating policies for an export job. You do not need to use
the Console to create individual policies if you run this CLI.

oci dts export create-policy --job-id job_id

For example:

oci dts export create-policy --job-id ocid1.datatransferapplianceexportjob.oc1..exampleuniqueID

Setting up the following policies in the root compartment. If the following
operation fails it means that you do not have enough privileges to create
policies. Re-run the below command with the correct user
NOTE: Sometimes you will need to replace a single quote with '"'"'
oci iam policy create --name e2e-export-test-af3df5176d46_Policy
--compartment-id $ROOT_COMPARTMENT --statements '["DEFINE TENANCY
OCI_TENANCY AS ocid1.tenancy.oc1..exampleuniqueID", "DEFINE GROUP
OCI_EXPORT_GROUP AS ocid1.group.region1..exampleuniqueID", "ADMIT GROUP
OCI_EXPORT_GROUP OF TENANCY OCI_TENANCY TO read objects IN TENANCY where
target.bucket.name='dtsTestBucket'", "ADMIT GROUP OCI_EXPORT_GROUP OF
TENANCY OCI_TENANCY TO read objectstorage-namespaces IN TENANCY"]' --
description "The policies to allow DTS to process the export job"

{
"data": {
"compartment-id": "ocid1.tenancy.region1..exampleuniqueID",
"defined-tags": {},
"description": "The policies to allow DTS to process the export job",
"freeform-tags": {},
"id": "ocid1.policy.region1..uniqueID",
"inactive-status": null,
"lifecycle-state": "ACTIVE",
"name": "e2e-export-test-af3df5176d46_Policy",
"statements": [
"DEFINE TENANCY OCI_TENANCY AS ocid1.tenancy.oc1..exampleuniqueID",
"DEFINE GROUP OCI_EXPORT_GROUP AS
ocid1.group.region1..exampleuniqueID",
"ADMIT GROUP OCI_EXPORT_GROUP OF TENANCY OCI_TENANCY TO read objects
IN TENANCY where target.bucket.name='dtsTestBucket'",
"ADMIT GROUP OCI_EXPORT_GROUP OF TENANCY OCI_TENANCY TO read
objectstorage-namespaces IN TENANCY"
],
"time-created": "2020-07-09T18:37:23.332000+00:00",
"version-date": null
},
"etag": "20eb7654cd14fcfcaf8648de3c6dcc84d553069f"
}

Requesting the Export Appliance


This section describes how to request an export appliance from Oracle.
To request an export appliance using the Console
1. Open the navigation menu. Under Core Infrastructure, go to Object Storage and click Data Transfer -
Exports.
2. Select the Compartment from the list.
The export jobs in that compartment are displayed.
3. Click the export job link for which you want to request the export appliance.
The Details page for that export job appears.
4. Click Request Export Appliance.
The Request Export Appliance dialog appears prompting you to review the manifest file before continuing with
the export appliance request.
Review the manifest file to ensure all your export job information, such as the bucket and contact information,
is correct. If anything is not correct, cancel the appliance request and correct the export job information before
trying again to request an export appliance.
5. Click Request.
The Details page is updated to indicate Appliance Requested. The state of your export job is updated to Pending
Approval in the list of export jobs.
To request an export appliance using the CLI

oci dts export request-appliance --job-id job_id

For example:

oci dts export request-appliance --job-id ocid1.datatransferapplianceexportjob.oc1..exampleuniqueID

{
"data": {
"appliance-decryption-passphrase": null,
"appliance-delivery-tracking-number": null,
"appliance-delivery-vendor": null,
"appliance-return-delivery-tracking-number": null,
"appliance-serial-number": null,
"bucket-access-policies": [
"DEFINE TENANCY OCI_TENANCY AS ocid1.tenancy.oc1..exampleuniqueID",
"DEFINE GROUP OCI_EXPORT_GROUP AS
ocid1.group.region1..exampleuniqueID",
"ADMIT GROUP OCI_EXPORT_GROUP OF TENANCY OCI_TENANCY TO read objects
IN TENANCY where target.bucket.name='MyExportBucket'",
"ADMIT GROUP OCI_EXPORT_GROUP OF TENANCY OCI_TENANCY TO read
objectstorage-namespaces IN TENANCY"
],
"bucket-name": "MyExportBucket",
"compartment-id": "ocid1.compartment.region1..exampleuniqueID",
"creation-time": "2020-07-09T18:36:59+00:00",
"customer-shipping-address": {
"address1": "123 Main St.",

Oracle Cloud Infrastructure User Guide 1100


Data Transfer

"address2": null,
"address3": null,
"address4": null,
"addressee": "DTS",
"care-of": "John Doe",
"city-or-locality": "Anytown",
"country": "USA",
"email": "[email protected]",
"phone-number": "4085551212",
"state-or-region": "CA",
"zipcode": "12345"
},
"defined-tags": {},
"display-name": "e2e-export-test-af3df5176d46",
"first-object": "oci_data_export_export-job-37",
"freeform-tags": {},
"id": "ocid1.datatransferapplianceexportjob.region1..exampleuniqueID",
"last-object": "oci_data_export_export-job-40",
"lifecycle-state": "ACTIVE",
"lifecycle-state-details": "PENDING_APPROVAL",
"manifest-file":
"oci_data_export_manifest_ocid1.datatransferapplianceexportjob.region1..exampleuniqueID
"manifest-md5": "NcEMgcgK2fK8HfvUV3eWAA==",
"next-object": null,
"number-of-objects": "2",
"prefix": null,
"range-end": "oci_data_export_export-job-50",
"range-start": "oci_data_export_export-job-37",
"receiving-security-tie": null,
"sending-security-tie": null,
"total-size-in-bytes": "85585303"
},
"etag": "1"
}

Tracking the Export Appliance Delivery


To track the export appliance delivery status using the Console
1. Open the navigation menu. Under Core Infrastructure, go to Object Storage and click Data Transfer -
Exports.
2. Select the Compartment from the list.
The export jobs in that compartment are displayed.
3. Click the export job for which you want to display the details.

As an alternative, you can click the Actions icon associated with your export job, and then click View
Details.
The status of the requested export appliance is listed under Appliance Information.
To track the export appliance delivery status using the CLI

oci dts export show --job-id job_id [OPTIONS]

Options are:
• --from-json: Provide input to this command as a JSON document from a file using the file://path-to/file
syntax. The --generate-full-command-json-input option can be used to generate a sample JSON file
to be used with this command option. The key names are pre-populated and match the command option names
(converted to camelCase format, e.g. compartment-id --> compartmentId), while the values of the keys need to
be populated by the user before using the sample file as an input to this command. For any command option that
accepts multiple values, the value of the key can be a JSON array. Options can still be provided on the command
line. If an option exists in both the JSON document and the command line then the command line specified value
will be used. For examples on usage of this option, please see our "using CLI with advanced JSON options" link:
https://docs.cloud.oracle.com/iaas/Content/API/SDKDocs/cliusing.htm#AdvancedJSONOptions.
For example:

oci dts export show --job-id


ocid1.datatransferapplianceexportjob.oc1..exampleuniqueID

{
"data": {
"appliance-decryption-passphrase": "********",
"appliance-delivery-tracking-number": null,
"appliance-delivery-vendor": null,
"appliance-return-delivery-tracking-number": null,
"appliance-serial-number": null,
"bucket-access-policies": [
"POLICIES CREATION IN PROGRESS"
],
"bucket-name": "MyExportJobs",
"compartment-id": "ocid.compartment.oc1..exampleuniqueID",
"creation-time": "2020-06-18T17:24:13+00:00",
"customer-shipping-address": {
"address1": "123 Main St.",
"address2": null,
"address3": null,
"address4": null,
"addressee": "MyCompany",
"care-of": "John Doe",
"city-or-locality": "Anytown",
"country": "US",
"email": "[email protected]",
"phone-number": "4085551212",
"state-or-region": "CA",
"zipcode": "12345"
},
"defined-tags": {},
"display-name": "MyExportJob1",
"first-object": null,
"freeform-tags": {},
"id": "ocid1.datatransferapplianceexportjob.oc1..exampleuniqueID",
"last-object": null,
"lifecycle-state": "CREATING",
"lifecycle-state-details": "PENDING_MANIFEST_GENERATION",
"manifest-file": null,
"manifest-md5": null,
"next-object": null,
"number-of-objects": null,
"prefix": null,
"range-end": null,
"range-start": null,
"receiving-security-tie": null,
"sending-security-tie": null,
"total-size-in-bytes": null
},
"etag": "1--gzip"
}

The status of the requested export appliance is listed in the appliance attributes in the returned information.
Notifying the Data Administrator
When you have completed all the tasks in this topic, provide the Data Administrator with the following:


• IAM login credentials
• Oracle Cloud Infrastructure CLI configuration files
• Export job ID
What's Next
You can follow the progress of the export job and view the metrics associated with the copying of files from Oracle
Cloud Infrastructure to your appliance. See Monitoring the Export Job Status and Data Transfer Metrics on page
1103.

Monitoring the Export Job Status and Data Transfer Metrics

This topic describes the monitoring tasks you can perform after your appliance request has been approved and the
export job begins. The Project Sponsor role typically performs these tasks. See Roles and Responsibilities on page
1085.
Note:

You can only run Oracle Cloud Infrastructure CLI commands from a Linux
host. This differs from running CLI commands for other Oracle Cloud
Infrastructure Services on a variety of host operating systems. Appliance-
based commands require validation that is only available on Linux hosts.
Once Oracle approves the appliance request and the export job begins, follow the progress of the export job.
To monitor the status of your export job using the Console
1. Open the navigation menu. Under Core Infrastructure, go to Object Storage and click Data Transfer -
Exports.
2. Select the Compartment from the list.
The export jobs in that compartment are displayed.
3. Click the export job for which you want to display the details.

As an alternative, you can click the Actions icon associated with your export job, and then click View
Details.
4. Look at the State and Details fields. See Export Appliance State Values on page 1103 for a list of the states and
their details.
To monitor the status of your export job using the CLI
At the command prompt on the host, run oci dts export show to monitor the export job status.

oci dts export show --job-id <job_id>

See Export Appliance State Values on page 1103 for a list of the states and their details.
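If you prefer to poll the status from a script on the Linux host, you can wrap the command in a simple loop. This is a
sketch; the five-minute interval is arbitrary, and the --query and --raw-output options are used only to reduce the
output to the lifecycle detail string:

# print the lifecycle state details every five minutes
while true; do
  oci dts export show --job-id ocid1.datatransferapplianceexportjob.oc1..exampleuniqueID \
    --query 'data."lifecycle-state-details"' --raw-output
  sleep 300
done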
Export Appliance State Values
Here are the export job and appliance state values, including their details:


Creating: The job is in the initialization phase. Additional steps are required from the user before it becomes Active.
• Pending Manifest Generation: Generate an export manifest for the bucket from where data is being exported.
• Pending Submission: The export job is in a pending state until you request an appliance for the export job.

Active: The job is awaiting approval from Oracle before an appliance can be assigned for this export job.

In Progress: The export job is approved and is actively being worked on.
• Appliance Provisioning: An appliance is being provisioned for the export job.
• Downloading: Data download to the appliance is in progress.
• Oracle Shipped: Oracle has shipped the appliance.
• Customer Received: The appliance has been delivered to the customer site.
• Customer Processing: The appliance is being processed at the customer site.
• Customer Shipped: The appliance has been shipped to Oracle from the customer site.
• Oracle Received: Oracle has received back the appliance after export.

Succeeded: The export is complete.
• Closed: The export job is closed.

Failed: The export job failed.

Cancelled: The export job was canceled before an appliance was requested for export.

Deleted: The export job was deleted.

Viewing Data Transfer Metrics


Once the data transfer begins, you can view the metrics associated with the export job in the Transfer Appliance
Details page in chart or table format.
Tip:

Set up your notifications to alert you when the data transfer from Oracle
Cloud Infrastructure to the appliance is occurring. When the state changes to
In Progress Downloading, you can start viewing data transfer metrics.
Select Metrics under Resources to display each of these measures:
• Export Files Downloaded: Total number of files downloaded to the appliance for export.
• Export Bytes Downloaded: Total number of bytes downloaded to the appliance for export.
• Export Files Remaining: Total number of files left to be downloaded for export.


• Export Bytes Remaining: Total number of bytes left to be downloaded for export.
• Export Files in Error: Total number of files in error for export.
• Export Download Verification Progress: Progress of verification of files that have already been downloaded for
export.
Select the Start Time and End Time for these measures. You can either set them manually by entering the days and
times in their respective fields, or by selecting the Calendar feature and picking the times that way. You can also
select from a list of standard times (last hour, last 6 hours, and so forth) from the Quick Selects list for the period
measured. The time period you specify applies to all the measures.
Specify the Interval (for example, 5 minutes, 1 hour) that each measure is recorded from the list.
Specify the Statistic being recorded (for example, Sum, Mean) for each measure from the list.
Tip:

Mean is the most useful statistic for data transfer as it reflects an absolute
value of the metric.
Choose additional actions from the Options list, including viewing the query in the Metrics Explorer, capturing the
URL for the measure, and switching between chart and table view.
Click Reset Charts to delete any existing information in the charts and begin recording new metrics.
See Monitoring Overview on page 2686 for general information on monitoring your Oracle Cloud Infrastructure
services.
What's Next
You are now ready to configure the export appliance after you receive it from Oracle. See Configuring the Export
Appliance on page 1105.

Configuring the Export Appliance

This topic describes the tasks associated with configuring the export appliance after you receive it from Oracle. The
Infrastructure Engineer role typically performs these tasks. See Roles and Responsibilities on page 1085.
Unpacking and Connecting the Appliance to the Network
When the shipping vendor delivers your export appliance, Oracle updates the status as Delivered and provides the
date and time the appliance was received in the Appliance Information section.
Your export appliance arrives in a transit case with a telescoping handle and wheels. The handle and wheels allow for
easy movement to the location where you intend to place the appliance to copy your data.
Important:

Retain all packaging materials! When shipping the export appliance back to
Oracle, you must package the appliance in the same manner and packaging in
which the appliance was received.
Here are the tasks involved in unpacking and getting your export appliance ready to configure.
1. Inspect the tamper-evident security tie on the transit case.
If the appliance was tampered with during transit, the tamper-evident security tie serves to alert you.
Caution:

If the security tie is damaged or is missing, do not plug the appliance into
your network! Immediately file a Service Request (SR).


2. Remove and compare the number on the security tie with the number logged by Oracle.
3. Open the transit case and ensure that the case contains the following items:
• Appliance unit and power cable (two types of power cables provided: C14 and C13 to 14)
• USB to DB-9 serial cable
• Return shipping instructions (retain these instructions)
• Return shipping label, label sleeve, tie-on tag, and zip tie
• Return shipment tamper-evident security tie (use this tie to ensure secure transit case back to Oracle)
4. Compare the number on the return shipment security tie with the number logged by Oracle.
5. Remove the export appliance from the case and place the appliance on a solid surface or in a rack.
Caution:

Oracle recommends assistance lifting the export appliance out of the
transit case and placing the appliance in a rack or on a desktop. The total
shipping weight is about 64 lbs (29.0299 kg) and appliance weight is 38 lbs
(17.2365 kg).
6. Connect the appliance to your local network using one of the following:
• 10GBase-T: Standard RJ-45
• SFP+: The transceiver must be compatible with Intel X520 NICs
7. Attach one of the provided power cords to the appliance and plug the other end into a grounded power source.
8. Turn on the appliance by flipping the power switch on the back of the appliance.
To see the security tie number logged by Oracle using the Console
1. Open the navigation menu. Under Core Infrastructure, go to Object Storage and click Data Transfer -
Exports.
2. Find the export job and export appliance associated with the removed security tie.
3. Click the Actions icon, and then click View Details.
4. Look at the contents of the Send Security Tie ID field in the Appliance Information section and compare that
number with the number on the physical tag.
To see the security tie number logged by Oracle using the CLI
At the command prompt on the Control Host, run oci dts appliance show to display the export appliance details.

oci dts appliance show --job-id job_id --appliance-label appliance_label

Caution:

If the number on the physical security tie does not match the number logged
by Oracle, do not plug the appliance into your network! Immediately file a
Service Request (SR).
Note:

Keep possession of the security tie after you have finished unpacking and
connecting the appliance. Include it when returning the appliance to Oracle.
Failure to include the security tie can result in a delay in the data migration
process.
To see the return shipment security tie number logged by Oracle using the Console
1. Open the navigation menu. Under Core Infrastructure, go to Object Storage and click Data Transfer -
Exports.
2. Find the export job and export appliance associated with the return shipment security tie.
3. Click the Actions icon, and then click View Details.


4. Look at the contents of the Return Security Tie ID field in the Appliance Information section and compare that
number with the number on the physical tag.
To see the return shipment security tie number logged by Oracle using the CLI

oci dts appliance show --job-id job_id --appliance-label appliance_label

Caution:

If the number on the return security tie does not match the number logged by
Oracle, file a Service Request (SR). These security tie numbers must match
or Oracle cannot upload data from your returned export appliance.
Connecting the Appliance to the Terminal Emulation Host
Connect the appliance to your designated Terminal Emulation Host computer using the provided USB to DB-9 serial
cable.
Note:

You might need to download the driver for this cable on your Terminal
Emulation Host: https://www.cablestogo.com/product/26887/5ft-usb-to-db9-
male-serial-rs232-adapter-cable#support.

Setting Up Terminal Emulation


Set up terminal emulation so you can communicate with the appliance device through the appliance's serial console.
This communication requires installing serial console terminal emulator software. We recommend using the
following:
• PuTTY for Windows
• ZOC for OS X
• PuTTY or Minicom for Linux
Configure the following terminal emulator software settings:
• Baud Rate: 115200
• Emulation: VT102
• Handshaking: Disabled/off
• RTS/DTS: Disabled/off
Note:

PuTTY does not allow you to configure all of these settings individually.
However, you can configure the PuTTY default settings by selecting
the Serial connection type and specifying "115200" for the Serial Line
baud speed. This is sufficient to use PuTTY as a terminal emulator for the
appliance.
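On a Linux Terminal Emulation Host, for example, you could open the serial console with Minicom or GNU screen.
This is a sketch; it assumes the USB-to-DB-9 adapter appears as /dev/ttyUSB0 on your host:

# open the appliance serial console at 115200 baud with Minicom
minicom -D /dev/ttyUSB0 -b 115200

# or with GNU screen
screen /dev/ttyUSB0 115200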
Configuring the Export Appliance Networking
When the appliance boots up, an appliance serial console configuration menu is displayed on the Terminal Emulation
Host to which the appliance is connected.

Oracle Cloud Data Transfer Appliance


- For use with minimum dts version: dts-0.4.140
- See "Help" for determining your dts version

1) Configure Networking
2) Show Networking
3) Reset Authentication
4) Show Authentication
5) Show Status
6) Collect Appliance Diagnostic Information
7) Generate support bundle
8) Shutdown Appliance
9) Reboot Appliance
10) Help

Select a command:

Note:

It can take up to 5 minutes for the serial console menu to display. Press Enter
if you do not see the serial console configuration menu after this amount of
time.
The appliance supports a single active network interface on any of the 10-Gbps network ports. If only one interface is
cabled and active, that interface is chosen automatically. If multiple interfaces are active, you are given the choice to
select the interface to use.
To configure your export appliance networking
1. From the Terminal Emulation Host, select Configure Networking from the appliance serial console menu.
2. Provide the required networking information when prompted:
• IP Address: IP address of the export appliance.
• Subnet Mask Length: The count of leading 1 bits in the subnet mask. For example, if the subnet mask is
255.255.255.0 then the length is 24.
• Default Gateway: Default gateway for network communications.
For example:

Configure Networking:
^C to cancel

Configuring IP address, subnet mask length, gateway

Example:
IP Address : 10.0.0.2
Subnet Mask Length : 24
Gateway : 10.0.0.1

Address: 10.0.0.1
Subnet Mask Length: 24
Gateway: 10.0.0.1

Configuring IP address 10.0.0.1 netmask 255.255.255.0 default gateway 10.0.0.1
Enabling enp0s3
Now trying to restart the network

Network configuration is complete

New authentication material created.

Client access token : 4iH1gw1okPJO


Appliance certificate MD5 fingerprint :
BF:C6:49:9B:25:FE:9F:64:06:7E:DF:F5:F9:E5:C6:56
Press ENTER to return...

When you configure a network interface, the appliance software generates a new client access token and appliance
X.509/SSL certificate. The access token is used to authorize your Control Host to communicate with the Data
Transfer Appliance's Management Service. The X.509/SSL certificate is used to encrypt communications with the
Data Transfer Appliance's Management Service over the network. Provide the access token and SSL certificate
fingerprint values displayed here when you use the CLI commands to initialize authentication on your host machine.
You can change the selected interface, network information, and reset the authentication material at any time by
selecting Configure Networking again from the appliance serial console menu.
Notify the Data Administrator
After completing the tasks in this topic, send the following appliance information to the Data Administrator:
• Appliance IP address
• Access token
• SSL certificate fingerprint
What's Next
You are now ready to copy your data from the export appliance to your data center. See Copying Data from the
Export Appliance on page 1109.

Copying Data from the Export Appliance

This topic describes the tasks associated with copying data from the export appliance to your data center's Data Host
using the Control Host. The Data Administrator role typically performs these tasks. See Roles and Responsibilities on
page 1085.
Note:

You can only run Oracle Cloud Infrastructure CLI commands from a Linux
host. This differs from running CLI commands for other Oracle Cloud
Infrastructure Services on a variety of host operating systems. Appliance-
based commands require validation that is only available on Linux hosts.
Information Prerequisites
Before performing any export appliance copying tasks, you must obtain the following information:
• Appliance IP address - typically provided by the Infrastructure Engineer.
• IAM login information, Data Transfer Utility configuration files, export job ID, and job label - typically
provided by the Project Sponsor.
Setting Up an HTTP Proxy Environment
You might need to set up an HTTP proxy environment on the Control Host to allow access to the public internet. This
proxy environment allows the Oracle Cloud Infrastructure CLI to communicate with the Data Transfer Appliance
Management Service and the export appliance over a local network connection. If your environment requires internet-
aware applications to use network proxies, configure the Control Host to use your environment's network proxies by
setting the standard Linux environment variables on your Control Host.
Assume that your organization has a corporate internet proxy at http://www-proxy.myorg.com and that the
proxy is an HTTP address at port 80. You would set the following environment variable:

export HTTPS_PROXY=http://www-proxy.myorg.com:80

If you configured a proxy on the Control Host and the export appliance is directly connected to that host, the Control
Host tries unsuccessfully to communicate with the export appliance using a proxy. Set a no_proxy environment
variable for the appliance. For example, if the appliance is on a local network at 10.0.0.1, you would set the
following environment variable:

export NO_PROXY=10.0.0.1
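Putting the two settings together, a Control Host behind a corporate proxy would typically export both variables
before running any appliance-facing command. This is a sketch; the proxy host and the appliance address are the
placeholder values from the examples above:

export HTTPS_PROXY=http://www-proxy.myorg.com:80
export NO_PROXY=10.0.0.1
oci dts export show --job-id ocid1.datatransferapplianceexportjob.oc1..exampleuniqueID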

Setting Firewall Access


If you have a restrictive firewall in the environment where you are using the Oracle Cloud Infrastructure CLI, you
may need to open your firewall configuration to the following IP address ranges: 140.91.0.0/16.
Configuring the Export Appliance
After you have physically set up the export appliance and connected it to your network, you can configure it using the
CLI to allow the copying of the data it contains to your data center.
Note:

You can only use the CLI command to configure the appliance.
To configure the export appliance using the CLI:

oci dts export configure-physical-appliance --job-id job_id
  --appliance-cert-fingerprint appliance_cert_fingerprint --appliance-ip appliance_ip
  --rw [true|false] --world [true|false] [OPTIONS]

appliance_cert_fingerprint is the export appliance X.509/SSL certificate fingerprint.
appliance_ip is the IP address of the export appliance.
--rw indicates whether the exported data has read/write permissions. Specify true or false.
--world indicates whether the exported data is accessible by all. Specify true or false.
OPTIONS are (an example invocation follows this list):
• --access-token: The access token to authenticate with the export appliance.
• --appliance-port: The port number used by the export appliance.
• --appliance-profile: User-defined name or description of the transfer appliance. This is useful if you have
multiple transfer appliances.
• --subnet-mask-length: The subnet mask length for the IP address.
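For example, using the access token and certificate fingerprint reported by the appliance serial console in
Configuring the Export Appliance Networking (a sketch; the job OCID, IP address, token, and fingerprint are the
placeholder values from those examples):

oci dts export configure-physical-appliance \
  --job-id ocid1.datatransferapplianceexportjob.oc1..exampleuniqueID \
  --appliance-ip 10.0.0.1 \
  --appliance-cert-fingerprint BF:C6:49:9B:25:FE:9F:64:06:7E:DF:F5:F9:E5:C6:56 \
  --access-token 4iH1gw1okPJO \
  --rw true --world false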
Setting Your Data Host as an NFS Client
Note:

Only Linux machines can be used as Data Hosts.


Set up your Data Host as an NFS client:
• For Debian or Ubuntu, install the nfs-common package. For example:

sudo apt-get install nfs-common


• For Oracle Linux or Red Hat Linux, install the nfs-utils package. For example:

sudo yum install nfs-utils

Mounting the NFS Share


To mount the NFS share:
At the command prompt on the Data Host, create the mountpoint directory:

mkdir -p /mnt/mountpoint


For example:

mkdir -p /mnt/nfs-ds-1

Next, use the mount command to mount the NFS share.

mount -t nfs <appliance_ip>:/data/dataset_name mountpoint

For example:

mount -t nfs 10.0.0.1:/data/nfs-ds-1 /mnt/nfs-ds-1

Note:

The appliance IP address in this example (10.0.0.1) may be different from
the one you use for your appliance.
After the NFS share is mounted, you can write data to the share.
Copying Files to the Data Host
Copy your files from the appliance to your NFS-mounted Data Host using normal file system tools.
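For example, assuming the share is mounted at /mnt/nfs-ds-1 as in the previous section, you could copy everything
into a local directory with cp or rsync (the destination path is illustrative):

# create a local destination and copy the exported files, preserving attributes
mkdir -p /data/exported-files
cp -a /mnt/nfs-ds-1/. /data/exported-files/

# or copy with progress reporting
rsync -avh /mnt/nfs-ds-1/ /data/exported-files/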
Viewing Data Export Metrics
You can view the metrics associated with an export job in the Transfer Appliance Details page in chart or table
format. Select Metrics under Resources to display each of these measures:
• Export Files Downloaded: Total number of files downloaded to the appliance for export.
• Export Bytes Downloaded: Total number of bytes downloaded to the appliance for export.
• Export Files Remaining: Total number of files left to be downloaded for export.
• Export Bytes Remaining: Total number of bytes left to be downloaded for export.
• Export Files in Error: Total number of files in error for export.
• Export Download Verification Progress: Progress of verification of files that have already been downloaded for
export.
Select the Start Time and End Time for these measures, either by manually entering the days and times in their
respective fields, or by selecting the Calendar feature and picking the times that way. As an alternative to selecting
a start and end time, you can also select from a list of standard times (last hour, last 6 hours, and so forth) from the
Quick Selects list for the period measured. The time period you specify applies to all the measures.
Specify the Interval (for example, 5 minutes, 1 hour) that each measure is recorded from the list.
Specify the Statistic being recorded (for example, Sum, Mean) for each measure from the list.
Tip:

Mean is the most useful statistic for data export.


Choose additional actions from the Options list, including viewing the query in the Metrics Explorer, capturing the
URL for the measure, and switching between chart and table view.
Click Reset Charts to delete any existing information in the charts and begin recording new metrics.
See Monitoring Overview on page 2686 for general information on monitoring your Oracle Cloud Infrastructure
services.
Erasing Files from the Export Appliance
Use standard system tools to remove your data from the export appliance after you have copied your files.
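For example, assuming the share is still mounted at /mnt/nfs-ds-1 (a sketch; double-check the mountpoint before
deleting anything):

# remove the copied files from the appliance share, then unmount it
rm -rf /mnt/nfs-ds-1/*
umount /mnt/nfs-ds-1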


Notifying Export Appliance Return


When the erasure of the client data from the export appliance is complete, notify the Infrastructure Engineer that the
appliance is ready to be returned to Oracle.
What's Next
You are now ready to ship the export appliance back to Oracle. See Returning the Export Appliance to Oracle on page
1112.

Returning the Export Appliance to Oracle

This topic describes the tasks associated with shipping the export appliance back to Oracle after you have copied
your data to your data center. The Infrastructure Engineer role typically performs these tasks. See Roles and
Responsibilities on page 1085.
Note:

You can only run Oracle Cloud Infrastructure CLI commands from a Linux
host. This differs from running CLI commands for other Oracle Cloud
Infrastructure Services on a variety of host operating systems. Appliance-
based commands require validation that is only available on Linux hosts.
Shutting Down the Export Appliance
Shut down the export appliance before packing up and shipping it back to Oracle.
To shut down the export appliance
Using the terminal emulator on the host machine, select Shutdown from the appliance serial console.
Packing and Shipping the Export Appliance to Oracle
Return the export appliance to Oracle within 30 days. If you need the appliance beyond the standard 30-day window,
you can file a Service Request (SR) to ask for an extension of up to 60 days.
Important:

Review and follow the instructions that were provided in the transit case with
the appliance.
To pack and ship the export appliance
1. Unplug the power cord from the power source and detach the other end of the cord from the export appliance.
2. Disconnect the appliance from your network.
3. Remove the return shipment tamper-evident security tie from the transit case.
4. Place the appliance, power cord, and serial cable in the transit case.
Caution:

Oracle recommends assistance lifting and placing the appliance back into
the transit case. The total shipping weight is about 64 lbs (29.0299 kg) and
appliance weight is 38 lbs (17.2365 kg).
5. Close and secure the transit case with the return tamper-evident security tie.
6. Loop the top of the plastic tie-on tag with return shipping label through the handle of the transit case. Remove the
protective tape from the back of the tie-on tag, exposing the adhesive area on which to secure the tag onto itself.
Use the provided zip tie to secure the tie-on tag to the handle.


7. Return the transit case to FedEx by doing one of the following:


• Drop off the packed, sealed, and labeled transit case at a FedEx Authorized ShipCenter location or a nearby
FedEx Office location. Obtain a receipt from the vendor to certify transfer of custody.
• Schedule a pickup with FedEx at your location. Ensure that the transit case is packed, sealed, and labeled
before FedEx arrives for pickup.
The shipping vendor notifies Oracle when the appliance is returned to Oracle.
Tracking the Export Appliance Return
Use your shipping carrier's tracking feature to follow the progress of your export appliance after you return it to
Oracle.

Data Export Reference


This topic provides complete task details that are not otherwise fully documented in other topics. Use this topic as a
reference to learn and use commands associated with components included in the data export procedure.
Export jobs determine how and when data is copied from Oracle Cloud Infrastructure to the export
appliance. Perform the following export job tasks using the Console or the CLI:
Creating Export Jobs
To create an export job using the Console
1. Open the navigation menu. Under Core Infrastructure, go to Object Storage and click Data Transfer -
Exports.
2. Select the Compartment you want to use for data exports from the list.
A list of export jobs that have already been created is displayed.
3. Click Create Export Job.
The Create Export Job dialog appears.
4. Enter a Job Name. Avoid entering confidential information. Then, select the Upload Bucket from the list.
5. (Optional) Complete the following fields:
• Prefix: List of objects with names matching this prefix would be part of this export job.
• Range Start: Object names returned by a list query must be greater or equal to this parameter.
• Range End: Object names returned by a list query must be strictly less than this parameter.
6. Complete the following fields:
• Company Name
• Recipient Name
• Recipient Email
• Address 1
• Address 2
• City/Locality
• State/Province/Region
• Zip/Postal Code
• Country
7. (Optional) Add any tagging information, including the tag namespace, key, and value in the associated fields.
8. Click Create Transfer Job.
To create an export job using the CLI

oci dts export create --compartment-id <compartment_id> --bucket-name <bucket_name>
  --display-name <display_name> --addressee <addressee> --care-of <care_of>
  --address1 <address1> --city-or-locality <city_or_locality>
  --state-province-region <state_province_region> --country <country>
  --zip-postal-code <zip_postal_code> --phone-number <phone_number>
  --email <email> <options>

<display_name> is the name of the export job as it appears. Avoid entering confidential information.
<addressee> is the company or person receiving the appliance.
<care_of> is the contact associated with the addressee.
<address1> is the required address of the addressee.
<city_or_locality> is the city or locality of the addressee.
<state_province_region> is the state, province, or region of the addressee.
<country> is the country of the addressee.
<zip_postal_code> is the zip or postal code of the addressee.
<phone_number> is the phone number of the addressee or contact.
<email> is the email address of the addressee or contact.
<options> are:
• --setup-notifications: Sets up the required export notifications as part of the export job creation process.
• --prefix: List of objects with names matching this prefix would be part of this export job.
• --range-start: Object names returned by a list query must be greater or equal to this parameter.
• --range-end: Object names returned by a list query must be strictly less than this parameter.
• --freeform-tags: Free-form tags for this resource. Each tag is a simple key-value pair with no predefined
name, type, or namespace. Example: `{"Department": "Finance"}` This is a complex type whose value must be
valid JSON. The value can be provided as a string on the command line or passed in as a file using the file://path/
to/file syntax. The --generate-param-json-input option can be used to generate an example of the JSON which
must be provided. We recommend storing this example in a file, modifying it as needed and then passing it back
in via the file:// syntax.
• --defined-tags: Defined tags for this resource. Each key is predefined and scoped to a namespace. For
more information, see [Resource Tags]. Example: `{"Operations": {"CostCenter":"42"}}` This is a complex type
whose value must be valid JSON. The value can be provided as a string on the command line or passed in as a file
using the file://path/to/file syntax. The --generate-param-json-input option can be used to generate
an example of the JSON which must be provided. We recommend storing this example in a file, modifying it as
needed and then passing it back in via the file:// syntax.
• --wait-for-state: This operation creates, modifies or deletes a resource that has a defined lifecycle state:
CREATING, ACTIVE, IN PROGRESS, SUCCEEDED, FAILED, CANCELLED, or DELETED. Specify this
option to perform the action and then wait until the resource reaches a given lifecycle state. Multiple states can
be specified, returning on the first state. For example, --wait-for-state SUCCEEDED --wait-for-
state FAILED would return on whichever lifecycle state is reached first. If timeout is reached, a return code of
2 is returned. For any other error, a return code of 1 is returned.
• --max-wait-seconds: The maximum time in seconds to wait for the resource to reach the lifecycle state
defined by the --wait-for-state attribute. Default is 1200.
• --wait-interval-seconds: The check interval in seconds used to determine whether the resource has
reached the lifecycle state defined by --wait-for-state. Default is 30.
• --address2: Optional address line 2.
• --address3: Optional address line 3.
• --address4: Optional address line 4.
• --from-json: Provide input to this command as a JSON document from a file using the file://path-to/file
syntax. The --generate-full-command-json-input option can be used to generate a sample JSON file
to be used with this command option. The key names are pre-populated and match the command option names
(converted to camelCase format, e.g. compartment-id --> compartmentId), while the values of the keys need to
be populated by the user before using the sample file as an input to this command. For any command option that
accepts multiple values, the value of the key can be a JSON array. Options can still be provided on the command
line. If an option exists in both the JSON document and the command line then the command line specified value
will be used. For examples on usage of this option, please see our "using CLI with advanced JSON options" link:
https://docs.cloud.oracle.com/iaas/Content/API/SDKDocs/cliusing.htm#AdvancedJSONOptions.
For example:

oci dts export create --compartment-id ocid.compartment.oc1..exampleuniqueID --bucket-name MyBucket1 --display-name MyExportJob1 --addressee "MyCompany Corp" --care-of "John Doe" --address1 "123 Main St." --city-or-locality Anytown --state-province-region CA --country USA --zip-postal-code 12345 --phone-number "4085551212" --email [email protected]

{
"data": {
"appliance-decryption-passphrase": "********",
"appliance-delivery-tracking-number": null,
"appliance-delivery-vendor": null,
"appliance-return-delivery-tracking-number": null,
"appliance-serial-number": null,
"bucket-access-policies": [
"POLICIES CREATION IN PROGRESS"
],
"bucket-name": "MyExportJobs",
"compartment-id": "ocid.compartment.oc1..exampleuniqueID",
"creation-time": "2020-06-18T17:24:13+00:00",
"customer-shipping-address": {
"address1": "123 Main St.",
"address2": null,
"address3": null,
"address4": null,
"addressee": "MyCompany Corp",
"care-of": "John Doe",
"city-or-locality": "Anytown",
"country": "USA",
"email": "[email protected]",
"phone-number": "4085554321",
"state-or-region": "CA",
"zipcode": "12345"
},
"defined-tags": {},
"display-name": "MyExportJob1",
"first-object": null,
"freeform-tags": {},
"id": "ocid1.datatransferapplianceexportjob.oc1..exampleuniqueID",
"last-object": null,
"lifecycle-state": "CREATING",
"lifecycle-state-details": "PENDING_MANIFEST_GENERATION",
"manifest-file": null,
"manifest-md5": null,
"next-object": null,
"number-of-objects": null,
"prefix": null,
"range-end": null,
"range-start": null,
"receiving-security-tie": null,
"sending-security-tie": null,
"total-size-in-bytes": null
},
"etag": "4--gzip"
}
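
The prefix and wait options described earlier can be combined with the required parameters. The following is a sketch
with hypothetical values (bucket, display name, and prefix); it also sets up the required notifications and waits until
the job reaches the ACTIVE state:

oci dts export create --compartment-id ocid.compartment.oc1..exampleuniqueID --bucket-name MyBucket1 --display-name MyExportJob2 --addressee "MyCompany Corp" --care-of "John Doe" --address1 "123 Main St." --city-or-locality Anytown --state-province-region CA --country USA --zip-postal-code 12345 --phone-number "4085551212" --email [email protected] --prefix "backups/2020/" --setup-notifications --wait-for-state ACTIVE --max-wait-seconds 1800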

Notifications


If you do not include the --setup-notifications option when you run the command, the following is returned:

It is a pre-requisite to setup notifications for export. Do you want to setup notifications? [y/N]:

If you do not have the necessary permissions to set up the notifications for export, or if you have previously done this
step, then select N. Otherwise, select y. See Setting Up Notifications on page 1093 for more information.
Listing Export Jobs
To display a list of export jobs using the Console
1. Open the navigation menu. Under Core Infrastructure, go to Object Storage and click Data Transfer -
Exports.
2. Select the Compartment from the list.
The export jobs in that compartment are displayed.
To display a list of export jobs using the CLI

oci dts export list --compartment-id <compartment_id> <options>

<options> are:
• --lifecycle-state: Filter the returned export jobs by the specified lifecycle state (specify one only):
CREATING, ACTIVE, INPROGRESS, SUCCEEDED, FAILED, CANCELLED, or DELETED.
• --display-name: Filter the returned export jobs by the specified display name.
• --limit: The maximum number of results per page, or items to return. For important details about how
pagination works, see [List Pagination]. Example: `50`.
• --page: The value of the `opc-next-page` response header from the previous "List" call. For important details
about how pagination works, see [List Pagination].
• --all: Returns all pages of results. If you provide this option, then you cannot provide the --limit option.
• --page-size: Specify the number of export jobs returned per call. Only valid when used with --all or --limit,
and ignored otherwise.
• --from-json: Provide input to this command as a JSON document from a file using the file://path-to/file
syntax. The --generate-full-command-json-input option can be used to generate a sample JSON file
to be used with this command option. The key names are pre-populated and match the command option names
(converted to camelCase format, e.g. compartment-id --> compartmentId), while the values of the keys need to
be populated by the user before using the sample file as an input to this command. For any command option that
accepts multiple values, the value of the key can be a JSON array. Options can still be provided on the command
line. If an option exists in both the JSON document and the command line then the command line specified value
will be used. For examples on usage of this option, please see our "using CLI with advanced JSON options" link:
https://docs.cloud.oracle.com/iaas/Content/API/SDKDocs/cliusing.htm#AdvancedJSONOptions.
For example:

oci dts export list --compartment-id ocid.compartment.oc1..exampleuniqueID

{
"data": [
{
"bucket-name": "MyExportJobs",
"creation-time": "2020-06-18T17:24:13+00:00",
"defined-tags": {},
"display-name": "MyExportJob1",
"freeform-tags": {},
"id": "ocid1.datatransferapplianceexportjob.oc1..exampleuniqueID",
"lifecycle-state": "CREATING",
"lifecycle-state-details": "PENDING_MANIFEST_GENERATION"
},
{
"bucket-name": "MyTestExportJobs",
"creation-time": "2020-06-18T18:07:59+00:00",
"defined-tags": {},
"display-name": "MyTestExportJob",
"freeform-tags": {},
"id": "ocid1.datatransferapplianceexportjob.oc1..exampleuniqueID",
"lifecycle-state": "CREATING",
"lifecycle-state-details": "PENDING_MANIFEST_GENERATION"
}
]
}
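
The filter options described above can be combined. For example, to list only the export jobs in the CREATING state
with a particular display name (the values shown are illustrative), you might run:

oci dts export list --compartment-id ocid.compartment.oc1..exampleuniqueID --lifecycle-state CREATING --display-name MyExportJob1 --all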

Displaying Export Job Details


To show the details of an export job using the Console
1. Open the navigation menu. Under Core Infrastructure, go to Object Storage and click Data Transfer -
Exports.
2. Select the Compartment from the list.
The export jobs in that compartment are displayed.
3. Find the export job for which you want to display the details.
4. Click the Actions icon, and then click View Details.
To show the details of an export job using the CLI

oci dts export show --job-id <job_id> <options>

<options> are:
• --from-json: Provide input to this command as a JSON document from a file using the file://path-to/file
syntax. The --generate-full-command-json-input option can be used to generate a sample JSON file
to be used with this command option. The key names are pre-populated and match the command option names
(converted to camelCase format, e.g. compartment-id --> compartmentId), while the values of the keys need to
be populated by the user before using the sample file as an input to this command. For any command option that
accepts multiple values, the value of the key can be a JSON array. Options can still be provided on the command
line. If an option exists in both the JSON document and the command line then the command line specified value
will be used. For examples on usage of this option, please see our "using CLI with advanced JSON options" link:
https://docs.cloud.oracle.com/iaas/Content/API/SDKDocs/cliusing.htm#AdvancedJSONOptions.
For example:

oci dts export show --job-id ocid1.datatransferapplianceexportjob.oc1..exampleuniqueID

{
"data": {
"appliance-decryption-passphrase": "********",
"appliance-delivery-tracking-number": null,
"appliance-delivery-vendor": null,
"appliance-return-delivery-tracking-number": null,
"appliance-serial-number": null,
"bucket-access-policies": [
"POLICIES CREATION IN PROGRESS"
],
"bucket-name": "MyExportJobs",
"compartment-id": "ocid.compartment.oc1..exampleuniqueID",
"creation-time": "2020-06-18T17:24:13+00:00",
"customer-shipping-address": {
"address1": "123 Main St.",
"address2": null,
"address3": null,

Oracle Cloud Infrastructure User Guide 1117


Data Transfer

"address4": null,
"addressee": "MyCompany",
"care-of": "John Doe",
"city-or-locality": "Anytown",
"country": "US",
"email": "[email protected]",
"phone-number": "4085551212",
"state-or-region": "CA",
"zipcode": "12345"
},
"defined-tags": {},
"display-name": "MyExportJob1",
"first-object": null,
"freeform-tags": {},
"id": "ocid1.datatransferapplianceexportjob.oc1..exampleuniqueID",
"last-object": null,
"lifecycle-state": "CREATING",
"lifecycle-state-details": "PENDING_MANIFEST_GENERATION",
"manifest-file": null,
"manifest-md5": null,
"next-object": null,
"number-of-objects": null,
"prefix": null,
"range-end": null,
"range-start": null,
"receiving-security-tie": null,
"sending-security-tie": null,
"total-size-in-bytes": null
},
"etag": "1--gzip"
}

Editing Export Jobs


To edit an export job using the Console
1. Open the navigation menu. Under Core Infrastructure, go to Object Storage and click Data Transfer -
Exports.
2. Select the Compartment from the list.
The export jobs in that compartment are displayed.
3. Click the link under Transfer Jobs for the export job that you want to edit.
The Details page for that export job appears.

Alternatively, you can click the Actions icon, and then click View Details.
4. Click Edit in the Details page.
The Edit Export Job dialog appears.
5. Edit any of the attributes displayed, including the job name, bucket, and address.
Avoid entering confidential information.
6. Click Save Changes.
You are returned to the Details page for that export job.
To edit an export job using the CLI

oci dts export update --job-id <job_id> <options>

<options> are:
• --bucket-name: Name of the bucket associated with the data export. Avoid entering confidential information.
• --prefix: Objects with names matching this prefix are included in this export job.
• --range-start: Object names returned by a list query must be greater than or equal to this parameter.
• --range-end: Object names returned by a list query must be strictly less than this parameter.
• --display-name: Name of the export job as it appears.
• --lifecycle-state: CREATING, ACTIVE, INPROGRESS, SUCCEEDED, FAILED, CANCELLED, or
DELETED.
• --lifecycle-state-details: A property that can contain details on the lifecycle.
• --manifest-file: Manifest file associated with this export job.
• --manifest-md5: md5 digest of the manifest file.
• --number-of-objects: Total number of objects that are exported in this job.
• --total-size-in-bytes: Total size of objects in bytes that are exported in this job.
• --first-object: First object in the list of objects that are exported in this job.
• --last-object: Last object in the list of objects that are exported in this job.
• --next-object: First object from which the next potential export job could start.
• --freeform-tags: Free-form tags for this resource. Each tag is a simple key-value pair with no predefined
name, type, or namespace. Example: `{"Department": "Finance"}` This is a complex type whose value must be
valid JSON. The value can be provided as a string on the command line or passed in as a file using the file://path/
to/file syntax. The --generate-param-json-input option can be used to generate an example of the JSON which
must be provided. We recommend storing this example in a file, modifying it as needed and then passing it back
in via the file:// syntax.
• --defined-tags: Defined tags for this resource. Each key is predefined and scoped to a namespace. For
more information, see [Resource Tags]. Example: `{"Operations": {"CostCenter":"42"}}` This is a complex type
whose value must be valid JSON. The value can be provided as a string on the command line or passed in as a file
using the file://path/to/file syntax. The --generate-param-json-input option can be used to generate
an example of the JSON which must be provided. We recommend storing this example in a file, modifying it as
needed and then passing it back in via the file:// syntax.
• --if-match: The tag that must be matched for the task to occur for that entity. If set, the update is only
successful if the object's tag matches the tag specified in the request.
• --force: Perform task without prompting for confirmation.
• --wait-for-state: This operation creates, modifies or deletes a resource that has a defined lifecycle state:
CREATING, ACTIVE, IN PROGRESS, SUCCEEDED, FAILED, CANCELLED, or DELETED. Specify this
option to perform the action and then wait until the resource reaches a given lifecycle state. Multiple states can
be specified, returning on the first state. For example, --wait-for-state SUCCEEDED --wait-for-
state FAILED would return on whichever lifecycle state is reached first. If timeout is reached, a return code of
2 is returned. For any other error, a return code of 1 is returned.
• --max-wait-seconds: The maximum time in seconds to wait for the resource to reach the lifecycle state
defined by the --wait-for-state attribute. Default is 1200.
• --wait-interval-seconds: The check interval in seconds used to determine whether the resource has reached the
lifecycle state defined by the --wait-for-state attribute. Default is 30.
• --addressee: Company or person receiving the appliance.
• --care-of: Contact associated with the addressee.
• --address1: Required address of the addressee (line 1).
• --address2: Optional address of the addressee (line 2).
• --address3: Optional address of the addressee (line 3).
• --address4: Optional address of the addressee (line 4).
• --city-or-locality: City or locality of the addressee.
• --state-province-region: State, province, or region of the addressee.
• --country: Country of the addressee.
• --zip-postal-code: Zip or postal code of the addressee.
• --phone-number: Phone number of the addressee or contact.
• --email: Email address of the addressee or contact.
• --from-json: Provide input to this command as a JSON document from a file using the file://path-to/file
syntax. The --generate-full-command-json-input option can be used to generate a sample JSON file
to be used with this command option. The key names are pre-populated and match the command option names
(converted to camelCase format, e.g. compartment-id --> compartmentId), while the values of the keys need to
be populated by the user before using the sample file as an input to this command. For any command option that
accepts multiple values, the value of the key can be a JSON array. Options can still be provided on the command
line. If an option exists in both the JSON document and the command line then the command line specified value
will be used. For examples on usage of this option, please see our "using CLI with advanced JSON options" link:
https://docs.cloud.oracle.com/iaas/Content/API/SDKDocs/cliusing.htm#AdvancedJSONOptions.
For example:

oci dts export update --job-id ocid1.datatransferapplianceexportjob.oc1..exampleuniqueID --care-of "Robert Roe" --phone-number 4085554321 --email [email protected]

Running the command displays the following message:

WARNING: Updates to customer-shipping-address and freeform-tags and defined-tags will replace any existing values. Are you sure you want to continue? [y/N]:

Confirm that you want to continue. The information returned contains the updated items you specified:

"bucket-name": "MyExportJobs",
"compartment-id": "ocid.compartment.oc1..exampleuniqueID",
"creation-time": "2020-06-18T17:24:13+00:00",
"customer-shipping-address": {
"address1": "123 Main St.",
"address2": null,
"address3": null,
"address4": null,
"addressee": "MyCompany",
"care-of": "Robert Roe",
"city-or-locality": "Anytown",
"country": "US",
"email": "[email protected]",
"phone-number": "4085554321",
"state-or-region": "CA",
"zipcode": "12345"
}
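
As noted in the option descriptions, the tag options accept JSON either inline or through the file:// syntax. A minimal
sketch, assuming a hypothetical file named tags.json that contains the example tag shown earlier:

echo '{"Department": "Finance"}' > tags.json
oci dts export update --job-id ocid1.datatransferapplianceexportjob.oc1..exampleuniqueID --freeform-tags file://tags.json

As with the previous example, the command warns that any existing freeform-tags values will be replaced before it
applies the update.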

Deleting Export Jobs


To delete an export job using the Console
1. Open the navigation menu. Under Core Infrastructure, go to Object Storage and click Data Transfer -
Exports.
2. Select the Compartment from the list.
The export jobs in that compartment are displayed.
3. Click the link under Transfer Jobs for the export job that you want to delete.
The Details page for that export job appears.

Alternatively, you can click the Actions icon, and then click Delete.
4. Click Delete in the Details page.
The Delete Export Job dialog appears to confirm the deletion.
5. Click Delete.
The export job is deleted and you are returned to the export jobs page.


To delete an export job using the CLI

oci dts export delete --job-id <job_id> <options>

<options> are:
• --if-match: The tag that must be matched for the task to occur for that entity. If set, the update is only
successful if the object's tag matches the tag specified in the request.
• --force: Perform task without prompting for confirmation.
• --wait-for-state: This operation creates, modifies or deletes a resource that has a defined lifecycle state:
CREATING, ACTIVE, IN PROGRESS, SUCCEEDED, FAILED, CANCELLED, or DELETED. Specify this
option to perform the action and then wait until the resource reaches a given lifecycle state. Multiple states can
be specified, returning on the first state. For example, --wait-for-state SUCCEEDED --wait-for-
state FAILED would return on whichever lifecycle state is reached first. If timeout is reached, a return code of
2 is returned. For any other error, a return code of 1 is returned.
• --max-wait-seconds: The maximum time in seconds to wait for the resource to reach the lifecycle state
defined by the --wait-for-state attribute. Default is 1200.
• --wait-interval-seconds: The check interval in seconds used to determine whether the resource has reached the
lifecycle state defined by the --wait-for-state attribute. Default is 30.
• --from-json: Provide input to this command as a JSON document from a file using the file://path-to/file
syntax. The --generate-full-command-json-input option can be used to generate a sample JSON file
to be used with this command option. The key names are pre-populated and match the command option names
(converted to camelCase format, e.g. compartment-id --> compartmentId), while the values of the keys need to
be populated by the user before using the sample file as an input to this command. For any command option that
accepts multiple values, the value of the key can be a JSON array. Options can still be provided on the command
line. If an option exists in both the JSON document and the command line then the command line specified value
will be used. For examples on usage of this option, please see our "using CLI with advanced JSON options" link:
https://docs.cloud.oracle.com/iaas/Content/API/SDKDocs/cliusing.htm#AdvancedJSONOptions.
For example:

oci dts export delete --job-id ocid1.datatransferapplianceexportjob.oc1..exampleuniqueID

Confirm the deletion when prompted. The export job is deleted with no further action or return. To confirm the
deletion, display the list of export jobs in the compartment. See Listing Export Jobs on page 1116 for
more information.
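
The --force and --wait-for-state options described above can be combined to skip the confirmation prompt and block
until the job is removed. A sketch using the example OCID from earlier in this section:

oci dts export delete --job-id ocid1.datatransferapplianceexportjob.oc1..exampleuniqueID --force --wait-for-state DELETED --max-wait-seconds 600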
Moving Export Jobs Between Compartments
To move an export job to a different compartment using the Console
1. Open the navigation menu. Under Core Infrastructure, go to Object Storage and click Data Transfer -
Exports.
2. Select the Compartment from the list.
The export jobs in that compartment are displayed.
3. Click the link under Transfer Jobs for the export job that you want to move.
The Details page for that export job appears.

Alternatively, you can click the Actions icon, and then click Move Resource.
4. Click Move Resource in the Details page.
The Move Resource to a Different Compartment dialog appears.
5. Choose the compartment to which you want to move the export job from the list.
6. Click Move Resource.
You are returned to the Details page for that export job.


To move an export job to a different compartment using the CLI

oci dts export change-compartment --job-id <job_id> --compartment-id <compartment_id> <options>

<compartment_id> is the compartment to which the data export job is being moved.
<options> are:
• --if-match: The tag that must be matched for the task to occur for that entity. If set, the update is only
successful if the object's tag matches the tag specified in the request.
• --from-json: Provide input to this command as a JSON document from a file using the file://path-to/file
syntax. The --generate-full-command-json-input option can be used to generate a sample JSON file
to be used with this command option. The key names are pre-populated and match the command option names
(converted to camelCase format, e.g. compartment-id --> compartmentId), while the values of the keys need to
be populated by the user before using the sample file as an input to this command. For any command option that
accepts multiple values, the value of the key can be a JSON array. Options can still be provided on the command
line. If an option exists in both the JSON document and the command line then the command line specified value
will be used. For examples on usage of this option, please see our "using CLI with advanced JSON options" link:
https://docs.cloud.oracle.com/iaas/Content/API/SDKDocs/cliusing.htm#AdvancedJSONOptions.
For example:

oci dts export change-compartment --job-id ocid1.datatransferapplianceexportjob.oc1..exampleuniqueID --compartment-id ocid.compartment.oc1..exampleuniqueID

To confirm the move, display the list of export jobs in the new compartment. See Listing Export Jobs on page
1116 for more information.
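
For example, you can confirm the move by listing the export jobs in the destination compartment (the compartment
OCID shown is the example value used above):

oci dts export list --compartment-id ocid.compartment.oc1..exampleuniqueID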

Troubleshooting
This topic describes various troubleshooting issues related to the Data Transfer Service.

General
Troubleshooting entries in this section can apply to all data transfer methods.
Installing a Specific CLI Version
You may need to change the version of the Oracle Cloud Infrastructure command line interface (CLI) to address
issues with a particular feature. Installing a CLI version other than the one currently installed requires the
following steps, in order:

Downloading the Required Oracle Cloud Infrastructure CLI Version


1. Go to the following site: https://github.com/oracle/oci-cli/releases.
2. Scroll down to the version you need and download it to your local machine.

Uninstalling the Existing CLI Version


If you manually installed the CLI using pip, run the following command:

pip uninstall oci-cli

If you manually installed the CLI in a virtualenv, run the following command:

<virtualenv_path>/bin/pip uninstall oci-cli


Installing the Downloaded CLI Version


See Manual Installation on page 4233 for installation instructions for your downloaded CLI version.
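
If you originally installed the CLI with pip, one possible alternative is to install the specific version you identified
directly with pip (a sketch; replace the version placeholder with the release number you chose on GitHub):

pip install oci-cli==<version>

For a virtualenv installation, the equivalent is:

<virtualenv_path>/bin/pip install oci-cli==<version>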

Appliance-Based Data Transfers


These troubleshooting entries are associated with appliance-based import and export jobs.
Troubleshooting the Appliance
You can generate performance information for troubleshooting issues with the appliance through the terminal
emulator on the host machine. Select Collect Appliance Diagnostic Information from the serial console
configuration menu. The diagnostic tool generates system, network, storage, and performance data while the transfer
job is running. It then forwards the data to the appliance serial console. Here you can scroll through the terminal to
view it.
You can also use the log capture feature of the serial port emulator to capture the output. Serial port emulators often
support the ability to copy the session to a file. Refer to the documentation of your serial port emulation package for
instructions. Copying to a log file is useful if you need assistance from Oracle or if your emulation session does not
allow you to scroll back and see all the output.
For each operation, the display shows exactly what command was executed and all the options.
Here is an example of the diagnostic output:

--------------------------------------------------------------------------------
- systemctl -l --type service --state=active
-
--------------------------------------------------------------------------------
UNIT LOAD ACTIVE SUB DESCRIPTION
auditd.service loaded active running Security Auditing Service
blk-availability.service loaded active exited Availability of block devices
chronyd.service loaded active running NTP client/server
[email protected] loaded active running Diagnostic
Collection Server for the XA (PID 3147/UID 1001)
crond.service loaded active running Command Scheduler
data-transfer-appliance.service loaded active running Data Transfer
Appliance
data-transfer-console.service loaded active running Data Transfer Serial
Console

Any problem with the diagnostic data collection results in the console output being written to the log file of the
service. Failure of the commands is indicative of a serious problem, perhaps requiring the return of the appliance.
Here is an example of the log:

Mar 6 17:55:33 localhost console-diags: {"Module": "main", "Type": "Info", "Message": "Received message {\"cmd\": \"collect\"}"}
Mar 6 17:55:33 localhost console-diags: {"Module": "main", "Type": "Info",
"Message": "Setting up output file. First to remove all /tmp/xa-diags-
results"}
Mar 6 17:55:33 localhost console-diags: {"Module": "main", "Type": "Info",
"Message": "Removing /tmp/xa-diags-results.2019-03-06T17:54:56.000471"}

Initializing Appliance Fails Because of IP Address Issues


Initializing the Appliance can fail because an incorrect IP address is used. The IP address for initialize-auth
can differ from the IP address obtained when running ping or ssl connect. If you experience an initialization failure,
ensure that you are using the correct IP address for your Appliance and try initializing again.
Initialize Authentication Fails with "connection refused" or "connection timed out"
If you try to configure networking using the appliance serial console but fail with a "connection refused" or
"connection timed out" message, follow these troubleshooting steps.


Run the following command at the command prompt on the host:

ping appliance_ip

If a failure occurs, run the following command to verify the appliance IP and the path to the appliance.

ping -I local_interface appliance_ip

To determine the expected interface, run ip route or an equivalent command. Verify that the routing table is correct.
If you are not sure, run traceroute to see the network path to the appliance IP.
Run the following command:

curl -k https://appliance_ip

You should receive the response "Not found." A failure can indicate that the IP address is wrong; for example,
nothing is listening on port 443. If you receive a failure message, run the following command:

openssl s_client -showcerts -connect appliance_ip:443

You should see a certificate issued for "Oracle Cloud Infrastructure" / "Data Transfer Appliance."
This command is similar to curl but does not use HTTP proxy settings, so proxies do not affect it. If this command
works and curl fails, verify that there are no proxy environment variables set.
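
One way to check for and temporarily clear proxy environment variables before retrying curl (a sketch using standard
shell commands):

env | grep -i proxy
unset http_proxy https_proxy HTTP_PROXY HTTPS_PROXY
curl -k https://appliance_ip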
Dataset Sealing Process Fails
The dataset sealing process can fail sometimes because there are special files in the dataset:

dts nfs-dataset seal-status --name nfs-ds-1

Seal Status :
success : false
failureReason :
Number of special files : 5
startTime : 2019/03/26 11:52:37 PDT
endTime : 2019/03/26 11:52:39 PDT
numFilesToProcess : 0
numFilesProcessed : 0
bytesToProcess : 0.00 KB
bytesProcessed : 0.00 KB
bytesToProcess : 0.00 KB

At the command prompt on the host, reactivate the NFS dataset.

oci dts nfs-dataset activate --name <dataset_name>

Then run find to get the full list of all special files and the specific type of each one.

find <mountpoint> \! -type f \! -type d | xargs file

For example:

$ find /mnt/nfs-ds-1 \! -type f \! -type d | xargs file


/mnt/nfs-ds-1/myfile1: symbolic link to `/home/user1/myfile1'
/mnt/nfs-ds-1/myfile2: symbolic link to `/home/user1/myfile2'

Next, review the list and remove all special files from the NFS mount point.

find <mountpoint> \! -type f \! -type d | xargs rm


Deactivate the NFS dataset.

oci dts nfs-dataset deactivate --name dataset_name

Finally, reseal the dataset.

oci dts nfs-dataset seal --name dataset_name [--wait]

Monitor the seal progress. Wait for it to complete successfully and continue with the subsequent steps.
Special Characters in Names Cause Data Sealing Failures
Data sealing fails if the names of the files being transferred contain characters that are not UTF-8, contain a newline,
or include a carriage return. The error returned is similar to the following:
failureReason": "\nNumber of file paths with invalid chars: 1
If this error occurs, activate the data set, mount it, and run the following find command on the filesystem:

find . -print0 | perl -n0e 'chomp; print $_, "\n" if /[[:^ascii:][:cntrl:]]/'

Rename or delete those files that appear in the returned list.

Disk-Based Data Transfers


These troubleshooting entries are associated with disk-based import jobs.
Data Transfer Utility Fails Because of Lack of Exclusive Access to Disk
The Data Transfer Utility requires exclusive access to the disk. If you have any drivers that already claim exclusive
access to the disk, then the Data Transfer Utility fails. For example, if you employ a devicemapper multipath driver
over all your disk devices, you must first remove the disk used for the data transfer from the list of devices managed
by the multipath driver.
Be sure that access to the disk is not done through any devicemapper or volume manager. During the data transfer, the
expectation is that the file system is created on a "raw" device. Any layering or mapping through intermediate drivers
or abstraction layers makes it impossible for the disk to be uploaded at the transfer site. The source of these failures
can include drivers like multipath, md, striping, logical volume managers, and potentially others as well.
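
If the multipath driver currently claims the transfer disk, one possible way to release it (a sketch; requires root, and
the WWID and device map name are placeholders for your environment) is to blacklist the disk and flush its device map:

# List the device maps that multipath currently manages
multipath -ll
# In /etc/multipath.conf, add a blacklist entry for the transfer disk, for example:
#   blacklist {
#       wwid "<disk_wwid>"
#   }
# Flush the existing map for that disk, then reload the multipath service per your distribution
multipath -f <device_map_name>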
You can confirm that the Data Transfer Utility has exclusive access by attempting to manually format the disk being
used for your data transfer. The Data Transfer Utility uses the cryptsetup utility to create an encrypted device.
You can run cryptsetup from the command line (root privileges required):

cryptsetup luksFormat -c aes-xts-plain64 -s 512 -h sha512 --iter-time 2000 --use-random /dev/<sdXX>

<sdXX> is the name of the disk being used for the data transfer.
When prompted, respond that you do want to encrypt the device. You are required to provide a passphrase. Any
passphrase is acceptable as the cryptsetup utility can run on a disk repeatedly without any problems.
If the command succeeds, then you know that the Data Transfer Utility can gain exclusive ownership of the disk to do
what is necessary for the data transfer.
Data Transfer Utility Fails with "Processing exception..." While Communicating to Oracle Cloud
Infrastructure
Check whether your environment has proxies to the internet. If so, update them to the latest version and set
"https_proxy." If you are using the appliance, also set the "no_proxy" environment variable. See Installing the Data
Transfer Utility on page 974 for more information on proxies.


Data Transfer Utility Fails with "invalid configuration file"


If you attempt to run Data Transfer commands and receive the error message "invalid configuration file," verify that
the following files are present on your host and are correctly set up:
• ~/.oci/config and
• ~/.oci/config_upload_user
Both files must have "[DEFAULT]" as the first line. Use of the "~" character in a path is not valid in the file's
contents.
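
A minimal sketch of a valid configuration file, based on the fields shown in the help sheets later in this chapter (all
values are placeholders; note the [DEFAULT] first line and the absolute key_file path):

[DEFAULT]
user=<data_transfer_administrator_ocid>
fingerprint=<public_key_fingerprint>
key_file=/home/<user_name>/.oci/oci_api_key.pem
tenancy=<tenancy_ocid>
region=us-phoenix-1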
Creating Transfer Disk Fails Because of Serial Number Error
Creating a transfer disk using the Data Transfer Utility can fail because of a serial number error:

dts disk create --job-id ocid1.datatransferjob.oc1..exampleuniqueID --block-device /dev/sdb
ERROR : Unable to determine serial number for device /dev/sdb

This error may result from a garbled serial number returned by the hdparm -I command. For example:

/bin/sh -c "hdparm -I /dev/sdb"


/dev/sdb:
SG_IO: bad/missing sense data, sb[]: 70 00 05 00 00 00 00 0a 00 00 00 00 24
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
ATA device, with non-removable media
Standards:
Likely used: 1
Configuration:
Logical max current
cylinders 0 0
heads 0 0
sectors/track 0 0

Logical/Physical Sector size: 512 bytes
device size with M = 1024*1024: 0 MBytes
device size with M = 1000*1000: 0 MBytes
cache/buffer size = unknown
Capabilities:
IORDY not likely
Cannot perform double-word IO
R/W multiple sector transfer: not supported
DMA: not supported
PIO: pio0

If you see this type of error, use the following workaround:


1. Run the following command at the prompt:

lsblk --nodeps -no serial /dev/<device>
<serial_number>

2. Create an hdparm script in your Home directory using the following command:

vi $HOME/hdparm
#!/usr/bin/bash
while getopts ":Iht" opt;do
case ${opt} in
h) # process option h
;;
t)
;;
I)
echo "Serial Number: <serial_number>"

Oracle Cloud Infrastructure User Guide 1126


Data Transfer

;;
esac
done

Use the same serial number in your script that was returned when you ran the lsblk command in the previous
step.
3. Make the script you just created executable (a combined sketch of steps 3 through 5 follows this list).
4. Change your path using the following command:

export PATH=/<home_dir_path>:$PATH
5. Retry creating the transfer disk.
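
A combined sketch of steps 3 through 5, using the example device path and job OCID from earlier in this entry:

chmod +x $HOME/hdparm
export PATH=$HOME:$PATH
hdparm -I /dev/sdb
dts disk create --job-id ocid1.datatransferjob.oc1..exampleuniqueID --block-device /dev/sdb

With the wrapper script first in the PATH, hdparm -I now reports the serial number captured from lsblk, and the
dts disk create command can proceed.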

Help Sheets
Oracle provides a number of help sheets to print and carry with you as you perform your data transfer tasks.

Disk Import
• Disk Import Job Checklist
• Prepare for Disk Import Jobs
• Disk Import Procedures

Appliance Import
• Appliance Import Job Checklist
• Prepare for Appliance Import Job
• Appliance Import Procedures

Help Sheet - Disk Import Job Checklist


Use this checklist to prepare the transfer disk for use in an import job. Check each item in order to ensure
you are fully prepared for the data transfer.
__ USB 2.0/3.0 external hard disk drive.
__ Someone tasked to create labels for the FedEx, UPS, or DHL carriers.
__ Linux machine running Oracle Linux 6 or greater, Ubuntu 14.04 or greater, or SUSE 11 or greater.
__ Users interacting with the Linux machine must have root access.
__ Physical access to the Linux machine to connect the hard disk drive.
__ Linux operating system can create an EXT file system.
__ Java 1.8 or Java 1.11 installed on Linux machine.
__ Hdparm 9.0 or later installed on Linux machine.
__ Cryptsetup 1.2.0. or greater installed on Linux machine.
__ Open firewall for Linux machine to OCI Data Transfer Service on the IP address ranges: 140.91.0.0/16.
__ Open firewall to OCI Object Storage IP address ranges: 134.70.0.0/17.
__ Download and install the Data Transfer Utility.
__ Install OCI Command Line Interface.
__ Generate public/private keys for users who will copy data on the Linux machine (run the oci setup keys
command).
__ Administrative user on the tenancy who can create users, groups, and compartments, and add policies.


Help Sheet - Prepare for Disk Import Jobs


Use this help sheet to prepare and run your disk import job.

Preparing
1. Ensure you have the following set up in your environment:
• USB 2.0/3.0 external hard disk drive (disk) to be used as your import disk.
• Computer running one of the following Linux operating systems:
• Oracle Linux 6 or greater
• Ubuntu 14.04 or greater
• SUSE 11 or greater
All Linux operating systems must have the ability to create an EXT file system. Make sure the system has the
following installed:
• Java 1.8 or Java 1.11
• hdparm 9.0 or later
• Cryptsetup 1.2.0 or greater
2. Download and install the Data Transfer Utility on the Linux machine where the data will be copied from and the
disk will be mounted. You should have root access to this machine.
Installation instructions are located at: https://docs.cloud.oracle.com/iaas/Content/DataTransfer/Tasks/
disk_preparing.htm
3. Install OCI Command Line Interface on the Linux machine where the data will be copied from and the disk will
be mounted. You should have root access to this Linux machine.
Installation instructions are located at: https://docs.cloud.oracle.com/iaas/Content/API/SDKDocs/cliinstall.htm
4. On the machine where data will be copied from, generate public/private keys for the user(s) who will do the data
copy by running the following command:

oci setup keys

See https://docs.cloud.oracle.com/iaas/Content/API/Concepts/apisigningkey.htm for more information on keys.


5. Log in to OCI with an Administrative user for the tenancy.
6. Create the user policies. Ensure that the policies include the following:

Allow group <group_name> to {DTA_ENTITLEMENT_CREATE} in tenancy

See https://docs.cloud.oracle.com/iaas/Content/Identity/Tasks/managingpolicies.htm for more information on policies.
7. Create a compartment where the transfer job and landing bucket will reside.
See https://docs.cloud.oracle.com/iaas/Content/Identity/Tasks/managingcompartments.htm for more information
on compartments.
8. Create the necessary user accounts for those individuals who will copy data to the disk. Include the public key that
was previously generated.
See https://docs.cloud.oracle.com/iaas/Content/Identity/Tasks/managingusers.htm for more information on users.
See https://docs.cloud.oracle.com/iaas/Content/API/Concepts/apisigningkey.htm for more information on public
keys.
9. Create a group for the user who will copy data to the disk. Include the following policies in the group:

Allow group <group_name> to manage data-transfer-jobs in compartment <compartment_name>
Allow group <group_name> to manage buckets in compartment <compartment_name>
Allow group <group_name> to manage objects in compartment <compartment_name>

See https://docs.cloud.oracle.com/iaas/Content/Identity/Tasks/managinggroups.htm for more information on groups.
If you want to include notifications for the group, include these additional policies:

Allow group <group_name> to manage ons-topics in tenancy
Allow group <group_name> to manage ons-subscriptions in tenancy
Allow group <group_name> to manage cloudevents-rules in tenancy
Allow group <group_name> to inspect compartments in tenancy

See https://docs.cloud.oracle.com/iaas/Content/Notification/Concepts/notificationoverview.htm for more information on notifications.
See https://docs.cloud.oracle.com/iaas/Content/Events/Concepts/eventsoverview.htm for more information on events.
10. Create an upload user for Oracle personnel to upload data into the bucket.
See https://docs.cloud.oracle.com/iaas/Content/Identity/Tasks/managingusers.htm for more information on users.
11. Create a group for the upload user, and include the public key that was previously generated.
See https://docs.cloud.oracle.com/iaas/Content/Identity/Tasks/managinggroups.htm for more information on
groups.
See https://docs.cloud.oracle.com/iaas/Content/API/Concepts/apisigningkey.htm for more information on public
keys.
12. Add the following policies for the upload user group:

Allow group <group_name> to manage buckets
in compartment <compartment_name> where all
{ request.permission='BUCKET_READ', target.bucket.name='<bucket_name>' }
Allow group <group_name> to manage objects
in compartment <compartment_name> where all
{ target.bucket.name='<bucket_name>', any
{ request.permission='OBJECT_CREATE',
request.permission='OBJECT_OVERWRITE',
request.permission='OBJECT_INSPECT' }}

The permissions for upload users allow Oracle personnel to upload standard and multi-part objects on your behalf
and inspect bucket and object metadata. The permissions do not allow Oracle personnel to inspect the actual data.
See https://docs.cloud.oracle.com/iaas/Content/Identity/Tasks/managingpolicies.htm for more information on
policies
13. Open firewall to OCI Data Transfer Service on the IP address ranges:
140.91.0.0/16
14. Open firewall to OCI Object Storage IP address ranges:
134.70.0.0/17

Creating the Transfer Job


Run these command line items on the host where you plan to mount the USB HDD and copy data and/or on the host
that you will use to manage the data transfer job:
1. As root, create the configuration files:

sudo bash
mkdir /root/.oci
cd /root/.oci
vi config
[DEFAULT]
user=<The OCID for the data transfer administrator>
fingerprint=<The fingerprint of the above user's public key>
key_file=<The _absolute_ path to the above user's private key file on the
host machine>
tenancy=<The OCID for the tenancy that owns the data transfer job and
bucket>
region=<The region where the transfer job and bucket should exist. Valid
values are:
us-ashburn-1, us-phoenix-1, eu-frankfurt-1, and uk-london-1.>

vi config_upload_user
[DEFAULT]
user=<The OCID for the data transfer upload user>
fingerprint=<The fingerprint of the above user's public key>
key_file=<The _absolute_ path to the above user's private key file on the
host machine>
tenancy=<The OCID for the tenancy that owns the data transfer job and
bucket>
region=<The region where the transfer job and bucket should exist. Valid
values are:
us-ashburn-1, us-phoenix-1, eu-frankfurt-1, and uk-london-1.>
endpoint=https://objectstorage.<region information>.com
2. Get the tenancy namespace:

oci os ns get
3. Create a bucket in the compartment created for the transfer job

oci os bucket create --namespace <object_storage_namespace> --name <bucket_name> --compartment-id <compartment_id>
4. Verify the data transfer upload user credentials:

dts job verify-upload-user-credentials --bucket <bucket_name>


5. Create the transfer job:

dts job create --bucket <bucket_name> --compartment-id <compartment_id> --display-name <display_name>

The job OCID is displayed in the Data Transfer Utility return after you create the job. Send this job OCID to the
person who will copy data to the disk.
6. (Optional) Add notifications:

oci dts job setup-notifications --job-id <job_id>


7. Create a virtual representation of the physical shipping package for the disk called a transfer package:

dts package create --job-id <job_id>


8. Get the package label:

dts job show --job-id <job_id>

The package label is included in the Data Transfer Utility return. Send it to the person who will copy data to the
disk.


9. Get the shipping address for the disk:

dts package show --job-id <job_id> --package-label <package_label>

Shipping information is included in the Data Transfer Utility return. Send it to the person who will create the
shipping labels.
10. Create a FedEx, UPS, or DHL shipping label for the disk using the address from above to ship the disk to Oracle.
Send the carrier-provided tracking number to the person who will copy data to the disk.
11. Create a return label for the disk and send it electronically or in person to the person who will ship the disk. Send
the tracking number for the return label to the person who will copy data to the disk.

Help Sheet - Disk Import Procedures

Before Starting
Before starting, ensure the following:
• You have the following information available:
• The disk transfer job OCID
• The package label
• The shipping vendor and tracking ID
• The return shipping vendor and tracking ID
• You have root access to the Linux machine where the data will be copied
• The configuration files and Data Transfer Utility are already installed on the Linux machine where the data will be
copied.

Attach and Create the Transfer Disk


To attach and create the transfer disk:
1. Physically attach the import disk to the Linux host where data will be copied.
As part of this process, do the following:
a. Run lsblk and verify that you can see the device. Take note of the device path, as it will be needed in future steps.
b. Run hdparm -I <device> and verify the disk provides a valid response, particularly a readable serial
number.
The disk should not have any partitions or filesystems. If it does, run:

wipefs -a /dev/path
2. Use the Data Transfer Utility to create the transfer disk. It also generates and displays an encryption passphrase,
creates a unique mount point, and mounts the disk:

dts disk create --job-id job_id --block-device block_device

The encryption passphrase will be displayed to output. Store it in a secure place as it will not be displayed or
accessible again.
Record the disk label from the output. You will need it later in the procedure.
Verify there is a new mount point called /mnt/orcdts_disk_label


3. Copy files to the data transfer disk using the mount point from the previous step. We recommend tar, but you can
use other copy methods such as cp or rsync.
Here are two examples:

tar -cvzf /mnt/disk_label/filesystem.tgz filesystem/

tar -cvzf /mnt/disk_label/filesystem.tgz filesystem/ |xargs -I '{}' sh -
c "test -f '{}' && md5sum '{}'"|tee tarzip_md5

You can only copy regular files to disks. Special files (links, sockets, pipes, and so forth) cannot be copied
directly. To transfer special files, create a tar archive of the files and copy the tar archive to the disk.
Do not disconnect the disk until you copy all files from the Data Host and generate the manifest file. If you
accidentally disconnect the disk before copying all files, you must unlock the disk using the encryption
passphrase.
4. Generate a manifest file after all data has been copied to the disk:

dts disk manifest --job-id job_id --disk-label disk_label

The manifest file will be on the transfer disk.


5. Lock the transfer disk:

dts disk lock --job-id job_id --disk-label disk_label --block-device block_device
6. Attach the transfer disk to the transfer package:

dts disk attach --job-id job_id --package-label package_label --disk-label disk_label
7. Update the transfer package with the tracking information:

dts package ship --job-id job_id --package-label package_label --package-vendor package_vendor_name --tracking-number tracking_number --return-tracking-number return_tracking_number
8. Physically disconnect the disk from the Linux host.
9. Have the disk packaged, insert the printed return label, and attach the shipping label to the outside of the package.
10. Pass the disk to the vendor to ship to Oracle.
11. Monitor the status of the transfer package:

dts package show --job-id job_id --package-label package_label


12. Review the upload status after Oracle receives the disk and uploads your files.
13. Close the transfer job after the job is complete and the import disk is returned to you:

dts job close --job-id job_id

Help Sheet - Appliance Import Job Checklist


Use this checklist to prepare to use the Data Transfer Appliance (import appliance) in an import job. Check
each item in order to ensure you are fully prepared for the data transfer.
__ Administrative user to the tenancy who can create users, groups, compartments, add policies, and request import
appliance entitlement.
__ Access to the main buyer or administrator who is VP-level or higher who can sign the terms and conditions
document.


__ Linux machine running Oracle Linux 6.10/7.3-7.5/8.0, Ubuntu 16.04/18.04, or CentOS 6.9/6.10/7.0.
__ Root access for the prepare and copy on the Linux machine.
__ Someone who has physical access to where the import appliance will be installed.
__ Meet all appliance specifications and physical environment requirements.
__ Terminal emulation host that can connect to the import appliance using USB or DB-9 serial cable.
__ Terminal emulation host with one of the following installed: PuTTY for Windows, ZOC for OS X, PuTTY or
Minicom for Linux.
__ Network connection to the import appliance consisting of either 10GBase-T (standard RJ-45) or SFP+ with a
transceiver compatible with Intel X520 NICs.
__ IP address for the import appliance __________ .
__ Subnet mask length for the import appliance subnet __________ .
__ Default gateway for the import appliance network __________ .
__ NFS communication between the import appliance subnet and servers from where data will be copied
__ HTTP Proxy information if your corporation uses an internet proxy __________ .
__ Open firewall for Linux machine for preparation and copying to OCI Data Transfer Service on the IP address
ranges: 140.91.0.0/16.
__ Open firewall for Linux machine for preparation and copying to OCI Object Storage IP address ranges:
134.70.0.0/17.
__ Installation of OCI Command Line Interface.
__ Generate public/private keys for users who will copy data on the Linux machine (run the oci setup keys
command).

Help Sheet - Prepare for Appliance Import Jobs


Use this help sheet to prepare and use your Data Transfer Appliance.

Preparing
1. Install the OCI Command Line Interface on the Linux machine where the data will be copied from and the Data
Transfer Appliance will be mounted. You should have root access to the Linux machine.
Installation instructions are located at: https://docs.cloud.oracle.com/iaas/Content/API/SDKDocs/cliinstall.htm
On the machine where data will be copied from, generate public/private keys for the user(s) who will do the data
copy by running the following command:

oci setup keys

See https://docs.cloud.oracle.com/iaas/Content/API/Concepts/apisigningkey.htm for more information on keys.


2. Log in to OCI with an Administrative user for the tenancy.
3. Create the user policies. Ensure that the policies include the following:

Allow group group_name to {DTA_ENTITLEMENT_CREATE} in tenancy

See https://docs.cloud.oracle.com/iaas/Content/Identity/Tasks/managingpolicies.htm for more information on policies.


4. Create a compartment where the transfer job and landing bucket will reside. This compartment must be in a region
that supports usage of the Data Transfer Appliance.
See https://docs.cloud.oracle.com/iaas/Content/Identity/Tasks/managingcompartments.htm for more information
on compartments.
5. Create the necessary user accounts for those individuals who will copy data to the appliance. Include the public
key that was previously generated.
6. Create a group for the user who will copy data to the appliance. Include the following policies in the group:

Allow group group_name to manage data-transfer-jobs in compartment compartment_name
Allow group group_name to manage buckets in compartment compartment_name
Allow group group_name to manage objects in compartment compartment_name

See https://docs.cloud.oracle.com/iaas/Content/Identity/Tasks/managinggroups.htm for more information on groups.
If you want to include notifications for the group, include these additional policies:

Allow group group_name to manage ons-topics in tenancy
Allow group group_name to manage ons-subscriptions in tenancy
Allow group group_name to manage cloudevents-rules in tenancy
Allow group group_name to inspect compartments in tenancy

See https://docs.cloud.oracle.com/iaas/Content/Notification/Concepts/notificationoverview.htm for more information on notifications.
See https://docs.cloud.oracle.com/iaas/Content/Events/Concepts/eventsoverview.htm for more information on events.
7. Create an upload user for Oracle personnel to upload data into the bucket.
See https://docs.cloud.oracle.com/iaas/Content/Identity/Tasks/managingusers.htm for more information on users.
8. Create a group for the upload user, and include the public key that was previously generated.
See https://docs.cloud.oracle.com/iaas/Content/Identity/Tasks/managinggroups.htm for more information on
groups.
9. Add the following policies for the upload user group:

Allow group group_name to manage buckets in compartment compartment_name
where all { request.permission='BUCKET_READ',
target.bucket.name='bucket_name' }
Allow group group_name to manage objects in compartment compartment_name
where all { target.bucket.name='bucket_name',
any { request.permission='OBJECT_CREATE',
request.permission='OBJECT_OVERWRITE',
request.permission='OBJECT_INSPECT' }}

The permissions for upload users allow Oracle personnel to upload standard and multi-part objects on your behalf
and inspect bucket and object metadata. The permissions do not allow Oracle personnel to inspect the actual data.
See https://docs.cloud.oracle.com/iaas/Content/Identity/Tasks/managingpolicies.htm for more information on
policies
10. Open firewall to OCI Data Transfer Service on the IP address ranges:
140.91.0.0/16
11. Open firewall to OCI Object Storage IP address ranges:
134.70.0.0/17


Creating the Transfer Job


Run these command line items on the host from which you plan to copy data and/or on the host that you will use to
manage the data transfer job:
1. As root, create the configuration files:

sudo bash
mkdir /root/.oci
cd /root/.oci
vi config
[DEFAULT]
user=<The OCID for the data transfer administrator>
fingerprint=<The fingerprint of the above user's public key>
key_file=<The _absolute_ path to the above user's private key file on the
host machine>
tenancy=<The OCID for the tenancy that owns the data transfer job and
bucket>
region=<The region where the transfer job and bucket should exist. Valid
values are:
us-ashburn-1, us-phoenix-1, eu-frankfurt-1, and uk-london-1.>

vi config_upload_user
[DEFAULT]
user=<The OCID for the data transfer upload user>
fingerprint=<The fingerprint of the above user's public key>
key_file=<The _absolute_ path to the above user's private key file on the
host machine>
tenancy=<The OCID for the tenancy that owns the data transfer job and
bucket>
region=<The region where the transfer job and bucket should exist. Valid
values are:
us-ashburn-1, us-phoenix-1, eu-frankfurt-1, and uk-london-1.>
endpoint=https://objectstorage.<region information>.com
2. Get the tenancy namespace:

oci os ns get
3. Create a bucket in the compartment created for the transfer job

oci os bucket create --namespace object_storage_namespace --name bucket_name --compartment-id compartment_id
4. Verify the data transfer upload user credentials:

dts job verify-upload-user-credentials --bucket bucket_name


5. Create the transfer job:

oci dts job create --bucket bucket_name --compartment-id compartment_id --display-name display_name

The job OCID is displayed in the CLI return after you create the job. Send this job OCID to the person who will
copy data to the appliance.
6. (Optional) Add notifications:

oci dts job setup-notifications --job-id job_id


7. Request the appliance:

oci dts appliance request --job-id job_id --addressee addressee --care-of care_of --address1 address_line1 --city-or-locality city_or_locality --state-province-region state_province_region --country country --zip-postal-code zip_postal_code --phone-number phone_number --email email

Note the appliance label in the output (the label will begin with "XA"). You will need this label value for other
commands involving the appliance.
To include job notifications when requesting an import appliance, include the --setup-notifications
option:

oci dts appliance request --job-id job_id --addressee addressee --address1 address_line1 --city-or-locality city_or_locality --state-province-region state_province_region --country country --zip-postal-code zip_postal_code ... --setup-notifications

If you have already made your appliance request without including notifications, but subsequently want to include
them, run the following:

oci dts appliance setup-notifications --appliance-label appliance_label

Help Sheet - Appliance Import Procedures


Follow the tasks in this help sheet after you log in to the host where you will be mounting the Data Transfer Appliance
(import appliance) and copying data. You need to run all Command Line Interface (CLI) tasks as sudo.
• Have the IP address for the import appliance.
• Have the access token for the import appliance.
• Have the transfer job OCID.
• Have the appliance label information.
• Ensure there is no firewall and communication is open between the import appliance and the host where it will be
mounted.
• Open the firewall to the Data Transfer service on the IP address ranges: 140.91.0.0/16.
• Open the firewall to the Object Storage IP address ranges: 134.70.0.0/17.
• Set the environment variable If HTTP proxy environment is needed to allow access to the internet. The proxy
environment allows Oracle Cloud Infrastructure CLI to communicate with the Data Transfer Appliance
Management Service and the import appliance over a local network connection.

export HTTPS_PROXY=http://www-proxy.myorg.com:80
• Become root using sudo and install the NFS utilities if they are not already installed (first command for RHEL/OEL,
second command for Debian/Ubuntu):

sudo yum install nfs-utils


sudo apt-get install nfs-common
• Continue as root.
• Initialize the appliance. Have the import appliance access token ready.

oci dts physical-appliance initialize-authentication --job-id job_id --appliance-cert-fingerprint appliance_cert_fingerprint --appliance-ip appliance_ip --appliance-label appliance_label

When prompted, supply the access token, and answer y to permit overwriting of data.
• Configure the import appliance encryption:

oci dts physical-appliance configure-encryption --job-id job_id --appliance-label appliance_label

• Unlock the import appliance:

oci dts physical-appliance unlock --job-id job_id --appliance-label appliance_label
• Create an NFS dataset:

oci dts nfs-dataset create --name dataset_name


• Export the dataset:

oci dts nfs-dataset set-export --name dataset_name --rw true --world true
• Activate the dataset:

oci dts nfs-dataset activate --name dataset_name


• Check the dataset is exported:

showmount -e appliance_ip
• Mount the dataset:

mkdir -p /mnt/mountpoint_name
mount appliance_ip:/dataset_name /mnt/mountpoint_name
• Copy files to the DTA. The tar method is recommended but other types of copies such as cp or rsync can also
be used.
Here are two examples:

tar -cvzf /mnt/nfs-dts-1/filesystem.tgz filesystem/

tar cvzf /mnt/nfs-dts-1/filesystem.tgz filesystem/ | xargs -I '{}' sh -c "test -f '{}' && md5sum '{}'" | tee tarzip_md5
• Deactivate the dataset:

oci dts nfs-dataset deactivate --name dataset_name


• Seal the dataset. Note that this can be a long-running process.

oci dts nfs-dataset seal --name dataset_name [--wait]


• Monitor the sealing process:

oci dts nfs-dataset seal-status --name dataset_name


• Download the dataset seal manifest:

oci dts nfs-dataset get-seal-manifest --name dataset_name --output-file output_file_path
• Finalize the import appliance:

oci dts physical-appliance finalize --job-id job_id --appliance-label appliance_label
• Shut down the import appliance by selecting option #8 on the terminal emulation host.
• Have the import appliance packed and shipped to Oracle.

• Monitor the status of the data upload from the DTA to your object storage bucket in OCI:

oci dts appliance show --job-id job_id --appliance-label appliance_label


• Once the data upload is finished, check the object storage bucket from the Console and get the upload file location.
• Download the upload file and review it to understand what was transferred:

oci os object get --namespace object_storage_namespace --bucket-name bucket_name --name object_name --file file_location
• Close the import job:

oci dts job close --job-id job_id


• Delete the import appliance associated with the import job:

oci dts appliance delete --job-id job_id --appliance-label appliance_label


• Delete the transfer job:

oci dts job delete --job-id job_id


Chapter 14
Database
This chapter explains how to launch a DB System and manage databases on DB Systems.

Overview of the Database Service


The Database service offers autonomous and co-managed Oracle Database cloud solutions. Autonomous databases
are preconfigured, fully-managed environments that are suitable for either transaction processing or for data
warehouse workloads. Co-managed solutions are bare metal, virtual machine, and Exadata DB systems that you can
customize with the resources and settings that meet your needs.
You can quickly provision an autonomous database or co-managed DB system. You have full access to the features
and operations available with the database, but Oracle owns and manages the infrastructure.
You can also extend co-managed database services into your data center by using Exadata Cloud@Customer,
which applies the combined power of Exadata and Oracle Cloud Infrastructure while enabling you to meet your
organization's data-residency requirements.
For details about each offering, start with the following overview topics:
Autonomous Databases
The Database service offers Oracle's Autonomous Database with transaction processing and data warehouse workload
types.
Co-managed Systems
• Bare Metal and Virtual Machine DB Systems on page 1353
• Exadata Cloud Service on page 1222
• Exadata Cloud@Customer
Note:

For information about MySQL Database, see MySQL Database.

License Types and Bring Your Own License (BYOL) Availability

Database Service License Options


Oracle Cloud Infrastructure supports a licensing model with two license types. With License included, the cost of the
cloud service includes a license for the Database service. With Bring Your Own License (BYOL), Oracle Database
customers can use existing licenses with Oracle Cloud Infrastructure. Note that Oracle Database customers remain
responsible for complying with license restrictions applicable to their BYOL licenses, as defined in their program
order for those licenses.
You do not need separate on-premises licenses and cloud licenses. BYOL databases support all advanced Database
service manageability functionality, including backing up and restoring a DB system, patching, and Oracle Data
Guard.

You can choose BYOL when you launch a cloud-hosted Oracle Cloud Infrastructure database or DB system.
Choosing BYOL impacts how the usage data for the instance is metered and subsequent billing. You can also switch
license types after provisioning.
Note that on some provisioning dialogs in the Console, the BYOL option is labeled My Organization Already Owns
Oracle Database Software Licenses.
For additional information about license pricing and features, see Oracle Cloud Database Services. For guidelines on
using Oracle Database licenses, see Database Licensing.

Switching Database Service License Types


You can switch license type after provisioning your resource. For information about switching the license type, see
the following:
• To change the license type of an Autonomous Database on page 1173
• To change the license type of an Exadata Cloud Service cloud VM cluster or DB system on page 1262
• To change the license type of a bare metal or virtual machine DB system on page 1388
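
If you manage your resources with the CLI instead of the Console, the license type can typically be switched with an update call. The following is a minimal sketch for an Autonomous Database; the OCID is a placeholder, and the parameter name should be confirmed against the CLI help for the update command of the resource you are changing:

oci db autonomous-database update --autonomous-database-id <database_ocid> --license-model BRING_YOUR_OWN_LICENSE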

Always Free Database Resources


The Database service is one of the Oracle Cloud Infrastructure services that provides you with Always Free resources
as a part of Oracle's Free Tier. For an introduction to the Free Tier, see Oracle Cloud Infrastructure Free Tier on page
142. For details about the Always Free Autonomous Database, see Always Free Availability on page 1149 in the
Autonomous Database overview topic. To provision an Always Free Autonomous Database, see To create an Always
Free Autonomous Database on page 1159.

Moving Database Resources to a Different Compartment


You can move DB systems, Autonomous Database resources, and Exadata Cloud@Customer resources from one
compartment to another. When you move a Database resource to a new compartment, its dependent resources move
with it. After you move the resource to the new compartment, inherent policies apply immediately and affect access
to that resource and its dependent resources through the Console.
Important:

• To move resources between compartments, resource users must have


sufficient access permissions on the compartment that the resource
is being moved to, as well as the current compartment. For more
information about permissions for Database resources, see Details for the
Database Service on page 2251.
• If your database resource is in a security zone compartment, the
destination compartment must also be in a security zone. See the Security
Zone Policies topic for a full list of policies that affect Database service
resources.

Dependent Resource Details


Details about dependent resources are as follows:
• Bare metal, virtual machine, and Exadata DB systems: Dependent resources that move with these DB systems
include Database Homes and databases, as well as the metadata for automatic backups. To verify the compartment
of a dependent resource, check the compartment of the DB system.
• Autonomous Database: Autonomous Database dependent resources are limited to its automatic backups.
Autonomous Exadata Infrastructure instances and Autonomous Container Databases have no dependent resources
that move with them. Associated (non-dependent) resources remain in their current compartments.
• Exadata Cloud@Customer: Resources that can be moved are Exadata Infrastructure, VM clusters, and backup
destinations. VM cluster networks are dependent resources of Exadata Infrastructure instances, so they move with them. VM clusters have the following dependent resources: Database Homes, and databases and their automatic
backups. Backup destinations have no dependent resources.
For more information about moving resources to other compartments, see To move a resource to a different
compartment on page 2463.
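
As an illustration, moving an Autonomous Database with the CLI might look like the following sketch; the OCIDs are placeholders, and the subcommand should be verified with the CLI help before use:

oci db autonomous-database change-compartment --autonomous-database-id <database_ocid> --compartment-id <destination_compartment_ocid>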

Monitoring Resources
You can monitor the health, capacity, and performance of your Oracle Cloud Infrastructure resources by using
metrics, alarms, and notifications. For more information, see Monitoring Overview on page 2686 and Notifications
Overview on page 3378.
• For information about available Database service metrics and how to view them, see Database Metrics on page
1579.
• For information about measuring the performance of Oracle Cloud Infrastructure resources with Performance
Hub, see Using Performance Hub to Analyze Database Performance.

Creating Automation with Events


You can create automation based on state changes for your Oracle Cloud Infrastructure resources by using event
types, rules, and actions. For more information, see Overview of Events on page 1788.
See Database on page 1847 for details about Database resources that emit events.

Resource Identifiers
Most types of Oracle Cloud Infrastructure resources have a unique, Oracle-assigned identifier called an Oracle
Cloud ID (OCID). For information about the OCID format and other ways to identify your resources, see Resource
Identifiers.
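
For orientation, an OCID is a dot-separated string with the following general structure (the bracketed parts vary by resource type and may be empty):

ocid1.<RESOURCE TYPE>.<REALM>.[REGION][.FUTURE USE].<UNIQUE ID>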

Ways to Access Oracle Cloud Infrastructure


You can access Oracle Cloud Infrastructure using the Console (a browser-based interface) or the REST API.
Instructions for the Console and API are included in topics throughout this guide. For a list of available SDKs, see
Software Development Kits and Command Line Interface on page 4262.
To access the Console, you must use a supported browser.
Oracle Cloud Infrastructure supports the following browsers and versions:
• Google Chrome 69 or later
• Safari 12.1 or later
• Firefox 62 or later
For more information on tenancies and compartments, see Key Concepts and Terminology in the Oracle Cloud
Infrastructure Getting Started Guide. For general information about using the API, see REST APIs on page 4409.
For information on deprecated Database Service APIs, see Deprecated Database Service APIs on page 1674
If you're new to policies, see Getting Started with Policies on page 2143 and Common Policies on page 2150. If
you want to write policies that provide stricter access to database resources, see Details for the Database Service on
page 2251.

Authentication and Authorization


Each service in Oracle Cloud Infrastructure integrates with IAM for authentication and authorization, for all
interfaces (the Console, SDK or CLI, and REST API).
An administrator in your organization needs to set up groups, compartments, and policies that control which users
can access which services, which resources, and the type of access. For example, the policies control who can create
new users, create and manage the cloud network, launch instances, create buckets, download objects, etc. For more
information, see Getting Started with Policies on page 2143. For specific details about writing policies for each of
the different services, see Policy Reference on page 2176.

If you’re a regular user (not an administrator) who needs to use the Oracle Cloud Infrastructure resources that
your company owns, contact your administrator to set up a user ID for you. The administrator can confirm which
compartment or compartments you should be using.
For common policies used to authorize Oracle Cloud Infrastructure Database users, see Common Policies.
For in-depth information on granting users permissions for the Database service, see Details for the Database Service
in the IAM policy reference.
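
For a sense of what such policies look like, the following statements are a minimal sketch. The group and compartment names are hypothetical, and the aggregate resource-type names should be checked against Details for the Database Service on page 2251:

Allow group DBAdmins to manage database-family in compartment DatabaseProjects
Allow group DBAdmins to manage autonomous-database-family in compartment DatabaseProjects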

Security Zone Integration


A security zone is associated with a compartment and a set of policies called a security zone recipe. When you create
and update resources in a security zone, Oracle Cloud Infrastructure validates these operations against the list of
policies defined in the security zone recipe. If any security zone policy is violated, then the operation is denied.
The Database service allows you to create and update your databases and associated resources in security zones. For
a general overview of the security zones, see the Security Zone documentation. For an overview of the Database
service's integration with the security zone feature, see Security Zone Integration on page 1576.

Limits on the Database Service


See Service Limits on page 217 for a list of applicable limits and instructions for requesting a limit increase. To set
compartment-specific limits on a resource or resource family, administrators can use compartment quotas.
Note:

Service limits and compartment quotas do not apply to Exadata


Cloud@Customer.
Many Database API operations are subject to throttling.

Work Requests Integration


The Database service is integrated with the Oracle Cloud Infrastructure Work Requests API. Work requests allow
you to monitor long-running operations such as the provisioning of DB systems. A work request is an activity log that
enables you to track each step in the operation's progress. Each work request has an OCID that allows you to interact
with it programmatically and use it for automation.
For general information on using work requests in Oracle Cloud Infrastructure, see Work Requests on page 262 and
the Work Requests API. See Database Service Work Requests Reference on page 1143 for a listing of Database
service operations that create work requests.
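
For example, after starting a long-running operation you can follow its work request from the CLI. The commands below are a sketch; the OCIDs are placeholders, and the parameter names can be confirmed with the CLI help for the work-requests commands:

oci work-requests work-request list --compartment-id <compartment_ocid>
oci work-requests work-request get --work-request-id <work_request_ocid>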

Database Service Work Requests Reference


This topic lists the Database service operations that generate work requests. For general information on using work
requests in Oracle Cloud Infrastructure, see Work Requests on page 262 and the Work Requests API.

Autonomous Database on Shared Exadata Infrastructure


Lifecycle operations:
• Create
• Delete
• Start
• Restart
• Stop
• Restore
Database management operations:
• Scale
• Rename
• Reset password
• Update license type
• Update workload type
• Update open mode / permission level (read-only/read-write, and admin only/all users)
• Upgrade version
• Update tags
• Change compartment
• Rotate instance wallet
• Rotate regional wallet
Network access operations:
• Update network ACL (public endpoint only)
• Update network security group (NSG) (private endpoint only)
• Update private endpoint
Backup operations:
• Create backup
• Delete backup
Refreshable clone operations:
• Manual refresh
• Disconnect refreshable clone
• Update refreshable lag time
Autonomous Data Guard operations:
• Enable Data Guard
• Disable Data Guard
• Failover
• Switchover
Associated services operations:
• Register with Data Safe
• Deregister with Data Safe
• Enable Operations Insights
• Disable Operations Insights

Autonomous Database on Dedicated Exadata Infrastructure


Database lifecycle operations:
• Create
• Update
• Delete
• Start
• Restart
• Stop
• Restore
Backup operations:
• Create backup
• Delete backup
Container database operations:
• Create
• Delete
• Update
• Restart
• Rotate container database encryption key
• Rotate database encryption key
Infrastructure resource operations:
• Create
• Terminate
• Update
Autonomous Data Guard operations:
• Setup Data Guard
• Failover Autonomous Container Database
• Switchover Autonomous Container Database
• Reinstate Autonomous Container Database
Associated services operations:
• Register with Data Safe
• Deregister with Data Safe

Exadata Cloud Service, Virtual Machine DB Systems, and Bare Metal DB Systems
DB systems (Exadata, bare metal, and virtual machine)
Lifecycle operations:
• Create
• Create from backup
• Update
• Terminate
DB system management operations:
• Change compartment
• Scale storage
• Scale CPU
• Add SSH key
• Update license type
• Configure IORM
• Update shape
• Apply FIPS security
Maintenance operations:
• Precheck system for patching
• Patch system
• Upgrade database
• Install DB system component
• Switch to the new Exadata API and user experience
Note:

Virtual machine DB systems have a single database that is created and


terminated as part of the creation or termination of the parent DB system

Cloud Exadata infrastructure operations (new resource model)


• Create
• Update
• Scale (flexible shape systems only)
• Delete
• Change compartment
Exadata Cloud VM cluster operations
Lifecycle operations:
• Create
• Delete
Management operations:
• Change compartment
• Scale CPU
• Add SSH key
• Update license type
• Scale compute (flexible shape systems only)
• Scale storage (flexible shape systems only)
Maintenance operations:
• Precheck for grid infrastructure (GI) patch
• Apply GI patch
• Precheck for grid infrastructure (GI) upgrade
• Upgrade grid infrastructure (GI)
• Upgrade database
• Precheck for OS update
• Apply OS update

Database Homes: Exadata, Virtual Machine, and Bare Metal Cloud Service Instances
Lifecycle operations:
• Create
• Delete
Maintenance operations:
• Patch
• One-off Patch

Database Nodes: Exadata, Virtual Machine, and Bare Metal Cloud Service Instances
Start
Stop
Reboot

Virtual Machines: Exadata Cloud Service (New Infrastructure Resource Model only)
Start
Stop
Reboot
Precheck customer-managed (Vault service) database key

Migrate customer-managed (Vault service) database key


Rotate customer-managed (Vault service) database key

Databases: Exadata, Virtual Machine, and Bare Metal Cloud Service Instances
Lifecycle operations:
• Create
• Update
• Restore
• Delete
Maintenance operations:
• Upgrade database
• Rollback database upgrade
Note:

Virtual machine DB systems have a single database that is created and


terminated as part of the creation or termination of the parent DB system

Database Backups: Exadata, Virtual Machine, and Bare Metal Cloud Service Instances
Create
Delete

Data Guard: Exadata, Virtual Machine, and Bare Metal Cloud Service Instances
Create Data Guard
Delete Data Guard
Switch over Data Guard
Fail over Data Guard
Reinstate Data Guard

Exadata Cloud Service Instances (New Infrastructure Resource Model)


Cloud Exadata infrastructure operations
Lifecycle operations:
• Create
• Update
• Delete
Management operations:
• Scale (flexible shape systems only)
• Change compartment
Cloud VM cluster operations
Lifecycle operations:
• Create
• Delete
Management operations:
• Change compartment
• Scale CPU
• Add SSH key
• Update license type
• Scale compute (flexible shape systems only)
• Scale storage (flexible shape systems only)
Maintenance operations:
• Precheck for grid infrastructure (GI) patch
• Apply grid infrastructure (GI) Patch
• Precheck for grid infrastructure (GI) upgrade
• Upgrade grid infrastructure (GI)
• Precheck for OS update
• Apply OS update

Exadata Cloud@Customer Systems


Exadata Cloud@Customer infrastructure operations:
• Create
• Update
• Activate
• Delete
Exadata Cloud@Customer VM cluster operations:
• Create
• Update
• Delete
• Change compartment
Exadata Cloud@Customer Autonomous VM cluster operations:
• Create
• Update
• Delete
• Change compartment
Exadata Cloud@Customer VM cluster network operations:
• Create
• Update
• Validate
• Delete
Additional maintenance and management operations:
• Update VM cluster licence type
• Patch VM cluster
• Patch VM cluster database
• Update VM cluster OCPU count
• Update SSH key
• Update VM cluster memory
• Update VM cluster Exadata storage
• Update VM cluster local storage
• Update Exadata database backup configuration

Databases Software Images


• Creating a Database Software Image
• Deleting a Database Software Image
• Moving a Database Software Image to a new compartment

Getting Oracle Support Help for Your Database Resources


You can open a My Oracle Support ticket for individual Database resources while viewing them in the Oracle Cloud
Infrastructure Console. For more information, see Getting Help and Contacting Support on page 126.

Overview of Autonomous Databases


Oracle Cloud Infrastructure's Autonomous Database is a fully-managed, preconfigured database environment
with three workload types available: Autonomous Transaction Processing, Autonomous Data Warehouse, and
Autonomous JSON Database. You do not need to configure or manage any hardware or install any software. After
provisioning, you can scale the number of CPU cores or the storage capacity of the database at any time without
impacting availability or performance. Autonomous Database handles creating the database, as well as the following
maintenance tasks:
• Backing up the database
• Patching the database
• Upgrading the database
• Tuning the database

Always Free Availability


Autonomous Database can be used without charge as part of Oracle Cloud Infrastructure's suite of Always Free
resources. Users of both paid and free Oracle Cloud Infrastructure accounts have access to two Always Free instances
of Autonomous Database. Always Free Autonomous Databases have a fixed 8 GB of memory, 20 GB of storage,
1 OCPU, and can be configured for either Autonomous Transaction Processing or Autonomous Data Warehouse
workloads.
Always Free databases have only a single available version. You can see the version that is being used for your
database in the details screen. After a newer Oracle Database version is available in Oracle Cloud Infrastructure, your
database will be automatically upgraded during one of your database's upcoming maintenance windows.
To learn about Free Tier Databases, see Oracle Cloud Infrastructure Free Tier on page 142. To learn about the details
of the Always Free Autonomous Database, see Overview of the Always Free Autonomous Database on page 1221.
To provision an Always Free Autonomous Database, see Creating an Autonomous Database on Shared Exadata
Infrastructure on page 1157.
For information on regional availability of Always Free Autonomous Database, see the "Always Free Cloud
Services" section of Data Regions for Platform and Infrastructure Services.

Available Workload Types


Autonomous Database offers the following workload types:
• Autonomous Data Warehouse: Built for decision support and data warehouse workloads. Offers fast queries
over large volumes of data.
For a complete product overview of Autonomous Data Warehouse, see Autonomous Data Warehouse. For
Autonomous Data Warehouse tutorials, see Quick Start tutorials.
• Autonomous JSON Database: Built for JSON-centric application development. Offers developer-friendly
document APIs and native JSON storage.
Autonomous JSON Database is Oracle Autonomous Transaction Processing, but specialized for developing
NoSQL-style applications that use JavaScript Object Notation (JSON) documents. You can upgrade an Autonomous JSON Database service to an Autonomous Transaction Processing service if you need the additional
functionality of Autonomous Transaction Processing. Currently available on shared Exadata infrastructure.
For a complete product overview of Autonomous JSON Database, see Using Oracle Autonomous JSON Database.
Also available in the Oracle Help Center are Autonomous JSON Database tutorials and the JSON Developer's
Guide.
• Autonomous Transaction Processing: Built for transactional workloads. Offers high concurrency for short-
running queries and transactions.
For a complete product overview of Autonomous Transaction Processing, see Autonomous Transaction
Processing. For Autonomous Transaction Processing tutorials, see Quick Start tutorials.
• Oracle APEX Application Development (APEX Service): Optimized for application developers who want a transaction processing database for application development using Oracle APEX, which enables the creation and deployment of low-code applications, including databases. See Oracle APEX Application Development
Documentation for more information about the APEX service and Oracle APEX Application Development
Specific Limitations for a list of use restrictions.
Note:

You can use the APEX service with each of the other workload types.

Infrastructure Options
Autonomous Databases have the following Exadata infrastructure options:
• Dedicated Exadata Infrastructure: With this option, you have exclusive use of the Exadata hardware.
Dedicated Exadata infrastructure offers multitenant database architecture, allowing you to create and manage
multiple Autonomous Databases within a single database system. Both workload types (transaction processing
and warehouse) can be provisioned on dedicated Exadata infrastructure. You have the following hardware
configuration options:
• System Models: X7 and X8
• Configurations: quarter rack, half rack, and full rack
See Overview of Autonomous Database on Dedicated Exadata Infrastructure on page 1194 for more information
about dedicated Exadata infrastructure architecture, features, and hardware specifications.
• Shared Exadata Infrastructure: With this option, you provision and manage only the Autonomous Database,
while Oracle deploys and manages the Exadata infrastructure. Both workload types (transaction processing and
warehouse) can be provisioned with shared Exadata infrastructure.

Oracle Data Guard for Autonomous Databases with Shared Exadata Infrastructure
Autonomous Database uses a feature called Autonomous Data Guard to enable a standby (peer) database to provide
data protection and disaster recovery for Autonomous Databases using shared Exadata infrastructure. For more
information, see Using a Standby Database with Autonomous Database.

Per-Second Billing for Autonomous Database Resources

Shared Exadata Infrastructure


Autonomous Database on shared Exadata infrastructure uses per-second billing. This means that OCPU and storage
usage is billed by the second. OCPU resources have a minimum usage period of 1 minute.
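For illustration, a database allocated 2 OCPUs that runs for 10 minutes accrues 2 OCPUs × 600 seconds of OCPU usage, while the same database running for only 30 seconds is billed for 2 OCPUs × 60 seconds because of the 1-minute minimum usage period.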

Dedicated Exadata Infrastructure


For each Autonomous Exadata Infrastructure instance you provision, you are billed for the infrastructure for a
minimum of 48 hours, and then by the second after that. Each OCPU you add to the system is billed by the second,
with a minimum usage period of 1 minute.

Private Endpoint for Autonomous Databases with Shared Exadata Infrastructure


When you provision an Autonomous Database with Shared Exadata infrastructure, you can configure the network
access so that the database uses a private endpoint within one of your tenancy's virtual cloud networks (VCNs). When
you use a private endpoint, your database is only accessible via the IP address of the associated private endpoint. For
more information, see Autonomous Database with Private Endpoint on page 1163.

CPU Scaling
You can manually scale the database's base number of CPU cores up or down at any time. Note the following:
• CPU scaling does not require any downtime.
• CPU utilization information is available for all Autonomous Databases on the database details page in the Metrics
section. CPU utilization is reported as a percentage of available CPUs, aggregated across all consumer groups.
For databases using Shared Exadata infrastructure, you can also view hourly snapshots of the database's
CPU usage (actual number of cores allocated) over the most recent 8 days. This information is available in the
Service Console, in the Overview page graph "Number of OCPUs Allocated". For more information, see To view
OCPU allocation hourly snapshot data for an Autonomous Database on page 1172.
Autonomous Database's auto scaling feature allows your database to use up to three times the current base number
of CPU cores at any time. As demand increases, auto scaling automatically increases the number of cores in use.
Likewise, as demand drops, auto scaling automatically decreases the number of cores in use. Scaling takes place
without any lag time, and you are only billed for your actual average CPU core usage per hour. Note the following
points regarding the auto scaling feature:
• Auto scaling is enabled by default and can be enabled or disabled at any time.
• The auto scaling status for a database (enabled or disabled) is displayed on the database details page.
• The base number of OCPU cores allocated to a database is guaranteed. For databases on dedicated Exadata
infrastructure, the maximum number of cores available to a database depends on the total number of cores
available in the Exadata infrastructure instance, and is further limited by the number of free cores that aren't being
used by other auto scaling databases to meet high-load demands. Available OCPU cores are enabled on a "first
come, first served basis" for autoscaling databases sharing an Autonomous Exadata Infrastructure instance.
The following table illustrates OCPU core availability for a single database on an X8 quarter rack dedicated Exadata
infrastructure instance. As you increase the database's base core count from 1 to 40 cores, the maximum core count
scales until it reaches the hardware limit of 100 OCPUs. The final column, which shows the remaining available
OCPUs that can be allocated to additional databases, assumes that no other databases exist on the quarter rack
instance.
Example: OCPU auto scaling for a single database on an X8 quarter rack as base OCPU is increased

Base OCPU core count    Maximum OCPU core count    OCPU cores remaining
1                       3                          99
8                       24                         92
32                      96                         68
40                      100                        60

The following table illustrates OCPU core availability for four databases on an X8 half rack dedicated Exadata
infrastructure instance. The hardware limit is 200 OCPUs. The base OCPU count for each database is guaranteed to
be available to that database at all times. In the example, the three databases with auto scaling enabled are in contention
for the 60 available cores that are not allocated to any database as base cores. Databases Sales and Development have
auto scaled to consume a combined 140 OCPUs, and database Chicago (with auto scaling disabled) is using its 10 base
OCPUs. That leaves only 50 OCPUs remaining in the half rack hardware instance, and the base OCPU count of database
HR is 50. Therefore, database HR cannot auto scale up until cores are released by the other auto scaling databases.
Example: OCPU auto scaling for four databases on an X8 half rack
DB Name        Auto Scaling Enabled    Base OCPU Count (Guaranteed OCPU)    Maximum OCPU core count    OCPU During Load
Sales          yes                     60                                   180                        100
Development    yes                     20                                   60                         40
Chicago        no                      10                                   10                         10
HR             yes                     50                                   150                        50

Storage Scaling
Autonomous Database allows you to scale the storage capacity of the database at any time without impacting
availability or performance.
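
If you manage the database from the CLI rather than the Console, manual CPU and storage scaling and the auto scaling setting can typically be changed with a single update call. The following is a minimal sketch; the OCID and values are placeholders, and the parameter names should be confirmed with the CLI help for the autonomous-database update command:

oci db autonomous-database update --autonomous-database-id <database_ocid> --cpu-core-count 4 --data-storage-size-in-tbs 2 --is-auto-scaling-enabled true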

Performance Monitoring using Oracle Performance Hub


You can use Performance Hub to monitor and analyze the performance of shared or dedicated Autonomous Databases
in the Oracle Cloud Infrastructure Console. Performance Hub includes the ASH Analytics, SQL Monitoring,
Workload, Blocking Sessions, and ADDM features, described below.
In addition to monitoring databases in the Oracle Cloud Infrastructure cloud, you can also use Performance Hub to
monitor and analyze the performance of external databases running on customer premises. When managing external
databases, you can access Performance Hub directly from the Performance Hub link on the database details page.
Performance Hub includes the following features:
• The ASH (Active Session History) Analytics feature shows ASH analytics charts that you can use to explore
ASH data.
• The SQL Monitoring feature displays monitored SQL statement executions by dimensions including Last Active
Time, CPU Time, and Database Time, and provides information for monitored SQL statements, including Status,
Duration, and SQL ID.
• The Workload performance monitoring feature provides detailed real-time and historical performance data of
CPU Statistics, Wait Time Statistics, Workload Profile, and Sessions.
• The Blocking Sessions feature provides information about blocking and waiting sessions for a selected
Autonomous Database, includes procedures to display detailed information about the sessions, and explains how
to kill sessions as needed.
• The ADDM (Automatic Database Diagnostic Monitor) feature displays findings and recommendations for
performance problems.
Detailed information about these features and how to use them in the Oracle Cloud Infrastructure Console is located
in Using Performance Hub to Analyze Database Performance on page 1599.
For information about using Performance Hub with external databases, see About the Database Management Service.

Operations Insights
Operations Insights is a cloud-native service that enables users to make informed, data-driven decisions about Oracle
Autonomous Database resource and performance management. See To enable or disable Operations Insights on an
Autonomous Database on page 1174 for information about managing Operations Insights.

Oracle Database Preview Version Availability


Oracle Cloud Infrastructure periodically offers Autonomous Database preview versions of Oracle Database for testing
purposes. You can provision an Autonomous Database using preview version software to test applications before the
general availability of the software in Autonomous Database. Oracle will notify Autonomous Database customers
when preview versions are available. Preview version software is available for a limited time. Databases provisioned
with preview version software will display the end date of the preview period at the top of the database details page in the Console. If you are using the Console, you can also see the end date of the preview period in the Create Database
provisioning dialog before the database is created.
Preview version software should not be used for production databases or for databases that need to persist beyond the
limited preview period. Note that preview databases and their associated resources (including backups) are terminated
automatically at the conclusion of the preview period. Oracle will notify customers prior to the conclusion of the
preview period regarding the end date of the preview.
Any existing Autonomous Database (including those provisioned with preview version software) can be cloned using
a preview version of Autonomous Database. However, preview version databases cannot be cloned using the regular
(general-availability) Autonomous Database software.
See Creating an Autonomous Database on Shared Exadata Infrastructure on page 1157 for details on provisioning a
preview version of Autonomous Database.

Oracle Database Versions for Autonomous Database with Shared Exadata Infrastructure


Depending on the region where you provision or clone your database, Autonomous Database supports one or more
Oracle Database versions.
When multiple database versions are available, you choose an Oracle Database version when you provision or clone a
database.
Note:

Always Free Autonomous Databases can be provisioned with either


version 19c or version 21c, depending on the region. Most regions offer
both versions. Note that Always Free Autonomous Databases can only be
provisioned in the home region of your tenancy or account. See Overview of
the Always Free Autonomous Database on page 1221 for more information.

Upgrading a Database
Autonomous Database instances currently use Oracle Database 19c. There is no database software upgrade currently
available.

Regional Availability
Autonomous Database is currently available in all regions of the commercial realm. Autonomous Database is
currently not available in regions within the Government Cloud realm.

Security Considerations

Safeguard Your Data with Data Safe on Autonomous Database


Oracle Data Safe is a fully-integrated, regional Cloud service providing features that help you protect sensitive and
regulated data in your Autonomous Transaction Processing database. See the Data Safe documentation for more
information.

Private Access Using a Service Gateway


Autonomous Database is one of the Oracle Cloud Infrastructure services that can be privately accessed through a
service gateway within a VCN. This means you do not need a public IP or NAT to access your Autonomous Database
instance from any of the cloud services within the Oracle Services Network. For example, if you have a Compute
instance that uses a VCN with a service gateway, you can route traffic between your Compute instance and an
Autonomous Database in the same region without the traffic going over the internet. For information on setting up
a VCN service gateway and configuring it to access all supported Oracle Service Network services (which include
Autonomous Database), see Access to Oracle Services: Service Gateway on page 3284.

Access Control Lists (ACLs) for Databases on Shared Exadata Infrastructure


For Autonomous Databases on shared Exadata infrastructure, an access control list (ACL) provides additional
protection for your database by allowing only specified IP addresses and VCNs in the list to connect to the database.
Specified IP addresses can include private IP addresses from your on-premises network that connect to your database
using transit routing and allow traffic to move directly from your on-premises network to your Autonomous Database
without going over the internet. See Transit Routing: Private Access to Oracle Services on page 2829 for more
information on this method of access.
You can add the following to your ACL:
• Public IP addresses (individually, or in CIDR blocks)
• An entire VCN (specified by OCID )
• Private IP addresses within a specified VCN (individually, or in CIDR blocks)
• Private IP addresses within an on-premises network that have access using transit routing
You can create an ACL during database provisioning, or at any time thereafter. You can also edit an ACL at any time.
Removing all entries from the list makes the database accessible to all clients with the applicable credentials. See To
manage the access control list of an Autonomous Database on shared Exadata infrastructure on page 1177 to learn
how to create, update, or delete an ACL.
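
As a CLI-based illustration (not the documented procedure referenced above), an ACL can be expressed as the database's list of allowed sources. The OCID and addresses below are placeholders, and the parameter name should be verified with the CLI help for the autonomous-database update command:

oci db autonomous-database update --autonomous-database-id <database_ocid> --whitelisted-ips '["203.0.113.10", "198.51.100.0/24", "ocid1.vcn.oc1..<unique_ID>"]'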
Important:

If you want to only allow connections coming through a service gateway you
need to use the IP address of the service gateway in your ACL definition.
To do this you need to add an ACL rule using the CIDR source type and
the value 240.0.0.0/4. Note that this is not recommended. Instead, you can
specify individual VCNs in your ACL definition for the VCNs you want to
allow access from. See Access to Oracle Services: Service Gateway on page
3284 for more information.
Note the following about using an ACL with your Autonomous Database:
• When you restore a database, the existing ACLs are not overwritten by the restore.
• The network ACL applies to database connections and Oracle Machine Learning notebooks. If an ACL is
defined and you try to log in to Oracle Machine Learning from a client whose IP is not specified on the ACL, you
will see a "login rejected based on access control list set by the administrator" error.
• Oracle Application Express (APEX), RESTful services, and Oracle Database Actions are subject to ACLs. You
can create rules specifying Virtual Cloud Networks, Virtual Cloud Network OCIDs, IP addresses, or CIDR blocks
to control access to these tools.
• The Autonomous Database Service console is not subject to ACL rules.
• If you have a private subnet in your VCN that is configured to access the public internet through a NAT gateway,
you need to enter the public IP address of the NAT gateway in your ACL definition. Clients in the private subnet
do not have public IP addresses. See NAT Gateway on page 3275 for more information.

Network Security Groups for Databases Resources That Use Private Endpoints
Network security groups (NSGs) are an optional Networking security feature available for dedicated Exadata
infrastructure and databases on shared Exadata infrastructure that use private endpoints. NSGs act as a virtual firewall
for your Autonomous Database resources. An NSG consists of a set of ingress and egress security rules that apply
only to a set of VNICs of your choice within a single VCN. For more information, see the following topics:
• Network Security Groups on page 2867
• NSG security rule guidelines for private endpoint on page 1164
• To edit the network security groups (NSGs) for your Autonomous Exadata Infrastructure resource on page 1205
• To update the network configuration of an Autonomous Database on shared Exadata infrastructure that uses a
private endpoint on page 1178

Automatic Maintenance
For Autonomous Databases on shared Exadata infrastructure, Oracle manages the automatic maintenance. You
can view the next scheduled maintenance in the Console on the details page for your Autonomous Database. For
Autonomous Databases on dedicated Exadata infrastructure, see Overview of Dedicated Exadata Infrastructure
Maintenance on page 1197.

Development and Administration Tools


Oracle's Database Actions, Application Express (APEX), and Machine Learning applications are available for
Autonomous Databases. For information on how to use these applications and access them from the Console, see
Autonomous Database Tools on page 1217.

Compartment Quotas for Autonomous Databases


You can use compartment quotas to control how Autonomous Database OCPU and storage resources are allocated
to Oracle Cloud Infrastructure compartments. You can use compartment quota policy statements to control
OCPU and storage resources by both workload type and Exadata infrastructure type. For example, you can allocate 10
Autonomous Transaction Processing OCPUs on shared Exadata infrastructure to a specific compartment. This would
not affect the number of OCPUs available to Autonomous Data Warehouse databases, or databases using dedicated
Exadata infrastructure. For more information on using compartment quotas, see Compartment Quotas on page 246
and Database Quotas on page 255.
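
As an illustration only, a compartment quota statement limiting Autonomous Transaction Processing OCPUs in a compartment might take the following general form. The quota name and compartment name shown here are assumptions; see Database Quotas on page 255 for the exact quota names available for the Database service:

set database quota atp-ocpu-count to 10 in compartment ProjectA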

Using the Oracle Cloud Infrastructure Console to Manage Autonomous Databases


For information on provisioning, managing, and backing up an Autonomous Database in the Oracle Cloud
Infrastructure Console, see the following topics:
• Creating an Autonomous Database on Shared Exadata Infrastructure on page 1157
• Managing an Autonomous Database on page 1170
• Connecting to an Autonomous Database on page 1182
• Backing Up an Autonomous Database Manually on page 1185
• Restoring an Autonomous Database on page 1188

Additional Autonomous Database Product Information


Autonomous Database on Shared Exadata Infrastructure
For in-depth documentation on using and managing your Autonomous Transaction Processing database on shared
Exadata infrastructure, see the following topics:
• Getting Started with Autonomous Database
• Connecting to Autonomous Database
• Loading Data with Autonomous Database
• Querying External Data with Autonomous Database
• Creating Dashboards, Reports, and Notebooks with Autonomous Database
• Managing Users on Autonomous Database
• Managing and Monitoring Performance of Autonomous Database
For information on using a database client to manage your database, see Connect Autonomous Database Using a
Client Application.

Autonomous Database Tutorials


Autonomous Database Quickstart
Learn about Autonomous Database on Shared Infrastructure and learn how to create an Autonomous Database in just
a few clicks. Then load data into your database, query it, and visualize it.
Autonomous Database Quickstart Workshop

• Provision Autonomous Database


• Load Data
• Query and Visualize Data
• Wallets
• Manage and Monitor
• Scale
Analyzing Your Data with Autonomous Database
Connect using secure wallets and monitor your Autonomous Database instances. Use Oracle Analytics Desktop to
visualize data in Autonomous Database. Use Oracle Machine Learning Notebooks to try your hand at predictive
analytics.
Analyzing your data with Autonomous Database Workshop
• Provision Autonomous Database
• Load Data
• Query and Visualize Data
• Wallets
• Manage and Monitor
• Scale
• Machine Learning Notebooks
• Build a Machine Learning Algorithm
Autonomous Database on Dedicated Exadata Infrastructure
For in-depth documentation on using and managing your Autonomous Database on dedicated Exadata infrastructure,
see the following topics:
• Getting Started with Autonomous Database
• Connecting to Autonomous Database
• Loading Data into Autonomous Database
• Managing Dedicated Autonomous Databases
• Managing Database Users
• Managing and Monitoring Performance
• Backing Up and Restoring Autonomous Database
• Cloud Object Storage URI Formats
• Using Oracle Database Features in Dedicated Autonomous Database Deployments
For information on how application developers connect their applications to Autonomous Databases, see Developer’s
Guide to Oracle Autonomous Database on Dedicated Exadata Infrastructure.
See Fleet Administrator’s Guide to Oracle Autonomous Database on Dedicated Exadata Infrastructure for information
on administering multiple sets of Autonomous Database resources provisioned on dedicated Exadata Infrastructure.
For known issues, see Known Issues for Oracle Autonomous Database on Dedicated Exadata Infrastructure.
Autonomous JSON Database
Autonomous JSON Database is available on shared Exadata infrastructure. For in-depth documentation on using and
managing your Autonomous JSON Database, see the following topics:
• Get Started Using Autonomous JSON Database
• Use SQL Developer Web with JSON Collections
• Develop RESTful Services
• Build an Application
• Load JSON Documents
• Code for High Performance

See JSON Developer's Guide for information on using Autonomous JSON Database as a part of application
development.
Oracle APEX Application Development
Oracle APEX Application Development is available on Autonomous Database for shared Exadata infrastructure.
For in-depth documentation on using and managing your Oracle APEX Application Development instance, see the
following topics:
• What's Included in Oracle APEX Application Development
• Sign Up for Oracle APEX Application Development
• Access APEX Service
• Manage APEX Service
• Learn About Oracle Application Express

Creating an Autonomous Database on Shared Exadata Infrastructure


This topic describes how to provision a new Autonomous Database on Shared Exadata infrastructure using the
Oracle Cloud Infrastructure Console or the API. Autonomous Databases can be provisioned on Dedicated Exadata
infrastructure or Shared Exadata infrastructure. Your database can by optimized for either data warehouse, JSON,
transaction processing, or APEX service workloads.
To provision an Always Free Autonomous Database, see To create an Always Free Autonomous Database on page
1159. For more information on the Free Tier, see Oracle Cloud Infrastructure Free Tier on page 142.
For Oracle By Example tutorials on provisioning Autonomous Databases, see Provisioning Autonomous Transaction
Processing and Provisioning Autonomous Data Warehouse Cloud.
Prerequisites
• To create an Autonomous Database, you must be given the required type of access in a policy written by an
administrator, whether you're using the Console or the REST API with an SDK, CLI, or other tool. If you
try to perform an action and get a message that you don’t have permission or are unauthorized, confirm with
your administrator the type of access you've been granted and which compartment you should work in. See
Authentication and Authorization for more information on user authorizations for the Oracle Cloud Infrastructure
Database service.
Tip:

See Let database and fleet admins manage Autonomous Databases on


page 2159 for sample Autonomous Database policies. See Details for the
Database Service on page 2251 for detailed information on policy syntax.
• For information on additional prerequisites for provisioning an Autonomous Transaction Processing database, see
What Do You Need? Likewise, for information on additional prerequisites for provisioning an Autonomous Data
Warehouse, see What Do You Need?
• To create an Autonomous Transaction Processing database on Dedicated Exadata infrastructure, you must first
provision the infrastructure and at least one Autonomous Container Database. For more information, see Creating
an Autonomous Exadata Infrastructure Resource on page 1199 and Creating an Autonomous Container Database
on page 1206.
Using the Oracle Cloud Infrastructure Console

To create an Autonomous Database on shared Exadata infrastructure


Tip:

For Autonomous Databases with shared Exadata infrastructure, Oracle Cloud


Infrastructure uses per-second billing. This means that CPU and storage
usage is billed by the second, with a minimum usage period of one minute.

1. Open the navigation menu. Under Oracle Database, click Autonomous Data Warehouse, Autonomous JSON
Database, or Autonomous Transaction Processing.
2. Provide the following information for the Autonomous Database:
• Compartment: Select the compartment of the Autonomous Database.
• Display name: A user-friendly description or other information that helps you easily identify the resource.
The display name does not have to be unique. Avoid entering confidential information.
• Database name: The database name must consist of letters and numbers only, starting with a letter. The
maximum length is 14 characters.
Note:

You cannot use the same database name concurrently for an Autonomous
Data Warehouse, an Autonomous JSON, or an Autonomous Transaction
Processing database for databases using shared Exadata infrastructure.
Names associated with databases terminated within the last 60 days cannot
be used when creating a database.
3. Choose a workload type. See About Autonomous Data Warehouse, About Autonomous JSON Database, About
Autonomous Transaction Processing, and About Oracle Application Express for information about each workload
type.
4. Choose the Shared Infrastructure deployment type.
Note:

If you choose JSON or APEX as your workload type, then Shared


Infrastructure is the only available deployment type.
5. Configure the database:
• Always Free: Use this selector to show only Always Free configuration options if you are provisioning an
Always Free Autonomous Database. See Overview of the Always Free Autonomous Database on page 1221
for more information.
Note:

This option is not available for either JSON or APEX workload types.
• Choose database version: Select a database version from the available versions.
• OCPU count: Specify the number of cores for your Autonomous Database. The actual number of available
cores is subject to your tenancy's service limits.
Deselect Auto scaling to disable auto scaling. By default auto scaling is enabled to allow the system to
automatically use up to three times more CPU and IO resources to meet workload demand. See CPU Scaling
for more information.
• Storage (TB): Specify the storage you wish to make available to your Autonomous Database, in terabytes.
• Enable preview version: (This option only displays during periods when a preview version of Autonomous
Database is available) Select this option to provision the database with an Autonomous Database preview
version. Preview versions of Autonomous Database are made available for limited periods for testing
purposes. Do not select this option if you are provisioning a database for production purposes or if you will
need the database to persist beyond the limited availability period of the preview version.

6. Create administrator credentials: Set the password for the Autonomous Database ADMIN user by entering
a password that meets the following criteria. You use this password when accessing the Autonomous Database
service console and when using a SQL client tool.
Password criteria:

• Contains from 12 to 30 characters and includes at least one uppercase letter, one lowercase letter, and one
numeric character.
• Does not contain the string "admin", regardless of case
• Is not one of the last four passwords used for the ADMIN user
• Does not contain the double quotation mark (")
• Cannot be the same password that was set less than 24 hours ago
7. Choose the type of network access.

• Allow secure access from anywhere: This option provides access using a public endpoint that you secure
with an access control list (ACL). Use this option if you need to access your database from the internet or your
on-premises network. See Adding an Access Control List (ACL) to an Autonomous Database with a Public
Endpoint on page 1162 for more information.
• Virtual cloud network: This option creates a private endpoint for your database within a specified VCN. See
Configuring a Virtual Cloud Network (VCN) for Private Endpoint Access to Your Autonomous Database on
page 1162 for more information.
8. Choose a license type. Your choice affects metering for billing. You have the following options:
• Bring Your Own License (BYOL): Bring my existing database software licenses to the database cloud
service.
• License Included: Subscribe to new database software licenses and the Database cloud service.
Note:

If you choose either JSON or APEX as your workload type, then


License Included is the only available license type.
9. Clicking Show Advanced Options allows you to configure the following:
• Tags: If you have permissions to create a resource, then you also have permissions to apply free-form tags
to that resource. To apply a defined tag, you must have permissions to use the tag namespace. For more
information about tagging, see Resource Tags on page 213. If you are not sure whether to apply tags, then skip
this option (you can apply tags later) or ask your administrator.
10. Click Create Autonomous Database.
WHAT NEXT?
• Connect to the database
• Create database users - Data Warehouse | Transaction Processing
• Load data into the database - Data Warehouse | JSON | Transaction Processing
• Connect applications that use the database - Data Warehouse | JSON | Transaction Processing
• Create APEX applications that use the database - Data Warehouse | Transaction Processing
• Register the database with Data Safe
• Set up Object Storage for manual backups
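
The same provisioning can also be scripted. The following CLI sketch is illustrative rather than a documented procedure; the OCIDs, names, and password are placeholders, and the parameter names should be verified with the CLI help for the autonomous-database create command before use:

oci db autonomous-database create --compartment-id <compartment_ocid> --db-name mydbname --display-name "My ATP database" --db-workload OLTP --cpu-core-count 1 --data-storage-size-in-tbs 1 --admin-password <ADMIN_password> --license-model LICENSE_INCLUDED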
To create an Always Free Autonomous Database
Note:

An Always Free Autonomous Database cannot be created in a security zone


compartment. See the Security Zone Policies topic for a full list of policies
that affect Database service resources.

1. Open the navigation menu. Under Oracle Database, click Autonomous Data Warehouse or Autonomous
Transaction Processing.
Note:

The JSON workload type is not available for Always Free Autonomous
Database. You can use the Autonomous Transaction Processing workload
type to work with JSON objects. Always Free Autonomous Transaction
Processing databases can be converted to paid Autonomous JSON
Databases.
2. Provide the following information for the Autonomous Database:
• Compartment: Select the compartment of the Autonomous Database.
• Display name: A user-friendly description or other information that helps you easily identify the resource.
The display name does not have to be unique. Avoid entering confidential information.
• Database name: The database name must consist of letters and numbers only, starting with a letter. The
maximum length is 14 characters.
Note:

You cannot use the same database name concurrently for both an
Autonomous Data Warehouse and an Autonomous Transaction Processing
database for databases using shared Exadata infrastructure.
3. Choose a workload type. See About Autonomous Data Warehouse, About Autonomous JSON Database, About
Autonomous Transaction Processing, and About Oracle Application Express for information about each workload
type.
Note:
The Oracle Application Express workload type is not available with Always Free Autonomous Databases.
4. Choose the Shared Infrastructure deployment type.
5. Configure the database:
• Always Free: Move this selector to the right so that the provisioning workflow shows only the Always Free
configuration options. Note that the Core CPU count and Storage configuration fields are disabled when
provisioning an Always Free Autonomous Database. Your database will have 1 OCPU, 8 GB of memory, and
20 GB of storage.
• Choose database version: Select a database version from the available versions.
Note:

You can select only the current database version or a newer one. You
cannot downgrade to an older database version.
6. Create administrator credentials: Set the password for the Autonomous Database ADMIN user by entering
a password that meets the following criteria. You use this password when accessing the Autonomous Database
service console and when using a SQL client tool.
Password criteria:
• Contains from 12 to 30 characters and includes at least one uppercase letter, one lowercase letter, and one
numeric character.
• Does not contain the string "admin", regardless of case
• Is not one of the last four passwords used for the ADMIN user
• Does not contain the double quotation mark (")
• Cannot be the same password that was set less than 24 hours ago
7. Network access for Always Free Autonomous Database is Allow secure access from anywhere. This option
provides access using a public endpoint that you secure with an access control list (ACL). Use this option if you
need to access your database from the internet or your on-premises network. See Adding an Access Control List
(ACL) to an Autonomous Database with a Public Endpoint on page 1162 for more information on creating an
ACL.
8. Clicking Show Advanced Options allows you to configure the following:

• Tags: If you have permissions to create a resource, then you also have permissions to apply free-form tags
to that resource. To apply a defined tag, you must have permissions to use the tag namespace. For more
information about tagging, see Resource Tags on page 213. If you are not sure whether to apply tags, then skip
this option (you can apply tags later) or ask your administrator.
9. Click Create Autonomous Database.
Note:
The following naming restrictions apply to Autonomous Transaction Processing and Autonomous Data Warehouse
databases using Shared Exadata Infrastructure:
• Names associated with databases terminated within the last 60 days cannot be used when creating a new database.
• A database name cannot be used concurrently for two Autonomous Databases, regardless of workload type.
WHAT NEXT?
• Create database users - Data Warehouse | Transaction Processing
• Load data into the database - Data Warehouse | Transaction Processing
• Connect applications that use the database - Data Warehouse | Transaction Processing
• Create APEX applications that use the database - Data Warehouse | Transaction Processing
• Connect to the database
Using the API
Use the CreateAutonomousDatabase API operation to create Autonomous Databases of either the Autonomous Data
Warehouse (DW), Autonomous JSON Database (AJD), or Autonomous Transaction Processing (OLTP) workload
types.
For information about using the API and signing requests, see REST APIs on page 4409 and Security Credentials on
page 181. For information about SDKs, see Software Development Kits and Command Line Interface on page 4262.
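For example, the following is a minimal sketch using the OCI SDK for Python (one of the SDKs referenced above). All OCIDs and values shown are placeholders; substitute your own compartment OCID, database name, and ADMIN password, and adjust the workload and license settings to match your requirements.

import oci

config = oci.config.from_file()                        # reads ~/.oci/config by default
db_client = oci.database.DatabaseClient(config)

details = oci.database.models.CreateAutonomousDatabaseDetails(
    compartment_id="ocid1.compartment.oc1..example",   # placeholder compartment OCID
    db_name="EXAMPLEDB",                                # letters and numbers only, maximum 14 characters
    display_name="example-adb",
    db_workload="DW",                                   # "DW", "OLTP", or "AJD"
    cpu_core_count=1,
    data_storage_size_in_tbs=1,
    admin_password="ExamplePassw0rd4321",               # must meet the password criteria described above
    license_model="LICENSE_INCLUDED",                   # or "BRING_YOUR_OWN_LICENSE"
)

response = db_client.create_autonomous_database(
    create_autonomous_database_details=details
)
print(response.data.id, response.data.lifecycle_state)

The operation returns the new Autonomous Database resource; poll its lifecycle state (or track the associated work request) until it reaches Available before connecting.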
For More Information

Autonomous Database: Autonomous Transaction Processing and Autonomous Data Warehouse


• Using Oracle Autonomous Database on Shared Exadata Infrastructure (full product documentation)
• Autonomous Database Quickstart Workshop (Learn about Autonomous Database on Shared Infrastructure and
learn how to create an Autonomous Database in just a few clicks. Then load data into your database, query it, and
visualize it.)

Autonomous JSON Database


• Using Autonomous JSON Database (full product documentation)
• Autonomous JSON Database Videos (video tutorials)

Oracle APEX Application Development


• Oracle APEX Application Development (full product documentation)
• Oracle APEX Application Development Videos (video tutorials)


Adding an Access Control List (ACL) to an Autonomous Database with a Public Endpoint
This topic describes how to add an access control list (ACL) when provisioning an Autonomous Database on shared
Exadata infrastructure that uses the Allow secure access from anywhere networking option. See Choose the type of
network access in the To create an Autonomous Database on shared Exadata infrastructure on page 1157 topic to
return to the provisioning instructions.
The network access rules you create for an access control list (ACL) provide protection for your Autonomous
Database by allowing only the public and VCN IP addresses in the list to connect to the database. Click Configure
Access Control Rules in the Create Autonomous Database dialog to create an ACL for your database.
You can specify the following types of addresses in your list by using the IP notation type drop-down selector:
• IP Address allows you to specify one or more individual public IP addresses. Use commas to separate your
addresses in the input field.
• CIDR Block allows you to specify one or more ranges of public IP addresses using CIDR notation. Use commas
to separate your CIDR block entries in the input field.
• Virtual Cloud Network allows you to specify an existing VCN. The drop-down listing in the input field allows
you to choose from the VCNs in your current compartment for which you have access permissions. Click the
Change Compartment link to display the VCNs of a different compartment.
• Virtual Cloud Network (OCID) allows you to input the OCID of a VCN in a text box. You can use this input
method if the VCN you are specifying is in a compartment which you do not have permission to access.
Caution:

If you want to specify multiple IP addresses or CIDR ranges within the same
VCN, do not create multiple ACL entries. Use one ACL entry with the values
for the multiple IP addresses or CIDR ranges separated by commas.
If you add a Virtual Cloud Network to your ACL, you can further limit access by specifying allowed VCN IP addresses or
CIDR ranges. Enter those addresses or CIDR blocks in the IP Addresses or CIDRs field that is displayed below your
Virtual Cloud Network choice. Use commas to separate your VCN addresses and CIDR blocks in the input field.
You can specify the following types of IP addresses at the VCN level:
• Private IP addresses within your Oracle Cloud Infrastructure VCN
• Private IP addresses within an on-premises network that have access to your Autonomous Database using
transit routing and a private connection via FastConnect or VPN Connect.
Click + Another Entry to add additional access rules to your list.
Return to the Create Autonomous Database dialog instructions
Configuring a Virtual Cloud Network (VCN) for Private Endpoint Access to Your Autonomous
Database
This topic describes how to configure a virtual cloud network (VCN) when provisioning an Autonomous Database
on shared Exadata infrastructure that uses the Virtual cloud network networking option to provide private endpoint
access. See Choose the type of network access in the To create an Autonomous Database on shared Exadata
infrastructure on page 1157 topic to return to the provisioning instructions.
Note:
• See Networking Prerequisites Needed for Private Endpoint on page 1164 for information on creating the
Networking resources needed for this configuration.
• If you are creating an Autonomous Database in a security zone compartment, your private endpoint networking
configuration must use a subnet that is also in a security zone compartment. See the Security Zone Policies topic
for a full list of policies that affect Database service resources.


Select Virtual cloud network in the Create Autonomous Database dialog to configure private access, then specify
the following information:
• Virtual cloud network: The VCN in which to launch the Autonomous Database. Click Change Compartment
to select a VCN in a different compartment. Important: You cannot change the specified VCN after provisioning,
except by switching to the Allow secure access from anywhere option, then switching back to the Virtual cloud
network option and creating a new private endpoint network configuration.
• Subnet: The subnet to which the Autonomous Database should attach. Click change compartment to select
a subnet in a different compartment. Important: You cannot change the specified subnet after provisioning,
except by switching to the Allow secure access from anywhere option, then switching back to the Virtual cloud
network option and creating a new private endpoint network configuration.
• Hostname prefix: Optional. This specifies a host name prefix for the Autonomous Database and associates a
DNS name with the database instance, in the following form:

hostname_prefix.adb.region.oraclecloud.com

The host name must begin with an alphabetic character, and can contain only alphanumeric characters and
hyphens (-). You can use up to 63 alphanumeric characters for your hostname prefix.
If you choose not to specify a hostname prefix, Oracle creates a unique DNS name for your database.
• Network security groups: You must specify at least one network security group (NSG) for your Autonomous
Database. NSGs function as virtual firewalls, allowing you to apply a set of ingress and egress security rules to
your database. A maximum of five NSGs can be specified. See NSG security rule guidelines for private endpoint
on page 1164 for details on configuring an NSG for your Autonomous Database.
For more information on creating and working with NSGs, see Network Security Groups on page 2867.
Note that if you choose a subnet with a security list, the security rules for the database will be a union of the rules
in the security list and the NSGs.
Tip:
When connecting to your database from an on-premises network, Oracle recommends using a FastConnect
connection. If you are using an IPSec VPN connection, see the configuration tips in the Hanging Connection on page
3356 topic in the Networking service documentation to avoid connection problems.
Return to the Create Autonomous Database dialog instructions

Autonomous Database with Private Endpoint


Note:
This topic applies only to Autonomous Databases with shared Exadata infrastructure.
Private endpoint refers to a network setup for your Autonomous Database with shared Exadata infrastructure where
all network traffic moves through a private endpoint within a VCN in your tenancy. If your organization has strict
security mandates that do not allow you to have a public endpoint for your database, this provides you with the
necessary private endpoint. Additionally, this configuration uses no public subnets and allows you to keep all traffic
to and from your Autonomous Database off of the public internet.
Overview of Private Endpoint
Enabling a private endpoint for an Autonomous Database ensures that the only access path to the database is via a
VCN inside your Oracle Cloud Infrastructure tenancy. This network configuration completely blocks access to the
database from public endpoints. A private endpoint offers the following advantages over other methods of private
network access:
• Does not require you to set up transit routing in your VCN and use a service gateway to connect.


• Can satisfy security requirements that forbid the use of a public endpoint.
The private endpoint option is available for both new and existing Autonomous Databases on shared Exadata
infrastructure. See To create an Autonomous Database on shared Exadata infrastructure on page 1157 for
instructions on creating a new Autonomous Database with a private endpoint. See To change the network access of an
Autonomous Database on shared Exadata infrastructure from private endpoint to public endpoint for information on
switching the network access configuration of an existing database.
Networking Prerequisites Needed for Private Endpoint
To provision an Autonomous Database with a private endpoint, you must have the following resources already
created:
• A VCN within the region that will contain your Autonomous Database with shared Exadata infrastructure. Cannot
be changed after provisioning.
• A private subnet within your VCN configured with default DHCP options. Cannot be changed after provisioning.
• At least 1 network security group (NSG) within your VCN for the Autonomous Database. Can be changed or
edited after provisioning.
NSGs create a virtual firewall for your Autonomous Database using security rules. You can specify up to five NSGs
to control access to your Autonomous Database.
NSG security rule guidelines for private endpoint
Your security rules for the NSG of your Autonomous Database need to be configured as follows:
• The private endpoint feature supports both stateful and stateless security rules within NSGs.
• Your rule covering ingress traffic must specify the IP Protocol "TCP", and your Destination Port Range must be
1522.
• To use Oracle Application Express, Oracle SQL Developer Web, and Oracle REST Data Services, add port 443 to
the NSG rule.
To connect another resource located inside Oracle Cloud Infrastructure (for example, a Compute instance) to your
Autonomous Database, the second resource needs a security rule that allows all egress traffic to the NSG of the
Autonomous Database. This means you specify the NSG of the Autonomous Database as the Destination for this
security rule. The second resource's security rule can be part of an NSG or a security list.
See Network Security Groups on page 2867 and To create an NSG on page 2873 for more information on working
with NSGs.
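As an illustration, the following sketch uses the OCI SDK for Python to add a stateful ingress rule for TCP port 1522 to an existing NSG. The NSG OCID and source CIDR are placeholders; adjust them to match your own network.

import oci

config = oci.config.from_file()
vcn_client = oci.core.VirtualNetworkClient(config)

rule = oci.core.models.AddSecurityRuleDetails(
    direction="INGRESS",
    protocol="6",                                      # 6 = TCP
    source="10.0.0.0/24",                              # placeholder: CIDR of the clients allowed to connect
    source_type="CIDR_BLOCK",
    is_stateless=False,                                # stateless rules are also supported
    tcp_options=oci.core.models.TcpOptions(
        destination_port_range=oci.core.models.PortRange(min=1522, max=1522)
    ),
    description="Allow SQL*Net traffic to the Autonomous Database private endpoint",
)

vcn_client.add_network_security_group_security_rules(
    network_security_group_id="ocid1.networksecuritygroup.oc1..example",   # placeholder NSG OCID
    add_network_security_group_security_rules_details=
        oci.core.models.AddNetworkSecurityGroupSecurityRulesDetails(security_rules=[rule]),
)

A second rule for port 443 can be added the same way if you plan to use Oracle Application Express, Oracle SQL Developer Web, or Oracle REST Data Services.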
Connecting to an Autonomous Database with a Private Endpoint
You can connect to an Autonomous Database that uses a private endpoint from within Oracle Cloud Infrastructure
resources, or from your data center. See To find the Fully Qualified Domain Name (FQDN) and IP address of your
private endpoint for information on locating the IP address and URL of your endpoint.

Example 1: Connecting from Within Oracle Cloud Infrastructure


You can connect from a resource (like a Compute instance) within the same VCN as the private endpoint. Note that
you can also connect from a resource located in a different VCN from the private endpoint by using local or remote
VCN peering.


Example network layout for connecting to an Autonomous Database with a private endpoint from within Oracle
Cloud Infrastructure
You set up:
• A VCN and a private subnet
• An NSG for the Autonomous Database that includes either stateful or stateless security rules, as described in
Networking Prerequisites Needed for Private Endpoint

Example stateful security rule for the Autonomous Database NSG. Note that stateless rules are also supported.
• An NSG security rule for the resource that will be allowed access to the Autonomous Database. This stateful
egress security rule allows all egress traffic to the NSG of the Autonomous Database.

Example stateful egress security rule for the NSG of the resource connecting to the Autonomous Database


Example 2: Connecting from an On-Premises Data Center

Example network layout for connecting to an Autonomous Database with a private endpoint from an on-premises
network
You set up:
• A VCN and a private subnet
• An NSG for the Autonomous Database that includes one or more security rules, as described in Networking
Prerequisites Needed for Private Endpoint, allowing traffic to a CIDR within your on-premises network

Example stateful security rule for the Autonomous Database NSG


• An Oracle Cloud Infrastructure FastConnect dedicated private connection or a VPN Connect IPSec VPN
connection
• A dynamic routing gateway (DRG)
• A route table
Tip:
When connecting from an on-premises network, Oracle recommends using a FastConnect connection. If you are
using an IPSec VPN connection, see the configuration tips in the Hanging Connection on page 3356 topic in the
Networking service documentation to avoid connection problems.
To find the Fully Qualified Domain Name (FQDN) and IP address of your private endpoint
Your database's private endpoint IP address is displayed on the Autonomous Database Details page in the Oracle
Cloud Infrastructure Console.
1. Open the navigation menu. Under Oracle Database, click Autonomous Data Warehouse, Autonomous JSON
Database, or Autonomous Transaction Processing.


2. Choose your Compartment.


3. In the list of Autonomous Databases, click the display name of the database you want to connect to.
4. On the Autonomous Database Details page, in the Network section, the Private Endpoint IP and Private Endpoint
URL fields display the IP address and URL of the endpoint.
To resolve the Autonomous Database private endpoint in your on-premises host's /etc/hosts file
To resolve the Autonomous Database private endpoint by its Fully Qualified Domain Name (FQDN), you must add
an entry to your on-premises client's /etc/hosts file. For example:

# example /etc/hosts entry


10.0.2.7 example.adb.us-phoenix-1.oraclecloud.com

To use Oracle Application Express, Oracle SQL Developer Web, and Oracle REST Data Services, add another entry
with the same IP. For example:

# example /etc/hosts entry


10.0.2.7 example.adb.ca-toronto-1.oraclecloudapps.com

You find the private endpoint IP and the FQDN as follows:


• The Private IP is shown on the Oracle Cloud Infrastructure Console Autonomous Database details page for the
instance.
• The FQDN is shown in the tnsnames.ora file in the Autonomous Database client credential wallet.
Alternatively you can set up a hybrid DNS in Oracle Cloud Infrastructure for DNS name resolution.
Additional Information
See To create an Autonomous Database on shared Exadata infrastructure on page 1157 for instructions on
provisioning an Autonomous Database that uses a private endpoint.
See To update the network configuration of an Autonomous Database on shared Exadata infrastructure that uses a
private endpoint on page 1178 for information on editing networking settings related to a private endpoint.
See Private Access on page 3281 in the Networking service documentation for an overview of the options for
enabling private access to services within Oracle Cloud Infrastructure.
See Hanging Connection on page 3356 in the Networking service documentation for troubleshooting IPSec VPN
connection issues that can occur when connecting from your on-premises network.

Creating an Autonomous Database on Dedicated Exadata Infrastructure


This topic describes how to provision a new Autonomous Database on dedicated Exadata infrastructure using the
Oracle Cloud Infrastructure Console or the API. Autonomous Databases can be provisioned on either dedicated
Exadata infrastructure or shared Exadata infrastructure. Your database can be optimized for either transaction
processing or data warehouse workloads, and you can create a standby database to facilitate disaster recovery by
enabling Autonomous Data Guard.
For Oracle By Example tutorials on provisioning Autonomous Databases, see Provisioning Autonomous Transaction
Processing and Provisioning Autonomous Data Warehouse Cloud.
Prerequisites
• To create an Autonomous Database, you must be given the required type of access in a policy written by an
administrator, whether you're using the Console or the REST API with an SDK, CLI, or other tool. If you
try to perform an action and get a message that you don’t have permission or are unauthorized, confirm with
your administrator the type of access you've been granted and which compartment you should work in. See
Authentication and Authorization for more information on user authorizations for the Oracle Cloud Infrastructure
Database service.
Tip:
See Let database and fleet admins manage Autonomous Databases on page 2159 for sample Autonomous Database
policies. See Details for the Database Service on page 2251 for detailed information on policy syntax.
• For information on additional prerequisites for provisioning an Autonomous Transaction Processing database, see
What Do You Need? Likewise, for information on additional prerequisites for provisioning an Autonomous Data
Warehouse, see What Do You Need?
• To create an Autonomous Transaction Processing database on Dedicated Exadata infrastructure, you must first
provision the infrastructure and at least one Autonomous Container Database. For more information, see Creating
an Autonomous Exadata Infrastructure Resource on page 1199 and Creating an Autonomous Container Database
on page 1206.
Using the Oracle Cloud Infrastructure Console
1. Open the navigation menu. Under Oracle Database, click Autonomous Data Warehouse, Autonomous JSON
Database, or Autonomous Transaction Processing.
2. Provide the following information for the Autonomous Database:
• Compartment: Select the compartment of the Autonomous Database.
• Display name: A user-friendly description or other information that helps you easily identify the resource.
The display name does not have to be unique. Avoid entering confidential information.
• Database name: The database name must consist of letters and numbers only, starting with a letter. The
maximum length is 14 characters.
3. Choose a workload type. See About Autonomous Data Warehouse and About Autonomous Transaction
Processing for information about each workload type.
4. Click the Dedicated Infrastructure deployment type.
5. In the Choose Autonomous Container Database section, select an Autonomous Container Database from the
Autonomous Container Database in compartment drop-down.
You can change the compartment from which to choose an Autonomous Container Database by clicking the
CHANGE COMPARTMENT link.
You can provision a standby Autonomous Container Database to provide data protection, high availability, and to
facilitate disaster recovery for the primary database.
a. Select Autonomous Data Guard-enabled Autonomous Container Databases to list Autonomous Container
Databases that have Oracle Data Guard enabled.
b. Select an Autonomous Container Database from the Autonomous Container Database in compartment drop-
down.
Note:
The standby Autonomous Container Database inherits the resource configuration (OCPU count and amount of
storage) from the primary database.
• See Creating an Autonomous Container Database on page 1206 for information about provisioning a
container database.
• See Database System Resource types on page 1194 in the Dedicated Deployment overview for information
about the container database resource type.
• See Managing a Standby Autonomous Container Database on page 1215 for more information about Oracle
Data Guard.


6. Configure the database:


• OCPU count: Specify the number of cores for your Autonomous Database. The actual number of available
cores is subject to your tenancy's service limits.
Deselect Auto scaling to disable auto scaling. By default auto scaling is enabled to allow the system to
automatically use up to three times more CPU and IO resources to meet workload demand. See CPU Scaling
for more information.
• Storage (TB): Specify the storage you wish to make available to your Autonomous Database, in terabytes.
7. Create administrator credentials: Set the password for the Autonomous Database ADMIN user by entering
a password that meets the following criteria. You use this password when accessing the Autonomous Database
service console and when using a SQL client tool.
Password criteria:
• Contains from 12 to 30 characters and includes at least one uppercase letter, one lowercase letter, and one
numeric character.
• Does not contain the string "admin", regardless of case
• Is not one of the last four passwords used for the ADMIN user
• Does not contain the double quotation mark (")
• Cannot be the same password that was set less than 24 hours ago
Note:
If you enable Autonomous Data Guard, then the standby Autonomous Database that gets provisioned has the same
ADMIN user password as the primary database.
8. Click Show Advanced Options to configure the following:
• Encryption Key: Whichever encryption key-management choice you made, either Oracle managed or your
own encryption key, when you created the Autonomous Container Database, is displayed. The Autonomous
Database you are creating inherits key management from the Autonomous Container Database.
See Managing Keys on page 3998 for more information about encryption keys
• Tags: If you have permissions to create a resource, then you also have permissions to apply free-form tags
to that resource. To apply a defined tag, you must have permissions to use the tag namespace. For more
information about tagging, see Resource Tags on page 213. If you are not sure if you should apply tags, then
skip this option (you can apply tags later) or ask your administrator.
9. Click Create Autonomous Database.
Using the API
Use the CreateAutonomousDatabase API operation to create Autonomous Databases of either the Autonomous Data
Warehouse (DW), Autonomous JSON Database (AJD), or Autonomous Transaction Processing (OLTP) workload
types.
For information about using the API and signing requests, see REST APIs on page 4409 and Security Credentials on
page 181. For information about SDKs, see Software Development Kits and Command Line Interface on page 4262.
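As with shared infrastructure, a minimal sketch using the OCI SDK for Python is shown below; the dedicated case additionally supplies the OCID of the Autonomous Container Database. All OCIDs and values are placeholders.

import oci

db_client = oci.database.DatabaseClient(oci.config.from_file())

details = oci.database.models.CreateAutonomousDatabaseDetails(
    compartment_id="ocid1.compartment.oc1..example",                                    # placeholder
    autonomous_container_database_id="ocid1.autonomouscontainerdatabase.oc1..example",  # placeholder
    is_dedicated=True,
    db_name="EXAMPLEDB",
    db_workload="OLTP",
    cpu_core_count=2,
    data_storage_size_in_tbs=1,
    admin_password="ExamplePassw0rd4321",   # must meet the password criteria described above
)

adb = db_client.create_autonomous_database(details).data
print(adb.id, adb.lifecycle_state)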
For More Information

Full Autonomous Database product documentation for dedicated Exadata infrastructure


Getting Started with Autonomous Database

Tutorials
• Autonomous Transaction Processing: Tutorials (Oracle By Example tutorials)
• Autonomous Data Warehouse: Tutorials (Oracle By Example tutorials)


Videos
• Autonomous Transaction Processing: Videos (video tutorials)
• Autonomous Data Warehouse: Videos (video tutorials)

Managing an Autonomous Database

Autonomous Database
This topic describes the database management tasks for Autonomous Databases that you complete using the Oracle
Cloud Infrastructure Console or the API. These tasks apply to Autonomous Databases of either the Data Warehouse,
JSON Database, or Transaction Processing workload types. You can filter Autonomous Databases by workload type
on the Autonomous Databases page of the Oracle Cloud Infrastructure Console.
Note:

Some database management tasks not described here are performed using the
Autonomous Transaction Processing service console or the Autonomous Data
Warehouse service console.
Prerequisites
To perform the management tasks in this topic, you must be given the required type of access in a policy written
by an administrator, whether you're using the Console or the REST API with an SDK, CLI, or other tool. If you
try to perform an action and get a message that you don’t have permission or are unauthorized, confirm with your
administrator the type of access you've been granted and which compartment you should work in. See Authentication
and Authorization for more information on user authorizations for the Oracle Cloud Infrastructure Database
service. See Let database and fleet admins manage Autonomous Databases on page 2159 for sample Autonomous
Database policies. See Details for the Database Service on page 2251 for detailed information on policy syntax.
Using the Console
Lifecycle Management Operations

To check the lifecycle state of your Autonomous Database


1. Open the navigation menu. Under Oracle Database, click Autonomous Data Warehouse, Autonomous JSON
Database, or Autonomous Transaction Processing.
2. Choose your Compartment.
3. In the list of Autonomous Databases, click the display name of the database you want to administer.
4. In the Information tab, note the value displayed for Lifecycle State. For some lifecycle states, an information icon
is displayed to provide additional details regarding the lifecycle state or ongoing operations such as backups,
restores, or terminations. The database has one of the following lifecycle states:
• Available
• Available needs attention
• Backup in progress
• Provisioning
• Restore in progress
• Scaling in progress
• Starting
• Stopping
• Stopped
• Terminating
• Terminated
• Unavailable
To stop or start an Autonomous Database


1. Open the navigation menu. Under Oracle Database, click Autonomous Data Warehouse, Autonomous JSON
Database, or Autonomous Transaction Processing.
2. Choose your Compartment.
3. In the list of Autonomous Databases, click the display name of the database you want to administer.
4. Go to More Actions, and then click Stop (or Start). When you stop your Autonomous Database, billing stops for
CPU usage. Billing for storage continues when the database is stopped.
5. Confirm that you want to stop or start your Autonomous Database in the confirmation dialog.
Note:
Stopping your database has the following consequences:
• Ongoing transactions are rolled back.
• CPU billing is halted.
• You will not be able to connect to your database using database clients or tools.
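The same operations are available through the StopAutonomousDatabase and StartAutonomousDatabase API operations. A minimal sketch using the OCI SDK for Python follows; the database OCID is a placeholder.

import oci

db_client = oci.database.DatabaseClient(oci.config.from_file())
adb_id = "ocid1.autonomousdatabase.oc1..example"       # placeholder database OCID

db_client.stop_autonomous_database(adb_id)             # open transactions are rolled back; CPU billing stops
# ... later ...
db_client.start_autonomous_database(adb_id)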
To restart an Autonomous Database
Tip:
Restarting a database is equivalent to manually stopping and then starting the database. Using restart allows you to
minimize downtime and requires only a single action.
1. Open the navigation menu. Under Oracle Database, click Autonomous Data Warehouse, Autonomous JSON
Database, or Autonomous Transaction Processing.
2. Choose your Compartment.
3. In the list of Autonomous Databases, click the display name of the database you want to administer.
4. Go to More Actions, and then click Restart.
5. Confirm that you want to restart your Autonomous Database in the confirmation dialog. The system stops and
then immediately starts your database.
To terminate an Autonomous Database
Caution:
Terminating an Autonomous Database permanently deletes it. The database data, including automatic backups, will
be lost when the system is terminated. Manual backups remain in Object Storage and are not automatically deleted
when you terminate an Autonomous Database. Oracle recommends that you create a manual backup prior to
terminating.
1. Open the navigation menu. Under Oracle Database, click Autonomous Data Warehouse, Autonomous JSON
Database, or Autonomous Transaction Processing.
2. Choose your Compartment.
3. In the list of Autonomous Databases, click the display name of the database you want to administer.
4. Go to More Actions, and then click Terminate.
5. Confirm that you want to terminate your Autonomous Database in the confirmation dialog.
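The DeleteAutonomousDatabase API operation performs the same action. A minimal sketch with the OCI SDK for Python is shown below; the OCID is a placeholder, and remember that termination is permanent.

import oci

db_client = oci.database.DatabaseClient(oci.config.from_file())
db_client.delete_autonomous_database("ocid1.autonomousdatabase.oc1..example")   # placeholder OCID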
To scale the CPU core count or storage of an Autonomous Database
1. Open the navigation menu. Under Oracle Database, click Autonomous Data Warehouse, Autonomous JSON
Database, or Autonomous Transaction Processing.
2. Choose your Compartment.
3. In the list of Autonomous Databases, click the display name of the database you want to administer.
4. Click Scale Up/Down.


5. Enter a new value for CPU Core Count or Storage from 1 to 128. The number you enter represents the
total value for your database's CPU core count or storage.
The number of available cores is subject to your tenancy's service limits. An Autonomous Database can
have a maximum of 128 cores and 128 TB of storage. Scaling the CPU core count affects your CPU billing.
6. Click Update.
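Scaling can also be requested through the UpdateAutonomousDatabase API operation. A minimal sketch using the OCI SDK for Python follows; the OCID and target values are placeholders.

import oci

db_client = oci.database.DatabaseClient(oci.config.from_file())

db_client.update_autonomous_database(
    autonomous_database_id="ocid1.autonomousdatabase.oc1..example",    # placeholder OCID
    update_autonomous_database_details=oci.database.models.UpdateAutonomousDatabaseDetails(
        cpu_core_count=4,                # new total core count
        data_storage_size_in_tbs=2,      # new total storage, in terabytes
    ),
)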
To enable or disable auto scaling for an Autonomous Database
Note the following points regarding the auto scaling feature:
• With auto scaling enabled, the database can use up to three times more CPU and IO resources than specified by
the number of OCPUs currently shown in the Scale Up/Down dialog. See CPU Scaling for more information.
• If auto scaling is disabled while more CPU cores are in use than the database's currently assigned number of cores,
then Autonomous Database scales the number of CPU cores in use down to the assigned number.
• Enabling auto scaling does not change the concurrency and parallelism settings for the predefined services. See
Managing Concurrency and Priorities on Autonomous Data Warehouse and Managing Priorities on Autonomous
Transaction Processing for more information.
1. Open the navigation menu. Under Oracle Database, click Autonomous Data Warehouse, Autonomous JSON
Database, or Autonomous Transaction Processing.
2. Choose your Compartment.
3. In the list of Autonomous Databases, click the display name of the database you want to administer.
4. Click Scale Up/Down.
5. Check Auto Scaling to enable the auto scaling feature, or uncheck Auto Scaling to disable the feature.
6. Click Update.
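Auto scaling can likewise be toggled with the UpdateAutonomousDatabase API operation; a sketch using the OCI SDK for Python is shown below (the OCID is a placeholder).

import oci

db_client = oci.database.DatabaseClient(oci.config.from_file())
db_client.update_autonomous_database(
    "ocid1.autonomousdatabase.oc1..example",            # placeholder OCID
    oci.database.models.UpdateAutonomousDatabaseDetails(is_auto_scaling_enabled=True),   # False to disable
)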
To view OCPU allocation hourly snapshot data for an Autonomous Database
1. Open the navigation menu. Under Oracle Database, click Autonomous Data Warehouse, Autonomous JSON
Database, or Autonomous Transaction Processing.
2. Choose your Compartment.
3. In the list of Autonomous Databases, click the display name of the database for which you want to view CPU
usage data.
4. Click the Service Console button. The Service Console opens in a new tab or window.
5. In the Overview screen, the Number of OCPUs allocated graph shows hourly snapshot data of OCPU allocation
over the last eight days. Place your cursor over the graph and move it to the left or right to see data for a specific
day and hour.
Database Management Tasks

To set the Admin password


1. Open the navigation menu. Under Oracle Database, click Autonomous Data Warehouse, Autonomous JSON
Database, or Autonomous Transaction Processing.
2. Choose your Compartment.
3. In the list of Autonomous Databases, click the display name of the database you want to administer.
4. Go to More Actions, and then click Admin Password. The Admin Password dialog opens.
5. Enter a password for the Autonomous Database. The password must meet the following criteria:
• Contains from 12 to 30 characters
• Contains at least one lowercase letter
• Contains at least one uppercase letter
• Contains at least one number
• Does not contain the double quotation mark (")
• Does not contain the string "admin", regardless of case
• Is not one of the last four passwords used for the database
• Is not a password you previously set within the last 24 hours


6. Enter the password again in the Confirm Password field.


7. Click Update.
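The ADMIN password can also be set with the UpdateAutonomousDatabase API operation; a sketch using the OCI SDK for Python follows (the OCID and password are placeholders).

import oci

db_client = oci.database.DatabaseClient(oci.config.from_file())
db_client.update_autonomous_database(
    "ocid1.autonomousdatabase.oc1..example",            # placeholder OCID
    oci.database.models.UpdateAutonomousDatabaseDetails(
        admin_password="NewPassw0rd98765"               # must meet the criteria listed above
    ),
)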
To access the Autonomous Database service console for databases on shared Exadata infrastructure
1. Open the navigation menu. Under Oracle Database, click Autonomous Data Warehouse, Autonomous JSON
Database, or Autonomous Transaction Processing.
2. Choose your Compartment.
3. In the list of Autonomous Databases, click the display name of the database you want to administer.
4. Click Service Console.
For information on using the Autonomous Transaction Processing service console features, see Managing and
Monitoring Performance of Autonomous Transaction Processing. For information on using the Autonomous Data
Warehouse service console features, see Managing and Monitoring Performance of Autonomous Data Warehouse
Cloud.
To change the license type of an Autonomous Database
1. Open the navigation menu. Under Oracle Database, click Autonomous Data Warehouse, Autonomous JSON
Database, or Autonomous Transaction Processing.
2. Choose your Compartment.
3. In the list of Autonomous Databases, click the display name of the database you want to administer.
4. Go to More Actions, and then click Update License Type.
The dialog displays the options with your current license type selected.
5. Select the new license type.
6. Click Update.
See Known Issue.
To rename an Autonomous Database on shared Exadata infrastructure
Note:

Renaming a database changes the contents of the database wallet and requires
that you download a new wallet.
1. Open the navigation menu. Under Oracle Database, click Autonomous Data Warehouse, Autonomous JSON
Database, or Autonomous Transaction Processing.
2. Select the compartment, from the Compartment drop-down, that contains the database you want to rename.
3. From the list of Autonomous Databases contained in the compartment, click the display name of the database you
want to rename to display the Autonomous Database Details page for that database.
4. Click More Actions to display a list of actions.
5. Click Rename Database to display the Rename Database dialog.
6. Enter a database name that contains only letters and numbers, begins with a letter, and does not exceed 14
characters.
7. Enter the current database name to confirm the name change.
8. Click Rename Database.
To change the workload type of an Autonomous Database
You can change the workload type of an Autonomous Database from JSON or APEX to Transaction Processing if
you require additional capabilities offered by Autonomous Transaction Processing.
Note:

This change results in billing changes and cannot be reversed.


1. Open the navigation menu. Under Oracle Database, click Autonomous Data Warehouse, Autonomous JSON
Database, or Autonomous Transaction Processing.
2. Click Autonomous Database to display a list of Autonomous Databases of all workload types.


3. If you are not in the correct compartment, then select the compartment from the Compartment drop-down that
contains the database for which you want to change the workload type.
4. Click the display name of the Autonomous Database of either JSON or APEX workload type for which you want
to change the workload type to display the Autonomous Database Details page for that database.
5. Click More Actions to display a list of operations. Click Change Workload Type to display the Change
Workload Type to Transaction Processing confirmation dialog.
6. Click Convert.
To change the access mode of an Autonomous Database
You can select an operation mode for an Autonomous Database. The default mode is read/write, but you can select
read-only to limit users to querying the database only. In addition, for either of these modes you can restrict access so
that only a user with administrator privileges can access the database.
Note:
Changing access modes and permission levels does not apply to an Autonomous JSON Database.
1. Open the navigation menu. Under Oracle Database, click Autonomous Data Warehouse, Autonomous JSON
Database, or Autonomous Transaction Processing.
2. Choose your compartment from the Compartment drop-down.
3. In the list of Autonomous Databases, click the name of the database for which you want to change the access
mode to display the details page for that database.
Note:
The database you choose must be in the Available state to successfully change the access mode.
4. On the Autonomous Database Information tab, click the Edit link in the Mode field.
5. In the Edit Database Mode dialog, choose either Read/Write or Read-only, depending on the access mode you
want. By default, an Autonomous Database is provisioned in read/write mode.
You can also restrict access to the ADMIN user or users with administrator privileges by checking Allow
administrator access only. You can apply this restriction whether the database is in read/write or read-only
mode.
6. Click Confirm to apply the change.
Note:
• Changing the permission level requires that users and applications reestablish connections to the database.
• When the database is in read-only mode:
• You cannot change the ADMIN user password.
• You cannot upgrade the database.
To enable or disable Operations Insights on an Autonomous Database
Operations Insights is an Oracle Cloud Infrastructure service that provides analytics that you can use to monitor an
Autonomous Database on either dedicated or shared Exadata infrastructure.
Note:

Enabling Operations Insights for the first time takes a few minutes to
complete. Subsequent enabling operations take less time.
To enable Operations Insights:
1. Open the navigation menu. Under Oracle Database, click Autonomous Data Warehouse or Autonomous
Transaction Processing.


2. If necessary, choose the compartment from the Compartment drop-down that contains the Autonomous Database
you want to register with Operations Insights.
3. From the list of Autonomous Databases, click the display name of the database to display the details page for that
database.
4. In the Operations Insights section, click Enable or Disable to display a confirmation dialog.
Once enabled, click the View link in the Operations Insights section to display the various metrics.
5. Click Confirm.
Networking Management Tasks

To change the network access type of an Autonomous Database on shared Exadata infrastructure from public endpoint to
private endpoint
Autonomous Databases on shared Exadata infrastructure can use either of the following network access options:
• Virtual cloud network: This option uses a private endpoint within a VCN in your tenancy. The private endpoint
connects to the private endpoint of either an Autonomous Database with a private endpoint using dedicated Exadata
infrastructure or an Autonomous Database with a private endpoint on shared Exadata infrastructure.
• Allow secure access from everywhere: This option uses a public endpoint provided by Oracle.
This topic describes how to switch a database's network access to the Virtual cloud network option. See
Autonomous Database with Private Endpoint on page 1163 for more information on this option, including the
prerequisites needed to switch.
Note:
• Your Oracle Database version must be 19c or higher to perform an in-place switch of the network access type
after provisioning.
• To change the network access configuration from a public to a private endpoint, the Autonomous Database must
have the Available lifecycle state.
1. Open the navigation menu. Under Oracle Database, click Autonomous Data Warehouse, Autonomous JSON
Database, or Autonomous Transaction Processing.
2. Choose your Compartment.
3. In the list of Autonomous Databases, click the display name of the database you want to administer.
4. Go to More Actions, and then click Update Network Access.
5. In the Update Network Access dialog, select Virtual cloud network.
Note:
If Data Safe is enabled:
a. Ensure that you have created a private endpoint in the selected VCN for the Autonomous Database with a
private endpoint.
b. Ensure that you have configured the security rules to allow traffic from Data Safe to the Autonomous Database.
For more information, see the Data Safe documentation.


6. Specify the following information:


• Virtual cloud network: The VCN in which to launch the Autonomous Database. Click Change
Compartment to select a VCN in a different compartment.
• Subnet: The subnet to which the Autonomous Database should attach. Click change compartment to select a
subnet in a different compartment. Oracle recommends that you specify a private subnet.
• Hostname prefix: Optional. This specifies a host name prefix for the Autonomous Database and associates a
DNS name with the database instance, in the following form:

hostname_prefix.adb.region.oraclecloud.com

The host name must begin with an alphabetic character, and can contain only alphanumeric characters and
hyphens (-). You can use up to 63 alphanumeric characters for your hostname prefix.
If you choose not to specify a hostname prefix, Oracle creates a unique DNS name for your database.
• Network security groups: You must specify at least one network security group (NSG) for your
Autonomous Database. NSGs function as virtual firewalls, allowing you to apply a set of ingress and egress
security rules to your database. A maximum of five NSGs can be specified.
For more information on creating and working with NSGs, see Network Security Groups on page 2867.
Note that if you choose a subnet with a security list, the security rules for the database will be a union of the
rules in the security list and the NSGs.
7. Click Update.
8. In the Confirm dialog, type the Autonomous Database name to confirm the change.
Click Update.
Notes:
• After updating the network access type, all database users must obtain a new wallet and use the new wallet to
access the database. See About Downloading Client Credentials (Wallets) for more information.
• If you had access control list (ACL) rules defined for the public endpoint, the rules do not apply for the private
endpoint.
To change the network access of an Autonomous Database on shared Exadata infrastructure from private endpoint to public
endpoint
Autonomous Databases on shared Exadata infrastructure can use either of the following network access options:
• Virtual cloud network: This option uses a private endpoint within a VCN in your tenancy.
• Allow secure access from everywhere: This option uses a public endpoint provided by Oracle.
This topic describes how to switch a database's network access to the Allow secure access from everywhere option.
Note:
• Your Oracle Database version must be 19c or higher to perform an in-place switch of the network access type
after provisioning.
• To change the network access configuration from a private to a public endpoint, the Autonomous Database must
have the Available lifecycle state.
1. Open the navigation menu. Under Oracle Database, click Autonomous Data Warehouse, Autonomous JSON
Database, or Autonomous Transaction Processing.
2. Choose your Compartment.
3. In the list of Autonomous Databases, click the display name of the database you want to administer.
4. Go to More Actions, and then click Update Network Access.
5. In the Update Network Access dialog, select Allow secure access from everywhere.
6. Click Update.
7. In the Confirm dialog, type the Autonomous Database name to confirm the change.


8. Click Update.
Notes:
• After updating the network access type, all database users must obtain a new wallet and use the new wallet to
access the database. See About Downloading Client Credentials (Wallets) for more information.
• After the update completes, you can define access control rules for the public endpoint by specifying ACLs.
See To manage the access control list of an Autonomous Database on shared Exadata infrastructure for more
information. See Access Control Lists (ACLs) for Databases on Shared Exadata Infrastructure on page 1154 for
details and restrictions regarding ACLs.
To manage the access control list of an Autonomous Database on shared Exadata infrastructure
This task applies to Autonomous Databases using the Allow secure access from everywhere network access option.
This option uses a public endpoint provided by Oracle.
An access control list (ACL) provides additional protection for your Autonomous Database by allowing only the IP
addresses in the list to connect to the database. An ACL must contain at least one entry representing an IP address or
a range of addresses. To create or edit an ACL for an existing database that uses shared Exadata infrastructure, do the
following:
1. Open the navigation menu. Under Oracle Database, click Autonomous Data Warehouse, Autonomous JSON
Database, or Autonomous Transaction Processing.
2. Choose your Compartment.
3. In the list of Autonomous Databases, click the display name of the database you want to administer.
4. Under Network in the database details, find the Access Control List field and click add (if no ACL currently
exists) or edit (to update an existing ACL).
5. In the Access Control List dialog, add or modify entries, as applicable.
Note:

If you are editing an ACL, the ACL's existing entries display in the Access
Control List dialog. Do not overwrite the existing values unless you intend
to replace one or more entries. To add new ACL entries, click + Another
Entry.
You can specify the following types of addresses in your list by using the IP notation type drop-down selector:
• IP Address allows you to specify one or more individual public IP addresses. Use commas to separate your
addresses in the input field.
• CIDR Block allows you to specify one or more ranges of public IP addresses using CIDR notation. Use
commas to separate your CIDR block entries in the input field.
• Virtual Cloud Network allows you to specify an existing VCN. The drop-down listing in the input field
allows you to choose from the VCNs in your current compartment for which you have access permissions.
Click the Change Compartment link to display the VCNs of a different compartment.
• Virtual Cloud Network (OCID) allows you to input the OCID of a VCN in a text box. You can use this input
method if the VCN you are specifying is in a compartment which you do not have permission to access.
Caution:
If you want to specify multiple IP addresses or CIDR ranges within the same VCN, do not create multiple ACL
entries. Use one ACL entry with the values for the multiple IP addresses or CIDR ranges separated by commas.
If you add a Virtual Cloud Network to your ACL, you can further limit access by specifying allowed VCN IP
addresses or CIDR ranges. Enter those addresses or CIDR blocks in the IP Addresses or CIDRs field that is displayed
below your Virtual Cloud Network choice. Use commas to separate your VCN addresses and CIDR blocks in
the input field. You can specify the following types of IP addresses at the VCN level:
• Private IP addresses within your Oracle Cloud Infrastructure VCN
• Private IP addresses within an on-premises network that have access to your Autonomous Database using
transit routing and a private connection via FastConnect or VPN Connect.
Click + Another Entry to add additional access rules to your list.
Important:

If you are using a service gateway, ensure that the CIDR range 240.0.0.0/4
is included in the list to allow clients accessing the database through the
service gateway to connect to it.
To remove the ACL, simply delete all entries in the list. This action allows all clients to connect to the database.
6. Click Update.
If the Lifecycle State is Available when you click Update, the Lifecycle State changes to Updating until the ACL
update is complete. The database is still up and accessible, there is no downtime. When the update is complete the
Lifecycle State returns to Available and the network ACL rules from the access control list are in effect.
For more information about access control lists, see Security Considerations on page 1153.
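The access control list can also be managed with the UpdateAutonomousDatabase API operation through its whitelistedIps attribute. The sketch below uses the OCI SDK for Python; the OCIDs and addresses are placeholders, and the exact string format shown for VCN entries is an assumption, so check the API reference for your SDK version.

import oci

db_client = oci.database.DatabaseClient(oci.config.from_file())

acl = [
    "203.0.113.10",                           # a single public IP address
    "198.51.100.0/24",                        # a CIDR block
    "ocid1.vcn.oc1..example",                 # placeholder VCN OCID
    "ocid1.vcn.oc1..example;10.0.0.0/16",     # assumed format: VCN OCID limited to specific CIDRs
]

db_client.update_autonomous_database(
    "ocid1.autonomousdatabase.oc1..example",  # placeholder database OCID
    oci.database.models.UpdateAutonomousDatabaseDetails(whitelisted_ips=acl),
)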
To update the network configuration of an Autonomous Database on shared Exadata infrastructure that uses a private
endpoint
1. Open the navigation menu. Under Oracle Database, click Autonomous Data Warehouse, Autonomous JSON
Database, or Autonomous Transaction Processing.
2. Choose your Compartment.
3. In the list of Autonomous Databases, click the display name of the database you want to administer. The
Autonomous Database Details page displays.
4. In the Autonomous Database Information tab, the Network section displays the names of your database's
virtual cloud network (VCN), subnet, and network security groups (NSGs). The following networking
configuration changes are possible:
• Clicking the names of your VCN and subnet will take you to the resource details pages of those resources.
While you can edit the configurations of those resources, you cannot assign a different VCN or subnet to your
Autonomous Database.
• You can add or remove NSGs by clicking the edit link by the listed NSGs in the Network section on the
Autonomous Database Details page. Note that your specified NSGs must contain stateless security rules.
• You can edit the security rules of any of your Autonomous Database's NSGs by clicking the name of the NSG.
Clicking the name takes you to the Network Security Group Details page of the NSG.
Management Tasks for the Oracle Cloud Infrastructure Platform

To view a work request for your Autonomous Database


1. Open the navigation menu. Under Oracle Database, click Autonomous Data Warehouse, Autonomous JSON
Database, or Autonomous Transaction Processing.
2. Choose your Compartment.
3. In the list of Autonomous Databases, click the display name of the database you want to administer.
4. In the Resources section, click Work Requests. The status of all work requests appears on the page.
5. To see the log messages, error messages, and resources that are associated with a specific work request, click the
operation name. Then, select an option in the More information section.
For associated resources, you can click the Actions icon (three dots) next to a resource to copy the resource's
OCID.
For more information, see Work Requests on page 262.
To move an Autonomous Database to another compartment


Note:
• To move resources between compartments, resource users must have sufficient access permissions on the
compartment that the resource is being moved to, as well as the current compartment. For more information
about permissions for Database resources, see Details for the Database Service on page 2251.
• Security zone considerations:
• If your Autonomous Database is in a security zone, the destination compartment must also be in a security
zone.
• If your Autonomous Database uses a public endpoint, you cannot move it to a security zone compartment
unless you first switch the networking configuration to use a private endpoint.
• If your Autonomous Database is not in a security zone and has a Data Guard standby database, you cannot
move the database into a security zone compartment while the standby remains in a compartment that is not
in a security zone.
See the Security Zone Policies topic for a full list of policies that affect Database service resources.
1. Open the navigation menu. Under Oracle Database, click Autonomous Data Warehouse, Autonomous JSON
Database, or Autonomous Transaction Processing.
2. Choose your Compartment.
3. In the list of Autonomous Databases, click the display name of the database you want to move.
4. Go to More Actions, and then click Move Resource.
5. Select the new compartment.
6. Click Move Resource.
For information about dependent resources for Database resources, see Moving Database Resources to a Different
Compartment on page 1141.
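The ChangeAutonomousDatabaseCompartment API operation performs the same move. A minimal sketch using the OCI SDK for Python is shown below; both OCIDs are placeholders.

import oci

db_client = oci.database.DatabaseClient(oci.config.from_file())

db_client.change_autonomous_database_compartment(
    change_compartment_details=oci.database.models.ChangeCompartmentDetails(
        compartment_id="ocid1.compartment.oc1..targetexample"          # placeholder destination compartment OCID
    ),
    autonomous_database_id="ocid1.autonomousdatabase.oc1..example",    # placeholder database OCID
)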
To manage tags for your Autonomous Database
1. Open the navigation menu. Under Oracle Database, click Autonomous Data Warehouse, Autonomous JSON
Database, or Autonomous Transaction Processing.
2. Choose your Compartment.
3. In the list of Autonomous Databases, click the display name of the database you want to administer.
4. Go to More Actions, and then click Apply Tag(s) to add new tags. Or click the Tags tab to view or edit the
existing tags.
For more information, see Resource Tags on page 213.
To upgrade an Always Free Autonomous Database to a paid instance
If you are using a paid account, you can upgrade an Always Free Autonomous Database to a paid instance. Paid
instances include the ability to scale the OCPU count and storage.
1. Open the navigation menu. Under Oracle Database, click Autonomous Data Warehouse, Autonomous JSON
Database, or Autonomous Transaction Processing.
2. Choose your Compartment.
3. In the list of Autonomous Databases, click the display name of the database you want to upgrade.
4. Go to More Actions, and then click Upgrade Instance to Paid.
5. In the Confirm Upgrade dialog, confirm that you want to upgrade the instance by clicking the Upgrade Instance
to Paid button.
The lifecycle state of the Autonomous Database changes to Updating while your upgrade is in progress. Your
database remains online and accessible while the upgrade is in progress. After your instance is upgraded to paid,
you can provision a new Always Free Autonomous Database in place of your upgraded instance.

Security Management Tasks


For information on network security management tasks, see Networking Management Tasks on page 1175.
To register or deregister an Autonomous Database with Data Safe
To use Oracle Data Safe with an Autonomous Database, you register your database with Data Safe. To
discontinue using Data Safe, you deregister the database. For information about using Data Safe, see the Data Safe
documentation.

Prerequisites for Autonomous Databases with private endpoints


• Data Safe must be enabled in the region containing the Autonomous Database. For information about creating and
using Data Safe instances, see the Data Safe documentation.
• Ensure that the Data Safe instance has a private endpoint. Instructions for creating a Data Safe private endpoint
are located in the Register DB Systems that have Private IP Addresses topic in this manual.
• Enable communication from the Data Safe private endpoint to your database. To do this, update the security list
or network security group (NSG) rules in the database's VCN to allow the Data Safe private endpoint to access
the database. See the Access and Security section in this manual for information about network security groups,
security rules, and security lists. See To update the network configuration of an Autonomous Database on shared
Exadata infrastructure that uses a private endpoint on page 1178 for information on updating NSG rules for
Autonomous Databases running on shared Exadata infrastructure. See To edit the network security groups
(NSGs) for your Autonomous Exadata Infrastructure resource on page 1205 for information on updating NSG
rules for Autonomous Databases running on dedicated Exadata infrastructure.
Note:

If a Data Safe private endpoint does not exist in the same VCN used by the
target Autonomous Database, the Data Safe registration will fail. For more
information, see the Data Safe documentation.
Note:

Additional procedures are required when Database Vault is enabled on the
database you are registering or deregistering. See the information at the end
of the following procedure.

Registering and deregistering an Autonomous Database


1. Open the navigation menu. Under Oracle Database, click Autonomous Data Warehouse, Autonomous JSON
Database, or Autonomous Transaction Processing.
Note:

The Data Discovery and Data Masking features are not supported for
JSON type columns. See Target Database Registration Overview for
more information.
2. Choose your Compartment.
3. In the list of Autonomous Databases, click the display name of the database you want to manage.
4. In the Details page, under Data Safe, click register or deregister depending on the status of the database.
• If the database uses dedicated Exadata infrastructure, enter the ADMIN password and click Confirm to start
the registration or deregistration process. The ADMIN password is needed to unlock Data Safe so that it can
monitor the database.
• If the database uses Shared Exadata infrastructure, click Confirm to start the registration or deregistration
process.
If the registration fails and the database is using a private endpoint, ensure that the prerequisites listed in this topic
for allowing network access have been met.

5. You can optionally use either or both of the following to monitor the registration and deregistration process:
• Use the work request created by the system to monitor the progress of the registration or deregistration.
• Click View Console to display the Data Safe user interface for the registered database.

Registering and deregistering an Autonomous Database when Database Vault is enabled


When Database Vault is enabled on the database you are registering or deregistering, complete the following
additional steps to register or deregister the database. These are needed to accommodate the additional database
security provided by Database Vault.
• The Database Vault account manager must first grant specific access rights to the ADMIN database user before
they can register or deregister an Autonomous Database. Otherwise, an "insufficient privileges" error is displayed.
• After the database has been registered or deregistered, the Database Vault account manager can remove the access
rights to limit access during normal operations.
• After the database registration is complete, the Database Vault owner must grant specific access rights to the
DS$ADMIN database user to enable normal operations. See the Data Safe documentation to learn about the access
rights needed.
For more information on using Database Vault with an Autonomous Database, see the following user guides:
• Using Oracle Autonomous Transaction Processing on Shared Exadata Infrastructure
• Using Oracle Autonomous Transaction Processing on Dedicated Exadata Infrastructure
• Using Oracle Autonomous Data Warehouse on Shared Exadata Infrastructure
• Using Oracle Autonomous Data Warehouse on Dedicated Exadata Infrastructure
To rotate the encryption key for an Autonomous Database
Rotating the encryption key creates a new version of the vault key that replaces the current version of the vault key.
1. Open the navigation menu. Under Oracle Database, click Autonomous Data Warehouse, Autonomous JSON
Database, or Autonomous Transaction Processing.
2. Choose your compartment from the Compartment drop-down.
3. Click the display name of the Autonomous Database for which you want to rotate the encryption key to display
the details page for that database.
Note:

The Autonomous Database you choose must be on dedicated Exadata
infrastructure and in an Available state.
4. Click the More Actions drop-down.
5. Click Rotate Encryption Key to display a confirmation dialog.
6. Click Rotate Key.
Using the API
For information about using the API and signing requests, see REST APIs on page 4409 and Security Credentials on
page 181. For information about SDKs, see Software Development Kits and Command Line Interface on page 4262.
Use these API operations to manage Autonomous Databases:
• ListAutonomousDatabases
• GetAutonomousDatabase
• UpdateAutonomousDatabase
• ChangeAutonomousDatabaseCompartment
• StartAutonomousDatabase
• RestartAutonomousDatabase
• StopAutonomousDatabase
• DeleteAutonomousDatabase
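
The following minimal Python sketch exercises a few of these operations with the OCI SDK for Python; the compartment and database OCIDs are placeholders.

import oci

config = oci.config.from_file()
db_client = oci.database.DatabaseClient(config)

compartment_id = "ocid1.compartment.oc1..exampleuniqueid"  # placeholder

# ListAutonomousDatabases: show all Autonomous Databases in the compartment.
for adb in db_client.list_autonomous_databases(compartment_id).data:
    print(adb.display_name, adb.lifecycle_state)

# GetAutonomousDatabase: fetch the details of one database (placeholder OCID).
adb = db_client.get_autonomous_database(
    "ocid1.autonomousdatabase.oc1..exampledb"
).data
print(adb.db_name, adb.cpu_core_count, adb.data_storage_size_in_tbs)

# StopAutonomousDatabase: stop that database.
db_client.stop_autonomous_database(adb.id)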

For More Information


• Using Oracle Autonomous Database on Shared Exadata Infrastructure (full product documentation)
• Using Oracle Autonomous Database on Dedicated Exadata Infrastructure (full product documentation)

Connecting to an Autonomous Database


This topic describes the following actions related to connecting client applications to an Autonomous Database:
• Obtaining the credentials and information (wallet) you need to create a connection (applies to both shared
Exadata infrastructure and dedicated Exadata infrastructure)
• Rotating the keys and credentials (wallet) needed for a connection (applies to shared Exadata infrastructure only)
• Obtaining access URLs for Oracle Application Express (APEX) and Oracle Database Actions
Tip:

For more information on connecting a client to an Autonomous Database on
dedicated Exadata infrastructure, see Connecting to Autonomous Database.
About Connecting to Autonomous Databases
Applications and tools connect to Autonomous Databases by using Oracle Net Services (also known as SQL*Net).
SQL*Net supports a variety of connection types to Autonomous Databases, including Oracle Call Interface (OCI),
ODBC drivers, JDBC OCI, and JDBC Thin Driver.
To support connections of any type, you must download the client security credentials and network configuration
settings required to access your database. You must also supply the applicable TNS names or connection strings for a
connection, depending on the client application or tool, type of connection, and service level. You can view or copy
the TNS names and connection strings in the DB Connection dialog for your Autonomous Database. For detailed
information about the TNS names, see Predefined Database Service Names for Autonomous Transaction Processing
and Predefined Database Service Names for Autonomous Data Warehouse.
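
As an illustration of the client-side pieces described above, the following minimal Python sketch connects with cx_Oracle using a downloaded wallet and one of the predefined TNS names. The wallet directory, TNS alias, and password are placeholders, and the sketch assumes the sqlnet.ora in the wallet directory has been pointed at the wallet location, as described in the client preparation topics referenced below.

import os
import cx_Oracle

# Directory containing the extracted wallet files (sqlnet.ora, tnsnames.ora, cwallet.sso).
os.environ["TNS_ADMIN"] = "/opt/wallet"  # placeholder path

# "mydb_high" is a placeholder for one of the predefined TNS names in tnsnames.ora.
connection = cx_Oracle.connect(
    user="ADMIN", password="<admin_password>", dsn="mydb_high"
)
with connection.cursor() as cursor:
    cursor.execute("SELECT sysdate FROM dual")
    print(cursor.fetchone())
connection.close()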
Connecting to an Autonomous Database
You can connect to an Autonomous Database that uses shared Exadata infrastructure from a VCN with either a public
or private endpoint.
To connect to Autonomous Databases that use a public endpoint from a VCN, the VCN must be configured with one
of the following gateways:
• internet gateway: For access from a public subnet in the VCN
• service gateway: For access from a private subnet in the VCN
Make sure to configure the subnet's route table with a rule that sends the desired traffic to the specific gateway. Also
configure the subnet's security lists to allow the desired traffic.
You can also connect to your database from private IP addresses in your on-premises network by using transit routing
with an Oracle Cloud Infrastructure VCN. This allows traffic to move directly from your on-premises network to your
Autonomous Database without going over the internet. See Transit Routing: Private Access to Oracle Services on
page 2829 for more information on this method of access.
To connect to Autonomous Databases that use a private endpoint from a VCN, you must configure a security rule
within one of the database's network security groups (NSGs) to allow access to the Autonomous Database endpoint.
For more information on private endpoint network configuration, see Networking Prerequisites Needed for Private
Endpoint.
About Downloading Client Credentials
The client credentials .zip that you download contains the following files:
• cwallet.sso - Oracle auto-login wallet
• ewallet.p12 - PKCS #12 wallet file associated with the auto-login wallet
• sqlnet.ora - SQL*Net profile configuration file that includes the wallet location and TNSNAMES naming method

• tnsnames.ora - SQL*Net configuration file that contains network service names mapped to connect descriptors for
the local naming method
• Java Key Store (JKS) files - Key store files for use with JDBC Thin Connections
Important:

Wallet files, along with the database user ID and password, provide access
to data in your Autonomous Database. Store wallet files in a secure location.
Share wallet files only with authorized users. If wallet files are transmitted in
a way that might be accessed by unauthorized users (for example, over public
email), transmit the wallet password separately and securely.
For Autonomous Databases on shared Exadata infrastructure, you have the choice of downloading an instance
wallet file or a regional wallet file. The instance wallet contains only credentials and keys for a single Autonomous
Database. The regional wallet contains credentials and keys for all Autonomous Databases in a specified region. For
security purposes, Oracle recommends that regional wallets be used only by database administrators, and that instance
wallets be supplied to other users whenever possible.
For Autonomous Databases on dedicated Exadata infrastructure, the wallet file contains only credentials and keys for
a single Autonomous Database.
About Rotating Your Autonomous Database Wallet
For Autonomous Databases on shared Exadata infrastructure, you can rotate an instance or regional wallet for security
purposes. When your wallet rotation is complete, you will have a new set of certificate keys and credentials, and the
old wallet's keys and credentials will be invalid. Rotating an instance wallet does not invalidate the regional wallet
that covers the same database instance. Rotating a regional wallet affects all databases in the specified region. User
session termination begins after wallet rotation completes; however, this process does not happen immediately.
Important:

If you are rotating a wallet to address a security breach and need to
reestablish all database connections immediately using the keys and
credentials of your newly rotated wallet, stop and restart the database
instance.
Before You Begin
The Autonomous Database is preconfigured to support Oracle Net Services (a TNS listener is installed and
configured to use secure TCPS and client credentials). The client computer must be prepared to use Oracle Net
Services to connect to the Autonomous Database. Preparing your client includes downloading the client credentials.
See the following links for steps you might have to perform before you access the client credentials and connection
information for your Autonomous Database:
Shared Exadata Infrastructure
Preparing for Oracle Call Interface (OCI), ODBC, and JDBC OCI Connections
Dedicated Exadata infrastructure
• Preparing for Oracle Call Interface (OCI), ODBC, and JDBC OCI Connections
• Preparing for JDBC Thin Connections
Using the Oracle Cloud Infrastructure Console
To download a wallet for an Autonomous Database on shared Exadata infrastructure
1. Open the navigation menu. Under Oracle Database, click Autonomous Data Warehouse, Autonomous JSON
Database, or Autonomous Transaction Processing.
2. Choose your Compartment.
3. In the list of Autonomous Databases, click on the display name of the database you are interested in.
4. Click DB Connection.
5. In the Download Client Credentials (Wallet) section, select the Wallet Type. You can choose to download an
instance wallet or a regional wallet.

6. To obtain the client credentials, click Download Wallet.


You will be prompted to provide a password to encrypt the keys inside the wallet. The password must be at least 8
characters long and must include at least 1 letter and either 1 numeric character or 1 special character.
Save the client credentials zip file to a secure location. See About Downloading Client Credentials on page 1182
for information about the files included in the download.
7. Take note of or copy the TNS names or connection strings you need for your connection. See About Connecting
to Autonomous Databases on page 1182 for information about making connections.
To download a wallet for an Autonomous Database on dedicated Exadata infrastructure
1. Open the navigation menu. Under Oracle Database, click Autonomous Data Warehouse, Autonomous JSON
Database, or Autonomous Transaction Processing.
2. Choose your Compartment.
3. In the list of Autonomous Databases, click on the display name of the database you are interested in.
4. Click DB Connection.
5. Select the DB Connection option.
6. Click the DB Connection tab.
7. To obtain the client credentials, click Download.
You will be prompted to provide a password to encrypt the keys inside the wallet. The password must be at least 8
characters long and must include at least 1 letter and either 1 numeric character or 1 special character.
Save the client credentials zip file to a secure location. See About Downloading Client Credentials on page 1182
for information about the files included in the download.
8. Take note of or copy the TNS names or connection strings you need for your connection. See About Connecting
to Autonomous Databases on page 1182 for information about making connections.
WHAT NEXT?
Connect to the database - Data Warehouse | Transaction Processing
To rotate the wallet of an Autonomous Database on shared Exadata infrastructure
1. Open the navigation menu. Under Oracle Database, click Autonomous Data Warehouse, Autonomous JSON
Database, or Autonomous Transaction Processing.
2. Choose your Compartment.
3. In the list of Autonomous Databases, click on the display name of the database you are interested in.
4. Click DB Connection.
5. In the Download Client Credentials (Wallet) section, select the Wallet Type. You can choose to rotate an
instance wallet or a regional wallet.
6. Click Rotate Wallet. A confirmation dialog will prompt you to enter the database name to confirm the rotation.
7. Enter the name of the database, then click Rotate Wallet.
The rotation takes a few minutes to complete.
To obtain access URLs for Oracle Application Express (APEX) and Oracle SQL Developer Web for an
Autonomous Database on dedicated Exadata infrastructure
1. Open the navigation menu. Under Oracle Database, click Autonomous Data Warehouse, Autonomous JSON
Database, or Autonomous Transaction Processing.
2. Choose your Compartment.
3. In the list of Autonomous Databases, click on the display name of the database you are interested in.
4. Select the Application Connection option.
5. Application URLs are displayed in plain text in the Application URL field. Copy the URL string using the Copy
link.
6. Paste the URL into a browser running on a Compute instance that is inside of the VCN of the Autonomous
Database. Alternately, you can use the URL with a compute instance that has a direct connection to the VCN of
the Autonomous Database.

Using the API


Use the GenerateAutonomousDatabaseWallet API operation to download the client credentials for your Autonomous
Database.
Use the UpdateAutonomousDatabaseWalletDetails API operation to rotate the wallet for your Autonomous Database.
Use the AutonomousDatabase API operation to get the access URLs for Application Express (APEX) and SQL
Developer Web.
For information about using the API and signing requests, see REST APIs on page 4409 and Security Credentials on
page 181. For information about SDKs, see Software Development Kits and Command Line Interface on page 4262.
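
For example, the following minimal Python sketch downloads an instance wallet with the GenerateAutonomousDatabaseWallet operation through the OCI SDK for Python; the database OCID and wallet password are placeholders.

import oci

config = oci.config.from_file()
db_client = oci.database.DatabaseClient(config)

# Password used to encrypt the keys inside the wallet (placeholder).
wallet_details = oci.database.models.GenerateAutonomousDatabaseWalletDetails(
    password="<wallet_password>"
)
response = db_client.generate_autonomous_database_wallet(
    "ocid1.autonomousdatabase.oc1..exampledb", wallet_details
)

# The response body is the wallet zip archive; save it to a secure location.
with open("wallet.zip", "wb") as f:
    f.write(response.data.content)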

Backing Up an Autonomous Database Manually


This topic describes how to create manual backups of Autonomous Databases. You can use the Oracle Cloud
Infrastructure Console or the API to perform these tasks.
Oracle Cloud Infrastructure automatically backs up your Autonomous Databases and retains these backups for
60 days. Automatic backups are weekly full backups and daily incremental backups. You can also create manual
backups to supplement your automatic backups. Manual backups are stored in an Object Storage bucket that you
create, and are retained for 60 days.
Note:

During the backup operation, the database remains available. However,
lifecycle management operations such as stopping the database, scaling it, or
terminating it are disabled.
Prerequisites
• To create or manage Autonomous Database backups, you must be given the required type of access in a policy
written by an administrator, whether you're using the Console or the REST API with an SDK, CLI, or other tool.
If you try to perform an action and get a message that you don’t have permission or are unauthorized, confirm
with your administrator the type of access you've been granted and which compartment you should work in. See
Authentication and Authorization for more information on user authorizations for the Oracle Cloud Infrastructure
Database service. See Let database and fleet admins manage Autonomous Databases on page 2159 for sample
Autonomous Database policies. See Details for the Database Service on page 2251 for detailed information on
policy syntax.
• To create a manual backup for an Autonomous Database, you must first configure an Object Storage bucket
to serve as a destination for your manual backups. See Setting Up a Bucket to Store Manual Backups for
instructions.
Using the Console
1. Open the navigation menu. Under Oracle Database, click Autonomous Data Warehouse, Autonomous JSON
Database, or Autonomous Transaction Processing.
2. Choose your Compartment.
3. In the list of Autonomous Databases, find the database that you want to back up.
4. Click the name of the Autonomous Database to display the Autonomous Database details.
5. In the database details page, click Create Manual Backup.
The Create Manual Backup dialog box is displayed.
a. If you have properly configured your database for manual backups, continue to step 6.
b. If the database is not properly configured for manual backups, a message box informs you that you must first
configure the database before you can create a manual backup. See Setting Up a Bucket to Store Manual
Backups for instructions.
c. Click Close, configure your database according to the instructions, and click Create Manual Backup again.
6. In the Create Manual Backup dialog box, enter a name for your backup. Avoid entering confidential
information.

7. In the Create Manual Backup dialog box, click Create.


Note:

Your backup may take several hours to complete, depending on the size of
your database.
8. Optionally, you can check the state of your backup in the list of backups on the database details page. For some
states, an information icon is displayed to provide additional details regarding the state or ongoing operations
like deletions. The backup has one of the following states:
• Creating
• Active
• Deleting
• Deleted
• Failed
Using the API
For information about using the API and signing requests, see REST APIs on page 4409 and Security Credentials on
page 181. For information about SDKs, see Software Development Kits and Command Line Interface on page 4262.
Use these API operations to manage Autonomous Database backups:
• ListAutonomousDatabaseBackups
• GetAutonomousDatabaseBackup
• CreateAutonomousDatabaseBackup
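
For example, the following minimal Python sketch creates a manual backup with the CreateAutonomousDatabaseBackup operation through the OCI SDK for Python. It assumes the Object Storage bucket and credential described in the next section are already configured; the OCID and display name are placeholders.

import oci

config = oci.config.from_file()
db_client = oci.database.DatabaseClient(config)

backup_details = oci.database.models.CreateAutonomousDatabaseBackupDetails(
    display_name="manual-backup-1",  # placeholder name; avoid confidential information
    autonomous_database_id="ocid1.autonomousdatabase.oc1..exampledb",
)
backup = db_client.create_autonomous_database_backup(backup_details).data
print(backup.lifecycle_state)  # typically Creating until the backup completes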
Setting Up a Bucket to Store Manual Backups
You must create an Oracle Cloud Infrastructure Object Storage bucket to hold your Autonomous Database manual
backups and configure your database to connect to it. This is a one-time operation.

To set up an object store and user credentials for your manual backups
Some of the steps in this procedure require you to connect to the database by using an Oracle Database client such
as SQL Developer. See Connecting with Oracle SQL Developer (18.2 or later) for information and instructions on
connecting to an Autonomous Transaction Processing database. See Connecting with Oracle SQL Developer (18.2 or
later) for information and instructions on connecting to an Autonomous Data Warehouse database.
1. If you have not already done so, generate an auth token for the Oracle Cloud Infrastructure Object Storage user to
access the bucket you create in the next step. See To create an auth token on page 2482 to learn how to do this.
(You will need this auth token for the database credential you create in step 4.)
2. In the Oracle Cloud Infrastructure Console, create a bucket in your designated Object Storage Swift compartment
to hold the backups.
For example, if your backup bucket is named "backup_database1", the URL would be the one shown in the example in step 3.
Note:

When you create your bucket:


• Pick Standard as the storage tier. Manual backups are only supported
with buckets created in the standard storage tier.
• Ensure that you use the database name, and not the display name, as the
bucket name.

3. Using an Oracle Database client, log in to the database as the administrator and set the database
DEFAULT_BACKUP_BUCKET property to the URL of your Object Storage bucket. The format of the tenancy
URL is https://swiftobjectstorage.region.oraclecloud.com/v1/object_storage_namespace/bucket_name.
For example:

ALTER DATABASE PROPERTY SET default_backup_bucket='https://swiftobjectstorage.us-phoenix-1.oraclecloud.com/v1/ansh8lvru1zp/backup_database1';

In the example, the Object Storage namespace is ansh8lvru1zp.

Tip:

• To determine the Object Storage namespace string to use, click View
Bucket in the actions (three dots) menu of the bucket you created in the
previous step.
• Do not include the bucket name in the URL.
• Ensure that you follow the format indicated. Do not use a pre-
authenticated request URL.
4. With the tenancy user and the auth token referenced in step 1, create the credential for your Oracle Cloud
Infrastructure Object Storage account. Use DBMS_CLOUD.CREATE_CREDENTIAL to create the credential.
For example:

BEGIN
DBMS_CLOUD.CREATE_CREDENTIAL(
credential_name => 'DEF_CRED_NAME',
username => '<username>',
password => '<auth_token>'
);
END;
/

For more information on creating this credential, see CREATE_CREDENTIAL Procedure.


5. Set the database property default_credential to the credential you created in the previous step.
For example:

ALTER DATABASE PROPERTY SET DEFAULT_CREDENTIAL = 'ADMIN.DEF_CRED_NAME';

To list the current value for the default bucket, run the following command:

SELECT PROPERTY_VALUE FROM database_properties WHERE PROPERTY_NAME='DEFAULT_BUCKET';

After completing these steps you can take manual backups any time you want.

Manual Backup Notes


Database configuration notes:
• If you previously configured manual backups using the DEFAULT_BUCKET property, you do not need
to make any changes to perform manual backups with your existing configuration. In this case, the
DEFAULT_BUCKET property is set to the value of the Oracle Cloud Infrastructure Object Storage tenancy URL
and the required bucket name is in the format of backup_databasename, where databasename is lowercase.
However, Oracle recommends that you configure Autonomous Database for manual backups using the
DEFAULT_BACKUP_BUCKET database property.
• If you previously configured Autonomous Database to use manual backups using the DEFAULT_BUCKET
property and created backups, then after configuring the DEFAULT_BACKUP_BUCKET property to use a new
manual backup bucket, the old manual backups in the old bucket are not available for restore. If you want to use
the old backups then you must change the value of the DEFAULT_BACKUP_BUCKET property to specify the
URL of the old manual backup bucket.
• If you previously configured Autonomous Database to use manual backups and you rename your Autonomous
Database, then your backups will continue to work without changes.
Manual backup notes:
• Each manual backup creates a full backup on your Oracle Cloud Infrastructure Object Storage bucket and the
backup can only be used by the Autonomous Database instance when you initiate a point-in-time-recovery.
• The retention period for manual backups is the same as for automatic backups, which is 60 days.
• While backing up a database, the database is fully functional. However, during the backup, lifecycle
management operations, such as stopping the database, are not allowed.

Restoring an Autonomous Database

This topic describes how to restore an Autonomous Database from a backup. You can use the Oracle Cloud
Infrastructure Console or the API to perform this task.
You can use any existing manual or automatic backup to restore your database, or you can restore and recover your
database to any point in time in the 60-day retention period of your automatic backups. For point-in-time restores, you
specify a timestamp, and your Autonomous Database decides which backup to use for the fastest restore.
Note:

Restoring Autonomous Database puts the database in the unavailable state
during the restore operation. You cannot connect to a database in that state.
The only lifecycle management operation supported in the unavailable state is
terminate.
Prerequisites
To restore Autonomous Databases, you must be given the required type of access in a policy written by an
administrator, whether you're using the Console or the REST API with an SDK, CLI, or other tool. If you try
to perform an action and get a message that you don’t have permission or are unauthorized, confirm with your
administrator the type of access you've been granted and which compartment you should work in. See Authentication
and Authorization for more information on user authorizations for the Oracle Cloud Infrastructure Database service.
See Let database and fleet admins manage Autonomous Databases on page 2159 for sample Autonomous Database
policies. See Details for the Database Service on page 2251 for detailed information on policy syntax.
Using the Oracle Cloud Infrastructure Console
To restore an Autonomous Database from a backup
1. Open the navigation menu. Under Oracle Database, click Autonomous Data Warehouse, Autonomous JSON
Database, or Autonomous Transaction Processing.
2. Choose your Compartment.
3. In the list of Autonomous Databases, find the database that you wish to restore.
4. Click the name of the Autonomous Database to display the database details.
5. Click the Restore button to open the restore dialog.
6. Click Select Backup.
7. Specify the date range for a list of backups to display.
8. Select the backup.
9. Click Restore.
To restore an Autonomous Database using point-in-time restore
1. Open the navigation menu. Under Oracle Database, click Autonomous Data Warehouse, Autonomous JSON
Database, or Autonomous Transaction Processing.

2. Choose your Compartment.


3. In the list of Autonomous Databases, find the database that you wish to restore.
4. Click the name of the Autonomous Database to display the database details.
5. Click the Restore button to open the restore dialog.
6. Click Specify Timestamp.
7. Enter a timestamp. Your Autonomous Database decides which backup to use for faster recovery. The timestamp
input allows you to specify precision to the seconds level (YYYY-MM-DD HH:MM:SS GMT).
8. Click Restore.
Using the API
Use the RestoreAutonomousDatabase API operation to restore your Autonomous Database from a backup.
For information about using the API and signing requests, see REST APIs on page 4409 and Security Credentials on
page 181. For information about SDKs, see Software Development Kits and Command Line Interface on page 4262.
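
For example, the following minimal Python sketch performs a point-in-time restore through the OCI SDK for Python; the database OCID and timestamp are placeholders.

from datetime import datetime, timezone

import oci

config = oci.config.from_file()
db_client = oci.database.DatabaseClient(config)

# Restore point in GMT (placeholder value).
restore_details = oci.database.models.RestoreAutonomousDatabaseDetails(
    timestamp=datetime(2021, 3, 1, 12, 0, 0, tzinfo=timezone.utc)
)
db_client.restore_autonomous_database(
    "ocid1.autonomousdatabase.oc1..exampledb", restore_details
)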

Cloning an Autonomous Database


This topic describes how to clone an existing Autonomous Database using the Oracle Cloud Infrastructure Console
or the API. You can use the cloning feature to create a point-in-time copy of your Autonomous Database for purposes
such as testing, development, or analytics. To clone only the database schema of your source database, choose the
metadata clone option.
Note:

• You can clone any existing Autonomous Database (including those
provisioned with preview version software) using a preview version of
Autonomous Database. However, preview-version databases cannot be
cloned using the regular (general-availability) Autonomous Database
software.
• You cannot clone an Autonomous Database from one Autonomous
Container Database to another Autonomous Container Database if
one of the Autonomous Container Databases has different encryption
management configured. Encryption management must be the same
for both Autonomous Container Databases, either Oracle-managed or
customer-managed.
Clone Types
The clone feature offers the following types of Autonomous Database clones:
• Full clone: This option creates a database that includes the metadata and data from the source database.
• Metadata clone: This option creates a database that includes only the metadata from the source database.
• Refreshable clone: This option creates a clone that can be easily updated with changes from the source database.
For information about creating and using refreshable clones, see Using Refreshable Clones with Autonomous
Database.
Clone Sources
You can use a running database to create a clone. For databases running on shared Exadata infrastructure, you can
also use a backup as the source of your clone. When using a backup, you can select a listed backup to clone from,
or create a point-in-time clone. Point-in-time clones contain all data up to a specified timestamp. The specified
timestamp must be in the past.
Note:

When you create a clone from a backup, you must select a backup that is at
least two hours old.

Prerequisites
To clone an Autonomous Database, you must have the required type of access in a policy written by an administrator,
whether you're using the Console or the REST API with an SDK, CLI, or other tool. If you perform an action and
get a message that you don’t have permission or are unauthorized, confirm with your administrator the type of access
you've been granted and which compartment you should work in. See Authentication and Authorization for more
information on user authorizations for the Oracle Cloud Infrastructure Database service.
Note:

You can't clone a database in a security zone to create a database that isn't
in a security zone. This is true whether the source is a running database or a
database backup. See the Security Zone Policies topic for a full list of policies
that affect Database service resources.
Password Requirement for New Databases on Dedicated Exadata Infrastructure
When cloning a database on dedicated Exadata infrastructure, the password you set for the target database cannot be
one of the three most recently used passwords of the source database.
Using the Oracle Cloud Infrastructure Console
Note:

See Create a Refreshable Clone for an Autonomous Database Instance for
instructions on creating a refreshable clone.
To clone an Autonomous Database to shared Exadata infrastructure
1. Open the navigation menu. Under Oracle Database, click Autonomous Data Warehouse, Autonomous JSON
Database, or Autonomous Transaction Processing.
2. If you are not already in the correct compartment, then choose one from the Compartment drop-down in the List
Scope section that contains the database you want to clone.
3. In the list of Autonomous Databases, click the display name of the database you want to clone.

4. Go to More Actions, and then click Create Clone.


In the Create Autonomous Database Clone dialog, enter the following:
Clone Type
Select the type of clone you want to create. Choose either Full Clone or Metadata Clone.
Clone Source
The clone source selection allows you to specify whether the clone is created from a running database or from a
database backup. Select one of the following options:
• Clone from running database: Creates a clone of a running database as it exists at the current moment.
• Clone from a backup: Creates a clone from a database backup. If you choose this option, select one of the
following options:
• Specify a timestamp: Creates a point-in-time clone.
• Select from a list of backups: Creates a clone using all data from the specified backup. To limit your list
of backups to a specific date range, enter the starting date in the From field and the ending date in the To
field.
Note: You must select a backup that is at least 2 hours old, or the clone operation will fail.
Database Information
• Compartment: Your current compartment is the default selection.
• Display Name: A user-friendly description or other information that helps you easily identify the resource.
The display name does not have to be unique, and you can change it whenever you like. Avoid entering
confidential information.
• Database Name: The database name must consist of letters and numbers only, starting with a letter. The
maximum length is 14 characters. Avoid entering confidential information.
• Choose database version: Select a database version from the available versions.
• CPU Core Count: You can enable up to 128 cores for your Autonomous Database. The actual number of
available cores is subject to your tenancy's service limits.
Auto Scaling: Allows Autonomous Database to automatically increase the number of CPU cores by up to
three times the assigned CPU core count value, depending on demand for processing. The auto scaling feature
reduces the number of CPU cores when additional cores are not needed. For databases with up to 42 assigned
cores, you can increase the maximum number of cores available through auto scaling by increasing the CPU
core count value.
Note:

The maximum number of cores that are available to any Autonomous
Database not using dedicated Exadata infrastructure is 128,
regardless of whether auto scaling is enabled. This means that a
database with a CPU core count of 64 could auto scale up to two times
the assigned number of cores (2 x 64 = 128). A database with 42 cores
(or fewer) could auto scale up to three times the assigned number (3 x 42
= 126). For billing purposes, the database service determines the average
number of CPUs used per hour.
• Storage: Specify the storage you wish to make available to your Autonomous Database, in terabytes.
You can make up to 128 TB available. For full clones, the size of the source database determines the minimum
amount of storage you can make available.
• Enable Preview Version: (This option only displays during periods when a preview version of Autonomous
Database is available) Select this option to provision the database with an Autonomous Database preview
version. Preview versions of Autonomous Database are made available for limited periods for testing
purposes. Do not select this option if you are provisioning a database for production purposes or if you will
need the database to persist beyond the limited availability period of the preview version.
Administrator Credential
Set the password for the Autonomous Database Admin user by entering a password that meets the following
criteria. You use this password when accessing the Autonomous Database service console and when using an SQL
client tool.
• Password cannot be one of the three most recently used passwords of the source database
• Contains from 12 to 30 characters
• Contains at least one lowercase letter
• Contains at least one uppercase letter
• Contains at least one number
• Does not contain the double quotation mark (")
• Does not contain the string "admin", regardless of casing
License Type
The type of license you want to use for the Autonomous Transaction Processing database. Your choice affects
metering for billing. You have the following options:

• My Organization Already Owns Oracle Database Software Licenses: This choice is used for the Bring
Your Own License (BYOL) license type. If you choose this option, make sure you have proper entitlements to
use for new service instances that you create.
• Subscribe to New Database Software Licenses and the Database Cloud Service: This is used for the
License Included license type. With this choice, the cost of the cloud service includes a license for the
Database service.
5. Click Create Autonomous Database Clone.
The Console displays the details page for the new clone of your database and the service begins provisioning the
Autonomous Database. Note the following:
• The new clone displays the Provisioning lifecycle state until the provisioning process completes.
• The source database remains in the Available lifecycle state.
• Backups associated with the source database are not cloned for either the full clone or the metadata clone option.
• Oracle recommends that you evaluate the security requirements for the new database and implement them, as
applicable. See Security Considerations on page 1153 for details.
WHAT NEXT?
• Modify the access control list
• Manage database users - Data Warehouse | Transaction Processing
• Load data into the database - Data Warehouse | Transaction Processing
• Connect applications that use the database - Data Warehouse | Transaction Processing
• Create APEX applications that use the database - Data Warehouse | Transaction Processing
• Register the database with Data Safe
• Set up Object Storage for manual backups
• Connect to the database
To clone an Autonomous Database to dedicated Exadata infrastructure
1. Open the navigation menu. Under Oracle Database, click Autonomous Data Warehouse, Autonomous JSON
Database, or Autonomous Transaction Processing.
2. If you are not already in the correct compartment, then choose one from the Compartment drop-down in the List
Scope section that contains the database you want to clone.
3. In the list of Autonomous Databases, click the display name of the database you want to clone.
4. Click More Actions to display a list of actions.

5. Click Create Clone to display the Create Autonomous Database Clone page.
In the Clone Type section, select the type of clone you want to create. Choose either Full Clone or Metadata
Clone.
Provide basic information for the Autonomous Database
• Create in Compartment: Your current compartment is the default selection but you can select a different
compartment in which to create the clone from the drop-down list.
• The name of the source database displays in the read-only Source database name field.
• Display Name: Enter a description or other information to identify the database clone. You can change the
display name any time and it does not have to be unique. Avoid entering confidential information.
• Database Name: Enter a database name for the clone that contains only letters and numbers, begins with a
letter, and does not exceed 14 characters. Avoid entering confidential information.
• Autonomous Container Database in compartment: You can choose to create the database clone in the same
compartment and container database as the source database, or you can choose a different compartment by
clicking Change compartment, and a different container database by choosing one from the drop-down list.
Configure the database
• OCPU Count: You can enable up to 92 cores for your cloned Autonomous Database.
• Storage: Specify the amount of storage, in terabytes, that you want to make available to your cloned
Autonomous Database, up to 128 TB. For full clones, the size of the source database determines the
minimum amount of storage you can make available.
Create administrator credentials
Set the password for the Autonomous Database administrator user by entering a password that meets the following
criteria.
• Password cannot be one of the three most recently used passwords of the source database
• Between 12 and 30 characters long
• Contains at least one lowercase letter
• Contains at least one uppercase letter
• Contains at least one number
• Does not contain the double quotation mark (")
• Does not contain the string "admin", regardless of casing
Use this password when accessing the service console and when using a SQL client tool.
6. Click Create Autonomous Database Clone.
The Console displays the details page for the new clone of your database and the service begins provisioning the
Autonomous Database. Note the following:
• The new clone displays the Provisioning lifecycle state until the provisioning process completes.
• The source database remains in the Available lifecycle state.
• Backups associated with the source database are not cloned for either the full-clone or the metadata-clone option.
• Oracle recommends that you evaluate the security requirements for the new database and implement them, as
applicable. See Network Security Groups for Databases Resources That Use Private Endpoints on page 1154 for
details.
WHAT NEXT?
• Manage database users - Data Warehouse | Transaction Processing
• Load data into the database - Data Warehouse | Transaction Processing
• Create applications that use the database - Data Warehouse | Transaction Processing
• Connect to the database - Data Warehouse | Transaction Processing
Using the API
Use the CreateAutonomousDatabase API operation to clone an Autonomous Database.

For information about using the API and signing requests, see REST APIs on page 4409 and Security Credentials on
page 181. For information about SDKs, see Software Development Kits and Command Line Interface on page 4262.
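
For example, the following minimal Python sketch creates a full clone through the OCI SDK for Python, using the clone variant of the create operation (CreateAutonomousDatabaseCloneDetails); the OCIDs, names, password, and sizing values are placeholders.

import oci

config = oci.config.from_file()
db_client = oci.database.DatabaseClient(config)

clone_details = oci.database.models.CreateAutonomousDatabaseCloneDetails(
    compartment_id="ocid1.compartment.oc1..exampleuniqueid",
    source_id="ocid1.autonomousdatabase.oc1..sourcedb",
    clone_type="FULL",  # or "METADATA" for a metadata clone
    db_name="clonedb1",
    display_name="clone-of-sourcedb",
    cpu_core_count=1,
    data_storage_size_in_tbs=1,
    admin_password="<admin_password>",
)
clone = db_client.create_autonomous_database(clone_details).data
print(clone.lifecycle_state)  # Provisioning until the clone is ready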
For More Information
For information about optimizer statistics, resource management rules and performance data for a cloned database,
see Using Oracle Autonomous Database on Shared Exadata Infrastructure, which has full product documentation for
Autonomous Database.

Maintenance Updates for Autonomous Databases with Shared Exadata Infrastructure

Autonomous Databases perform maintenance updates and database patching for you. Your database remains available
throughout the maintenance process. This topic describes Autonomous Database maintenance for shared Exadata
infrastructure.
For information on maintenance for dedicated Exadata infrastructure, see Overview of Dedicated Exadata
Infrastructure Maintenance on page 1197.
Maintenance Duration
For Autonomous Databases with shared Exadata infrastructure, Oracle performs regular maintenance updates that
generally take no more than two hours.
Checking the Scheduling of Maintenance Updates
To see when your next scheduled maintenance update is, navigate to the Autonomous Database details page in the
Console for the database you are interested in. The Next Maintenance metadata field displays the beginning and
ending times of the next database maintenance. You can also use the GetAutonomousDatabase API operation to
determine the time of your next maintenance update.
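
For example, the following minimal Python sketch reads the maintenance window fields returned by GetAutonomousDatabase through the OCI SDK for Python; the database OCID is a placeholder.

import oci

config = oci.config.from_file()
db_client = oci.database.DatabaseClient(config)

adb = db_client.get_autonomous_database(
    "ocid1.autonomousdatabase.oc1..exampledb"
).data
print("Next maintenance begins:", adb.time_maintenance_begin)
print("Next maintenance ends:  ", adb.time_maintenance_end)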

Overview of Autonomous Database on Dedicated Exadata Infrastructure


This topic describes the database system architecture, features, user roles and hardware shapes for Autonomous
Database on dedicated Exadata infrastructure. For a general overview of Autonomous Databases that covers the
basics common to both infrastructure options, see Overview of Autonomous Databases on page 1149.
Database System Architecture Overview
Autonomous Databases on dedicated Exadata infrastructure have a three-level database architecture model that makes
use of Oracle multitenant database architecture.
Database System Resource types
Each level of the architecture model corresponds to one of the following resources types:
• An Autonomous Exadata Infrastructure resource. This is a hardware rack which includes compute nodes and
storage servers, tied together by a high-speed, low-latency InfiniBand network and intelligent Exadata software.
On dedicated Exadata infrastructure, you have exclusive use of the Exadata infrastructure and hardware on which
your Autonomous Transaction Processing databases run.
For a list of the hardware and Oracle Cloud resource characteristics of Autonomous Exadata Infrastructure
resources, see Characteristics of Autonomous Exadata Infrastructure Resources.

• An Autonomous Container Database, which provides a container for multiple user databases. This resource is
sometimes referred to as a CDB, and is functionally equivalent to the multitenant container databases found in
Oracle 12c and higher databases.
Multitenant architecture offers many advantages over non-CDB architecture. For example, it does the following:
• Allows you to easily manage multiple individual user databases
• Makes more efficient use of database hardware, as individual databases may use only a fraction of the server
hardware capacity
• Allows for easier and more rapid movement of data and code
• Allows for easier testing, as development databases can be housed within the same container as production
databases
• Allows for the separation of duties between database administrators, who manage only the individual
Autonomous Database instances to which they are granted privileges, and fleet managers, who manage
infrastructure resources and container databases.
• An Autonomous Database. You can create multiple Autonomous Databases within the same container database.
This level of the database architecture is analogous to the pluggable databases (PDBs) found in non-Autonomous
Exadata systems. Your Autonomous Database can be configured for either transaction processing or data
warehouse workloads.
Database System Resource Deployment Order
You must create the dedicated Exadata infrastructure resources in the following order:
1. Autonomous Exadata Infrastructure. See Creating an Autonomous Exadata Infrastructure Resource on page
1199 for more information.
2. Autonomous Container Database. See Creating an Autonomous Container Database on page 1206 for more
information.
3. Autonomous Database. See Creating an Autonomous Database on Dedicated Exadata Infrastructure on page
1167 for more information.
Related Database System Resources
Related resources and prerequisites include:
• A Virtual Cloud Network (VCN) and a Subnet, which you create using Oracle Cloud Infrastructure's
Networking service. You must have at least one VCN and one subnet available to provision an Autonomous
Database with dedicated Exadata infrastructure.
For more information, see the following topics:
• Network Isolation (from Fleet Administrator’s Guide to Oracle Autonomous Transaction Processing on
Dedicated Exadata Infrastructure)
• Networking Overview on page 2774
• To create a VCN on page 2851
• To create a subnet on page 2851
• Autonomous Backups, created for you automatically by the Autonomous Database service. By default, backups
are stored for 60 days. Using the Console, you can choose to change the retention period to 7, 15, or 30 days.
• Manual Backups. Optionally, you can create on-demand manual backups. Manual backups are subject to the
retention policy you have in place for the Autonomous Container Database.
User Roles
Your organization may choose to split the administration of the Autonomous Database on dedicated Exadata
infrastructure into the following roles:
• Fleet Administrator. Fleet administrators create, monitor and manage Autonomous Exadata Infrastructure and
Autonomous Container Database resources. A fleet administrator must have permissions for using the networking
resources required by the dedicated Exadata infrastructure, and permissions to manage the infrastructure and
container database resources.
See Fleet Administrator’s Guide to Oracle Autonomous Transaction Processing on Dedicated Exadata
Infrastructure for a complete overview of the fleet administrator role.
• Database Administrator. Database administrators create, monitor and manage Autonomous Databases. They
also create and manage users within the database. Database administrators must have permissions for using
container databases, for managing Autonomous Transaction Processing databases and backups, and for using
the related networking resources. For manual backups, they must have permissions to use the designated Object
Storage bucket. At the time of provisioning an Autonomous Database, the administrator provides user credentials
for the automatically created ADMIN account, which provides administrative rights to the new database.
See Using Oracle Autonomous Transaction Processing on Dedicated Exadata Infrastructure for a complete
overview of the database administrator role.
• Database User. Database users are the developers who write applications that connect to and use an Autonomous
Database to store and access the data. Database users do not need Oracle Cloud Infrastructure accounts. They
gain network connectivity to and connection authorization information for the database from the database
administrator.
CPU Provisioning, CPU Scaling, and Storage Scaling
You can scale the CPU count and the storage capacity of the database at any time without impacting availability
or performance. Autonomous Database on dedicated Exadata infrastructure does not currently support over-
provisioning, the ability for multiple Autonomous Databases to share a single CPU core. Therefore, an Autonomous
Exadata Infrastructure resource can currently support, across all its Autonomous Container Databases, up to as
many Autonomous Databases as it has CPU cores. This maximum number will increase when Oracle Autonomous
Database supports over-provisioning.
Available Exadata Infrastructure Hardware Shapes
Oracle Cloud Infrastructure currently offers Autonomous Database with the following dedicated Exadata
infrastructure system models and configurations:
• System Models: X7 and X8
• Configurations: quarter rack, half rack, and full rack
The subsections that follow provide the details for each shape's configuration.
Exadata X8 Shapes

Property                                     Quarter Rack           Half Rack            Full Rack
Shape Name                                   Exadata.Quarter3.100   Exadata.Half3.200    Exadata.Full3.400
Number of Compute Nodes                      2                      4                    8
Total Maximum Number of Enabled CPU Cores    100                    200                  400
Total RAM Capacity                           1440 GB                2880 GB              5760 GB
Number of Exadata Storage Servers            3                      6                    12
Total Raw Flash Storage Capacity             76.8 TB                179.2 TB             358.4 TB
Total Usable Storage Capacity                149 TB                 298 TB               596 TB

Exadata X7 Shapes

Property                                     Quarter Rack           Half Rack            Full Rack
Shape Name                                   Exadata.Quarter2.92    Exadata.Half2.184    Exadata.Full2.368
Number of Compute Nodes                      2                      4                    8
Total Maximum Number of Enabled CPU Cores    92                     184                  368
Total RAM Capacity                           1440 GB                2880 GB              5760 GB
Number of Exadata Storage Servers            3                      6                    12
Total Raw Flash Storage Capacity             76.8 TB                153.6 TB             307.2 TB
Total Usable Storage Capacity                106 TB                 212 TB               424 TB

Using the Oracle Cloud Infrastructure Console to Manage Dedicated Exadata Infrastructure
For information on provisioning, managing, and backing up dedicated Exadata infrastructure resources in the Oracle
Cloud Infrastructure Console, see the following topics:
For Database Fleet Administrators
• Creating an Autonomous Exadata Infrastructure Resource on page 1199
• Creating an Autonomous Container Database on page 1206
• Managing an Autonomous Exadata Infrastructure Resource on page 1202
• Managing an Autonomous Container Database on page 1209
• Fleet Administrator’s Guide to Oracle Autonomous Database on Dedicated Exadata Infrastructure (complete fleet
administrator guide)
For Database Administrators
• Creating an Autonomous Database on Dedicated Exadata Infrastructure on page 1167
• Managing an Autonomous Database on page 1170
• Connecting to an Autonomous Database on page 1182
• Backing Up an Autonomous Database Manually on page 1185
• Restoring an Autonomous Database on page 1188
• Using Oracle Autonomous Database on Dedicated Exadata Infrastructure (complete database administrator
guide)
Additional Information
For known issues, see Known Issues for Oracle Autonomous Database on Dedicated Exadata Infrastructure.
Overview of Dedicated Exadata Infrastructure Maintenance
Autonomous Database systems on dedicated Exadata infrastructure have separate regularly scheduled maintenance
runs for both Autonomous Exadata Infrastructure resources and Autonomous Container Databases. You can choose to
set the scheduling for your maintenance runs, or let the system schedule maintenance. You can view the maintenance
history for infrastructure instances and container databases in the Oracle Cloud Infrastructure Console. Additionally,
one-off patching is available for certain resources on dedicated Exadata infrastructure when you file a service request
for an eligible resource with My Oracle Support.

Tip:

Oracle recommends that you define the acceptable maintenance times
for your Autonomous Exadata Infrastructure resources and Autonomous
Container Databases. Doing so will prevent maintenance runs from occurring
at times that would be disruptive to regular database operations.
Autonomous Exadata Infrastructure Maintenance
Exadata infrastructure maintenance takes place at least once each quarter and is mandatory. You can schedule
a maintenance window to control the time, day of the week, and week of the month for Exadata infrastructure
maintenance. Exadata infrastructure maintenance patches the Exadata infrastructure (including patching of the
Exadata grid infrastructure code and operating systems updates), and do not include database patching. Oracle notifies
you about upcoming Exadata infrastructure maintenance in the weeks before quarterly Exadata infrastructure patching
occurs. You can also view scheduled maintenance runs in the Oracle Cloud Infrastructure console. The following
tasks explain how to view scheduled and past maintenance updates, and how to edit the maintenance schedule for an
Exadata infrastructure instance:
• To configure the automatic maintenance schedule for an Autonomous Exadata Infrastructure resource on page
1202
• To view the next scheduled maintenance for an Autonomous Exadata Infrastructure resource on page 1203
• To view the maintenance history of an Autonomous Exadata Infrastructure resource on page 1203
You can use the GetMaintenanceRun, ListMaintenanceRun, and UpdateAutonomousExadataInfrastructure API
operations to view details about scheduled and past maintenance updates, and to update the maintenance schedule of
your infrastructure instance.
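For example, the following minimal sketch calls these operations with the OCI Python SDK. It assumes the SDK's snake_case mapping of the operation names above, a valid ~/.oci/config profile, and placeholder OCIDs; the response field names shown are assumptions to verify against the current API reference.

    # Sketch: list and inspect maintenance runs with the OCI Python SDK.
    import oci

    config = oci.config.from_file()
    db_client = oci.database.DatabaseClient(config)

    # ListMaintenanceRun: list the maintenance runs in a compartment.
    runs = db_client.list_maintenance_runs("ocid1.compartment.oc1..example").data
    for run in runs:
        print(run.display_name, run.lifecycle_state, run.time_scheduled)

    # GetMaintenanceRun: inspect a single scheduled or past maintenance run.
    run = db_client.get_maintenance_run("ocid1.maintenancerun.oc1..example").data
    print(run.description, run.maintenance_type)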
Autonomous Container Database Maintenance
Container database maintenance updates include Oracle Database software patches and take place at least once each
quarter. You can configure a maintenance window to control the time, day of the week, and week of the month that
your maintenance update run will begin. Otherwise, Oracle will schedule container database maintenance runs for you
so that they are coordinated with the maintenance runs of the associated Exadata infrastructure.
Tip:
Container database maintenance runs must be scheduled to take place after quarterly Exadata infrastructure maintenance runs occur.
If a scheduled container database maintenance run cannot take place (because of changes made to infrastructure
maintenance scheduling or other reasons), Oracle will automatically reschedule the container database maintenance
for the following quarter. You can change your container database maintenance window or reschedule a single
container database maintenance run to ensure that your container database maintenance runs follow infrastructure
maintenance within the same quarter.
Autonomous Database offers two container database maintenance choices:
• Release Update (RU): Autonomous Database installs only the most current release update.
• Release Update Revision (RUR): Autonomous Database installs the release update plus additional fixes.
The following tasks explain how to view and edit maintenance updates information for Autonomous Container
Databases:
• To configure the automatic maintenance schedule for an Autonomous Container Database on page 1210
• To view the maintenance history of an Autonomous Container Database on page 1211
• To reschedule or skip scheduled maintenance for an Autonomous Container Database on page 1211
• To configure the type of maintenance patching for an Autonomous Container Database on page 1210
Use the UpdateAutonomousContainerDatabase API operation to change the patching type for an Autonomous
Container Database. Use the ListMaintenanceRun API operation to see past maintenance update information. Use the
UpdateMaintenanceRun API operation to skip a container database maintenance update. You can skip maintenance
runs for up to two consecutive quarters if needed.
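As a sketch only, skipping an upcoming container database maintenance run with the OCI Python SDK might look like the following; the UpdateMaintenanceRunDetails model and its is_enabled field are assumptions based on the UpdateMaintenanceRun operation, and the OCID is a placeholder.

    # Sketch: skip the next scheduled container database maintenance run.
    import oci

    config = oci.config.from_file()
    db_client = oci.database.DatabaseClient(config)

    # Setting is_enabled to False is assumed to skip the run (verify in the API reference).
    skip_details = oci.database.models.UpdateMaintenanceRunDetails(is_enabled=False)
    db_client.update_maintenance_run("ocid1.maintenancerun.oc1..example", skip_details)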


Notifications for Maintenance of Autonomous Exadata Infrastructure and Autonomous Container Database
Resources
Autonomous Database emits events for Autonomous Exadata Infrastructure and Autonomous Container Database
maintenance runs. Using the Notifications service (which consumes events), you can create and subscribe to a
Notifications topic, allowing you to receive notifications about your maintenance runs by email, PagerDuty alert,
Slack, or HTTPS.
You can set up notifications based on the following events:
• A new maintenance run is scheduled
• A maintenance reminder email is sent
• A maintenance run begins
• A maintenance run ends
See Getting Started with Events on page 1790 to learn about creating and subscribing to an Events topic. See
Services that Produce Events for a full list of Database service events. See Managing Topics and Subscriptions on
page 3384 to learn how to create and subscribe to a Notifications topic.
Managing One-off Patches as Part of Dedicated Exadata Infrastructure Maintenance
Oracle generates one-off patches when a user files a service request with My Oracle Support. If a one-off patch is
appropriate to resolve the service request, and Oracle and the user agree that it is the best solution, then the service
team generates the patch and makes it available to the user who filed the service request. One-off patches are separate
from scheduled maintenance patches.
If you enable Oracle Cloud Notifications and Oracle Cloud Events with a subscription to receive notifications
regarding new updates, then when one-off patches become available, Oracle sends a notification that contains the
OCID of the product to be patched. Otherwise, you can find the update in the My Oracle Support portal for the
service request that you filed.
You can edit the scheduled start time or choose to install the one-off patch immediately. By default, Oracle schedules
a one-off patch to be applied within 72 hours of the patch becoming available and, if no action to change the schedule
occurs, then the patch is automatically applied.
1. Copy the OCID from the notification and paste it into the Search field on the Oracle Cloud Infrastructure Console
to navigate to the correct product.
2. On the product details page, in the Maintenance section, click the View link in the Next Maintenance field to
display the Maintenance page for the product that you want to patch.
3. One-off patches are displayed in the Unplanned maintenance section and are denoted as one-off patches in
the Type field. If the scheduled start time for the patch interferes with enterprise operations or is otherwise
inconvenient, then click Edit to change the scheduled time to install the patch. Click Patch Now to immediately
install the one-off patch.
4. Click Run Maintenance to start the patching operation.

Creating an Autonomous Exadata Infrastructure Resource


This topic describes how to provision an Autonomous Exadata Infrastructure resource for Autonomous Databases
using the Oracle Cloud Infrastructure Console or the API. For an overview of dedicated Exadata infrastructure, see
Overview of Autonomous Database on Dedicated Exadata Infrastructure on page 1194.
The infrastructure resource includes the physical Exadata hardware and intelligent Exadata software. Once you have
provisioned an infrastructure instance, you can provision one or more Autonomous Container Databases to run on
your infrastructure. To provision an Autonomous Database, you must have both an infrastructure resource and at least
one container database available.

Note:
This topic is not applicable to Autonomous Databases on shared Exadata infrastructure.


Prerequisites
• To create an Autonomous Exadata Infrastructure resource, you must be given the required type of access in a
policy written by an administrator, whether you're using the Console or the REST API with an SDK, CLI, or
other tool. If you try to perform an action and get a message that you don’t have permission or are unauthorized,
confirm with your administrator the type of access you've been granted and which compartment you should
work in. See Authentication and Authorization for more information on user authorizations for the Oracle Cloud
Infrastructure Database service.
Tip:
See Let database and fleet admins manage Autonomous Databases on page 2159 for sample Autonomous Database policies. See Details for the Database Service on page 2251 for detailed information on policy syntax.
• You will also need a Virtual Cloud Network and a Subnet, which you create using Oracle Cloud Infrastructure's
Networking service. For information on creating and managing these resources, see VCNs and Subnets on page
2847.
• If you are creating an Autonomous Exadata Infrastructure resource in a security zone compartment, your
networking configuration must use a subnet that is also in a security zone compartment. See the Security Zone
Policies topic for a full list of policies that affect Database service resources.
Using the Oracle Cloud Infrastructure Console

To create an Autonomous Exadata Infrastructure resource


1. Open the navigation menu. Under Oracle Database, click Autonomous Data Warehouse, Autonomous JSON
Database, or Autonomous Transaction Processing.
2. Choose your Compartment.
3. Click Autonomous Exadata Infrastructure.
4. Click Create Autonomous Exadata Infrastructure.
5. In the Create Autonomous Exadata Infrastructure dialog, enter the following general information:

• Compartment: Specify the compartment in which the Autonomous Exadata Infrastructure will be created.
• Display Name: A user-friendly description or other information that helps you easily identify the
infrastructure resource. The display name does not have to be unique. Avoid entering confidential information.
• Availability Domain: Select an availability domain for the Autonomous Exadata Infrastructure.
• System Model: Select a system model, either the X7 or X8. See Available Exadata Infrastructure Hardware
Shapes on page 1196 for detailed information on available system models.
• System Configuration: Select Quarter Rack, Half Rack, or Full Rack. Total available OCPUs and storage
are displayed on the system configuration buttons. Total available OCPU and storage values are determined by
the specified system model.
6. Enter the following network information:
• Virtual cloud network compartment: The compartment containing the VCN you wish to use for the
Autonomous Exadata Infrastructure. The default value is the user's current compartment. Click change
compartment to select a VCN in a different compartment.
• Virtual cloud network: The VCN in which to launch the Autonomous Exadata Infrastructure.
• Subnet compartment: The compartment containing the subnet you wish to use for the Autonomous Exadata
Infrastructure. The default value is the user's current compartment. Click change compartment to select a
subnet in a different compartment.
• Subnet: The subnet to which the Autonomous Exadata Infrastructure should attach.
• Use network security groups to control traffic: Optional. You can specify up to five network security
groups (NSGs) for your Autonomous Exadata Infrastructure resource by selecting this option. NSGs function
as virtual firewalls, allowing you to apply a set of ingress and egress security rules to your infrastructure
resource. To add an NSG, select the compartment containing the


NSG using the Network security group compartment selector, then select the NSG itself using the Network
security group selector.
For more information on creating and working with NSGs, see Network Security Groups on page 2867.
Note that if you choose a subnet with a security list, the security rules for the infrastructure resource will be a
union of the rules in the security list and the NSGs.
7. Optionally, you can specify the date and start time for the Autonomous Exadata Infrastructure quarterly
maintenance:
a. Click Modify Schedule.
b. In the Automatic Maintenance Schedule dialog, select Specify a Schedule.
c. In the Maintenance months selector, specify at least one month for each quarter during which infrastructure
maintenance will take place.
d. For Week of the Month, select a week during the month that the maintenance will take place. Weeks start on
the 1st, 8th, 15th, and 22nd days of the month, and have a duration of 7 days. Weeks start and end based on
calendar dates, not days of the week. For example, to allow maintenance during the 2nd week of the month
(from the 8th day to the 14th day of the month), use the value 2. Maintenance cannot be scheduled for the fifth
week of months that contain more than 28 days.
e. For Day of the Week, select the day of the week that the maintenance will take place.
f. For Start Hour, select one of the six start time windows available. The maintenance will begin during the
4-hour time window that you specify and may continue beyond the end of the period chosen. The start time
window is specified in Coordinated Universal Time (UTC).
g. Click Update Maintenance Schedule.
Tip:
Oracle recommends that you define the acceptable maintenance times for your Autonomous Exadata Infrastructure resources and Autonomous Container Databases. Doing so will prevent maintenance runs from occurring at times that would be disruptive to regular database operations.
8. Choose the license type you wish to use. Your choice affects metering for billing. You have the following options:

• Bring your own license: If you choose this option, make sure you have proper entitlements to use for new
service instances that you create.
• License included: With this choice, the cost of the cloud service includes a license for the Database service.
9. The following Advanced Options are available:
Tags - If you have permissions to create a resource, then you also have permissions to apply free-form tags to
that resource. To apply a defined tag, you must have permissions to use the tag namespace. For more information
about tagging, see Resource Tags on page 213. If you are not sure whether to apply tags, then skip this option (you
can apply tags later) or ask your administrator.
10. Click Create Autonomous Exadata Infrastructure.
WHAT NEXT?
After creating an Autonomous Exadata Infrastructure resource, you can create one or more Autonomous Container
Databases on your infrastructure. You must have provisioned both an infrastructure resource and at least one
container database before you can create your first Autonomous Database in Oracle Cloud Infrastructure.
Using the API
For information about using the API and signing requests, see REST APIs on page 4409 and Security Credentials on
page 181. For information about SDKs, see Software Development Kits and Command Line Interface on page 4262.
Use the LaunchAutonomousExadataInfrastructure API operation to create an Autonomous Exadata Infrastructure
resource.
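The following is a minimal sketch of the same call with the OCI Python SDK; the details model, field names, availability domain, and shape string are assumptions mapped from the Console fields above (verify against the current SDK reference), and all OCIDs are placeholders.

    # Sketch: provision an Autonomous Exadata Infrastructure resource
    # (LaunchAutonomousExadataInfrastructure) with the OCI Python SDK.
    import oci

    config = oci.config.from_file()
    db_client = oci.database.DatabaseClient(config)

    launch_details = oci.database.models.LaunchAutonomousExadataInfrastructureDetails(
        compartment_id="ocid1.compartment.oc1..example",
        display_name="example-aei",
        availability_domain="Uocm:PHX-AD-1",          # placeholder availability domain
        subnet_id="ocid1.subnet.oc1..example",
        shape="Exadata.Quarter3.100",                 # placeholder shape name
        license_model="LICENSE_INCLUDED")             # or "BRING_YOUR_OWN_LICENSE"

    response = db_client.launch_autonomous_exadata_infrastructure(launch_details)
    print(response.data.id, response.data.lifecycle_state)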


Managing an Autonomous Exadata Infrastructure Resource


This topic describes the Autonomous Exadata Infrastructure management tasks for Autonomous Databases that you
complete using the Oracle Cloud Infrastructure Console or the API. Autonomous Databases use Autonomous Exadata
Infrastructure resources on dedicated Exadata infrastructure. See Overview of Autonomous Database on Dedicated
Exadata Infrastructure on page 1194 for more information.
Note:
This topic is not applicable to Autonomous Databases on shared Exadata infrastructure.
Using the Oracle Cloud Infrastructure Console
You can perform the following management operations on Autonomous Exadata Infrastructure resources in Oracle
Cloud Infrastructure:
• Configuring the automatic maintenance schedule
• Rescheduling maintenance
• Viewing the next scheduled maintenance date and maintenance history
• Immediately patching an Autonomous Exadata Infrastructure
• Copying the Autonomous Exadata Infrastructure endpoint
• Terminating the Autonomous Exadata Infrastructure resource
Tip:
Oracle recommends that you define the acceptable maintenance times for your Autonomous Exadata Infrastructure resources and Autonomous Container Databases. Doing so will prevent maintenance runs from occurring at times that would be disruptive to regular database operations.
To configure the automatic maintenance schedule for an Autonomous Exadata Infrastructure resource
1. Open the navigation menu. Under Oracle Database, click Autonomous Data Warehouse, Autonomous JSON
Database, or Autonomous Transaction Processing.
2. Choose your Compartment.
3. Under Autonomous Database, click Autonomous Exadata Infrastructure.
4. In the list of Autonomous Exadata Infrastructure resources, click on the display name of the resource you are
interested in.
5. On the Autonomous Exadata Infrastructure details page, under Maintenance, click the edit link in the
Maintenance Schedule field.
6. In the Automatic Maintenance Schedule dialog, select Specify a schedule.
7. Under Maintenance months, specify at least one month for each maintenance quarter during which you want
Autonomous Exadata Infrastructure maintenance to occur.
Note:
Maintenance quarters begin in February, May, August, and November, with the first maintenance quarter of the year beginning in February.
8. Under Week of the month, specify which week of the month maintenance will take place. Weeks start on the
1st, 8th, 15th, and 22nd days of the month, and have a duration of 7 days. Weeks start and end based on calendar
dates, not days of the week. Maintenance cannot be scheduled for the fifth week of months that contain more than
28 days.
9. Under Day of the week, specify the day of the week on which the maintenance will occur.
10. Under Start hour, specify the hour during which the maintenance run will begin.
11. Click Update Maintenance Schedule.


To view the next scheduled maintenance for an Autonomous Exadata Infrastructure resource
1. Open the navigation menu. Under Oracle Database, click Autonomous Data Warehouse, Autonomous JSON
Database, or Autonomous Transaction Processing.
2. Choose your Compartment.
3. Under Autonomous Database, click Autonomous Exadata Infrastructure.
4. In the list of Autonomous Exadata Infrastructure resources, click on the display name of the resource you are
interested in.
5. On the Autonomous Exadata Infrastructure details page, under Maintenance, click the view link in the Next
Maintenance field.
6. On the Maintenance page, scheduled maintenance events are listed under the Regular Autonomous Exadata
Infrastructure maintenance heading.
To reschedule maintenance of Exadata infrastructure
1. Open the navigation menu. Under Oracle Database, click Autonomous Data Warehouse, Autonomous JSON
Database, or Autonomous Transaction Processing.
2. Choose your compartment from the Compartment drop-down.
3. In the Dedicated Infrastructure section, click Autonomous Exadata Infrastructure.
4. In the list of Autonomous Exadata Infrastructures, click the display name of the Exadata infrastructure for which
you want to reschedule maintenance.
5. On the Autonomous Exadata Infrastructure Details page, in the Maintenance section, click the View link in the
Next Maintenance field.
6. The Maintenance page displays any Exadata infrastructure maintenance events planned for the next 15 days in
the list of maintenance events.
7. To reschedule maintenance, click the Edit link in the Scheduled Start Time field to display the Edit
Maintenance Start Time dialog.
8. Click the calendar icon and choose a date and time on which to run maintenance.
9. Click Save Changes.
To view the maintenance history of an Autonomous Exadata Infrastructure resource
1. Open the navigation menu. Under Oracle Database, click Autonomous Data Warehouse, Autonomous JSON
Database, or Autonomous Transaction Processing.
2. Choose your Compartment.
3. Under Autonomous Database, click Autonomous Exadata Infrastructure.
4. In the list of Autonomous Exadata Infrastructure resources, click on the display name of the resource you are
interested in.
5. On the Autonomous Exadata Infrastructure details page, under Maintenance, click the view link in the Next
Maintenance field.
6. On the Maintenance page, under Autonomous Database Maintenance, click History. In the list of past
maintenance events, you can click on an individual event title to read the details of the maintenance that took
place. Maintenance event details include the following:
• The category of maintenance (quarterly software maintenance, hardware maintenance, or a critical patch)
• Whether the maintenance was scheduled or unplanned
• The OCID of the maintenance event. (Go to More Actions, then choose Copy OCID.)
• The start time and date of the maintenance
To immediately patch an Autonomous Exadata Infrastructure
After you schedule maintenance for your Autonomous Exadata Infrastructures, you can patch the component
immediately, at any time before scheduled maintenance occurs. If you choose to patch your Autonomous Exadata
Infrastructure immediately while another component is in the process of maintenance, then this maintenance
operation is queued and will begin in turn.
1. Open the navigation menu. Under Database, click Autonomous Transaction Processing or Autonomous Data
Warehouse.


2. Choose your compartment from the Compartment drop-down list.


3. In the Dedicated Infrastructure section, click Autonomous Exadata Infrastructure to display a list of
Autonomous Exadata Infrastructures for your selected compartment.
4. Click the display name of the Autonomous Exadata Infrastructure that you want to patch.
5. On the Autonomous Exadata Infrastructure Details page, in the Maintenance section, click the View link in the
Next Maintenance field to display the Maintenance page for the Autonomous Exadata Infrastructure that you
want to patch.
6. In the Autonomous Exadata Infrastructure section, click the Patch Now link in the Scheduled Start Time
field to display the Run Maintenance dialog.
7. Click Run Maintenance to start the patching operation.
To view or copy the Autonomous Exadata Infrastructure endpoint
1. Open the navigation menu. Under Oracle Database, click Autonomous Data Warehouse, Autonomous JSON
Database, or Autonomous Transaction Processing.
2. Choose your Compartment.
3. Under Autonomous Database, click Autonomous Exadata Infrastructure.
4. In the list of Autonomous Exadata Infrastructure resources, click on the display name of the resource you are
interested in.
5. On the Autonomous Exadata Infrastructure Information tab, click Show or Copy in the DB Infrastructure
Endpoint Name field.
To manage security certificates in an Autonomous Exadata Infrastructure
To maintain security compliance within an Autonomous Exadata Infrastructure, you must periodically rotate the
security certificates for Oracle REST Data Services (ORDS) and Secure Sockets Layer (SSL).
Note:

Rotating the ORDS and SSL security certificates is not currently supported
for Autonomous Exadata Infrastructure with Autonomous Data Guard
enabled.
To rotate security certificates:
1. Open the navigation menu and click Oracle Database.
2. Click Autonomous Data Warehouse, Autonomous JSON Database, or Autonomous Transaction Processing.
3. Choose your compartment from the Compartment drop-down list.
4. In the Dedicated Infrastructure section, click Autonomous Exadata Infrastructure to display a list of
Autonomous Exadata Infrastructures for your selected compartment.
5. Click the display name of the Autonomous Exadata Infrastructure for which you want to rotate security
certificates.
6. Click Manage Certificates to display the Manage Certificates dialog.
7. Click either Rotate ORDS Certificate or Rotate SSL Certificate, depending on which operation you want to
perform.
If you are rotating the Oracle REST Data Services certificate, then click Apply.
If you are rotating the SSL certificate, then you must enter the name of the Autonomous Exadata Infrastructure in
which you are working to confirm the operation. Click Apply.
Note:

After you rotate an SSL certificate, you must download wallets for all of
the Autonomous Databases in the Autonomous Exadata Infrastructure in
which you are working.


To edit the network security groups (NSGs) for your Autonomous Exadata Infrastructure resource
Your Autonomous Exadata Infrastructure instance can use up to five network security groups (NSGs). Note that if
you choose a subnet with a security list, the security rules for the infrastructure instance will be a union of the rules in
the security list and the NSGs. For more information, see Network Security Groups on page 2867.
1. Open the navigation menu. Under Oracle Database, click Autonomous Transaction Processing.
2. Choose your Compartment.
3. Under Autonomous Database, click Autonomous Exadata Infrastructure.
4. In the list of Autonomous Exadata Infrastructure resources, click on the display name of the resource you are
interested in.
5. In the Network details, click the Edit link to the right of the Network Security Groups field.
6. In the Edit Network Security Groups dialog, click + Another Network Security Group to add an NSG to the
Autonomous Exadata Infrastructure resource.
To change an assigned NSG, click the drop-down menu displaying the NSG name, then select a different NSG.
To remove an NSG from your DB system, click the X icon to the right of the displayed NSG name.
7. Click Save.
To move an Autonomous Exadata Infrastructure resource to another compartment
Note:
• To move resources between compartments, resource users must have sufficient access permissions on the compartment that the resource is being moved to, as well as the current compartment. For more information about permissions for Database resources, see Details for the Database Service on page 2251.
• If your Autonomous Exadata Infrastructure is in a security zone compartment, the destination compartment must also be in a security zone. See the Security Zone Policies topic for a full list of policies that affect Database service resources.
1. Open the navigation menu. Under Oracle Database, click Autonomous Data Warehouse, Autonomous JSON
Database, or Autonomous Transaction Processing.
2. Choose your Compartment.
3. Under Autonomous Database, click Autonomous Exadata Infrastructure.
4. In the list of Autonomous Exadata Infrastructure resources, click on the display name of the resource you wish to
move.
5. Click Move Resource.
6. Select the new compartment.
7. Click Move Resource.
For information about dependent resources for Database resources, see Moving Database Resources to a Different
Compartment on page 1141.
To terminate an Autonomous Exadata Infrastructure resource
1. Open the navigation menu. Under Oracle Database, click Autonomous Data Warehouse, Autonomous JSON
Database, or Autonomous Transaction Processing.
2. Choose your Compartment.
3. Under Autonomous Database, click Autonomous Exadata Infrastructure.
4. In the list of Autonomous Exadata Infrastructure resources, click on the display name of the resource you are
interested in.
5. Go to More Actions, and then click Terminate.
6. Confirm that you wish to terminate your Autonomous Exadata Infrastructure in the confirmation dialog.


Using the API


For information about using the API and signing requests, see REST APIs on page 4409 and Security Credentials on
page 181. For information about SDKs, see Software Development Kits and Command Line Interface on page 4262.
Use the UpdateAutonomousExadataInfrastructure API operation to configure the automatic maintenance schedule for
your infrastructure resource.
Use the GetMaintenanceRun API to view the details of a maintenance run that is scheduled, in progress, or that has
ended.
Use the ListMaintenanceRun API to get a list of maintenance runs in a specified compartment.
Use the ChangeAutonomousExadataInfrastructureCompartment API operation to move an Autonomous Exadata
Infrastructure resource to another compartment.
Use the TerminateAutonomousExadataInfrastructure API operation to delete an Autonomous Exadata Infrastructure
resource.
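As an illustration only, the maintenance schedule update might look like the following with the OCI Python SDK; the MaintenanceWindow model and its field names are assumptions mapped from the Console schedule fields described above, and the OCID is a placeholder.

    # Sketch: set a custom quarterly maintenance window on an Autonomous Exadata
    # Infrastructure resource (UpdateAutonomousExadataInfrastructure).
    import oci

    config = oci.config.from_file()
    db_client = oci.database.DatabaseClient(config)

    window = oci.database.models.MaintenanceWindow(
        preference="CUSTOM_PREFERENCE",
        months=[oci.database.models.Month(name=m)
                for m in ("FEBRUARY", "MAY", "AUGUST", "NOVEMBER")],
        weeks_of_month=[2],                                         # 2nd week (8th-14th)
        days_of_week=[oci.database.models.DayOfWeek(name="SATURDAY")],
        hours_of_day=[4])                                           # 04:00 UTC start window

    update_details = oci.database.models.UpdateAutonomousExadataInfrastructureDetails(
        maintenance_window_details=window)

    db_client.update_autonomous_exadata_infrastructure(
        "ocid1.autonomousexainfrastructure.oc1..example", update_details)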
For More Information
Create and Manage Autonomous Exadata Infrastructure Resources (Fleet Administrator’s Guide to Oracle
Autonomous Transaction Processing on Dedicated Exadata Infrastructure)

Creating an Autonomous Container Database


This topic describes how to provision a new Autonomous Container Database using the Oracle Cloud Infrastructure
Console or the API. Container databases are only necessary for Autonomous Databases on dedicated Exadata
infrastructure. For a brief overview, see Overview of Autonomous Database on Dedicated Exadata Infrastructure on
page 1194.

Note:
This topic is not applicable to Autonomous Databases on shared Exadata infrastructure.
Prerequisites
• To create an Autonomous Container Database, you must be given the required type of access in a policy written
by an administrator, whether you're using the Console or the REST API with an SDK, CLI, or other tool. If you
try to perform an action and get a message that you don’t have permission or are unauthorized, confirm with
your administrator the type of access you've been granted and which compartment you should work in. See
Authentication and Authorization for more information on user authorizations for the Oracle Cloud Infrastructure
Database service.
Tip:
See Let database and fleet admins manage Autonomous Databases on page 2159 for sample Autonomous Database policies. See Details for the Database Service on page 2251 for detailed information on policy syntax.
• To create an Autonomous Container Database, you must have an available Autonomous Exadata Infrastructure
resource. For information on creating an infrastructure instance, see Creating an Autonomous Exadata
Infrastructure Resource on page 1199.
• If you want to create your Autonomous Container Database in a security zone, the Autonomous Exadata
Infrastructure that runs the container database must be in a security zone. See the Security Zone Policies topic for
a full list of policies that affect Database service resources.
Using the Oracle Cloud Infrastructure Console
To create an Autonomous Container Database
1. Open the navigation menu. Under Oracle Database, click Autonomous Data Warehouse or Autonomous
Transaction Processing.
2. Under Dedicated Infrastructure, click Autonomous Container Database.


3. Click Create Autonomous Container Database.


4. In the Create Autonomous Container Database dialog, enter the following database information:
• Select a compartment: Specify the compartment in which you want to create the container database, if
different from the default.
• Display name: Enter a description or other information that helps you identify the resource. The display name
does not have to be unique. Avoid entering confidential information.
• Autonomous Exadata Infrastructure in compartment: From the drop-down, choose an Autonomous
Exadata Infrastructure for your container database. See Creating an Autonomous Exadata Infrastructure
Resource on page 1199 for more information.
You can change the compartment containing the Autonomous Exadata Infrastructure you want to use for your
container database by clicking the Change Compartment link.
5. You can enable Autonomous Data Guard on your primary Autonomous Container Database to create a standby
Autonomous Container Database.
a. Select Enable Autonomous Data Guard to provision a peer Autonomous Container Database as a standby
database.
b. Select a compartment in which you want to provision a peer Autonomous Container Database from the Select
peer Autonomous Container Database compartment drop-down.
c. You can enter a display name in the Peer Autonomous Container Database display name field or accept the
default display name. Avoid entering confidential information.
d. Select a region in which you want to locate the peer Autonomous Container Database from the Region drop-
down.
e. Select an Autonomous Exadata Infrastructure to apply to the peer Autonomous Container Database.
Note:
You must provision the standby Autonomous Container Database with a different Autonomous Exadata Infrastructure from that of the primary database.
You can change the compartment containing the Autonomous Exadata Infrastructure you want to use for your
peer Autonomous Container Database by clicking the Change Compartment link.
f. Select a protection mode for the peer Autonomous Container Database from the Protection mode drop-down:
• Maximum Availability: Provides the highest level of data protection that is possible without
compromising the availability of a primary database.
• Maximum Performance: Provides the highest level of data protection that is possible without affecting
the performance of a primary database. This is the default protection mode.
See Oracle Data Guard Concepts and Administration for more information about protection modes.
After you complete this task, the database you initially created is labeled in the Oracle Cloud Infrastructure
console database list view as "Primary" and the peer database is labeled as "Standby".
6. Optionally, you can change the default scheduling and maintenance patching type for your Autonomous Container
Database maintenance. The patch type choices are Release Update (RU) and Release Update Revision (RUR).
The Release Update setting installs only the most current release update, while the Release Update Revision
installs the release update plus additional fixes. For information about Release Updates (RUs) and Release
Update Revisions (RURs), see Release Update Introduction and FAQ (Doc ID 2285040.1) in the My Oracle
Support online help portal (MOS login required).
Note:

• The patch type you choose for the primary Autonomous Container
Database is also applied to the standby database.
• Scheduled maintenance for a standby Autonomous Container Database
must be between one and seven days before scheduled maintenance for
a primary Autonomous Container Database.


7. Click Show Advanced Options to display the following:


• Management: Optionally, you can specify the backup retention policy, which controls the length of time you
retain backups in the Autonomous Container Database. The choices are 7 days, 15 days, 30 days, and 60 days.
The default setting is 60 days.
Note:

The backup retention policy you specify also applies to the standby
Autonomous Container Database.
• Encryption Key: You can choose encryption based on either Oracle-managed encryption keys or encryption
keys that you manage.
If you choose encryption based on encryption keys that you manage, then you must have access to a valid
encryption key.
Note:
Customer-managed encryption keys are not applicable to Autonomous Data Guard-enabled Autonomous Container Databases.
• Select a vault that contains the encryption key you want to use. You can change the compartment
containing the vault you want to use by clicking the Change Compartment link.
• Select an encryption key. You can change the compartment containing the encryption key you want to use
by clicking the Change Compartment link.
Note:

• You cannot change the vault or vault key once the Autonomous
Container Database is provisioned.
• Oracle supports using 256-bit and hardware security module
(HSM) encryption keys for Autonomous Container Database
encryption.
See Managing Keys on page 3998 for more information about encryption keys.
• Tagging: If you have permissions to create a resource, then you also have permissions to apply free-form
tags to that resource. To apply a defined tag, you must have permissions to use the tag namespace. For more
information about tagging, see Resource Tags on page 213. If you are not sure whether to apply tags, then skip
this option (you can apply tags later) or ask your administrator.
8. Click Create Autonomous Container Database.
WHAT NEXT?
After creating an Autonomous Container Database, you can create one or more Autonomous Databases within the
container database.
To configure the Autonomous Container Database maintenance type and scheduling
1. Click Configure Maintenance. In the Edit Automatic Maintenance dialog that opens, you can configure both
the maintenance schedule and the patch type.
2. For Maintenance Type, select either Release Update (RU) or Release Update Revision (RUR). Learn more.
3. To configure the maintenance schedule, select Specify a schedule in the Configure the automatic maintenance
schedule section. Choose your preferred month, week, weekday, and start time for container database
maintenance. Autonomous Container Database maintenance should be scheduled so that it follows after the
maintenance scheduled for the associated Autonomous Exadata Infrastructure. To see the scheduling of the
associated Autonomous Exadata Infrastructure, you can click Show Autonomous Exadata Infrastructure
maintenance schedule. If you have not specified an infrastructure maintenance schedule (and Oracle is


scheduling infrastructure maintenance), your infrastructure maintenance will be scheduled to precede your
container database maintenance.

• Under Maintenance months, specify at least one month for each quarter during which Autonomous Exadata
Infrastructure maintenance will take place.
• Under Week of the month, specify which week of the month maintenance will take place. Weeks start on
the 1st, 8th, 15th, and 22nd days of the month, and have a duration of 7 days. Weeks start and end based on
calendar dates, not days of the week. Maintenance cannot be scheduled for the fifth week of months that
contain more than 28 days.
• Under Day of the week, specify the day of the week on which the maintenance will occur.
• Under Start hour, specify the hour during which the maintenance run will begin.
4. Click Save Changes.
Using the API
For information about using the API and signing requests, see REST APIs on page 4409 and Security Credentials on
page 181. For information about SDKs, see Software Development Kits and Command Line Interface on page 4262.
Use the CreateAutonomousContainerDatabase API operation to create an Autonomous container database.
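For example, a minimal sketch with the OCI Python SDK follows; the details model, field names, and allowed values are assumptions mapped from the Console options above, and the OCIDs are placeholders.

    # Sketch: create an Autonomous Container Database
    # (CreateAutonomousContainerDatabase) with the OCI Python SDK.
    import oci

    config = oci.config.from_file()
    db_client = oci.database.DatabaseClient(config)

    create_details = oci.database.models.CreateAutonomousContainerDatabaseDetails(
        display_name="example-acd",
        autonomous_exadata_infrastructure_id="ocid1.autonomousexainfrastructure.oc1..example",
        patch_model="RELEASE_UPDATES",        # or "RELEASE_UPDATE_REVISIONS"
        backup_config=oci.database.models.AutonomousContainerDatabaseBackupConfig(
            recovery_window_in_days=30))      # 7, 15, 30, or 60 days

    response = db_client.create_autonomous_container_database(create_details)
    print(response.data.id, response.data.lifecycle_state)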

Managing an Autonomous Container Database


This topic describes the database management tasks for Autonomous Container Databases that you complete using
the Oracle Cloud Infrastructure Console or the API. Container databases are used by Autonomous Databases on
dedicated Exadata infrastructure. See Overview of Autonomous Database on Dedicated Exadata Infrastructure on
page 1194 for more information.
Note:
This topic is not applicable to Autonomous Databases on shared Exadata infrastructure.
The following management operations can be performed on Autonomous Container Databases in Oracle Cloud
Infrastructure:
• Edit the backup retention policy. By default, database backups are retained for 60 days. You have the option of
retaining backups for 7, 15, 30, or 60 days. The current backup retention policy for an Autonomous Container
Database is displayed on the Autonomous Container Database details page.
• Configure the type of database maintenance. You can choose to use Release Update (RU) or Release
Update Revision (RUR) updates for your Autonomous Container Database maintenance. Release Update
(RU): Autonomous Database installs only the most current release update. Release Update Revision
(RUR): Autonomous Database installs the release update plus additional fixes.
For information about Release Updates (RUs) and Release Update Revisions (RURs), see Release Update
Introduction and FAQ (Doc ID 2285040.1) in the My Oracle Support online help portal (MOS login required).
• Configure the scheduling for your Autonomous Container Database.
• View the Autonomous Container Database next scheduled maintenance and maintenance history.
• Immediately patch an Autonomous Container Database
• Skip a scheduled maintenance run. For container databases, you can skip maintenance runs for up to two
consecutive quarters, if necessary.
• Edit the maintenance patch version of your Autonomous Container Database.
• Perform a rolling restart of databases within an Autonomous Container Database. You can perform a "rolling
restart" on all the Autonomous Databases in an Autonomous Container Database to ensure that the current
memory allocation is optimized. During a rolling restart, each node of an Autonomous Database is restarted
separately while the remaining nodes continue to be available. No interruption of service occurs during a rolling
restart. You cannot perform a container database restart if a backup is in progress.
• Rotate encryption keys for an Autonomous Container Database.
• Terminate an Autonomous Container Database. Note that you must terminate all Autonomous Databases within a
container database before you can terminate the container database itself.


Using the Oracle Cloud Infrastructure Console


To set the backup retention policy for an Autonomous Container Database
1. Open the navigation menu. Under Oracle Database, click Autonomous Data Warehouse, Autonomous JSON
Database, or Autonomous Transaction Processing.
2. Choose your Compartment.
3. Under Autonomous Database, click Autonomous Container Database.
4. In the list of Autonomous Container Databases, click on the display name of the container database you are
interested in.
5. On the Autonomous Container Database details page, under Backup, click the Edit link in the Backup Retention
Field.
6. Specify a backup retention period from the list of choices.
7. Click Save Changes.
To configure the automatic maintenance schedule for an Autonomous Container Database
1. Open the navigation menu. Under Oracle Database, click Autonomous Data Warehouse, Autonomous JSON
Database, or Autonomous Transaction Processing.
2. Choose your Compartment.
3. Under Autonomous Database, click Autonomous Container Database.
4. In the list of Autonomous Container Databases, click on the display name of the container database you are
interested in.
5. On the Autonomous Container Database details page, under Maintenance, click the edit link in the Maintenance
Details field. In the Edit Automatic Maintenance dialog that opens, you can configure both the maintenance
schedule and the patch type.
6. Optionally, you can change the maintenance patch type. To edit this setting, select either Release Update (RU) or
Release Update Revision (RUR). Learn more.
7. To configure the maintenance schedule, select Specify a schedule in the Configure the automatic maintenance
schedule section. Choose your preferred month, week, weekday, and start time for container database
maintenance. Autonomous Container Database maintenance should be scheduled so that it follows after the
maintenance scheduled for the associated Autonomous Exadata Infrastructure. To see the scheduling of the
associated Autonomous Exadata Infrastructure, you can click Show Autonomous Exadata Infrastructure
maintenance schedule. If you have not specified an infrastructure maintenance schedule (and Oracle is
scheduling infrastructure maintenance), your infrastructure maintenance scheduling will be automatically
modified so that it precedes your container database maintenance during each quarter.
• Under Maintenance months, specify at least one month for each maintenance quarter during which you want
Autonomous Exadata Infrastructure maintenance to occur.
Note:
Maintenance quarters begin in February, May, August, and November, with the first maintenance quarter of the year beginning in February.
• Under Week of the month, specify which week of the month maintenance will take place. Weeks start on
the 1st, 8th, 15th, and 22nd days of the month, and have a duration of 7 days. Weeks start and end based on
calendar dates, not days of the week. Maintenance cannot be scheduled for the fifth week of months that
contain more than 28 days.
• Under Day of the week, specify the day of the week on which the maintenance will occur.
• Under Start hour, specify the hour during which the maintenance run will begin.
8. Click Save Changes.
To configure the type of maintenance patching for an Autonomous Container Database
1. Open the navigation menu. Under Oracle Database, click Autonomous Data Warehouse, Autonomous JSON
Database, or Autonomous Transaction Processing.
2. Choose your Compartment.
3. Under Autonomous Database, click Autonomous Container Database.


4. In the list of Autonomous Container Databases, click on the display name of the container database you are
interested in.
5. On the Autonomous Container Database details page, under Maintenance, click the Edit link in the Maintenance
Details field.
6. In the Automatic Maintenance Schedule dialog, under Maintenance Type, select either Release Update (RU)
or Release Update Revision (RUR).
• Release Update (RU): Autonomous Database installs only the most current release update.
• Release Update Revision (RUR): Autonomous Database installs the release update plus additional fixes.
7. Optionally, you can configure the automatic maintenance schedule as described in To configure the automatic
maintenance schedule for an Autonomous Container Database on page 1210.
8. Click Save Changes.
To view the next scheduled maintenance run of an Autonomous Container Database
1. Open the navigation menu. Under Oracle Database, click Autonomous Data Warehouse, Autonomous JSON
Database, or Autonomous Transaction Processing.
2. Choose your Compartment.
3. Under Autonomous Database, click Autonomous Container Database.
4. In the list of Autonomous Container Databases, click on the display name of the container database you are
interested in.
5. On the Autonomous Container Database details page, under Maintenance, click the View link in the Next
Maintenance field.
6. On the Maintenance page, under Autonomous Database Maintenance, click Maintenance. In the list of
maintenance events, you can view the details of scheduled maintenance runs. Maintenance event details include the
following:
• The status of the scheduled maintenance run
• The type of maintenance run (quarterly software maintenance or a critical patch)
• The OCID of the maintenance event.
• The start time and date of the maintenance
To view the maintenance history of an Autonomous Container Database
1. Open the navigation menu. Under Oracle Database, click Autonomous Data Warehouse, Autonomous JSON
Database, or Autonomous Transaction Processing.
2. Choose your Compartment.
3. Under Autonomous Database, click Autonomous Container Database.
4. In the list of Autonomous Container Databases, click on the display name of the container database you are
interested in.
5. On the Autonomous Container Database details page, under Maintenance, click the View link in the Next
Maintenance field.
6. On the Maintenance page, under Autonomous Database Maintenance, click Maintenance History. In the list
of past maintenance events, you can click on an individual event title to read the details of the maintenance that
took place. Maintenance event details include the following:
• The category of maintenance (quarterly software maintenance or a critical patch)
• Whether the maintenance was scheduled or unplanned
• The OCID of the maintenance event.
• The start time and date of the maintenance
To reschedule or skip scheduled maintenance for an Autonomous Container Database
1. Open the navigation menu. Under Oracle Database, click Autonomous Data Warehouse, Autonomous JSON
Database, or Autonomous Transaction Processing.
2. Choose your Compartment.
3. In the Autonomous Database section, click Autonomous Container Database.


4. In the list of Autonomous Container Databases, click the display name of the container database that you want to
manage.
5. On the Autonomous Container Database details page, in the Maintenance section, click the View link in the Next
Maintenance field.
6. On the Maintenance page, any container database maintenance events planned for the next 15 days will appear in
the list of maintenance events.
To skip scheduled maintenance for a container database, click Skip.
Note:
You cannot skip scheduled maintenance more than twice consecutively.

To reschedule maintenance, click Edit and enter a start time for the update in the Edit Maintenance dialog.
Make sure that your specified container database maintenance window is later in the quarter than your scheduled
Exadata infrastructure maintenance.
To immediately patch an Autonomous Container Database
After you schedule maintenance for your Autonomous Container Databases, you can patch the component
immediately, at any time before scheduled maintenance occurs. If you choose to patch an Autonomous Container
Database immediately while another component is in the process of maintenance, then this maintenance operation
is queued and will begin in turn.
1. Open the navigation menu. Under Oracle Database, click Autonomous Transaction Processing or Autonomous
Data Warehouse.
2. Choose your compartment from the Compartment drop-down.
3. In the Dedicated Infrastructure section, click Autonomous Container Database to display a list of
Autonomous Container Databases for your selected compartment.
4. Click the display name of the Autonomous Container Database that you want to patch.
5. On the Autonomous Container Database Details page, in the Maintenance section, click the View link in the
Next Maintenance field to display the Maintenance page for the Autonomous Container Database that you want
to patch.
6. In the Autonomous Container Database section, click the Patch Now link in the Scheduled Start Time field to
display the Run Maintenance dialog.
7. Click Run Maintenance to start the patching operation.
To edit the maintenance patch version of an Autonomous Container Database
You can select from a list of available patches of either maintenance type (release update or release update revision)
to apply to your Autonomous Container Database during scheduled maintenance.
1. Open the navigation menu. Under Oracle Database, click Autonomous Data Warehouse, Autonomous JSON
Database, or Autonomous Transaction Processing.
2. Choose your compartment from the Compartment drop-down.
3. In the Autonomous Database section, click either Autonomous Container Database or Autonomous Exadata
Infrastructure.
4. In the list of Autonomous Container Databases, click the display name of the container database that you want to
manage. Alternatively, if you clicked Autonomous Exadata Infrastructure in the previous step, then click the
name of the Exadata infrastructure that contains the Autonomous Container Database that you want to edit.
5. On the Autonomous Container Database details page or the Autonomous Exadata Infrastructure Details page, in the
Maintenance section, click the View link in the Next Maintenance field.
6. On the Maintenance page, click the Edit link in the Version field to display the Edit Maintenance dialog.
7. Select the database version with which you want to patch your Autonomous Container Database.
Note:
• You must select a version that is later than the current version of the Autonomous Container Database.
• The list of available versions may contain both release update (RU) and release update revision (RUR) maintenance types. Your configured maintenance policy does not change if you choose a maintenance type other than that for which your Autonomous Container Database maintenance schedule is configured.
8. Click Save Changes.
To perform a rolling restart of databases within an Autonomous Container Database
1. Open the navigation menu. Under Oracle Database, click Autonomous Data Warehouse, Autonomous JSON
Database, or Autonomous Transaction Processing.
2. Choose your Compartment.
3. Under Autonomous Database, click Autonomous Container Database.
4. In the list of Autonomous Container Databases, click on the display name of the container database you are
interested in.
5. On the Autonomous Container Database details page, click Restart.
6. In the confirmation dialog, type the name of the Autonomous Container Database.
7. Click Restart.
To move an Autonomous Container Database to another compartment
Note:
• To move resources between compartments, resource users must have sufficient access permissions on the compartment that the resource is being moved to, as well as the current compartment. For more information about permissions for Database resources, see Details for the Database Service on page 2251.
• If your Autonomous Container Database is in a security zone, the destination compartment must also be in a security zone. See the Security Zone Policies topic for a full list of policies that affect Database service resources.
1. Open the navigation menu. Under Oracle Database, click Autonomous Data Warehouse, Autonomous JSON
Database, or Autonomous Transaction Processing.
2. Choose your Compartment.
3. Under Autonomous Database, click Autonomous Container Database.
4. In the list of Autonomous Container Databases, click on the display name of the container database you wish to
move.
5. Click Move Resource.
6. Select the new compartment.
7. Click Move Resource.
For information about dependent resources for Database resources, see Moving Database Resources to a Different
Compartment on page 1141.
To rotate the encryption key for an Autonomous Container Database
Rotating the encryption key creates a new version of the vault key that replaces the current version of the vault key.
1. Open the navigation menu. Under Oracle Database, click Autonomous Data Warehouse, Autonomous JSON
Database, or Autonomous Transaction Processing.
2. Choose your compartment from the Compartment drop-down.
3. In the Dedicated Infrastructure section, click Autonomous Container Database to display a list of
Autonomous Container Databases for your selected compartment.
4. Click the display name of the database for which you want to rotate the encryption key to display the details page
for that database.
5. Click Rotate Encryption Key to display a confirmation dialog.


6. Click Rotate Key.


To rotate the encryption key for an Autonomous Database
Rotating the encryption key creates a new version of the vault key that replaces the current version of the vault key.
1. Open the navigation menu. Under Oracle Database, click Autonomous Data Warehouse, Autonomous JSON
Database, or Autonomous Transaction Processing.
2. Choose your compartment from the Compartment drop-down.
3. Click the display name of the Autonomous Database for which you want to rotate the encryption key to display
the details page for that database.
Note:
The Autonomous Database you choose must be on dedicated Exadata infrastructure and in an Available state.
4. Click the More Actions drop-down.
5. Click Rotate Encryption Key to display a confirmation dialog.
6. Click Rotate Key.
To terminate an Autonomous Container Database
Note:
You can terminate a standby Autonomous Container Database without terminating any standby Autonomous Databases contained within. To terminate a primary Autonomous Container Database, you must terminate all Autonomous Databases contained inside the primary database.
You cannot terminate standby Autonomous Databases directly. To terminate a standby Autonomous Database, you must terminate the associated primary Autonomous Database.
1. Open the navigation menu. Under Oracle Database, click Autonomous Data Warehouse, Autonomous JSON
Database, or Autonomous Transaction Processing.
2. In the List Scope section, if not already selected, then select the compartment that contains the Autonomous
Container Database you want to terminate from the Compartment drop-down.
3. In the Dedicated Infrastructure section, click Autonomous Container Database to display a list of
Autonomous Container Databases contained in the compartment.
4. Click the display name of the Autonomous Container Database that you want to terminate to display the details
page for that database.
If you want to terminate a standby Autonomous Container Database, then click the display name of the database
you want to terminate that is labeled "Standby" to display the details page for that database.
5. Click Terminate to display a confirmation dialog.
6. You must enter the display name of the Autonomous Container Database to confirm that you want to terminate the
database.
Note:
Terminating a standby Autonomous Container Database disables Autonomous Data Guard and affects high availability and disaster recovery for any associated peer Autonomous Container Databases.
Using the API
For information about using the API and signing requests, see REST APIs on page 4409 and Security Credentials on
page 181. For information about SDKs, see Software Development Kits and Command Line Interface on page 4262.
Use the UpdateAutonomousContainerDatabase API operation to perform the following management actions:
• Set the backup retention period for an Autonomous Container Database.


• Set the maintenance patching type of an Autonomous Container Database.


Use the UpdateMaintenanceRun API operation to skip a container database maintenance run.
Use the ListMaintenanceRun API to get a list of maintenance runs in a specified compartment. You can use it to see
maintenance history and scheduled maintenance runs.
Use the RestartAutonomousContainerDatabase API operation to perform a rolling restart on a container database.
Use the ChangeAutonomousContainerDatabaseCompartment API operation to move a container database to another
compartment.
Use the TerminateAutonomousContainerDatabase API operation to terminate a container database.
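As a sketch only, a few of these management calls with the OCI Python SDK might look like the following; the details models and field names are assumptions mapped from the operations listed above, and the OCID is a placeholder.

    # Sketch: common Autonomous Container Database management calls.
    import oci

    config = oci.config.from_file()
    db_client = oci.database.DatabaseClient(config)
    acd_id = "ocid1.autonomouscontainerdatabase.oc1..example"

    # UpdateAutonomousContainerDatabase: change the backup retention policy and
    # the maintenance patching type.
    update_details = oci.database.models.UpdateAutonomousContainerDatabaseDetails(
        patch_model="RELEASE_UPDATE_REVISIONS",
        backup_config=oci.database.models.AutonomousContainerDatabaseBackupConfig(
            recovery_window_in_days=15))
    db_client.update_autonomous_container_database(acd_id, update_details)

    # RestartAutonomousContainerDatabase: rolling restart of the databases in the
    # container database.
    db_client.restart_autonomous_container_database(acd_id)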

Managing a Standby Autonomous Container Database


Enabling Autonomous Data Guard on an Autonomous Container Database creates a standby (peer) Autonomous
Container Database that provides data protection, high availability, and facilitates disaster recovery for the primary
database.
Once the standby database is provisioned, you can perform various management tasks related to the standby database,
including:
• Manually switching over a primary database to a standby database
• Manually failing over a primary database to a standby database
• Reinstating a primary database to standby role after failover
• Terminating a standby database
See
• Creating an Autonomous Container Database on page 1206 for information about enabling Autonomous Data
Guard
• To terminate an Autonomous Container Database
Autonomous Data Guard-enabled databases are identified in the Oracle Cloud Infrastructure console as "Primary" and
"Standby" depending on the role assigned to a given database, and the status of Autonomous Data Guard is displayed
in the Autonomous Data Guard column.
When you view the details page of a particular Autonomous Container Database, the Autonomous Data Guard
section displays the status of Autonomous Data Guard and the state of the peer database associated with the database
you are viewing. Additionally, in the Resources section of the details page, you can click Autonomous Data Guard
to view Autonomous Data Guard configuration details and information, such as transport lag and apply lag, for peer
Autonomous Container Databases.
Using the Oracle Cloud Infrastructure Console
Use the Oracle Cloud Infrastructure console to perform the following Autonomous Data Guard management tasks.
To switch over a primary database to a standby database
When you switch a primary Autonomous Container Database over to a standby Autonomous Container Database, you
change the roles of the respective databases.
1. Open the navigation menu. Under Oracle Database, click Autonomous Data Warehouse, Autonomous JSON
Database, or Autonomous Transaction Processing.
2. If you are not already in the compartment that contains the Autonomous Container Database you want to switch
over, then choose the appropriate compartment from the Compartment drop-down in the List Scope section.
3. In the Dedicated Infrastructure section, click Autonomous Container Database to display a list of
Autonomous Container Databases for the compartment. Peer databases are labeled as "Primary" and "Standby" in
the list.
4. Click the display name of the Autonomous Container Database you want to switch to display the details page for
that database.

5. In the Resources section click Autonomous Data Guard Associations to display a list of peer databases for the
Autonomous Container Database you are managing.
The state of each of the databases must be Available to perform a switchover.
6. Click the ellipsis in the Created column and click Switchover.
The states of the peer databases become Role Change in Progress... until the switchover action is complete.
At the conclusion of the operation, the respective roles of the two Autonomous Container databases change. The
primary database assumes the standby role and the standby database assumes the primary role.
To fail over a primary database to a standby database
In the event that a primary Autonomous Container Database becomes unavailable, you can fail over the primary
database to the standby Autonomous Container Database.
1. Open the navigation menu. Under Oracle Database, click Autonomous Data Warehouse, Autonomous JSON
Database, or Autonomous Transaction Processing.
2. If you are not already in the compartment that contains the Autonomous Container Database you want to switch
over, then choose the appropriate compartment from the Compartment drop-down in the List Scope section.
3. In the Dedicated Infrastructure section, click Autonomous Container Database to display a list of
Autonomous Container Databases for the compartment. Peer databases are labeled as "Primary" and "Standby" in
the list.
4. Click the display name of the standby Autonomous Container Database associated with the primary Autonomous
Container Database that you want to fail over to display the details page for that database.
5. In the Resources section click Autonomous Data Guard Associations to display a list of peer databases for the
primary database you are managing.
6. For the primary Autonomous Container Database you are failing over, click the ellipsis in the Created column
and click Failover.
7. Confirm that you want to perform the failover operation.
Caution:
Fail over to a standby Autonomous Container Database only in the event of a catastrophic failure of the primary
database, when there is no possibility of recovery. Failover can result in data loss, depending on the protection mode
in effect at the time the primary database fails.
The states of the peer databases become Role Change in Progress... until the failover action is complete.
At the conclusion of the operation, the respective roles of the two Autonomous Container databases change. The
primary database is labeled as "Disabled Standby" and the standby database assumes the primary role.
To reinstate a primary database to its former standby role
After a failover has occurred and the failed primary Autonomous Container Database assumes a disabled, standby
role, you can reinstate the failed database to an enabled, standby role.
1. Open the navigation menu. Under Oracle Database, click Autonomous Data Warehouse, Autonomous JSON
Database, or Autonomous Transaction Processing.
2. If you are not already in the compartment that contains the Autonomous Container Database you want to switch
over, then choose the appropriate compartment from the Compartment drop-down in the List Scope section.
3. In the Dedicated Infrastructure section, click Autonomous Container Database to display a list of
Autonomous Container Databases for the compartment. The primary database that you failed over is labeled as
"Disabled Standby" in the list.
4. Click the display name of the disabled standby Autonomous Container Database that you want to reinstate.
5. In the Resources section click Autonomous Data Guard Associations to display a list of peer databases for the
primary database you are managing.

6. For the Autonomous Container Database you are reinstating, click the ellipsis in the Created column and click
Reinstate.
The states of the peer databases become Role Change in Progress... until the reinstate action is complete.
At the conclusion of the operation, the role of the original primary database changes from disabled standby to
standby. You can now perform a switchover operation to revert the respective databases to their original roles.
See To switch over a primary database to a standby database.
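The switchover, failover, and reinstate tasks described above can also be driven programmatically. The sketch below is a rough illustration using the OCI Python SDK; the method names are assumed to mirror the Data Guard association operations for Autonomous Container Databases, and all OCIDs are placeholders.

# Rough sketch: switchover, failover, and reinstate for an Autonomous Container
# Database Data Guard association via the OCI Python SDK. Method names are assumed
# to mirror the REST operations; OCIDs are placeholders.
import oci

config = oci.config.from_file()
db_client = oci.database.DatabaseClient(config)

primary_acd_id = "ocid1.autonomouscontainerdatabase.oc1..example"  # placeholder

# Look up the Data Guard association for the primary container database.
assoc = db_client.list_autonomous_container_database_dataguard_associations(
    primary_acd_id).data[0]

# Planned role reversal (both peers must be in the Available state).
db_client.switchover_autonomous_container_database_dataguard_association(
    primary_acd_id, assoc.id)

# Failover (only after a catastrophic failure of the primary), then reinstate the
# former primary to the standby role once it is reachable again; shown commented out.
# db_client.failover_autonomous_container_database_dataguard_association(
#     primary_acd_id, assoc.id)
# db_client.reinstate_autonomous_container_database_dataguard_association(
#     primary_acd_id, assoc.id)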

Autonomous Database Tools

Autonomous Database Development and Administration Tools


This topic describes the Oracle Database tools available for Autonomous Database using the Console and how to
access them using the Console. The following tools can be accessed directly from the Oracle Cloud Infrastructure
Console:
• Oracle Database Actions on page 1217
• Oracle Application Express on page 1218
• Oracle Machine Learning User Administration on page 1218 (available on shared Exadata infrastructure only)
Tip:
Autonomous Database supports a range of other Oracle and third-party tools and applications. See Autonomous Data
Warehouse Tools and Application Test Matrix to learn about other tools you can use with your Autonomous Database.
For Autonomous Databases on shared Exadata infrastructure, additional tools can be accessed through the Service
Console.
Oracle Database Actions
Database Actions is a web-based interface that uses Oracle REST Data Services to provide development, data tools,
administration and monitoring features for Oracle Autonomous Database. Database Actions is available for databases
with both the dedicated Exadata infrastructure and shared Exadata infrastructure deployment options.
The main features of Database Actions are:

Development
• SQL: Enter and execute SQL and PL/SQL commands, and create database objects.
• Data Modeler: Create diagrams from existing database schemas, generate DDL statements, and create reports.
• APEX: Link to the Oracle Application Express sign-in page. Application Express is a rapid web application
development platform for the Oracle database.
• REST: Develop RESTful web services and ensure secure access.
• JSON: Manage and query JSON collections. JSON is available only if you are signed in as a database user with
the SODA_APP role.

Data Tools
• Data Load: Load or access data from local files or remote databases.
• Catalog: Understand data dependencies and the impact of changes.
• Data Insights: Discover anomalies, outliers, and hidden patterns in your data.
• Business Models: Create business models for performance and analysis.
• Data Transforms: Design your data flows and workflows graphically. Data Transforms is available only to an
Oracle Data Integrator on Oracle Cloud Marketplace user that has connectivity enabled from Database Actions in
the Oracle Data Integrator user interface.

Administration
• Database Users: Perform user management tasks such as create, edit, and REST enable users. Database Users is
available only if you are signed in as a database user with administrator rights.

Monitoring
• Monitor database activity and performance using various tools. Monitoring is available only on dedicated Exadata
infrastructure and only if you are signed in as a database user with administrator rights.
Complete product information can be found in About Database Actions.
Oracle Application Express
Oracle Application Express (APEX) is a low-code development platform that enables you to build scalable, secure
enterprise applications with world-class features that can be deployed anywhere. APEX provides you with an easy-
to-use browser-based environment to load data, manage database objects, develop REST interfaces, and rapidly build
applications for both desktop and mobile devices.
Oracle Application Express is available for databases with both the dedicated Exadata infrastructure and shared
Exadata infrastructure deployment options.
See Oracle Application Express
For complete information, see the APEX topics in the Autonomous Transaction Processing and the Autonomous Data
Warehouse user guides. Use the following links:
• Using Oracle Autonomous Transaction Processing
• Using Oracle Autonomous Data Warehouse
Oracle Machine Learning User Administration
Oracle Machine Learning is a collaborative web-based interface that provides a development environment to create
data mining notebooks where you can perform data analytics, data discovery and data visualizations. Using the Oracle
Cloud Infrastructure Console, you can quickly get to the Oracle Machine Learning User Administration interface to
create and manage users.
Machine Learning is currently available for databases with shared Exadata infrastructure only.
Using the Oracle Cloud Infrastructure Console

For Autonomous Databases on Shared Exadata Infrastructure


To access Oracle Database Actions
1. Open the navigation menu. Under Oracle Database, click Autonomous Data Warehouse, Autonomous JSON
Database, or Autonomous Transaction Processing.
2. Choose your Compartment.
3. In the list of Autonomous Databases, click on the display name of the database you want to work with.
4. Click the Tools tab on the Autonomous Database Details page.
5. Click Open Database Actions
To access Oracle Application Express from an Autonomous Database
To access the Oracle Application Express (Oracle APEX) development environment from an Autonomous Database
with the APEX workload type:
1. Open the navigation menu. Under Oracle Database, click Autonomous Data Warehouse, Autonomous JSON
Database, or Autonomous Transaction Processing.
2. Click Autonomous Database to display a list of Autonomous Databases of all workload types.
3. If you are not in the correct compartment, then select the compartment from the Compartment drop-down that
contains the database from which you want to access Oracle Application Express.
4. Click the display name of the Autonomous Database of APEX workload type from which you want to access
Oracle Application Express to display the Autonomous Database Details page for that database.

5. Click the Tools tab to display administration and developer tools available for the database.
6. Click Open APEX to access the Oracle APEX sign-in page.
Additionally, you can access Oracle APEX from the APEX Instance details page, as follows:
a. On the Autonomous Database Details page click the instance name in the APEX Instance section.
b. Click Launch APEX.
7. Sign into Oracle APEX using the ADMIN password for the Autonomous Database.
WHAT NEXT?
• Create an APEX Workspace - Data Warehouse | Transaction Processing
• Access APEX App Builder - Data Warehouse | Transaction Processing
• Use web services with APEX - Data Warehouse | Transaction Processing
To switch between database details and Oracle APEX instance details
When you create an Autonomous Database of the APEX workload type, you create a database instance that is
optimized for working with Oracle Application Express (Oracle APEX). Oracle APEX-related activities are displayed
on a details page separate from that of the database.
To view the Oracle APEX instance details page:
1. Open the navigation menu. Under Oracle Database, click Autonomous Data Warehouse, Autonomous JSON
Database, or Autonomous Transaction Processing.
2. Click Autonomous Database to display a list of Autonomous Databases of all workload types.
3. If you are not in the correct compartment, then select the compartment from the Compartment drop-down that
contains the database for which you want to view the details.
4. Click the display name of the Autonomous Database of APEX workload type to display the Autonomous
Database Details page for that database.
5. In the APEX Instance section, click the instance name link to display the APEX Instance Details page.
You can switch back to the database details page by clicking the database name in the APEX Instance Information
tab.
To access Oracle Machine Learning's User Administration Interface
To use Oracle Machine Learning with your Autonomous Database, you must first create a user account within the
application. The following steps explain how to navigate to the User Administration interface for Machine Learning
from the Autonomous Database details page within the Console.
1. Open the navigation menu. Under Oracle Database, click Autonomous Data Warehouse, Autonomous JSON
Database, or Autonomous Transaction Processing.
2. Choose your Compartment.
3. In the list of Autonomous Databases, click on the display name of the database you want to work with.
4. Click the Tools tab on the Autonomous Database Details page.
5. Click Open Oracle ML User Administration

For Autonomous Databases on Dedicated Exadata Infrastructure


The Console provides access URLs for Application Express (APEX) and SQL Developer Web that you can use
to connect to these applications. The URLs only work from browsers within the same VCN as the Autonomous
Database being accessed by the applications. Therefore, to use these URLs, you will need to open a browser running
on a computer that meets one of the following conditions:
• The computer is a Compute instance provisioned in the VCN of the Autonomous Database.
• The computer has a direct connection to the VCN of the Autonomous Database.
To access APEX or SQL Developer Web, paste the appropriate access URL into the browser's address field, and then
provide the Autonomous Database username and password when prompted. For more information on APEX, see the
APEX documentation. For more information on SQL Developer Web, see Oracle SQL Developer Web.

The following tasks explain how to obtain the access URLs for APEX and SQL Developer Web.
To obtain the access URL for Oracle Database Actions for an Autonomous Database on dedicated
Exadata infrastructure
1. Open the navigation menu. Under Oracle Database, click Autonomous Data Warehouse, Autonomous JSON
Database, or Autonomous Transaction Processing.
2. Choose your Compartment.
3. In the list of Autonomous Databases, click on the display name of the database you want to work with.
4. Click the Tools tab on the Autonomous Database Details page.
5. Click Open Database Actions
To obtain the access URLs for Oracle Application Express (APEX) for an Autonomous Database on
dedicated Exadata infrastructure
1. Open the navigation menu. Under Oracle Database, click Autonomous Data Warehouse, Autonomous JSON
Database, or Autonomous Transaction Processing.
2. Choose your Compartment.
3. In the list of Autonomous Databases, click on the display name of the database you want to work with.
4. Click the Tools tab on the Autonomous Database Details page.
5. Click Open APEX

Using Your Autonomous Database


The topics listed in this section provide detailed information and instructions on how to use your Autonomous
Database on shared Exadata infrastructure and dedicated Exadata infrastructure.

The following topics are listed with their descriptions and the Exadata infrastructure type (shared or dedicated) to
which they apply:
• How to Create a Database Link: Explains how to use database links to Oracle databases that are accessible from
an Autonomous Database (Shared and Dedicated)
• Database Link Required Privileges: Details the privileges required for a non-admin user to use database links
(Shared)
• Manage Optimizer Statistics: Describes commands for gathering optimizer statistics or enabling optimizer hints
(Shared and Dedicated)
• Manage Automatic Workload Repository (AWR) Retention: Includes information about controlling the retention
time period for Autonomous Database performance data (Shared)
• Predefined Database Service Names: Details the database service names, resources, and connections for an
Autonomous Database (Shared and Dedicated)
• Predefined Job Classes with Oracle Scheduler: Explains how to control the priority of user requests in an
Autonomous Database (Shared)
• Managing Concurrency and Priorities, Idle Time Limits, and MAX_IDLE_TIME: Explains how to control session
time limits in an Autonomous Database (Shared)
• Perform Manual Backups: Explains how to manually back up Autonomous Databases to Object Storage (Shared
and Dedicated)

Overview of the Always Free Autonomous Database


Oracle Cloud Infrastructure's Always Free Autonomous Database is part of Oracle Cloud Infrastructure's Free Tier
of services. You can provision up to two Always Free Autonomous Databases in the home region of your tenancy.
These databases are provided free of charge, and they are available to users of both free and paid accounts. You can
use these Autonomous Databases for small-scale applications, for development, or testing purposes, or for learning
about and exploring Oracle Cloud Infrastructure.
Always Free Autonomous Database versions
Always Free Autonomous Databases support only a single Oracle Database version.
• You can see only the installed Always Free database version on the details page.
• If your Always Free database is not the latest version, Oracle automatically migrates it to the latest version at a
preselected date about which you will be notified several weeks in advance. You can also update the database by
selecting the version on the database details page. See To create an Always Free Autonomous Database on page
1159 for more information.
Note:
• You can provision Autonomous Databases only in your Home Region.
• Not all regions support the same database version. The supported version may be 19c-only or 21c-only, depending
on the region.
• You cannot create an Always Free Autonomous Database in any Home Region where Always Free Autonomous
Databases are not supported. To learn which regions support them, see Data Regions for Platform and
Infrastructure Services.
Always Free Autonomous Database Specifications
• Processor: 1 Oracle CPU processor (cannot be scaled)
• Memory: 8 GB RAM
• Database Storage: 20 GB storage (cannot be scaled)
• Workload Type: Your choice of either the transaction processing or data warehouse workload type
• The Autonomous Transaction Processing workload type configures the database for a transactional
workload, with a bias towards high volumes of random data access.
For a complete product overview of Autonomous Transaction Processing, see Autonomous Transaction
Processing. For Autonomous Transaction Processing tutorials, see Quick Start tutorials.
• The Autonomous Data Warehouse workload type configures the database for a decision support or data
warehouse workload, with a bias towards large data scanning operations.
For a complete product overview of Autonomous Data Warehouse, see Autonomous Data Warehouse. For
Autonomous Data Warehouse tutorials, see Quick Start tutorials.
• Database Version: Oracle Database 19c or Oracle Database 21c.
• Infrastructure Type: Shared Exadata infrastructure
• Maximum Simultaneous Database Sessions: 20
Lifecycle for Always Free Autonomous Databases
After provisioning, you can continue using your Always Free Autonomous Database for as long as you want at no
charge. You can terminate the database at any time.
Lifecycle Management for Inactive Always Free Autonomous Databases
If your Always Free Autonomous Database has no activity for a period of 7 consecutive days, the Database service
will stop the database automatically. If this happens, restart the database and continue using it. If your Always
Free Autonomous Database remains in a stopped state for 3 consecutive months, the database will be reclaimed
(automatically terminated) by the Database service.
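If you want to catch inactive Always Free databases before they are reclaimed, a small script can poll their lifecycle state. The following is a hedged sketch with the OCI Python SDK; the is_free_tier attribute and the compartment OCID are assumptions and placeholders to verify against the SDK reference.

# Sketch: find stopped Always Free Autonomous Databases in a compartment and
# restart them so they are not reclaimed after three months in the stopped state.
# The is_free_tier attribute is assumed; the compartment OCID is a placeholder.
import oci

config = oci.config.from_file()
db_client = oci.database.DatabaseClient(config)
compartment_id = "ocid1.compartment.oc1..example"  # placeholder

for adb in db_client.list_autonomous_databases(compartment_id).data:
    if getattr(adb, "is_free_tier", False) and adb.lifecycle_state == "STOPPED":
        print("Restarting Always Free database:", adb.db_name)
        db_client.start_autonomous_database(adb.id)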

Using Events and Notifications to Stay Informed About Inactive Always Free Autonomous Databases
You can use Oracle Cloud Infrastructure's Events service to send structured JSON messages about your Always Free
Autonomous Database lifecycle events to applications used for automation. For example, you can create automatic
email, Slack, PagerDuty, or HTTPS notifications (using the Notifications service) to be alerted if a database is going
to be stopped or terminated in the next 48 hours. You can also set up notifications to be alerted when a database is
automatically stopped or terminated.
See the following topics for more information:
• Getting Started with Events on page 1790 for details on working with Events messages.
• Autonomous Database Event Types for details on the currently available Always Free Autonomous Database
lifecycle events messages.
• How Notifications Works for an overview of setting up automated notifications based on the JSON messages
emitted by the Events service.

Exadata Cloud Service


Exadata Cloud Service allows you to leverage the power of Exadata in the cloud. You can provision flexible X8M
systems that allow you to add database compute servers and storage servers to your system as your needs grow.
X8M systems offer RDMA over Converged Ethernet (RoCE) networking for high bandwidth and low latency,
persistent memory (PMEM) modules, and intelligent Exadata software. X8M systems can be provisioned using a
shape equivalent to a quarter rack X8 system, and then database and storage servers can be added at any time after
provisioning. For more information on X8M systems, see Overview of X8M Scalable Exadata Infrastructure on page
1228.
X8 and X7 systems are also available in fixed shapes (quarter, half, and full rack systems). These systems use
InfiniBand networking, and do not have the ability to scale database and storage servers. You can also provision an
Exadata base system, which has a smaller capacity than a quarter rack system.
For all Exadata Cloud Service instances, you can configure automatic backups, optimize for different workloads, and
scale the OCPU and storage allocations as needed.
Note:

Exadata Cloud Service instances launched on or after March 14, 2019 run
Oracle Linux 7. Previously launched systems are running Oracle Linux 6.
See OS Updates on page 1275 for important information about updating
existing Exadata DB system operating systems.

Supported Database Edition and Versions


Exadata Cloud Service instances require Enterprise Edition - Extreme Performance. This edition provides all the
features of Oracle Database Enterprise Edition, plus all the database enterprise management packs and all the
Enterprise Edition options, such as Oracle Database In-Memory and Oracle Real Application Clusters (RAC).
Exadata Cloud Service instances support the following software releases:
• Oracle Database 19c (19.0)
• Oracle Database 18c (18.0)
• Oracle Database 12c Release 2 (12.2)
• Oracle Database 12c Release 1 (12.1)
• Oracle Database 11g Release 2 (11.2)
Note:
• If you plan to run Oracle Database 19c on a cloud VM cluster or DB system in the Exadata Cloud Service, you
must specify version 19c when you create the resource. Earlier database versions are supported on a 19c cloud
VM cluster or DB system, and can be created at any time. Cloud VM clusters and DB systems created with earlier
Oracle Database versions will not automatically support Oracle Database 19c.
• For information on upgrading an existing 18c or earlier database to Oracle Database 19c, see Upgrading Exadata
Databases on page 1302.

Subscription Types
The only subscription type available for Exadata Cloud Service instances is the Monthly Flex purchase model under
Universal Credit Pricing. See the Universal Credit Pricing FAQ for more information.

Metering Frequency and Per-Second Billing


For each Exadata Cloud Service instance you provision, you are billed for the infrastructure for a minimum of 48
hours, and then by the second after that. Each OCPU you add to the system is billed by the second, with a minimum
usage period of 1 minute. For X8M systems, if you terminate the cloud VM cluster and do not terminate the cloud
Exadata infrastructure resource, billing will continue for the infrastructure resource.

Scaling Options
Three kinds of scaling operations are supported for an Exadata Cloud Service:
• For all Exadata Cloud Service instances, you can scale the compute node processing power within the provisioned
system, adding or subtracting CPU cores as needed.
• For X8M systems, the flexible shape allows you to add additional database and storage servers to the cloud
Exadata infrastructure resource as you need them.
• For X6, X7 and X8 Exadata DB systems, you can scale by moving the system to a different shape configuration,
for example, from a quarter rack to a half rack.
For more information on each type of scaling, see Scaling an Exadata Cloud Service Instance on page 1230.
Scaling CPU Cores Within an Exadata Cloud Service Instance
If an Exadata Cloud Service instance requires more compute node processing power, you can scale up the number of
enabled CPU cores symmetrically across all the nodes in the system as follows:
X8M flexible infrastructure systems: You can scale CPU cores in multiples of the number of database servers
currently provisioned for the cloud VM cluster. For example, if you have 6 database servers provisioned, you can
add CPU cores in multiples of 6. At the time of provisioning, X8M systems have 2 database servers. For more
information on adding compute and storage resources to an X8M system, see Scaling Exadata X8M Compute and
Storage on page 1231.
Non-X8M fixed-shape systems: For a base system or an X7 or X8 quarter rack, you can scale in multiples of 2
across the 2 database compute nodes. For an X7 or X8 half rack, you can scale in multiples of 4 across the 4 database
compute nodes. For an X7 or X8 full rack, you can scale in multiples of 8 across the 8 database compute nodes.
For non-metered service instances, you can temporarily modify the compute node processing power (bursting)
or add compute node processing power on a more permanent basis. For a metered service instance, you can simply
modify the number of enabled CPU cores.
You can provision an Exadata Cloud Service instance with zero CPU cores, or scale the service instance down to zero
cores after you provision it. With zero cores, you are billed only for the infrastructure until you scale up the system.
For detailed information about pricing, see Exadata Cloud Service Pricing.
Tip:

OCPU scaling activities are done online with no downtime.


For information on CPU cores per configuration, see Exadata Shape Configurations on page 1224. To learn how to
scale a system, see To scale CPU cores in an Exadata Cloud Service cloud VM cluster or DB system on page 1259.

Scaling X6, X7 and X8 Exadata DB System Configurations


Scaling an Exadata X6, X7, or X8 Exadata Cloud Service instance by moving to a shape with more capacity enables
you to meet the needs of your growing workload. This is useful when a database deployment requires:
• Processing power that is beyond the capacity of the current system configuration.
• Storage capacity that is beyond the capacity of the current system configuration.
• A performance boost that can be delivered by increasing the number of available compute nodes.
• A performance boost that can be delivered by increasing the number of available Exadata Storage Servers.
You can move your workloads to a larger fixed shape (X7 and X8 hardware shapes), or move to the flexible X8M
shape that allows for easy expansion of compute and storage resources as your workloads grow.
To assist with moving your database deployments between Exadata Cloud Service instances, you can restore a backup
to a different service instance that has more capacity, or create a Data Guard association for your database in a service
instance with more capacity, and then perform a switchover so that your new standby database assumes the primary
role. To start the process, contact Oracle and request a service limit increase so that you can provision the larger
service instance needed by your database.

Exadata Shape Configurations


Each Exadata Cloud Service instance consists of compute nodes and storage servers. The compute nodes are each
configured with a Virtual Machine (VM). You have root privilege for the compute node VMs, so you can load and
run additional software on them. However, you do not have administrative access to the Exadata infrastructure
components, including the physical compute node hardware, network switches, power distribution units (PDUs),
integrated lights-out management (ILOM) interfaces, or the Exadata Storage Servers, which are all administered by
Oracle.
For X8M systems, the Exadata hardware is administered through two resource types, the cloud Exadata infrastructure
resource and the cloud VM cluster. See The New Exadata Cloud Service Resource Model on page 1229 for more
details.
For X6, X7, and X8 systems, the Exadata hardware is administered through the DB system resource.
For all hardware models, you have full administrative privileges for your databases, and you can connect to your
databases by using Oracle Net Services from outside the Oracle Cloud Infrastructure. You are responsible for
database administration tasks such as creating tablespaces and managing database users. You can also customize the
default automated maintenance set up, and you control the recovery process in the event of a database failure.
For full details on the available shape configurations, see Exadata Fixed Hardware Shapes: X6, X7, X8 and Exadata
Base on page 1352

Customer-Managed Keys in Exadata Cloud Service


Customer-managed keys for Exadata Cloud Service is a feature of Oracle Cloud Infrastructure Vault service that
enables you to encrypt your data using encryption keys that you control. The Vault service provides you with
centralized key management capabilities that are highly available and durable. This key-management solution also
offers secure key storage using isolated partitions (and a lower-cost shared partition option) in FIPS 140-2 Level 3-
certified hardware security modules, and integration with select Oracle Cloud Infrastructure services. Use customer-
managed keys when you need security governance, regulatory compliance, and homogenous encryption of data, while
centrally managing, storing, and monitoring the life cycle of the keys you use to protect your data.
You can:
• Enable customer-managed keys when you create databases in Exadata Cloud Service
• Switch from Oracle-managed keys to customer-managed keys on databases that are not enabled with Oracle Data
Guard
• Rotate your keys to maintain security compliance
Related Topics
• To create a database in an existing Exadata Cloud Service instance on page 1308

• To create an X7 or X8 Exadata DB system on page 1249


• To administer Vault encryption keys on page 1312
• Known Issues related to Oracle Database 12c
Note:

Databases that are enabled with Oracle Data Guard must use Oracle-managed
keys.

Storage Configuration
When you launch an Exadata Cloud Service instance, the storage space inside the Exadata storage servers is
configured for use by Oracle Automatic Storage Management (ASM). By default, the following ASM disk groups are
created:
• The DATA disk group is intended for the storage of Oracle Database data files.
• The RECO disk group is primarily used for storing the Fast Recovery Area (FRA), which is an area of storage
where Oracle Database can create and manage various files related to backup and recovery, such as RMAN
backups and archived redo log files.
• The /acfs file systems contain system files that support various operations. You should not store custom files,
Oracle Database data files, or backups inside the ACFS disk groups. Custom ACFS mounts can be created using
the DATA ASM disk group for files that are not service-related.
The disk group names contain a short identifier string that is associated with your Exadata Database machine
environment. For example, the identifier could be C2, in which case the DATA disk group would be named
DATAC2, the RECO disk group would be named RECOC2, and so on.
In addition, you can create a SPARSE disk group. A SPARSE disk group is required to support Exadata snapshots.
Exadata snapshots enable space-efficient clones of Oracle databases that can be created and destroyed very quickly
and easily. Snapshot clones are often used for development, testing, or other purposes that require a transient
database.
Note that you cannot change the disk group layout after service creation.
Impact of Configuration Settings on Storage
If you choose to perform database backups to the Exadata storage, or to create a sparse disk group, or to do both, your
choices profoundly affect how storage space in the Exadata storage servers is allocated to the ASM and sparse disk
groups.
The table that follows shows the approximate percentages of storage allocated for DATA, RECO, and SPARSE disk
groups for each possible configuration.

Configuration Settings                                             DATA Disk Group   RECO Disk Group   SPARSE Disk Group
Database backups on Exadata storage: No;  Sparse disk group: No    80 %              20 %              0 %
Database backups on Exadata storage: Yes; Sparse disk group: No    40 %              60 %              0 %
Database backups on Exadata storage: No;  Sparse disk group: Yes   60 %              20 %              20 %
Database backups on Exadata storage: Yes; Sparse disk group: Yes   35 %              50 %              15 %
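As a quick worked example of how these percentages translate into capacity, the short calculation below applies them to an example usable capacity of 149 TB (the quarter-rack-equivalent figure from the shape tables later in this section). The arithmetic is illustrative only; actual usable capacity depends on your configuration.

# Illustrative arithmetic only: apply the allocation percentages above to an
# example usable capacity of 149 TB (quarter-rack-equivalent X8/X8M figure).
usable_tb = 149
allocations = {
    "backups No,  sparse No":  (0.80, 0.20, 0.00),
    "backups Yes, sparse No":  (0.40, 0.60, 0.00),
    "backups No,  sparse Yes": (0.60, 0.20, 0.20),
    "backups Yes, sparse Yes": (0.35, 0.50, 0.15),
}
for config_name, (data, reco, sparse) in allocations.items():
    print(f"{config_name}: DATA {usable_tb*data:.0f} TB, "
          f"RECO {usable_tb*reco:.0f} TB, SPARSE {usable_tb*sparse:.0f} TB")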

Moving Databases to Oracle Cloud Exadata Systems Using Zero Downtime Migration
Oracle now offers the Zero Downtime Migration service, a quick and easy way to move on-premises Oracle
Databases and Oracle Cloud Infrastructure Classic databases to Oracle Cloud Infrastructure. You can migrate
databases to the following types of Oracle Cloud Infrastructure systems: Exadata, Exadata Cloud@Customer, bare
metal, and virtual machine.
Zero Downtime Migration leverages Oracle Active Data Guard to create a standby instance of your database in an
Oracle Cloud Infrastructure system. You switch over only when you are ready, and your source database remains
available as a standby. Use the Zero Downtime Migration service to migrate databases individually or at the fleet
level. See Move to Oracle Cloud Using Zero Downtime Migration for more information.

Exadata Shape Configurations


This topic describes the available Exadata Cloud Service instance shapes in Oracle Cloud Infrastructure.
See the following sections for shape specifications:
• Exadata X8M on page 1226
• Exadata Base System on page 1227
• Exadata X8 Shapes on page 1227
• Exadata X7 Shapes on page 1228
• Exadata X6 Shapes on page 1228
Exadata X8M
After provisioning, the X8M shape is expandable, unlike X6, X7, and X8 shapes. The values in the table that
follows represent the specifications for an X8M cloud service instance that has not been expanded. The initial
configuration of 2 database servers and 3 storage servers is similar to the quarter rack shape offered for X6, X7 and
X8 infrastructure resources.
X8M Capacity at Provisioning

Property Value
Shape Name Exadata X8M-2
Number of Compute Nodes 2 (value can be increased after provisioning)
Total Minimum Number of Enabled CPU Cores 0
Total Maximum Number of Enabled CPU Cores 100 (for 2 initial database servers)
Total RAM Capacity 2780 GB (initial value)
Number of Exadata Storage Servers 3 (value can be increased after provisioning)
Total Raw Flash Storage Capacity 76.8 TB (for 3 initial storage servers)
Total Usable Storage Capacity 149 TB (for 3 initial storage servers)

X8M Expansion Server Capacity


When you add additional database or storage servers to an X8M system, the expansion servers have the following
capacity.

Database Servers
• Maximum OCPUs: 50 OCPUs
• Total memory available: 1,390 GB

Storage Servers
• Total usable disk capacity: 49.9 TB
• Persistent memory: 1.5 TB
• Total flash capacity: 25.6 TB

Exadata Base System


An Exadata base system is a fixed shape similar in size to a quarter rack, with some differences in capacity.

Property Value
Shape Name Exadata.Base.48
Number of Compute Nodes 2
Total Minimum Number of Enabled CPU Cores 0
Total Maximum Number of Enabled CPU Cores 48
Total RAM Capacity 720 GB
Number of Exadata Storage Servers 3
Total Raw Flash Storage Capacity 38.4 TB
Total Usable Storage Capacity 74 TB

Exadata X8 Shapes

Property                                    Quarter Rack          Half Rack           Full Rack
Shape Name                                  Exadata.Quarter3.100  Exadata.Half3.200   Exadata.Full3.400
Number of Compute Nodes                     2                     4                   8
Total Minimum Number of Enabled CPU Cores   0                     0                   0
Total Maximum Number of Enabled CPU Cores   100                   200                 400
Total RAM Capacity                          1440 GB               2880 GB             5760 GB
Number of Exadata Storage Servers           3                     6                   12
Total Raw Flash Storage Capacity            76.8 TB               179.2 TB            358.4 TB
Total Usable Storage Capacity               149 TB                299 TB              598 TB

Exadata X8 shapes provide 700 GB of user disk space for database homes.

Exadata X7 Shapes

Property                                    Quarter Rack          Half Rack           Full Rack
Shape Name                                  Exadata.Quarter2.92   Exadata.Half2.184   Exadata.Full2.368
Number of Compute Nodes                     2                     4                   8
Total Minimum Number of Enabled CPU Cores   0                     0                   0
Total Maximum Number of Enabled CPU Cores   92                    184                 368
Total RAM Capacity                          1440 GB               2880 GB             5760 GB
Number of Exadata Storage Servers           3                     6                   12
Total Raw Flash Storage Capacity            76.8 TB               153.6 TB            307.2 TB
Total Usable Storage Capacity               106 TB                212 TB              424 TB

Exadata X7 shapes provide 1 TB of user disk space for database homes.


Exadata X6 Shapes

Property                                             Quarter Rack         Half Rack           Full Rack
Shape Name                                           Exadata.Quarter1.84  Exadata.Half1.168   Exadata.Full1.336
Number of Compute Nodes                              2                    4                   8
Total Minimum (Default) Number of Enabled CPU Cores  22                   44                  88
Total Maximum Number of Enabled CPU Cores            84                   168                 336
Total RAM Capacity                                   1440 GB              2880 GB             5760 GB
Number of Exadata Storage Servers                    3                    6                   12
Total Raw Flash Storage Capacity                     38.4 TB              76.8 TB             153.6 TB
Total Usable Storage Capacity                        84 TB                168 TB              336 TB

Exadata X6 shapes provide 200 GB of user disk space for database homes.

Overview of X8M Scalable Exadata Infrastructure


The Oracle Cloud Infrastructure scalable Exadata X8M system model allows you to add additional database and
storage servers after provisioning and create a system that matches your capacity needs.

The New Exadata Cloud Service Resource Model


Exadata Cloud Service instances can now be provisioned with a new infrastructure resource model that replaces the
DB system resource. In the new model, the DB system is split into two resources, the cloud Exadata infrastructure
resource, and the cloud VM cluster resource.
The X8M system model is only compatible with the new resource model. For provisioning new X7 and X8 systems,
Oracle recommends using the new resource model so that your instance will not have to be switched to the new
resource model later.
Oracle will continue to support the Exadata DB system resource for an interim period (for both Exadata instance
creation and management) before the DB system resource is deprecated in the Exadata Cloud Service. Existing
Exadata DB systems can be easily switched to the new resource model with no downtime. For instructions on
switching, see To switch an Exadata DB system to the new Exadata resource model on page 1230.

The Cloud Exadata Infrastructure Resource


The infrastructure resource is the top-level (parent) resource. At the infrastructure level, you control the number of
database and storage servers. You also control Exadata system maintenance scheduling at the Exadata infrastructure
level. This resource is created using the CreateCloudExadataInfrastructure API.
See To add compute and storage resources to a flexible cloud Exadata infrastructure resource on page 1231 for
information on scaling the X8M cloud Exadata infrastructure resource. Note that after adding storage or database
servers to the infrastructure resource, you must then add them to the system's VM cluster to utilize the new capacity.

The Cloud VM Cluster Resource


The VM cluster is a child resource of the infrastructure resource, providing a link between your Exadata cloud
infrastructure resource and Oracle Database. Networking, OCPU count, IORM, and Oracle Grid Infrastructure are
configured and managed at the VM cluster level. This resource is created using the CreateCloudVmCluster API.
See To add database server or storage server capacity to a cloud VM cluster on page 1231 for information on adding
available storage or database servers to the VM cluster. Note that you must add servers to the infrastructure resource
before you can add capacity to the VM cluster.
Exadata Cloud Service instances currently support creating a single cloud VM cluster.
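To make the two-level resource model concrete, here is a rough sketch of creating both resources with the OCI Python SDK. The model and field names (compute_count, storage_count, gi_version, the shape string, and so on) and all OCIDs are assumptions and placeholders for illustration; consult the API reference for the authoritative signatures.

# Sketch: provision a cloud Exadata infrastructure resource and then a cloud VM
# cluster on it (CreateCloudExadataInfrastructure / CreateCloudVmCluster).
# Field names, the shape string, and OCIDs are placeholders/assumptions.
import oci

config = oci.config.from_file()
db_client = oci.database.DatabaseClient(config)

infra_details = oci.database.models.CreateCloudExadataInfrastructureDetails(
    compartment_id="ocid1.compartment.oc1..example",
    availability_domain="Uocm:PHX-AD-1",
    display_name="exa-infra-1",
    shape="Exadata.X8M",          # flexible X8M shape (assumed name)
    compute_count=2,              # initial database servers
    storage_count=3)              # initial storage servers
infra = db_client.create_cloud_exadata_infrastructure(infra_details).data

vm_cluster_details = oci.database.models.CreateCloudVmClusterDetails(
    compartment_id="ocid1.compartment.oc1..example",
    cloud_exadata_infrastructure_id=infra.id,
    display_name="exa-vmcluster-1",
    hostname="exadb",
    cpu_core_count=4,
    gi_version="19.0.0.0",
    ssh_public_keys=["ssh-rsa AAAA...example"],
    subnet_id="ocid1.subnet.oc1..client-example",
    backup_subnet_id="ocid1.subnet.oc1..backup-example")
vm_cluster = db_client.create_cloud_vm_cluster(vm_cluster_details).data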

Additional Exadata Cloud Service Instance Resources


The new Exadata resource model retains the rest of the resource types found in DB systems: Oracle Databases,
database backups, Data Guard Associations, Work Requests, Oracle Database Homes, and database server nodes
(also called "virtual machines").
Note:

The database server file system for database server nodes (also known as
"virtual machines") has changed with the X8M generation of hardware. See
The X8M Virtual Machine File System Structure on page 1353 for details
on the X8M database server node file system.

Switching an Exadata DB System to the New Resource Model and APIs


If you have existing Exadata DB systems in Oracle Cloud Infrastructure, you can switch them to the new resource
model and APIs. This does not change the underlying hardware or shape family of your Exadata Cloud Service
instance. The existing DB system APIs will be deprecated for Exadata by Oracle Cloud Infrastructure for all users
following written notification and a transition period allowing you to switch to the new API and Console interfaces.
Note that this change will not affect bare metal and virtual machine DB systems.
Switching to the new resource model does not impact the DB system's existing Exadata databases or client
connections. If you have created automation that uses the existing DB system API, your applications may need to be
updated to use the new API.

Important! No new systems can be provisioned with the old DB system resource model/APIs after May 15th, 2021.
Support for the old DB system resource model/APIs on existing systems will end on August 15th, 2021. Oracle
recommends that you migrate your Exadata Cloud Service instances to the new resource model APIs as soon as
possible. Converting to the new resource model does not involve any system downtime.
After converting your DB system, you will have two new resources in place of the DB system resource: a cloud
Exadata infrastructure resource, and a cloud VM cluster resource.

What to Expect After Switching


• Your new cloud Exadata infrastructure resource and cloud VM cluster are created in the same compartment as the
DB system they replace
• Your new cloud Exadata infrastructure resource and cloud VM cluster use the same networking configuration as
the DB system they replace
• After the switch, you cannot perform operations on the old Exadata DB system resource
• Switching is permanent, and the change cannot be undone
• X6, X7, X8 and Exadata base systems retain their fixed shapes after the switch, and cannot be expanded
To switch an Exadata DB system to the new Exadata resource model
1. Open the navigation menu. Under Oracle Database, click Bare Metal, VM, and Exadata.
2. Choose your Compartment.
3. In the list of DB systems, find the Exadata DB system you want to switch to the new resource model, and click its
highlighted name to view the system details.
4. Click More Actions, then Switch to New API.
5. In the displayed confirmation page, read the What to expect after switching section. When you are ready to
switch to the new resource model and APIs, click Start.
Caution:
Switching an Exadata DB system to the new resource model and APIs cannot be reversed. If you have automation
for your system that utilizes the DB system APIs, you may need to update your applications prior to switching.

Scaling an Exadata Cloud Service Instance


This topic describes the scaling options available for Exadata Cloud Service instances.
Scaling CPU Cores Within an Exadata Cloud Service Instance
If an Exadata Cloud Service instance requires more compute node processing power, you can scale up the number of
enabled CPU cores symmetrically across all the nodes in the system as follows:
X8M flexible infrastructure systems: You can scale CPU cores in multiples of the number of database servers
currently provisioned for the cloud VM cluster. For example, if you have 6 database servers provisioned, you can
add CPU cores in multiples of 6. At the time of provisioning, X8M systems have 2 database servers. For more
information on adding compute and storage resources to an X8M system, see Scaling Exadata X8M Compute and
Storage on page 1231.
Non-X8M fixed-shape systems: For a base system or an X7 or X8 quarter rack, you can scale in multiples of 2
across the 2 database compute nodes. For an X7 or X8 half rack, you can scale in multiples of 4 across the 4 database
compute nodes. For an X7 or X8 full rack, you can scale in multiples of 8 across the 8 database compute nodes.
For non-metered service instances, you can temporarily modify the compute node processing power (bursting)
or add compute node processing power on a more permanent basis. For a metered service instance, you can simply
modify the number of enabled CPU cores.
You can provision an Exadata Cloud Service instance with zero CPU cores, or scale the service instance down to zero
cores after you provision it. With zero cores, you are billed only for the infrastructure until you scale up the system.
For detailed information about pricing, see Exadata Cloud Service Pricing.

Tip:

OCPU scaling activities are done online with no downtime.


For information on CPU cores per configuration, see Exadata Shape Configurations on page 1224. To learn how to
scale a system, see To scale CPU cores in an Exadata Cloud Service cloud VM cluster or DB system on page 1259.
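For scripted OCPU scaling, the operation maps onto an update of the cloud VM cluster (new resource model) or the DB system (old resource model). The following is a minimal sketch with the OCI Python SDK; the OCIDs are placeholders, and the new core count must respect the per-shape multiples described above.

# Minimal sketch: scale enabled CPU cores online. Use update_cloud_vm_cluster for
# the new resource model or update_db_system for Exadata DB systems. The new
# cpu_core_count must follow the per-shape multiples described above.
import oci

config = oci.config.from_file()
db_client = oci.database.DatabaseClient(config)

# New resource model: cloud VM cluster.
db_client.update_cloud_vm_cluster(
    "ocid1.cloudvmcluster.oc1..example",                 # placeholder OCID
    oci.database.models.UpdateCloudVmClusterDetails(cpu_core_count=12))

# Old resource model: Exadata DB system.
db_client.update_db_system(
    "ocid1.dbsystem.oc1..example",                       # placeholder OCID
    oci.database.models.UpdateDbSystemDetails(cpu_core_count=12))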
Scaling Exadata X8M Compute and Storage
The flexible X8M system model is designed to be easily scaled in place, with no need to migrate the database using
a backup or Data Guard. You can scale an X8M service instance in the Console on the cloud Exadata infrastructure
details page. After adding additional database or storage servers to your cloud Exadata infrastructure resource, you
must add the increased capacity to the associated cloud VM cluster to utilize the newly-provisioned CPU or storage
resources. After adding additional database servers to a VM cluster, you can then allocate the new CPU cores as
described in To scale CPU cores in an Exadata Cloud Service cloud VM cluster or DB system on page 1259. After
adding additional storage servers to your VM cluster, you do not need to take any further action to utilize the new
storage.
Note:

• The Exadata X8M shape does not support removing storage or database
servers from an existing X8M instance.
• For OCI Exadata Cloud Service databases configured with either Oracle
Data Guard or customer-managed keys (encryption keys stored and
managed using the OCI Vault service), the database cannot currently
utilize additional compute nodes added to the Exadata infrastructure.
• For OCI Exadata Cloud Service VM clusters that use node subsetting
(meaning the clusters are configured to use only a subset of all nodes
available in the dedicated Exadata infrastructure instance), you cannot
currently add additional compute nodes to the VM cluster.
See Overview of X8M Scalable Exadata Infrastructure on page 1228 for more information on X8M systems.
To add compute and storage resources to a flexible cloud Exadata infrastructure resource
This task describes how to use the Oracle Cloud Infrastructure Console to scale a flexible cloud Exadata infrastructure
resource. Currently, only Exadata X8M systems in Oracle Cloud Infrastructure have the ability to add database
(compute) and storage servers to an existing service instance.
1. Open the navigation menu. Under Oracle Database, click Bare Metal, VM, and Exadata.
2. Under Exadata at Oracle Cloud, click Exadata Infrastructure.
3. In the list of cloud Exadata infrastructure resources, click the name of the resource you want to scale.
4. Click Scale Infrastructure.
5. Adding database servers: To add compute servers to the infrastructure resource, select the Database Servers
radio button, then enter the number of servers you want to add in the Database servers field.
Adding storage servers: To add storage servers to the infrastructure resource, select the Storage Servers radio
button, then enter the number of servers you want to add in the Storage servers field.
6. Click Scale.
Tip:

After scaling your infrastructure, you must add the new capacity to the cloud
VM cluster before you can use the additional CPU and storage resources in
the Exadata Cloud Service instance.
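If you script this step instead of using the Console, the scale operation is an update of the cloud Exadata infrastructure resource. The sketch below assumes UpdateCloudExadataInfrastructure accepts compute_count and storage_count; verify the field names against the API reference before relying on it.

# Sketch only: add database and storage servers to a flexible (X8M) cloud Exadata
# infrastructure resource. compute_count/storage_count are the new totals and are
# assumed field names; the OCID is a placeholder. Remember to then add the capacity
# to the cloud VM cluster, as described in the next task.
import oci

config = oci.config.from_file()
db_client = oci.database.DatabaseClient(config)

db_client.update_cloud_exadata_infrastructure(
    "ocid1.cloudexadatainfrastructure.oc1..example",     # placeholder OCID
    oci.database.models.UpdateCloudExadataInfrastructureDetails(
        compute_count=3,    # e.g. 2 initial database servers plus 1 added
        storage_count=4))   # e.g. 3 initial storage servers plus 1 added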
To add database server or storage server capacity to a cloud VM cluster
If you have scaled a flexible cloud Exadata infrastructure resource by adding additional database (compute) or storage
servers to the service instance, you must add the additional capacity to the cloud VM cluster to utilize the additional
resources. This topic describes how to use the Oracle Cloud Infrastructure (OCI) Console to add the new capacity to
your cloud VM cluster.

Note:

• For OCI Exadata Cloud Service databases configured with either Oracle
Data Guard or customer-managed keys (encryption keys stored and
managed using the OCI Vault service), the database cannot currently
utilize additional compute nodes added to the Exadata infrastructure.
• For OCI Exadata Cloud Service VM clusters that use node subsetting
(meaning the clusters are configured to use only a subset of all nodes
available in the dedicated Exadata infrastructure instance), you cannot
currently add additional compute nodes to the VM cluster.
1. Open the navigation menu. Under Oracle Database, click Bare Metal, VM, and Exadata.
2. Under Exadata at Oracle Cloud, click Exadata VM Clusters.
3. In the list of cloud VM clusters, click the name of the cluster to which you want to add capacity.
4. On the VM Cluster Details page, click Scale VM Cluster.
5. If you have additional capacity available as a result of scaling the cloud Exadata infrastructure resource, a banner
at the top of the Scale VM Cluster panel provides a message telling you the type and amount of additional
capacity available to the VM cluster. Check the Add Capacity box.
6. Select either the Add Database Server or the Add Storage radio button, depending on which type of capacity
you want to add to the cloud VM cluster.
7. Click Update. The cloud VM cluster goes into the Updating state. When the capacity has been successfully added,
the cluster returns to the Available state.
Tip:

If you have added additional database servers to the cluster, you can allocate
the new CPU cores once the cluster is in the Available state by clicking
the Scale VM Cluster button again. See To scale CPU cores in an Exadata
Cloud Service cloud VM cluster or DB system on page 1259 for more
information on adding CPU cores to your cloud VM cluster.
Scaling X6, X7 and X8 Exadata DB System Configurations
Scaling an Exadata X6, X7, or X8 Exadata Cloud Service instance by moving to a shape with more capacity enables
you to meet the needs of your growing workload. This is useful when a database deployment requires:
• Processing power that is beyond the capacity of the current system configuration.
• Storage capacity that is beyond the capacity of the current system configuration.
• A performance boost that can be delivered by increasing the number of available compute nodes.
• A performance boost that can be delivered by increasing the number of available Exadata Storage Servers.
You can move your workloads to a larger fixed shape (X7 and X8 hardware shapes), or move to the flexible X8M
shape that allows for easy expansion of compute and storage resources as your workloads grow.
To assist with moving your database deployments between Exadata Cloud Service instances, you can restore a backup
to a different service instance that has more capacity, or create a Data Guard association for your database in a service
instance with more capacity, and then perform a switchover so that your new standby database assumes the primary
role. To start the process, contact Oracle and request a service limit increase so that you can provision the larger
service instance needed by your database.

Best Practices for Exadata Cloud Service Instances


Oracle recommends that you follow these best practice guidelines to ensure the manageability of your Exadata Cloud
Service instance:
• Wherever possible, use the Oracle-supplied cloud interfaces such as the Oracle Cloud Infrastructure Console,
API, or CLI, or cloud-specific tools such as dbaascli and dbaasapi to perform lifecycle management and
administrative operations on your Exadata Cloud Service instance. For example, use the exadbcpatchmulti
command to apply Oracle Database patches instead of manually running opatch. In addition, if an operation can be
performed by using the Console as well as a command line utility, Oracle recommends that you use the
Console. For example, use the Console instead of using dbaasapi to create databases.
• Do not change the compute node OS users or manually manipulate SSH key settings associated with your Exadata
DB system.
• Apply only patches that are available through the Database service. Do not apply patches from any other source
unless you are directed to do so by Oracle Support.
• Apply the quarterly patches regularly, every quarter if possible.
• Do not change the ports for Oracle Net Listener.

Network Setup for Exadata Cloud Service Instances


Before you set up an Exadata Cloud Service instance, you must set up a virtual cloud network (VCN) and other
Networking service components. This topic describes the recommended configuration for the VCN and several
related requirements for the Exadata Cloud Service instance.
VCN and Subnets
To launch an Exadata Cloud Service instance, you must have:
• A VCN in the region where you want the Exadata Cloud Service instance
• At least two subnets in the VCN. The two subnets are:
• Client subnet
• Backup subnet
Note:

For Exadata Cloud Service instances using the new Exadata resource model,
networking is configured on the cloud VM cluster resource. For instances
using the DB system resource model, networking is configured on the DB system
resource.
In general, Oracle recommends using regional subnets, which span all availability domains in the region. If you
instead use AD-specific subnets, both the client and backup subnets must be in the same availability domain. The
important thing to know for your Exadata Cloud Service instance is that the resources you create in the two subnets
must be in the same availability domain. For more information, see Overview of VCNs and Subnets on page 2848.
You will create custom route tables for each subnet. You will also create security rules to control traffic to and from
the client network and backup network of the Exadata compute nodes (for cloud VM clusters, nodes are called virtual
machines). More information follows about those items.
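
For example, a VCN with regional client and backup subnets can be created with the OCI CLI along the following lines. This is a minimal sketch: the CIDR blocks, display names, and DNS labels are illustrative assumptions, so substitute values appropriate for your tenancy and confirm the parameters with the CLI help before running the commands.

# Create the VCN (example CIDR and DNS label)
oci network vcn create --compartment-id <compartment_ocid> --cidr-block 10.0.0.0/16 \
    --display-name acmevcniad --dns-label acmevcniad

# Create a regional client subnet (no availability domain specified, so it spans all ADs)
oci network subnet create --compartment-id <compartment_ocid> --vcn-id <vcn_ocid> \
    --cidr-block 10.0.4.0/25 --display-name exacs-client-subnet --dns-label clientpvtad1

# Create a regional backup subnet
oci network subnet create --compartment-id <compartment_ocid> --vcn-id <vcn_ocid> \
    --cidr-block 10.0.5.0/25 --display-name exacs-backup-subnet --dns-label backuppvtad1
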
Option 1: Public Client Subnet with Internet Gateway
This option can be useful when doing a proof-of-concept or development work. You can use this setup in production
if you want to use an internet gateway with the VCN, or if you have services that run only on a public network and
need access to the database. See the following diagram and description.

You set up:


• Subnets:
• Public client subnet (public means that the resources in the subnet can have public IP addresses at your
discretion).
• Private backup subnet (private means that the resources in the subnet cannot have public IP addresses and
therefore cannot receive incoming connections from the internet).
• Gateways for the VCN:
• Internet gateway (for use by the client subnet).
• Service gateway (for use by the backup subnet). Also see Option 1: Service Gateway Access Only to Object
Storage on page 1238.
• Route tables:
• Custom route table for the public client subnet, with a route for 0.0.0.0/0, and target = the internet gateway.
• Separate custom route table for the private backup subnet, with a route rule for the service CIDR label called
OCI <region> Object Storage, and target = the service gateway. Also see Option 1: Service Gateway Access
Only to Object Storage on page 1238.
• Security rules to enable the desired traffic to and from the Exadata compute nodes (for VM clusters, the virtual machines). See Security
Rules for the Exadata Cloud Service instance on page 1240.
• Static route on the Exadata Cloud Service instance's compute nodes (to enable access to Object Storage by way of
the backup subnet).
Important:

See this known issue for information about configuring route rules with
service gateway as the target on route tables associated with public subnets.
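
To make the option 1 routing concrete, the following OCI CLI sketch creates the internet gateway and the public client subnet's custom route table. The OCIDs are placeholders, and the JSON format of --route-rules is an assumption to verify against your CLI version.

# Internet gateway for the VCN (used by the public client subnet)
oci network internet-gateway create --compartment-id <compartment_ocid> --vcn-id <vcn_ocid> \
    --is-enabled true --display-name exacs-igw

# Custom route table for the public client subnet: 0.0.0.0/0 -> internet gateway
oci network route-table create --compartment-id <compartment_ocid> --vcn-id <vcn_ocid> \
    --display-name exacs-client-rt \
    --route-rules '[{"destination": "0.0.0.0/0", "destinationType": "CIDR_BLOCK", "networkEntityId": "<internet_gateway_ocid>"}]'

The backup subnet's route table, which targets the service gateway instead, is sketched in Option 1: Service Gateway Access Only to Object Storage on page 1238.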

Option 2: Private Subnets


Oracle recommends this option for a production system. Both subnets are private and cannot be reached from the
internet. See the following diagram and description.

You set up:


• Subnets:
• Private client subnet.
• Private backup subnet.
• Gateways for the VCN:
• Dynamic routing gateway (DRG), with a FastConnect or IPSec VPN to your on-premises network (for use by
the client subnet).
• Service gateway (for use by the backup subnet to reach Object Storage, and for use by the client subnet to
reach the Oracle YUM repository for OS updates). Also see Option 2: Service Gateway Access to Both Object
Storage and YUM Repos on page 1239.
• NAT gateway (for use by the client subnet to reach public endpoints not supported by the service gateway).
• Route tables:
• Custom route table for the private client subnet, with two rules:
• A rule for the on-premises network's CIDR, and target = DRG.
• A rule for the service CIDR label called All <region> Services in Oracle Services Network, and target =
the service gateway. The Oracle Services Network is a conceptual network in Oracle Cloud Infrastructure
that is reserved for Oracle services. The rule enables the client subnet to reach the regional Oracle YUM
repository for OS updates. Also see Option 2: Service Gateway Access to Both Object Storage and YUM
Repos on page 1239.
• A rule for 0.0.0.0/0, and target = NAT gateway.
• Separate custom route table for the private backup subnet, with one rule:
• The same rule as for the client subnet: for the service CIDR label called All <region> Services in Oracle
Services Network, and target = the service gateway. This rule enables the backup subnet to reach the
regional Object Storage for backups.
• Security rules to enable the desired traffic to and from the Exadata nodes. See Security Rules for the Exadata
Cloud Service instance on page 1240.

• Static route on the compute nodes (for VM clusters, the virtual machines) to enable access to Object Storage by
way of the backup subnet.
Requirements for IP Address Space
If you're setting up Exadata Cloud Service instances (and thus VCNs) in more than one region, make sure the IP
address space of the VCNs does not overlap. This is important if you want to set up disaster recovery with Oracle
Data Guard.
The two subnets you create for the Exadata Cloud Service instance must not overlap with 192.168.128.0/20.
The following table lists the minimum required subnet sizes, depending on the Exadata rack size. For the client
subnet, each node requires two IP addresses, and in addition, three addresses are reserved for Single Client Access
Names (SCANs). For the backup subnet, each node requires one address.
Tip:

The Networking service reserves three IP addresses in each subnet.


Allocating a larger space for the subnet than the minimum required (for
example, at least /25 instead of /28) can reduce the relative impact of those
reserved addresses on the subnet's available space.

Rack Size | Client Subnet: # Required IP Addresses | Client Subnet: Minimum Size | Backup Subnet: # Required IP Addresses | Backup Subnet: Minimum Size
Base System or Quarter Rack | (2 addresses * 2 nodes) + 3 for SCANs + 3 reserved in subnet = 10 | /28 (16 IP addresses) | (1 address * 2 nodes) + 3 reserved in subnet = 5 | /29 (8 IP addresses)
Half Rack | (2 * 4 nodes) + 3 + 3 = 14 | /28 (16 IP addresses) | (1 * 4 nodes) + 3 = 7 | /29 (8 IP addresses)
Full Rack | (2 * 8 nodes) + 3 + 3 = 22 | /27 (32 IP addresses) | (1 * 8 nodes) + 3 = 11 | /28 (16 IP addresses)

VCN Creation Wizard: Not for Production


The Networking section of the Console includes a handy wizard that creates a VCN along with related resources. It
can be useful if you just want to try launching an instance. However, the wizard automatically creates a public subnet
and an internet gateway. You may not want this for your production network, so Oracle recommends you create the
VCN and other resources individually yourself instead of using the wizard.
DNS: Short Names for the VCN, Subnets, and Exadata Cloud Service instance
For the nodes to communicate, the VCN must use the Internet and VCN Resolver. It enables hostname assignment
to the nodes, and DNS resolution of those hostnames by resources in the VCN. It enables round robin resolution of
the database's SCANs. It also enables resolution of important service endpoints required for backing up databases,
patching, and updating the cloud tooling on an Exadata Cloud Service instance. The Internet and VCN Resolver is the
VCN's default choice for DNS in the VCN. For more information, see DNS in Your Virtual Cloud Network on page
2936 and also DHCP Options on page 2943.
When you create the VCN, subnets, and Exadata, you must carefully set the following identifiers, which are related to
DNS in the VCN:
• VCN domain label
• Subnet domain label
• Hostname prefix for the Exadata Cloud Service instance's cloud VM cluster or DB system resource
These values make up the node's fully qualified domain name (FQDN):

<hostname_prefix>-######.<subnet_domain_label>.<vcn_domain_label>.oraclevcn.com
For example:
exacs-abcde1.clientpvtad1.acmevcniad.oraclevcn.com
In this example, you assign exacs as the hostname prefix when you create the cloud VM cluster or DB system.
The Database service automatically appends a hyphen and a five-letter string with the node number at the end. For
example:
• Node 1: exacs-abcde1.clientpvtad1.acmevcniad.oraclevcn.com
• Node 2: exacs-abcde2.clientpvtad1.acmevcniad.oraclevcn.com
• Node 3: exacs-abcde3.clientpvtad1.acmevcniad.oraclevcn.com
• And so on
Requirements for the hostname prefix:
• Recommended maximum: 12 characters. For more information, see the example under the following section,
"Requirements for the VCN and subnet domain labels".
• Cannot be the string localhost
Requirements for the VCN and subnet domain labels:
• Recommended maximum: 14 characters each. The actual underlying requirement is a total of 28 characters
across both domain labels (excluding the period between the labels). For example, both of these are acceptable:
subnetad1.verylongvcnphx or verylongsubnetad1.vcnphx. For simplicity, the recommendation is
14 characters each.
• No hyphens or underscores.
• Recommended: include the region name in the VCN's domain label, and include the availability domain name in
the subnet's domain label.
• In general, the FQDN has a maximum total limit of 63 characters. Here is a safe general rule:
<12_chars_max>-######.<14_chars_max>.<14_chars_max>.oraclevcn.com
The preceding maximums are not enforced when you create the VCN and subnets. However, if the labels exceed the
maximum, the Exadata deployment fails.
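
Because these limits are not enforced at VCN or subnet creation time, it can be worth checking your planned names before provisioning. The following shell sketch uses example values and the FQDN pattern shown above.

# Example values; substitute your planned prefix and domain labels
prefix=exacs                  # recommended maximum: 12 characters
subnet_label=clientpvtad1     # recommended maximum: 14 characters
vcn_label=acmevcniad          # recommended maximum: 14 characters

# The service appends a hyphen and a six-character node suffix (for example, abcde1)
fqdn="${prefix}-abcde1.${subnet_label}.${vcn_label}.oraclevcn.com"
echo "${fqdn} is ${#fqdn} characters"

# Check the 63-character FQDN limit and the 28-character combined domain label limit
if [ "${#fqdn}" -gt 63 ]; then
    echo "WARNING: FQDN exceeds 63 characters; the Exadata deployment will fail" >&2
fi
if [ $(( ${#subnet_label} + ${#vcn_label} )) -gt 28 ]; then
    echo "WARNING: domain labels exceed 28 characters combined" >&2
fi
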
DNS: Between On-Premises Network and VCN
To enable the use of hostnames when on-premises hosts and VCN resources communicate with each other, you have
two options:
• Set up an instance in the VCN to be a custom DNS server. For an example of an implementation of this scenario
with the Oracle Terraform provider, see Hybrid DNS Configuration.
• Manage hostname resolution yourself manually.
Node Access to Object Storage: Static Route
To be able to back up databases, and patch and update cloud tools on an Exadata Cloud Service instance, you must
configure access to Oracle Cloud Infrastructure Object Storage. Regardless of how you configure the VCN with
that access (for example, with a service gateway), you may also need to configure a static route to Object Storage
on each of the compute nodes in the cluster. This is required only if you are not using the automatic backups created
with the Console, APIs, or CLIs. If you create customized backups by using the backup APIs, then you must route
traffic destined for Object Storage through the backup interface (BONDETH1).
Important:

You must configure a static route for Object Storage access on each compute
node in an Exadata Cloud Service instance if you are not creating automatic
backups with the Console, APIs, or CLIs. Otherwise, attempts to back up
databases, and patch or update tools on the system, can fail.

Object Storage IP allocations


Oracle Cloud Infrastructure Object Storage uses the CIDR block IP range 134.70.0.0/17 for all regions. This range
was introduced in April and May of 2018.
As of June 1, 2018, Object Storage no longer supports the following discontinued IP ranges. Oracle recommends
that you remove these older IP addresses from your access-control lists, firewall rules, and other rules after you have
adopted the new IP ranges.
The discontinued IP ranges are:
• Germany Central (Frankfurt): 130.61.0.0/16
• UK South (London): 132.145.0.0/16
• US East (Ashburn): 129.213.0.0/16
• US West (Phoenix): 129.146.0.0/16
To configure a static route for Object Storage access
1. SSH to a compute node in the Exadata Cloud Service instance.

ssh -i <private_key_path> opc@<node_ip_address>


2. Log in as opc and then sudo to the root user. Use sudo su - with a hyphen to invoke the root user's profile.

login as: opc

[opc@dbsys ~]$ sudo su -


3. Identify the gateway configured for the BONDETH1 interface.

[root@dbsys ~]# grep GATEWAY /etc/sysconfig/network-scripts/ifcfg-bondeth1 | awk -F"=" '{print $2}'
10.0.4.1
4. Add the following static route rules for BONDETH1 to the /etc/sysconfig/network-scripts/route-bondeth1 file:

10.0.X.0/XX dev bondeth1 table 211
default via <gateway> dev bondeth1 table 211
134.70.0.0/17 via <gateway_from_previous_step> dev bondeth1
5. Restart the interface.

[root@dbsys ~]# ifdown bondeth1; ifup bondeth1;

The file changes from the previous step take effect immediately after the ifdown and ifup commands run.
6. Repeat the preceding steps on each compute node in the Exadata Cloud Service instance.
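
The preceding steps can also be run as a short script on each node. The following sketch repeats steps 3 through 5 exactly as shown above and assumes you are already the root user; it also assumes the client-network and default routes listed in step 4 are already present in the route-bondeth1 file, so it appends only the Object Storage route.

# Run as root on each compute node in the Exadata Cloud Service instance
# Identify the gateway configured for the BONDETH1 interface
gateway=$(grep GATEWAY /etc/sysconfig/network-scripts/ifcfg-bondeth1 | awk -F"=" '{print $2}')
echo "BONDETH1 gateway: ${gateway}"

# Append the Object Storage static route to the route-bondeth1 file
echo "134.70.0.0/17 via ${gateway} dev bondeth1" >> /etc/sysconfig/network-scripts/route-bondeth1

# Restart the interface so the new route takes effect
ifdown bondeth1; ifup bondeth1
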
Service Gateway for the VCN
Your VCN needs access to both Object Storage for backups and Oracle YUM repos for OS updates.
Depending on whether you use option 1 or option 2 described previously, you use the service gateway in different
ways. See the next two sections.
Option 1: Service Gateway Access Only to Object Storage
You configure the backup subnet to use the service gateway for access only to Object Storage. As a reminder, here's
the diagram for option 1:

In general, you must:


• Perform the tasks for setting up a service gateway on a VCN, and specifically enable the service CIDR label called
OCI <region> Object Storage.
• In the task for updating routing, add a route rule to the backup subnet's custom route table. For the destination
service, use OCI <region> Object Storage and target = the service gateway.
• In the task for updating security rules in the subnet, perform the task on the backup network's network security
group (NSG) or custom security list. Set up a security rule with the destination service set to OCI <region>
Object Storage. See Rule Required Specifically for the Backup Network on page 1243.
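
A rough OCI CLI sketch of this configuration follows. The service OCID and the service CIDR label string are values you look up first with the service list command; the exact --route-rules format is an assumption to confirm against your CLI version.

# List the available services and note the OCID and cidrBlock of the
# entry named "OCI <region> Object Storage"
oci network service list

# Create the service gateway, enabled only for Object Storage
oci network service-gateway create --compartment-id <compartment_ocid> --vcn-id <vcn_ocid> \
    --services '[{"serviceId": "<object_storage_service_ocid>"}]'

# Backup subnet route table: route the Object Storage service CIDR label to the service gateway
oci network route-table create --compartment-id <compartment_ocid> --vcn-id <vcn_ocid> \
    --display-name exacs-backup-rt \
    --route-rules '[{"destination": "<object_storage_service_cidr_block>", "destinationType": "SERVICE_CIDR_BLOCK", "networkEntityId": "<service_gateway_ocid>"}]'
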
Option 2: Service Gateway Access to Both Object Storage and YUM Repos
You configure both the client subnet and backup subnet to use the service gateway for access to the Oracle Services
Network, which includes both Object Storage and the Oracle YUM repos.
Important:

See this known issue for information about accessing Oracle YUM services
through the service gateway.
As a reminder, here's the diagram for option 2:

In general, you must:


• Perform the tasks for setting up a service gateway on a VCN, and specifically enable the service CIDR label called
All <region> Services in Oracle Services Network.
• In the task for updating routing in each subnet, add a rule to each subnet's custom route table. For the destination
service, use All <region> Services in Oracle Services Network and target = the service gateway.
• In the task for updating security rules for the subnet, perform the task on the backup network's network security
group (NSG) or custom security list. Set up a security rule with the destination service set to OCI <region>
Object Storage. See Rule Required Specifically for the Backup Network on page 1243. Note that the client
subnet already has a broad egress rule that covers access to the YUM repos.
Here are a few additional details about using the service gateway for option 2:
• Both the client subnet and backup subnet use the service gateway, but to access different services. You cannot
enable both the OCI <region> Object Storage service CIDR label and the All <region> Services in Oracle
Services Network for the service gateway. To cover the needs of both subnets, you must enable All <region>
Services in Oracle Services Network for the service gateway. The VCN can have only a single service gateway.
• Any route rule that targets a given service gateway must use an enabled service CIDR label and not a CIDR block
as the destination for the rule. That means for option 2, the route tables for both subnets must use All <region>
Services in Oracle Services Network for their service gateway rules.
• Unlike route rules, security rules can use either any service CIDR label (whether the VCN has a service gateway
or not) or a CIDR block as the source or destination CIDR for the rule. Therefore, although the backup subnet has
a route rule that uses All <region> Services in Oracle Services Network, the subnet can have a security rule that
uses OCI <region> Object Storage. See Rule Required Specifically for the Backup Network on page 1243.
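
As an illustration, the client subnet's custom route table for option 2 could be created with all three rules in a single call. The OCIDs and the on-premises CIDR are placeholders, and the destination for the service gateway rule is the cidrBlock string of the All <region> Services in Oracle Services Network label; verify the JSON format against your CLI version.

# Client subnet route table for option 2: DRG, service gateway, and NAT gateway rules
oci network route-table create --compartment-id <compartment_ocid> --vcn-id <vcn_ocid> \
    --display-name exacs-client-rt \
    --route-rules '[
      {"destination": "<on_premises_cidr>", "destinationType": "CIDR_BLOCK", "networkEntityId": "<drg_ocid>"},
      {"destination": "<all_services_cidr_block>", "destinationType": "SERVICE_CIDR_BLOCK", "networkEntityId": "<service_gateway_ocid>"},
      {"destination": "0.0.0.0/0", "destinationType": "CIDR_BLOCK", "networkEntityId": "<nat_gateway_ocid>"}
    ]'
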
Security Rules for the Exadata Cloud Service instance
This section lists the security rules to use with your Exadata Cloud Service instance. Security rules control the types
of traffic allowed for the client network and backup network of the Exadata's compute nodes. The rules are divided
into three sections.
There are different ways to implement these rules. For more information, see Ways to Implement the Security Rules
on page 1243.

Rules Required for Both the Client Network and Backup Network
This section has several general rules that enable essential connectivity for hosts in the VCN.

If you use security lists to implement your security rules, be aware that the rules that follow are included by default in
the default security list. Update or replace the list to meet your particular security needs. The two ICMP rules (general
ingress rules 2 and 3) are required for proper functioning of network traffic within the Oracle Cloud Infrastructure
environment. Adjust the general ingress rule 1 (the SSH rule) and the general egress rule 1 to allow traffic only to and
from hosts that require communication with resources in your VCN.

General ingress rule 1: Allows SSH traffic from anywhere


• Stateless: No (all rules must be stateful)
• Source Type: CIDR
• Source CIDR: 0.0.0.0/0
• IP Protocol: TCP
• Source Port Range: All
• Destination Port Range: 22

General ingress rule 2: Allows Path MTU Discovery fragmentation messages


This rule enables hosts in the VCN to receive Path MTU Discovery fragmentation messages. Without access to these
messages, hosts in the VCN can have problems communicating with hosts outside the VCN.
• Stateless: No (all rules must be stateful)
• Source Type: CIDR
• Source CIDR: 0.0.0.0/0
• IP Protocol: ICMP
• Type: 3
• Code: 4

General ingress rule 3: Allows connectivity error messages within the VCN
This rule enables the hosts in the VCN to receive connectivity error messages from each other.
• Stateless: No (all rules must be stateful)
• Source Type: CIDR
• Source CIDR: Your VCN's CIDR
• IP Protocol: ICMP
• Type: 3
• Code: All

General egress rule 1: Allows all egress traffic


• Stateless: No (all rules must be stateful)
• Destination Type: CIDR
• Destination CIDR: 0.0.0.0/0
• IP Protocol: All
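
If you implement these general rules with a custom security list rather than relying on the default security list, the CLI call might look roughly like the following. The JSON keys follow the Networking security rule format; treat the exact shape as an assumption and confirm it with the CLI help.

# Security list carrying the three general ingress rules and the general egress rule
oci network security-list create --compartment-id <compartment_ocid> --vcn-id <vcn_ocid> \
    --display-name exacs-general-rules \
    --ingress-security-rules '[
      {"protocol": "6", "source": "0.0.0.0/0", "isStateless": false,
       "tcpOptions": {"destinationPortRange": {"min": 22, "max": 22}}},
      {"protocol": "1", "source": "0.0.0.0/0", "isStateless": false,
       "icmpOptions": {"type": 3, "code": 4}},
      {"protocol": "1", "source": "<vcn_cidr>", "isStateless": false,
       "icmpOptions": {"type": 3}}
    ]' \
    --egress-security-rules '[
      {"protocol": "all", "destination": "0.0.0.0/0", "isStateless": false}
    ]'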

Rules Required Specifically for the Client Network


The following security rules are important for the client network.
Important:

• Client ingress rules 1 and 2 only cover connections initiated from within
the client subnet. If you have a client that resides outside the VCN, Oracle
recommends setting up two additional similar rules that instead have the
Source CIDR set to the public IP address of the client.
• Client ingress rules 3 and 4 and client egress rules 1 and 2 allow TCP
and ICMP traffic inside the client network and enable the nodes to
communicate with each other. If TCP connectivity fails across the nodes,
the Exadata cloud VM cluster or DB system resource fails to provision.

Client ingress rule 1: Allows ONS and FAN traffic from within the client subnet
This rule is recommended because it enables Oracle Notification Services (ONS) to communicate about Fast
Application Notification (FAN) events.
• Stateless: No (all rules must be stateful)
• Source Type: CIDR
• Source CIDR: Client subnet's CIDR
• IP Protocol: TCP
• Source Port Range: All
• Destination Port Range: 6200
• Description: An optional description of the rule.

Client ingress rule 2: Allows SQL*NET traffic from within the client subnet
This rule is for SQL*NET traffic and is required in these cases:
• If you need to enable client connections to the database
• If you plan to use Oracle Data Guard
• Stateless: No (all rules must be stateful)
• Source Type: CIDR
• Source CIDR: Client subnet's CIDR
• IP Protocol: TCP
• Source Port Range: All
• Destination Port Range: 1521
• Description: An optional description of the rule.

Client egress rule 1: Allows all TCP traffic inside the client subnet
• Stateless: No (all rules must be stateful)
• Destination Type: CIDR
• Destination CIDR: Client subnet's CIDR
• IP Protocol: TCP
• Source Port Range: All
• Destination Port Range: All
• Description: An optional description of the rule.

Client egress rule 2: Allows all egress traffic (allows connections to the Oracle YUM repos)
Client egress rule 2 is important because it allows connections to the Oracle YUM repos. It is redundant with the
general egress rule in Security Rules for the Exadata Cloud Service instance on page 1240 (and in the default
security list). It is optional but recommended in case the general egress rule (or default security list) is inadvertently
changed.
• Stateless: No (all rules must be stateful)
• Destination Type: CIDR
• Destination CIDR: 0.0.0.0/0
• IP Protocol: All
• Description: An optional description of the rule.
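
If you use a network security group for the client network, the ONS/FAN and SQL*Net ingress rules described above could be added along these lines. The rule JSON mirrors the security rule format used for NSGs; confirm it against the CLI help for your version.

# Create the client network NSG
oci network nsg create --compartment-id <compartment_ocid> --vcn-id <vcn_ocid> \
    --display-name exacs-client-nsg

# Add the ONS/FAN (6200) and SQL*Net (1521) ingress rules from the client subnet's CIDR
oci network nsg rules add --nsg-id <client_nsg_ocid> \
    --security-rules '[
      {"direction": "INGRESS", "protocol": "6", "source": "<client_subnet_cidr>",
       "sourceType": "CIDR_BLOCK", "isStateless": false,
       "tcpOptions": {"destinationPortRange": {"min": 6200, "max": 6200}}},
      {"direction": "INGRESS", "protocol": "6", "source": "<client_subnet_cidr>",
       "sourceType": "CIDR_BLOCK", "isStateless": false,
       "tcpOptions": {"destinationPortRange": {"min": 1521, "max": 1521}}}
    ]'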

Rule Required Specifically for the Backup Network


The following security rule is important for the backup network because it enables the DB system to communicate
with Object Storage through the service gateway (and optionally with the Oracle YUM repos if the client network
doesn't have access to them). It is redundant with the general egress rule in Security Rules for the Exadata Cloud
Service instance on page 1240 (and in the default security list). It is optional but recommended in case the general
egress rule (or default security list) is inadvertently changed.

Backup egress rule: Allows access to Object Storage


• Stateless: No (all rules must be stateful)
• Destination Type: Service
• Destination Service:
• The service CIDR label called OCI <region> Object Storage
• If the client network does not have access to the Oracle YUM repos, use the service CIDR label called All
<region> Services in Oracle Services Network
• IP Protocol: TCP
• Source Port Range: All
• Destination Port Range: 443 (HTTPS)
• Description: An optional description of the rule.
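
A corresponding sketch for the backup network NSG follows. The destination string is the cidrBlock value of the Object Storage service CIDR label (for example, oci-<region>-objectstorage) as returned by oci network service list; treat the exact rule format as an assumption to verify.

# Backup network NSG egress rule to the Object Storage service CIDR label on port 443
oci network nsg rules add --nsg-id <backup_nsg_ocid> \
    --security-rules '[
      {"direction": "EGRESS", "protocol": "6",
       "destination": "<object_storage_service_cidr_block>",
       "destinationType": "SERVICE_CIDR_BLOCK", "isStateless": false,
       "tcpOptions": {"destinationPortRange": {"min": 443, "max": 443}}}
    ]'
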
Ways to Implement the Security Rules
The Networking service offers two ways to implement security rules within your VCN:
• Network security groups
• Security lists
For a comparison of the two methods, see Comparison of Security Lists and Network Security Groups on page
2859.
If you use network security groups
If you choose to use network security groups (NSGs), here is the recommended process:
1. Create an NSG for the client network. Add the following security rules to that NSG:
• The rules listed in Rules Required for Both the Client Network and Backup Network on page 1240
• The rules listed in Rules Required Specifically for the Client Network
2. Create a separate NSG for the backup network. Add the following security rules to that NSG:
• The rules listed in Rules Required for Both the Client Network and Backup Network on page 1240
• The rules listed in Rule Required Specifically for the Backup Network on page 1243
3. When the database administrator creates the Exadata Cloud Service instance, they must choose several networking
components (for example, which VCN and subnets to use):
• When they choose the client subnet, they can also choose which NSG or NSGs to use. Make sure they choose
the client network's NSG.
• When they choose the backup subnet, they can also choose which NSG or NSGs to use. Make sure they
choose the backup network's NSG.
You could instead create a separate NSG for the general rules. Then when the database administrator chooses which
NSGs to use for the client network, make sure they choose both the general NSG and the client network NSG.
Similarly for the backup network, they choose both the general NSG and the backup network NSG.
If you use security lists
If you choose to use security lists, here is the recommended process:

1. Configure the client subnet to use the required security rules:


a. Create a custom security list for the client subnet and add the rules listed in Rules Required Specifically for the
Client Network on page 1241.
b. Associate the following two security lists with the client subnet:

• VCN's default security list with all its default rules. This automatically comes with the VCN. By default it
contains the rules in Rules Required for Both the Client Network and Backup Network on page 1240.
• The new custom security list you created for the client subnet.
2. Configure the backup subnet to use the required security rules:
a. Create a custom security list for the backup subnet and add the rules listed in Rule Required Specifically for
the Backup Network on page 1243.
b. Associate the following two security lists with the backup subnet:
• VCN's default security list with all its default rules. This automatically comes with the VCN. By default it
contains the rules in Rules Required for Both the Client Network and Backup Network on page 1240.
• The new custom security list you created for the backup subnet.
Later when the database administrator creates the Exadata Cloud Service instance, they must choose several
networking components. When they select the client subnet and backup subnet that you've already created and
configured, the security rules are automatically enforced for the nodes created in those subnets.
Caution:

Do not remove the default egress rule from the default security list. If you
do, make sure to instead include the following replacement egress rule in the
client subnet's security list:
• Stateless: No (all rules must be stateful)
• Destination Type: CIDR
• Destination CIDR: 0.0.0.0/0
• IP Protocol: All

Creating an Exadata Cloud Service Instance


This topic explains how to create an Oracle Exadata Cloud Service instance. It also describes how to configure
required access to the Oracle Cloud Infrastructure Object Storage service and set up DNS.
When you create an Exadata Cloud Service instance using the Console or the API, the system is provisioned to
support Oracle databases. The service creates an initial database based on the options you provide and some default
options described later in this topic.
Resources to Be Created
You will create different Exadata Cloud Service resources depending on which resource model you use to create your
Exadata Cloud Service instance.

New Resource Model (Supported for All Exadata Shapes)


If you are using the new Exadata Cloud Service resource model, you will provision the following resources
separately:
• Cloud Exadata infrastructure resource: The infrastructure resource is the top-level (parent) resource. At the
infrastructure level, you control the number of database and storage servers. You also control Exadata system
maintenance scheduling at the Exadata infrastructure level.
• Cloud VM cluster resource: The VM cluster is a child resource of the infrastructure resource, providing a link
between your Exadata cloud infrastructure resource and Oracle Database. Networking, OCPU count, IORM, and
Oracle Grid Infrastructure are configured and managed at the VM cluster level. To create a cloud VM cluster, you
must have an existing cloud Exadata infrastructure resource to house the VM cluster.
Notes:

• Use the new resource model to provision both the flexible X8M shape and fixed-shape systems (X7/X8/Exadata
Base System)
• Exadata Cloud Service instances currently support only one cloud VM cluster
Tip:

Oracle recommends using the new resource model to provision Exadata
Cloud Service instances, regardless of the hardware shape family you are
choosing (X7, X8, or X8M). Doing so will allow you to use the new APIs for
these resource types, and you will not have to convert your service instance
resources (or any automation associated with those resources) at a later time.

Old Resource Model (X7, X8, and Exadata Base Shape Families Only)
You can use the older Exadata Cloud Service resource model if needed to provision X7 and X8 systems. The old
resource model creates a single Exadata DB system resource. See To create an X7 or X8 Exadata DB system on
page 1249 for more information.
Required IAM Policy
To use Oracle Cloud Infrastructure, you must be granted security access in a policy by an administrator. This access
is required whether you're using the Console or the REST API with an SDK, CLI, or other tool. If you get a message
that you don’t have permission or are unauthorized, verify with your administrator what type of access you have and
which compartment you should work in.
For administrators: The policy in Let database admins manage Oracle Cloud database systems on page 2158 lets the
specified group do everything with databases and related Database resources.
If you're new to policies, see Getting Started with Policies on page 2143 and Common Policies on page 2150. If
you want to dig deeper into writing policies for databases, see Details for the Database Service on page 2251.
Prerequisites
• The public key, in OpenSSH format, from the key pair that you plan to use for connecting to the system via SSH.
A sample public key, abbreviated for readability, is shown below.

ssh-rsa AAAAB3NzaC1yc2EAAAABJQAA....lo/gKMLVM2xzc1xJr/
Hc26biw3TXWGEakrK1OQ== rsa-key-20160304

For more information, see Managing Key Pairs on Linux Instances on page 698.
• A correctly configured virtual cloud network (VCN) to launch the system in. Its related networking resources
(gateways, route tables, security lists, DNS, and so on) must also be configured as necessary for the system. For
more information, see Network Setup for Exadata Cloud Service Instances on page 1233.
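
If you do not already have a suitable key pair, you can generate one in OpenSSH format with ssh-keygen; the file name and comment below are arbitrary examples.

# Generate a 2048-bit RSA key pair; the public key is written to ~/.ssh/exacs_key.pub
ssh-keygen -t rsa -b 2048 -C "exacs" -f ~/.ssh/exacs_key
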
Default Options for the Initial Database
To simplify launching an Exadata Cloud Service instance in the Console and when using the API, the following
default options are used for the initial database:
• Console Enabled: False
• Create Container Database: False for version 11.2.0.4 databases. Otherwise, true.
• Create Instance Only (for standby and migration): False
• Database Home ID: Creates a database home
• Database Language: AMERICAN
• Database Sizing Template: odb2
• Database Storage: Automatic Storage Management (ASM)
• Database Territory: AMERICA
• Database Unique Name: The user-specified database name and a system-generated suffix, for example,
dbtst_phx1cs.
• PDB Admin Name: pdbuser (Not applicable for version 11.2.0.4 databases.)

Using the Console


To create a cloud Exadata infrastructure resource
1. Open the navigation menu. Under Oracle Database, click Bare Metal, VM, and Exadata.
2. Under Exadata at Oracle Cloud, click Exadata Infrastructure.
3. Click Create Exadata Infrastructure.
4. Compartment: Select a compartment for the Exadata infrastructure.
5. Display name: Enter a display name for the Exadata infrastructure. The name doesn't need to be unique. An
Oracle Cloud Identifier (OCID) will uniquely identify the cloud Exadata infrastructure resource. Avoid entering
confidential information.
6. Select an availability domain: The availability domain in which the Exadata infrastructure resides.
7. Select the Exadata system model: Select either a fixed-shape system (quarter, half, or full rack X7-2 or X8-2
shapes), or a scalable system (X8M-2).
X8M-2: If you select the flexible X8M-2 system model, your initial Exadata Cloud Service instance will have 2
database servers and 3 storage servers (the equivalents of an X8 quarter rack shape). After provisioning, you can
scale the service instance as needed by adding additional storage servers, compute servers, or both.
X7 and X8: If you select an X7 or X8 system, you are given the choice of provisioning a quarter, half, or full
rack. See Exadata Fixed Hardware Shapes: X6, X7, X8 and Exadata Base on page 1352 for hardware and
capacity details.
Exadata Base: The Exadata base shape comes in a single configuration, and provides an economical alternative
to provisioning a quarter rack system. See Exadata Fixed Hardware Shapes: X6, X7, X8 and Exadata Base on
page 1352.
8. Configure automatic maintenance: Click this button to specify a schedule for the quarterly automatic
infrastructure maintenance. In the Automatic Infrastructure Maintenance Schedule dialog that opens, do the
following:
a. Click the Specify a schedule radio button to choose your preferred month, week, weekday, and start time for
infrastructure maintenance.
b. Under Maintenance months, specify at least one month for each quarter during which Exadata infrastructure
maintenance will take place. You can select more than one month per quarter. If you specify a long lead time
for advanced notification (for example, 4 weeks), then you may want to specify two or three months per
quarter during which maintenance runs can occur. This will ensure that your maintenance updates are applied
in a timely manner after accounting for your required lead time. Lead time is discussed in the following steps.
c. Under Week of the month, specify which week of the month maintenance will take place. Weeks start on
the 1st, 8th, 15th, and 22nd days of the month, and have a duration of seven days. Weeks start and end based
on calendar dates, not days of the week. Maintenance cannot be scheduled for the fifth week of months that
contain more than 28 days.
d. Optional. Under Day of the week, specify the day of the week on which the maintenance will occur. If you
do not specify a day of the week, then Oracle will run the maintenance update on a weekend day to minimize
disruption.
e. Optional. Under Start hour, specify the hour during which the maintenance run will begin. If you do not
specify a start hour, then Oracle will choose the least disruptive time to run the maintenance update.
f. Under Lead Time, specify the number of weeks ahead of the maintenance event you would like to receive a
notification message. Your lead time ensures that a newly released maintenance update is scheduled to account
for your required period of advanced notification.
g. Click Update Maintenance Schedule.
9. Click Show Advanced Options to specify advanced options for the initial database.
In the Tags tab, you can add tags to the database. To apply a defined tag, you must have permissions to use the tag
namespace. For more information about tagging, see Resource Tags on page 213. If you are not sure if you should
apply tags, skip this option (you can apply tags later) or ask your administrator.
10. Click Create Exadata Infrastructure. The cloud Exadata infrastructure appears in the Exadata Infrastructure list
with a status of Provisioning. The infrastructure's icon changes from yellow to green (or red to indicate errors).
WHAT NEXT?

After the cloud Exadata infrastructure resource is successfully provisioned and in the Available status, you can create
a cloud VM cluster on your infrastructure. You must provision both an infrastructure resource and a VM cluster
before you can create your first database in the new Exadata Cloud Service instance.
To create a cloud VM cluster resource
Note:

To create a cloud VM cluster in an Exadata Cloud Service instance, you must
have first created a cloud Exadata infrastructure resource. Exadata Cloud
Service instances currently support creating a single cloud VM cluster.
1. Open the navigation menu. Under Oracle Database, click Bare Metal, VM, and Exadata.
2. Under Exadata at Oracle Cloud, click Exadata VM Clusters.
3. Click Create Exadata VM Cluster.
The Create Exadata VM Cluster page is displayed. Provide the required information to configure the VM
cluster.
4. Compartment: Select a compartment for the VM cluster resource.
5. Display name: Enter a user-friendly display name for the VM cluster. The name doesn't need to be unique. An
Oracle Cloud Identifier (OCID) will uniquely identify the DB system. Avoid entering confidential information.
6. Select Exadata infrastructure: Select the infrastructure resource that will contain the VM cluster. Currently,
cloud Exadata infrastructure resources support only one VM cluster, so you must choose an infrastructure resource
that does not have an existing VM cluster. Click Change Compartment and pick a different compartment from
the one you are working in to view infrastructure resources in other compartments.
7. Configure the VM cluster: Specify the number of OCPU cores you want to allocate to each of the VM cluster's
virtual machine compute nodes. The read-only Requested OCPU count for the Exadata VM cluster field
displays the total number of OCPU cores you are allocating.
8. Configure Exadata storage: Specify the following:
Allocate storage for Exadata sparse snapshots: Select this configuration option if you intend to use snapshot
functionality within your VM cluster. If you select this option, the SPARSE disk group is created, which enables
you to use VM cluster snapshot functionality for PDB sparse cloning. If you do not select this option, the
SPARSE disk group is not created and snapshot functionality will not be available on any database deployments
that are created in the environment.
Allocate storage for local backups: Select this option if you intend to perform database backups to the local
Exadata storage within your Exadata Cloud Service instance. If you select this option, more space is allocated to
the RECO disk group, which is used to store backups on Exadata storage. If you do not select this option, more
space is allocated to the DATA disk group, which enables you to store more information in your databases.
9. Add SSH key: Add the public key portion of each key pair you want to use for SSH access to the DB system.
Upload SSH key files: Select this radio button to browse or drag and drop .pub files.
Paste SSH keys: Select this radio button to paste in individual public keys. To paste multiple keys, click +
Another SSH Key, and supply a single key for each entry.

10. Configure the network settings: Specify the following:


• Virtual cloud network: The VCN in which you want to create the VM cluster. Click Change Compartment
to select a VCN in a different compartment.
• Client subnet: The subnet to which the VM cluster should attach. Click Change Compartment to select a
subnet in a different compartment.
Do not use a subnet that overlaps with 192.168.16.16/28, which is used by the Oracle Clusterware private
interconnect on the database instance. Specifying an overlapping subnet will cause the private interconnect to
malfunction.
• Backup subnet: The subnet to use for the backup network, which is typically used to transport backup
information to and from Oracle Cloud Infrastructure Object Storage, and for Data Guard replication. Click
Change Compartment to select a subnet in a different compartment, if applicable.
Do not use a subnet that overlaps with 192.168.128.0/20. This restriction applies to both the client subnet and
backup subnet.
If you plan to back up databases to Object Storage, see the network prerequisites in Managing Exadata
Database Backups on page 1321.
• Network Security Groups: Optionally, you can specify one or more network security groups (NSGs) for both
the client and backup networks. NSGs function as virtual firewalls, allowing you to apply a set of ingress and
egress security rules to your Exadata Cloud Service VM cluster. A maximum of five NSGs can be specified.
For more information, see Network Security Groups on page 2867 and Network Setup for Exadata Cloud
Service Instances on page 1233.
Note that if you choose a subnet with a security list, the security rules for the VM cluster will be a union of the
rules in the security list and the NSGs.
To use network security groups:
• Check the Use network security groups to control traffic check box. This box appears under both the
selector for the client subnet and the backup subnet. You can apply NSGs to either the client or the backup
network, or to both networks. Note that you must have a virtual cloud network selected to be able to assign
NSGs to a network.
• Specify the NSG to use with the network. You might need to use more than one NSG. If you're not sure,
contact your network administrator.
• To use additional NSGs with the network, click + Another Network Security Group.
• Hostname prefix: Your choice of host name for the Exadata DB system. The host name must begin with an
alphabetic character, and can contain only alphanumeric characters and hyphens (-). The maximum number of
characters allowed for an Exadata DB system is 12.
Important:

The host name must be unique within the subnet. If it is not unique, the
VM cluster will fail to provision.
• Host domain name: The domain name for the VM cluster. If the selected subnet uses the Oracle-provided
Internet and VCN Resolver for DNS name resolution, this field displays the domain name for the subnet and it
can't be changed. Otherwise, you can provide your choice of a domain name. Hyphens (-) are not permitted.
If you plan to store database backups in Object Storage, Oracle recommends that you use a VCN Resolver
for DNS name resolution for the client subnet because it automatically resolves the Swift endpoints used for
backups.
• Host and domain URL: This read-only field combines the host and domain names to display the fully
qualified domain name (FQDN) for the database. The maximum length is 64 characters.

11. Choose a license type: The type of license you want to use for the VM cluster. Your choice affects metering for
billing.
• License Included means the cost of the cloud service includes a license for the Database service.
• Bring Your Own License (BYOL) means you are an Oracle Database customer with an Unlimited
License Agreement or Non-Unlimited License Agreement and want to use your license with Oracle Cloud
Infrastructure. This removes the need for separate on-premises licenses and cloud licenses.
12. Click Show Advanced Options to specify advanced options for the VM cluster:
• Time zone: The default time zone for the DB system is UTC, but you can specify a different time zone. The
time zone options are those supported in both the java.util.TimeZone class and the Oracle Linux operating
system. For more information, see DB System Time Zone on page 1576.
Tip:

If you want to set a time zone other than UTC or the browser-detected
time zone, and if you do not see the time zone you want, try selecting
the Select another time zone option, then selecting "Miscellaneous"
in the Region or country list and searching the additional Time zone
selections.
• Tags: If you have permissions to create a resource, then you also have permissions to apply free-form tags
to that resource. To apply a defined tag, you must have permissions to use the tag namespace. For more
information about tagging, see Resource Tags on page 213. If you are not sure whether to apply tags, then skip
this option (you can apply tags later) or ask your administrator.
13. Click Create Exadata VM Cluster.
WHAT NEXT?
After your VM cluster is successfully created and in the Available state, you can view the VM Cluster Details page
by clicking the name of the VM cluster in the list of clusters. From the VM Cluster Details page, you can create your
first database in the cluster by clicking Create Database.
To create an X7 or X8 Exadata DB system
Tip:

Oracle recommends using the new Exadata Cloud Service resource model
when provisioning a new service instance. The new resource model is
compatible with all available Exadata shape families (X7, X8, and X8M).
The DB system resource described in this topic will be deprecated after a
period where both resource models are supported. If you need to provision
a service instance using the DB system resource model, you will be able to
switch the instance to the new resource model. See To switch an Exadata
DB system to the new Exadata resource model on page 1230 for more
information. Customers with existing Exadata DB systems will be notified in
advance regarding the deprecation of the DB system resource model.
1. Open the navigation menu. Under Oracle Database, click Bare Metal, VM, and Exadata.
2. Click Create DB System.

3. On the Create DB System page, provide the basic information for the DB system:
• Select a compartment: By default, the DB system launches in your current compartment and you can use the
network resources in that compartment.
• Name your DB system: A friendly, display name for the DB system. The name doesn't need to be unique. An
Oracle Cloud Identifier (OCID) will uniquely identify the DB system. Avoid entering confidential information.
• Select an availability domain: The availability domain in which the DB system resides.
• Select a shape type: The shape type you select sets the default shape and filters the shape options in the next
field.
When you select Exadata, you are asked if you would like to use the newer Exadata resource model that
replaces the DB system resource with a cloud Exadata infrastructure resource and a cloud VM cluster. These
resources are compatible with X7, X8, and X8M hardware generations. Click Continue Creating DB System
if you do not want to use the new resource model.
• Select a shape: The shape determines the type of DB system and the resources allocated to the system. To
specify a shape other than the default, click Change Shape, and select an available shape from the list. See
Exadata Fixed Hardware Shapes: X6, X7, X8 and Exadata Base on page 1352 for available shapes in Oracle
Cloud Infrastructure.
Note that the X8M shape is not available when using the DB system resource model.
• Configure the DB system: Specify the following:
• Total node count: The number of nodes in the DB system. The number depends on the shape you select.
• Oracle Database software edition: The database edition supported by the DB system. Exadata DB
systems only support Enterprise Edition - Extreme Performance.
• CPU core count: The number of CPU cores for the DB system. The text below the field indicates the
acceptable values for that shape. The core count is evenly divided across the nodes.
You can increase the CPU cores to accommodate increased demand after you launch the DB system.
For an X8 or X7 Exadata DB system, or an Exadata base system, you can specify zero (0) CPU cores when
you launch the system. This will provision the system and immediately stop it.
See Scaling CPU Cores Within an Exadata Cloud Service Instance on page 1223 for information about
CPU core scaling and the impact on billing. Oracle recommends that, unless you are provisioning a stopped
system (0 cores), you specify at least 3 cores per node.
• Configure storage: Specify the following:
• Cluster Name: (Optional) A unique cluster name for a multi-node DB system. The name must begin with
a letter and contain only letters (a-z and A-Z), numbers (0-9) and hyphens (-). The cluster name can be no
longer than 11 characters and is not case sensitive. Avoid entering confidential information.
• Storage Allocation: The configuration settings that determine the percentage of storage assigned to
DATA, RECO, and optionally, SPARSE disk:
• Database Backups on Exadata Storage: Select this option if you intend to perform database backups
to the local Exadata storage within your Exadata DB system environment. If you select this option,
more space is allocated to the RECO disk group, which is used to store backups on Exadata storage. If
you do not select this option, more space is allocated to the DATA disk group, which enables you to
store more information in your databases.
• Create Sparse Disk Group: Select this configuration option if you intend to use snapshot functionality
within your Exadata DB system environment. If you select this option, the SPARSE disk group is
created, which enables you to use Exadata DB system snapshot functionality for PDB sparse cloning. If
you do not select this option, the SPARSE disk group is not created and Exadata DB system snapshot
functionality will not be available on any database deployments that are created in the environment.
Important:

Creating a sparse disk group impacts the storage available for the
ASM disk groups (DATA and RECO) and you cannot change the
storage allocation configuration after you provision your DB system.

For information about the percentage of storage that will be assigned
to DATA, RECO, and SPARSE disk based on your configuration,
see Storage Configuration on page 1225. Similar information will
display under the options in the Console dialog.
• Add public SSH keys: The public key portion of each key pair you want to use for SSH access to the DB
system. You can browse or drag and drop .pub files, or paste in individual public keys. To paste multiple keys,
click + Another SSH Key, and supply a single key for each entry.
• Choose a license type: The type of license you want to use for the DB system. Your choice affects metering
for billing.

• License Included means the cost of the cloud service includes a license for the Database service.
• Bring Your Own License (BYOL) means you are an Oracle Database customer with an Unlimited
License Agreement or Non-Unlimited License Agreement and want to use your license with Oracle Cloud
Infrastructure. This removes the need for separate on-premises licenses and cloud licenses.
4. Specify the network information:
• Virtual cloud network: The VCN in which to launch the DB system. Click Change Compartment to select
a VCN in a different compartment.
• Client subnet: The subnet to which the Exadata DB system should attach. Click Change Compartment to
select a subnet in a different compartment.
Do not use a subnet that overlaps with 192.168.16.16/28, which is used by the Oracle Clusterware private
interconnect on the database instance. Specifying an overlapping subnet will cause the private interconnect to
malfunction.
• Backup subnet: The subnet to use for the backup network, which is typically used to transport backup
information to and from Oracle Cloud Infrastructure Object Storage, and for Data Guard replication. Click
Change Compartment to select a subnet in a different compartment, if applicable.
Do not use a subnet that overlaps with 192.168.128.0/20. This restriction applies to both the client subnet and
backup subnet.
If you plan to back up databases to Object Storage, see the network prerequisites in Managing Exadata
Database Backups on page 1321.
• Network Security Groups: Optionally, you can specify one or more network security groups (NSGs) for both
the client and backup networks. NSGs function as virtual firewalls, allowing you to apply a set of ingress and
egress security rules to your DB system. A maximum of five NSGs can be specified. For more information,
see Network Security Groups on page 2867 and Network Setup for Exadata Cloud Service Instances on page
1233.
Note that if you choose a subnet with a security list, the security rules for the DB system will be a union of the
rules in the security list and the NSGs.
To use network security groups:
• Check the Use Network Security Groups to Control Client Traffic check box. Note that you must have
already selected a VCN to be able to assign NSGs to the client network.
• Specify the NSG to use with the client network. You might need to use more than one NSG. If you're not
sure, contact your network administrator.
• To use additional NSGs with the client network, click + Another Network Security Group.
• Check the Use Network Security Groups to Control Backup Traffic check box.
• Specify the NSG to use with the backup network just as described previously for the client subnet.
• Hostname prefix: Your choice of host name for the Exadata DB system. The host name must begin with an
alphabetic character, and can contain only alphanumeric characters and hyphens (-). The maximum number of
characters allowed for an Exadata DB system is 12.
Important:

The host name must be unique within the subnet. If it is not unique, the
DB system will fail to provision.
• Host domain name: The domain name for the DB system. If the selected subnet uses the Oracle-provided
Internet and VCN Resolver for DNS name resolution, this field displays the domain name for the subnet and it
can't be changed. Otherwise, you can provide your choice of a domain name. Hyphens (-) are not permitted.
If you plan to store database backups in Object Storage, Oracle recommends that you use a VCN Resolver
for DNS name resolution for the client subnet because it automatically resolves the Swift endpoints used for
backups.
• Host and domain URL: Combines the host and domain names to display the fully qualified domain name
(FQDN) for the database. The maximum length is 64 characters.
5. Click Show Advanced Options to specify advanced options for the DB system:
• Disk redundancy: Exadata DB systems support only high redundancy (3-way mirroring).
• Time zone: The default time zone for the DB system is UTC, but you can specify a different time zone. The
time zone options are those supported in both the java.util.TimeZone class and the Oracle Linux operating
system. For more information, see DB System Time Zone on page 1576.
Tip:

If you want to set a time zone other than UTC or the browser-detected
time zone, and if you do not see the time zone you want, try selecting
the Select another time zone option, then selecting "Miscellaneous"
in the Region or country list and searching the additional Time zone
selections.
• Tags: If you have permissions to create a resource, then you also have permissions to apply free-form tags
to that resource. To apply a defined tag, you must have permissions to use the tag namespace. For more
information about tagging, see Resource Tags on page 213. If you are not sure whether to apply tags, then skip
this option (you can apply tags later) or ask your administrator.
6. After you complete the network configuration and any advanced options, click Next.

7. Provide information for the initial database:


• Database name: The name for the database. The database name must begin with an alphabetic character and
can contain a maximum of eight alphanumeric characters. Special characters are not permitted.
• Database version: The version of the initial database created on the DB system when it is launched. After
the DB system is active, you can create additional databases on it. You can mix database versions on the DB
system.
Note:

If you plan to run Oracle Database 19c on your Exadata DB system,


you must specify version 19c when you create the DB system. Earlier
database versions are supported on a 19c Exadata DB system and can
be created at any time. Exadata DB systems created with earlier Oracle
Database versions will not automatically support Oracle Database 19c.
The DB system must be upgraded manually.
• PDB name: Not applicable to version 11.2.0.4. The name of the pluggable database. The PDB name must
begin with an alphabetic character, and can contain a maximum of 8 alphanumeric characters. The only special
character permitted is the underscore (_).
• Create administrator credentials: A database administrator SYS user will be created with the password you
supply.
• Username: SYS
• Password: Supply the password for this user. The password must meet the following criteria:
A strong password for SYS, SYSTEM, TDE wallet, and PDB Admin. The password must be 9 to 30
characters and contain at least two uppercase, two lowercase, two numeric, and two special characters. The
special characters must be _, #, or -. The password must not contain the username (SYS, SYSTEM, and so
on) or the word "oracle" either in forward or reversed order and regardless of casing.
• Confirm password: Re-enter the SYS password you specified.
• Select workload type: Choose the workload type that best suits your application:
• Online Transactional Processing (OLTP) configures the database for a transactional workload, with a
bias towards high volumes of random data access.
• Decision Support System (DSS) configures the database for a decision support or data warehouse
workload, with a bias towards large data scanning operations.
• Configure database backups: Specify the settings for backing up the database to Object Storage:
• Enable automatic backups: Check the check box to enable automatic incremental backups for this
database.
• Backup retention period: (Optional) If you enable automatic backups, you can choose one of the
following preset retention periods: 7 days, 15 days, 30 days, 45 days, or 60 days. The default selection is 30
days.
• Backup scheduling (UTC): If you enable automatic backups, you can choose a two-hour scheduling
window to control when backup operations begin. If you do not specify a window, the six-hour default window of 00:00 to 06:00 (in the time zone of the DB system's region) is used for your database. See
Automatic Incremental Backups for more information.
• Click Show Advanced Options to specify advanced options for the initial database.
In the Management tab you can specify the following options:
• Character set: The character set for the database. The default is AL32UTF8.
• National character set: The national character set for the database. The default is AL16UTF16.
In the Encryption tab, Use Oracle-managed keys is the only selection and cannot be changed during this
creation process. You can change encryption management to use encryption keys that you manage after the
database is provisioned. See To administer Vault encryption keys on page 1312 for more information.
In the Tags tab, you can add tags to the database. To apply a defined tag, you must have permissions to use the
tag namespace. For more information about tagging, see Resource Tags on page 213. If you are not sure if you
should apply tags, skip this option (you can apply tags later) or ask your administrator.
8. Click Create DB System. The DB system appears in the list with a status of Provisioning. The DB system's icon
changes from yellow to green (or red to indicate errors).
After the DB system's icon turns green in the list of DB systems and displays the Available status, you can click
the highlighted DB system name to see details about the DB system. Note the IP addresses. You'll need the private
or public IP address, depending on network configuration, to connect to the DB system.
Using the API
For information about using the API and signing requests, see REST APIs on page 4409 and Security Credentials on
page 181. For information about SDKs, see Software Development Kits and Command Line Interface on page 4262.
Use these API operations to create Exadata Cloud Service components.

APIs for the New Exadata Cloud Service Resource Model


The new Exadata resource model is compatible with all offered Exadata shape families (X7, X8, and X8M). See The
New Exadata Cloud Service Resource Model on page 1229 for more information.
Tip:

Oracle recommends provisioning new Exadata Cloud Service instances using the new resource model. For Exadata instances, the DB system resource model will be deprecated after a period during which both resource models are supported.
Cloud Exadata infrastructure resource:
• GetCloudExadataInfrastructure
• CreateCloudExadataInfrastructure
• ListCloudExadataInfrastructures
Cloud VM cluster resource:
• GetCloudVmCluster
• CreateCloudVmCluster
• ListCloudVmClusters
• GetCloudVmClusterIormConfig
• UpdateCloudVmClusterIormConfig

APIs for DB System Resource Model (X7 and X8 Shapes Only)


• GetDbSystem
• LaunchDbSystem
• ListDbSystems

Database Homes
• CreateDbHome
• GetDbHome
• ListDbHomes

Shapes and Database Versions


• ListDbSystemShapes
• ListDbVersions
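
For example, these REST operations map to snake_case methods in the OCI Python SDK. The following is a minimal sketch, assuming a configured ~/.oci/config profile and placeholder compartment and availability domain values of your own; verify the method and attribute names against your installed SDK version.

import oci

# Load the default profile from ~/.oci/config (assumes the SDK/CLI is already configured).
config = oci.config.from_file()
db_client = oci.database.DatabaseClient(config)

compartment_id = "ocid1.compartment.oc1..example"   # placeholder: your compartment OCID
availability_domain = "Uocm:PHX-AD-1"                # placeholder: your availability domain

# ListDbSystemShapes: the DB system shapes available in this compartment.
shapes = db_client.list_db_system_shapes(
    compartment_id=compartment_id,
    availability_domain=availability_domain).data
for shape in shapes:
    print(shape.name, shape.available_core_count)

# ListDbVersions: Oracle Database versions available for new databases.
for version in db_client.list_db_versions(compartment_id=compartment_id).data:
    print(version.version)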
Configuring a Static Route for Accessing the Object Store
All the traffic in an Exadata Cloud Service instance is, by default, routed through the data network. To route backup
traffic to the backup interface (BONDETH1), you need to configure a static route on each of the compute nodes in the
cluster. For instructions, see Node Access to Object Storage: Static Route on page 1237.
Setting Up DNS for an Exadata Cloud Service Instance
DNS lets you use host names instead of IP addresses to communicate with an Exadata Cloud Service instance. You
can use the Internet and VCN Resolver (the DNS capability built into the VCN) as described in DNS in Your Virtual
Cloud Network on page 2936. Oracle recommends using a VCN Resolver for DNS name resolution for the client
subnet. It automatically resolves the Swift endpoints required for backing up databases, patching, and updating the
cloud tooling on an Exadata instance.

Maintaining an Exadata Cloud Service Instance

User-Managed Maintenance Updates


Maintaining a secure Exadata Cloud Service instance in the best working order requires you to perform the following
tasks regularly:
• Patching the Oracle Grid Infrastructure and Oracle Database software on the Exadata compute nodes. See
Patching an Exadata Cloud Service Instance Manually on page 1290 and Oracle Clusterware Configuration and
Administration for information and instructions.
• Updating the operating system and the tooling on the compute nodes. See Updating an Exadata Cloud Service
Instance on page 1275 for information and instructions.
Oracle-Managed Infrastructure Maintenance Updates
In addition to the maintenance tasks you perform, Oracle manages the patching and updating of all other
infrastructure components, including the physical compute nodes (Dom0), the Exadata storage servers, and the
Exadata InfiniBand switches. This is referred to as DB system infrastructure maintenance.
Overview of the Infrastructure Patching Process
Infrastructure maintenance begins with patching of the Exadata compute nodes. Compute nodes are updated in a
rolling fashion, with a single node being shut down, patched, and then brought back online while other nodes remain
operational. This process continues until all nodes are patched. After compute node patching completes, Oracle
patches the storage nodes. Storage server patching does not impact compute node availability.
Note that while databases are expected to be available during the patching process, Oracle does not verify that all
database services and pluggable databases are available after a node is brought back online, as these can depend
on the application service definition. Oracle recommends reviewing the documentation on workload management,
application continuity, and client failover best practices to reduce the potential for an outage with your applications.
By following the guidelines in the documentation, you can limit the impact of infrastructure patching to minor service degradation from connection loss as compute nodes are sequentially patched.
Oracle recommends that you follow the Maximum Availability Architecture (MAA) best practices and use Oracle
Data Guard to ensure the highest availability for your critical applications. For databases with Oracle Data Guard
enabled, Oracle recommends that you separate the patching windows for the infrastructure instances running the primary and standby databases, and perform a switchover prior to the maintenance operations for the infrastructure
instance hosting the primary database, to avoid any impact to your primary database during infrastructure patching.
The approximate time for patching operations is as follows:
• Quarter rack: five hours
• Half rack: 10 hours
• Full rack: 20 hours
Typically, Exadata compute nodes require one and a half hours each for patching, and storage servers require one hour each.
Tip:

Do not perform major maintenance operations on your databases or applications during the patching window, as these operations could be impacted by the rolling patch operations.
Scheduling Oracle-Managed Infrastructure Updates
Exadata Cloud Service infrastructure updates are released on a quarterly basis. You can set a maintenance window to
determine the time your quarterly infrastructure maintenance will begin. You can also view scheduled maintenance
runs and the maintenance history of your Exadata Cloud Service instance in the Oracle Cloud Infrastructure Console
on the Exadata Infrastructure Details or DB System Details page. For more information:
• To set the automatic maintenance schedule for Exadata Cloud Service infrastructure on page 1263
• To view or edit the time of the next scheduled maintenance for Exadata Cloud Service infrastructure on page
1264
• To view the maintenance history of an Exadata Cloud Service infrastructure resource on page 1264
In exceptional cases, Oracle can update your system apart from the regular quarterly updates to apply time-sensitive
changes such as security updates. While you cannot opt out of these infrastructure updates, Oracle alerts you in
advance through the Cloud Notification Portal to help you plan for them.
Monitoring Patching Operations Using Lifecycle State Information
The lifecycle state of your infrastructure resource (either the cloud Exadata infrastructure or the DB system resource)
enables you to monitor when the patching of your infrastructure resource begins and ends. In the Oracle Cloud Infrastructure Console, lifecycle state detail messages appear in a tooltip beside the Status field on the Exadata Infrastructure Details or DB System Details page. You can also access these messages using the ListCloudExadataInfrastructures API (for cloud Exadata infrastructure resources) or the ListDbSystems API (for Exadata DB systems), and using tools based on the API, including SDKs and the OCI CLI.
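
For example, you can poll these lifecycle details with the OCI Python SDK. The following is a minimal sketch, assuming a configured ~/.oci/config profile and placeholder OCIDs of your own; verify the method and attribute names against your installed SDK version.

import oci

config = oci.config.from_file()
db_client = oci.database.DatabaseClient(config)

# New resource model: check the cloud Exadata infrastructure resource.
infra_id = "ocid1.cloudexadatainfrastructure.oc1..example"   # placeholder OCID
infra = db_client.get_cloud_exadata_infrastructure(infra_id).data
print(infra.lifecycle_state, infra.lifecycle_details)

# DB system resource model: check the Exadata DB system instead.
db_system_id = "ocid1.dbsystem.oc1..example"                  # placeholder OCID
db_system = db_client.get_db_system(db_system_id).data
print(db_system.lifecycle_state, db_system.lifecycle_details)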
During patching operations, you can expect the following:
• If you specify a maintenance window, then patching begins at your specified start time. The patching process
starts with a series of prerequisite checks to ensure that your system can be successfully patched. These checks
take approximately 30 minutes to complete. While the system is performing the checks, the infrastructure
resource's lifecycle state remains "Available," and there is no lifecycle state message.
For example, if you specify that patching should begin at 8:00 a.m., then Oracle begins patching operations
at 8:00, but the infrastructure resource's lifecycle state does not change from "Available" to "Maintenance in
Progress" until approximately 8:30 a.m.
• When Exadata compute node patching starts, the infrastructure resource's lifecycle state is "Maintenance in
Progress", and the associated lifecycle state message is "The underlying infrastructure of this system (dbnodes) is
being updated."
• When cell storage patching starts, the infrastructure resource's lifecycle state is "Maintenance in Progress", and the
associated lifecycle state message is "The underlying infrastructure of this system (cell storage) is being updated
and this will not impact Database availability."
• After cell patching is complete, the networking switches are patched one at a time, in a rolling fashion.
• When patching is complete, the infrastructure resource's lifecycle state is "Available", and the Console and API-
based tools do not provide a lifecycle state message.

For More Information


For information about the update policy, and details such as the duration and impact on your system's availability and
performance, see Oracle Database Cloud Exadata Service Supported Software Versions and Planning for Updates.

Managing an Exadata Cloud Service Instance


This topic describes management operations you can perform on an Exadata Cloud Service instance at the
infrastructure level.
Note:

If your instance uses the older DB system resource model, all of the
management operations discussed in this topic take place on the DB system
resource.
If your instance uses the newer Exadata Cloud Service instance resource
model, most of the management operations discussed in this topic take place
on the cloud VM cluster resource. However, some operations, including
those related to infrastructure maintenance, take place on the cloud Exadata
infrastructure resource.
For all of the management tasks, this topic states which resource types the
operation takes place on.

You can perform the management tasks discussed in this topic by using the Oracle Cloud Infrastructure Console or
the API.
Required IAM Policy
To use Oracle Cloud Infrastructure, you must be granted security access in a policy by an administrator. This access
is required whether you're using the Console or the REST API with an SDK, CLI, or other tool. If you get a message
that you don’t have permission or are unauthorized, verify with your administrator what type of access you have and
which compartment you should work in.
For administrators: The policy in Let database admins manage Oracle Cloud database systems on page 2158 lets the
specified group do everything with databases and related Database resources.
If you're new to policies, see Getting Started with Policies on page 2143 and Common Policies on page 2150. If
you want to dig deeper into writing policies for databases, see Details for the Database Service on page 2251.
Using the Console
To check the status of a cloud Exadata infrastructure resource
Note:

This topic only applies to Exadata Cloud Service instances using the new
Exadata Cloud Service instance resource model.
1. Open the navigation menu. Under Oracle Database, click Bare Metal, VM, and Exadata.
2. Choose your Compartment.
3. Click Exadata Infrastructure under Exadata at Oracle Cloud.

4. In the list of cloud Exadata infrastructure resources, click the name of the infrastructure you're interested in and
check its icon. The icon text indicates the status of the system. The following lifecycle states apply to the cloud
Exadata infrastructure resource:
• Provisioning: Resources are being reserved for the cloud Exadata infrastructure resource. Provisioning can
take several minutes. The resource is not ready to use yet.
• Available: The cloud Exadata infrastructure was successfully provisioned. You can create a cloud VM cluster
on the resource to complete the infrastructure provisioning.
• Updating: The cloud Exadata infrastructure is being updated. The resource goes into the updating state during
management tasks. For example, when moving the resource to another compartment, or creating a cloud VM
cluster on the resource.
• Terminating: The cloud Exadata infrastructure is being deleted by the terminate action in the Console or API.
• Terminated: The cloud Exadata infrastructure has been deleted and is no longer available.
• Failed: An error condition prevented the provisioning or continued operation of the cloud Exadata
infrastructure.
To check the status of a cloud VM cluster
Note:

This topic only applies to Exadata Cloud Service instances using the new
Exadata Cloud Service instance resource model.
1. Open the navigation menu. Under Oracle Database, click Bare Metal, VM, and Exadata.
2. Choose your Compartment.
3. Click Exadata VM Clusters under Exadata at Oracle Cloud.
4. In the list of cloud VM clusters, find the cluster you're interested in and check its icon. The icon text indicates the
status of the system. The following lifecycle states apply to the cloud VM cluster:
• Provisioning: Resources are being reserved for the cloud VM cluster. Provisioning can take several minutes. The resource is not ready to use yet.
• Available: The cloud VM cluster was successfully provisioned and is ready to use.
• Updating: The cloud VM cluster is being updated. The resource goes into the updating state during management tasks, for example, when moving the resource to another compartment.
• Terminating: The cloud VM cluster is being deleted by the terminate action in the Console or API.
• Terminated: The cloud VM cluster has been deleted and is no longer available.
• Failed: An error condition prevented the provisioning or continued operation of the cloud VM cluster.
To view the status of a virtual machine (database node) in the cloud VM cluster, under Resources, click Virtual
Machines to see the list of virtual machines. In addition to the states listed for a cloud VM cluster, a virtual
machine's status can be one of the following:
• Starting: The database node is being powered on by the start or reboot action in the Console or API.
• Stopping: The database node is being powered off by the stop or reboot action in the Console or API.
• Stopped: The database node was powered off by the stop action in the Console or API.
To check the status of an Exadata DB system
Note:

This topic only applies to Exadata Cloud Service instances using the DB
system resource model.
1. Open the navigation menu. Under Oracle Database, click Bare Metal, VM, and Exadata.
2. Choose your Compartment.
A list of DB systems is displayed.

3. In the list of DB systems, find the system you're interested in and check its icon. The icon text indicates the status
of the system. The following lifecycle states apply to the DB system resource:
• Provisioning: Resources are being reserved for the DB system, the system is booting, and the initial database
is being created. Provisioning can take several minutes. The system is not ready to use yet.
• Available: The DB system was successfully provisioned. A few minutes after the system enters this state, you
can SSH to it and begin using it.
• Terminating: The DB system is being deleted by the terminate action in the Console or API.
• Terminated: The DB system has been deleted and is no longer available.
• Failed: An error condition prevented the provisioning or continued operation of the DB system.
To view the status of a database node, under Resources, click Nodes to see the list of nodes. In addition to the
states listed for a DB system, a node's status can be one of the following:
• Starting: The database node is being powered on by the start or reboot action in the Console or API.
• Stopping: The database node is being powered off by the stop or reboot action in the Console or API.
• Stopped: The database node was powered off by the stop action in the Console or API.
You can also check the status of DB systems and database nodes using the ListDbSystems or ListDbNodes
API operations, which return the lifecycleState attribute.
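
As a sketch of that API route using the OCI Python SDK, assuming a configured ~/.oci/config profile and placeholder OCIDs of your own (verify the method and attribute names against your installed SDK version):

import oci

config = oci.config.from_file()
db_client = oci.database.DatabaseClient(config)

compartment_id = "ocid1.compartment.oc1..example"   # placeholder: your compartment OCID
db_system_id = "ocid1.dbsystem.oc1..example"         # placeholder: your DB system OCID

# ListDbSystems: each DB system is returned with its lifecycleState attribute.
for db_system in db_client.list_db_systems(compartment_id=compartment_id).data:
    print(db_system.display_name, db_system.lifecycle_state)

# ListDbNodes: the nodes of a DB system, also with lifecycleState.
nodes = db_client.list_db_nodes(
    compartment_id=compartment_id, db_system_id=db_system_id).data
for node in nodes:
    print(node.hostname, node.lifecycle_state)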
To start, stop, or reboot an Exadata Cloud Service cloud VM cluster or DB system
1. Open the navigation menu. Under Oracle Database, click Bare Metal, VM, and Exadata.
2. Choose your Compartment.
3. Navigate to the cloud VM cluster or DB system you want to start, stop, or reboot:
Cloud VM clusters (new resource model): Under Exadata at Oracle Cloud, click Exadata VM Clusters. In the
list of VM clusters, find the VM cluster you want to access and click its highlighted name to view the details page
for the cluster.
DB systems: Under Bare Metal, VM, and Exadata, click DB Systems. In the list of DB systems, find the Exadata
DB system you want to access, and then click its name to display details about it.
4. Under Resources, click Virtual Machines (for cloud VM clusters) or Nodes (for DB systems) to display the
compute nodes of the cloud service instance. Click the Actions icon (three dots) for a node and then click one of
the following actions:
• Start: Restarts a stopped node. After the node is restarted, the Stop action is enabled.
• Stop: Shuts down the node. After the node is powered off, the Start action is enabled.
• Reboot: Shuts down the node, and then restarts it.
Note:

• For billing purposes, the Stop state has no effect on the resources you
consume. Billing continues for virtual machines or nodes that you stop,
and related resources continue to apply against any relevant quotas.
You must Terminate a cloud VM cluster or DB system to remove its
resources from billing and quotas.
• After you restart or reboot a node, the floating IP address might take
several minutes to be updated and display in the Console.
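
The equivalent API operation is DbNodeAction. The following is a minimal sketch using the OCI Python SDK, assuming a configured ~/.oci/config profile and a placeholder node OCID; the STOP, START, SOFTRESET, and RESET action values mirror the REST API and should be verified against your installed SDK version.

import oci

config = oci.config.from_file()
db_client = oci.database.DatabaseClient(config)

db_node_id = "ocid1.dbnode.oc1..example"   # placeholder: OCID of the virtual machine or node

# Stop the node; use "START" to power it back on, or "SOFTRESET"/"RESET" to reboot it.
node = db_client.db_node_action(db_node_id, "STOP").data
print(node.lifecycle_state)   # expected to show "STOPPING" while the action is in progress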
To scale CPU cores in an Exadata Cloud Service cloud VM cluster or DB system
Note:

For information on adding additional database (compute) and storage servers to X8M cloud VM clusters, see To add compute and storage resources to a flexible cloud Exadata infrastructure resource on page 1231 and To add database server or storage server capacity to a cloud VM cluster on page 1231. Adding additional database servers to your X8M cloud VM cluster will increase the number of CPU cores available for scaling.

If an Exadata Cloud Service instance requires more compute node processing power, you can scale up (increase) the
number of enabled CPU cores (OCPUs) in the instance.
You can also scale a cloud VM cluster or DB system (except for X6 systems) down to zero (0) CPU cores to
temporarily stop the system and be charged only for the hardware infrastructure. For more information about scaling
down, see Scaling Options on page 1223. If you are not scaling down to a stopped system (0 cores), Oracle recommends scaling to at least 3 cores per node.
CPU cores must be scaled symmetrically across all nodes in the cloud VM cluster or DB system. Use multiples of
two for a base system or quarter rack, multiples of four for a half rack, and multiples of eight for a full rack. The total
number of CPU cores in a rack must not exceed the maximum limit for that shape.
Tip:

OCPU scaling activities are done online with no downtime.


1. Open the navigation menu. Under Oracle Database, click Bare Metal, VM, and Exadata.
2. Choose your Compartment.
3. Navigate to the cloud VM cluster or DB system you want to scale:
Cloud VM clusters (new resource model): Under Exadata at Oracle Cloud, click Exadata VM Clusters. In the
list of VM clusters, find the VM cluster you want to access and click its highlighted name to view the details page
for the cluster.
DB systems: Under Bare Metal, VM, and Exadata, click DB Systems. In the list of DB systems, find the Exadata
DB system you want to access, and then click its name to display details about it.
4. Click Scale VM Cluster (for cloud VM clusters) or Scale CPU Cores (for DB systems) and then specify a new
number of CPU cores. The text below the field indicates the acceptable values, based on the shape used when the
DB system was launched.
5. Click Update.
Note:

If you scale a cloud VM cluster or DB system (except for X6 systems) down to zero (0) CPU cores, the floating IP address of the nodes might take several minutes to be updated and display in the Console.
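
The same scaling can be requested through the UpdateCloudVmCluster or UpdateDbSystem operations. The following is a minimal sketch using the OCI Python SDK, assuming a configured ~/.oci/config profile and placeholder OCIDs; verify the model and field names against your installed SDK version.

import oci

config = oci.config.from_file()
db_client = oci.database.DatabaseClient(config)
models = oci.database.models

# New resource model: scale the cloud VM cluster to a new symmetric OCPU count.
cluster_id = "ocid1.cloudvmcluster.oc1..example"   # placeholder OCID
cluster = db_client.update_cloud_vm_cluster(
    cluster_id, models.UpdateCloudVmClusterDetails(cpu_core_count=8)).data
print(cluster.lifecycle_state)   # "UPDATING" while the scaling request is applied

# DB system resource model: scale the Exadata DB system instead.
db_system_id = "ocid1.dbsystem.oc1..example"        # placeholder OCID
db_system = db_client.update_db_system(
    db_system_id, models.UpdateDbSystemDetails(cpu_core_count=8)).data
print(db_system.lifecycle_state)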
To move an Exadata Cloud Service infrastructure resource to another compartment
Note:

• To move resources between compartments, resource users must have sufficient access permissions on the compartment that the resource is being moved to, as well as the current compartment. For more information about permissions for Database resources, see Details for the Database Service on page 2251.
• If your Exadata Cloud Service instance is in a security zone, the destination compartment must also be in a security zone. See the Security Zone Policies topic for a full list of policies that affect Database service resources.
1. Open the navigation menu. Under Oracle Database, click Bare Metal, VM, and Exadata.
2. Choose your Compartment.

3. Navigate to the cloud Exadata infrastructure, cloud VM cluster or DB system you want to move:
Cloud Exadata infrastructure (new resource model): Under Exadata at Oracle Cloud, click Exadata
Infrastructure. In the list of infrastructure resources, find the infrastructure you want to access and click its
highlighted name to view its details page.
Cloud VM clusters (new resource model): Under Exadata at Oracle Cloud, click Exadata VM Clusters. In the
list of VM clusters, find the VM cluster you want to access and click its highlighted name to view the details page
for the cluster.
DB systems: Under Bare Metal, VM, and Exadata, click DB Systems. In the list of DB systems, find the Exadata
DB system you want to access, and then click its name to display details about it.
4. Click Move Resource.
5. Select the new compartment.
6. Click Move Resource.
For information about dependent resources for Database resources, see Moving Database Resources to a Different
Compartment on page 1141.
To terminate Exadata Cloud Service infrastructure-level resources
This topic describes how to terminate a cloud Exadata infrastructure, cloud VM cluster, or DB system resource in an
Exadata Cloud Service instance.
Note:

The database data is local to the cloud VM cluster or DB system hosting it and is lost when the system is terminated. Oracle recommends that you back up any data in the cloud VM cluster or DB system before terminating it. Terminating an Exadata Cloud Service resource permanently deletes it and any databases running on it.
1. Open the navigation menu. Under Oracle Database, click Bare Metal, VM, and Exadata.
2. Choose your Compartment.
3. Navigate to the cloud Exadata infrastructure, cloud VM cluster, or DB system you want to terminate:
Cloud Exadata infrastructure (new resource model): Under Exadata at Oracle Cloud, click Exadata
Infrastructure. In the list of infrastructure resources, find the infrastructure you want to access and click its
highlighted name to view its details page.
Cloud VM clusters (new resource model): Under Exadata at Oracle Cloud, click Exadata VM Clusters. In the
list of VM clusters, find the VM cluster you want to access and click its highlighted name to view the details page
for the cluster.
DB systems: Under Bare Metal, VM, and Exadata, click DB Systems. In the list of DB systems, find the Exadata
DB system you want to access, and then click its name to display details about it.
4. For cloud VM clusters and DB systems, click More Actions, then Terminate on the resource details page. For
cloud Exadata infrastructure resources, click Terminate on the resource details page.
Confirm when prompted.
The resource's icon indicates Terminating.
Note:

If you are terminating a cloud Exadata infrastructure resource that contains a cloud VM cluster, you must check the box labelled Also delete the VM cluster associated with this infrastructure to confirm that you intend to delete the VM cluster.
At this point, you cannot connect to the system and any open connections are terminated.

To edit the network security groups (NSGs) for your client or backup network
Your client and backup networks can each use up to five network security groups (NSGs). Note that if you choose a
subnet with a security list, the security rules for the cloud VM cluster or DB system will be a union of the rules in the
security list and the NSGs. For more information, see Network Security Groups on page 2867 and Network Setup
for Exadata Cloud Service Instances on page 1233.
1. Open the navigation menu. Under Oracle Database, click Bare Metal, VM, and Exadata.
2. Choose your Compartment.
3. Navigate to the cloud VM cluster or DB system you want to manage:
Cloud VM clusters (new resource model): Under Exadata at Oracle Cloud, click Exadata VM Clusters. In the
list of VM clusters, find the VM cluster you want to access and click its highlighted name to view the details page
for the cluster.
DB systems: Under Bare Metal, VM, and Exadata, click DB Systems. In the list of DB systems, find the Exadata
DB system you want to access, and then click its name to display details about it.
4. In the Network details, click the Edit link to the right of the Client Network Security Groups or Backup
Network Security Groups field.
5. In the Edit Network Security Groups dialog, click + Another Network Security Group to add an NSG to the
network.
To change an assigned NSG, click the drop-down menu displaying the NSG name, then select a different NSG.
To remove an NSG from the network, click the X icon to the right of the displayed NSG name.
6. Click Save.
To manage your BYOL database licenses
If you want to control the number of database licenses that you run at any given time, you can scale up or down the
number of OCPUs on the instance. These additional licenses are metered separately.
1. Open the navigation menu. Under Oracle Database, click Bare Metal, VM, and Exadata.
2. Choose your Compartment.
3. Navigate to the cloud VM cluster or DB system you want to scale:
Cloud VM clusters (new resource model): Under Exadata at Oracle Cloud, click Exadata VM Clusters. In the
list of VM clusters, find the VM cluster you want to access and click its highlighted name to view the details page
for the cluster.
DB systems: Under Bare Metal, VM, and Exadata, click DB Systems. In the list of DB systems, find the Exadata
DB system you want to access, and then click its name to display details about it.
4. Click Scale VM Cluster (for cloud VM clusters) or Scale CPU Cores (for DB systems) and then specify a new
number of CPU cores. The text below the field indicates the acceptable values, based on the shape used when the
DB system was launched.
5. Click Update.
To change the license type of an Exadata Cloud Service cloud VM cluster or DB system
Note:

Updating the license type is not supported for systems running on the X6
shape. The feature is supported for X7 and higher shapes.
1. Open the navigation menu. Under Oracle Database, click Bare Metal, VM, and Exadata.
2. Choose your Compartment.

3. Navigate to the cloud VM cluster or DB system you want to manage:


Cloud VM clusters (new resource model): Under Exadata at Oracle Cloud, click Exadata VM Clusters. In the
list of VM clusters, find the VM cluster you want to access and click its highlighted name to view the details page
for the cluster.
DB systems: Under Bare Metal, VM, and Exadata, click DB Systems. In the list of DB systems, find the Exadata
DB system you want to access, and then click its name to display details about it.
4. On the resource details page, click Update License Type.
The dialog displays the options with your current license type selected.
5. Select the new license type.
6. Click Save.
To manage tags for your Exadata Cloud Service resources
1. Open the navigation menu. Under Oracle Database, click Bare Metal, VM, and Exadata.
2. Choose your Compartment.
3. Find the cloud Exadata infrastructure, cloud VM cluster, DB system or database resource you're interested in, and
click the name.
4. Click the Tags tab to view or edit the existing tags. Or click More Actions and then Apply Tags to add new ones.
For more information, see Resource Tags on page 213.
To view a work request for your Exadata Cloud Service resources
1. Open the navigation menu. Under Oracle Database, click Bare Metal, VM, and Exadata.
2. Choose your Compartment.
A list of DB systems is displayed.
3. Find the cloud Exadata infrastructure, cloud VM cluster, DB system or database resource you're interested in, and
click the name.
4. In the Resources section, click Work Requests. The status of all work requests appears on the page.
5. To see the log messages, error messages, and resources that are associated with a specific work request, click the
operation name. Then, select an option in the More information section.
For associated resources, you can click the Actions icon (three dots) next to a resource to copy the resource's
OCID.
For more information, see Work Requests on page 262.
To set the automatic maintenance schedule for Exadata Cloud Service infrastructure
This task describes how to set the maintenance schedule for a cloud Exadata infrastructure or Exadata DB system
resource. See Oracle-Managed Infrastructure Maintenance Updates on page 1255 for more information on this type
of maintenance.
1. Open the navigation menu. Under Oracle Database, click Bare Metal, VM, and Exadata.
2. Navigate to the cloud Exadata infrastructure or DB system you want to access:
Cloud Exadata infrastructure (new resource model): Under Exadata at Oracle Cloud, click Exadata
Infrastructure. In the list of infrastructure resources, find the infrastructure you want to access and click its
highlighted name to view its details page.
DB systems: Under Bare Metal, VM, and Exadata, click DB Systems. In the list of DB systems, find the Exadata
DB system you want to access, and then click its name to display details about it.
3. On the resource details page, under Maintenance or Infrastructure Maintenance, click the edit link in the
Maintenance Schedule field.
4. In the Automatic Maintenance Schedule dialog, select Specify a schedule.
5. Under Maintenance months, specify at least one month for each quarter during which Exadata infrastructure
maintenance will take place. You can select more than one month per quarter. If you specify a long lead time
for advanced notification (for example, 4 weeks), then you may want to specify two or three months per quarter during which maintenance runs can occur. This will ensure that your maintenance updates are applied in a timely
manner after accounting for your required lead time. Lead time is discussed in the following steps.
6. Under Week of the month, specify which week of the month maintenance will take place. Weeks start on the 1st,
8th, 15th, and 22nd days of the month, and have a duration of seven days. Weeks start and end based on calendar
dates, not days of the week. Maintenance cannot be scheduled for the fifth week of months that contain more than
28 days.
7. Optional. Under Day of the week, specify the day of the week on which the maintenance will occur. If you do not
specify a day of the week, then Oracle will run the maintenance update on a weekend day to minimize disruption.
8. Optional. Under Start hour, specify the hour during which the maintenance run will begin. If you do not specify a
start hour, then Oracle will choose the least disruptive time to run the maintenance update.
9. Under Lead Time, specify the number of weeks ahead of the maintenance event you would like to receive a
notification message. Your lead time ensures that a newly released maintenance update is scheduled to account for
your required period of advanced notification.
10. Click Update Maintenance Schedule.
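
The maintenance schedule can also be set through the UpdateCloudExadataInfrastructure operation by supplying a maintenance window. The following is a minimal sketch with the OCI Python SDK, assuming a configured ~/.oci/config profile and a placeholder infrastructure OCID; the MaintenanceWindow model and its field names are this sketch's reading of the SDK's snake_case mapping of the REST API and should be verified against your installed SDK version.

import oci

config = oci.config.from_file()
db_client = oci.database.DatabaseClient(config)
models = oci.database.models

infra_id = "ocid1.cloudexadatainfrastructure.oc1..example"   # placeholder OCID

# Example schedule: February/May/August/November, second week of the month,
# Saturday, starting in the 04:00 UTC hour, with two weeks of lead time.
window = models.MaintenanceWindow(
    preference="CUSTOM_PREFERENCE",
    months=[models.Month(name=m) for m in ("FEBRUARY", "MAY", "AUGUST", "NOVEMBER")],
    weeks_of_month=[2],
    days_of_week=[models.DayOfWeek(name="SATURDAY")],
    hours_of_day=[4],
    lead_time_in_weeks=2)

details = models.UpdateCloudExadataInfrastructureDetails(maintenance_window=window)
infra = db_client.update_cloud_exadata_infrastructure(infra_id, details).data
print(infra.maintenance_window)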
To view or edit the time of the next scheduled maintenance for Exadata Cloud Service infrastructure
1. Open the navigation menu. Under Oracle Database, click Bare Metal, VM, and Exadata.
2. Navigate to the cloud Exadata infrastructure or DB system you want to access:
Cloud Exadata infrastructure (new resource model): Under Exadata at Oracle Cloud, click Exadata
Infrastructure. In the list of infrastructure resources, find the infrastructure you want to access and click its
highlighted name to view its details page.
DB systems: Under Bare Metal, VM, and Exadata, click DB Systems. In the list of DB systems, find the Exadata
DB system you want to access, and then click its name to display details about it.
3. On the resource details page, under Maintenance or Infrastructure Maintenance, click the view link in the Next
Maintenance field.
4. On the Maintenance page, scheduled maintenance events are listed.
5. Optional. To change the time of the next scheduled maintenance, click the Edit link in the Scheduled Start Time
field.
6. In the Edit Infrastructure Maintenance Scheduled Start Time dialog, enter a date and time in the Scheduled
start time field.
The following restrictions apply:
• Infrastructure maintenance cannot be rescheduled to occur more than six months after the announcement of the maintenance update's availability. If a new patch is announced prior to your rescheduled maintenance run, the newer patch will be applied on your specified date. You can reschedule your maintenance to take place earlier than it is currently scheduled.
• Oracle reserves certain dates each quarter for internal maintenance operations, and you cannot schedule your
maintenance on these dates. When using the Console, the selection of these dates is disabled.
7. Click Update Scheduled Start Time.
To view the maintenance history of an Exadata Cloud Service infrastructure resource
This task describes how to view the maintenance history for a cloud Exadata infrastructure or DB system resource.
See Oracle-Managed Infrastructure Maintenance Updates on page 1255 for more information on this type of
maintenance.
1. Open the navigation menu. Under Oracle Database, click Bare Metal, VM, and Exadata.
2. Navigate to the cloud Exadata infrastructure or DB system you want to access:
Cloud Exadata infrastructure (new resource model): Under Exadata at Oracle Cloud, click Exadata
Infrastructure. In the list of infrastructure resources, find the infrastructure you want to access and click its
highlighted name to view its details page.
DB systems: Under Bare Metal, VM, and Exadata, click DB Systems. In the list of DB systems, find the Exadata
DB system you want to access, and then click its name to display details about it.

3. On the resource details page, under Maintenance or Infrastructure Maintenance, click the view link in the Next
Maintenance field.
4. Click Maintenance History to see a list of past maintenance events including details on their completion state.
Using the API
For information about using the API and signing requests, see REST APIs on page 4409 and Security Credentials on
page 181. For information about SDKs, see Software Development Kits and Command Line Interface on page 4262.
Use these API operations to manage Exadata Cloud Service instance components.
Cloud Exadata infrastructure resource (new resource model):
• ListCloudExadataInfrastructures
• GetCloudExadataInfrastructure
• ChangeCloudExadataInfrastructureCompartment
• UpdateCloudExadataInfrastructure
• DeleteCloudExadataInfrastructure
Cloud VM cluster (new resource model):
• ListCloudVmClusters
• GetCloudVmCluster
• ChangeCloudVmClusterCompartment
• UpdateCloudVmCluster
• DeleteCloudVmCluster
DB systems (old resource model):
• ListDbSystems
• GetDbSystem
• ChangeDbSystemCompartment
• UpdateDbSystem
• TerminateDbSystem
Virtual machine nodes (all Exadata Cloud Service instances):
• DbNodeAction: Use this operation to power cycle a node in the DB system.
• ListDbNodes
• GetDbNode
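
As an illustration of how these operations map to the OCI Python SDK, the following minimal sketch terminates a resource at each level. It assumes a configured ~/.oci/config profile and placeholder OCIDs of your own; treat it as a sketch rather than a recommended workflow, because these calls permanently delete the resources and any databases on them.

import oci

config = oci.config.from_file()
db_client = oci.database.DatabaseClient(config)

# DB system resource model: TerminateDbSystem deletes the DB system and its databases.
db_client.terminate_db_system("ocid1.dbsystem.oc1..example")   # placeholder OCID

# New resource model: delete the cloud VM cluster first, then the infrastructure resource.
db_client.delete_cloud_vm_cluster("ocid1.cloudvmcluster.oc1..example")   # placeholder OCID
db_client.delete_cloud_exadata_infrastructure(
    "ocid1.cloudexadatainfrastructure.oc1..example")                      # placeholder OCID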

Managing Exadata Cloud Service I/O Resource Management (IORM)


This topic explains the I/O Resource Management (IORM) feature and how to enable it, modify the IORM settings,
and disable it by using the Console or the API.
About IORM
The I/O Resource Management (IORM) feature allows you to manage how multiple databases share the I/O resources
of an Oracle Exadata cloud VM cluster (for systems using the new resource model) or DB system.
On an Exadata VM cluster or DB system, all databases share dedicated storage servers which include flash storage.
By default, the databases are given equal priority with respect to these resources. The Exadata storage management
software uses a first come, first served approach for query processing. If a database executes a major query that
overloads I/O resources, overall system performance can be slowed down.
IORM allows you to assign priorities to your databases to ensure critical queries are processed first when workloads
exceed their resource allocations. You assign priorities by creating directives that specify the number of shares
for each database. The number of shares corresponds to a percentage of resources given to that database when I/O
resources are stressed.

Directives work together with an overall optimization objective you set for managing the resources. The following
objectives are available:
• Auto - Recommended. IORM determines the optimization objective and continuously and dynamically adjusts the optimal settings based on the observed workloads and enabled resource plans.
• Balanced - For critical OLTP and DSS workloads. This setting balances low disk latency and high throughput.
This setting limits disk utilization of large I/Os to a lesser extent than low latency to achieve a balance between
good latency and good throughput.
• High throughput - For critical DSS workloads that require high throughput.
• Low latency - For critical OLTP workloads. This setting provides the lowest possible latency by significantly
limiting disk utilization.
For more information about IORM, see Exadata System Software User's Guide.
Required IAM Policy
To use Oracle Cloud Infrastructure, you must be granted security access in a policy by an administrator. This access
is required whether you're using the Console or the REST API with an SDK, CLI, or other tool. If you get a message
that you don’t have permission or are unauthorized, verify with your administrator what type of access you have and
which compartment you should work in.
For administrators: The policy in Let database admins manage Oracle Cloud database systems on page 2158 lets the
specified group do everything with databases and related Database resources.
If you're new to policies, see Getting Started with Policies on page 2143 and Common Policies on page 2150. If
you want to dig deeper into writing policies for databases, see Details for the Database Service on page 2251.
Using the Console
To enable IORM on your Exadata cloud VM cluster
Note:

This topic only applies to Exadata Cloud Service systems using the new
infrastructure resource model. If you are enabling IORM for an Exadata DB
system, see To enable IORM on your Exadata DB system on page 1267.
Enabling IORM includes specifying an optimization objective and configuring your resource plan directives.
1. Open the navigation menu. Under Oracle Database, click Bare Metal, VM, and Exadata.
2. Choose your Compartment.
3. Click Exadata VM Clusters under the Exadata at Oracle Cloud.
4. In the list of VM clusters, find the VM cluster for which you want to enable IORM, and click its highlighted
name. The cluster's details are displayed, showing the IORM status as "Disabled."
5. Click More Actions, then Enable IORM.
It might take a minute for the Enable I/O Resource Management dialog to retrieve the VM cluster information.
6. Select the objective to apply to the resource plan:
• Auto - (Recommended) Dynamically changes the objective based on the resource plan and observed workloads.
• Balanced - Weighs high throughput and low latency evenly.
• High throughput - Provides the best throughput for DSS workloads.
• Low latency - Provides the best latency for critical OLTP workloads.
7. Configure the resource plan default directive by setting the number of shares. This number of shares is assigned to
each database not associated with a specific directive.
8. In the Resource Plan Directives section, add a directive for each database you want to assign a greater or lesser
number of shares than the default directive.
To add a directive, click + Additional Directive, then specify the database and the number of shares for that
database.

9. When you are done adding directives, click Enable.


While the IORM configuration settings are being applied, the VM cluster details page shows the IORM status
as "Updating." The update might take several minutes to complete but should have no impact on your ability to
perform normal operations on your VM cluster. After a successful update, the IORM status shows as "Enabled."
To modify the IORM configuration on your cloud VM cluster
Note:

This topic only applies to Exadata Cloud Service systems using the new
infrastructure resource model. If you are updating an Exadata DB system,
see To modify the IORM configuration on your Exadata DB system on page
1268
Use this procedure to change your IORM settings or to disable IORM.
1. Open the navigation menu. Under Oracle Database, click Bare Metal, VM, and Exadata.
2. Choose your Compartment.
3. Click Exadata VM Clusters under Exadata at Oracle Cloud.
4. In the list of VM clusters, find the VM cluster for which you want to update IORM, and click its highlighted
name. The cluster's details are displayed, showing the IORM status as "Enabled."
5. Click More Actions, then Update IORM.
6. In the Update I/O Resource Management dialog, take one of the following actions:
• Change your settings - Specify a new objective and adjust your directives, as applicable, and then click
Update.
• Disable IORM - Click Disable IORM. Disabling IORM removes all your resource plan directives and restores
a basic objective for I/O resource management.
While the new IORM configuration settings are being applied, the system details page shows the IORM status
as "Updating." The update might take several minutes to complete but should have no impact on your ability to
perform normal operations on your DB system. After a successful update, the IORM status shows as "Enabled" or
"Disabled," depending on the action you took.
To enable IORM on your Exadata DB system
Note:

This topic only applies to Exadata Cloud Service instances using the DB
system resource model.
Enabling IORM includes specifying an optimization objective and configuring your resource plan directives.
1. Open the navigation menu. Under Oracle Database, click Bare Metal, VM, and Exadata.
2. Choose your Compartment.
3. In the list of DB systems, find the Exadata DB system for which you want to enable IORM, and click its
highlighted name.
The system details are displayed, showing the IORM status as "Disabled."
4. Click More Actions, then Enable IORM.
It might take a minute for the Enable I/O Resource Management dialog to retrieve the DB system information.
5. Select the objective to apply to the resource plan:
• Auto - (Recommended) Dynamically changes the objective based on the resource plan and observed workloads.
• Balanced - Weighs high throughput and low latency evenly.
• High throughput - Provides the best throughput for DSS workloads.
• Low latency - Provides the best latency for critical OLTP workloads.
6. Configure the resource plan default directive by setting the number of shares. This number of shares is assigned to
each database not associated with a specific directive.

7. In the Resource Plan Directives section, add a directive for each database you want to assign a greater or lesser
number of shares than the default directive.
To add a directive, click + Additional Directive, then specify the database and the number of shares for that
database.
8. When you are done adding directives, click Enable.
While the IORM configuration settings are being applied, the system details page shows the IORM status as
"Updating." The update might take several minutes to complete but should have no impact on your ability to
perform normal operations on your DB system. After a successful update, the IORM status shows as "Enabled."
To modify the IORM configuration on your Exadata DB system
Note:

This topic only applies to Exadata Cloud Service instances using the DB
system resource model.
Use this procedure to change your IORM settings or to disable IORM.
1. Open the navigation menu. Under Oracle Database, click Bare Metal, VM, and Exadata.
2. Choose your Compartment.
3. In the list of DB systems, find the Exadata DB system for which you want to modify the IORM configuration, and
click its highlighted name.
The system details are displayed, showing the IORM status as "Enabled."
4. Click More Actions, then Update IORM.
5. In the Update I/O Resource Management dialog, take one of the following actions:
• Change your settings - Specify a new objective and adjust your directives, as applicable, and then click
Update.
• Disable IORM - Click Disable IORM. Disabling IORM removes all your resource plan directives and restores
a basic objective for I/O resource management.
While the new IORM configuration settings are being applied, the system details page shows the IORM status
as "Updating." The update might take several minutes to complete but should have no impact on your ability to
perform normal operations on your DB system. After a successful update, the IORM status shows as "Enabled" or
"Disabled," depending on the action you took.
Using the API
For information about using the API and signing requests, see REST APIs on page 4409 and Security Credentials on
page 181. For information about SDKs, see Software Development Kits and Command Line Interface on page 4262.
Use these API operations to manage the I/O resources of an Exadata cloud VM cluster (see The New Exadata Cloud
Service Resource Model on page 1229 for more information on this resource type).
• ListCloudVmClusters
• GetCloudVmCluster
• GetCloudVmClusterIormConfig
• UpdateCloudVmClusterIormConfig
Use these API operations to manage the I/O resources of an Exadata DB system.
• ListDbSystems
• GetDbSystem
• GetExadataIormConfig
• UpdateExadataIormConfig
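
As a sketch of driving IORM through the API with the OCI Python SDK, assuming a configured ~/.oci/config profile and placeholder OCIDs and database names; the ExadataIormConfigUpdateDetails and DbIormConfigUpdateDetail model names are this sketch's assumption based on the REST operations above, so verify them against your installed SDK version.

import oci

config = oci.config.from_file()
db_client = oci.database.DatabaseClient(config)
models = oci.database.models

cluster_id = "ocid1.cloudvmcluster.oc1..example"   # placeholder: your cloud VM cluster OCID

# GetCloudVmClusterIormConfig: the current objective and per-database share directives.
iorm = db_client.get_cloud_vm_cluster_iorm_config(cluster_id).data
print(iorm.objective, iorm.db_plans)

# UpdateCloudVmClusterIormConfig: set the objective and give one database more shares.
update = models.ExadataIormConfigUpdateDetails(
    objective="AUTO",
    db_plans=[models.DbIormConfigUpdateDetail(db_name="PRODDB", share=20)])   # PRODDB is a placeholder
db_client.update_cloud_vm_cluster_iorm_config(cluster_id, update)

# For the DB system resource model, the analogous calls are
# get_exadata_iorm_config(db_system_id) and update_exadata_iorm_config(db_system_id, update).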

Managing Exadata Resources with Oracle Enterprise Manager Cloud Control


This topic provides a short introduction to Oracle Enterprise Manager Cloud Control, a tool that can be used to
manage and monitor Exadata Cloud and Exadata Cloud@Customer resources. For complete documentation and
Oracle By Example tutorials, see Additional Information on page 1269 at the end of this topic.
Overview
Oracle Enterprise Manager Cloud Control provides a complete lifecycle management solution for Oracle Cloud
Infrastructure's Exadata Cloud and Exadata Cloud@Customer services.
Enterprise Manager Cloud Control discovers Exadata Cloud and Exadata Cloud@Customer services as a single target
and automatically identifies and organizes all dependent components. Using Enterprise Manager Cloud Control you
can then:
• Monitor and manage all Exadata, Exadata Cloud and Exadata Cloud@Customer systems, along with any other
targets, from a single interface
• Visualize storage and compute data
• View performance metrics of your Exadata components
Features

Enterprise Manager Target for Exadata Cloud


The target for Oracle Cloud Infrastructure Exadata resources (which covers both Exadata Cloud and Exadata
Cloud@Customer) does the following:
• Automatically identifies and organizes related targets
• Provides a high-level integration point for Enterprise Manager framework features such as incident rules, groups,
notifications, and monitoring templates

Improved Performance Monitoring


Enterprise Manager Cloud Control enhances performance monitoring in the following ways:
• Adds Exadata Storage Server and Exadata Storage Grid targets
• Offers visualization of storage and compute performance for your Exadata Cloud and Exadata Cloud@Customer
resources
• Enables use of the same Maximum Availability Architecture (MAA) key performance indicators (KPI) developed
for Oracle Exadata Database Machine

Scripted CLI-based Discovery


Enterprise Manager Cloud Control uses scripts to discover Oracle Cloud Infrastructure Exadata resources. The scripts comb the existing hosts, clusters, ASM, databases, and related targets, and add the storage server targets.

"Single Pane of Glass" View of On-Premises and Oracle Cloud Infrastructure Exadata Resources
Enterprise Manager Cloud Control's use of a single Exadata target type provides a consistent Enterprise Manager experience across on-premises, Exadata Cloud, and Exadata Cloud@Customer resources. The common Exadata target menu allows you to easily navigate to, monitor, and manage all of your Exadata systems.

Visualization
Enterprise Manager Cloud Control allows you to visualize the database and related targets associated with each
Exadata Cloud and Exadata Cloud@Customer system.
Additional Information
For more information on Oracle Enterprise Manager Cloud Control, see the following documentation resources:
• Oracle Enterprise Manager Cloud Control for Oracle Exadata Cloud

• Setting Up Oracle Enterprise Manager 13.4 on Oracle Cloud Infrastructure

Connecting to an Exadata Cloud Service Instance


This topic explains how to connect to an Exadata Cloud Service instance using SSH or SQL Developer. How you
connect depends on how your cloud network is set up. You can find information on various networking scenarios in
Networking Overview on page 2774, but for specific recommendations on how you should connect to a database in
the cloud, contact your network security administrator.
Prerequisites
For SSH access to a compute node in an Exadata Cloud Service instance, you'll need the following:
• The full path to the file that contains the private key associated with the public key used when the system was
launched.
• The public or private IP address of the Exadata Cloud Service instance.
Use the private IP address to connect to the system from your on-premises network, or from within the virtual
cloud network (VCN). This includes connecting from a host located on-premises connecting through a VPN or
FastConnect to your VCN, or from another host in the same VCN. Use the public IP address to connect to the
system from outside the cloud (with no VPN). You can find the IP addresses in the Oracle Cloud Infrastructure
Console as follows:
• Cloud VM clusters (new resource model): On the Exadata VM Cluster Details page, click Virtual Machines
in the Resources list.
• DB systems: On the DB System Details page, click Nodes in the Resources list.
The values are displayed in the Public IP Address and Private IP Address & DNS Name columns of the table
displaying the Virtual Machines or Nodes of the Exadata Cloud Service instance.
Connecting to a Compute Node with SSH
You can connect to the compute nodes in an Exadata DB System by using a Secure Shell (SSH) connection. Most
UNIX-style systems (including Linux, Solaris, BSD, and OS X) include an SSH client by default. For Windows, you
can download a free SSH client called PuTTY from http://www.putty.org.
To connect from a UNIX-style system
Use the following SSH command to access a compute node:

$ ssh -i <private key> opc@<DB System IP address>

<private key> is the full path and name of the file that contains the private key associated with the Exadata DB System you want to access.
Use the private or public IP address depending on your network configuration. For more information, see
Prerequisites on page 1392.
To connect from a Windows system
1. Open putty.exe.
2. In the Category pane, select Session and enter the following fields:
• Host Name (or IP address): opc@<ip_address>
Use the compute node's private or public IP address depending on your network configuration. For more
information, see Prerequisites on page 1392.
• Connection type: SSH
• Port: 22
3. In the Category pane, expand Connection, expand SSH, and then click Auth, and browse to select your private
key.
4. Optionally, return to the Session category screen and save this session information for reuse later.
5. Click Open to start the session.

To access a database after you connect to the compute node


1. Log in as opc and then sudo to the oracle user.

login as: opc

[opc@<host_name> ~]$ sudo su - oracle


2. Source the database's .env file to set the environment.

[oracle@<host_name>]# . <database_name>.env

In the following example, the host name is "ed1db01" and the database name is "cdb01".

[oracle@ed1db01]# . cdb01.env
ORACLE_SID = [root] ? +ASM1
The Oracle base has been set to /u01/app/grid

Connecting to a Database with SQL Developer


You can connect to a database with SQL Developer by using one of the following methods:
• Create a temporary SSH tunnel from your computer to the database. This method provides access only for the
duration of the tunnel. (When you are done using the database, be sure to close the SSH tunnel by exiting the SSH
session.)
• Open port 1521 for the Oracle default listener by updating the security list used for the cloud VM cluster or
DB system resource in the Exadata Cloud Service instance. This method provides more durable access to the
database. For more information, see Updating the Security List on page 1306.
After you've created an SSH tunnel or opened port 1521 as described above, you can connect to an Exadata Cloud
Service instance using SCAN IP addresses or public IP addresses, depending on how your network is set up and
where you are connecting from. You can find the IP addresses in the Console, in the Database details page.
To connect using SCAN IP addresses
You can connect to the database using the SCAN IP addresses if your client is on-premises and you are connecting
using a FastConnect or VPN connection. You have the following options:
• Use the private SCAN IP addresses, as shown in the following tnsnames.ora example:

testdb=
(DESCRIPTION =
(ADDRESS_LIST=
(ADDRESS = (PROTOCOL = TCP)(HOST = <scanIP1>)(PORT = 1521))
(ADDRESS = (PROTOCOL = TCP)(HOST = <scanIP2>)(PORT = 1521)))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = <dbservice.subnetname.dbvcn.oraclevcn.com>)
)
)
• Define an external SCAN name in your on-premises DNS server. Your application can resolve this
external SCAN name to the DB System's private SCAN IP addresses, and then the application can use a
connection string that includes the external SCAN name. In the following tnsnames.ora example,
extscanname.example.com is defined in the on-premises DNS server.

testdb =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = <extscanname.example.com>)(PORT =
1521))


(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = <dbservice.subnetname.dbvcn.oraclevcn.com>)
)
)
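With either entry in place, a client that reads this tnsnames.ora file can connect using the testdb net service name, for example with SQL*Plus (the user name is a placeholder):

$ sqlplus <db_user>@testdb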

To connect using public IP addresses


You can use the node's public IP address to connect to the database if the client and database are in different VCNs,
or if the database is on a VCN that has an internet gateway. However, there are important implications to consider:
• When the client uses the public IP address, the client bypasses the SCAN listener and reaches the node listener, so
server side load balancing is not available.
• When the client uses the public IP address, it cannot take advantage of the VIP failover feature. If a node becomes
unavailable, new connection attempts to the node will hang until a TCP/IP timeout occurs. You can set client side
sqlnet parameters to limit the TCP/IP timeout.
The following tnsnames.ora example shows a connection string that includes the CONNECT_TIMEOUT
parameter to avoid TCP/IP timeouts.

test=
(DESCRIPTION =
(CONNECT_TIMEOUT=60)
(ADDRESS_LIST=
(ADDRESS = (PROTOCOL = TCP)(HOST = <publicIP1>)(PORT = 1521))
(ADDRESS = (PROTOCOL = TCP)(HOST = <publicIP2>)(PORT = 1521))
)
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = <dbservice.subnetname.dbvcn.oraclevcn.com>)
)
)

Managing Exadata Cloud Service Software Images Using the Dbaascli Utility
Note:

You can create custom database software images for your Exadata Cloud
Service instances using the Console or API. These images are stored in
Object Storage, and can be used to provision a Database Home in your
Exadata instance. See Oracle Database Software Images on page 1568 for more
information.
You can control the version of Oracle binaries that is installed when you provision a new database on an Exadata
Cloud Service instance by maintaining the software images on the system. Oracle provides a library of cloud software
images that you can view and download onto your instance by using the dbaascli utility.
When you create a new database with a new Oracle Home (Database Home) directory location, the Oracle Database
binaries are sourced from a software image that is stored on your Exadata Cloud Service instance. Over time, the
software images on your instance become outdated if they are not maintained. Using an outdated software image
makes it necessary for you to apply patches to newly installed binaries to bring them up to date. Oracle recommends
that you maintain your instance with up-to-date software images to avoid this extra patching step which can be time-
consuming and error prone.
Viewing Information About Available Software Images
You can view information about Oracle Database software images that are available to download to your Exadata
Cloud Service instance by using the cswlib list subcommand of the dbaascli utility.


To view information about available software images


1. Connect to a compute node as the opc user.
For detailed instructions, see Connecting to a Compute Node with SSH on page 1270.
2. Start a root-user command shell:

$ sudo -s
#
3. Execute the dbaascli command with the cswlib list subcommand:

# dbaascli cswlib list

The command displays a list of available software images, including version and bundle patch information that
you can use to download the software image.
4. Exit the root-user command shell:

# exit
$

Downloading Software Images


You can download available software images onto your Exadata Cloud Service instance by using the cswlib
download subcommand of the dbaascli utility.
To download a software image
1. Connect to a compute node as the opc user.
For detailed instructions, see Connecting to a Compute Node with SSH on page 1270.
2. Start a root-user command shell:

$ sudo -s
#
3. Execute the dbaascli command with the cswlib download subcommand:

# dbaascli cswlib download [--version <software_version>] [--bp <software_bundle_patch>]

The command displays a list of software images that are downloaded to your Exadata Cloud Service environment,
including version and bundle patch information.
The optional parameters are:
• version: specifies an Oracle Database software version. For example, 19000, 18000, or 12201.
• bp: identifies a bundle patch release. For example, APR2021, JAN2021, or OCT2020.
If you do not include the optional parameters, the dbaascli cswlib download command downloads the
latest available software image for all available Oracle Database software versions. (See the combined example following this procedure.)
4. Exit the root-user command shell:

# exit
$
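Putting the optional parameters together, a combined invocation might look like the following sketch. The version and bundle patch values reuse the examples given in step 3 and are assumptions about what is currently available in your region:

# dbaascli cswlib download --version 19000 --bp APR2021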

Updating an Exadata Cloud Service VM Cluster Operating System


Exadata VM Cluster Image Updates is a feature that enables ExaCS customers to update the OS image on their
Exadata VM Cluster nodes in an automated manner from their OCI console and APIs.


Introduction
Exadata VM Cluster Image Updates allows you to update the OS image on your Exadata VM Cluster nodes in an
automated manner from the OCI console and APIs. This automated feature greatly simplifies and speeds up the
process, makes it less error prone, and eliminates the need to use Patch Manager.
When you apply a patch, the system runs a precheck operation to ensure your VM cluster, Exadata DB system, or
Database Home meets the requirements for that patch. If the precheck is not successful, the patch is not applied, and
the system displays a message that the patch cannot be applied because the precheck failed. A separate precheck
operation that you can run in advance of the planned update is also available.

Supported versions
The following versions and limitations are supported:
• Only Exadata image major versions 19 and above are supported.
• You cannot move to a new major version. For example, if the ExaCS VM Cluster is on major version 20 then you
can apply only image updates for version 20.
• Only the latest generation of Exadata VM Cluster images are shown on the console and can be applied.

Updating the operating system


1. Open the navigation menu. Under Oracle Database, click Bare Metal, VM, and Exadata.
2. Choose your Compartment.
3. In the list of DB systems, click the name of the Exadata DB system that you want to patch to display the DB
system details.
4. In the Version section, to the right of the Updates Available label, click View Updates to display the Updates/
patches page.
5. Review the list of available patches for the DB system.
6. To the right of the Created field, click the Actions icon (three dots) for the patch you are interested in, and then
click one of the following actions:
• Run Precheck. Precheck checks the prerequisites to ensure that the patch can be successfully applied. Oracle
highly recommends that you run the precheck operation before you apply a patch, because the database environment
can change at any time and a precheck run immediately before patching may find errors that an earlier precheck did not.
• Apply Exadata Image Update. This link displays the Apply Exadata Image Update dialog box that you use to
apply the patch. The dialog box shows the name of the database system you are patching, the current version
of the database, and the new version of the database after the patch is applied. To start the process, click Apply
Exadata Image Update.
• Copy OCID. This copies the Oracle Cloud ID. This can be used when troubleshooting a patch or to give to
Support when contacting them.
Note:
If the precheck fails, the system displays a message in the Apply OS Image Update dialog box that the last precheck has failed. Oracle recommends that you run the precheck again.
Note:
While the patch is running:
• Run Precheck and Apply OS Image Update are not available. When the patch has completed, these actions are available again.
• If the Exadata infrastructure containing this VM cluster is scheduled
for maintenance that conflicts with the timing of the patch, the patch
fails and the system displays a message explaining why. After the
infrastructure maintenance is complete, run the patch again.
7. Confirm when prompted.


8. The patch list displays the status of the operation in the Version section of the database details page. Click View
Updates to view more details about an individual patch status and to display any updates that are available to run.
If no new updates are available, the system displays a message that says No Updates Available.

Patching and Precheck states, and DB system status


The following list shows the various states of precheck and patch operations, together with the status of the DB system during each operation.

• Operation: None. DB system status: Available. Patch state: Available. The update is available to patch a database or Database Home.
• Operation: Precheck. DB system status: Available. Patch state: Checking. The precheck is in progress.
• Operation: Precheck (failed). DB system status: Available (Information icon). Patch state: Available. The precheck failed but the patch is still available. The reason for the failure is included in the image update history.
• Operation: Patching. DB system status: Updating. Patch state: Applying Update. The update is in progress. While a patch is being applied, the status of the patch displays as Applying Update and the DB system's status displays as Updating. Lifecycle operations on the DB system and its resources might be temporarily unavailable. An entry is made in the update history log.
• Operation: Patching (failed). DB system status: Available (Information icon). Patch state: Available. The update failed. The update information is available in the update history log. The system displays a message asking you to do a rollback before attempting another update.
• Operation: Patching (success). DB system status: DB System up to date. Patch state: Applied. The update is successful. The patch name is removed from the update list and an entry is made in the update history log.

Updating an Exadata Cloud Service Instance


This topic covers how to update the operating system and the tooling on the compute server nodes (for cloud VM
clusters, these are called virtual machines) of an Exadata Cloud Service instance. Review all of the information
carefully before you begin the updates.
OS Updates
You update the operating systems of Exadata compute nodes by using the patchmgr tool. This utility manages the
entire update of one or more compute nodes remotely, including running pre-reboot, reboot, and post-reboot steps.
You can run the utility from either an Exadata compute node or a non-Exadata server running Oracle Linux. The
server on which you run the utility is known as the "driving system." You cannot use the driving system to update
itself. Therefore, if the driving system is one of the Exadata compute nodes on a system you are updating, you must
run a separate operation on a different driving system to update that server.
The following two scenarios describe typical ways of performing the updates:
Scenario 1: Non-Exadata Driving System
The simplest way to update the Exadata system is to use a separate Oracle Linux server to update all Exadata
compute nodes in the system.
Scenario 2: Exadata Node Driving System
You can use one Exadata compute node to drive the updates for the rest of the compute nodes in the system, and then
use one of the updated nodes to drive the update on the original Exadata driver node.
For example: You are updating a half rack Exadata system, which has four compute nodes - node1, node2, node3,
and node4. First, use node1 to drive the updates of node2, node3, and node4. Then, use node2 to drive the update of
node1.
The driving system requires root user SSH access to each compute node the utility will update.
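Once that access has been configured (step 2 of the procedure below), you can confirm root SSH equivalence from the driving system with a trivial remote command. This is a general check rather than part of the documented procedure, and node2 is a placeholder:

[root@node1]# ssh root@node2 hostname
node2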


Preparing for the OS Updates


Caution:

Do not install NetworkManager on the Exadata Cloud Service instance.


Installing this package and rebooting the system results in severe loss of
access to the system.
• Before you begin your updates, review Exadata Cloud Service Software Versions (Doc ID 2333222.1) to
determine the latest software version and target version to use.
• Some steps in the update process require you to specify a YUM repository. The YUM repository URL is:

http://yum-<region_identifier>.oracle.com/repo/EngineeredSystems/exadata/dbserver/<latest_version>/base/x86_64

Region identifiers are text strings used to identify Oracle Cloud Infrastructure regions (for example, us-
phoenix-1). You can find a complete list of region identifiers in Regions.
You can run the following curl command to determine the latest version of the YUM repository for your
Exadata Cloud Service instance region:

curl -s -X GET http://yum-<region_identifier>.oracle.com/repo/EngineeredSystems/exadata/dbserver/index.html | egrep "18.1."

This example returns the most current version of the YUM repository for the US West (Phoenix) region:

curl -s -X GET http://yum-us-phoenix-1.oracle.com/repo/EngineeredSystems/exadata/dbserver/index.html | egrep "18.1."
<a href="18.1.4.0.0/">18.1.4.0.0/</a> 01-Mar-2018 03:36 -
• To apply OS updates, the system's VCN must be configured to allow access to the YUM repository. For more
information, see Option 2: Service Gateway Access to Both Object Storage and YUM Repos on page 1239.
To update the OS on all compute nodes of an Exadata Cloud Service instance
This example procedure assumes the following:
• The system has two compute nodes, node1 and node2.
• The target version is 18.1.4.0.0.180125.3.
• Each of the two nodes is used as the driving system for the update on the other one.
1. Gather the environment details.
a. SSH to node1 as root and run the following command to determine the version of Exadata:

[root@node1]# imageinfo -ver


12.2.1.1.4.171128
b. Switch to the grid user, and identify all compute nodes in the cluster.

[root@node1]# su - grid
[grid@node1]$ olsnodes
node1
node2
2. Configure the driving system.
a. Switch back to the root user on node1 and check whether a root ssh key pair (id_rsa and id_rsa.pub)
already exists. If not, then generate it.

[root@node1 .ssh]# ls /root/.ssh/id_rsa*


ls: cannot access /root/.ssh/id_rsa*: No such file or directory
[root@node1 .ssh]# ssh-keygen -t rsa


Generating public/private rsa key pair.


Enter file in which to save the key (/root/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
93:47:b0:83:75:f2:3e:e6:23:b3:0a:06:ed:00:20:a5
root@node1
The key's randomart image is:
+--[ RSA 2048]----+
|o.. + . |
|o. o * |
|E . o o |
| . . = |
| o . S = |
| + = . |
| + o o |
| . . + . |
| ... |
+-----------------+
b. Distribute the public key to the target nodes, and verify this step. In this example, the only target node is
node2.

[root@node1 .ssh]# scp -i ~opc/.ssh/id_rsa ~root/.ssh/id_rsa.pub opc@node2:/tmp/id_rsa.node1.pub
id_rsa.pub

[root@node2 ~]# ls -al /tmp/id_rsa.node1.pub


-rw-r--r-- 1 opc opc 442 Feb 28 03:33 /tmp/id_rsa.node1.pub
[root@node2 ~]# date
Wed Feb 28 03:33:45 UTC 2018
c. On the target node (node2, in this example), add the root public key of node1 to the root
authorized_keys file.

[root@node2 ~]# cat /tmp/id_rsa.node1.pub >> ~root/.ssh/authorized_keys


d. Download dbserver.patch.zip as p21634633_12*_Linux-x86-64.zip onto the driving system
(node1, in this example), and unzip it. See dbnodeupdate.sh and dbserver.patch.zip: Updating Exadata
Database Server Software using the DBNodeUpdate Utility and patchmgr (Doc ID 1553103.1) for information
about the files in this .zip.

[root@node1 patch]# mkdir /root/patch


[root@node1 patch]# cd /root/patch
[root@node1 patch]# unzip p21634633_181400_Linux-x86-64.zip
Archive: p21634633_181400_Linux-x86-64.zip
   creating: dbserver_patch_5.180228.2/
creating: dbserver_patch_5.180228.2/ibdiagtools/
inflating: dbserver_patch_5.180228.2/ibdiagtools/cable_check.pl
inflating: dbserver_patch_5.180228.2/ibdiagtools/setup-ssh
inflating: dbserver_patch_5.180228.2/ibdiagtools/VERSION_FILE
extracting: dbserver_patch_5.180228.2/ibdiagtools/xmonib.sh
inflating: dbserver_patch_5.180228.2/ibdiagtools/monitord
inflating: dbserver_patch_5.180228.2/ibdiagtools/checkbadlinks.pl
creating: dbserver_patch_5.180228.2/ibdiagtools/topologies/
inflating: dbserver_patch_5.180228.2/ibdiagtools/topologies/
VerifyTopologyUtility.pm
inflating: dbserver_patch_5.180228.2/ibdiagtools/topologies/
verifylib.pm
inflating: dbserver_patch_5.180228.2/ibdiagtools/topologies/Node.pm
inflating: dbserver_patch_5.180228.2/ibdiagtools/topologies/Rack.pm


inflating: dbserver_patch_5.180228.2/ibdiagtools/topologies/Group.pm
inflating: dbserver_patch_5.180228.2/ibdiagtools/topologies/Switch.pm
inflating: dbserver_patch_5.180228.2/ibdiagtools/topology-zfs
inflating: dbserver_patch_5.180228.2/ibdiagtools/dcli
creating: dbserver_patch_5.180228.2/ibdiagtools/netcheck/
inflating: dbserver_patch_5.180228.2/ibdiagtools/netcheck/
remoteScriptGenerator.pm
inflating: dbserver_patch_5.180228.2/ibdiagtools/netcheck/
CommonUtils.pm
inflating: dbserver_patch_5.180228.2/ibdiagtools/netcheck/
SolarisAdapter.pm
inflating: dbserver_patch_5.180228.2/ibdiagtools/netcheck/
LinuxAdapter.pm
inflating: dbserver_patch_5.180228.2/ibdiagtools/netcheck/
remoteLauncher.pm
inflating: dbserver_patch_5.180228.2/ibdiagtools/netcheck/
remoteConfig.pm
inflating: dbserver_patch_5.180228.2/ibdiagtools/netcheck/spawnProc.pm
inflating: dbserver_patch_5.180228.2/ibdiagtools/netcheck/
runDiagnostics.pm
inflating: dbserver_patch_5.180228.2/ibdiagtools/netcheck/OSAdapter.pm
inflating: dbserver_patch_5.180228.2/ibdiagtools/SampleOutputs.txt
inflating: dbserver_patch_5.180228.2/ibdiagtools/infinicheck
inflating: dbserver_patch_5.180228.2/ibdiagtools/ibping_test
inflating: dbserver_patch_5.180228.2/ibdiagtools/tar_ibdiagtools
inflating: dbserver_patch_5.180228.2/ibdiagtools/verify-topology
inflating: dbserver_patch_5.180228.2/installfw_exadata_ssh
creating: dbserver_patch_5.180228.2/linux.db.rpms/
inflating: dbserver_patch_5.180228.2/md5sum_files.lst
inflating: dbserver_patch_5.180228.2/patchmgr
inflating: dbserver_patch_5.180228.2/xcp
inflating: dbserver_patch_5.180228.2/ExadataSendNotification.pm
inflating: dbserver_patch_5.180228.2/ExadataImageNotification.pl
inflating: dbserver_patch_5.180228.2/kernelupgrade_oldbios.sh
inflating: dbserver_patch_5.180228.2/cellboot_usb_pci_path
inflating: dbserver_patch_5.180228.2/exadata.img.env
inflating: dbserver_patch_5.180228.2/README.txt
inflating: dbserver_patch_5.180228.2/exadataLogger.pm
inflating: dbserver_patch_5.180228.2/patch_bug_26678971
inflating: dbserver_patch_5.180228.2/dcli
inflating: dbserver_patch_5.180228.2/patchReport.py
extracting: dbserver_patch_5.180228.2/dbnodeupdate.zip
creating: dbserver_patch_5.180228.2/plugins/
inflating: dbserver_patch_5.180228.2/plugins/010-check_17854520.sh
inflating: dbserver_patch_5.180228.2/plugins/020-check_22468216.sh
inflating: dbserver_patch_5.180228.2/plugins/040-check_22896791.sh
inflating: dbserver_patch_5.180228.2/plugins/000-check_dummy_bash
inflating: dbserver_patch_5.180228.2/plugins/050-check_22651315.sh
inflating: dbserver_patch_5.180228.2/plugins/005-check_22909764.sh
inflating: dbserver_patch_5.180228.2/plugins/000-check_dummy_perl
inflating: dbserver_patch_5.180228.2/plugins/030-check_24625612.sh
inflating: dbserver_patch_5.180228.2/patchmgr_functions
inflating: dbserver_patch_5.180228.2/exadata.img.hw
inflating: dbserver_patch_5.180228.2/libxcp.so.1
inflating: dbserver_patch_5.180228.2/imageLogger
inflating: dbserver_patch_5.180228.2/ExaXMLNode.pm
inflating: dbserver_patch_5.180228.2/fwverify
e. Create the dbs_group file that contains the list of compute nodes to update. Include the nodes listed after
running the olsnodes command in step 1 except for the driving system node. In this example, dbs_group
should include only node2.

[root@node1 patch]# cd /root/patch/dbserver_patch_5.180228


[root@node1 dbserver_patch_5.180228]# cat dbs_group


node2
3. Run a patching precheck operation.

patchmgr -dbnodes dbs_group -precheck -yum_repo <yum_repository> -target_version <target_version> -nomodify_at_prereq

Important:

You must run the precheck operation with the -nomodify_at_prereq


option to prevent any changes to the system that could impact the backup
you take in the next step. Otherwise, the backup might not be able to roll
back the system to its original state, should that be necessary.
The output should look like the following example:

[root@node1 dbserver_patch_5.180228]# ./patchmgr -dbnodes dbs_group -precheck -yum_repo http://yum-phx.oracle.com/repo/EngineeredSystems/exadata/dbserver/18.1.4.0.0/base/x86_64 -target_version 18.1.4.0.0.180125.3 -nomodify_at_prereq

**************************************************************************************
NOTE patchmgr release: 5.180228 (always check MOS 1553103.1 for the
latest release of dbserver.patch.zip)
NOTE
WARNING Do not interrupt the patchmgr session.
WARNING Do not resize the screen. It may disturb the screen layout.
WARNING Do not reboot database nodes during update or rollback.
WARNING Do not open logfiles in write mode and do not try to alter them.
**************************************************************************************
2018-02-28 21:22:45 +0000 :Working: DO: Initiate precheck on 1
node(s)
2018-02-28 21:24:57 +0000 :Working: DO: Check free space and verify
SSH equivalence for the root user to node2
2018-02-28 21:26:15 +0000 :SUCCESS: DONE: Check free space and
verify SSH equivalence for the root user to node2
2018-02-28 21:26:47 +0000 :Working: DO: dbnodeupdate.sh running a
precheck on node(s).
2018-02-28 21:28:23 +0000 :SUCCESS: DONE: Initiate precheck on
node(s).
4. Back up the current system.

patchmgr -dbnodes dbs_group -backup -yum_repo <yum_repository> -target_version <target_version> -allow_active_network_mounts

Important:

This is the proper stage to take the backup, before any modifications are
made to the system.
The output should look like the following example:

[root@node1 dbserver_patch_5.180228]# ./patchmgr -dbnodes dbs_group -backup -yum_repo http://yum-phx.oracle.com/repo/EngineeredSystems/exadata/dbserver/18.1.4.0.0/base/x86_64 -target_version 18.1.4.0.0.180125.3 -allow_active_network_mounts

**************************************************************************************
NOTE patchmgr release: 5.180228 (always check MOS 1553103.1 for the
latest release of dbserver.patch.zip)
NOTE


WARNING Do not interrupt the patchmgr session.


WARNING Do not resize the screen. It may disturb the screen layout.
WARNING Do not reboot database nodes during update or rollback.
WARNING Do not open logfiles in write mode and do not try to alter them.
**************************************************************************************
2018-02-28 21:29:00 +0000 :Working: DO: Initiate backup on 1
node(s).
2018-02-28 21:29:00 +0000 :Working: DO: Initiate backup on node(s)
2018-02-28 21:29:01 +0000 :Working: DO: Check free space and verify
SSH equivalence for the root user to node2
2018-02-28 21:30:18 +0000 :SUCCESS: DONE: Check free space and
verify SSH equivalence for the root user to node2
2018-02-28 21:30:51 +0000 :Working: DO: dbnodeupdate.sh running a
backup on node(s).
2018-02-28 21:35:50 +0000 :SUCCESS: DONE: Initiate backup on
node(s).
2018-02-28 21:35:50 +0000 :SUCCESS: DONE: Initiate backup on 1
node(s).
5. Remove all custom RPMs from the target compute nodes that will be updated. Custom RPMs are reported in
precheck results. They include RPMs that were manually installed after the system was provisioned.
Note:
• If you are updating the system from version 12.1.2.3.4.170111, and the precheck results include krb5-workstation-1.10.3-57.el6.x86_64, remove it. (This item is considered a custom RPM for this version.)
• Do not remove exadata-sun-vm-computenode-exact or oracle-ofed-release-guest. These two RPMs are handled automatically during the update process.
6. Run the nohup command to perform the update.

nohup patchmgr -dbnodes dbs_group -upgrade -nobackup -yum_repo <yum_repository> -target_version <target_version> -allow_active_network_mounts &

The output should look like the following example:

[root@node1 dbserver_patch_5.180228]# nohup ./patchmgr -dbnodes dbs_group -upgrade -nobackup -yum_repo http://yum-phx.oracle.com/repo/EngineeredSystems/exadata/dbserver/18.1.4.0.0/base/x86_64 -target_version 18.1.4.0.0.180125.3 -allow_active_network_mounts &

**************************************************************************************
NOTE patchmgr release: 5.180228 (always check MOS 1553103.1 for the
latest release of dbserver.patch.zip)
NOTE
NOTE Database nodes will reboot during the update process.
NOTE
WARNING Do not interrupt the patchmgr session.
WARNING Do not resize the screen. It may disturb the screen layout.
WARNING Do not reboot database nodes during update or rollback.
WARNING Do not open logfiles in write mode and do not try to alter them.
**************************************************************************************

2018-02-28 21:36:26 +0000 :Working: DO: Initiate prepare steps on node(s).
2018-02-28 21:36:26 +0000 :Working: DO: Check free space and verify
SSH equivalence for the root user to node2
2018-02-28 21:37:44 +0000 :SUCCESS: DONE: Check free space and
verify SSH equivalence for the root user to node2


2018-02-28 21:38:43 +0000 :SUCCESS: DONE: Initiate prepare steps on node(s).
2018-02-28 21:38:43 +0000 :Working: DO: Initiate update on 1
node(s).
2018-02-28 21:38:43 +0000 :Working: DO: Initiate update on node(s)
2018-02-28 21:38:49 +0000 :Working: DO: Get information about any
required OS upgrades from node(s).
2018-02-28 21:38:59 +0000 :SUCCESS: DONE: Get information about any
required OS upgrades from node(s).
2018-02-28 21:38:59 +0000 :Working: DO: dbnodeupdate.sh running an
update step on all nodes.
2018-02-28 21:48:41 +0000 :INFO : node2 is ready to reboot.
2018-02-28 21:48:41 +0000 :SUCCESS: DONE: dbnodeupdate.sh running
an update step on all nodes.
2018-02-28 21:48:41 +0000 :Working: DO: Initiate reboot on node(s)
2018-02-28 21:48:57 +0000 :SUCCESS: DONE: Initiate reboot on
node(s)
2018-02-28 21:48:57 +0000 :Working: DO: Waiting to ensure node2 is
down before reboot.
2018-02-28 21:56:18 +0000 :Working: DO: Initiate prepare steps on
node(s).
2018-02-28 21:56:19 +0000 :Working: DO: Check free space and verify
SSH equivalence for the root user to node2
2018-02-28 21:57:37 +0000 :SUCCESS: DONE: Check free space and
verify SSH equivalence for the root user to node2
2018-02-28 21:57:42 +0000 :SEEMS ALREADY UP TO DATE: node2
2018-02-28 21:57:43 +0000 :SUCCESS: DONE: Initiate update on
node(s)
7. After the update operation completes, verify the version of the kernel on the compute node that was updated.

[root@node2 ~]# imageinfo -ver


18.1.4.0.0.180125.3
8. If the driving system is a compute node that needs to be updated (as in this example), repeat steps 2 through 7 of
this procedure using an updated compute node as the driving system to update the remaining compute node. In
this example update, you would use node2 to update node1.
9. On each compute node, run the uptrack-install command as root to install the available ksplice updates.

uptrack-install --all -y

Updating Tooling on an Exadata Cloud Service Instance


You can update the cloud-specific tooling included on an Exadata Cloud Service compute node by downloading and
applying an RPM file containing the latest version of the tools.
Note:

Oracle highly recommends that you maintain the same version of cloud
tooling across your Exadata Cloud Service environment. Perform the
following procedure on every compute node in the Exadata Cloud Service
instance.

Prerequisite
The compute nodes in the Exadata Cloud Service instance must be configured to access the Oracle Cloud
Infrastructure Object Storage service. For more information, see Node Access to Object Storage: Static Route on page
1237.

Updating the Cloud Tooling on Each Compute Node Manually


The method for updating the tooling depends on the tooling release that is currently installed on the compute node.


To check the installed tooling release


1. Connect to the compute node as the opc user.
2. Start a root-user command shell.

$ sudo -s
#
3. Use the following command to display information about the installed cloud tooling, and note the release label
shown in the example that follows.

# rpm -qa|grep -i dbaastools_exa

dbaastools_exa-1.0-1+18.1.2.1.0_180511.0801.x86_64

In this example, the release version is 18.1.2.1.0_180511.0801.


To update the tooling if the release label is higher than 17430
You use the patch tools subcommand of the dbaascli utility to update the cloud tooling.
Important:

If you are updating the tooling on an Exadata Cloud Service instance that
includes a Data Guard configuration, you must perform these steps on both
the primary database's system and on the standby database's system.
1. Connect as the opc user to the compute node.
2. Start a root-user command shell:

$ sudo -s
#
3. Check whether any cloud tooling updates are available:

# dbaascli patch tools list

Example output:

[root@exacs-node1 ]# dbaascli patch tools list


DBAAS CLI version 19.4.1.0.0
Executing command patch tools list
Checking tools on all nodes
Current Patchid on stb-elbdc1: 19.4.1.0.0_190822.1034
Available Patches
Patchid : 19.4.1.0.0_190827.1034
Patchid : 19.4.1.0.0_190912.0440(LATEST)
Install tools patch using
dbaascli patch tools apply --patchid 19.4.1.0.0_190912.0440 or
dbaascli patch tools apply --patchid LATEST
All Nodes have the same tools version
4. In the command response, locate the patch ID of the cloud tooling update. The patch ID is listed as the "Patchid"
value. If multiple patches are listed, choose the latest one.


5. Apply the patch containing the latest cloud tooling update by using one of the following methods:
• Specify the patch ID of the latest patch:

# dbaascli patch tools apply --patchid <patch_ID>


• Specify the patch ID as LATEST:

# dbaascli patch tools apply --patchid LATEST


• Run the update process in the background:

# dbaascli patch tools apply --patchid LATEST &


6. Reset the backup configuration:

# /var/opt/oracle/ocde/assistants/bkup/bkup
7. Exit the root-user command shell and disconnect from the compute node:

# exit
$ exit
8. If you are updating cloud tooling on a DB system hosting a Data Guard configuration, repeat the preceding steps
on the compute node of the peer Exadata Cloud Service instance (the system hosting the primary or standby database).
To update the tooling if the release label is 17430 or lower
1. Download the RPM file using the Swift object storage API endpoint URL for your region.

wget <swift_API_endpoint>/v1/exadata/patches/dbaas_patch/shome/dbaastools_exa.rpm

The following example downloads the RPM file from the US West (Phoenix).

wget https://swiftobjectstorage.us-phoenix-1.oraclecloud.com/v1/exadata/patches/dbaas_patch/shome/dbaastools_exa.rpm

See API Reference and Endpoints for the Swift API endpoint for your region.
2. Remove the existing tooling package, and then install the downloaded RPM file.

# rpm -ev dbaastools_exa


# rpm -ivh dbaastools_exa.rpm
3. Repeat the previous steps on each compute node in the Exadata Cloud Service instance.

Configuring Automatic Cloud Tooling Updates


You can configure automatic cloud tooling updates for an Exadata Cloud Service instance. When you configure these
updates, an entry is added to the /etc/crontab file to regularly check for cloud tooling updates and apply new
updates to the compute node when they become available.
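To see whether such an entry is currently present on a node, you can inspect /etc/crontab directly. This is a general check rather than a documented dbaascli command, and the exact text of the entry may differ on your system:

# grep -i dbaas /etc/crontab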
Note:

These procedures apply only if the release label is higher than 17430.
To disable automatic cloud tooling updates for an Exadata Cloud Service instance
1. Connect to the compute node as the opc user.
2. Start a root-user command shell:

$ sudo -s


#
3. Use the following command to disable automatic tooling updates:

# dbaascli patch tools auto disable


4. Exit the root-user command shell and disconnect from the compute node:

# exit
$ exit
5. If you are disabling automatic cloud tooling updates on an Exadata Cloud Service instance hosting a Data Guard
configuration, repeat the preceding steps on the compute node of the peer system (the system hosting the primary or standby database).
To integrate customer-managed key management into Exadata Cloud Service
If you choose to encrypt databases in an Exadata Cloud Service instance using encryption keys that you manage, then
you may update the following two packages (using Red Hat Package Manager) to enable DBAASTOOLS to interact
with the APIs that customer-managed key management uses.
KMS TDE CLI
To update the KMS TDE CLI package, you must complete the following task on all nodes in the Exadata Cloud
Service instance:
1. Deinstall current KMS TDE CLI package, as follows:

rpm -ev kmstdecli


2. Install the updated KMS TDE CLI package, as follows:

rpm -ivh kms_tde_cli

LIBKMS
LIBKMS is a library package necessary to synchronize a database with customer-managed key management
through PKCS11. When a new version of LIBKMS is installed, any databases converted to customer-managed key
management continue to use the previous LIBKMS version, until the database is stopped and restarted.
To update the LIBKMS package, you must complete the following task on all nodes in the Exadata Cloud Service
instance:
1. Confirm that the LIBKMS package is already installed, as follows:

rpm -qa --last | grep libkmstdepkcs11


2. Install a new version of LIBKMS, as follows:

rpm -ivh libkms


3. Use SQL*Plus to stop and restart all databases converted to customer-managed key management, as follows:

shutdown immediate;
startup;
4. Ensure that all converted databases are using the new LIBKMS version, as follows:

for pid in $(ps aux | grep "<dbname>" | awk '{print $2;}'); do echo $pid;
sudo lsof -p $pid | grep kms | grep "pkcs11_[0-9A-Za-z.]*" | sort -u;
done | grep pkcs11
5. Deinstall LIBKMS packages that are no longer being used by any database, as follows:

rpm -ev libkms


Patching an Exadata Cloud Service Instance


This topic explains how to perform patching operations on Exadata Cloud Service resources by using the Console,
API, or the CLI.
Tip:

Oracle recommends patching databases by moving them to a Database Home


that uses the target patching level. See To patch a database by moving it to
another Database Home on page 1288 for instructions on this method of
database patching.
For information and instructions on patching the system by using the dbaascli utility, see Patching an Exadata
Cloud Service Instance Manually on page 1290.
Required IAM Policy
To use Oracle Cloud Infrastructure, you must be granted security access in a policy by an administrator. This access
is required whether you're using the Console or the REST API with an SDK, CLI, or other tool. If you get a message
that you don’t have permission or are unauthorized, verify with your administrator what type of access you have and
which compartment you should work in.
For administrators: The policy in Let database admins manage Oracle Cloud database systems on page 2158 lets the
specified group do everything with databases and related Database resources.
If you're new to policies, see Getting Started with Policies on page 2143 and Common Policies on page 2150. If
you want to dig deeper into writing policies for databases, see Details for the Database Service on page 2251.
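As a rough illustration of the kind of statements such a policy contains (the group name and compartment are placeholders; see the Common Policies topic for the authoritative text):

Allow group DatabaseAdmins to manage db-systems in compartment <compartment_name>
Allow group DatabaseAdmins to manage databases in compartment <compartment_name>
Allow group DatabaseAdmins to manage db-homes in compartment <compartment_name>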
About Patching Exadata Cloud Service Resources
Patching an Exadata Cloud Service instance updates the components on all the compute nodes in the instance. A VM
cluster or DB system patch updates the Oracle Grid Infrastructure (GI) on the resource.
Note:

The cloud Exadata resource model the instance is using determines whether
you patch the Grid Infrastructure on a DB system resource or a cloud VM
cluster resource. VM clusters are used by the new resource model. DB
systems can be easily migrated to the new resource model with no system
downtime.
A Database Home patch updates the Oracle Database software shared by the databases in that home. Thus, you patch
a database by either of the following methods:
• Move the database to a Database Home that has the correct patch version. This affects only the database being
moved.
• Patch the Database Home the database is currently in. This affects all databases located in the Database Home
being patched.
When patching a Database Home, you can use an Oracle-provided database software image to apply a generally-
available Oracle Database software update, or you can use a custom database software image created by your
organization to apply a specific set of patches required by your database. See Oracle Database Software Images on
page 1568 for more information on creating and using custom images.
Consider the following best practices:
• Back up your databases before you apply any patches. For information about backing up the databases, see
Managing Exadata Database Backups on page 1321.
• Patch a VM cluster or an Exadata DB system before you patch the Databases Homes and databases on that
resource.
• Before you apply any patch, run the precheck operation to ensure your VM cluster, Exadata DB system, or
Database Home meets the requirements for that patch.


• To patch a database to a version other than the database version of the current home, move the database to a
Database Home running the target version. This technique requires less downtime and allows you to easily roll
back the database to the previous version by moving it back to the old Database Home. See To patch a database by
moving it to another Database Home on page 1288.
• For the Oracle Database and Oracle Grid Infrastructure major version releases available in Oracle Cloud
Infrastructure, patches are provided for the current version plus the two most recent older versions (N through N
- 2). For example, if an instance is using Oracle Database 19c, and the latest version of 19c offered is 19.8.0.0.0,
patches are available for versions 19.8.0.0.0, 19.7.0.0.0, and 19.6.0.0.0.
Prerequisites
The Exadata Cloud Service instance requires access to the Oracle Cloud Infrastructure Object Storage service,
including connectivity to the applicable Swift endpoint for Object Storage. Oracle recommends using a service
gateway with the VCN to enable this access. For more information, see these topics:
• Network Setup for Exadata Cloud Service Instances on page 1233: For information about setting up your VCN
for the Exadata Cloud Service instance, including the service gateway.
• Object Storage FAQ: For information about the Swift endpoints to use.
Important:

In addition to the prerequisites listed, ensure that the following conditions are
met to avoid patching failures:
• The /u01 directory on the database host file system has at least 15 GB of
free space for the execution of patching processes.
• The Oracle Clusterware is up and running on the Exadata Cloud Service
instance.
• All nodes of the instance are up and running.
Using the Console
You can use the Console to view the history of patch operations on Exadata Cloud Service instances, apply patches,
and monitor the status of patch operations.
Patching Exadata Instances That Use the DB System Resource Model
The tasks in this section describe how to apply patches and monitor the status of patch operations on Exadata DB
systems and their Database Homes.
To patch the Oracle Grid Infrastructure on an Exadata DB system
1. Open the navigation menu. Under Oracle Database, click Bare Metal, VM, and Exadata.
2. Choose your Compartment.
3. In the list of DB systems, click the name of the Exadata DB system you want to patch to display the DB system
details.
4. Under DB System Version, click the View link beside the Latest Patch Available field.
5. Review the list of available patches for the DB system.
6. Click the Actions icon (three dots) for the patch you are interested in, and then click one of the following actions:

• Run Precheck: Check for any prerequisites to make sure that the patch can be successfully applied.
• Apply: Applies the selected patch. Oracle highly recommends that you run the precheck operation for a patch before you apply it.
7. Confirm when prompted.
The patch list displays the status of the operation. While a patch is being applied, the patch's status displays
as Patching and the DB system's status displays as Updating. Lifecycle operations on the DB system and its
resources might be temporarily unavailable. If patching completes successfully, the patch's status changes to
Applied and the status of the DB system changes to Available. You can view more details about an individual
patch operation by clicking Patch History.
To patch the Oracle Database software in a Database Home (DB system)


Note:

This patching procedure updates the Oracle Database software for all
databases located in the Database Home. To patch an individual database,
you can move it to another Database Home that uses the desired Oracle
Database software configuration.
1. Open the navigation menu. Under Oracle Database, click Bare Metal, VM, and Exadata.
2. Choose your Compartment.
3. In the list of DB systems, click the name of the Exadata DB system with the Database Home you want to patch to
display the DB system details.
4. Under Resources, click Database Homes.
5. Click the name of the Database Home you want to patch to display the Database Home details.
6. Under Database Software Version, click View.
7. Review the list of available patches for the Database Home.
8. Click the Actions icon (three dots) for the patch you are interested in, and then click one of the following actions:
• Precheck: Check for any prerequisites to make sure that the patch can be successfully applied.
• Apply: Applies the selected patch. Oracle highly recommends that you run the precheck operation for a patch before you apply it.
9. Confirm when prompted.
The patch list displays the status of the operation. While a patch is being applied, the status of the patch displays
as Patching and the status of the Database Home and the databases in it display as Updating. During the
operation, each database in the home is stopped and then restarted. If patching completes successfully, the patch's
status changes to Applied and the Database Home's status changes to Available. You can view more details about
an individual patch operation by clicking Patch History.
Patching Exadata Instances That use the New Resource Model
The tasks in this section describe how to apply patches and monitor the status of patch operations on cloud VM
clusters and their Database Homes.
To patch the Oracle Grid Infrastructure on an Exadata cloud VM cluster
1. Open the navigation menu. Under Oracle Database, click Bare Metal, VM, and Exadata.
2. Choose your Compartment.
3. Click Exadata VM Clusters.
4. In the list of cloud VM clusters, click the name of the cluster you want to patch to display the cluster details.
5. Under Version, click the View Patches link beside the Updates Available field.
6. Review the list of available patches for the cloud VM cluster.
7. Click the Actions icon (three dots) for the patch you are interested in, and then click one of the following actions:
• Run Precheck: Check for any prerequisites to make sure that the patch can be successfully applied.
• Update Grid Infrastructure: Applies the selected patch. Oracle highly recommends that you run the precheck operation for a patch before you apply it.
8. Confirm when prompted.
The patch list displays the status of the operation. While a patch is being applied, the patch's status displays as
Patching and the cloud VM cluster's status displays as Updating. Lifecycle operations on the cluster and its
resources might be temporarily unavailable. If patching completes successfully, the patch's status changes to
Applied and the status of the cluster changes to Available. You can view more details about an individual patch
operation by clicking Update History.
To patch the Oracle Database software in a Database Home (cloud VM cluster)
Note:

This patching procedure updates the Oracle Database software for all
databases located in the Database Home. To patch an individual database,


you can move it to another Database Home that uses the desired Oracle
Database software configuration.
1. Open the navigation menu. Under Oracle Database, click Bare Metal, VM, and Exadata.
2. Choose your Compartment.
3. Click Exadata VM Clusters.
4. In the list of cloud VM clusters, click the name of the cluster you want to patch to display the cluster details.
5. Under Resources, click Database Homes.
6. Click the name of the Database Home you want to patch to display the Database Home details.
7. Under Latest Patch Available, click View.
8. Review the list of available software images for the Database Home. You can select a generally-available Oracle
Database version update using the images in the Oracle Provided Database Software Images tab. You can
also patch the Database Home using a custom image created by your organization. Click the Custom Database
Software Images tab to see the software images available in your current region. See Oracle Database Software
Images on page 1568 for more information on creating and using custom images.
9. Click the Actions icon (three dots) for the patch you are interested in, and then click one of the following actions:
• Precheck: Check for any prerequisites to make sure that the patch can be successfully applied.
• Apply: Applies the selected patch. Oracle highly recommends that you run the precheck operation for a patch before you apply it.
10. Confirm when prompted.
The patch list displays the status of the operation. While a patch is being applied, the status of the patch displays
as Patching and the status of the Database Home and the databases in it display as Updating. During the
operation, each database in the home is stopped and then restarted. If patching completes successfully, the patch's
status changes to Applied and the Database Home's status changes to Available. You can view more details about
an individual patch operation by clicking Update History.
Patching Individual Oracle Databases in an Exadata Cloud Service Instance
This task explains how to patch a single Oracle Database in your Exadata Cloud Service instance by moving it to
another Database Home. For information on patching Database Homes, see To patch the Oracle Database software in
a Database Home (cloud VM cluster) on page 1287 and To patch the Oracle Database software in a Database Home
(DB system) on page 1286.
To patch a database by moving it to another Database Home
1. Open the navigation menu. Under Oracle Database, click Bare Metal, VM, and Exadata.
2. Choose your Compartment.
3. Navigate to the cloud VM cluster or DB system that contains the database you want to move.
Cloud VM clusters (new resource model): Under Exadata at Oracle Cloud, click Exadata VM Clusters. In the list
of VM clusters, click the name of the VM cluster that contains the database you want to move.
DB systems: Under Bare Metal, VM, and Exadata, click DB Systems. In the list of DB systems, click the name of
the DB system that contains the database you want to move.
4. In the list of databases on the details page of the VM cluster or DB system, click the name of the database you
want to move to view the Database Details page.
5. Click Move to Another Home.
6. Select the target Database Home.
7. Click Move Database.
8. Confirm the move operation.
The database will be stopped in the current home and then restarted in the destination home. While the database is
being moved, the Database Home status displays as Moving Database. When the operation completes, Database
Home is updated with the current home. If the operation is unsuccessful, the status of the database displays as
Failed, and the Database Home field provides information about the reason for the failure.


Viewing Patch History


Each patch history entry represents an attempted patch operation and indicates whether the operation was successful
or failed. You can retry a failed patch operation. Repeating an operation results in a new patch history entry.
Patch history views in the Console do not show patches that were applied by using command line tools such as
dbaascli.
If your service instance uses the new resource model, the patch history is available by navigating to the VM Cluster
Details page. If your service instance uses the DB system resource model, the patch history is available by navigating
to the DB System Details page.
To view the patch history of a cloud VM cluster
1. Open the navigation menu. Under Oracle Database, click Bare Metal, VM, and Exadata.
2. Choose your Compartment.
3. Click Exadata VM Clusters.
4. In the list of cloud VM clusters, click the name of the cluster you want to patch to display the cluster details.
5. Under Version, click the View Patches link beside the Updates Available field.
6. Click Update History.
The Update History page displays the history of patch operations for that cloud VM cluster and for the Database
Homes on that cloud VM cluster.
To view the patch history of a DB system
1. Open the navigation menu. Under Oracle Database, click Bare Metal, VM, and Exadata.
2. Choose your Compartment.
3. In the list of DB systems, click the name of the Exadata DB system with the information you want to view to
display the DB system details.
4. Under DB System Version, click the View link beside the Latest Patch Available field.
5. Click Patch History.
The Patch History page displays the history of patch operations for that DB system and for the Database Homes
on that DB system.
To view the patch history of a Database Home
1. Open the navigation menu. Under Oracle Database, click Bare Metal, VM, and Exadata.
2. Choose your Compartment.
3. Navigate to the cloud VM cluster or DB system that contains the Database Home.
Cloud VM clusters (new resource model): Under Exadata at Oracle Cloud, click Exadata VM Clusters. In the
list of VM clusters, find the VM cluster you want to access and click its highlighted name to view the details page
for the cluster.
DB systems: Under Bare Metal, VM, and Exadata, click DB Systems. In the list of DB systems, find the Exadata
DB system you want to access, and then click its name to display details about it.
4. Under Resources, click Database Homes.
5. Click the name of the Database Home you want to view to display the Database Home details.
6. Under Database Software Version, click View by the Latest Patch Available field.
7. Click Patch History (DB systems) or Update History (cloud VM clusters).
The history page displays the history of patch operations for that Database Home and for the cloud VM cluster or
DB system to which it belongs.
Using the API
For information about using the API and signing requests, see REST APIs on page 4409 and Security Credentials on
page 181. For information about SDKs, see Software Development Kits and Command Line Interface on page 4262.
Use these API operations to manage patching the following Exadata resources: cloud VM clusters, DB systems,
databases, and Database Homes.


Cloud VM clusters (for systems using the new resource model):


• ListCloudVmClusterUpdates
• ListCloudVmClusterUpdateHistoryEntries
• GetCloudVmClusterUpdate
• GetCloudVmClusterUpdateHistoryEntry
• UpdateVmCluster
DB systems:
• ListDbSystemPatches
• ListDbSystemPatchHistoryEntries
• GetDbSystemPatch
• GetDbSystemPatchHistoryEntry
• UpdateDbSystem
Databases:
• UpdateDatabase - Use this operation to patch a database by moving it to another Database Home
Database Homes:
• ListDbHomePatches
• ListDbHomePatchHistoryEntries
• GetDbHomePatch
• GetDbHomePatchHistoryEntry
• UpdateDbHome
For the complete list of APIs for the Database service, see Database Service API.
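As a hedged sketch of what one of these calls looks like on the wire, a ListCloudVmClusterUpdates request is a signed GET against an endpoint of roughly the following form; confirm the exact path and parameters in the Database Service API reference:

GET https://database.<region_identifier>.oraclecloud.com/20160918/cloudVmClusters/<cloud_VM_cluster_OCID>/updates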

Patching an Exadata Cloud Service Instance Manually


This topic explains how to use the dbaascli utility to perform patching operations for Oracle Grid Infrastructure
and Oracle Database on an Exadata Cloud Service instance. The utility requires root or sudo administration
privileges.
Note:

You must update the cloud-specific tooling on all the compute nodes in your
Exadata Cloud Service instance before performing the following procedures.
For more information, see Updating an Exadata Cloud Service Instance on
page 1275.
Prerequisites
Patches are stored in Oracle Cloud Infrastructure Object Storage, so the Exadata Cloud Service instance requires
access to that service. To enable this access, Oracle recommends using a service gateway with the VCN. For more
information, see Network Setup for Exadata Cloud Service Instances on page 1233. In that topic, pay particular
attention to:
• Service Gateway for the VCN on page 1238
• Node Access to Object Storage: Static Route on page 1237
• Backup egress rule: Allows access to Object Storage on page 1243
Managing Patches
To list available patches
You can produce a list of available patches using the cswlib list subcommand of the dbaascli utility:
• Connect to the compute node as the opc user.
For detailed instructions, see Connecting to an Exadata Cloud Service Instance on page 1270.


• Start a root-user command shell:

$ sudo -s
#
• Execute the following command:

# dbaascli cswlib list

Example:

# dbaascli cswlib list


DBAAS CLI version 20.1.3.3.0
Executing command cswlib list

INFO : Log file => /var/opt/oracle/log/list/list_2020-11-17_18:14:53.802476231267.log
INFO : dbimage fixup executed.

############ List of Available BP #############


-APR2020 (For DB Versions 18000 12201 12102 11204 19000)
-JUL2020 (For DB Versions 18000 12201 12102 11204 19000)
-OCT2020 (For DB Versions 18000 12201 12102 11204 19000)

############ List of Available NONCDB BP #######


-APR2020 (For DB Versions 12102 19000)
-JUL2020 (For DB Versions 12102 19000)
-OCT2020 (For DB Versions 12102 19000)
• Exit the root-user command shell.

# exit
$

To check prerequisites before applying a patch


You can perform the prerequisites-checking operation using the patch db prereq subcommand of the
dbaascli utility:
1. Connect to the compute node as the opc user.
2. Start a root-user command shell:

$ sudo -s
#
3. Execute the following command:

# dbaascli dbhome patch --oracleHome <dbhome_path> --targetVersion <Oracle_Database_version> --executePrereqs

where:
• --oracleHome identifies the path of the Database Home to be prechecked.
• --targetVersion specifies the target Oracle Database version to precheck.
Example:

# dbaascli dbhome patch --oracleHome /u02/app/oracle/product/19.0.0.0/dbhome_2 --targetVersion 19.9.0.0 --executePrereqs
DBAAS CLI version 20.1.3.3.0
Executing command dbhome patch --oracleHome /u02/app/oracle/
product/19.0.0.0/dbhome_2 --targetVersion 19.9.0.0 --executePrereqs
-----------------


Setting up parameters...
Patch Parameters setup successful.
-----------------
Validating Inputs.
Successfully Validated Inputs.
-----------------
Downloading DB Gold Image. This may take a while...
INFO : dbimage fixup executed.
Successfully downloaded gold image.
-----------------
Loading PILOT...
Session ID of the current execution is: 2
Log file location: /var/opt/oracle/log/dbHomePatch/pilot_2020-11-13_03-49-29-PM
-----------------
Running initialization job
Completed initialization job
-----------------
Running validate_nodes job
Completed validate_nodes job
-----------------
Running validate_oracle_home job
Completed validate_oracle_home job
-----------------
Running validate_source_version job
Completed validate_source_version job
-----------------
Running validate_diag_perm job
Completed validate_diag_perm job
-----------------
Running validate_backup_loc job
Completed validate_backup_loc job
-----------------
Running validate_gold_image_url job
Completed validate_gold_image_url job
-----------------
Running validate_disk_space job
Completed validate_disk_space job
-----------------
Running download_gold_image job
-----------------
Running validate_gold_image job
Completed validate_gold_image job
-----------------
Running validate_patch_across_nodes job
Completed validate_patch_across_nodes job
-----------------
Running run_installer_prereqs job
Completed run_installer_prereqs job
-----------------
Running check_patch_conflict job
Completed check_patch_conflict job
-----------------
Running remove_unzip_loc job
Completed remove_unzip_loc job
DBHome Patching Prereqs Execution Successful.

To learn more about the dbhome patch subcommand, including available options, execute the following
command:

# dbaascli dbhome patch ?


4. Exit the root-user command shell:

# exit
$

To apply a patch to a database


You can apply a patch by using the dbhome patch subcommand of the dbaascli utility.
The patching operation:
• Can be used to patch some or all of your compute nodes using one command.
• Coordinates multi-node patching in a rolling manner.
• Can execute patch-related SQL after patching all the compute nodes in the cluster.
• Can patch an empty Database Home, or a Database Home containing one or more databases.
Note:

Oracle recommends patching an empty Database Home and then moving
databases into the home using the dbaascli db move command.
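As a sketch of that recommended workflow, you would patch the empty home first and then move each database into it. Here, <empty_dbhome_path> is a placeholder, and the options of the db move subcommand are not covered in this topic; list them with the utility's built-in help, following the "?" convention used elsewhere in this topic:

# dbaascli dbhome patch --oracleHome <empty_dbhome_path> --targetVersion <Oracle_Database_version>
# dbaascli db move ?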
To patch a Database Home (dbhome):
1. Connect to the compute node as the opc user.
2. Start a root-user command shell:

$ sudo -s
#
3. Execute the following command:

# dbaascli dbhome patch --oracleHome <dbhome_path> --targetVersion <Oracle_Database_version>

where:
• --oracleHome identifies the path of the Database Home to be patched.
• --targetVersion specifies the target Oracle Database version to use for patching.
• --run_datasql 1 (optional) instructs the command to execute patch-related SQL commands.
Note:

• Patch-related SQL should only be executed after all of the compute
nodes are patched. Therefore, take care not to specify this argument
if you are patching a subset of nodes and further nodes remain to be
patched.
• This argument can only be specified in conjunction with a patching
operation on a set of compute nodes. Therefore, if you have patched
all of your nodes and you did not specify this argument, you will
need to manually execute the SQL commands associated with the
patch. Refer to the patch documentation for further details.
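For instance, if a patching run covers the final (or only) set of nodes in the cluster, the patch-related SQL can be executed in the same operation by appending the flag. This is a sketch using the placeholders shown above:

# dbaascli dbhome patch --oracleHome <dbhome_path> --targetVersion <Oracle_Database_version> --run_datasql 1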
Example:


# dbaascli dbhome patch --oracleHome /u02/app/oracle/product/19.0.0.0/
dbhome_2 --targetVersion 19.9.0.0
DBAAS CLI version 20.1.3.3.0
Executing command dbhome patch --oracleHome /u02/app/oracle/
product/19.0.0.0/dbhome_2 --targetVersion 19.9.0.0
-----------------
Setting up parameters...
Patch Parameters setup successful.
-----------------
Validating Inputs.
Successfully Validated Inputs.
-----------------
Downloading DB Gold Image. This may take a while...
INFO : dbimage fixup executed.
Successfully downloaded gold image.
-----------------
Loading PILOT...
Session ID of the current execution is: 4
Log file location: /var/opt/oracle/log/dbHomePatch/
pilot_2020-11-13_04-49-49-PM
-----------------
Running initialization job
Completed initialization job
-----------------
Running validate_nodes job
Completed validate_nodes job
-----------------
Running validate_oracle_home job
Completed validate_oracle_home job
-----------------
Running validate_source_version job
Completed validate_source_version job
-----------------
Running validate_diag_perm job
Completed validate_diag_perm job
-----------------
Running validate_backup_loc job
Completed validate_backup_loc job
-----------------
Running validate_gold_image_url job
Completed validate_gold_image_url job
-----------------
Running validate_disk_space job
Completed validate_disk_space job
-----------------
Running download_gold_image job
-----------------
Running validate_gold_image job
Completed validate_gold_image job
-----------------
Running validate_patch_across_nodes job
Completed validate_patch_across_nodes job
-----------------
Running run_installer_prereqs job
Completed run_installer_prereqs job
-----------------
Running check_patch_conflict job
Completed check_patch_conflict job
-----------------
Running acquire_lock job
Completed acquire_lock job
-----------------
Running copy_image-ssexan-42hss1 job
Completed copy_image-ssexan-42hss1 job
-----------------
Running detach_home-ssexan-42hss1 job
Completed detach_home-ssexan-42hss1 job
-----------------
Running move_home-ssexan-42hss1 job
Completed move_home-ssexan-42hss1 job
-----------------
Running move_image_to_home_loc-ssexan-42hss1 job
Completed move_image_to_home_loc-ssexan-42hss1 job
-----------------
Running setup_db_home-ssexan-42hss1 job
Completed setup_db_home-ssexan-42hss1 job
-----------------
Running root_script_execution-ssexan-42hss1 job
Completed root_script_execution-ssexan-42hss1 job
-----------------
Running copy_home_to_backup_loc-ssexan-42hss1 job
Completed copy_home_to_backup_loc-ssexan-42hss1 job
-----------------
Running copy_config_files-ssexan-42hss1 job
Completed copy_config_files-ssexan-42hss1 job
-----------------
Running copy_image-ssexan-42hss2 job
Completed copy_image-ssexan-42hss2 job
-----------------
Running move_home-ssexan-42hss2 job
Completed move_home-ssexan-42hss2 job
-----------------
Running setup_db_home-ssexan-42hss2 job
Completed setup_db_home-ssexan-42hss2 job
-----------------
Running root_script_execution-ssexan-42hss2 job
Completed root_script_execution-ssexan-42hss2 job
-----------------
Running copy_config_files-ssexan-42hss2 job
Completed copy_config_files-ssexan-42hss2 job
-----------------
Running release_lock job
Completed release_lock job
-----------------
Running backup_old_home job
Completed backup_old_home job
-----------------
Running cleanup job
Completed cleanup job
DBHome Patching Successful.

To learn more about the dbhome patch subcommand, including available options, execute the following
command:

# dbaascli dbhome patch ?


4. Exit the root-user command shell:

# exit
$

To apply a patch to the Oracle Grid Infrastructure


You can apply a patch to your Oracle Grid Infrastructure by using the patch db apply subcommand of the
dbaascli utility.
To perform the patching operation:
1. Connect to the compute node as the opc user.
2. Start a root-user command shell:

$ sudo -s
#

3. Execute the following command:

# dbaascli patch db apply --patchid <patchid> --dbnames grid

where:
• --patchid identifies the patch to be applied.
• --dbnames specifies "grid" to indicate that the Grid Infrastructure is to be patched.
Example:

# dbaascli patch db apply --patchid 29708703-GI --dbnames grid

To learn more about the patch db apply subcommand, including available options, execute the following
command:

# dbaascli patch db apply ?


4. Exit the root-user command shell:

# exit
$

To list applied patches


You can use the opatch utility to determine which patches have been applied to an Oracle Database or Oracle Grid
Infrastructure installation.
To produce a list of applied patches for an Oracle Database installation:
1. Connect to a compute node as the oracle user.
2. Go to the Oracle user's home directory:

$ cd
3. Ensure that you are in the Oracle user's home directory:

$ pwd

/home/oracle
4. Source the environment file.
Example (using the environment file for a database named "DB18"):

$ . DB18.env
5. Execute the opatch command with the lspatches option:

$ opatch lspatches

To produce a list of applied patches for Oracle Grid Infrastructure:


1. Connect to a compute node as the opc user.
2. Become the grid user:

$ sudo -s
# su - grid


3. Execute the opatch command with the lspatches option:

$ opatch lspatches

To roll back or resume a patching operation


You can roll back a patch by using the dbaascli utility's dbhome patch --rollBack
operation. This operation:
• Can be used to roll back a patch on some or all of your compute nodes using one command.
• Coordinates multi-node operations in a rolling manner.
• Can execute rollback-related SQL after rolling back the patch on all the compute nodes in the cluster.
You can resume a patching operation by using the dbaascli utility's dbhome patch --resume
operation.
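For instance, an interrupted patching run can typically be resumed by re-issuing the command with the same oracleHome and targetVersion values plus --resume. This is a sketch only; use dbaascli dbhome patch ? to confirm the full option list, including any session-related options:

# dbaascli dbhome patch --oracleHome <dbhome_path> --targetVersion <Oracle_Database_version> --resume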
To perform a patch rollback operation:
1. Connect to the compute node as the opc user.
2. Start a root-user command shell:

$ sudo -s
#
3. Execute the following command:

# dbaascli dbhome patch --oracleHome <dbhome_path> --targetVersion <Oracle_Database_version> --rollBack

where:
• --oracleHome identifies the path of the Database Home to be patched.
• --targetVersion specifies the target Oracle Database version to use for patching.
• --instanceN specifies a compute node and one or more Oracle home directories that are subject to the
rollback operation. In this context, an Oracle home directory may be an Oracle Database home directory or the
Oracle Grid Infrastructure home directory.
• --dbnames specifies the name of the database to which you want to apply the rollback operation.
Example:

# dbaascli dbhome patch --oracleHome /u02/app/oracle/product/19.0.0.0/
dbhome_2 --targetVersion 19.7.0.0 --rollBack
DBAAS CLI version 20.1.3.3.0
Executing command dbhome patch --oracleHome /u02/app/oracle/
product/19.0.0.0/dbhome_2 --targetVersion 19.7.0.0 --rollBack
-----------------
Setting up parameters...
Patch Parameters setup successful.
-----------------
Validating Inputs.
Successfully Validated Inputs.
-----------------
Loading PILOT...
Session ID of the current execution is: 5
Log file location: /var/opt/oracle/log/dbHomePatch/
pilot_2020-11-13_05-14-30-PM
-----------------
Running initialization job
Completed initialization job
-----------------
Running validate_nodes job
Completed validate_nodes job
-----------------
Running validate_oracle_home job
Completed validate_oracle_home job
-----------------
Running validate_diag_perm job
Completed validate_diag_perm job
-----------------
Running validate_backup_loc job
Completed validate_backup_loc job
-----------------
Running validate_gold_image_url job
Completed validate_gold_image_url job
-----------------
Running validate_disk_space job
Completed validate_disk_space job
-----------------
Running download_gold_image job
-----------------
Running validate_gold_image job
Completed validate_gold_image job
-----------------
Running validate_patch_across_nodes job
Completed validate_patch_across_nodes job
-----------------
Running run_installer_prereqs job
Completed run_installer_prereqs job
-----------------
Running acquire_lock job
Completed acquire_lock job
-----------------
Running copy_image-ssexan-42hss1 job
Completed copy_image-ssexan-42hss1 job
-----------------
Running detach_home-ssexan-42hss1 job
Completed detach_home-ssexan-42hss1 job
-----------------
Running move_home-ssexan-42hss1 job
Completed move_home-ssexan-42hss1 job
-----------------
Running move_image_to_home_loc-ssexan-42hss1 job
Completed move_image_to_home_loc-ssexan-42hss1 job
-----------------
Running setup_db_home-ssexan-42hss1 job
Completed setup_db_home-ssexan-42hss1 job
-----------------
Running root_script_execution-ssexan-42hss1 job
Completed root_script_execution-ssexan-42hss1 job
-----------------
Running copy_home_to_backup_loc-ssexan-42hss1 job
Completed copy_home_to_backup_loc-ssexan-42hss1 job
-----------------
Running copy_config_files-ssexan-42hss1 job
Completed copy_config_files-ssexan-42hss1 job
-----------------
Running copy_image-ssexan-42hss2 job
Completed copy_image-ssexan-42hss2 job
-----------------
Running move_home-ssexan-42hss2 job
Completed move_home-ssexan-42hss2 job
-----------------
Running setup_db_home-ssexan-42hss2 job
Completed setup_db_home-ssexan-42hss2 job
-----------------
Running root_script_execution-ssexan-42hss2 job
Completed root_script_execution-ssexan-42hss2 job
-----------------
Running copy_config_files-ssexan-42hss2 job
Completed copy_config_files-ssexan-42hss2 job
-----------------
Running release_lock job
Completed release_lock job
-----------------
Running cleanup job
Completed cleanup job
DBHome Patching Successful.

To learn more about the dbhome patch subcommand, including available options, execute the following
command:

# dbaascli dbhome patch ?


4. Exit the root-user command shell:

# exit
$

Upgrading Exadata Grid Infrastructure


This topic describes how to upgrade the Oracle Grid Infrastructure (GI) on an Exadata cloud VM cluster using
the Oracle Cloud Infrastructure Console or API. Upgrading allows you to provision Oracle Database Homes and
databases that use the most current Oracle Database software. For more information on Exadata cloud VM clusters
and the new Exadata resource model, see Overview of X8M Scalable Exadata Infrastructure on page 1228.


Required IAM Policy


To use Oracle Cloud Infrastructure, you must be granted security access in a policy by an administrator. This access
is required whether you're using the Console or the REST API with an SDK, CLI, or other tool. If you get a message
that you don’t have permission or are unauthorized, verify with your administrator what type of access you have and
which compartment you should work in.
For administrators: The policy in Let database admins manage Oracle Cloud database systems on page 2158 lets the
specified group do everything with databases and related Database resources.
If you're new to policies, see Getting Started with Policies on page 2143 and Common Policies on page 2150. If
you want to dig deeper into writing policies for databases, see Details for the Database Service on page 2251.
Prerequisites
To upgrade your GI to Oracle Database 19c, you must be using the Oracle Linux 7 operating system for your VM
cluster. For more information on upgrading the operating system, see the following documentation:
• How to update the Exadata System Software (DomU) to 19 from 18 on the Exadata Cloud Service in OCI (My
Oracle Support Doc ID 2521053.1).
About Upgrading Grid Infrastructure
Upgrading the Oracle Grid Infrastructure (GI) on a VM cluster involves upgrading all the compute nodes in the
instance. The upgrade is performed in a rolling fashion, with only one node being upgraded at a time.
• Oracle recommends running an upgrade precheck to identify and resolve any issues that would prevent a
successful upgrade.
• You can monitor the progress of the upgrade operation by viewing the associated work requests.
• If you have an Exadata infrastructure maintenance operation scheduled to start within the next 24 hours, the GI
upgrade feature is not available.
• During the upgrade, you cannot perform other management operations such as starting, stopping or rebooting
nodes, scaling CPU, provisioning or managing Database Homes or databases, restoring a database, or editing
IORM settings. The following Data Guard operations are not allowed on the VM cluster undergoing a GI upgrade:
• Enable Data Guard
• Switchover
• Failover to the database using the VM cluster (a failover operation to a standby on another VM cluster is
possible)
Using the Console
You can use the Console to perform a precheck prior to upgrading your Oracle Grid Infrastructure (GI), and to
perform the GI upgrade operation.
To precheck your cloud VM cluster prior to upgrading
1. Open the navigation menu. Under Oracle Database, click Bare Metal, VM, and Exadata.
2. Choose your Compartment.
3. Click Exadata VM Clusters.
4. In the list of cloud VM clusters, click the name of the cluster you want to patch to display the cluster details.
5. Under Version, click the View Patches link beside the Updates Available field.
6. Click Updates to view the list of available patches and upgrades.
7. Click the Actions icon (three dots) at the end of the row listing the Oracle Grid Infrastructure (GI) upgrade, then
click Run Precheck.
8. In the Confirm dialog, confirm that you want to run the precheck to begin the precheck operation.
To upgrade the Oracle Grid Infrastructure of a cloud VM cluster
1. Open the navigation menu. Under Oracle Database, click Bare Metal, VM, and Exadata.
2. Choose your Compartment.
3. Click Exadata VM Clusters.


4. In the list of cloud VM clusters, click the name of the cluster you want to patch to display the cluster details.
5. Under Version, click the View Patches link beside the Updates Available field.
6. Click Updates to view the list of available patches and upgrades.
7. Click the Actions icon (three dots) at the end of the row listing the Oracle Grid Infrastructure (GI) upgrade, then
click Upgrade Grid Infrastructure.
8. In the Upgrade Grid Infrastructure dialog, confirm you want to upgrade the GI by clicking Upgrade Grid
Infrastructure. If you haven't run a precheck, you have the option of clicking Run Precheck in this dialog to
precheck your cloud VM cluster prior to the upgrade.
Using the API
For information about using the API and signing requests, see REST APIs on page 4409 and Security Credentials on
page 181. For information about SDKs, see Software Development Kits and Command Line Interface on page 4262.
Use these API operations to upgrade the Oracle Grid Infrastructure in a cloud VM cluster and view the cluster's
update history.
• ListCloudVmClusterUpdates
• ListCloudVmClusterUpdateHistoryEntries
• GetCloudVmClusterUpdate
• GetCloudVmClusterUpdateHistoryEntry
• UpdateVmCluster
For the complete list of APIs for the Database service, see Database Service API.

Upgrading Exadata Databases


Note:

This topic applies only to Exadata Cloud Service instances using the new
resource model. For information on converting an Exadata DB system to
the new resource model, see Switching an Exadata DB System to the New
Resource Model and APIs on page 1229.
This topic describes the procedures to upgrade an Exadata database instance to Oracle Database 19c (Long Term
Release) by using the Console and the API. The upgrade is accomplished by moving the Exadata database to a
Database Home that uses the target software version.
Prerequisites
The following are required in order to upgrade an Exadata Oracle Database instance:
• The Exadata Cloud Service instance system software must use Oracle Linux 7 (OL7). See How to update the
Exadata System Software (DomU) to 19 from 18 on the Exadata Cloud Service in OCI for instructions on
manually updating the operating system.
• The Oracle Grid Infrastructure must be version 19c. See Upgrading Exadata Grid Infrastructure on page 1300
for instructions on using the Oracle Cloud Infrastructure Console or API to upgrade Grid Infrastructure. If patches
are available for your Grid Infrastructure, Oracle recommends applying them prior to performing a database
upgrade.
• You must have an available Oracle Database Home that uses one of the two most recent versions of Oracle Database
19c available in Oracle Cloud Infrastructure. See To create a new Database Home in an existing Exadata Cloud
Service instance on page 1331 for information on creating a Database Home. You can use Oracle-published
software images or a custom database software image based on your patching requirements to create Database
Homes.
• You must ensure that all pluggable databases in the container database that is being upgraded can be opened.
Pluggable databases that cannot be opened by the system during the upgrade can cause an upgrade failure.
Your Oracle database must be configured with the following settings in order to upgrade:
• The database must be in archivelog mode
• The database must have flashback enabled


See the Oracle Database documentation for your database's release version to learn more about these settings.
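To confirm both settings before you schedule the upgrade, you can query V$DATABASE as a privileged user. Output similar to the following indicates that the database meets both requirements (the values shown are the desired ones, not necessarily what your database currently reports):

SQL> select log_mode, flashback_on from v$database;

LOG_MODE     FLASHBACK_ON
------------ ------------------
ARCHIVELOG   YES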
About Upgrading a Database
For database software version upgrades, note the following:
• Database upgrades involve database downtime. Keep this in mind when scheduling your upgrade.
• Oracle recommends that you back up your database and test the new software version on a test system or a cloned
version of your database before you upgrade a production database. See To create an on-demand full backup of a
database on page 1323 for information on creating an on-demand manual backup.
• Oracle recommends running an upgrade precheck operation for your database prior to attempting an upgrade
so that you can discover any issues that need mitigation prior to the time you plan to perform the upgrade. The
precheck operation does not affect database availability and can be performed at any time that is convenient for
you.
• If your database uses Data Guard, you will need to disable or remove the Data Guard association prior to
upgrading.
• An upgrade operation cannot take place while an automatic backup operation is underway. Before upgrading,
Oracle recommends disabling automatic backups and performing a manual backup. See To configure automatic
backups for a database on page 1323 and To create an on-demand full backup of a database on page 1323 for
more information.
• After upgrading, you cannot use automatic backups taken prior to the upgrade to restore the database to an earlier
point in time.
• If you are upgrading a database that uses version 11.2 software, the resulting version 19c database will be a non-
container database (non-CDB).

How the Upgrade Operation Is Performed by the Database Service


During the upgrade process, the Database service does the following:
• Executes an automatic precheck. This allows the system to identify issues needing mitigation and to stop the
upgrade operation.
• Sets a guaranteed restore point, enabling it to perform a flashback in the event of an upgrade failure.
• Moves the database to a user-specified Oracle Database Home that uses the desired target software version.
• Runs the Database Upgrade Assistant (DBUA) software to perform the upgrade.

Rolling Back an Unsuccessful Upgrade


If your upgrade does not complete successfully, you have the option of performing a rollback. Details about the
failure are displayed on the Database Details page in the Console, allowing you to analyze and resolve the issues
causing the failure. A rollback resets your database to the state prior to the upgrade. All changes to the database
made during and after the upgrade will be lost. The rollback option is provided in a banner message displayed on the
database details page of a database following an unsuccessful upgrade operation. See To roll back a failed database
upgrade on page 1304 for more information.

After Your Upgrade Is Complete


After a successful upgrade, note the following:
• Check that automatic backups are enabled for the database if you disabled them prior to upgrading. See To
configure automatic backups for a database on page 1323 for more information.
• Edit the Oracle Database COMPATIBLE parameter to reflect the new Oracle Database software version, as shown
in the sketch after this list. See What Is Oracle Database Compatibility? for more information.
• If your database uses a <database_name>.env file, ensure that the variables in the file have been updated to
point to the 19c Database Home. These variables should be automatically updated during the upgrade process.
• If you are upgrading a non-container database to Oracle Database version 19c, you can convert the database to a
pluggable database after upgrading. See How to Convert Non-CDB to PDB (Doc ID 2288024.1) for instructions
on converting your database to a pluggable database.


• If your old Database Home is empty and will not be reused, you can remove it. See To delete a Database Home on
page 1333 for more information.
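For example, raising COMPATIBLE after a successful upgrade to 19c might look like the following. This is a minimal sketch, assuming an spfile-managed database; note that raising COMPATIBLE cannot be reversed:

SQL> alter system set compatible='19.0.0' scope=spfile sid='*';

Restart the database (for example, with srvctl stop database and srvctl start database) for the new setting to take effect.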
Using the Console
You can use the Console to:
• Upgrade your database
• Roll back an unsuccessful upgrade
• View the update history of a database that has been upgraded
Oracle recommends that you use the precheck action to ensure that your database has met the requirements for the
upgrade operation.
To upgrade or precheck an Exadata database
1. Open the navigation menu. Under Oracle Database, click Bare Metal, VM, and Exadata.
2. Choose your Compartment.
3. Under Exadata at Oracle Cloud, click Exadata VM Clusters. In the list of VM clusters, click the name of the VM
cluster that contains the database you want to upgrade.
Note:

If your database is in an Exadata Cloud Service instance that does not use
the new Exadata resource model, you will need to switch the instance to
the new model before you can upgrade your database.
4. In the list of databases on the details page of the VM cluster, click the name of the database you want to upgrade
to view the Database Details page.
5. Click More Actions, then Upgrade.
6. In the Upgrade Database dialog, select the following:

• Oracle Database version: The drop-down selector lists only Oracle Database versions that are compatible
with an upgrade from the current software version the database is using. The target software version must be
higher than the database's current version.
• Target Database Home: Select a Database Home for your database. The list of Database Homes is limited to
those homes using the most recent versions of Oracle Database 19c software. Moving the database to the new
Database Home results in the database being upgraded to the major release version and patching level of the
new Database Home.
7. Click one of the following:
• Run Precheck: This option starts an upgrade precheck to identify any issues with your database that need
mitigation before you perform an upgrade.
• Upgrade Database: This option starts the upgrade operation. Oracle recommends performing an upgrade only
after you have performed a successful precheck on the database.
To roll back a failed database upgrade
1. Open the navigation menu. Under Oracle Database, click Bare Metal, VM, and Exadata.
2. Choose your Compartment.
3. Under Exadata at Oracle Cloud, click Exadata VM Clusters. In the list of VM clusters, click the name of the VM
cluster that contains the database with the failed upgrade.
4. Find the database that was unsuccessfully upgraded, and click its name to display details about it. The database
should display a banner at the top of the details page that includes a Rollback button and details about what issues
caused the upgrade failure.
5. Click Rollback. In the Confirm rollback dialog, confirm that you want to initiate a rollback to the previous
Oracle Database version by clicking Rollback.
To view the upgrade history of a database
1. Open the navigation menu. Under Oracle Database, click Bare Metal, VM, and Exadata.
2. Choose your Compartment.


3. Under Exadata at Oracle Cloud, click Exadata VM Clusters. In the list of VM clusters, click the name of the VM
cluster that contains the database you want to upgrade.
Note:

If your database is in an Exadata Cloud Service instance that does not use
the new Exadata resource model, you will need to switch the instance to
the new model before you can upgrade your database.
4. In the list of databases on the details page of the VM cluster, click the name of the database for which you want to
view the upgrade history.
5. On the Database Details page, under Database Version, click the View link that is displayed for databases that
have been upgraded. This link does not appear for databases that have not been upgraded.
The Updates History page is displayed. The table displayed on this page shows precheck and upgrade operations
performed on the database.
Using the API
For information about using the API and signing requests, see REST APIs on page 4409 and Security Credentials on
page 181. For information about SDKs, see Software Development Kits and Command Line Interface on page 4262.
Use the following APIs to manage database upgrades:
• ListDatabaseUpgradeHistoryEntries
• UpgradeDatabase
For the complete list of APIs for the Database service, see Database Service API.
Note:

When using the UpgradeDatabase API to upgrade an Exadata Cloud


Service database, you must specify DB_HOME as the upgrade source.

Monitoring an Exadata Cloud Service Database


This topic explains how to access Enterprise Manager Database Express and Enterprise Manager Database Control,
which are web-based tools for managing Oracle Database.
Accessing Enterprise Manager Database Express 12c
Enterprise Manager Database Express 12c (EM Express) is available on Exadata Cloud Service databases created
using Oracle Database 12c Release 1 (12.1) or later.
How you access EM Express depends on whether you want to manage a CDB or PDB.
• To manage the CDB. When a database is created, the Database service automatically sets port 5500 on the
deployment’s compute nodes for EM Express access to the CDB.
• To manage a PDB. For a database using Oracle Database 12.2 or later, a single port (known as the global port) is
automatically set on the compute nodes. The global port lets you use EM Express to connect to all of the PDBs in
the CDB using the HTTPS port for the CDB.
For a database using Oracle Database 12.1, you must manually set a port on the compute nodes for each PDB you
want to manage using EM Express.
For both CDBs and PDBs, you must add the port to a security list as described in Updating the Security List on page
1306.
To confirm the port that is in use for a specific database, connect to the database as a database administrator and
execute the query shown in the following example:

SQL> select dbms_xdb_config.getHttpsPort() from dual;

DBMS_XDB_CONFIG.GETHTTPSPORT()
------------------------------
5502

Setting the Port for EM Express to Manage a PDB (Oracle Database 12.1 Only)
In Oracle Database 12c Release 1, a unique HTTPS port must be configured for the root container (CDB) and each
PDB that you manage using EM Express.
To configure an HTTPS port so that you can manage a PDB with EM Express:
1. Invoke SQL*Plus and log in to the PDB as the SYS user with SYSDBA privileges.
2. Execute the DBMS_XDB_CONFIG.SETHTTPSPORT procedure.

SQL> exec dbms_xdb_config.sethttpsport(port-number)
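For example, to assign port 5503 (an arbitrary, illustrative port number) to the current PDB:

SQL> exec dbms_xdb_config.sethttpsport(5503)

Remember to also open the same port in the security list, as described in Updating the Security List on page 1306.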

Accessing EM Express
Before you access EM Express, add the port to the security list. See Updating the Security List on page 1306.
After you update the security list, you can access EM Express by directing your browser to the URL
https://<node-ip-address>:<port>/em, where node-ip-address is the public IP address of the
compute node hosting EM Express, and port is the EM Express port used by the database.
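For example, with a node public IP address of 203.0.113.10 (an illustrative address) and the port 5502 returned by the earlier query, the URL would be:

https://203.0.113.10:5502/em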
Accessing Enterprise Manager 11g Database Control
Enterprise Manager 11g Database Control (Database Control) is available on Exadata Cloud Service databases
created using Oracle Database 11g Release 2. Database Control is allocated a unique port number for each database
deployment. By default, access to Database Control is provided using port 1158 for the first deployment. Subsequent
deployments are allocated ports in a range starting with 5500, 5501, 5502, and so on.
You can confirm the Database Control port for a database by searching for REPOSITORY_URL in the
$ORACLE_HOME/host_sid/sysman/config/emd.properties file.
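For example, as the oracle user with the database environment set, you can search the file directly (the host_sid portion of the path varies with the host name and database SID):

$ grep REPOSITORY_URL $ORACLE_HOME/host_sid/sysman/config/emd.properties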
Before you access Database Control, add the port for the database to the security list associated with the Exadata DB
system's client subnet. For more information, see Updating the Security List on page 1306.
After you update the security list, you can access Database Control by directing your browser to the URL
https://<node-ip-address>:<port>/em, where node-ip-address is the public IP address of the
compute node hosting Database Control, and port is the Database Control port used by the database.
Updating the Security List
Before you can access EM Express or Database Control, you must add the port for the database to the security list
associated with the data (client) subnet used by the cloud VM cluster (for systems using the new resource model) or
the DB system. To update an existing security list, complete the following steps using the Console:
1. Open the navigation menu. Under Oracle Database, click Bare Metal, VM, and Exadata.
2. Choose your Compartment.
3. Navigate to the cloud VM cluster or DB system that contains the security list you want to update:
Cloud VM clusters (new resource model): Under Exadata at Oracle Cloud, click Exadata VM Clusters. In the
list of VM clusters, find the VM cluster you want to manage and click its highlighted name to view the
details page for the cluster.
DB systems: Under Bare Metal, VM, and Exadata, click DB Systems. In the list of DB systems, find the Exadata
DB system you want to manage, and then click its name to display details about it.
4. Note the Client Subnet name of the cloud VM cluster or DB system, and then click its Virtual Cloud Network.
5. Locate the subnet in the list, and then click its security list under Security Lists.
6. Click Edit All Rules and add an ingress rule with source type=CIDR, source CIDR=<source CIDR>,
protocol=TCP, and port=<port number or port range>.
The source CIDR should be the CIDR block that covers the client hosts that will connect on the port you open.


For detailed information about creating or updating a security list, see Security Lists on page 2876.

Creating and Managing Exadata Databases

This topic describes creating and managing Oracle Databases on an Exadata Cloud Service instance.
When you create an Exadata Cloud Service instance, an initial Database Home and database are created. You can
create additional Database Homes and databases at any time by using the Console or the Oracle Cloud Infrastructure
API.
When you add a database to a VM cluster or a DB system resource on an Exadata instance, the database versions you
can select from depend on the current patch level of that resource. You may have to patch your VM cluster or DB
system to add later database versions.
After you provision a database, you can move it to another Database Home. Consolidating databases under the
same home can facilitate management of these resources. All databases in a given Database Home share the Oracle
Database binaries and therefore, have the same database version. The Oracle-recommended way to patch a database
to a version that is different from the current version is to move the database to a home running the target version. For
information about patching, see Patching an Exadata Cloud Service Instance on page 1285.
Note:

When provisioning databases, make sure your VM cluster or DB system has


enough OCPUs enabled to support the total number of database instances on
the system. Oracle recommends the following general rule: for each database,
enable 1 OCPU per node. See To scale CPU cores in an Exadata Cloud
Service cloud VM cluster or DB system on page 1259 for information on
scaling your OCPU count up or down.
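As an illustrative calculation of this rule (not an Oracle sizing guarantee): a two-node VM cluster hosting six databases would need at least 6 OCPUs enabled on each node, or 12 OCPUs in total across the cluster.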
When you create an Exadata database, you can choose to encrypt the database using your own encryption keys that
you manage. You can rotate encryption keys, periodically, to maintain security compliance and, in cases of personnel
changes, to disable access to a database.
Note:

• The encryption key you use must be AES-256.


• To ensure that your Exadata database uses the most current version of
the Vault encryption key, rotate the key from the database details page on
the Oracle Cloud Infrastructure Console. Do not use the Vault service.
• You can only use Oracle-managed encryption keys if your database is
enabled with Oracle Data Guard.
If you want to use your own encryption keys to encrypt a database that you create, then you must create a dynamic
group and assign specific policies to the group for customer-managed encryption keys. See Managing Dynamic
Groups on page 2441 and Let security admins manage vaults, keys, and secrets on page 2160. Additionally, see
To integrate customer-managed key management into Exadata Cloud Service on page 1284 if you need to update
customer-managed encryption libraries for the Vault service.
You can also add and remove databases, and perform other management tasks on a database by using command line
utilities. For information and instructions on how to use these utilities, see Creating and Managing Exadata Databases
Manually on page 1314.
Required IAM Policy
To use Oracle Cloud Infrastructure, you must be granted security access in a policy by an administrator. This access
is required whether you're using the Console or the REST API with an SDK, CLI, or other tool. If you get a message
that you don’t have permission or are unauthorized, verify with your administrator what type of access you have and
which compartment you should work in.
For administrators: The policy in Let database admins manage Oracle Cloud database systems on page 2158 lets the
specified group do everything with databases and related Database resources.


To enable management of customer-managed encryption keys, you must create a policy in the tenancy that allows a
particular dynamic group to do so, similar to the following:

allow dynamic-group dynamic_group_name to manage keys in tenancy

If you are new to policies, then see Getting Started with Policies on page 2143 and Common Policies on page
2150. If you want more information about writing policies for databases, then see Details for the Database Service
on page 2251.
Using the Console
To create a database in an existing Exadata Cloud Service instance
Note:

If IORM is enabled on the Exadata Cloud Service instance, the default


directive will apply to the new database and system performance might
be impacted. Oracle recommends that you review the IORM settings and
make applicable adjustments to the configuration after the new database is
provisioned.
1. Open the navigation menu. Under Oracle Database, click Bare Metal, VM, and Exadata.
2. Choose your Compartment.
3. Navigate to the cloud VM cluster or DB system you want to create the database in:
Cloud VM clusters (new resource model): Under Exadata at Oracle Cloud, click Exadata VM Clusters. In the
list of VM clusters, find the VM cluster you want to access and click its highlighted name to view the details page
for the cluster.
DB systems: Under Bare Metal, VM, and Exadata, click DB Systems. In the list of DB systems, find the Exadata
DB system you want to access, and then click its name to display details about it.
4. Click Create Database.
5. In the Create Database dialog, enter the following:
• Database name: The name for the database. The database name must begin with an alphabetic character and
can contain a maximum of eight alphanumeric characters. Special characters are not permitted.
• Database version: The version of the database. You can mix database versions on the Exadata DB system.
• PDB name: (Optional) For Oracle Database 12c (12.1.0.2) and later, you can specify the name of the
pluggable database. The PDB name must begin with an alphabetic character, and can contain a maximum of
eight alphanumeric characters. The only special character permitted is the underscore ( _).
• Database Home: The Oracle Database Home for the database. Choose the applicable option:
• Select an existing Database Home: The Database Home display name field allows you to choose the
Database Home from the existing homes for the database version you specified. If no Database Home with
that version exists, you must create a new one.
• Create a new Database Home: A database home will be created using the database version and the
Database Home display name you specified.
• Create administrator credentials: A database administrator SYS user will be created with the password you
supply.
• Username: SYS
• Password: Supply the password for this user. The password must meet the following criteria:
A strong password for SYS, SYSTEM, TDE wallet, and PDB Admin. The password must be 9 to 30
characters and contain at least two uppercase, two lowercase, two numeric, and two special characters. The
special characters must be _, #, or -. The password must not contain the username (SYS, SYSTEM, and so
on) or the word "oracle" either in forward or reversed order and regardless of casing.
• Confirm password: Re-enter the SYS password you specified.
• Select workload type: Choose the workload type that best suits your application:
• Online Transactional Processing (OLTP) configures the database for a transactional workload, with a
bias towards high volumes of random data access.
• Decision Support System (DSS) configures the database for a decision support or data warehouse
workload, with a bias towards large data scanning operations.
• Configure database backups: Specify the settings for backing up the database to Object Storage:

• Enable automatic backup: Check the check box to enable automatic incremental backups for this
database. If you are creating a database in a security zone compartment, you must enable automatic
backups.
• Backup retention period: If you enable automatic backups, you can choose one of the following preset
retention periods: 7 days, 15 days, 30 days, 45 days, or 60 days. The default selection is 30 days.
• Backup Scheduling: If you enable automatic backups, you can choose a two-hour scheduling window
to control when backup operations begin. If you do not specify a window, the six-hour default window
of 00:00 to 06:00 (in the time zone of the DB system's region) is used for your database. See Automatic
Incremental Backups for more information.
6. Click Show Advanced Options to specify advanced options for the database:
• Character set: The character set for the database. The default is AL32UTF8.
• National character set: The national character set for the database. The default is AL16UTF16.
• If you have permissions to create a resource, then you also have permissions to apply free-form tags to that
resource. To apply a defined tag, you must have permissions to use the tag namespace. For more information
about tagging, see Resource Tags on page 213. If you are not sure whether to apply tags, then skip this option
(you can apply tags later) or ask your administrator.
• If you are creating a database in an Exadata Cloud Service VM cluster, then you can choose to use encryption
based on encryption keys that you manage. By default, the database is configured using Oracle-managed
encryption keys. To configure the database with encryption based on encryption keys you manage:
a. Click the Encryption tab.
b. Select Use customer-managed keys. You must have a valid encryption key in Oracle Cloud Infrastructure
Vault service. See Let security admins manage vaults, keys, and secrets on page 2160.
Note:

Oracle only supports AES-256 encryption keys.


c. Choose a vault from the Vault in compartment drop-down. You can change the compartment by clicking
the CHANGE COMPARTMENT link.
d. Select an encryption key from the Master encryption key in compartment drop-down. You can
change the compartment containing the encryption key you want to use by clicking the CHANGE
COMPARTMENT link.
e. If you want to use an encryption key that you import into your vault, then select Choose the key version
and enter the OCID of the key you want to use in the Key version OCID field.
Note:

• Oracle supports customer-managed keys on databases after Oracle


Database 11g release 2 (11.2.0.4).
• If you choose to provide an OCID for the valid key version, then
ensure that the OCID corresponds to the key version you want to use.
7. Click Create Database.
After database creation is complete, the status changes from Provisioning to Available, and on the database details
page for the new database, the Encryption section displays the encryption key name and the encryption key OCID.


Caution:

Do not delete the encryption key from the vault. This causes any database
protected by the key to become unavailable.
To create a database from a backup
Before you begin, note the following:
• When you create a database from a backup, the availability domain is the same as the availability domain that
hosts the backup.
• The Oracle Database software version you specify must be the same or later version as that of the backed-up
database.
• If you are creating a database from an automatic backup, then you can choose any level 0 weekly backup, or
a level 1 incremental backup created after the most recent level 0 backup. For more information on automatic
backups, see Using the Console on page 1322
• If the backup being used to create a database is in a security zone compartment, the database cannot be created
in a compartment that is not in a security zone. See the Security Zone Policies topic for a full list of policies that
affect Database service resources.
1. Open the navigation menu. Under Oracle Database, click Bare Metal, VM, and Exadata.
2. Choose your Compartment.
3. Navigate to a backup.
• Standalone backups: Click Standalone Backups under Bare Metal, VM, and Exadata.
• Automatic backups: Navigate to the Database Details page of the database associated with the backup:
• Cloud VM clusters (new resource model): Under Exadata at Oracle Cloud, click Exadata VM Clusters.
In the list of VM clusters, find the VM cluster you want to access and click its highlighted name to view
the details page for the cluster.
• DB systems: Under Bare Metal, VM, and Exadata, click DB Systems. In the list of DB systems, find the
Exadata DB system you want to access, and then click its name to display details about it.
Click the name of the database associated with the backup that you will use to create the new database. Locate
the backup in the list of backups on the Database Details page.
4. Click the Actions icon (three dots) for the backup you chose.
5. Click Create Database. On the Create Database from Backup page, configure the database as follows.
6. In the Configure your DB system section:
• Backups created in cloud VM clusters: Choose a cloud VM cluster to run the database from the Select a VM
cluster drop-down list.
• Backups created in DB systems: Choose a shape from the Select a shape drop-down list, then choose a DB
system to run the database from the Select a DB system drop-down list.
7. In the Configure Database Home section:
• Select an existing Database Home: If you choose this option, make a selection from the Select a Database
Home drop-down list.
• Create a new Database home: If you choose this option, enter a name for the new Database Home in the
Database Home display name field.
8. In the Configure database section:
• In the Database name field, accept the default name or name the database.
The database name must begin with an alphabetic character and can contain a maximum of eight alphanumeric
characters. Special characters are not permitted.
• In the Password and Confirm password fields, enter and re-enter a password.
A strong password for SYS administrator must be 9 to 30 characters and contain at least two uppercase, two
lowercase, two numeric, and two special characters. The special characters must be _, #, or -. The password
must not contain the user name (SYS, SYSTEM, and so on) or the word "oracle" either in forward or reverse
order and regardless of casing.
9. In the Enter the source database's TDE wallet or RMAN password field, enter a password that matches either
the Transparent Data Encryption (TDE) wallet password or RMAN password for the source database.
10. Click Create Database.
To navigate to a list of backups for a particular database:
1. Click the DB system name that contains the specific database to display the DB System Details page.
2. From the list of databases, click the database name associated with the backup you want to use to display a list of
backups on the database details page. You can also access the list of backups for a database by clicking Backups
in the Resources section.
To navigate to the list of standalone backups for your current compartment:
1. Click Standalone Backups under Bare Metal, VM, and Exadata.
2. In the list of standalone backups, find the backup you want to use to create the database.
To move a database to another Database Home
1. Open the navigation menu. Under Oracle Database, click Bare Metal, VM, and Exadata.
2. Choose your Compartment.
3. Navigate to the database:
Cloud VM clusters (new resource model): Under Exadata at Oracle Cloud, click Exadata VM Clusters. In the
list of VM clusters, find the VM cluster you want to access and click its highlighted name to view the details page
for the cluster.
DB systems: Under Bare Metal, VM, and Exadata, click DB Systems. In the list of DB systems, find the Exadata
DB system you want to access, and then click its name to display details about it.
4. Click Move to Another Home.
5. Select the target Database Home.
6. Click Move Database.
7. Confirm the move operation.
The database will be stopped in the current home and then restarted in the destination home. While the database is
being moved, the Database Home status displays as Moving Database. When the operation completes, Database
Home is updated with the current home. If the operation is unsuccessful, the status of the database displays as
Failed, and the Database Home field provides information about the reason for the failure.
To terminate a database
You'll get the chance to back up the database prior to terminating it. This creates a standalone backup that can be used
to create a database later. We recommend that you create this final backup for any production (non-test) database.
Note:

Terminating a database removes all automatic incremental backups of the


database from Oracle Cloud Infrastructure Object Storage. However, all
full backups that were created on demand, including your final backup, will
persist as standalone backups.
You cannot terminate a database that is assuming the primary role in a Data Guard association. To terminate it, you
can switch it over to the standby role.
1. Open the navigation menu. Under Oracle Database, click Bare Metal, VM, and Exadata.
2. Choose your Compartment.


3. Navigate to the database.


X8M systems: Under Exadata at Oracle Cloud, click Exadata VM Clusters. In the list of cloud VM clusters, find
the VM cluster containing the database you want to manage and click its highlighted name to view the details
page for the cluster.
In the list of databases, click the highlighted name of the database you wish to manage. The Database Details page
is displayed.
X6, X7, or X8 systems: Under Bare Metal, VM, and Exadata, click DB Systems. In the list of DB systems, find the
Exadata DB system containing the database you want to manage and click its highlighted name to view the details
page for the DB system.
In the list of databases, click the highlighted name of the database you wish to manage. The Database Details page
is displayed.
4. Click More Actions, and then click Terminate.
5. In the confirmation dialog, indicate whether you want to back up the database before terminating it, and type the
name of the database to confirm the termination.
6. Click Terminate Database.
The database's status indicates Terminating.
To administer Vault encryption keys
After you provision a database in an Exadata DB system or VM cluster, you can rotate the Vault encryption key or
change the encryption management configuration for that database.
Note:

• To ensure that your Exadata database uses the most current version of the
Vault encryption key, rotate the key from the database details page on the
Oracle Cloud Infrastructure Console. Do not use the Vault service.
• You can rotate Vault encryption keys only on databases that are
configured with customer-managed keys.
• You can change encryption key management from Oracle-managed
keys to customer-managed keys but you cannot change from customer-
managed keys to Oracle-managed keys.
• If the database for which you are changing encryption key management is
using Oracle-managed keys and is enabled with Oracle Data Guard, then
you cannot change to customer-managed keys.
• Oracle supports administering encryption keys on databases after Oracle
Database 11g release 2 (11.2.0.4).
1. Open the navigation menu. Under Oracle Database, click Bare Metal, VM, and Exadata.
2. Choose your compartment from the Compartment drop-down.
3. Navigate to the cloud VM cluster or DB system that contains the database for which you want to change
encryption management or to rotate a key.
Cloud VM clusters: Under Exadata at Oracle Cloud, click Exadata VM Clusters. In the list of VM clusters,
locate the VM cluster you want to access and click its highlighted name to view the details page for the cluster.
DB systems: Under Bare Metal, VM, and Exadata, click DB Systems. In the list of DB systems, locate the
Exadata DB system you want to access and click its name to display its details page.
4. In the Databases section, click the name of the database for which you want to change encryption management or
to rotate a key to display its details page.
5. Click the More Actions drop-down.


6. Click Administer Encryption Key.


To rotate an encryption key on a database using customer-managed keys:
a. Click Rotate Encryption Key to display a confirmation dialog.
b. Click Rotate Key.
To change key management type from Oracle-managed keys to customer-managed keys:
a. Click Change Key Management Type.
b. Select Use customer-managed keys.
You must have a valid encryption key in Oracle Cloud Infrastructure Vault service and provide the
information in the subsequent steps. See Key and Secret Management Concepts on page 3989.
c. Choose a vault from the Vault in compartment drop-down. You can change the compartment by clicking the
CHANGE COMPARTMENT link.
d. Select an encryption key from the Master encryption key in compartment drop-down. You can change the
compartment containing the encryption key you want to use by clicking the CHANGE COMPARTMENT
link.
e. If you want to use an encryption key that you import into your vault, then select Choose the key version and
enter the OCID of the key you want to use in the Key version OCID field.
Note:

Changing key management causes the database to become briefly
unavailable.
Caution:

After changing key management to customer-managed keys, do not delete
the encryption key from the vault as this can cause the database to become
unavailable.
7. Click Apply.
On the database details page for this database, the Encryption section displays the encryption key name and the
encryption key OCID.
Using the API
For information about using the API and signing requests, see REST APIs on page 4409 and Security Credentials on
page 181. For information about SDKs, see Software Development Kits and Command Line Interface on page 4262.
Use these API operations to manage databases.
• ListDatabases
• GetDatabase
• CreateDatabase
• UpdateDatabase - Use this operation to move a database to another Database Home
• DeleteDatabase
For the complete list of APIs for the Database service, see Database Service API.
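For example, if you use the OCI CLI rather than calling the REST API directly, the ListDatabases operation corresponds to a command like the following. This is a sketch; <compartment_OCID> is a placeholder, and depending on your CLI version you may also need to scope the call with a DB system or VM cluster identifier option:

$ oci db database list --compartment-id <compartment_OCID>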
Changing the Database Passwords
The password that you specify in the Database Admin Password field when you create a new Exadata Cloud Service
instance or database is set as the password for the SYS, SYSTEM, TDE wallet, and PDB Admin credentials. Use the
following procedures if you need to change passwords for an existing database.
Note that if you are enabling Data Guard for a database, the SYS password and the TDE wallet password of the
primary and standby databases must all be the same.
To change the SYS password for an Exadata Cloud Service database
1. Log onto the cloud VM cluster or DB system host as opc.


2. Run the following command:

sudo dbaascli database changepassword --dbname <database_name>

To change the TDE wallet password for an Exadata Cloud Service database
1. Log onto the cloud VM cluster or DB system host as opc.
2. Run the following command:

sudo dbaascli tde changepassword --dbname <database_name>

Creating and Managing Exadata Databases Manually


Exadata Cloud Service instances include these command line tools for performing various tasks to manage individual
databases:
• dbaasapi - For adding and removing databases from the Exadata Cloud Service instance. See Using dbaasapi
on page 1314.
• dbaascli - For a variety of life-cycle and administration operations such as:
• Starting and stopping a database
• Starting and stopping the Oracle Net listener
• Viewing information about Oracle Homes
• Moving a database to another Oracle Home
• Deleting an unused Oracle Home
• Performing database configuration changes
• Managing Oracle Database software images
• Managing pluggable databases (PDBs)
• Performing database recovery
• Rotating the master encryption key
For details about how to use this CLI, see The dbaascli Utility.
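For example, several of the life-cycle operations in the preceding list map to dbaascli database subcommands that you
run as the root user on a compute node. The following lines are a sketch only: the status subcommand also appears
later in this guide, while the stop and start subcommands are assumed here to follow the same --dbname convention, so
verify the exact syntax with the dbaascli help on your own instance.

# dbaascli database status --dbname <database_name>

# dbaascli database stop --dbname <database_name>

# dbaascli database start --dbname <database_name>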
Using dbaasapi
You can use the dbaasapi command line utility to create and delete databases on an Exadata DB system. The
utility operates like a REST API. It reads a JSON request body and produces a JSON response body in an output file.
The utility is located in the /var/opt/oracle/dbaasapi/ directory on the compute nodes and must be run as
the root user.
To learn how to add or remove Exadata databases by using the Oracle Cloud Infrastructure Console or API instead,
see Creating and Managing Exadata Databases on page 1307.
Note:

• You must update the cloud-specific tooling on all the compute nodes in
your Exadata Cloud Service instance before performing the following
procedures. For more information, see Updating an Exadata Cloud
Service Instance on page 1275.
• Only one dbaasapi operation can execute at a given time. We
recommend that you check the status of an operation to ensure it
completed before you run another operation.
• Databases that you create by using dbaasapi are visible in the Console
and through the API and CLI only if you create the database across all
nodes in the cluster. However, it can take up to 5 hours before you see
them.


Prerequisites
If you plan to create a database and store its backups in the Oracle Cloud Infrastructure Object Storage, refer to
the prerequisites in Managing Exadata Database Backups on page 1321, and ensure that the system meets the
networking requirements for backing up to Object Storage. Review Create Database Parameters on page 1317 and
gather the information you'll need to supply in the input file you create for the dbaasapi operation.
Caution:
We recommend that you avoid specifying parameter values that include
confidential information when you use the dbaasapi commands.
Creating a Database
The following procedure creates a directory called dbinput, a sample input file called myinput.json, and a
sample output file called createdb.out.
1. SSH to a compute node in the Exadata DB system.

ssh -i <private_key_path> opc@<node_ip_address>


2. Log in as opc and then sudo to the root user.

login as: opc

[opc@dbsys ~]$ sudo su -


3. Make a directory for the input file and change to the directory.

[root@dbsys ~]# mkdir -p /home/oracle/dbinput


# cd /home/oracle/dbinput
4. Create the input file in the directory. The following sample file will create a database configured to store backups
in an existing bucket in Object Storage. For parameter descriptions, see Create Database Parameters on page
1317.

{
"object": "db",
"action": "start",
"operation": "createdb",
"params": {
"nodelist": "",
"dbname": "exadb",
"edition": "EE_EP",
"version": "12.1.0.2",
"ohome_name": "oradbhome1",
"adminPassword": "<password>",
"sid": "exadb",
"pdbName": "PDB1",
"charset": "AL32UTF8",
"ncharset": "AL16UTF16",
"backupDestination": "OSS",
"cloudStorageContainer":
"https://swiftobjectstorage.<region_name>.oraclecloud.com/v1/mycompany/
DBBackups",
"cloudStorageUser": "<[email protected]>",
"cloudStoragePwd": "<auth_token>"
},
"outputfile": "/home/oracle/createdb.out",
"FLAGS": ""
}


5. Run the utility and specify the input file.

[root@dbsys ~]# /var/opt/oracle/dbaasapi/dbaasapi -i myinput.json


6. Check the output file and note the ID.

[root@dbsys ~]# cat /home/oracle/createdb.out


{
"msg" : "",
"object" : "db",
"status" : "Starting",
"errmsg" : "",
"outputfile" : "/home/oracle/createdb.out",
"action" : "start",
"id" : "170",
"operation" : "createdb",
"logfile" : "/var/opt/oracle/log/gsa1/dbaasapi/db/createdb/1.log"
}
7. Create a JSON file to check the database creation status. Note the action of "status". Replace the ID and the
dbname with the values from the previous steps.

{
"object": "db",
"action": "status",
"operation": "createdb",
"id": 170,
"params": {
"dbname": "exadb"
},
"outputfile": "/home/oracle/createdb.out",
"FLAGS": ""
}
8. Run the utility with the status file as input and then check the utility output.
Rerun the status action regularly until the response indicates that the operation succeeded or failed.

[root@dbsys ~]# /var/opt/oracle/dbaasapi/dbaasapi -i db_status.json

[root@dbsys ~]# cat /home/oracle/createdb.out

{
"msg" : "Sync sqlnet file...[done]\\n##Done executing tde\\nWARN:
Could not register elogger_parameters: elogger.pm::_init: /var/opt/
oracle/dbaas_acfs/events does not exist\\n##Invoking assistant bkup\
\nUsing cmd : /var/opt/oracle/ocde/assistants/bkup/bkup -out /var/opt/
oracle/ocde/res/bkup.out -sid=\"exadb1\" -reco_grp=\"RECOC1\" -hostname=
\"ed1db01.data.customer1.oraclevcn.com\" -oracle_home=\"/u02/app/oracle/
product/12.1.0/dbhome_5\" -dbname=\"exadb\" -dbtype=\"exarac\" -exabm=
\"yes\" -edition=\"enterprise\" -bkup_cfg_files=\"no\" -acfs_vol_dir=\"/
var/opt/oracle/dbaas_acfs\" -bkup_oss_url=\"bkup_oss_url\" -bkup_oss_user=
\"bkup_oss_user\" -version=\"12102\" -oracle_base=\"/u02/app/oracle\" -
firstrun=\"no\" -action=\"config\" -bkup_oss=\"no\" -bkup_disk=\"no\" -
data_grp=\"DATAC1\" -action=config \\n\\n##Done executing bkup\\nWARN:
Could not register elogger_parameters: elogger.pm::_init: /var/opt/
oracle/dbaas_acfs/events does not existRemoved all entries from creg
file : /var/opt/oracle/creg/exadb.ini matching passwd or decrypt_key\
\n\\n#### Completed OCDE Successfully ####\\nWARN: Could not register
elogger_parameters: elogger.pm::_init: /var/opt/oracle/dbaas_acfs/events
does not exist",


"object" : "db",
"status" : "Success",
"errmsg" : "",
"outputfile" : "/home/oracle/createdb_exadb.out",
"action" : "start",
"id" : "170",
"operation" : "createdb",
"logfile" : "/var/opt/oracle/log/exadb/dbaasapi/db/createdb/170.log"
}
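Rather than rerunning the status action manually, you can poll it from a short shell script on the compute node. The
following sketch assumes the db_status.json input file and the /home/oracle/createdb.out output file created in the
previous steps; the terminal status strings matched below (Success, Fail, Error) are assumptions, so adjust them to
the values you actually observe.

#!/bin/bash
# Poll the createdb operation until the reported status looks terminal.
while true; do
  /var/opt/oracle/dbaasapi/dbaasapi -i db_status.json
  status=$(grep -o '"status" : "[^"]*"' /home/oracle/createdb.out | head -1)
  echo "$(date) ${status}"
  case "${status}" in
    *Success*|*Fail*|*Error*) break ;;
  esac
  sleep 60
done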

Create Database Parameters


Use the following parameters to create a database.

Parameter Description
object The value "db".
action The value "start".
operation The value "createdb".
nodelist The value "" (an empty string). The database will be
created across all nodes in the cluster.
Note:
If you specify only a subset of nodes, then the database you
create will not be visible in the Oracle Cloud Infrastructure
interfaces (Console, API, and CLI).

dbname The database name, in quotes.


edition The value "EE_EP". (Only Enterprise Edition -
Extreme Performance is supported.)
version The database version as 18.0.0.0, 12.2.0.1,
12.1.0.2, or 11.2.0.4, in quotes.
ohome_name The name of the Oracle Database Home to use for the
new database, in quotes.
adminPassword The administrator (SYS and SYSTEM) password to
use for the new database, in quotes. The password must
be nine to thirty characters and contain at least two
uppercase, two lowercase, two numeric, and two special
characters. The special characters must be _, #, or -.
sid The SID of the database, in quotes.
pdbName The name of the pluggable database, in quotes.
charset The database character set, in quotes. For allowed
values, see Allowed Create Database Charset Values on
page 1319

ncharset The database national character set. The value
AL16UTF16 or UTF8, in quotes.

backupDestination The database backup destination, in quotes. You can
configure the following backup destinations.
• NONE: No backup destination is configured.
• DISK: Configure database backups to the local disk Fast
Recovery Area.
• OSS: Configure database backups to an existing bucket in
the Oracle Cloud Infrastructure Object Storage service.
You must specify all the cloudStorage parameters.
• BOTH: Configure database backups to both local disk and
an existing bucket in Object Storage. You must specify
all the cloudStorage parameters.
For example:
"backupDestination":"BOTH"

cloudStorageContainer=<swift_url> Required if you specify a backup destination of OSS or
BOTH. The Object Storage URL, your Oracle Cloud
Infrastructure tenant, and an existing bucket in the object
store to use as the backup destination, in the following
format:
https://swiftobjectstorage.<region_name>.oraclecloud.com/v1/<tenant>/<bucket>
See Regions and Availability Domains on page 182 to
look up the region name string.
For example:
"cloudStorageContainer":"https://swiftobjectstorage.<region_name>.oraclecloud.com/v1/<company_name>/DBBackups"

cloudStorageUser=<user_name> Required if you specify a backup destination of OSS or
BOTH. The user name for the Oracle Cloud Infrastructure
user account, for example:
"cloudStorageUser":"[email protected]"
This is the user name you use to sign in to the Console.
The user name must be a member of the Administrators
group, as described in Prerequisites on page 1315.

cloudStoragePwd=<auth_token> Required if you specify a backup destination of OSS or
BOTH. The auth token generated by using the Console or
IAM API, in quotes, for example:
"cloudStoragePwd":"<auth_token>"
For more information, see Managing User Credentials on
page 2475.
This is not the password for the Oracle Cloud
Infrastructure user.

outputfile The absolute path for the output of the request,
for example, "outputfile":"/home/
oracle/createdb.out".
FLAGS The value "" (an empty string).

Allowed Create Database Charset Values


AL32UTF8, AR8ADOS710, AR8ADOS720, AR8APTEC715, AR8ARABICMACS, AR8ASMO8X,
AR8ISO8859P6, AR8MSWIN1256, AR8MUSSAD768, AR8NAFITHA711, AR8NAFITHA721,
AR8SAKHR706, AR8SAKHR707, AZ8ISO8859P9E, BG8MSWIN, BG8PC437S, BLT8CP921,
BLT8ISO8859P13, BLT8MSWIN1257, BLT8PC775, BN8BSCII, CDN8PC863, CEL8ISO8859P14,
CL8ISO8859P5, CL8ISOIR111, CL8KOI8R, CL8KOI8U, CL8MACCYRILLICS, CL8MSWIN1251,
EE8ISO8859P2, EE8MACCES, EE8MACCROATIANS, EE8MSWIN1250, EE8PC852, EL8DEC,
EL8ISO8859P7, EL8MACGREEKS, EL8MSWIN1253, EL8PC437S, EL8PC851, EL8PC869,
ET8MSWIN923, HU8ABMOD, HU8CWI2, IN8ISCII, IS8PC861, IW8ISO8859P8,
IW8MACHEBREWS, IW8MSWIN1255, IW8PC1507, JA16EUC, JA16EUCTILDE, JA16SJIS,
JA16SJISTILDE, JA16VMS, KO16KSCCS, KO16MSWIN949, LA8ISO6937, LA8PASSPORT,
LT8MSWIN921, LT8PC772, LT8PC774, LV8PC1117, LV8PC8LR, LV8RST104090, N8PC865,
NE8ISO8859P10, NEE8ISO8859P4, RU8BESTA, RU8PC855, RU8PC866, SE8ISO8859P3,
TH8MACTHAIS, TH8TISASCII, TR8DEC, TR8MACTURKISHS, TR8MSWIN1254, TR8PC857,
US7ASCII, US8PC437, UTF8, VN8MSWIN1258, VN8VN3, WE8DEC, WE8DG, WE8ISO8859P15,
WE8ISO8859P9, WE8MACROMAN8S, WE8MSWIN1252, WE8NCR4970, WE8NEXTSTEP, WE8PC850,
WE8PC858, WE8PC860, WE8ROMAN8, ZHS16CGB231280, ZHS16GBK, ZHT16BIG5, ZHT16CCDC,
ZHT16DBT, ZHT16HKSCS, ZHT16MSWIN950, ZHT32EUC, ZHT32SOPS, ZHT32TRIS
Deleting a Database
We recommend that you create a final backup before you delete any production (non-test) database. See Managing
Exadata Database Backups by Using bkup_api on page 1324 to learn how to back up an Exadata database.
1. SSH to a compute node (virtual machine) in the Exadata cloud VM cluster or DB system.

ssh -i <private_key_path> opc@<node_ip_address>


2. Log in as opc and then sudo to the root user.

login as: opc

[opc@dbsys ~]$ sudo su -


3. Make a directory for the input file and change to the directory.

[root@dbsys ~]# mkdir -p /home/oracle/dbinput

# cd /home/oracle/dbinput
4. Create the input file in the directory and specify the database name to delete and an output file. For more
information, see Delete Database Parameters on page 1321.

{
"object": "db",
"action": "start",
"operation": "deletedb",
"params": {
"dbname": "exadb"
},


"outputfile": "/home/oracle/delete_exadb.out",
"FLAGS": ""
}
5. Run the utility and specify the input file.

[root@dbsys ~]# /var/opt/oracle/dbaasapi/dbaasapi -i myinput.json


6. Check the output file and note the ID.

[root@ed1db01 ~]# cat /home/oracle/delete_exadb.out

{
"msg" : "",
"object" : "db",
"status" : "Starting",
"errmsg" : "",
"outputfile" : "/home/oracle/deletedb.out",
"action" : "start",
"id" : "17",
"operation" : "deletedb",
"logfile" : "/var/opt/oracle/log/exadb/dbaasapi/db/deletedb/17.log"
}
7. Create a JSON file to check the database deletion status. Note the action of "status" in the sample file below.
Replace the ID and the dbname with the values from the previous steps.

{
"object": "db",
"action": "status",
"operation": "deletedb",
"id": 17,
"params": {
"dbname": "exadb"
},
"outputfile": "/home/oracle/deletedb.out",
"FLAGS": ""
}
8. Run the utility with the status file as input and then check the utility output.
Rerun the status action regularly until the response indicates that the operation succeeded.

[root@dbsys ~]# /var/opt/oracle/dbaasapi/dbaasapi -i db_status.json

[root@dbsys ~]# cat /home/oracle/deletedb.out

{
"msg" : "Using cmd : su - root -c \"/var/opt/oracle/ocde/assistants/
dg/dgcc -dbname exadb -action delete\" \\n\\n##Done executing dg\
\nWARN: Could not register elogger_parameters: elogger.pm::_init: /
var/opt/oracle/dbaas_acfs/events does not exist\\n##Invoking assistant
bkup\\nUsing cmd : /var/opt/oracle/ocde/assistants/bkup/bkup -out /
var/opt/oracle/ocde/res/bkup.out -bkup_oss_url=\"bkup_oss_url\" -
bkup_daily_time=\"0:13\" -bkup_oss_user=\"bkup_oss_user\" -dbname=\"exadb
\" -dbtype=\"exarac\" -exabm=\"yes\" -firstrun=\"no\" -action=\"delete\" -
bkup_cfg_files=\"no\" -bkup_oss=\"no\" -bkup_disk=\"no\" -action=delete \
\n\\n##Done executing bkup\\nWARN: Could not register elogger_parameters:
elogger.pm::_init: /var/opt/oracle/dbaas_acfs/events does not exist\
\n##Invoking assistant dbda\\nUsing cmd : /var/opt/oracle/ocde/assistants/


dbda/dbda -out /var/opt/oracle/ocde/res/dbda.out -em=\"no\" -pga_target=
\"2000\" -dbtype=\"exarac\" -sga_target=\"2800\" -action=\"delete\" -
build=\"no\" -nid=\"no\" -dbname=\"exadb\" -action=delete \\n",
"object" : "db",
"status" : "InProgress",
"errmsg" : "",
"outputfile" : "/home/oracle/deletedb.out",
"action" : "start",
"id" : "17",
"operation" : "deletedb",
"logfile" : "/var/opt/oracle/log/exadb/dbaasapi/db/deletedb/17.log"
}

Delete Database Parameters


Use the following parameters to delete a database.

Parameter Description
object The value "db".
action The value "start".
operation The value "deletedb".
dbname The database name, in quotes.
outputfile The absolute path for the output of the request, for
example, "/home/oracle/deletedb.out".
FLAGS The value "" (an empty string).

Managing Exadata Database Backups


This topic explains how to work with Exadata database backups managed by Oracle Cloud Infrastructure. You do
this by using the Console or the API. (For unmanaged backups, see Managing Exadata Database Backups by Using
bkup_api on page 1324.)
Important:

If you previously used bkup_api to configure backups and then you switch
to using the Console or the API for backups:
• A new backup configuration is created and associated with your database.
This means that you can no longer rely on your previously configured
unmanaged backups to work.
• bkup_api uses cron jobs to schedule backups. These jobs are not
automatically removed when you switch to using managed backups.
Required IAM Policy
To use Oracle Cloud Infrastructure, you must be granted security access in a policy by an administrator. This access
is required whether you're using the Console or the REST API with an SDK, CLI, or other tool. If you get a message
that you don’t have permission or are unauthorized, verify with your administrator what type of access you have and
which compartment you should work in.
If you're new to policies, see Getting Started with Policies on page 2143 and Common Policies on page 2150.
Prerequisites
• Review the information and instructions in Configuring a Static Route for Accessing the Object Store on page
1255 and ensure that you configure the static route for the backup subnet on each compute node (for DB
systems) or virtual machine (for cloud VM clusters) in the Exadata Cloud Service instance.


• Your Exadata Cloud Service instance must have connectivity to the applicable Swift endpoint for Object Storage.
See https://www.oracle.com/cloud/storage/object-storage-faq.html for information about the Swift endpoints to
use.
Important:

To avoid backup failures, ensure that the database's archiving mode is set to
ARCHIVELOG (the default).
Using the Console
You can use the Console to enable automatic incremental backups, create full backups on demand, and view the list
of managed backups for a database. You can also use the Console to delete manual (on-demand) backups.
Note:

• The list of backups you see in the Console does not include any
unmanaged backups (backups created directly by using bkup_api).
• All backups are encrypted with the same master key used for Transparent
Data Encryption (TDE) wallet encryption.
• Backups for a particular database are listed on the details page for that
database. The Encryption Key column displays either Oracle-Managed
Key or a key name if you are using your own encryption keys to protect
the database. See Backing Up Vaults and Keys on page 4039 for more
information.
Caution:

Do not delete any necessary encryption keys from the vault because this
causes databases and backups protected by the key to become unavailable.
The database and infrastructure (the VM cluster or DB system) must be in an “Available” state for a backup operation
to run successfully. Oracle recommends that you avoid performing actions that could interfere with availability (such
as patching operations) while a backup operation is in progress. If an automatic backup operation fails, the Database
service retries the operation during the next day’s backup window. If an on-demand full backup fails, you can try the
operation again when the Exadata Cloud Service instance and database availability are restored.

Automatic Incremental Backups


When you enable the Automatic Backup feature, the service creates daily incremental backups of the database to
Object Storage. The first backup created is a level 0 backup. Then, level 1 backups are created every day until the
next weekend. Every weekend, the cycle repeats, starting with a new level 0 backup.

Backup Retention
If you choose to enable automatic backups, you can choose one of the following preset retention periods: 7 days, 15
days, 30 days, 45 days, or 60 days. The system automatically deletes your incremental backups at the end of your
chosen retention period.

Backup Scheduling
The automatic backup process starts at any time during your daily backup window. You can optionally specify a
2-hour scheduling window for your database during which the automatic backup process will begin. There are 12
scheduling windows to choose from, each starting on an even-numbered hour (for example, one window runs from
4:00-6:00 AM, and the next from 6:00-8:00 AM). Backup jobs do not necessarily complete within the scheduling
window.
The default backup window of 00:00 to 06:00 in the time zone of the Exadata Cloud Service instance's region is
assigned to your database if you do not specify a window. Note that the default backup scheduling window is six
hours long, while the windows you specify are two hours long.


Note:

• Data Guard - You can enable the Automatic Backup feature on a
database with the standby role in a Data Guard association. However,
automatic backups for that database will not be created until it assumes
the primary role.
• Retention Period Changes - If you shorten your database's automatic
backup retention period in the future, existing backups falling outside the
updated retention period are deleted by the system.
• Object Storage Costs - Automatic backups incur Object Storage usage
costs.

On-Demand Full Backups


You can create a full backup of your database at any time.

Standalone Backups
When you terminate an Exadata Cloud Service instance or a database, all of its resources are deleted, along with any
automatic backups. Full backups remain in Object Storage as standalone backups. You can use a standalone backup to
create a new database.
To configure automatic backups for a database
When you create an Exadata Cloud Service instance, you can optionally enable automatic backups for the initial
database. Use this procedure to enable or disable automatic backups after the database is created.
Note:
Databases in a security zone compartment must have automatic backups
enabled. See the Security Zone Policies topic for a full list of policies that
affect Database service resources.
1. Open the navigation menu. Under Oracle Database, click Bare Metal, VM, and Exadata.
2. Choose your Compartment.
3. Navigate to the cloud VM cluster or DB system containing the database you want to configure:
Cloud VM clusters (new resource model): Under Exadata at Oracle Cloud, click Exadata VM Clusters. In the
list of VM clusters, find the VM cluster you want to access and click its highlighted name to view the details page
for the cluster.
DB systems: Under Bare Metal, VM, and Exadata, click DB Systems. In the list of DB systems, find the Exadata
DB system you want to access, and then click its name to display details about it.
4. In the list of databases, find the database for which you want to enable or disable automatic backups, and click its
name to display database details. The details indicate whether automatic backups are enabled.
5. Click Configure Automatic Backups.
6. In the Configure Automatic Backups dialog, check or uncheck Enable Automatic Backup, as applicable. If you
are enabling automatic backups, you can choose one of the following preset retention periods: 7 days, 15 days, 30
days, 45 days, or 60 days. The default selection is 30 days.
7. Click Save Changes.
To create an on-demand full backup of a database
1. Open the navigation menu. Under Oracle Database, click Bare Metal, VM, and Exadata.
2. Choose your Compartment.


3. Navigate to the cloud VM cluster or DB system containing the database you want to back up:
Cloud VM clusters (new resource model): Under Exadata at Oracle Cloud, click Exadata VM Clusters. In the
list of VM clusters, find the VM cluster you want to access and click its highlighted name to view the details page
for the cluster.
DB systems: Under Bare Metal, VM, and Exadata, click DB Systems. In the list of DB systems, find the Exadata
DB system you want to access, and then click its name to display details about it.
4. In the list of databases, find the database for which you want to create an on-demand full backup and click its
name to display database details.
5. Under Resources, click Backups.
A list of backups is displayed.
6. Click Create Backup.
To delete full backups from Object Storage
Note:

You cannot explicitly delete automatic backups. Unless you terminate the
database, automatic backups remain in Object Storage for 30 days, after
which time they are automatically deleted.
1. Open the navigation menu. Under Oracle Database, click Bare Metal, VM, and Exadata.
2. Choose your Compartment.
3. Navigate to the cloud VM cluster or DB system containing the database backup you want to delete:
Cloud VM clusters (new resource model): Under Exadata at Oracle Cloud, click Exadata VM Clusters. In the
list of VM clusters, find the VM cluster you want to access and click its highlighted name to view the details page
for the cluster.
DB systems: Under Bare Metal, VM, and Exadata, click DB Systems. In the list of DB systems, find the Exadata
DB system you want to access, and then click its name to display details about it.
4. In the list of databases, find the database you are interested in and click its name to display database details.
5. Under Resources, click Backups.
A list of backups is displayed.
6. Click the Actions icon (three dots) for the backup you are interested in, and then click Delete.
7. Confirm when prompted.
Using the API
For information about using the API and signing requests, see REST APIs on page 4409 and Security Credentials on
page 181. For information about SDKs, see Software Development Kits and Command Line Interface on page 4262.
Use these API operations to manage database backups:
• ListBackups
• GetBackup
• CreateBackup
• DeleteBackup
• UpdateDatabase - To enable and disable automatic backups.
For the complete list of APIs for the Database service, see Database Service API.
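If you prefer the OCI CLI to direct REST calls, these operations are exposed under the oci db backup command group.
The commands below are a sketch only: they assume a configured CLI profile, the OCIDs and display name are
placeholders, and option names can vary between CLI versions, so confirm them with oci db backup create --help.

oci db backup create --database-id ocid1.database.oc1..<unique_ID> --display-name "exadb-ondemand-backup"

oci db backup list --database-id ocid1.database.oc1..<unique_ID>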
What's Next?
See Recovering an Exadata Database from Object Storage on page 1329.

Managing Exadata Database Backups by Using bkup_api


You can use Exadata's backup utility, bkup_api, to back up databases on an Exadata Cloud Service instance to an
existing bucket in the Oracle Cloud Infrastructure Object Storage service and to the local disk Fast Recovery Area.


For backups managed by Oracle Cloud Infrastructure, see Managing Exadata Database Backups on page 1321.
This topic explains how to:
• Create a backup configuration file that indicates the backup destination, when the backup should run, and how
long backups are retained. If the backup destination is Object Storage, the file also contains the credentials to
access the service.
• Associate the backup configuration file with a database. The database will be backed up as scheduled, or you can
create an on-demand backup.
Note:

You must update the cloud-specific tooling on all the compute nodes in your
Exadata Cloud Service instance before performing the following procedures.
For more information, see Updating an Exadata Cloud Service Instance on
page 1275.
Prerequisites
• The Exadata Cloud Service instance requires access to the Oracle Cloud Infrastructure Object Storage service.
Oracle recommends using a service gateway with the VCN to enable this access. For more information, see
Network Setup for Exadata Cloud Service Instances on page 1233. In that topic, pay particular attention to:
• Service Gateway for the VCN on page 1238
• Node Access to Object Storage: Static Route on page 1237
• Backup egress rule: Allows access to Object Storage on page 1243
• An existing Object Storage bucket to use as the backup destination. You can use the Console or the Object Storage
API to create the bucket. For more information, see Managing Buckets on page 3426.
• An auth token generated by Oracle Cloud Infrastructure. You can use the Console or the IAM API to generate the
password. For more information, see Working with Auth Tokens.
• The user name specified in the backup configuration file must have tenancy-level access to Object Storage. An
easy way to do this is to add the user name to the Administrators group. However, that allows access to all of the
cloud services. Instead, an administrator should create a policy like the following that limits access to only the
required resources in Object Storage for backing up and restoring the database:

Allow group <group_name> to manage objects in compartment <compartment_name> where target.bucket.name = '<bucket_name>'

Allow group <group_name> to read buckets in compartment <compartment_name>

For more information about adding a user to a group, see Managing Groups on page 2438. For more information
about policies, see Getting Started with Policies on page 2143.
Default Backup Configuration
The backup configuration follows a set of Oracle best-practice guidelines:
• Full (level 0) backup of the database followed by rolling incremental (level 1) backups on a seven-day cycle (a 30-
day cycle for the Object Storage destination).
• Full backup of selected system files.
• Automatic backups daily at a specific time set during the database deployment creation process.
Retention period:
• Both Object Storage and local storage: 30 days, with the 7 most recent days' backups available on local storage.
• Object Storage only: 30 days.
• Local storage only: Seven days.
Encryption:
• Both Object Storage and local storage: All backups to cloud storage are encrypted.
• Object Storage only: All backups to cloud storage are encrypted.


Managing Backups
To create a backup configuration file
Important:
The following procedure must be performed on the first compute node in
the Exadata Cloud Service instance's VM cluster or DB system resource. To
determine the first compute node, connect to any compute node as the grid
user and execute the following command:
$ $ORACLE_HOME/bin/olsnodes -n
The first node has the number 1 listed beside the node name.
1. SSH to the first compute node in the VM cluster or DB system resource.

ssh -i <private_key_path> opc@<node_1_ip_address>


2. Log in as opc and then sudo to the root user.

login as: opc

[opc@dbsys ~]$ sudo su -


3. Create a new backup configuration file in /var/opt/oracle/ocde/assistants/bkup/ as shown in the
sample configuration file below. This example uses the file name bkup.cfg, but you can provide your own file
name. The following file schedules a backup to both local storage and an existing bucket in Object Storage.
The parameters are described below this procedure.

[root@dbsys ~]# cd /var/opt/oracle/ocde/assistants/bkup/

vi bkup.cfg
bkup_disk=yes
bkup_oss=yes
bkup_oss_url=https://swiftobjectstorage.<region>.oraclecloud.com/v1/companyabc/DBBackups
bkup_oss_user=<oci_user_name>
bkup_oss_passwd=<password>
bkup_oss_recovery_window=7
bkup_daily_time=06:45
4. Change the permissions of the file.

[root@dbsys bkup]# chmod 600 bkup.cfg


5. Use the following command to install the backup configuration, configure the credentials, schedule the backup,
and associate the configuration with a database name.

[root@dbsys bkup]# ./bkup -cfg bkup.cfg -dbname=<database_name>

The backup is scheduled via cron and can be viewed at /etc/crontab.
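To confirm the schedule that the tooling registered, you can inspect the crontab directly. The exact format of the
entry depends on the tooling version, so treat the following as a sketch.

[root@dbsys bkup]# grep -i bkup /etc/crontab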


When the scheduled backup runs, you can check its progress with the following command.

[root@dbsys bkup]# /var/opt/oracle/bkup_api/bkup_api bkup_status

The backup configuration file parameters are described in the following table:

Parameter Description
bkup_disk=[yes|no] Whether to back up locally to disk (Fast Recovery Area).

bkup_oss=[yes|no] Whether to back up to Object Storage. If yes, you must
also provide the parameters bkup_oss_url, bkup_oss_user,
bkup_oss_passwd, and bkup_oss_recovery_window.
bkup_oss_url=<swift_url> Required if bkup_oss=yes.
The Object Storage URL including the tenant and bucket
you want to use. The URL is:
https://swiftobjectstorage.<region_name>.oraclecloud.com/v1/<tenant>/<bucket>
where <tenant> is the lowercase tenant name (even if
it contains uppercase characters) that you specify when
signing in to the Console and <bucket> is the name of the
existing bucket you want to use for backups.

bkup_oss_user=<oci_user_name> Required if bkup_oss=yes.
The user name for the Oracle Cloud Infrastructure user
account. This is the user name you use to sign in to the
Oracle Cloud Infrastructure Console.
For example, [email protected] for a local user or
<identity_provider>/[email protected] for
a federated user.
To determine which type of user you have, see the following
topics:
• Managing Users on page 2433 (for information on
local users)
• Federating with Identity Providers on page 2381 (for
information on federated users)
Note that the user must be a member of the Administrators
group, as described in Prerequisites on page 1325.

bkup_oss_passwd=<auth_token> Required if bkup_oss=yes.
The auth token generated by using the Console or IAM API,
as described in Prerequisites on page 1325.
This is not the password for the Oracle Cloud Infrastructure
user.

bkup_oss_recovery_window=n Required if bkup_oss=yes.
The number of days for which backups and archived redo
logs are maintained in the Object Storage bucket. Specify 1
to 30 days.

bkup_daily_time=hh:mm The time at which the daily backup is scheduled, specified
in hours and minutes (hh:mm), in 24-hour format.
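For comparison with the sample file above (which backs up to both destinations), the following is a hypothetical
bkup.cfg for backing up to Object Storage only, assembled from the parameters in this table. All values are
placeholders; substitute your own region, tenancy, bucket, user name, and auth token.

bkup_disk=no
bkup_oss=yes
bkup_oss_url=https://swiftobjectstorage.<region_name>.oraclecloud.com/v1/<tenant>/<bucket>
bkup_oss_user=<oci_user_name>
bkup_oss_passwd=<auth_token>
bkup_oss_recovery_window=30
bkup_daily_time=01:30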

To create an on-demand backup


You can use the bkup_api utility to create an on-demand backup of a database.


1. SSH to the first compute node in the Exadata VM cluster or DB system resource.

ssh -i <private_key_path> opc@<node_1_ip_address>

To determine the first compute node, connect to any compute node as the grid user and execute the following
command:

$ $ORACLE_HOME/bin/olsnodes -n

The first node has the number 1 listed beside the node name.
2. Log in as opc and then sudo to the root user.

login as: opc

[opc@dbsys ~]$ sudo su -


3. You can let the backup follow the current retention policy, or you can create a long-term backup that persists until
you delete it:
• To create a backup that follows the current retention policy, enter the following command:

# /var/opt/oracle/bkup_api/bkup_api bkup_start --dbname=<database_name>


• To create a long-term backup, enter the following command:

# /var/opt/oracle/bkup_api/bkup_api bkup_start --keep --dbname=<database_name>
4. Exit the root-user command shell and disconnect from the compute node:

# exit
$ exit

By default, the backup is given a timestamp-based tag. To specify a custom backup tag, add the --tag option to
the bkup_api command; for example, to create a long-term backup with the tag "monthly", enter the following
command:

# /var/opt/oracle/bkup_api/bkup_api bkup_start --keep --tag=monthly

After you enter a bkup_api bkup_start command, the bkup_api utility starts the backup process, which runs
in the background. To check the progress of the backup process, enter the following command:

# /var/opt/oracle/bkup_api/bkup_api bkup_status --dbname=<database_name>

To remove the backup configuration


A backup configuration can contain the credentials to access the Object Storage bucket. For this reason, you might
want to remove the file after successfully configuring the backup.

[root@dbsys bkup]# rm bkup.cfg

To delete a local backup


To delete a backup of a database deployment on the Exadata Cloud Service instance, use the bkup_api utility.


1. Connect to the first compute node in your Exadata VM cluster or DB system resource as the opc user.
To determine the first compute node, connect to any compute node as the grid user and execute the following
command:

$ $ORACLE_HOME/bin/olsnodes -n

The first node has the number 1 listed beside the node name.
2. Start a root-user command shell:

$ sudo -s
#
3. List the available backups:

# /var/opt/oracle/bkup_api/bkup_api recover_list --dbname=<database_name>

where dbname is the database name for the database that you want to act on.
A list of available backups is displayed.
4. Delete the backup you want:

# /var/opt/oracle/bkup_api/bkup_api bkup_delete --bkup=<backup-tag> --dbname=<database_name>

where backup-tag is the tag of the backup you want to delete.


5. Exit the root-user command shell:

# exit
$

To delete a backup in Object Storage


Use the RMAN delete backup command to delete a backup from the Object Store.
What Next?
If you used Object Storage as a backup destination, you can display the backup files in your bucket in the Console on
the Storage page, by selecting Object Storage.
You can manually restore a database backup by using the RMAN utility. For information about using RMAN, see the
Oracle Database Backup and Recovery User's Guide for Release 18.1, 12.2, 12.1, or 11.2.

Recovering an Exadata Database from Object Storage


This topic explains how to recover an Exadata database from a backup stored in Object Storage by using the
Console or the API. The Object Storage service is a secure, scalable, on-demand storage solution in Oracle Cloud
Infrastructure. For information on backing up your databases to Object Storage, see Managing Exadata Database
Backups on page 1321.
Required IAM Policy
To use Oracle Cloud Infrastructure, you must be granted security access in a policy by an administrator. This access
is required whether you're using the Console or the REST API with an SDK, CLI, or other tool. If you get a message
that you don’t have permission or are unauthorized, verify with your administrator what type of access you have and
which compartment you should work in.
If you're new to policies, see Getting Started with Policies on page 2143 and Common Policies on page 2150.
Using the Console
You can use the Console to restore the database from a backup in the Object Storage that was created by using the
Console or the API. You can restore to the last known good state of the database, or you can specify a point in time or
an existing System Change Number (SCN).


Note:

The list of backups you see in the Console does not include any unmanaged
backups (backups created directly by using bkup_api).

Restoring an Existing Database


To restore a database
1. Open the navigation menu. Under Oracle Database, click Bare Metal, VM, and Exadata.
2. Choose your Compartment.
3. Navigate to the cloud VM cluster or DB system containing the database you want to restore:
Cloud VM clusters (new resource model): Under Exadata at Oracle Cloud, click Exadata VM Clusters. In the
list of VM clusters, find the VM cluster you want to access and click its highlighted name to view the details page
for the cluster.
DB systems: Under Bare Metal, VM, and Exadata, click DB Systems. In the list of DB systems, find the Exadata
DB system you want to access, and then click its name to display details about it.
4. In the list of databases, find the database you want to restore, and click its name to display details about it.
5. Click Restore.
6. Select one of the following options, and click Restore Database:
• Restore to the latest: Restores the database to the last known good state with the least possible data loss.
• Restore to the timestamp: Restores the database to the timestamp specified.
• Restore to System Change Number (SCN): Restores the database using the SCN specified. This SCN must
be valid.
Tip:

You can determine the SCN number to use either by accessing and
querying your database host, or by accessing any online or archived logs.
7. Confirm when prompted.
If the restore operation fails, the database will be in a "Restore Failed" state. You can try restoring again using
a different restore option. However, Oracle recommends that you review the RMAN logs on the host and fix any
issues before reattempting to restore the database. These log files can be found in subdirectories of the /var/
opt/oracle/log directory.
To restore a database using a specific backup from Object Storage
1. Open the navigation menu. Under Oracle Database, click Bare Metal, VM, and Exadata.
2. Choose your Compartment.
3. Navigate to the cloud VM cluster or DB system containing the database you want to restore:
Cloud VM clusters (new resource model): Under Exadata at Oracle Cloud, click Exadata VM Clusters. In the
list of VM clusters, find the VM cluster you want to access and click its highlighted name to view the details page
for the cluster.
DB systems: Under Bare Metal, VM, and Exadata, click DB Systems. In the list of DB systems, find the Exadata
DB system you want to access, and then click its name to display details about it.
4. In the list of databases, find the database you want to restore, and click its name to display details about it.
5. Under Resources, click Backups.
A list of backups is displayed.
6. Click the Actions icon (three dots) for the backup you are interested in, and then click Restore.
7. Confirm when prompted.
Using the API
For information about using the API and signing requests, see REST APIs on page 4409 and Security Credentials on
page 181. For information about SDKs, see Software Development Kits and Command Line Interface on page 4262.


Use these API operations to recover a database:


• ListBackups
• GetBackup
• RestoreDatabase
For the complete list of APIs for the Database service, see Database Service API.
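If you use the OCI CLI, the RestoreDatabase operation is available as a database subcommand. The lines below are a
sketch only: the subcommand name oci db database restore and the option names shown are assumptions that can vary
between CLI versions, so verify them with the CLI help before use. The OCID, timestamp, and SCN are placeholders,
and you supply only one of the three restore options per request.

oci db database restore --database-id ocid1.database.oc1..<unique_ID> --latest true

oci db database restore --database-id ocid1.database.oc1..<unique_ID> --timestamp 2021-03-01T00:00:00.000Z

oci db database restore --database-id ocid1.database.oc1..<unique_ID> --database-scn <scn>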

Recovering an Exadata Database by Using RMAN


If you backed up your Exadata database by using bkup_api, you can manually restore that database backup by
using the Oracle Recovery Manager (RMAN) utility. For information about using RMAN, see the Oracle Database
Backup and Recovery User's Guide for Release 18.1, 12.2, 12.1, or 11.2.
To restore an Exadata database from a managed backup, see Recovering an Exadata Database from Object Storage on
page 1329.

Creating Oracle Database Homes on an Exadata Cloud Service Instance


You can add Oracle Database Homes (referred to as "Database Homes" in Oracle Cloud Infrastructure) to an existing
Exadata Cloud Service instance by using the Oracle Cloud Infrastructure Console, the API, or the CLI.
A Database Home is a directory location on the Exadata database compute nodes that contains Oracle Database
software binary files. Compute nodes are also referred to as virtual machines in the Oracle Cloud Infrastructure
Console.
After you provision the Exadata Cloud Service instance, you can create one or more Database Homes in the instance,
and add databases to any of the Database Homes.
You can also add and remove Database Homes, and perform other management tasks on a Database Home by using
the dbaascli utility. For information and instructions, see Managing Oracle Database Homes Manually on page
1334.
Required IAM Policy
To use Oracle Cloud Infrastructure, you must be granted security access in a policy by an administrator. This access
is required whether you're using the Console or the REST API with an SDK, CLI, or other tool. If you get a message
that you don’t have permission or are unauthorized, verify with your administrator what type of access you have and
which compartment you should work in.
For administrators: The policy in Let database admins manage Oracle Cloud database systems on page 2158 lets the
specified group do everything with databases and related Database resources.
If you're new to policies, see Getting Started with Policies on page 2143 and Common Policies on page 2150. If
you want to dig deeper into writing policies for databases, see Details for the Database Service on page 2251.
Using the Console
To create a new Database Home in an existing Exadata Cloud Service instance
1. Open the navigation menu. Under Oracle Database, click Bare Metal, VM, and Exadata.
2. Choose your Compartment.
3. Navigate to the cloud VM cluster or DB system you want to create the new Database Home on:
Cloud VM clusters (new resource model): Under Exadata at Oracle Cloud, click Exadata VM Clusters. In the
list of VM clusters, find the VM cluster you want to access and click its highlighted name to view the details page
for the cluster.
DB systems: Under Bare Metal, VM, and Exadata, click DB Systems. In the list of DB systems, find the Exadata
DB system you want to access, and then click its name to display details about it.
4. Under Resources, click Database Homes.
A list of Database Homes is displayed.
5. Click Create Database Home.


6. In the Create Database Home dialog, enter the following:


• Database Home display name: The display name for the Database Home. Avoid entering confidential
information.
• Database image: Determines what Oracle Database version is used for the database. You can mix database
versions on the DB system, but not editions. By default, the latest Oracle-published database software image is
selected.
Click Change Database Image to use an older Oracle-published image or a custom database software image
that you have created in advance, then select an Image Type:
• Oracle Provided Database Software Images: These images contain generally available versions of
Oracle Database software.
• Custom Database Software Images: These images are created by your organization and contain
customized configurations of software updates and patches. Use the Select a compartment and Select a
Database version selectors to limit the list of custom database software images to a specific compartment
or Oracle Database software major release version.
After choosing a software image, click Select to return to the Create Database dialog.
• Click Show Advanced Options to specify advanced options for the Database Home.

Tags: If you have permissions to create a resource, then you also have permissions to apply free-form tags
to that resource. To apply a defined tag, you must have permissions to use the tag namespace. For more
information about tagging, see Resource Tags on page 213. If you are not sure whether to apply tags, then
skip this option (you can apply tags later) or ask your administrator.
7. Click Create.
When the Database Home creation is complete, the status changes from Provisioning to Available.
Using the API
For information about using the API and signing requests, see REST APIs on page 4409 and Security Credentials on
page 181. For information about SDKs, see Software Development Kits and Command Line Interface on page 4262.
Use the CreateDbHome API operation to create Database Homes.
For the complete list of APIs for the Database service, see Database Service API.
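With the OCI CLI, CreateDbHome maps to the oci db db-home create command. The following line is a sketch only: it
assumes a configured CLI profile, the OCID and names are placeholders, and the flags (for example, whether you
target a DB system or a VM cluster) can vary between CLI versions, so confirm them with oci db db-home create --help.

oci db db-home create --db-system-id ocid1.dbsystem.oc1..<unique_ID> --db-version 19.0.0.0 --display-name "DBHome-19c"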

Managing Oracle Database Homes on an Exadata Cloud Service Instance


You can delete or view information about Oracle Database Homes (referred to as "Database Homes" in Oracle Cloud
Infrastructure) by using the Oracle Cloud Infrastructure Console, the API, or the CLI.
For information on how to perform these tasks manually, see Managing Oracle Database Homes Manually on page
1334.
Required IAM Policy
To use Oracle Cloud Infrastructure, you must be granted security access in a policy by an administrator. This access
is required whether you're using the Console or the REST API with an SDK, CLI, or other tool. If you get a message
that you don’t have permission or are unauthorized, verify with your administrator what type of access you have and
which compartment you should work in.
For administrators: The policy in Let database admins manage Oracle Cloud database systems on page 2158 lets the
specified group do everything with databases and related Database resources.
If you're new to policies, see Getting Started with Policies on page 2143 and Common Policies on page 2150. If
you want to dig deeper into writing policies for databases, see Details for the Database Service on page 2251.
Using the Console
To view information about a Database Home
1. Open the navigation menu. Under Oracle Database, click Bare Metal, VM, and Exadata.
2. Choose your Compartment.


3. Navigate to the cloud VM cluster or DB system containing the Database Home:


Cloud VM clusters (new resource model): Under Exadata at Oracle Cloud, click Exadata VM Clusters. In the
list of VM clusters, find the VM cluster you want to access and click its highlighted name to view the details page
for the cluster.
DB systems: Under Bare Metal, VM, and Exadata, click DB Systems. In the list of DB systems, find the Exadata
DB system you want to access, and then click its name to display details about it.
4. On the DB System Details page, under Resources, click Database Homes.
5. In the list of Database Homes, find the Database Home you are interested in, and then click its name to display
details about it.
To delete a Database Home
You cannot delete a Database Home that contains databases. You must first terminate the databases to empty the
Database Home. See To terminate a database on page 1311 to learn how to terminate a database.
1. Open the navigation menu. Under Oracle Database, click Bare Metal, VM, and Exadata.
2. Choose your Compartment.
3. Navigate to the cloud VM cluster or DB system containing the Database Home you want to delete:
Cloud VM clusters (new resource model): Under Exadata at Oracle Cloud, click Exadata VM Clusters. In the
list of VM clusters, find the VM cluster you want to access and click its highlighted name to view the details page
for the cluster.
DB systems: Under Bare Metal, VM, and Exadata, click DB Systems. In the list of DB systems, find the Exadata
DB system you want to access, and then click its name to display details about it.
4. On the DB System Details page, under Resources, click Database Homes.
5. In the list of Database Homes, find the Database Home you want to delete, and then click its name to display
details about it.
6. On the Database Home Details page, click Delete.
If the Database Home contains databases, you will not be able to proceed. You must cancel the deletion, empty the
Database Home as applicable, and then retry the deletion.
To manage tags for your Database Home
1. Open the navigation menu. Under Oracle Database, click Bare Metal, VM, and Exadata.
2. Choose your Compartment.
3. Navigate to the cloud VM cluster or DB system containing the Database Home:
Cloud VM clusters (new resource model): Under Exadata at Oracle Cloud, click Exadata VM Clusters. In the
list of VM clusters, find the VM cluster you want to access and click its highlighted name to view the details page
for the cluster.
DB systems: Under Bare Metal, VM, and Exadata, click DB Systems. In the list of DB systems, find the Exadata
DB system you want to access, and then click its name to display details about it.
4. Under Resources, click Database Homes.
5. In the list of Database Homes, find the Database Home you want to administer.
6. Click the Actions icon (three dots) on the row listing the Database Home, and then click Add Tags.
For more information, see Resource Tags on page 213.
Using the API
For information about using the API and signing requests, see REST APIs on page 4409 and Security Credentials on
page 181. For information about SDKs, see Software Development Kits and Command Line Interface on page 4262.
Use these API operations to manage Database Homes:
• ListDbHomes
• GetDbHome
• DeleteDbHome


For the complete list of APIs for the Database service, see Database Service API.
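The corresponding OCI CLI subcommands offer a convenient alternative to raw REST calls. The lines below are a sketch
with placeholder OCIDs; flag names can vary between CLI versions, so confirm them with the CLI help. As noted above,
deleting a Database Home fails while it still contains databases.

oci db db-home list --compartment-id ocid1.compartment.oc1..<unique_ID> --db-system-id ocid1.dbsystem.oc1..<unique_ID>

oci db db-home delete --db-home-id ocid1.dbhome.oc1..<unique_ID>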

Managing Oracle Database Homes Manually


This topic describes how to manage Oracle Database Homes (also called "Oracle Homes" or "Database Homes") by
using the dbaascli utility.
An Oracle Database Home is a directory location on the compute nodes that contains Oracle Database binaries.
Exadata Cloud Service instances enable multiple databases to share a set of Oracle Database binaries in a shared
Oracle Home directory location.
For information on how to manage Database Homes by using the Oracle Cloud Infrastructure Console, the API, or the
CLI, see Managing Oracle Database Homes on an Exadata Cloud Service Instance on page 1332.
Viewing Information About Oracle Homes
You can view information about Oracle Home directory locations by using the dbhome info subcommand of the
dbaascli utility as follows.
1. Connect to a compute node as the opc user.
For detailed instructions, see Connecting to an Exadata Cloud Service Instance on page 1270.
2. Start a root-user command shell:

$ sudo -s
#
3. Execute the dbaascli command with the dbhome info subcommand:

# dbaascli dbhome info


4. When prompted, press Enter to view information about all Oracle Homes registered in your Exadata Cloud
Service instance, or specify an Oracle Home name to view information only about that Oracle Home.
5. Exit the root-user command shell:

# exit
$

Moving a Database to Another Oracle Home


Moving a database to another Oracle Home enables you to consolidate existing Oracle Homes and manage the storage
that they consume. You can move a database to another Oracle Home by using the database move subcommand
of the dbaascli utility as follows.
1. Connect to a compute node as the opc user.
For detailed instructions, see Connecting to an Exadata Cloud Service Instance on page 1270.
2. Start a root-user command shell:

$ sudo -s
#
3. Ensure that all database instances associated with the database deployment are up and running.

# dbaascli database status --dbname <dbname>

In the preceding command, <dbname> specifies the name of the database that you want to check.
Restart any database instances that are not running and open.


4. Execute the dbaascli command with the database move subcommand:

# dbaascli database move --dbname <dbname> --ohome <oracle_home>

In the preceding command:


• <dbname> — specifies the name of the database that you want to move.
• <oracle_home> — specifies the path to an existing Oracle Home directory location, which you want the
specified database to use.
When performing a move operation to an Oracle Home with a different patch level, if the database is part of a
Data Guard association, then ensure that you move the standby database to the new patchset before you move the
primary database.
5. Exit the root-user command shell:

# exit
$

Creating an Oracle Home


You can create an Oracle Home directory location and software installation, without creating a database, by using the
dbhome create subcommand of the dbaascli utility as follows.
1. Connect to a compute node as the opc user.
For detailed instructions, see Connecting to an Exadata Cloud Service Instance on page 1270.
2. Start a root-user command shell:

$ sudo -s
#
3. Run the dbaascli command with the dbhome create subcommand:

# dbaascli dbhome create --version <software_version>

In the preceding command, <software_version> specifies an Oracle Database software version. For example,
19000, 18000, 12201, 12102, or 11204. The latest available bundle patch for the specified software version is
automatically used.
To see information about Oracle Database software images that are available in your Exadata Cloud Service
instance, including software version and bundle patch details, use the dbaascli dbimage list command.
When prompted, type yes to confirm that the installation is based on a local software image.
4. Exit the root-user command shell:

# exit
$

Deleting an Oracle Home


If an Oracle Home directory does not support any databases, you can delete it by using the dbhome purge
subcommand of the dbaascli utility as follows.
1. Connect to a compute node as the opc user.
For detailed instructions, see Connecting to an Exadata Cloud Service Instance on page 1270.
2. Start a root-user command shell:

$ sudo -s
#


3. Execute the dbaascli command with the dbhome purge subcommand:

# dbaascli dbhome purge


4. When prompted, enter:
• 1 — if you want to specify the Oracle Home name for the location being purged.
• 2 — if you want to specify the Oracle Home directory path for the location being purged.
5. When next prompted, enter the Oracle Home name or directory path for the location being purged.
If your entries are valid and the Oracle Home is not associated with a database, then the Oracle binaries are
removed from the Oracle Home directory location and the associated metadata is removed from the system.
6. Exit the root-user command shell:

# exit
$

Monitoring and Managing Exadata Storage Servers with ExaCLI


The ExaCLI command line utility allows you to perform monitoring and management functions on Exadata storage
servers in an Exadata Cloud Service instance. ExaCLI offers a subset of the commands found in the on-premises
Exadata command line utility CellCLI. The utility runs on the database compute nodes in the Exadata Cloud Service
instance.
See the ExaCLI Command Reference on page 1338 in this topic to learn which commands are available.
Username and Password
You need a username and password to connect to the Exadata Storage Server. On Exadata Cloud@Customer, the
preconfigured user is cloud_user_clustername, where clustername is the name of the virtual machine
(VM) cluster that is being used. You can determine the name of the VM cluster by running the following command as
the grid user on any cluster node:

$ crsctl get cluster name

The password for cloud_user_clustername is initially set to the administration password that you specify
when creating the starter database deployment on the VM cluster.
Command Syntax
For Exadata Storage Server targets, ExaCLI supports the same command syntax as CellCLI. Construct your
commands using the syntax that follows. Note that the syntax example assumes you are the opc user on a compute
node.

exacli -c [username@]remotehost[:port] [-l username] [--xml] [--cookie-jar filename] [-e {command | 'command; command' | @batchfile}]

Example 1
This example shows the user on an Exadata compute node issuing the command to log in to ExaCLI and start an
interactive ExaCLI session on a storage server:

[opc@exacs-node1 ~]$ exacli -l cloud_user_clustername -c 192.168.136.7

See Connecting to a Storage Server with ExaCLI for information on determining your storage server's IP address.
Once logged in, run additional commands as follows:

exacli cloud_user_clustername@192.168.136.7> LIST DATABASE


ASM
HRCDB

Example 2
This example shows a single command issued on a compute node that does the following:
• Connects to a storage server
• Performs a LIST action
• Exits the session (specified with the "-e" flag)

[opc@exacs-node1 ~]$ exacli -l cloud_user_clustername -c 192.168.136.7 --xml --cookie-jar -e list griddisk detail

Options

-c [username@]remotehost[:port] or
--connect [username@]remotehost[:port]
    Specifies the remote node to which you want to connect. ExaCLI prompts for the user name if it is not
    specified.

-l username or
--login-name username
    Specifies the user name to log in to the remote node. The preconfigured user is cloud_user_clustername.

--xml
    Displays the output in XML format.

--cookie-jar [filename]
    Specifies the filename of the cookie jar to use. If filename is not specified, the cookie is stored in a default
    cookie jar located at HOME/.exacli/cookiejar, where HOME is the home directory of the OS user running the
    ExaCLI command.
    The presence of a valid cookie allows the ExaCLI user to execute commands without needing to log in again in
    subsequent ExaCLI sessions.

-e command or
-e 'command[; command]' or
-e @batchFile
    Specifies either the ExaCLI commands to run or a batch file. ExaCLI exits after running the commands.
    If specifying multiple commands to run, enclose the commands in single quotes to prevent the shell from
    interpreting the semicolon.
    Omit this option to start an interactive ExaCLI session.

--cert-proxy proxy[:port]
    Specifies the proxy server to use when downloading certificates. If port is omitted, port 80 is used by default.

-n or
--no-prompt
    Suppresses prompting for user input.

Usage Notes
• Notes for the --cookie-jar option:
• The user name and password are sent to the remote node for authentication. On successful authentication, the
remote node issues a cookie (the login credentials) that is stored in the specified filename on the database
node. If filename is not specified, the cookie is stored in a default cookie jar located at HOME/.exacli/
cookiejar, where HOME is the home directory of the operating system user running the ExaCLI command.
For the opc user, the home is /home/opc.
• The operating system user running the ExaCLI command is the owner of the cookie-jar file.
• A cookie jar can contain multiple cookies from multiple users on multiple nodes in parallel sessions.
• Cookies are invalidated after 24 hours.
• If the cookie is not found or is no longer valid, ExaCLI prompts for the password. The new cookie is stored in
the cookie jar identified by filename, or the default cookie jar if filename is not specified.
• Even without the --cookie-jar option, ExaCLI still checks for cookies from the default cookie jar.
However, if the cookie does not exist or is no longer valid, the new cookie will not be stored in the default
cookie jar if the --cookie-jar option is not specified.
• Notes for the -e option:
• ExaCLI exits after running the commands.
• If specifying multiple commands to run, be sure to enclose the commands in single quotes to prevent the shell
from interpreting the semi-colon.
• The batch file is a text file that contains one or more ExaCLI commands to run.
• Notes for the -n (--no-prompt) option:
• If ExaCLI needs additional information from the user, for example, if ExaCLI needs to prompt the user for a
password (possibly because there were no valid cookies in the cookie-jar) or to prompt the user to confirm the
remote node’s identity, then ExaCLI prints an error message and exits.
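
As a hedged illustration of combining the -e and --cookie-jar options described above, the following hypothetical session stores two LIST commands in a batch file and runs them in a single non-interactive call. The file name griddisk_report.txt is a placeholder, the IP address is the storage server used in the earlier examples, and the attribute lists are examples only.

[opc@exacs-node1 ~]$ cat > griddisk_report.txt <<EOF
LIST CELL ATTRIBUTES name,status
LIST GRIDDISK ATTRIBUTES name,size
EOF
[opc@exacs-node1 ~]$ exacli -l cloud_user_clustername -c 192.168.136.7 --cookie-jar -e @griddisk_report.txt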
Connecting to a Storage Server with ExaCLI
To use ExaCLI on storage servers, you will need to know your target storage server's IP address. If you do not know
the IP address of the node you want to connect to, you can find it by viewing the contents of the cellip.ora file.
The following example illustrates how to do so on the UNIX command line for a quarter rack system. (Note that a
quarter rack has three storage cells, and each cell has two connections, so a total of six IP addresses are shown.)

[root@exacs-node1 ~]# cat /etc/oracle/cell/network-config/cellip.ora
cell="192.168.136.5;192.168.136.6"
cell="192.168.136.7;192.168.136.8"
cell="192.168.136.9;192.168.136.10"

If you are connecting to a storage cell for the first time using ExaCLI, you may be prompted to accept an
SSL certificate. The ExaCLI output in this case will look like the following:

[opc@exacs-node1 ~]$ exacli -l cloud_user_clustername -c 192.168.136.7 --cookie-jar
No cookies found for cloud_user_clustername@192.168.136.7
Password: *********
EXA-30016: This connection is not secure. You have asked ExaCLI to connect
to cell 192.168.136.7 securely. The identity of 192.168.136.7 cannot be
verified.
Got certificate from server:
C=US,ST=California,L=Redwood City,O=Oracle Corporation,OU=Oracle
Exadata,CN=ed1cl03clu01-priv2.usdc2.oraclecloud.com
Do you want to accept and store this certificate? (Press y/n)

Accept the self-signed Oracle certificate by pressing "y" to continue using ExaCLI.
ExaCLI Command Reference
You can execute various ExaCLI commands to monitor and manage Exadata Storage Servers associated with your
Oracle Cloud Infrastructure Exadata DB system. ExaCLI allows you to get up-to-date, real-time information about
your Exadata Cloud Service.
Use the LIST command with the following services and objects:

• ACTIVEREQUEST - Lists all active requests that are currently being served by the storage servers.
• ALERTDEFINITION - Lists all possible alerts and their sources for storage servers.
• ALERTHISTORY - Lists all alerts that have been issued for the storage servers.
• CELL - Used to list the details of a specific attribute of the storage servers or storage cells. The syntax is as
follows: LIST CELL ATTRIBUTES A,B,C, with A, B, and C being attributes. To see all cell attributes, use
the LIST CELL ATTRIBUTES ALL command.
• CELLDISK - Lists the attributes of the cell disks in the storage servers. Use the following syntax to list the cell
disk details: LIST CELLDISK cell_disk_name DETAIL.
• DATABASE - Lists details of the databases. Uses the regular LIST command syntax: LIST DATABASE and
LIST DATABASE DETAIL. You can also use this command to show an individual attribute with the following
syntax: LIST DATABASE ATTRIBUTES NAME.
• FLASHCACHE - Lists the details of the Exadata system's flash cache. For this object, you can use the following
syntax patterns: LIST FLASHCACHE DETAIL or LIST FLASHCACHE ATTRIBUTES attribute_name.
• FLASHCACHECONTENT - Lists the details of all objects in the flash cache, or the details of a specified
object ID. To list all the details of all objects, use LIST FLASHCACHECONTENT DETAIL. To list
details for a specific object, use a where clause as follows: LIST FLASHCACHECONTENT WHERE
objectNumber=12345 DETAIL
Note: To find the object ID of a specific object, you can query user_objects using the object's name to get the
data_object_id of a partition or table.
• FLASHLOG - Lists the attributes for the Oracle Exadata Smart Flash Log.
• GRIDDISK - Lists the details of a particular grid disk. The syntax is similar to the CELLDISK command syntax.
To view all attributes: LIST GRIDDISK grid_disk_name DETAIL. To view specified attributes of the
grid disk: LIST GRIDDISK grid_disk_name ATTRIBUTES size, name.
• IBPORT - Lists details of the InfiniBand ports. Syntax is LIST IBPORT DETAIL.
• IORMPROFILE - Lists any IORM profiles that have been set on the storage servers. You can also refer back
to the profile attribute on the DATABASE object if a database has an IORM profile on it. Syntax is LIST
IORMPROFILE.
• LUN - The LUN (logical unit number) object returns the number and the detail of the physical disks in the
storage servers. List the LUNs of the disks with LIST LUN. List the details of each LUN with LIST LUN
lun_number DETAIL.
• METRICCURRENT - Lists the current metrics for a particular object type. Syntax is LIST
METRICCURRENT WHERE objectType = 'CELLDISK'. This command also allows for sorting and
results limits as seen in the following example:
LIST METRICCURRENT attributes name, metricObjectName ORDER BY
metricObjectName asc, name desc LIMIT 5
• METRICDEFINITION - Lists metric definitions for the object that you can then get details for. With the
command LIST metricDefinition WHERE objectType=cell, you can get all the metrics for that
object type. You can then use the metric definition object again to get details for one of those specific metrics just
listed: LIST metricDefinition WHERE name= IORM_MODE DETAIL.
• METRICHISTORY - List metrics over a specified period of time. For example, with the command LIST
METRICHISTORY WHERE ageInMinutes < 30, you can list all the metrics collected over the
past 30 minutes. You can also use the predicate collectionTime to set a range from a specific time. Use
collectionTime as shown in the following example: LIST METRICHISTORY WHERE collectionTime >
'2018-04-01T21:12:00-10:00'. The metric history object can also be used to see a specific metric using
the object’s name (for example, LIST METRICHISTORY CT_FD_IO_RQ_SM) or with a "where" clause to
get objects with similar attributes like name (for example, LIST METRICHISTORY WHERE name like
'CT_.*').
• OFFLOADGROUP - Lists the attributes for the offload groups that are running on your storage servers. You can
list all details for all groups with LIST OFFLOADGROUP DETAIL, or list the attributes for a specific group,
as shown in the following example: LIST OFFLOADGROUP offloadgroup4. List specific attributes with
LIST OFFLOADGROUP ATTRIBUTES name.
• PHYSICALDISK - Lists all physical disks. Use the results of LIST PHYSICALDISK to identify a specific disk
for further investigation, then list the details of that disk using the command as follows: LIST PHYSICALDISK
20:10 DETAIL. To list the details of flash disks, use the command as follows: LIST PHYSICALDISK
FLASH_1_0 DETAIL.
• PLUGGABLEDATABASE - Lists all PDBs. View the details of a specific PDB with LIST
PLUGGABLEDATABASE pdb_name.
• QUARANTINE - Lists all SQL statements that you prevented from using Smart Scans. The syntax is LIST
QUARANTINE DETAIL. You can also use a "where" clause on any of the available attributes.
Use the ExaCLI CREATE, ALTER, DROP, and LIST commands to act on the following Exadata Storage Server
objects:
• DIAGPACK - Lists the diagnostic packages and their status in your Exadata system. The syntax is LIST
DIAGPACK [DETAIL], with DETAIL being an optional attribute. Use CREATE DIAGPACK with the
packStartTime attribute to gather logs and trace files into a single compressed file for downloading, as in the
following example: CREATE DIAGPACK packStartTime=2019_12_15T00_00_00. You can also use
the value "now" with packStartTime: CREATE DIAGPACK packStartTime=now
To download a diagnostic package, use DOWNLOAD DIAGPACK package_name local_directory. For
example, the following command downloads a diagnostic package to the /tmp directory: DOWNLOAD DIAGPACK
cfclcx2647_diag_2018_06_03T00_44_24_1 /tmp
• IORMPLAN - You can list, create, alter, and drop IORM plans using ExaCLI. To see the details of all IORM
plans, use LIST IORMPLAN DETAIL. You can also use the command to create and alter IORM plans, and to
apply plans to storage servers.
Example query: finding the object_id value of an object

select object_name, data_object_id from user_objects where object_name = 'BIG_CENSUS';

OBJECT_NAME                    DATA_OBJECT_ID
---------------------------------------------
BIG_CENSUS                              29152
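
As a hedged follow-up, you could then pass the data_object_id returned by this query to the FLASHCACHECONTENT object described earlier; the value 29152 below is simply the hypothetical result shown above.

LIST FLASHCACHECONTENT WHERE objectNumber=29152 DETAIL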

Using Oracle Data Guard with Exadata Cloud Service


Note:

This procedure is only applicable to Exadata Cloud Service instances. To
use Oracle Data Guard with bare metal and virtual machine DB systems, see
Using Oracle Data Guard on page 1462.

This topic explains how to use the Oracle Cloud Infrastructure (OCI) Console or the API to manage Oracle Data
Guard associations in your Exadata Cloud Service instances. This topic does not apply to Data Guard configurations
created by accessing the host directly and setting up Oracle Data Guard manually.
When you use the Oracle Cloud Infrastructure Console or API to enable Oracle Data Guard for an Exadata Cloud
Service database:
• The standby database is a physical standby.
• The peer databases (primary and standby) are:
• in the same compartment
• both Exadata system shapes
• identical database versions
• You are limited to one standby database for each primary database.
• You must use Oracle-managed encryption keys.
To configure Oracle Data Guard between on-premises and OCI Exadata Cloud Service instances, or to configure your
database with multiple standbys, you must access the database host directly and set up Oracle Data Guard manually.
For complete information on Oracle Data Guard, see the Data Guard Concepts and Administration documentation in
the Oracle Help Center.

Required IAM Service Policy


To use Oracle Cloud Infrastructure, you must be granted security access in a policy by an administrator. This access
is required whether you're using the Console or the REST API with an SDK, CLI, or other tool. If you get a message
that you don’t have permission or are unauthorized, verify with your administrator what type of access you have and
which compartment you should work in.
If you are new to policies, see Getting Started with Policies on page 2143 and Common Policies on page 2150.
Prerequisites
An Exadata Cloud Service Oracle Data Guard implementation requires two existing Exadata Cloud Service instances:
one containing an existing database that is to be duplicated by Data Guard, and one that will house the new standby
database created by Data Guard. When enabling Data Guard, you can create a new Database Home on the standby Exadata
instance to house the new standby database during the enable Data Guard operation. Alternatively, you can choose to
provision the standby database in an existing Database Home on the standby instance. For information on creating the
required resources for the standby system, see the following topics:
• To create a cloud Exadata infrastructure resource on page 1246
• To create a cloud VM cluster resource on page 1247
• To create a new Database Home in an existing Exadata Cloud Service instance on page 1331
You can use a custom database software image that contains the necessary patches for your databases when
creating a Database Home on either the primary or the standby Exadata instance. See Oracle Database Software
Images on page 1568 for information on working with custom Oracle Database software images.
If you choose to provision a standby database in an existing Database Home, ensure that the target Database Home on
the standby instance has all required patches that are in use for the primary database before you provision the standby
database. See the following topics for more information on patching an existing Database Home:
• To patch the Oracle Database software in a Database Home (cloud VM cluster) on page 1287
• To patch the Oracle Database software in a Database Home (DB system) on page 1286

Network Requirements
Ensure that your environment meets the following network requirements:
• If you want to configure Oracle Data Guard across regions, then you must configure remote virtual cloud network
(VCN) peering between the primary and standby databases. Networking is configured on the cloud VM cluster
resource for systems using the new Exadata resource model, and on the DB system resource for systems using the
old resource model. See Remote VCN Peering (Across Regions) on page 3308.
For Exadata Data Guard configurations, OCI supports the use of hub-and-spoke network topology for the VCNs
within each region. This means that the primary and standby databases can each utilize a "spoke" VCN that passes
network traffic to the "hub" VCN that has a remote peering connection. See Spoke-to-Spoke: Remote Peering
with Transit Routing for information on setting up this network topology.
• To set up Oracle Data Guard within a single region, both Exadata Cloud Service instances must use the same
VCN. When setting up Data Guard within the same region, Oracle recommends that the instance containing
the standby database be in a different availability domain from the instance containing the primary database to
improve availability and disaster recovery.
• Configure the ingress and egress security rules for the subnets of both Exadata Cloud Service instances in the
Oracle Data Guard association to enable TCP traffic to move between the applicable ports. Ensure that the rules
you create are stateful (the default).
For example, if the subnet of the primary Exadata Cloud Service instance uses the source CIDR 10.0.0.0/24 and
the subnet of the standby instance uses the source CIDR 10.0.1.0/24, then create rules as shown in the subsequent
example.
Note:

The egress rules in the example show how to enable TCP traffic only for
port 1521, which is a minimum requirement for Oracle Data Guard to
work. If TCP traffic is already enabled for all destinations (0.0.0.0/0) on
all of your outgoing ports, then you need not explicitly add these specific
egress rules.
Security Rules for Subnet of Primary Exadata Cloud Service instance

Ingress Rules:

Stateless: No
Source: 10.0.1.0/24
IP Protocol: TCP
Source Port Range: All
Destination Port Range: 1521
Allows: TCP traffic for ports: 1521

Egress Rules:

Stateless: No
Destination: 10.0.1.0/24
IP Protocol: TCP
Source Port Range: All
Destination Port Range: 1521
Allows: TCP traffic for ports: 1521

Security Rules for Subnet of Standby Exadata Cloud Service instance

Ingress Rules:

Stateless: No
Source: 10.0.0.0/24
IP Protocol: TCP
Source Port Range: All
Destination Port Range: 1521
Allows: TCP traffic for ports: 1521

Egress Rules:

Stateless: No
Destination: 10.0.0.0/24
IP Protocol: TCP
Source Port Range: All
Destination Port Range: 1521
Allows: TCP traffic for ports: 1521

For information about creating and editing rules, see Security Lists on page 2876.
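
As a minimal sketch only, the rules above could also be created with the OCI command line interface. The OCIDs and display name below are placeholders, and the JSON field names are assumptions based on the Networking API models; verify them with oci network security-list create --help before relying on this example, which shows the rules for the primary instance's subnet.

$ oci network security-list create \
    --compartment-id <compartment_ocid> \
    --vcn-id <vcn_ocid> \
    --display-name "dataguard-primary-seclist" \
    --ingress-security-rules '[{"protocol":"6","source":"10.0.1.0/24","isStateless":false,"tcpOptions":{"destinationPortRange":{"min":1521,"max":1521}}}]' \
    --egress-security-rules '[{"protocol":"6","destination":"10.0.1.0/24","isStateless":false,"tcpOptions":{"destinationPortRange":{"min":1521,"max":1521}}}]'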

Password Requirements
For Oracle Data Guard operations to work, the SYS password and the TDE wallet password of the primary and
standby databases must all be the same. If you change any one of these passwords, then you must update the rest
of the passwords to match. See Changing the Database Passwords on page 1313 to learn how to change the SYS
password or the TDE wallet password.
If you make any change to the TDE wallet (such as adding a master key for a new PDB or changing the wallet
password), then you must copy the wallet from the primary database to the standby database so that Oracle Data
Guard can continue to operate. For Oracle Database versions prior to Oracle Database 12c release 2 (12.2), if you
change the SYS password on one of the peers, then you must manually sync the password file between the DB
systems.

Working with Oracle Data Guard


Oracle Data Guard ensures high availability, data protection, and disaster recovery for enterprise data. The Oracle
Cloud Infrastructure Database Data Guard implementation requires two databases: one in a primary role and one in
a standby role. The two databases compose an Oracle Data Guard association. Most of your applications access the
primary database. The standby database is a transactionally consistent copy of the primary database.
Oracle Data Guard maintains the standby database by transmitting and applying redo data from the primary database.
If the primary database becomes unavailable, then you can use Oracle Data Guard to switch over or fail over the standby
database to the primary role.

Switchover
A switchover reverses the primary and standby database roles. Each database continues to participate in the Oracle
Data Guard association in its new role. A switchover ensures no data loss. Performing planned maintenance on an
Exadata Cloud Service instance with an Oracle Data Guard association is typically done by switching the primary
database to the standby role, performing maintenance on the standby database, and then switching the standby
database back to the primary role.

Failover
A failover transitions the standby database into the primary role after the existing primary database fails or becomes
unreachable. A failover might result in some data loss when you use Maximum Performance protection mode.

Reinstate
Reinstates a database into the standby role in an Oracle Data Guard association. You can use the reinstate command
to return a failed database into service after correcting the cause of failure.
Note:

You cannot terminate a primary database that has an Oracle Data Guard
association with a peer (standby) database. Delete the standby database first.
Alternatively, you can switch over the primary database to the standby role,
and then terminate it.
You cannot terminate an Exadata cloud VM cluster or DB system that
includes Oracle Data Guard-enabled databases. You must first remove the
Oracle Data Guard association by terminating the standby database.
Using the Console
Use the Console to enable an Oracle Data Guard association between databases, change the role of a database in an
Oracle Data Guard association using either a switchover or a failover operation, and reinstate a failed database.
When you enable Oracle Data Guard, a separate Oracle Data Guard association is created for the primary and the
standby database.
To enable Oracle Data Guard on an Exadata Cloud Service instance
Note:

If you use customer-managed encryption keys to protect this database, then
enabling Oracle Data Guard for this database is not available.
1. Open the navigation menu. Under Oracle Database, click Bare Metal, VM, and Exadata.
2. Choose the Compartment that contains the Exadata Cloud Service instance with the database for which you want
to enable Oracle Data Guard.

3. Navigate to the cloud VM cluster or DB system that contains a database you want to assume the primary role:
Cloud VM clusters (new resource model): Under Exadata at Oracle Cloud, click Exadata VM Clusters. In the
list of VM clusters, find the VM cluster you want to access and click its highlighted name to view the details page
for the cluster.
DB systems: Under Bare Metal, VM, and Exadata, click DB Systems. In the list of DB systems, find the Exadata
DB system you want to access, and then click its name to display details about it.
4. On the VM cluster or DB system details page, in the Databases section, click the name of the database you want
to make primary.
5. On the Database Details page, in the Resources section, click Data Guard Associations.
6. In the Data Guard Associations section, click Enable Data Guard.
7. On the Enable Data Guard page, configure the Oracle Data Guard association.
• The settings in the Data Guard association details section are read only and cannot be changed.
• Protection mode: The protection mode used for this Oracle Data Guard association is set to Maximum
Performance.
• Transport type: The redo transport type used for this Oracle Data Guard association is set to Async
(asynchronous).
• In the Select peer DB system section, provide the following information for the standby database to obtain a
list of available DB systems in which to locate the standby database:
• Region: Select a region where you want to locate the standby database. The region where the primary
database is located is selected, by default. You can choose to locate the standby database in a different
region. The hint text associated with this field tells you in which region the primary database is located.
• Availability domain: Select an availability domain for the standby database. The hint text associated with
this field tells you in which availability domain the primary database is located.
• Shape: Select the shape of the standby DB system. The shape must be an Exadata DB system shape.
• Data Guard peer resource type: Select DB System or VM Cluster.
• Select a DB system or cloud VM cluster from the drop-down list.
• In the Choose Database Home section, choose one of the following:
• Select an existing Database Home: If you use this option, select a home from the Database Home
display name drop-down list.
• Create a new Database Home: Use this option to provision a new Database Home for your Data Guard
peer database.
Click Change Database Image to use an older Oracle-published image or a custom database software
image that you have created in advance, then select an Image Type:
• Oracle Provided Database Software Images: These images contain generally available versions of
Oracle Database software.
• Custom Database Software Images: These images are created by your organization and contain
customized configurations of software updates and patches. Use the Select a compartment and
Select a Database version selectors to limit the list of custom database software images to a specific
compartment or Oracle Database software major release version.
• In the Configure standby database section, enter the database administrator password of the primary
database in the Database password field. Use this same database administrator password for the standby
database.
Note:

The administrator password and the TDE wallet password must be
identical. If the passwords are not identical, then follow the instructions
in Changing the Database Passwords on page 1313 to ensure that they
are.

8. Click Enable Data Guard.


When you create the association, the details for a database and its peer display their respective roles as Primary or
Standby.
To perform a database switchover
You initiate a switchover operation by using the Data Guard association of the primary database.
1. Open the navigation menu. Under Oracle Database, click Bare Metal, VM, and Exadata.
2. Choose the Compartment that contains the Exadata Cloud Service instance with the database for which you want
to enable Oracle Data Guard.
3. Navigate to the cloud VM cluster or DB system that contains the Data Guard association:
Cloud VM clusters (new resource model): Under Exadata at Oracle Cloud, click Exadata VM Clusters. In the
list of VM clusters, find the VM cluster you want to access and click its highlighted name to view the details page
for the cluster.
DB systems: Under Bare Metal, VM, and Exadata, click DB Systems. In the list of DB systems, find the Exadata
DB system you want to access, and then click its name to display details about it.
4. Under Resources, click Data Guard Associations.
5. For the Data Guard association on which you want to perform a switchover, click the Actions icon (three dots),
and then click Switchover.
6. In the Switchover Database dialog box, enter the database admin password, and then click OK.
This database should now assume the role of the standby, and the standby should assume the role of the primary in
the Data Guard association.
To perform a database failover
You initiate a failover operation by using the Data Guard association of the standby database.
1. Open the navigation menu. Under Oracle Database, click Bare Metal, VM, and Exadata.
2. Choose the Compartment that contains the Exadata Cloud Service instance with the database for which you want
to enable Oracle Data Guard.
3. Navigate to the cloud VM cluster or DB system that contains the Data Guard association:
Cloud VM clusters (new resource model): Under Exadata at Oracle Cloud, click Exadata VM Clusters. In the
list of VM clusters, find the VM cluster you want to access and click its highlighted name to view the details page
for the cluster.
DB systems: Under Bare Metal, VM, and Exadata, click DB Systems. In the list of DB systems, find the Exadata
DB system you want to access, and then click its name to display details about it.
4. Under Resources, click Data Guard Associations.
5. For the Data Guard association on which you want to perform a failover, click Failover.
6. In the Failover Database dialog box, enter the database admin password, and then click OK.
This database should now assume the role of the primary, and the old primary's role should display as Disabled
Standby.
To reinstate a database
After you fail over a primary database to its standby, the standby assumes the primary role and the old primary is
identified as a disabled standby. After you correct the cause of failure, you can reinstate the failed database as a
functioning standby for the current primary by using its Data Guard association.
1. Open the navigation menu. Under Oracle Database, click Bare Metal, VM, and Exadata.
2. Choose the Compartment that contains the Exadata Cloud Service instance with the database for which you want
to enable Oracle Data Guard.

3. Navigate to the cloud VM cluster or DB system that contains the Data Guard association:
Cloud VM clusters (new resource model): Under Exadata at Oracle Cloud, click Exadata VM Clusters. In the
list of VM clusters, find the VM cluster you want to access and click its highlighted name to view the details page
for the cluster.
DB systems: Under Bare Metal, VM, and Exadata, click DB Systems. In the list of DB systems, find the Exadata
DB system you want to access, and then click its name to display details about it.
4. Under Resources, click Data Guard Associations.
5. For the Data Guard association on which you want to reinstate this database, click the Actions icon (three dots),
and then click Reinstate.
6. In the Reinstate Database dialog box, enter the database admin password, and then click OK.
This database should now be reinstated as the standby in the Data Guard association.
To terminate a Data Guard association on an Exadata Cloud Service instance
On an Exadata Cloud Service instance, you remove a Data Guard association by terminating the standby database.
1. Open the navigation menu. Under Oracle Database, click Bare Metal, VM, and Exadata.
2. Choose the Compartment that contains the Exadata Cloud Service instance with the database for which you want
to enable Oracle Data Guard.
3. Navigate to the cloud VM cluster or DB system that contains the standby database:
Cloud VM clusters (new resource model): Under Exadata at Oracle Cloud, click Exadata VM Clusters. In the
list of VM clusters, find the VM cluster you want to access and click its highlighted name to view the details page
for the cluster.
DB systems: Under Bare Metal, VM, and Exadata, click DB Systems. In the list of DB systems, find the Exadata
DB system you want to access, and then click its name to display details about it.
4. For the standby database you want to terminate, click the Actions icon (three dots), and then click Terminate.
5. In the Terminate Database dialog box, enter the name of the database, and then click OK.
Using the API
For information about using the API and signing requests, see REST APIs on page 4409 and Security Credentials on
page 181. For information about SDKs, see Software Development Kits and Command Line Interface on page 4262.
Use these API operations to manage Data Guard associations on an Exadata Cloud Service instance:
• CreateDataGuardAssociation
• GetDataGuardAssociation
• ListDataGuardAssociations
• SwitchoverDataGuardAssociation
• FailoverDataGuardAssociation
• ReinstateDataGuardAssociation
• DeleteDatabase - To terminate an Exadata Cloud Service instance Data Guard association, you delete the standby
database.
For the complete list of APIs for the Database service, see Database Service API.
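
The same operations are also exposed through the OCI command line interface under the data-guard-association resource. The following sketch assumes the subcommand and parameter names shown here; confirm them with oci db data-guard-association --help. The OCID values and the password are placeholders.

$ oci db data-guard-association list --database-id <database_ocid>
$ oci db data-guard-association switchover \
    --database-id <database_ocid> \
    --data-guard-association-id <data_guard_association_ocid> \
    --database-admin-password <sys_password>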

Configuring Oracle Database Features for Exadata Cloud Service


This topic describes how to configure Oracle Multitenant, tablespace encryption, and Huge Pages for use with your
Exadata Cloud Service instance.
Using Oracle Multitenant on an Exadata Cloud Service Instance
When you create an Exadata Cloud Service instance that uses Oracle Database 12c or later, an Oracle Multitenant
environment is created.
The multitenant architecture enables an Oracle database to function as a multitenant container database (CDB) that
includes zero, one, or many pluggable databases (PDBs). A PDB is a portable collection of schemas, schema objects,
and non-schema objects that appears to an Oracle Net Services client as a non-CDB. All Oracle databases using
versions earlier than Oracle Database 12c are non-CDBs.
To use Oracle Transparent Data Encryption (TDE) in a pluggable database (PDB), you must create and activate a
master encryption key for the PDB.
In a multitenant environment, each PDB has its own master encryption key which is stored in a single keystore used
by all containers.
You must export and import the master encryption key for any encrypted PDBs you plug into your Exadata Cloud
Service instance CDB.
If your source PDB is encrypted, you must export the master encryption key and then import it.
You can export and import all of the TDE master encryption keys that belong to the PDB by exporting and importing
the TDE master encryption keys from within a PDB. Export and import of TDE master encryption keys support the
PDB unplug and plug operations. During a PDB unplug and plug, all of the TDE master encryption keys that belong
to a PDB, as well as the metadata, are involved.
See "Exporting and Importing TDE Master Encryption Keys for a PDB" in Oracle Database Advanced Security
Guide for Release 19, 18, 12.2 or 12.1.
See "ADMINISTER KEY MANAGEMENT" in Oracle Database SQL Language Reference for Release 19, 18, 12.2
or 12.1.
To determine if you need to create and activate an encryption key for the PDB
1. Invoke SQL*Plus and log in to the database as the SYS user with SYSDBA privileges.
2. Set the container to the PDB:

SQL> ALTER SESSION SET CONTAINER = pdb;


3. Query V$ENCRYPTION_WALLET as follows:

SQL> SELECT wrl_parameter, status, wallet_type FROM v$encryption_wallet;

If the STATUS column contains a value of OPEN_NO_MASTER_KEY, you need to create and activate the master
encryption key.
To create and activate the master encryption key in a PDB
1. Set the container to the PDB:

SQL> ALTER SESSION SET CONTAINER = pdb;

2. Create and activate a master encryption key in the PDB by executing the following command:

SQL> ADMINISTER KEY MANAGEMENT SET KEY USING TAG 'tag' FORCE KEYSTORE
IDENTIFIED BY keystore-password WITH BACKUP USING 'backup_identifier';

In the previous command:


• keystore-password is the keystore password. By default, the keystore password is set to the value of the
administration password that is specified when the database is created.
• The optional USING TAG 'tag' clause can be used to associate a tag with the new master encryption key.
• The WITH BACKUP clause, and the optional USING 'backup_identifier' clause, can be used to
create a backup of the keystore before the new master encryption key is created.
See also ADMINISTER KEY MANAGEMENT in Oracle Database SQL Language Reference for Release 19, 18 or
12.2.
Note:

To enable key management operations while the keystore is in use, Oracle
Database 12c Release 2, and later, includes the FORCE KEYSTORE option
to the ADMINISTER KEY MANAGEMENT command. This option is also
available for Oracle Database 12c Release 1 with the October 2017, or
later, bundle patch.
If your Oracle Database 12c Release 1 database does not have the October
2017, or later, bundle patch installed, you can perform the following
alternative steps:
a. Close the keystore.
b. Open the password-based keystore.
c. Create and activate a master encryption key in the PDB by using
ADMINISTER KEY MANAGEMENT without the FORCE KEYSTORE
option.
d. Update the auto-login keystore by using ADMINISTER KEY
MANAGEMENT with the CREATE AUTO_LOGIN KEYSTORE FROM
KEYSTORE option.
3. Query V$ENCRYPTION_WALLET again to verify that the STATUS column is set to OPEN:

SQL> SELECT wrl_parameter, status, wallet_type FROM v$encryption_wallet;


4. Query V$INSTANCE and take note of the value in the HOST_NAME column, which identifies the database server
that contains the newly updated keystore files:

SQL> SELECT host_name FROM v$instance;

5. Copy the updated keystore files to all of the other database servers.
To distribute the updated keystore, you must perform the following actions on each database server that does not
contain the updated keystore files:
a. Connect to the root container and query V$ENCRYPTION_WALLET. Take note of the keystore location
contained in the WRL_PARAMETER column:

SQL> SELECT wrl_parameter, status FROM v$encryption_wallet;


b. Copy the updated keystore files.
You must copy all of the updated keystore files from a database server that is already updated. Use the
keystore location observed in the WRL_PARAMETER column of V$ENCRYPTION_WALLET.
c. Open the updated keystore:

SQL> ADMINISTER KEY MANAGEMENT SET KEYSTORE open FORCE KEYSTORE IDENTIFIED
BY keystore-password CONTAINER=all;

Note:

To enable key management operations while the keystore is in use, Oracle
Database 12c Release 2, and later, includes the FORCE KEYSTORE option
to the ADMINISTER KEY MANAGEMENT command. This option is also
available for Oracle Database 12c Release 1 with the October 2017, or
later, bundle patch.
If your Oracle Database 12c Release 1 database does not have the October
2017, or later, bundle patch installed, you can perform the following
alternative steps:
a. Close the keystore before copying the updated keystore files.
b. Copy the updated keystore files.
c. Open the updated keystore by using ADMINISTER KEY
MANAGEMENT without the FORCE KEYSTORE option.
6. Query GV$ENCRYPTION_WALLET to verify that the STATUS column is set to OPEN across all of the database
instances:

SQL> SELECT wrl_parameter, status, wallet_type FROM gv$encryption_wallet;

To export and import a master encryption key


1. Export the master encryption key.
a. Invoke SQL*Plus and log in to the PDB.
b. Execute the following command:

SQL> ADMINISTER KEY MANAGEMENT EXPORT ENCRYPTION KEYS WITH SECRET
"secret" TO 'filename' IDENTIFIED BY keystore-password;
2. Import the master encryption key.
a. Invoke SQL*Plus and log in to the PDB.
b. Execute the following command:

SQL> ADMINISTER KEY MANAGEMENT IMPORT ENCRYPTION KEYS WITH SECRET
"secret" FROM 'filename' IDENTIFIED BY keystore-password;

Managing Tablespace Encryption


By default, all new tablespaces that you create in an Exadata database are encrypted.

However, the tablespaces that are initially created when the database is created may not be encrypted by default.
• For databases that use Oracle Database 12c Release 2 or later, only the USERS tablespaces initially created when
the database was created are encrypted. No other tablespaces are encrypted including the non-USERS tablespaces
in:
• The root container (CDB$ROOT).
• The seed pluggable database (PDB$SEED).
• The first PDB, which is created when the database is created.
• For databases that use Oracle Database 12c Release 1 or Oracle Database 11g, none of the tablespaces initially
created when the database was created are encrypted.
For further information about the implementation of tablespace encryption in Exadata, along with how it impacts
various deployment scenarios, see Oracle Database Tablespace Encryption Behavior in Oracle Cloud.

Creating Encrypted Tablespaces


User-created tablespaces are encrypted by default.
By default, any new tablespaces created by using the SQL CREATE TABLESPACE command are encrypted with the
AES128 encryption algorithm. You do not need to include the USING 'encrypt_algorithm' clause to use the
default encryption.
You can specify another supported algorithm by including the USING 'encrypt_algorithm' clause in the CREATE
TABLESPACE command. Supported algorithms are AES256, AES192, AES128, and 3DES168.
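
For example, a hedged sketch of creating a tablespace that uses AES256 instead of the default algorithm follows. The tablespace name and data file size are illustrative, and the statement assumes Oracle Managed Files (as used with ASM on Exadata) so that no data file name needs to be specified.

SQL> CREATE TABLESPACE app_data DATAFILE SIZE 10G
  ENCRYPTION USING 'AES256' DEFAULT STORAGE (ENCRYPT);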

Managing Tablespace Encryption


You can manage the software keystore (known as an Oracle wallet in Oracle Database 11g) and the master encryption
key, and you can control whether encryption is enabled by default.

Managing the Master Encryption Key


Tablespace encryption uses a two-tiered, key-based architecture to transparently encrypt (and decrypt) tablespaces.
The master encryption key is stored in an external security module (software keystore). This master encryption key is
used to encrypt the tablespace encryption key, which in turn is used to encrypt and decrypt data in the tablespace.
When a database is created on an Exadata Cloud Service instance, a local software keystore is created. The keystore
is local to the compute nodes and is protected by the administration password specified during the database creation
process. The auto-login software keystore is automatically opened when the database is started.
You can change (rotate) the master encryption key by using the ADMINISTER KEY MANAGEMENT SQL
statement. For example:

SQL> ADMINISTER KEY MANAGEMENT SET ENCRYPTION KEY USING TAG 'tag'
IDENTIFIED BY password WITH BACKUP USING 'backup';

keystore altered.

See "Managing the TDE Master Encryption Key" in Oracle Database Advanced Security Guide for Release 19,
18, 12.2 or 12.1 or "Setting and Resetting the Master Encryption Key" in Oracle Database Advanced Security
Administrator's Guide for Release 11.2.

Controlling Default Tablespace Encryption


The ENCRYPT_NEW_TABLESPACES initialization parameter controls the default encryption of new tablespaces. In
Exadata databases, this parameter is set to CLOUD_ONLY by default.
Values of this parameter are as follows.

ALWAYS
    During creation, tablespaces are transparently encrypted with the AES128 algorithm unless a different
    algorithm is specified in the ENCRYPTION clause.

CLOUD_ONLY
    Tablespaces created in an Exadata database are transparently encrypted with the AES128 algorithm unless a
    different algorithm is specified in the ENCRYPTION clause. For non-cloud databases, tablespaces are encrypted
    only if the ENCRYPTION clause is specified. CLOUD_ONLY is the default value.

DDL
    During creation, tablespaces are not transparently encrypted by default, and are encrypted only if the
    ENCRYPTION clause is specified.

Note:

With Oracle Database 12c Release 2 (12.2), or later, you can no longer
create an unencrypted tablespace in an Exadata database. An error message
is returned if you set ENCRYPT_NEW_TABLESPACES to DDL and issue a
CREATE TABLESPACE command without specifying an ENCRYPTION
clause.
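
To check the current setting, or to change it, a hedged sketch follows. The ALWAYS value is only an example, and whether a dynamic change with SCOPE=BOTH is appropriate depends on your own change-control practices.

SQL> SHOW PARAMETER encrypt_new_tablespaces

SQL> ALTER SYSTEM SET encrypt_new_tablespaces = ALWAYS SCOPE=BOTH SID='*';
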
Managing Huge Pages
Huge Pages provide considerable performance benefits for Oracle Database on systems with large amounts of
memory. Oracle Database on an Exadata Cloud Service instance hosted in Oracle Cloud Infrastructure provides
configuration settings that make use of Huge Pages by default; however, you can make manual adjustments to
optimize the configuration of Huge Pages.
Huge Pages is a feature integrated into the Linux kernel 2.6. Enabling Huge Pages makes it possible for the operating
system to support large memory pages. Using Huge Pages can improve system performance by reducing the amount
of system CPU and memory resources required to manage Linux page tables, which store the mapping between
virtual and physical memory addresses. For Oracle Databases, using Huge Pages can drastically reduce the number of
page table entries associated with the System Global Area (SGA).
On Exadata Cloud Service instances hosted in Oracle Cloud Infrastructure, a standard page is 4 KB, while a Huge
Page is 2 MB by default. Therefore, an Oracle Database on an Exadata DB system with a 50 GB SGA requires
13,107,200 standard pages to house the SGA, compared with only 25,600 Huge Pages. The result is much smaller
page tables, which require less memory to store and fewer CPU resources to access and manage.

Adjusting the Configuration of Huge Pages


The configuration of Huge Pages for Oracle Database is a two-step process:
• At the operating system level, the overall amount of memory allocated to Huge Pages is controlled by the
vm.nr_hugepages entry in the /etc/sysctl.conf file. This setting is made on each compute node in the environment
and it is strongly recommended that the setting is consistent across all of the compute nodes. To alter the Huge
Page allocation, you can execute the following command on each compute node as the root user:

# sysctl -w vm.nr_hugepages=value

where value is the number of Huge Pages that you want to allocate.
On Exadata Cloud Service instances hosted in Oracle Cloud Infrastructure, each Huge Page is 2 MB by default.
Therefore, to allocate 50 GB of memory to Huge Pages you can execute the following command:

# sysctl -w vm.nr_hugepages=25600

• At the Oracle Database level, the use of Huge Pages is controlled by the USE_LARGE_PAGES instance parameter
setting. This setting applies to each database instance in a clustered database. Oracle strongly recommends a
consistent setting across all of the database instances associated with a database. The following options are
available:
• TRUE — specifies that the database instance can use Huge Pages if they are available. For all versions of
Oracle Database after 11.2.0.3, Oracle allocates as much of the SGA as it can, using Huge Pages. When the
Huge Page allocation is exhausted, standard memory pages are used.
• FALSE — specifies that the database instance does not use Huge Pages. This setting is generally not
recommended if Huge Pages are available.
• ONLY — specifies that the database instance must use Huge Pages. With this setting, the database instance
fails to start if the entire SGA cannot be accommodated in Huge Pages.
If you make any adjustments at either the operating system or Oracle Database level, ensure that the overall
configuration works.
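
A hedged sketch of verifying and persisting the configuration follows. The value 25600 repeats the 50 GB example above, the /etc/sysctl.conf entry mirrors the file mentioned earlier, and the ALTER SYSTEM statement takes effect only after the database instances are restarted; adapt or omit any of these steps to suit your environment.

# grep -i hugepages /proc/meminfo
# echo "vm.nr_hugepages=25600" >> /etc/sysctl.conf

SQL> ALTER SYSTEM SET use_large_pages=ONLY SCOPE=SPFILE SID='*';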
For more information, see the Oracle Database Administrator's Reference for Linux and UNIX-Based Operating
Systems for Release 19, 18, 12.1, or 11.2 for a general overview of Huge Pages and more information about
configuring Huge Pages. Also, see USE_LARGE_PAGES in the Oracle Database Reference for Release 12.2, 12.1, or
11.2.

Exadata Fixed Hardware Shapes: X6, X7, X8 and Exadata Base


This topic describes the available fixed-size Exadata Cloud Service hardware shapes in Oracle Cloud Infrastructure.
Note:

For information on the flexible X8M shape, see Overview of X8M Scalable
Exadata Infrastructure on page 1228.
Exadata X8 shapes:
• Exadata.Quarter3.100: Provides a 2-node Exadata DB system with up to 100 CPU cores, and 149 TB of usable
storage.
• Exadata.Half3.200: Provides a 4-node Exadata DB system with up to 200 CPU cores, and 299 TB of usable
storage.
• Exadata.Full3.400: Provides an 8-node Exadata DB system with up to 400 CPU cores, and 598 TB of usable
storage.
Exadata X7 shapes:
• Exadata.Quarter2.92: Provides a 2-node Exadata DB system with up to 92 CPU cores, and 106 TB of usable
storage.
• Exadata.Half2.184: Provides a 4-node Exadata DB system with up to 184 CPU cores, and 212 TB of usable
storage.
• Exadata.Full2.368: Provides an 8-node Exadata DB system with up to 368 CPU cores, and 424 TB of usable
storage.
Exadata X6 shapes:
• Exadata.Quarter1.84: Provides a 2-node Exadata DB system with 22 enabled CPU cores, with up to 62
additional CPU cores, and 84 TB of usable storage.
• Exadata.Half1.168: Provides a 4-node Exadata DB system with 44 enabled CPU cores, with up to 124 additional
CPU cores, and 168 TB of usable storage.
• Exadata.Full1.336: Provides an 8-node Exadata DB system with 88 enabled CPU cores, with up to 248
additional CPU cores, and 336 TB of usable storage.
Exadata base system:
Exadata.Base.48: Provides a 2-node Exadata DB system with up to 48 CPU cores, and 74 TB of usable storage.

All Exadata shapes provide unlimited I/O, and support only Enterprise Edition - Extreme Performance. All Exadata
shapes provide 720 GB RAM per node except for Exadata base systems, which provide 360 GB RAM per node. For
more details about Exadata shapes, see Exadata Shape Configurations on page 1224.
For information on provisioning an Exadata Cloud Service instance, see Creating an Exadata Cloud Service Instance
on page 1244.

The X8M Virtual Machine File System Structure


Exadata Cloud Service X8M systems use the following file system organization on the virtual machine nodes.

Filesystem Mounted On
devtmpfs /dev
tmpfs /dev/shm
tmpfs /run
tmpfs /sys/fs/cgroup
tmpfs /run/user/0
/dev/mapper/VGExaDb-LVDbSys1 /
/dev/mapper/VGExaDb-LVDbOra1 /u01
/dev/mapper/VGExaDb-LVDbTmp /tmp
/dev/mapper/VGExaDb-LVDbVar1 /var
/dev/mapper/VGExaDb-LVDbVarLog /var/log
/dev/mapper/VGExaDb-LVDbHome /home
/dev/mapper/VGExaDbDisk.u02_extra.img-LVDBDisk /u02
/dev/mapper/VGExaDb-LVDbVarLogAudit /var/log/audit
/dev/sda1 /boot
/dev/mapper/VGExaDbDisk.grid19.0.0.0.200414.img-LVDBDisk /u01/app/19.0.0.0/grid
/dev/asm/acfsvol01-142 /acfs01

Bare Metal and Virtual Machine DB Systems


Oracle Cloud Infrastructure offers single-node DB systems on either bare metal or virtual machines, and 2-node RAC
DB systems on virtual machines. If you need to provision a DB system for development or testing purposes, a special
fast-provisioning single-node virtual machine system is available.
You can manage these systems by using the Console, the API, the Oracle Cloud Infrastructure CLI, the Database CLI
(DBCLI), Enterprise Manager, Enterprise Manager Express, or SQL Developer.
Note:

This documentation is intended for Oracle database administrators
and assumes familiarity with Oracle databases and tools. If you need
additional information, see the product documentation available at http://
docs.oracle.com/en/database/.

Supported Database Editions and Versions


All single-node DB systems support the following Oracle Database editions:
• Standard Edition
• Enterprise Edition
• Enterprise Edition - High Performance
• Enterprise Edition - Extreme Performance
Two-node Oracle RAC DB systems require Oracle Enterprise Edition - Extreme Performance.
For standard provisioning of DB systems (using Oracle Automatic Storage Management (ASM) as your storage
management software), the supported database versions are:
• Oracle Database 21c
• Oracle Database 19c
• Oracle Database 18c (18.0)
• Oracle Database 12c Release 2 (12.2)
• Oracle Database 12c Release 1 (12.1)
• Oracle Database 11g Release 2 (11.2)
For fast provisioning of single-node virtual machine database systems (using Logical Volume Manager as your
storage management software), the supported database versions are:
• Oracle Database 21c
• Oracle Database 19c
• Oracle Database 18c
Tip:

Your DB system's operating system will periodically need to be updated, just
as your Oracle Database software will need to be updated. Before attempting
an OS update, be sure to read the information in Updating a DB System on
page 1398 and back up your DB system's databases.

Upgrading the Oracle Database Software in Your DB System


You can upgrade database instances that use Oracle Database 18c and earlier to Oracle Database 19c (Long Term
Release). For information on prerequisites and upgrade instructions see Upgrading a Database on page 1425.

Oracle Database Preview Version Availability


Oracle Cloud Infrastructure periodically offers preview software versions of Oracle Database for testing purposes.
You can provision a virtual machine DB system using preview version software to test applications before the general
availability of the software in the Database service. When you provision a DB system with preview version software,
the system remains available to you until you decide to terminate it.
Preview version DB systems are provisioned in the same manner as non-preview systems. If available, preview
version software is displayed as one of the choices in the Database version selector in the Create DB System dialog.
See To create a DB system on page 1373 for instructions on provisioning a virtual machine DB system using
preview version software.
Current Preview Version Software
There is no Oracle Database preview software version available at this time for bare metal and virtual machine DB
systems in Oracle Cloud Infrastructure.
Oracle Database Preview Version Restrictions
Preview version software cannot be used for production databases. The following restrictions apply to preview
version software:

• Only available for non-RAC virtual machine DB systems. Preview software is not available for bare metal
systems, Exadata systems, or virtual machine systems using RAC.
• Uses Logical Volume Manager (LVM) storage management software only. Automatic Storage Management
(ASM) is not available.
• Patching and database version upgrades (including upgrades to the generally available release of the preview
software) are not available.
• You cannot create a new DB system from a backup of a database that uses preview version software.
• Standalone backups cannot be created.
• Data Guard is not available.
• Preview version software DB systems cannot be created from backups. In-place restores are supported.

Availability of Older Database Versions for Virtual Machine DB Systems


For virtual machine DB systems, Oracle Cloud Infrastructure also supports the creation of DB systems using
older database versions. For each shape, the latest version and the two prior versions of the release are available at
provisioning.
Caution:

If you need to launch your DB system with an older database version, see
Critical Patch Updates for information on known security issues with your
chosen database version. You will also need to analyze and patch known
security issues for the operating system included with the older database
version. See Securing Database on page 3722 for information on security
best practices for databases in Oracle Cloud Infrastructure.

Per-Second Billing for Bare Metal and Virtual Machine Database Resources
For databases using bare metal and virtual machine infrastructure, Oracle Cloud Infrastructure uses per-second
billing. This means that OCPU and storage usage is billed by the second, with a minimum usage period of 1 minute
for virtual machine DB systems and 1 hour for bare metal DB systems.

Bare Metal DB Systems


Bare metal DB systems consist of a single bare metal server running Oracle Linux 6.8, with locally attached NVMe
storage. If the node fails, you can simply launch another system and restore the databases from current backups.
When you launch a bare metal DB system, you select a single Oracle Database edition that applies to all the databases
on that DB system. The selected edition cannot be changed. Each DB system can have multiple database homes,
which can be different versions. Each database home can have only one database, which is the same version as the
database home.
Shapes for Bare Metal DB Systems
When you launch a DB system, you choose a shape, which determines the resources allocated to the DB system. The
available shapes for a bare metal DB system are:
• BM.DenseIO2.52: Provides a 1-node DB system (one bare metal server), with up to 52 CPU cores, 768 GB
memory, and eight 6.4 TB locally attached NVMe drives (51.2 TB total) to the DB system.
• BM.DenseIO1.36: Limited availability. Provides a 1-node DB system (one bare metal server), with up to 36 CPU
cores, 512 GB memory, and nine 3.2 TB locally attached NVMe drives (28.8 TB total) to the DB system.
Note: BM.DenseIO1.36 is available only to monthly universal credit customers existing on or before November
9th, 2018. This shape is available only in the US West (Phoenix), US East (Ashburn), and Germany Central
(Frankfurt) regions.

Bare Metal DB System Storage Considerations


The shape you choose for a bare metal DB system determines its total raw storage, but other options, like 2- or 3-way
mirroring and the space allocated for data files, affect the amount of usable storage on the system. The following table
shows how various configurations affect the usable storage for bare metal DB systems.

Shape                       Raw Storage      Usable Storage with Normal        Usable Storage with High
                                             Redundancy (2-way Mirroring)      Redundancy (3-way Mirroring)

BM.DenseIO2.52              51.2 TB NVMe     DATA: 16 TB / RECO: 4 TB          DATA: 9 TB / RECO: 2.3 TB

BM.DenseIO1.36 (see note)   28.8 TB NVMe     DATA: 9.4 TB / RECO: 1.7 TB       DATA: 5.4 TB / RECO: 1 TB

Note: BM.DenseIO1.36 availability is limited to monthly universal credit customers existing on or before November
9th, 2018, in the us-phoenix-1, us-ashburn-1, and eu-frankfurt-1 regions.

Virtual Machine DB Systems


There are two types of DB systems on virtual machines:
• A 1-node virtual machine DB system consists of one virtual machine.
• A 2-node virtual machine DB system consists of two virtual machines.
When you launch a virtual machine DB system, you select the Oracle Database edition and version that applies to the
database on that DB system. The selected edition cannot be changed. Depending on your selected Oracle Database
edition and version, your DB system can support multiple pluggable databases (PDBs). See the following Oracle
Database licensing topics for information about the maximum number of pluggable and container databases available
for your selected Oracle Database version:
• Oracle Database 18c: Permitted Features, Options, and Management Packs by Oracle Database Offering
• Oracle Database 19c: Permitted Features, Options, and Management Packs by Oracle Database Offering
Unlike a bare metal DB system, a virtual machine DB system can have only a single Database Home, which in turn
can have only a single database. The database is the same version as the Database Home.
Virtual machine DB systems also differ from bare metal DB systems in the following ways:
• A virtual machine DB system database uses Oracle Cloud Infrastructure block storage instead of local storage.
You specify a storage size when you launch the DB system, and you can scale up the storage as needed at any
time.
• To change the number of CPU cores on an existing virtual machine DB system, you must change the shape of that
DB system. See To change the shape of a virtual machine DB system on page 1385 for more information.
Note:

The shape change operation takes place in a rolling fashion for multi-node DB systems, allowing you to
change the shape with no database downtime.
Fast Provisioning Option for Single-Node Virtual Machine DB Systems
For 1-node virtual machine DB systems, Oracle Cloud Infrastructure provides a "fast provisioning" option that
allows you to create your DB system using Logical Volume Manager as your storage management software. The
alternative ("standard provisioning") is to provision with Oracle Automatic Storage Management (ASM).
Note:

• When using the fast provisioning option, the number and size of the block volumes specified during
provisioning determine the maximum total storage available through scaling. See Storage Scaling
Considerations for Virtual Machine Databases Using Fast Provisioning on page 1556 for details.
• Multi-node Virtual Machine DB systems require Oracle Automatic
Storage Management and cannot be created using the fast-provisioning
option.
• You can clone virtual machine DB systems that have been created using
the fast provisioning option. See Cloning a Virtual Machine DB System
on page 1389 for instructions.
• You cannot use a custom database software image when provisioning a
system with logical volume manager storage software.
Cloning a Fast-Provisioned Virtual Machine DB System
You can create a clone of a virtual machine DB system that was created using the "fast provisioning" option (these
are single-node systems that use Logical Volume Manager storage management software). For more information and
instructions, see Cloning a Virtual Machine DB System on page 1389.
Fault Domain Considerations for Two-Node Virtual Machine DB Systems
When you provision a 2-node RAC DB system, the system assigns each node to a different fault domain by default.
Using the Advanced Options link in the provisioning dialog, you can select the fault domain(s) to be used for your 2-
node RAC DB systems and the system will assign the nodes to your selected fault domains. Oracle recommends that
you place each node of a 2-node RAC DB system in a different fault domain. For more information on fault domains,
see Fault Domains on page 184.
Rebooting a Virtual Machine DB System Node for Planned Maintenance
Virtual machine DB system nodes use underlying physical hosts that periodically need to undergo maintenance.
When such maintenance is needed, Oracle Cloud Infrastructure schedules a reboot of your virtual machine DB
system node and notifies you of the upcoming reboot. The reboot allows your virtual machine DB system node to be
migrated to a new physical host which is not in need of maintenance. (Stopping and starting the node will also result
in the migration to a new physical host.) The only impact to your virtual machine DB system node is the reboot itself.
The planned maintenance of the original physical hardware takes place after your node has been migrated to its new
host, and has no impact on your DB system.
If your virtual machine DB system node is scheduled for a maintenance reboot, you can proactively reboot your node
(by stopping and starting it) using the Console or the API. This lets you control how and when your node experiences
downtime. If you choose not to reboot before the scheduled time, then Oracle Cloud Infrastructure will reboot and
migrate your node at the scheduled time.
To identify the virtual machine DB system nodes that you can proactively reboot, navigate to your system's
DB System Details page in the Console and check the Node Maintenance Reboot field. If the instance has a
maintenance reboot scheduled and can be proactively rebooted, this field displays the date and start time for the
reboot. When the Maintenance Reboot field does not display a date, your virtual machine DB system has no
scheduled node maintenance events.
To check for scheduled maintenance events using the API, use the GetDbNode operation to check the
timeMaintenanceWindowEnd field of the DbNode resource. This field specifies when the system will initiate
the next scheduled node reboot.
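For example, with the OCI SDK for Python (assuming the SDK is installed and configured, and using a placeholder
DB node OCID), the check might look like the following sketch:

import oci

config = oci.config.from_file()                 # reads the default ~/.oci/config profile
db_client = oci.database.DatabaseClient(config)

# Placeholder OCID; substitute the OCID of the DB node you want to check.
node = db_client.get_db_node("ocid1.dbnode.oc1..exampleuniqueID").data

if node.time_maintenance_window_end:
    print("Maintenance reboot scheduled; window ends at", node.time_maintenance_window_end)
else:
    print("No maintenance reboot is currently scheduled for this node.")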
To make it easier to locate nodes that have scheduled maintenance reboots, you can use the Search service with a
predefined query to find all DB systems that have a maintenance reboot scheduled.
For instructions on using the Console to reboot a node, see To start, stop, or reboot a database system on page 1384.
Shapes for Virtual Machine DB Systems
When you launch a virtual machine DB system, you choose a shape, which determines the resources allocated
to the DB system. After you provision the system, you can change the shape to adapt to new processing capacity
requirements.
The following table shows the available shapes in the X7 series for a virtual machine DB system.

Shape               CPU Cores    Memory

VM.Standard2.1      1            15 GB
VM.Standard2.2      2            30 GB
VM.Standard2.4      4            60 GB
VM.Standard2.8      8            120 GB
VM.Standard2.16     16           240 GB
VM.Standard2.24     24           320 GB

The following table shows the available shapes in the X5 series for a virtual machine DB system.
Note:

Availability of X5 shapes is limited to monthly universal credit customers existing on or before
November 9th, 2018, in the us-phoenix-1, us-ashburn-1, and eu-frankfurt-1 regions.

Shape               CPU Cores    Memory

VM.Standard1.1      1            7 GB
VM.Standard1.2      2            14 GB
VM.Standard1.4      4            28 GB
VM.Standard1.8      8            56 GB
VM.Standard1.16     16           112 GB

Storage Options for Virtual Machine DB Systems


Virtual machine DB systems use Oracle Cloud Infrastructure block storage. The following table shows details of the
storage options for a virtual machine DB system. Total storage includes available storage plus recovery logs.

Available Storage (GB)    Total Storage (GB)

256                       712
512                       968
1024                      1480
2048                      2656
4096                      5116
6144                      7572
8192                      10032
10240                     12488
12288                     14944
14336                     17404
16384                     19860
18432                     22320
20480                     24776
22528                     27232
24576                     29692
26624                     32148
28672                     34608
30720                     37064
32768                     39520
34816                     41980
36864                     44436
38912                     46896
40960                     49352

For 2-node RAC virtual machine DB systems, storage capacity is shared between the nodes.
Security Hardening Tool for Virtual Machine DB systems
Oracle Cloud Infrastructure virtual machine DB systems provisioned using Oracle Linux 7 include a Python
referred to as the Security Technical Implementation Guide (STIG) tool, that you can use to perform security
hardening for your virtual machine DB system. See Security Technical Implementation Guide (STIG) Tool for
Virtual Machine DB systems on page 1556 and Enabling FIPS, SE Linux, and STIG on Bare Metal or Virtual
Machine DB System Components on page 1557 for more information.
Boot Volume Backups
Oracle maintains a weekly boot volume backup of your virtual machine DB system so that the system can be easily
restored in the event of a serious error or system failure. Boot volume backups are currently not accessible to users
(there is no Console, API, or CLI access to a DB system boot volume backup), and Oracle bears the cost of keeping
and maintaining the backup. In the event of a system failure, contact My Oracle Support to request that Oracle
perform a restore of your system from the boot volume backup.

Database Backups, Restoring from a Backup, and Creating a Database or DB System from a Backup

Backup Options
Oracle Cloud Infrastructure offers you the ability to create and store automatic daily backups and on-demand full
backups. You can store backups in your DB system's local storage, or in Oracle Cloud Infrastructure Object Storage.
See Backing Up a Database on page 1436 for information about the backup storage options you have for your cloud
databases. See Backing Up a Database to Oracle Cloud Infrastructure Object Storage on page 1436 for information
about managed automatic backups in Oracle Cloud Infrastructure.

Restoring from a Backup


See Recovering a Database from Object Storage on page 1447 for information on restoring a database from a
backup in Object Storage.

Creating a Database or DB System Using a Backup


See To create a DB system from a backup and To create a database from a backup in an existing DB system for
information about creating a database or DB system from the following sources:
• Daily automatic backups or on-demand full backups.

• The last archived redo log backup. Requires that you have automatic backups enabled. This backup combines data
from the most recent daily automatic backup and data from archived redo logs, and represents the most current
backup available.
• Daily automatic backup data used to create a point-in-time copy of the source database based on a specified
timestamp.
• Standalone Backups on page 1439

Moving Databases to Oracle Cloud DB Systems Using Zero Downtime Migration


Oracle now offers the Zero Downtime Migration service, a quick and easy way to move on-premises Oracle
Databases and Oracle Cloud Infrastructure Classic databases to Oracle Cloud Infrastructure. You can migrate
databases to the following types of Oracle Cloud Infrastructure systems: Exadata, Exadata Cloud@Customer, bare
metal, and virtual machine.
Zero Downtime Migration leverages Oracle Active Data Guard to create a standby instance of your database in an
Oracle Cloud Infrastructure system. You switch over only when you are ready, and your source database remains
available as a standby. Use the Zero Downtime Migration service to migrate databases individually or at the fleet
level. See Move to Oracle Cloud Using Zero Downtime Migration for more information.

Network Setup for DB Systems


Note:

This topic is not applicable to Exadata DB systems. For information on the


network setup for an Exadata DB system, see Network Setup for Exadata
Cloud Service Instances on page 1233.
Before you set up a bare metal or virtual machine DB system, you must set up a virtual cloud network (VCN) and
other Networking service components. This topic describes the recommended configuration for the VCN.
VCN and Subnets
To launch a DB system, you must have:
• A VCN in the region where you want the DB system
• At least one subnet in the VCN (either a public subnet or a private subnet)
In general, Oracle recommends using regional subnets, which span all availability domains in the region. For a bare
metal or virtual machine DB system, either a regional subnet or AD-specific subnet works. For more information, see
Overview of VCNs and Subnets on page 2848.
You will create a custom route table, as well as security rules to control traffic to and from the DB system's
compute nodes. More information about these follows later in this topic.
Certain details of the VCN and subnet configuration depend on your choice for DNS resolution within the VCN. For
more information, see DNS for the DB System on page 1363.
Option 1: Public Subnet with Internet Gateway
This option can be useful when doing a proof-of-concept or development work. You can use this setup in production
if you want to use an internet gateway with the VCN, or if you have services that run only on a public network and
need access to the database. See the following diagram and description.

You set up:


• Public subnet.
• Internet gateway.
• Service gateway to reach Object Storage for database backups and patching. Also see Option 1: Service Gateway
Access Only to Object Storage on page 1367.
• Route table: A custom route table for the subnet, with two rules (a sketch of these rules appears below):
• A rule for 0.0.0.0/0, and target = internet gateway.
• A rule for the service CIDR label called OCI <region> Object Storage, and target = the service gateway.
Also see Option 1: Service Gateway Access Only to Object Storage on page 1367.
• Security rules to enable the desired traffic to and from the DB system nodes. See Security Rules for the DB
System on page 1368.
Important:

See this known issue for information about configuring route rules with
service gateway as the target on route tables associated with public subnets.
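The two route rules described above can also be added to the subnet's custom route table programmatically. The
following sketch uses the OCI SDK for Python; the OCIDs and the service CIDR label string are placeholders (the
Object Storage service label varies by region), so adapt them to your tenancy.

import oci

config = oci.config.from_file()
vcn_client = oci.core.VirtualNetworkClient(config)

route_rules = [
    # Rule for 0.0.0.0/0, target = internet gateway.
    oci.core.models.RouteRule(
        destination="0.0.0.0/0",
        destination_type="CIDR_BLOCK",
        network_entity_id="ocid1.internetgateway.oc1..exampleuniqueID"),
    # Rule for the OCI <region> Object Storage service CIDR label, target = service gateway.
    oci.core.models.RouteRule(
        destination="oci-phx-objectstorage",           # example label for the Phoenix region
        destination_type="SERVICE_CIDR_BLOCK",
        network_entity_id="ocid1.servicegateway.oc1..exampleuniqueID"),
]

vcn_client.update_route_table(
    "ocid1.routetable.oc1..exampleuniqueID",            # the subnet's custom route table
    oci.core.models.UpdateRouteTableDetails(route_rules=route_rules))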

Option 2: Private Subnet


Oracle recommends this option for a production system. The subnet is private and cannot be reached from the
internet. See the following diagram and description.

You set up:


• Private subnet.
• Gateways for the VCN:
• Dynamic routing gateway (DRG), with a FastConnect or IPSec VPN to your on-premises network.
• Service gateway to reach Object Storage for database backups and patching, and to reach Oracle YUM repos
for OS updates. Also see Option 2: Service Gateway Access to Both Object Storage and YUM Repos on page
1367.
• NAT gateway (to reach public endpoints not supported by the service gateway).
• Route table: A custom route table for the subnet, with these rules:
• A route for the on-premises network's CIDR, and target = DRG.
• A rule for the service CIDR label called All <region> Services in Oracle Services Network, and target = the
service gateway. Also see Option 2: Service Gateway Access to Both Object Storage and YUM Repos on page
1367.
• If you want to access the Oracle YUM repos through the NAT gateway, add a route rule for the regional YUM
repo's public IP address, and target = the NAT gateway. If you rely only on the next rule (0.0.0.0/0), traffic
to the YUM repo would still be routed to the service gateway, because the service gateway route is more
specific than 0.0.0.0/0.
• A rule for 0.0.0.0/0, and target = NAT gateway.
• Security rules to enable the desired traffic to and from the DB system nodes. See Security Rules for the DB
System on page 1368.
Requirements for IP Address Space
If you are setting up DB systems (and thus VCNs) in more than one region, make sure the IP address space of the
VCNs does not overlap.
The subnet you create for a bare metal or virtual machine DB system cannot overlap with 192.168.16.16/28, which is
used by the Oracle Clusterware private interconnect on the database instance.
The following table lists the minimum required subnet size.
Tip:

The Networking service reserves three IP addresses in each subnet.


Allocating a larger space for the subnet than the minimum required (for
example, at least /25 instead of /28) can reduce the relative impact of those
reserved addresses on the subnet's available space.

DB System Type                          # Required IP Addresses                   Minimum Subnet Size

1-node bare metal or virtual machine    1 + 3 reserved in subnet = 4              /30 (4 IP addresses)

2-node RAC virtual machine              (2 addresses * 2 nodes) + 3 for SCANs     /28 (16 IP addresses)
                                        + 3 reserved in subnet = 10

VCN Creation Wizard: Not for Production


The Networking section of the Console includes a handy wizard that creates a VCN along with related resources. It
can be useful if you just want to try launching an instance. However, the wizard automatically creates a public subnet
and an internet gateway. You may not want this for your production network, so Oracle recommends you create the
VCN and other resources individually yourself instead of using the wizard.
DNS for the DB System
There are two choices for DNS and hostname resolution for the DB system:
• Recommended: Use the default DNS functionality in the VCN (called the Internet and VCN Resolver)
• Use a custom DNS resolver of your choice
The following table shows which choices are supported with each type of DB system, and the endpoints that need to
be resolved for the DB system to function.

DB System Type                 Supported DNS Choices                    Endpoints to Be Resolved

1-node bare metal or           • Recommended: Default (Internet and     • Object Storage endpoints (includes both the
virtual machine                  VCN Resolver)                            Object Storage endpoints and Swift endpoints)
                               • Custom DNS resolver of your choice     • Oracle YUM repo endpoints

2-node RAC virtual machine     • Default (Internet and VCN Resolver)    • Object Storage endpoints (includes both the
                                                                          Object Storage endpoints and Swift endpoints)
                                                                        • Oracle YUM repo endpoints
                                                                        • Single Client Access Names (SCANs)

The following sections give more details about the DNS choices.

Default (Internet and VCN Resolver)


See the preceding table for the types of DB systems that support the Internet and VCN Resolver.
Oracle recommends using the Internet and VCN Resolver for DNS. It's the default, built-in DNS functionality that
comes with each VCN. It enables hosts in a VCN to resolve these items:
• Hostnames of other hosts in the same VCN
• Hostnames that are publicly published on the Internet
For general information about the Internet and VCN Resolver, see DNS in Your Virtual Cloud Network on page
2936.
For a DB system, the Internet and VCN Resolver handles resolution of all necessary endpoints: Object Storage
endpoints (includes both the Object Storage endpoints and Swift endpoints), YUM repos, and SCANs (SCANs are
used only with 2-node RAC systems).
By default, each VCN is configured to use the Internet and VCN Resolver. If you plan to use a custom DNS resolver,
you must configure the VCN in a different way. For more information, see To use a custom DNS resolver with your
DB system on page 1365.
To use the Internet and VCN Resolver with your DB System
As part of the overall network setup, perform these tasks (steps 1 and 2 are also sketched in code after this
procedure):
1. Create the VCN with the required DNS settings:
• When creating the VCN, select the check box for Use DNS Hostnames in this VCN.
• Specify a DNS label for the VCN. See the restrictions in Hostname restrictions for using the Internet and VCN
Resolver on page 1364.
• Notice that you cannot change these VCN DNS settings after you create the VCN.
2. Create each subnet with the required DNS settings:
• When creating a subnet in the VCN, select the check box for Use DNS Hostnames in this Subnet.
• Specify a DNS label for the subnet. See the restrictions in Hostname restrictions for using the Internet and
VCN Resolver on page 1364.
• Notice that you cannot change these subnet DNS settings after you create the subnet.
3. Use the default set of DHCP options that come with the VCN:
• When creating each subnet, configure it to use the VCN's default set of DHCP options.
• By default, the default set of DHCP options is configured to use the Internet and VCN Resolver.
4. Create the DB system with a hostname prefix:
• Later, when creating the DB system, specify a value in the Hostname Prefix field. See the restrictions in
Hostname restrictions for using the Internet and VCN Resolver on page 1364.
• Notice that the DB system's Host Domain Name value is automatically assigned based on the VCN and
subnet DNS labels.
The resulting DB system has a fully qualified domain name (FQDN) based on the hostname prefix, VCN label, and
subnet label you specify.
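Steps 1 and 2 can also be performed with the OCI SDK for Python. The following is a minimal sketch; the
compartment OCID, CIDR blocks, and DNS labels are placeholders, and the labels must satisfy the restrictions
described below.

import oci

config = oci.config.from_file()
vcn_client = oci.core.VirtualNetworkClient(config)

vcn = vcn_client.create_vcn(oci.core.models.CreateVcnDetails(
    compartment_id="ocid1.compartment.oc1..exampleuniqueID",
    cidr_block="10.0.0.0/16",
    display_name="acmevcniad",
    dns_label="acmevcniad")).data        # setting dns_label enables DNS hostnames in the VCN

subnet = vcn_client.create_subnet(oci.core.models.CreateSubnetDetails(
    compartment_id="ocid1.compartment.oc1..exampleuniqueID",
    vcn_id=vcn.id,
    cidr_block="10.0.1.0/24",
    display_name="ad1",
    dns_label="ad1")).data               # setting dns_label enables DNS hostnames in the subnet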
Hostname restrictions for using the Internet and VCN Resolver
When you create the VCN, subnet, and DB system, you must carefully set the following identifiers, which are related
to DNS in the VCN:
• VCN DNS label
• Subnet DNS label
• Hostname prefix for the DB system
These values make up the node's fully qualified domain name (FQDN):

<hostname_prefix><RAC_node_#>.<subnet_DNS_label>.<VCN_DNS_label>.oraclevcn.com
For RAC systems only, the Database service automatically appends a node number after the hostname prefix.
For example:
• Node 1: dbsys1.ad1.acmevcniad.oraclevcn.com
• Node 2: dbsys2.ad1.acmevcniad.oraclevcn.com
Requirements for the DB system's hostname prefix:
• Recommended maximum: 16 characters. For more information, see the example under the following section,
"Requirements for the VCN and subnet DNS labels".
• Must start with an alphabetical character.
• Cannot be the string localhost.
Requirements for the VCN and subnet DNS labels:
• Recommended maximum: 15 characters.
• No hyphens or underscores.
• Recommended: Include the region name in the VCN's name, and include the availability domain name in the
subnet's name.
• The FQDN has a maximum total limit of 63 characters, so set the VCN and subnet DNS labels short enough to
meet that requirement. Here is a safe general rule:
<16_chars_max>#.<15_chars_max>.<15_chars_max>.oraclevcn.com
• The recommended maximums are not enforced when you create the VCN and subnets. However, the DB system
deployment fails if the FQDN has more than 63 characters.
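Because the 63-character limit is enforced only at deployment time, it can be worth validating your chosen labels
up front. The following plain-Python sketch (the sample values are placeholders) simply applies the rule above:

def build_fqdn(hostname_prefix, subnet_dns_label, vcn_dns_label, node_number=""):
    # For RAC systems the Database service appends a node number after the hostname prefix.
    fqdn = f"{hostname_prefix}{node_number}.{subnet_dns_label}.{vcn_dns_label}.oraclevcn.com"
    if len(fqdn) > 63:
        raise ValueError(f"FQDN '{fqdn}' is {len(fqdn)} characters; the maximum is 63")
    return fqdn

print(build_fqdn("dbsys", "ad1", "acmevcniad", node_number="1"))
# dbsys1.ad1.acmevcniad.oraclevcn.com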
Custom DNS Resolver
See the preceding table for the types of DB systems that support the use of a custom DNS resolver.
A custom DNS resolver is a DNS server that you set up in your on-premises network and maintain yourself. It must
resolve the endpoints required by the DB system.
By default, the VCN is configured to use the Internet and VCN Resolver. Therefore, if you instead want to use a
custom DNS resolver, you must configure the VCN and DHCP options in a different way. See the following process.
To use a custom DNS resolver with your DB system
As part of the overall network setup, perform these tasks:
1. Create the VCN with the recommended DNS settings:
• When creating the VCN, Oracle recommends that you select the check box for Use DNS Hostnames in this
VCN and then specify a DNS label for the VCN. See the restrictions listed in Hostname restrictions when
using a custom DNS resolver on page 1366.
• Notice that you cannot change the preceding VCN DNS settings after you create the VCN. They are
optional for a custom DNS server, but required if you use the Internet and VCN Resolver. Therefore, Oracle
recommends that you configure them now in case you later want to use the Internet and VCN Resolver.
2. Create each subnet with the recommended DNS settings:
• When creating a subnet in the VCN, Oracle recommends that you select the check box for Use
DNS Hostnames in this Subnet and then specify a DNS label for the subnet. See the restrictions listed in
Hostname restrictions when using a custom DNS resolver on page 1366.
• Notice that you cannot change the preceding subnet DNS settings after you create the subnet. They are
optional for a custom DNS server, but required if you use the Internet and VCN Resolver. Therefore, Oracle
recommends that you configure them now in case you later want to use the Internet and VCN Resolver.

3. Edit the default set of DHCP options to use a custom resolver (sketched in code after this procedure):
• When creating each subnet, configure it to use the VCN's default set of DHCP options.
• Edit the default set of DHCP options so that DNS Type is set to Custom Resolver. Provide the IP address
for at least one DNS server (maximum three). Optionally provide a single search domain (which will
automatically be added to the host's /etc/resolv.conf file).
4. Create the DB system with required DNS entries:
• Later, when creating the DB system, specify a Hostname Prefix.
• For the Host Domain Name: If you selected the check box for Use DNS Hostnames in the preceding steps,
the Host Domain Name is automatically generated from the VCN and subnet DNS labels. Otherwise, you
must provide a value for the Host Domain Name. See the restrictions listed in Hostname restrictions when
using a custom DNS resolver on page 1366.
• Notice that when launching the DB system, the Database service automatically assigns an IP address from the
VCN's CIDR block and resolves the address locally based on the host's /etc/hosts file. Your custom DNS
resolver does not need to resolve the hostname in advance for the DB system launch to succeed.
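Step 3 (pointing the VCN's default set of DHCP options at your custom resolver) might look like the following
sketch with the OCI SDK for Python; the OCID, DNS server address, and search domain are placeholders.

import oci

config = oci.config.from_file()
vcn_client = oci.core.VirtualNetworkClient(config)

vcn_client.update_dhcp_options(
    "ocid1.dhcpoptions.oc1..exampleuniqueID",        # the VCN's default set of DHCP options
    oci.core.models.UpdateDhcpDetails(options=[
        # DNS Type = Custom Resolver, with the IP address of at least one DNS server (maximum three).
        oci.core.models.DhcpDnsOption(
            server_type="CustomDnsServer",
            custom_dns_servers=["10.0.0.5"]),
        # Optional single search domain, added to each host's /etc/resolv.conf file.
        oci.core.models.DhcpSearchDomainOption(
            search_domain_names=["example.com"]),
    ]))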
Hostname restrictions when using a custom DNS resolver
Requirements for the DB system's hostname prefix:
• Recommended maximum: 16 characters. For more information, see the example under the next section,
"Requirements for the VCN and subnet DNS labels".
• Must start with an alphabetical character.
• Cannot be the string localhost.
Requirements for the VCN and subnet DNS labels:
• You can provide a value for the DNS labels only if you select the check box for Use DNS Hostnames when
creating the VCN and subnets. The resulting FQDN for the DB system follows this format:
<hostname_prefix>.<subnet_DNS_label>.<VCN_DNS_label>.oraclevcn.com
• Recommended maximum for each DNS label: 15 characters.
• No hyphens or underscores.
• Recommended: Include the region name in the VCN's name, and include the availability domain name in the
subnet's name.
• The FQDN has a maximum total limit of 63 characters, so set the VCN and subnet DNS labels short enough to
meet that requirement. Here is a safe general rule:
<16_chars_max>.<15_chars_max>.<15_chars_max>.oraclevcn.com
• The recommended maximums are not enforced when you create the VCN and subnets. However, the DB system
deployment fails if the FQDN has more than 63 characters.
Requirements for the DB system's host domain name:
• You can provide a value in the Host Domain Name field only if you did not select the check box for Use DNS
Hostnames when creating the VCN and subnets.
• No hyphens or underscores.
• Ensure that the value results in an FQDN that is no longer than 63 characters. Otherwise the DB system
deployment will fail.
DNS: Between On-Premises Network and VCN
If you are using the Internet and VCN Resolver and want to enable the use of hostnames when on-premises hosts and
VCN resources communicate with each other, you can set up an instance in the VCN to be a custom DNS server.
For an example of an implementation of this scenario with the Oracle Terraform provider, see Hybrid DNS
Configuration.
Service Gateway for the VCN
Your VCN needs access to both Object Storage (for backing up databases, patching, and updating the cloud tooling
on a DB system) and Oracle YUM repos for OS updates.

Depending on whether you use option 1 or option 2 described previously, you use the service gateway in different
ways. See the next two sections.
Option 1: Service Gateway Access Only to Object Storage
You configure the subnet to use the service gateway for access only to Object Storage. As a reminder, here's the
diagram for option 1:

In general, you must:


• Perform the tasks for setting up a service gateway on a VCN, and specifically enable the service CIDR label called
OCI <region> Object Storage.
• In the task for updating routing, add a route rule to the subnet's custom route table. For the destination service, use
OCI <region> Object Storage and target = the service gateway.
• In the task for updating security rules for the subnet, perform the task on the DB system's custom network security
group (NSG) or security list. Here you set up a security rule with the destination service set to OCI <region>
Object Storage. See Custom Security Rules on page 1369.
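As a rough illustration, the service gateway for option 1 could be created with the OCI SDK for Python as in the
sketch below. The compartment and VCN OCIDs are placeholders, and the code assumes the Object Storage service
entry can be located by its name.

import oci

config = oci.config.from_file()
vcn_client = oci.core.VirtualNetworkClient(config)

# Look up the "OCI <region> Object Storage" service entry.
object_storage_service = next(
    s for s in vcn_client.list_services().data if "Object Storage" in s.name)

vcn_client.create_service_gateway(oci.core.models.CreateServiceGatewayDetails(
    compartment_id="ocid1.compartment.oc1..exampleuniqueID",
    vcn_id="ocid1.vcn.oc1..exampleuniqueID",
    services=[oci.core.models.ServiceIdRequestDetails(
        service_id=object_storage_service.id)]))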
Option 2: Service Gateway Access to Both Object Storage and YUM Repos
You configure the subnet to use the service gateway for access to the Oracle Services Network, which includes both
Object Storage and the Oracle YUM repos.

Important:

See this known issue for information about accessing Oracle YUM services
through the service gateway.
As a reminder, here's the diagram for option 2:

In general, you must:


• Perform the tasks for setting up a service gateway on a VCN, and specifically enable the service CIDR label called
All <region> Services in Oracle Services Network.
• In the task for updating routing in the subnet, add a rule to the subnet's custom route table. For the destination
service, use All <region> Services in Oracle Services Network and target = the service gateway.
• In the task for updating security rules for the subnet, perform the task on the subnet's custom network security
group (NSG) or security list. Here you set up a security rule with the destination service set to All <region>
Services in Oracle Services Network. See Custom Security Rules on page 1369.
Security Rules for the DB System
This section lists the security rules to use with your DB system. Security rules control the types of traffic allowed in
and out of the DB system's compute nodes. The rules are divided into two sections.
There are different ways to implement these rules. For more information, see Ways to Implement the Security Rules
on page 1371.

Important:

Your instances running Oracle-provided DB system images also have firewall rules that control access to
the instance. Make sure that both the instance's security rules and firewall rules are set correctly. Also see
Opening Ports on the DB System on page 1434.
General Rules Required for Basic Connectivity
This section has several general rules that enable essential connectivity for hosts in the VCN. A sketch of adding
these rules through the API follows the rule listings.
If you use security lists to implement your security rules, be aware that the rules that follow are included by default in
the default security list. Update or replace the list to meet your particular security needs. The two ICMP rules (general
ingress rules 2 and 3) are required for proper functioning of network traffic within the Oracle Cloud Infrastructure
environment. Adjust the general ingress rule 1 (the SSH rule) and the general egress rule 1 to allow traffic only to and
from hosts that require communication with resources in your VCN.
General ingress rule 1: Allows SSH traffic from anywhere
• Stateless: No (all rules must be stateful)
• Source Type: CIDR
• Source CIDR: 0.0.0.0/0
• IP Protocol: TCP
• Source Port Range: All
• Destination Port Range: 22
General ingress rule 2: Allows Path MTU Discovery fragmentation messages
This rule enables hosts in the VCN to receive Path MTU Discovery fragmentation messages. Without access to these
messages, hosts in the VCN can have problems communicating with hosts outside the VCN.
• Stateless: No (all rules must be stateful)
• Source Type: CIDR
• Source CIDR: 0.0.0.0/0
• IP Protocol: ICMP
• Type: 3
• Code: 4
General ingress rule 3: Allows connectivity error messages within the VCN
This rule enables the hosts in the VCN to receive connectivity error messages from each other.
• Stateless: No (all rules must be stateful)
• Source Type: CIDR
• Source CIDR: Your VCN's CIDR
• IP Protocol: ICMP
• Type: 3
• Code: All
General egress rule 1: Allows all egress traffic
• Stateless: No (all rules must be stateful)
• Destination Type: CIDR
• Destination CIDR: 0.0.0.0/0
• IP Protocol: All
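As a sketch of how the general rules above translate to the API, the following example uses the OCI SDK for
Python to put the SSH rule, the Path MTU Discovery rule, and the general egress rule into a custom security list.
The security list OCID is a placeholder, and the third rule (ICMP within the VCN) is omitted for brevity.

import oci

config = oci.config.from_file()
vcn_client = oci.core.VirtualNetworkClient(config)

ssh_ingress = oci.core.models.IngressSecurityRule(
    protocol="6",                                  # TCP
    source="0.0.0.0/0",
    source_type="CIDR_BLOCK",
    tcp_options=oci.core.models.TcpOptions(
        destination_port_range=oci.core.models.PortRange(min=22, max=22)))

pmtud_ingress = oci.core.models.IngressSecurityRule(
    protocol="1",                                  # ICMP
    source="0.0.0.0/0",
    source_type="CIDR_BLOCK",
    icmp_options=oci.core.models.IcmpOptions(type=3, code=4))

all_egress = oci.core.models.EgressSecurityRule(
    protocol="all",
    destination="0.0.0.0/0",
    destination_type="CIDR_BLOCK")

vcn_client.update_security_list(
    "ocid1.securitylist.oc1..exampleuniqueID",
    oci.core.models.UpdateSecurityListDetails(
        ingress_security_rules=[ssh_ingress, pmtud_ingress],
        egress_security_rules=[all_egress]))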
Custom Security Rules
The following rules are necessary for the DB system's functionality. A sketch of adding the custom ingress rules
to a network security group follows the rule listings.

Important:

Custom ingress rules 1 and 2 only cover connections initiated from within the
VCN. If you have a client that resides outside the VCN, Oracle recommends
setting up two additional similar rules that instead have the Source CIDR set
to the public IP address of the client.
Custom ingress rule 1: Allows ONS and FAN traffic from within the VCN
This rule is recommended and enables the Oracle Notification Services (ONS) to communicate about Fast
Application Notification (FAN) events.
• Stateless: No (all rules must be stateful)
• Source Type: CIDR
• Source CIDR: VCN's CIDR
• IP Protocol: TCP
• Source Port Range: All
• Destination Port Range: 6200
• Description: An optional description of the rule.
Custom ingress rule 2: Allows SQL*NET traffic from within the VCN
This rule is for SQL*NET traffic and is required only if you need to enable client connections to the database.
• Stateless: No (all rules must be stateful)
• Source Type: CIDR
• Source CIDR: VCN's CIDR
• IP Protocol: TCP
• Source Port Range: All
• Destination Port Range: 1521
• Description: An optional description of the rule.
Custom egress rule 1: Allows outbound SSH access
This rule enables SSH access between nodes in a 2-node DB system. It is redundant with the general egress rule in
General Rules Required for Basic Connectivity (and in the default security list). It is optional but recommended in
case the general rule (or default security list) is inadvertently changed.
• Stateless: No (all rules must be stateful)
• Destination Type: CIDR
• Destination CIDR: 0.0.0.0/0
• IP Protocol: TCP
• Source Port Range: All
• Destination Port Range: 22
• Description: An optional description of the rule.
Custom egress rule 2: Allows access to Object Storage and YUM repos
This rule enables the DB system to communicate with Object Storage alone (for option 1), or with the Oracle Services
Network, which includes both Object Storage and the Oracle YUM repos (for option 2). It is redundant with the
general egress rule in General Rules Required for Basic Connectivity on page 1369 (and in the default security list).
It is optional but recommended in case the general rule (or default security list) is inadvertently changed.
• Stateless: No (all rules must be stateful)
• Destination Type: Service
• Destination Service:
• For option 1, use the service CIDR label called OCI <region> Object Storage
• For option 2, use the service CIDR label called All <region> Services in Oracle Services Network
• IP Protocol: TCP
• Source Port Range: All
• Destination Port Range: 443 (HTTPS)
• Description: An optional description of the rule.
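If you implement the custom ingress rules in a network security group (see the next section), the ONS/FAN and
SQL*Net rules might be added as in the following sketch with the OCI SDK for Python. The NSG OCID and the
VCN CIDR are placeholders.

import oci

config = oci.config.from_file()
vcn_client = oci.core.VirtualNetworkClient(config)

def tcp_ingress_from_vcn(port, description):
    # Stateful TCP ingress rule from the VCN's CIDR to a single destination port.
    return oci.core.models.AddSecurityRuleDetails(
        direction="INGRESS",
        protocol="6",                              # TCP
        source="10.0.0.0/16",                      # your VCN's CIDR
        source_type="CIDR_BLOCK",
        description=description,
        tcp_options=oci.core.models.TcpOptions(
            destination_port_range=oci.core.models.PortRange(min=port, max=port)))

vcn_client.add_network_security_group_security_rules(
    "ocid1.networksecuritygroup.oc1..exampleuniqueID",
    oci.core.models.AddNetworkSecurityGroupSecurityRulesDetails(security_rules=[
        tcp_ingress_from_vcn(6200, "ONS and FAN traffic from within the VCN"),
        tcp_ingress_from_vcn(1521, "SQL*Net traffic from within the VCN"),
    ]))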
Ways to Implement the Security Rules
The Networking service offers two ways to implement security rules within your VCN:
• Network security groups
• Security lists
For a comparison of the two methods, see Comparison of Security Lists and Network Security Groups on page
2859.
If you use network security groups
If you choose to use network security groups (NSGs), here is the recommended process:
1. Create a network security group for DB systems. Add the following security rules to that NSG:
• The rules listed in General Rules Required for Basic Connectivity on page 1369
• The rules listed in Custom Security Rules on page 1369
2. When the database administrator creates the DB system, they must choose several networking components (for
example, which VCN and subnet to use). They can also choose which NSG or NSGs to use. Make sure they
choose the NSG you created.
You could instead create one NSG for the general rules and a separate NSG for the custom rules. Then when the
database administrator chooses which NSGs to use for the DB system, make sure they choose both NSGs.
If you use security lists
If you choose to use security lists, here is the recommended process:
1. Configure the subnet to use the required security rules:
a. Create a custom security list for the subnet and add the rules listed in Custom Security Rules on page 1369.
b. Associate the following two security lists with the subnet:
• VCN's default security list with all its default rules. This automatically comes with the VCN.
• The new custom security list you created for the subnet
2. Later when the database administrator creates the DB system, they must choose several networking components.
When they select the subnet that you have already created and configured, the security rules are automatically
enforced for the compute nodes created in the subnet.
Caution:

Do not remove the default egress rule from the default security list. If you
do remove it, make sure to include the following replacement egress rule in the
subnet's custom security list:
• Stateless: No (all rules must be stateful)
• Destination Type: CIDR
• Destination CIDR: 0.0.0.0/0
• IP Protocol: All

Creating Bare Metal and Virtual Machine DB Systems


This topic explains how to create a bare metal or virtual machine DB system, and set up DNS for a single-node or
two-node Oracle RAC DB system.
When you create a DB system using the Console, the API, or the CLI, the system is provisioned to support Oracle
databases, and an initial database is created based on the options you provide and some default options described later
in this topic.

Required IAM Policy


To use Oracle Cloud Infrastructure, you must be granted security access in a policy by an administrator. This access
is required whether you're using the Console or the REST API with an SDK, CLI, or other tool. If you get a message
that you don’t have permission or are unauthorized, verify with your administrator what type of access you have and
which compartment you should work in.
For administrators: The policy in Let database admins manage Oracle Cloud database systems on page 2158 lets the
specified group do everything with databases and related Database resources.
If you're new to policies, see Getting Started with Policies on page 2143 and Common Policies on page 2150. For
more information about writing policies for databases, see Details for the Database Service on page 2251.
Prerequisites
You'll need the following items to create any DB system:
• The public key, in OpenSSH format, from the key pair that you plan to use for connecting to the DB System via
SSH. A sample public key, abbreviated for readability, is shown below.

ssh-rsa AAAAB3NzaC1yc2EAAAABJQAA....lo/gKMLVM2xzc1xJr/
Hc26biw3TXWGEakrK1OQ== rsa-key-20160304

For more information, see Managing Key Pairs on Linux Instances on page 698.
• A correctly configured virtual cloud network (VCN) to launch the DB system in. Its related networking resources
(gateways, route tables, security lists, DNS, and so on) must also be configured as necessary for the DB system.
For more information, see Network Setup for DB Systems on page 1360.
• If you plan to back up your DB system to Object Storage or to use the managed patching feature, then Oracle
recommends using a service gateway to enable access to Object Storage. For more information, see Service
Gateway for the VCN on page 1366.
• For a two-node Oracle RAC DB system, ensure that port 22 is open for both ingress and egress on the subnet,
and that the security rules you create are stateful (the default); otherwise, the DB system might fail to provision
successfully.
Default Options for the Initial Database
To simplify creating a DB system in the Console, and when using the API, the following default options are used for
the initial database and for any additional databases that you create. (Several advanced options, such as time zone, can
be set when you use the dbcli command line interface to create databases.)
• Console Enabled: False
• Create Container Database: False for Oracle Database 11g (11.2.0.4) databases. Otherwise, true.
• Create Instance Only (for standby and migration): False
• Database Home ID: Creates a new database home
• Database Language: AMERICAN
• Database Sizing Template: odb2
• Database Storage: Oracle Automatic Storage Management Cluster File System (ACFS) for Oracle Database 11g
(11.2.0.4) databases. Otherwise, Automatic Storage Management (ASM) for all bare metal and multi-node virtual
machine DB systems. Single-node VM systems can optionally be provisioned using Logical Volume Manager for
faster provisioning.
• Database Territory: AMERICA
• Database Unique Name: The user-specified database name and a system-generated suffix, for example,
dbtst_phx1cs.
• PDB Admin Name: pdbuser (Not applicable for Oracle Database 11g (11.2.0.4) databases.)
For a list of the database options that you can set, see To create a DB system on page 1373.

Using a Backup to Create the Initial Database


When creating a new DB system using a backup stored in Object Storage as the source of the initial database, you
have the following options:
• Daily automatic backup. Requires that you have automatic backups enabled and an available backup to use. If
you are creating a database from an automatic backup, you can choose any level 0 weekly backup, or a level 1
incremental backup created after the most recent level 0 backup. For more information on automatic backups, see
Oracle Cloud Infrastructure Managed Backup Features on page 1437.
• On-demand full backup. See To create an on-demand full backup of a database on page 1440 for information
on creating an on-demand backup.
• Standalone backup. For more information, see Standalone Backups on page 1439.
• Last archived redo log backup. Requires that you have automatic backups enabled. This backup combines data
from the most recent daily automatic backup and data from archived redo logs, and represents the most current
backup available. The time of the last archived redo log backup is visible on the database details page in the Last
Backup Time field.
• Point-in-time out of place restore. Specify a timestamp to create a new copy of the database that includes data
up to the specified point in time. The timestamp must be earlier than or equal to the Last Backup Time displayed
on the database details page. Note the following limitations when performing a point-in-time out of place restore:
• The timestamp must be within the recovery window of the database
• The timestamp must be available within the database incarnation of the available automatic backups
• The timestamp cannot fall within two overlapping database incarnations
• The create database operation will fail if the database has undergone structural changes since the specified
timestamp. Structural changes include operations such as creating or dropping a tablespace.
• The create database operation cannot be started if another point-in-time database copy operation is in progress.
Custom IP Addresses for Non-RAC DB Systems
When creating a new non-RAC DB system or cloning an existing VM DB system, you can optionally define the IP
address of the DB system being provisioned. This is useful in development contexts where you create and delete the
same DB system over and over, and you need each new iteration of the DB system to use the same IP address.
Using the Console
For information about using the API and signing requests, see REST APIs on page 4409 and Security Credentials on
page 181. For information about SDKs, see Software Development Kits and Command Line Interface on page 4262.
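If you prefer to script the provisioning rather than use the Console, the LaunchDbSystem operation accepts roughly
the same choices described in the procedure below. The following OCI SDK for Python sketch is illustrative only;
every OCID, name, and the password are placeholders that you must replace with your own values.

import oci

config = oci.config.from_file()
db_client = oci.database.DatabaseClient(config)

details = oci.database.models.LaunchDbSystemDetails(
    compartment_id="ocid1.compartment.oc1..exampleuniqueID",
    availability_domain="Uocm:PHX-AD-1",
    shape="VM.Standard2.2",
    cpu_core_count=2,
    node_count=1,
    database_edition="ENTERPRISE_EDITION",
    display_name="example-dbsystem",
    hostname="dbsys",
    subnet_id="ocid1.subnet.oc1..exampleuniqueID",
    ssh_public_keys=["ssh-rsa AAAA...your-public-key... rsa-key-20160304"],
    initial_data_storage_size_in_gb=256,
    db_home=oci.database.models.CreateDbHomeDetails(
        db_version="19.0.0.0",
        database=oci.database.models.CreateDatabaseDetails(
            db_name="dbtst",
            admin_password="Example##22Password--")))   # placeholder; follow the password criteria in the steps

response = db_client.launch_db_system(details)
print("DB system OCID:", response.data.id)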
To create a DB system
1. Open the navigation menu. Under Oracle Database, click Bare Metal, VM, and Exadata.
2. Click Create DB System.

3. On the Create DB System page, provide the basic information for the DB system:
• Select a compartment: By default, the DB system is created in your current compartment and you can use the
network resources in that compartment.
• Name your DB system: A non-unique, display name for the DB system. An Oracle Cloud Identifier (OCID)
uniquely identifies the DB system. Avoid entering confidential information.
• Select an availability domain: The availability domain in which the DB system resides.
• Select a shape type: The shape type you select sets the default shape and filters the shape options in the next
field.
• Select a shape: The shape determines the type of DB system created and the resources allocated to the system.
To specify a shape other than the default, click Change Shape, and select an available shape from the list.
Bare metal shapes
• BM.DenseIO2.52: Provides a 1-node DB system (one bare metal server), with up to 52 CPU cores, 768
GB memory, and eight 6.4 TB locally attached NVMe drives (51.2 TB total) to the DB system.
• BM.DenseIO1.36: Limited availability. Provides a 1-node DB system (one bare metal server), with up to
36 CPU cores, 512 GB memory, and nine 3.2 TB locally attached NVMe drives (28.8 TB total) to the DB
system.
Note: BM.DenseIO1.36 is available only to monthly universal credit customers existing on or before
November 9th, 2018. This shape is available only in the US West (Phoenix), US East (Ashburn), and
Germany Central (Frankfurt) regions.
Virtual machine shapes
Virtual machine X7 shapes:
• VM.Standard2.1: Provides a 1-node DB system with 1 core.
• VM.Standard2.2: Provides a 1- or 2-node DB system with 2 cores.
• VM.Standard2.4: Provides a 1- or 2-node DB system with 4 cores.
• VM.Standard2.8: Provides a 1- or 2-node DB system with 8 cores.
• VM.Standard2.16: Provides a 1- or 2-node DB system with 16 cores.
• VM.Standard2.24: Provides a 1- or 2-node DB system with 24 cores.
Virtual machine X5 shapes:
• VM.Standard1.1: Provides a 1-node DB system with 1 core.
• VM.Standard1.2: Provides a 1- or 2-node DB system with 2 cores.
• VM.Standard1.4: Provides a 1- or 2-node DB system with 4 cores.
• VM.Standard1.8: Provides a 1- or 2-node DB system with 8 cores.
• VM.Standard1.16: Provides a 1- or 2-node DB system with 16 cores.
Note:

• Availability of X5-based shapes is limited to monthly universal credit customers existing on or before
November 9th, 2018, in the US West (Phoenix), US East (Ashburn), and Germany Central (Frankfurt) regions.
• VM.Standard1.1 and VM.Standard2.1 shapes cannot be used for 2-node RAC clusters.
• Configure the DB system: Specify the following:
• Total node count: The number of nodes in the DB system, which depends on the shape you select. For
virtual machine DB systems, you can specify either one or two nodes, except for VM.Standard2.1 and
VM.Standard1.1, which are single-node DB systems.
• Oracle Database software edition: The database edition supported by the DB system. For bare metal
systems, you can mix supported database releases on the DB system to include older database versions, but
not editions. The database edition cannot be changed and applies to all the databases in this DB system.
Virtual machine systems support only one database.
• CPU core count: Displays only for bare metal DB systems to allow you to specify the number of CPU
cores for the system. (Virtual machine DB system shapes have a fixed number of CPU cores.) The text
below the field indicates the acceptable values for that shape. For a multi-node DB system, the core count
is evenly divided across the nodes.
Note:

After you provision the DB system, you can increase the CPU cores
to accommodate increased demand. On a bare metal DB system, you
scale the CPU cores directly. For virtual machine DB systems, you
change the number of CPU cores by changing the shape.
• Choose Storage Management Software: 1-node virtual machine DB systems only. Select Oracle Grid
Infrastructure to use Oracle Automatic Storage Management (recommended for production workloads).
Select Logical Volume Manager to quickly provision your DB system using Logical Volume Manager
storage management software. Note that the Available storage (GB) value you specify during provisioning
determines the maximum total storage available through scaling. The total storage available for each choice is
detailed in the Storage Scaling Considerations for Virtual Machine Databases Using Fast Provisioning on page
1556 topic.
See Fast Provisioning Option for 1-node Virtual Machine DB Systems for more information about this feature.
• Configure storage: Specify the following:
• Available storage (GB): Virtual machine only. The amount of Block Storage in GB to allocate to the
virtual machine DB system. Available storage can be scaled up or down as needed after provisioning your
DB system.
• Total storage (GB): Virtual machine only. The total Block Storage in GB used by the virtual machine
DB system. The amount of available storage you select determines this value. Oracle charges for the total
storage used.
• Cluster name: (Optional) A unique cluster name for a multi-node DB system. The name must begin with
a letter and contain only letters (a-z and A-Z), numbers (0-9) and hyphens (-). The cluster name can be no
longer than 11 characters and is not case sensitive.
• Data storage percentage: Bare metal only. The percentage (40% or 80%) assigned to DATA storage
(user data and database files). The remaining percentage is assigned to RECO storage (database redo logs,
archive logs, and recovery manager backups).
• Add public SSH keys: The public key portion of each key pair you want to use for SSH access to the DB
system. You can browse or drag and drop .pub files, or paste in individual public keys. To paste multiple keys,
click + Another SSH Key, and supply a single key for each entry.
• Choose a license type: The type of license you want to use for the DB system. Your choice affects metering
for billing.

• License Included means the cost of this Oracle Cloud Infrastructure Database service resource will include
both the Oracle Database software licenses and the service.
• Bring Your Own License (BYOL) means you will use your organization's Oracle Database software
licenses for this Oracle Cloud Infrastructure Database service resource. See Bring Your Own License for
more information.
4. Specify the network information:
• Virtual cloud network: The VCN in which to create the DB system. Click Change Compartment to select a
VCN in a different compartment.
• Client Subnet: The subnet to which the DB system should attach. For 1- and 2-node RAC DB systems:
Do not use a subnet that overlaps with 192.168.16.16/28, which is used by the Oracle Clusterware private
interconnect on the database instance. Specifying an overlapping subnet will cause the private interconnect to
malfunction.
Click Change Compartment to select a subnet in a different compartment.
• Network Security Groups: Optionally, you can specify one or more network security groups (NSGs) for your
DB system. NSGs function as virtual firewalls, allowing you to apply a set of ingress and egress security rules
to your DB system. A maximum of five NSGs can be specified. For more information, see Network Security
Groups on page 2867 and Network Setup for DB Systems on page 1360.
Note that if you choose a subnet with a security list, the security rules for the DB system will be a union of the
rules in the security list and the NSGs.
To use network security groups
a. Check the Configure Network Security Groups check box. Note that you must have a virtual cloud
network selected to be able to assign NSGs to your DB system.
b. Specify the NSG to use with the DB system. You might need to use more than one NSG. If you're not sure,
contact your network administrator.
c. To use additional NSGs, click + Another Network Security Group.
• Hostname prefix: Your choice of host name for the bare metal or virtual machine DB system. The host name
must begin with an alphabetic character, and can contain only alphanumeric characters and hyphens (-). The
maximum number of characters allowed for bare metal and virtual machine DB systems is 16.
Important:

The host name must be unique within the subnet. If it is not unique, the
DB system will fail to provision.
• Host domain name: The domain name for the DB system. If the selected subnet uses the Oracle-provided
Internet and VCN Resolver for DNS name resolution, then this field displays the domain name for the subnet
and it can't be changed. Otherwise, you can provide your choice of a domain name. Hyphens (-) are not
permitted.
• Host and domain URL: Combines the host and domain names to display the fully qualified domain name
(FQDN) for the database. The maximum length is 64 characters.
• Private IP address: Optionally, for non-RAC DB systems, you can define the IP address of the new DB
system. This is useful in development contexts where you create and delete a DB system over and over, and
you need each new iteration of the DB system to use the same IP address. If you specify an IP address that
is currently in use within the subnet, the provisioning operation will fail with an error message regarding the
invalid IP address.
5. Click Show Advanced Options to specify advanced options for the DB system:
• Disk redundancy: For bare metal systems only. The type of redundancy configured for the DB system.
• Normal is 2-way mirroring, recommended for test and development systems.
• High is 3-way mirroring, recommended for production systems.
• Fault domain: The fault domain(s) in which the DB system resides. You can choose which fault domain to
use for your DB system. For two-node Oracle RAC DB systems, you can specify which two fault domains to
use. Oracle recommends that you place each node of a two-node Oracle RAC DB system in a different fault
domain. For more information on fault domains, see About Regions and Availability Domains on page 182.
• Time zone: The default time zone for the DB system is UTC, but you can specify a different time zone. The
time zone options are those supported in both the Java.util.TimeZone class and the Oracle Linux operating
system. For more information, see DB System Time Zone on page 1576.
Tip:

If you want to set a time zone other than UTC or the browser-detected
time zone, and if you do not see the time zone you want, try selecting
"Miscellaneous" in the Region or country list.
• Tags: If you have permissions to create a resource, then you also have permissions to apply free-form tags
to that resource. To apply a defined tag, you must have permissions to use the tag namespace. For more
information about tagging, see Resource Tags on page 213. If you are not sure whether to apply tags, then skip
this option (you can apply tags later) or ask your administrator.
6. After you complete the network configuration and specify any advanced options, click Next.
7. Provide information for the initial database:
• Database name: The name for the database. The database name must begin with an alphabetic character and
can contain a maximum of eight alphanumeric characters. Special characters are not permitted.
• Database image: This controls the version of the initial database created on the DB system. By default, the
latest available Oracle Database version is selected. You can also choose an older Oracle Database version,
or choose a customized database software image that you have previously created in your current region with
your choice of updates and one-off (interim) patches. See Oracle Database Software Images on page 1568 for
information on creating and working with database software images.
To use an older Oracle-published software image:
a. Click Change Database Image.
b. In the Select a Database Software Image dialog, select Oracle-published Database Software Images.
c. In the Oracle Database Version list, check the version you wish to use to provision the initial database
in your DB system. If you are launching a DB system with a virtual machine shape, you have the option of
selecting an older database version.
Display all available versions: Use this switch to include older database updates in the list of database
version choices. When the switch is activated, you will see all available PSUs and RUs. The most recent
release for each major version is indicated with "(latest)". See Availability of Older Database Versions for
Virtual Machine DB Systems on page 1355 for more information.
Note:

Preview software versions: Versions flagged as "Preview" are for
testing and subject to some restrictions. See Oracle Database Preview
Version Availability on page 1354 for more information.
d. Click Select.
To use a user-created database software image:
a. Click Change Database Image.
b. In the Select a Database Software Image dialog, select Custom Database Software Images.
c. Select the compartment that contains your database software image.
d. Select the Oracle Database version that your database software image uses.
e. A list of database software images is displayed for your chosen Oracle Database version. Check the box
beside the display name of the image you want to use.
After the DB system is active, you can create additional databases for bare metal systems. You can mix
database versions on the DB system, but not editions. Virtual machine DB systems are limited to a single
database.
• PDB name: Not applicable to Oracle Database 11g (11.2.0.4). The name of the pluggable database. The PDB
name must begin with an alphabetic character, and can contain a maximum of eight alphanumeric characters.
The only special character permitted is the underscore (_).
• Create administrator credentials: A database administrator SYS user will be created with the password you
supply.
• Username: SYS
• Password: Supply the password for this user. The password must meet the following criteria:
A strong password for SYS, SYSTEM, TDE wallet, and PDB Admin. The password must be 9 to 30
characters and contain at least two uppercase, two lowercase, two numeric, and two special characters. The
special characters must be _, #, or -. The password must not contain the username (SYS, SYSTEM, and so
on) or the word "oracle" either in forward or reversed order and regardless of casing.
• Confirm password: Re-enter the SYS password you specified.
• Use the administrator password for the TDE wallet: When this option is checked, the password entered
for the SYS user is also used for the TDE wallet. To set the TDE wallet password manually, uncheck this
option and enter the TDE wallet password.
• Select workload type: Choose the workload type that best suits your application:
• Online Transactional Processing (OLTP) configures the database for a transactional workload, with a
bias towards high volumes of random data access.
• Decision Support System (DSS) configures the database for a decision support or data warehouse
workload, with a bias towards large data scanning operations.
• Configure database backups: Specify the settings for backing up the database to Object Storage:
• Enable automatic backup: Check the check box to enable automatic incremental backups for this
database. If you are creating a database in a security zone compartment, you must enable automatic
backups.
• Backup retention period: If you enable automatic backups, then you can choose one of the following
preset retention periods: 7 days, 15 days, 30 days, 45 days, or 60 days. The default selection is 30 days.
• Backup Scheduling: If you enable automatic backups, then you can choose a two-hour scheduling
window to control when backup operations begin. If you do not specify a window, then the six-hour default
window of 00:00 to 06:00 (in the time zone of the DB system's region) is used for your database. See
Backup Scheduling for more information.
• Click Show Advanced Options to specify advanced options for the initial database:
• Character set: The character set for the database. The default is AL32UTF8.
• National character set: The national character set for the database. The default is AL16UTF16.
8. Click Create DB System. The DB system appears in the list with a status of Provisioning. The DB system's icon
changes from yellow to green (or red to indicate errors).
After the DB system's icon turns green, with a status of Available, you can click the highlighted DB system
name to display details about the DB system. Note the IP addresses. You'll need the private or public IP address,
depending on network configuration, to connect to the DB system.
To create a DB system from a backup
You can create a new DB system from a backup. See Using a Backup to Create the Initial Database on page 1373 in
this topic for details on backup source options.
Before you begin, note the following:
• When you create a DB system from a backup, the availability domain is the same as where the backup is hosted.
• The shape you specify must be the same type as the database from which the backup was taken. For example, if
you are using a backup of a single-node database, then the DB system you select as your target must also be a
single-node DB system.
• The Oracle database software version you specify must be an equal or greater version than that of the backed up
database.
• If you specify a virtual machine DB system shape, then the Available Storage Size will default to the data size of
the backup, rounded up to the closest storage size option. However, you can specify a larger storage size.
• If you are creating a new DB system from an automatic backup, you may choose any level 0 weekly backup, or
a level 1 incremental backup created after the most recent level 0 backup. For more information on automatic
backups, see Oracle Cloud Infrastructure Managed Backup Features on page 1437.
• If the backup being used to create a DB system is in a security zone compartment, the DB system cannot be
created in a compartment that is not in a security zone. See the Security Zone Policies topic for a full list of
policies that affect Database service resources.
1. Open the navigation menu. Under Oracle Database, click Bare Metal, VM, and Exadata.
2. Choose your Compartment.


A list of DB systems is displayed.
3. Navigate to the backup or standalone backup you want to use to create the new DB system:
Tip:

If you are creating a database from an automatic backup, you may choose
any level 0 weekly backup, or a level 1 incremental backup created
after the most recent level 0 backup. For more information on automatic
backups, see Oracle Cloud Infrastructure Managed Backup Features on
page 1437.
To select a daily automatic backup or on-demand full backup as the source
a. Click the DB system name that contains the specific database to display the DB System Details page.
b. From the Databases list, click the database name associated with the backup you want to use.
c. Find your desired backup in the Backups list. If you don't see the backups list on the database details page,
click Backups in the Resources menu.
d. Click the Actions icon (three dots) for the backup, and then click Create Database.
To select the last archived redo log automatic backup as the source
a. Find the DB system where the database is located, and click the system name to display details about it.
b. Find the database associated with the backup you wish to use, and click its name to display details about it.
c. On the database details page, click Create Database from Last Backup.
d. In the Create Database from Backup dialog, select Create database from last backup.
To specify a timestamp for a point-in-time copy of the source
a. Click the DB system name that contains the specific database to display the DB System Details page.
b. From the Databases list, click the database name associated with the backup data you want to use as the
source for the initial database in your new DB system.
c. On the database details page, click Create Database from Last Backup.
d. In the Create Database from Backup dialog, select Create database from specified timestamp.
To select a standalone backup as the source
a. Click Standalone Backups under Bare Metal, VM, and Exadata.
b. In the list of standalone backups, find the backup you want to use to create the database.
c. Click the Actions icon (three dots) for the backup you are interested in, and then click Create Database.
4. In the Create Database from Backup dialog, enter the following:


DB System Information
• DB System Information: Select Launch New DB System.
• Display Name: A friendly, display name for the DB system. The name doesn't need to be unique. An Oracle
Cloud Identifier (OCID) will uniquely identify the DB system. Avoid entering confidential information.
• Shape: The shape to use to launch the DB system. The shape determines the type of DB system and the
resources allocated to the system.
The selected shape must support the same number of nodes as the DB system from which the backup was
created.
Bare metal shapes
• BM.DenseIO2.52: Provides a 1-node DB system (one bare metal server), with up to 52 CPU cores, 768
GB memory, and eight 6.4 TB locally attached NVMe drives (51.2 TB total) to the DB system.
• BM.DenseIO1.36: Limited availability. Provides a 1-node DB system (one bare metal server), with up to
36 CPU cores, 512 GB memory, and nine 3.2 TB locally attached NVMe drives (28.8 TB total) to the DB
system.
Note: BM.DenseIO1.36 is available only to monthly universal credit customers existing on or before
November 9th, 2018. This shape is available only in the US West (Phoenix), US East (Ashburn), and
Germany Central (Frankfurt) regions.
Virtual machine shapes
Virtual machine X7 shapes:
• VM.Standard2.1: Provides a 1-node DB system with 1 core.
• VM.Standard2.2: Provides a 1- or 2-node DB system with 2 cores.
• VM.Standard2.4: Provides a 1- or 2-node DB system with 4 cores.
• VM.Standard2.8: Provides a 1- or 2-node DB system with 8 cores.
• VM.Standard2.16: Provides a 1- or 2-node DB system with 16 cores.
• VM.Standard2.24: Provides a 1- or 2-node DB system with 24 cores.
Virtual machine X5 shapes:
• VM.Standard1.1: Provides a 1-node DB system with 1 core.
• VM.Standard1.2: Provides a 1- or 2-node DB system with 2 cores.
• VM.Standard1.4: Provides a 1- or 2-node DB system with 4 cores.
• VM.Standard1.8: Provides a 1- or 2-node DB system with 8 cores.
• VM.Standard1.16: Provides a 1- or 2-node DB system with 16 cores.
Note:

• Availability of X5-based shapes is limited to monthly universal credit
customers existing on or before November 9th, 2018, in the US West
(Phoenix), US East (Ashburn), and Germany Central (Frankfurt)
regions.
• VM.Standard1.1 and VM.Standard2.1 shapes cannot be used for 2-
node RAC clusters.
• Total Node Count: Virtual machine DB systems only. The number of nodes in the DB system. The number
depends on the shape you select. You can specify 1 or 2 nodes for virtual machine DB systems, except for
VM.Standard2.1 and VM.Standard1.1, which are single-node DB systems.
• Oracle Database Software Edition: The database edition supported by the DB system. For bare metal
systems, you can mix supported database releases on the DB system to include older database versions, but not
editions. The database edition cannot be changed and applies to all the databases in this DB system. Virtual
machine systems support only one database.
• Cluster Name: A unique cluster name for a multi-node DB system. The name must begin with a letter and
contain only letters (a-z and A-Z), numbers (0-9) and hyphens (-). The cluster name can be no longer than 11
characters and is not case sensitive.
• License Type: The type of license you want to use for the DB system. Your choice affects metering for
billing.
• License included means the cost of this Oracle Cloud Infrastructure Database service resource will include
both the Oracle Database software licenses and the service.
• Bring Your Own License (BYOL) means you will use your organization's Oracle Database software
licenses for this Oracle Cloud Infrastructure Database service resource. See Bring Your Own License for
more information.
• SSH Public Key: The public key portion of the key pair you want to use for SSH access to the DB system. To
provide multiple keys, paste each key on a new line. Make sure each key is on a single, continuous line. The
length of the combined keys cannot exceed 10,000 characters.
• Data Storage Percentage: For bare metal DB systems only. The percentage (40% or 80%) assigned to DATA
storage (user data and database files). The remaining percentage is assigned to RECO storage (database redo
logs, archive logs, and recovery manager backups).
• Available Storage Size: For virtual machine DB systems only. The amount of block storage to allocate to the
virtual machine DB system. Available storage can be scaled up or down as needed after provisioning your DB
system.
If you are creating a DB system from a backup, the minimum value for available storage is determined by the
size of the backup.
• Advanced Options:
• Disk redundancy: For bare metal systems only. The type of redundancy configured for the DB system.
• Normal is 2-way mirroring, recommended for test and development systems.
• High is 3-way mirroring, recommended for production systems.
• Fault Domain: The fault domain(s) in which the DB system resides. You can choose which fault domain
to use for your DB system. For 2-node RAC DB systems, you can specify which two fault domains are
to be used. Oracle recommends that you place each node of a 2-node RAC DB system in a different fault
domain. For more information on fault domains, see Fault Domains on page 184.
Network Information
• Virtual Cloud Network: The VCN in which to launch the DB system.
• Client Subnet: The subnet to which the bare metal or virtual machine DB system should attach. For 1- and 2-
node RAC DB systems: Do not use a subnet that overlaps with 192.168.16.16/28, which is used by the Oracle
Clusterware private interconnect on the database instance. Specifying an overlapping subnet will cause the
private interconnect to malfunction.
• Network Security Groups: Optionally, you can specify one or more network security groups (NSGs) for your
DB system. NSGs function as virtual firewalls, allowing you to apply a set of ingress and egress security rules
to your DB system. A maximum of five NSGs can be specified. For more information, see Network Security
Groups on page 2867 and Network Setup for DB Systems on page 1360.
To use network security groups
a. Check the Configure Network Security Groups check box. Note that you must have a virtual cloud
network selected to be able to assign NSGs to your DB system.
b. Specify the NSG to use with the DB system. You might need to use more than one NSG. If you're not sure,
contact your network administrator.
c. To use additional NSGs, click + Another Network Security Group.
• Hostname Prefix: Your choice of host name for the bare metal or virtual machine DB system. The host name
must begin with an alphabetic character, and can contain only alphanumeric characters and hyphens (-). The
maximum number of characters allowed for bare metal and virtual machine DB systems is 16.
Important:

The host name must be unique within the subnet. If it is not unique, the
DB system will fail to provision.
• Private IP address: Optionally, for non-RAC DB systems, you can define the IP address of the new DB
system. This is useful in development contexts where you create and delete a DB system over and over, and
you need each new iteration of the DB system to use the same IP address. If you specify an IP address that
is currently in use within the subnet, the provisioning operation will fail with an error message regarding the
invalid IP address.
Database Information
• Database Name: The name for the database. The database name must begin with an alphabetic character and
can contain a maximum of eight alphanumeric characters. Special characters are not permitted.
• Database Admin Password:
A strong password for SYS, SYSTEM, TDE wallet, and PDB Admin. The password must be 9 to 30
characters and contain at least two uppercase, two lowercase, two numeric, and two special characters. The
special characters must be _, #, or -. The password must not contain the username (SYS, SYSTEM, and so on)
or the word "oracle" either in forward or reversed order and regardless of casing.
• Confirm Database Admin Password: Re-enter the Database admin password you specified.
• Password for Transparent Data Encryption (TDE) Wallet or RMAN Encryption:
Enter either the TDE wallet password or the RMAN encryption password for the backup, whichever is
applicable. The TDE wallet password is the SYS password provided when the database was created by using
the Oracle Cloud Infrastructure Console, API, or CLI. The RMAN encryption password is typically required
instead if the password was subsequently changed manually.
5. Click Create Database.
Using the API
For information about using the API and signing requests, see REST APIs on page 4409 and Security Credentials on
page 181. For information about SDKs, see Software Development Kits and Command Line Interface on page 4262.
Use these API operations to create DB system components.
DB systems:
• ListDbSystems
• GetDbSystem
• LaunchDbSystem
Database homes:
• ListDbHomes
• GetDbHome
• CreateDbHome
• DeleteDbHome
Databases:
• ListDatabases
• GetDatabase
Shapes and database versions:
• ListDbSystemShapes
• ListDbVersions
For the complete list of APIs for the Database service, see Database Service API.
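If you use the OCI CLI rather than calling the REST API directly, the LaunchDbSystem operation corresponds to the oci db system launch command. The following is only a hedged sketch: every value is a placeholder, and the exact parameter names can vary by CLI version, so confirm them with oci db system launch --help before running the command.

# Sketch only: values are placeholders; verify parameter names against your
# CLI version (oci db system launch --help) before use.
$ oci db system launch \
    --compartment-id <compartment_OCID> \
    --availability-domain <availability_domain> \
    --subnet-id <subnet_OCID> \
    --shape VM.Standard2.2 \
    --cpu-core-count 2 \
    --database-edition ENTERPRISE_EDITION \
    --db-name <database_name> \
    --db-version <database_version> \
    --admin-password <sys_password> \
    --hostname <hostname_prefix> \
    --ssh-authorized-keys-file ~/.ssh/id_rsa.pub \
    --display-name <display_name>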
Setting up DNS for a DB System
DNS lets you use host names instead of IP addresses to communicate with a DB system. You can use the Internet and
VCN Resolver (the DNS capability built into the VCN) as described in DNS in Your Virtual Cloud Network on page
2936.
Alternatively, you can use your choice of DNS server. You associate the host name and domain name to the public or
private IP address of the DB system. You can find the host and domain names and IP addresses for the DB system in
the Oracle Cloud Infrastructure Console on the Database page.
To associate the host name to the DB system's public or private IP address, contact your DNS administrator and
request a custom DNS record for the DB system’s IP address. For example, if your domain is example.com and you
want to use clouddb1 as the host name, you would request a DNS record that associates clouddb1.example.com to
your DB system's IP address.
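For example, the record your DNS administrator adds might look like the following zone file entry. This is a sketch only; 203.0.113.10 is a documentation placeholder, not a real DB system address.

; Hypothetical A record mapping the DB system host name to its IP address
clouddb1.example.com.    IN    A    203.0.113.10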
If you provide the public IP address to your DNS administrator as described above, you should also associate a
custom domain name to the DB system's public IP address:
1. Register your domain name through a third-party domain registration vendor, such as register.com.
2. Resolve your domain name to the DB system's public IP address, using the third-party domain registration vendor
console. For more information, refer to the third-party domain registration documentation.

Managing Bare Metal and Virtual Machine DB Systems


This topic explains how to perform a variety of management tasks for a bare metal or virtual machine
database system. Tasks include:
• Starting, stopping, rebooting, and terminating a DB system
• Scaling the CPU count and storage
• Changing the shape of a virtual machine DB system
• Managing network security groups (NSGs) for your system
• Managing licenses for your DB system
• Checking the system status
• Moving a system to another compartment
• Creating a serial console connection to your DB system nodes
• Managing tags for your system
• Viewing work requests related to your system
Required IAM Policy
To use Oracle Cloud Infrastructure, you must be granted security access in a policy by an administrator. This access
is required whether you're using the Console or the REST API with an SDK, CLI, or other tool. If you get a message
that you don’t have permission or are unauthorized, verify with your administrator what type of access you have and
which compartment you should work in.
For administrators: The policy in Let database admins manage Oracle Cloud database systems on page 2158 lets the
specified group do everything with databases and related Database resources.
If you're new to policies, see Getting Started with Policies on page 2143 and Common Policies on page 2150. If
you want to dig deeper into writing policies for databases, see Details for the Database Service on page 2251.
Using the Console


To check the status of a DB system
1. Open the navigation menu. Under Oracle Database, click Bare Metal, VM, and Exadata.
2. Choose your Compartment.
A list of database systems is displayed.
3. In the list of DB systems, find the system you're interested in and check its icon. The color of the icon and the text
next to it indicates the status of the system.
• Provisioning: Yellow icon. Resources are being reserved for the DB system, the system is booting, and the
initial database is being created. Provisioning can take several minutes. The system is not ready to use yet.
• Available: Green icon. The DB system was successfully provisioned. A few minutes after the system enters
this state, you can SSH to it and begin using it.
• Terminating: Gray icon. The DB system is being deleted by the terminate action in the Console or API.
• Terminated: Gray icon. The DB system has been deleted and is no longer available.
• Failed: Red icon. An error condition prevented the provisioning or continued operation of the DB system.
To view the status of a database node, under Resources, click Nodes to see the list of nodes. In addition to the
states listed for a DB system, a node's status can be one of the following:
• Starting: Yellow icon. The database node is being powered on by the start or reboot action in the Console or
API.
• Stopping: Yellow icon. The database node is being powered off by the stop or reboot action in the Console or
API.
• Stopped: Yellow icon. The database node was powered off by the stop action in the Console or API.
You can also check the status of database systems and database nodes by using the ListDbSystems or ListDbNodes
API operations, which return the lifecycleState attribute.
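From the command line, the ListDbSystems operation corresponds to oci db system list. The following is a minimal sketch, assuming the CLI's usual kebab-case response keys; the compartment OCID is a placeholder.

# Sketch: list the DB systems in a compartment and show their lifecycle states.
$ oci db system list \
    --compartment-id <compartment_OCID> \
    --query 'data[*].{name:"display-name", state:"lifecycle-state"}' \
    --output table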
To start, stop, or reboot a database system
DB system nodes are stopped, started, or rebooted individually. For multi-node DB systems, you may need to act on
only one node (as in the case of proactively rebooting a virtual machine node with scheduled maintenance).
1. Open the navigation menu. Under Oracle Database, click Bare Metal, VM, and Exadata.
2. Choose your Compartment.
A list of database systems is displayed.
3. In the list of database systems, find the DB system you want to stop or start, and then click its name to display
details about it.
4. In the list of nodes, click the Actions icon (three dots) for a node and then click one of the following actions:
• Start: Restarts a stopped node. After the node is restarted, the Stop action is enabled.
• Stop: Shuts down the node. After the node is powered off, the Start action is enabled.
• Reboot: Shuts down the node, and then restarts it.
Note:

• Resource billing differs between bare metal and virtual machine DB
systems as follows:
• Bare metal DB systems - The Stop state has no effect on the
resources you consume. Billing continues for nodes that you stop,
and related resources continue to apply against any relevant quotas.
You must Terminate a DB system to remove its resources from
billing and quotas.
• Virtual machine DB systems - Stopping a node stops billing for all
OCPUs associated with that node. Billing resumes if you restart the
node.
• After you restart or reboot a node, the floating IP address might take
several minutes to be updated and display in the Console.
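You can perform the same node actions programmatically with the DbNodeAction API operation. With the OCI CLI, the following sketch stops and then restarts a single node; the node OCID shown is a placeholder.

# Sketch: stop a database node, then start it again (the OCID is a placeholder).
$ oci db node stop --db-node-id <db_node_OCID>
$ oci db node start --db-node-id <db_node_OCID>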
To scale the CPU cores for a bare metal DB system
If a bare metal DB system requires more compute node processing power, you can scale up (increase) the number of
enabled CPU cores in the system without impacting the availability of that system.
Note:

You cannot change the number of CPU cores for a virtual machine DB
system in the same way. Instead, you must change the shape to one with a
different number of OCPUs. See To change the shape of a virtual machine
DB system on page 1385 to learn how.
1. Open the navigation menu. Under Oracle Database, click Bare Metal, VM, and Exadata.
2. Choose your Compartment.
A list of database systems is displayed.
3. In the list of DB systems, find the system you want to scale and click its highlighted name.
The system details are displayed.
4. Click Scale CPU Cores, and then change the number in the CPU Core Count field. The text below the field
indicates the acceptable values, based on the shape used when the DB system was launched.
5. Click Update.
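Scaling CPU cores is also available through the UpdateDbSystem operation. A minimal CLI sketch follows; the OCID and the target core count are placeholders for your own values.

# Sketch: scale a bare metal DB system to 8 enabled CPU cores.
$ oci db system update --db-system-id <db_system_OCID> --cpu-core-count 8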
To change the shape of a virtual machine DB system
After you provision a virtual machine DB system, you can change the shape at any time to adapt to changes in
performance needs. For example, you might require a system with a higher number of OCPUs, or you might want to
reduce costs by reducing the number of OCPUs. See Virtual Machine DB Systems on page 1356 for resource details
for each shape in a series.
Note:

The shape change operation takes place in a rolling fashion for multi-node
DB systems, allowing you to change the shape with no database downtime.
Changing the shape does not impact the amount of storage available to the DB system. However, the new shape can
have different memory and network bandwidth characteristics, and you might need to reapply any customizations to
these aspects after the change.
Prerequisites:
• DB system and database are in the Available state
• DB system is registered with the Cluster Ready Services (CRS) Grid Infrastructure stack. By default, virtual
machine DB systems use CRS.
• Database can be successfully restarted
• Database is configured to use SPFILE (server parameter file), not PFILE. By default, databases in virtual machine
DB systems use the SPFILE configuration.
• The SGA_TARGET parameter for Automatic Shared Memory Management (ASMM) has a non-zero value. By
default, virtual machine DB systems use this ASMM configuration.
1. Open the navigation menu. Under Oracle Database, click Bare Metal, VM, and Exadata.
2. Choose your Compartment.
A list of database systems is displayed.
3. In the list of DB systems, find the system you want to scale and click its highlighted name.
The system details are displayed.
4. Click Change Shape.
5. Select the new shape from the list of compatible and available shapes, and click Change Shape. Compatible
shapes are those within the same series. For example, if the current shape is VM.Standard2.2, you can
select another X7 shape that has not reached its usage limit in the selected availability domain.
Note:

If the current shape supports Oracle RAC, then you can only change that
shape to another shape that also supports Oracle RAC. For example, you
cannot change the shape from VM.Standard2.2 to VM.Standard2.1.
6. Review the information on the confirmation dialog, and proceed as applicable.
Tip:

If your shape change operation is not successful, see the troubleshooting tips
in Shape Change Failures for Virtual Machine DB Systems on page 1665.
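Changing the shape is also exposed through the UpdateDbSystem operation. The following CLI sketch is an assumption based on that operation; the --shape parameter name and its availability can vary by CLI version, so verify with oci db system update --help.

# Sketch: move a virtual machine DB system to a larger shape in the same series.
$ oci db system update --db-system-id <db_system_OCID> --shape VM.Standard2.4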
To scale up the storage for a virtual machine DB system
If a virtual machine DB system requires more block storage, you can increase the storage at any time without
impacting the system.
Note:

This procedure does not apply to bare metal DB systems.


1. Open the navigation menu. Under Oracle Database, click Bare Metal, VM, and Exadata.
2. Choose your Compartment.
A list of database systems is displayed.
3. In the list of DB systems, find the system you want to scale up and click its highlighted name.
The system details are displayed.
4. Click Scale Storage Up, and then select the new available storage size from the drop-down list.
The new total storage size displays in the total storage field. Oracle charges for the total storage used.
5. Click Update.
To move a DB system to another compartment
Note:

• To move resources between compartments, resource users must have
sufficient access permissions on the compartment that the resource
is being moved to, as well as the current compartment. For more
information about permissions for Database resources, see Details for the
Database Service on page 2251.
• If your DB system is in a security zone, the destination compartment must
also be in a security zone. See the Security Zone Policies topic for a full
list of policies that affect Database service resources.
1. Open the navigation menu. Under Oracle Database, click Bare Metal, VM, and Exadata.
2. Choose your Compartment.
A list of database systems is displayed.
3. In the list of DB systems, find the system you want to move and click its highlighted name.
The system details are displayed.
4. Click Move Resource.
5. Select the new compartment.
6. Click Move Resource.
For information about dependent resources for Database resources, see Moving Database Resources to a Different
Compartment on page 1141.
To terminate a DB system
Terminating a DB system permanently deletes it and any databases running on it.
Note:

The database data is local to the DB system and will be lost when the system
is terminated. Oracle recommends that you back up any data in the DB
system prior to terminating it.
Terminating a DB system removes all automatic incremental backups of all
databases in the DB system from Oracle Cloud Infrastructure Object Storage.
Full backups remain in Object Storage as standalone backups which you
can use to create a new database. See Creating Databases on page 1415 for
information on creating a new database from a backup.
Important: If your DB system has Data Guard enabled, you must terminate
the standby DB system before terminating the primary DB system. If you try
to terminate a primary DB system that has a standby, the terminate operation
will not complete.
1. Open the navigation menu. Under Oracle Database, click Bare Metal, VM, and Exadata.
2. Choose your Compartment.
A list of database systems is displayed.
3. For the DB system you want to terminate, click the Actions icon (three dots) and then click Terminate.
4. Confirm when prompted.
The database system's icon indicates Terminating.
At this point, you cannot connect to the system and any open connections will be terminated.
To edit the network security groups (NSGs) for your DB system
Your DB system can use up to five network security groups (NSGs). Note that if you choose a subnet with a security
list, the security rules for the DB system will be a union of the rules in the security list and the NSGs. For more
information, see Network Security Groups on page 2867 and Network Setup for DB Systems on page 1360.
1. Open the navigation menu. Under Oracle Database, click Bare Metal, VM, and Exadata.
2. Choose your Compartment.
A list of database systems is displayed.
3. In the list of DB systems, find the system you want to manage and click its highlighted name.
The system details are displayed.
4. In the Network details, click the Edit link to the right of the Network Security Groups field.
5. In the Edit Network Security Groups dialog, click + Another Network Security Group to add an NSG to the
DB system.
To change an assigned NSG, click the drop-down menu displaying the NSG name, then select a different NSG.
To remove an NSG from your DB system, click the X icon to the right of the displayed NSG name.
6. Click Save.
To manage your BYOL database licenses on your bare metal DB system
If you want to control the number of database licenses that you run at any given time on a bare metal system, you can
scale up or down the number of OCPUs on the instance. These additional licenses are metered separately.
1. Open the navigation menu. Under Oracle Database, click Bare Metal, VM, and Exadata.
2. Choose your Compartment.
A list of database systems is displayed.
3. In the list of DB systems, find the system you want to scale and click its highlighted name.
The system details are displayed.
4. Click Scale CPU Cores, and then change the number.
To change the license type of a bare metal or virtual machine DB system
1. Open the navigation menu. Under Oracle Database, click Bare Metal, VM, and Exadata.
2. Choose your Compartment.
3. In the list of DB systems, find the DB system you want to administer and click its highlighted name.
4. On the DB system details page, click Update License Type.
The dialog displays the options with your current license type selected.
5. Select the new license type.
6. Click Save.
See Known Issue.
To manage tags for your DB systems and database resources
1. Open the navigation menu. Under Oracle Database, click Bare Metal, VM, and Exadata.
2. Choose your Compartment.
3. Find the DB system or database resource you're interested in, and click the name.
4. Click the Tags tab to view or edit the existing tags. Or click More Actions and then Apply Tags to add new ones.
For more information, see Resource Tags on page 213.
To view a work request for your DB systems and database resources
1. Open the navigation menu. Under Oracle Database, click Bare Metal, VM, and Exadata.
2. Choose your Compartment.
A list of database systems is displayed.
3. Find the DB system or database resource you're interested in, and click the name.
4. In the Resources section, click Work Requests. The status of all work requests appears on the page.
5. To see the log messages, error messages, and resources that are associated with a specific work request, click the
operation name. Then, select an option in the More information section.
For associated resources, you can click the Actions icon (three dots) next to a resource to copy the resource's
OCID.
For more information, see Work Requests on page 262.
To create a serial console connection to your database system
1. Open the navigation menu. Under Oracle Database, click Bare Metal, VM, and Exadata.
2. Choose your Compartment.
A list of database systems is displayed.
3. Find the DB system or database resource you're interested in, and click the name.
4. In the Resources section, click Console Connections.
5. Click Create Console Connection. Note that if all nodes currently have existing console connections, this button
will be disabled.
6. In the Create Console Connection dialog, specify the following:

• The DB system node. For multi-node DB systems, select which node or nodes you wish to create a connection
for. No node selector will display if the DB system has only one node, or if there is only one node in a multi-
node system that currently lacks a connection.
• The SSH key. You can browse or drag and drop .pub files, or paste in individual public keys.
7. Click Create Console Connection.
To delete a serial console connection to your database system


1. Open the navigation menu. Under Oracle Database, click Bare Metal, VM, and Exadata.
2. Choose your Compartment.
A list of database systems is displayed.
3. Find the DB system or database resource you're interested in, and click the name.
4. In the Resources section, click Console Connections. Your current console connections are displayed.
5. To delete a connection, click the Actions icon (three dots) in the row listing the connection, then click Delete.
Using the API
For information about using the API and signing requests, see REST APIs on page 4409 and Security Credentials on
page 181. For information about SDKs, see Software Development Kits and Command Line Interface on page 4262.
Use these API operations to manage DB system components.
DB systems:
• ListDbSystems
• GetDbSystem
• UpdateDbSystem
• ChangeDbSystemCompartment
• TerminateDbSystem
Database homes:
• ListDbHomes
• GetDbHome
• DeleteDbHome
Databases:
• ListDatabases
• GetDatabase
Nodes:
• DbNodeAction: Use this operation to power cycle a node in the database system.
• ListDbNodes
• GetDbNode
For the complete list of APIs for the Database service, see Database Service API.
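As an example of invoking one of these operations from the command line, TerminateDbSystem corresponds to the following CLI sketch. The OCID is a placeholder, and the CLI normally asks for confirmation because terminating permanently deletes the DB system and the databases running on it.

# Sketch: permanently delete a DB system and its local databases.
$ oci db system terminate --db-system-id <db_system_OCID>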

Cloning a Virtual Machine DB System


This topic explains how to clone a virtual machine DB system that uses Logical Volume Manager storage
management software. See Fast Provisioning Option for Single-Node Virtual Machine DB Systems on page 1356
for more information on these virtual machine DB systems.
Cloning creates a copy of a source DB system as it exists at the time of the cloning operation, including the storage
configuration software and database volumes. When creating a clone, you can specify a new SSH key and ADMIN
password.
Note:

• Cloning is not supported for bare metal DB systems or virtual machine
systems using Oracle Automatic Storage Management.
• You can't clone a virtual machine DB system in a security zone to create
a virtual machine DB system that isn't in a security zone. See the Security
Zone Policies topic for a full list of policies that affect Database service
resources.
Required IAM Policy


To use Oracle Cloud Infrastructure, you must be granted security access in a policy by an administrator. This access
is required whether you're using the Console or the REST API with an SDK, CLI, or other tool. If you get a message
that you don’t have permission or are unauthorized, verify with your administrator what type of access you have and
which compartment you should work in.
For administrators: The policy in Let database admins manage Oracle Cloud database systems on page 2158 lets the
specified group do everything with databases and related Database resources.
If you're new to policies, see Getting Started with Policies on page 2143 and Common Policies on page 2150. For
more information about writing policies for databases, see Details for the Database Service on page 2251.
Using the Console to Clone a Virtual Machine DB System
1. Open the navigation menu. Under Oracle Database, click Bare Metal, VM, and Exadata.
2. Choose the compartment where the source DB system is located.
3. In the list of DB systems, find the virtual machine DB system you want to clone and click its highlighted name.
Note that only systems using Logical Volume Manager storage management software can be cloned.
4. On the DB System Details page of your source DB system, click Clone. This opens the Clone DB System dialog.
5. Select a compartment: By default, the DB system is created in your current compartment and you can use the
network resources in that compartment.
6. Display name: A non-unique display name for the DB system. An Oracle Cloud Identifier (OCID) uniquely
identifies the DB system. Avoid entering confidential information.
7. Add public SSH keys: The public key portion of each key pair you want to use for SSH access to the DB system.
You can browse or drag and drop .pub files, or paste in individual public keys. To paste multiple keys, click +
Another SSH Key, and supply a single key for each entry.
The clone will use the SSH keys specified during the cloning operation, and the source DB system will continue to
use the SSH keys that were in place prior to the cloning operation.
8. Choose a license type: The type of license you want to use for the DB system. Your choice affects metering for
billing.
• License Included means the cost of this Oracle Cloud Infrastructure Database service resource will include
both the Oracle Database software licenses and the service.
• Bring Your Own License (BYOL) means you will use your organization's Oracle Database software licenses
for this Oracle Cloud Infrastructure Database service resource. See Bring Your Own License for more
information.
Note that this license selection only applies to the clone, and does not affect the source DB system.
9. Specify the network information:
• Virtual cloud network: The VCN in which to create the DB system. Click Change Compartment to select a
VCN in a different compartment. Note that the clone can use a different VCN and subnet from the source DB
system.
• Client Subnet: The subnet to which the DB system should attach. For 1- and 2-node RAC DB systems:
Do not use a subnet that overlaps with 192.168.16.16/28, which is used by the Oracle Clusterware private
interconnect on the database instance. Specifying an overlapping subnet will cause the private interconnect to
malfunction.
Click Change Compartment to select a subnet in a different compartment.
• Network Security Groups: Optionally, you can specify one or more network security groups (NSGs) for your
DB system. NSGs function as virtual firewalls, allowing you to apply a set of ingress and egress security rules
to your DB system. A maximum of five NSGs can be specified. For more information, see Network Security
Groups on page 2867 and Network Setup for DB Systems on page 1360.
Note that if you choose a subnet with a security list, the security rules for the DB system will be a union of the
rules in the security list and the NSGs.
• Hostname prefix: Your choice of host name for the bare metal or virtual machine DB system. The host name
must begin with an alphabetic character, and can contain only alphanumeric characters and hyphens (-). The
maximum number of characters allowed for bare metal and virtual machine DB systems is 16.
Important:

The host name must be unique within the subnet. If it is not unique, the
DB system will fail to provision. If the clone is created in a different
subnet from the source, the same host name can be used for both the
clone and the source DB system.
• Host domain name: The domain name for the DB system. If the selected subnet uses the Oracle-provided
Internet and VCN Resolver for DNS name resolution, then this field displays the domain name for the subnet
and it can't be changed. Otherwise, you can provide your choice of a domain name. Hyphens (-) are not
permitted.
• Host and domain URL: Combines the host and domain names to display the fully qualified domain name
(FQDN) for the database. The maximum length is 64 characters.
• Private IP address: Optionally, you can define the IP address of the clone. This is useful in development
contexts where you create and delete clones of the same source DB system over and over, and you need each
new iteration of the clone to use the same IP address. If you specify an IP address that is currently in use
within the subnet, the cloning operation will fail with an error message regarding the invalid IP address.
10. Fault domain: The fault domain in which the DB system resides. You can choose which fault domain to use for
your DB system. For more information on fault domains, see About Regions and Availability Domains on page
182.
11. Provide information for the initial database of the clone:
• Database name: The name for the database. The database name must begin with an alphabetic character and
can contain a maximum of eight alphanumeric characters. Special characters are not permitted. You can use
the same database name that is used in the source DB system.
• Password: A strong password for the SYS user. The password must be 9 to 30 characters and contain at least
two uppercase, two lowercase, two numeric, and two special characters. The special characters must be _, #,
or -. The password must not contain the username (SYS or SYSTEM) or the word "oracle" either in forward
or reversed order and regardless of casing. The password will be used for the SYS and SYSTEM administrator
accounts.
Important: The TDE wallet password is inherited from the source DB system.
• Confirm password: Re-enter the password you specified.
12. Clicking Show Advanced Options allows you to configure the following:
• Tags: If you have permissions to create a resource, then you also have permissions to apply free-form tags
to that resource. To apply a defined tag, you must have permissions to use the tag namespace. For more
information about tagging, see Resource Tags on page 213. If you are not sure whether to apply tags, then skip
this option (you can apply tags later) or ask your administrator.
13. Click Clone DB System.

Connecting to a DB System
This topic explains how to connect to an active DB system. How you connect depends on the client tool or protocol
you use, the purpose of the connection, and how your cloud network is set up. You can find information on various
networking scenarios in Networking Overview on page 2774, but for specific recommendations on how you should
connect to a database in the cloud, contact your network security administrator.
Prerequisites
This section describes prerequisites you'll need to perform various tasks in this topic.
• To use the Console or the API to get the default administration service connection strings, you must be given the
required type of access in a policy written by an administrator, whether you're using the Console or the REST
API with an SDK, CLI, or other tool. If you try to perform an action and get a message that you don’t have
permission or are unauthorized, confirm with your administrator the type of access you've been granted and
which compartment you should work in. See Authentication and Authorization for more information on user
authorizations for the Oracle Cloud Infrastructure Database service.
• To connect to the database, you'll need the public or private IP address of the DB system.
Use the private IP address to connect to the system from your on-premises network, or from within the virtual
cloud network (VCN). This includes connecting from a host located on-premises through a VPN or
FastConnect to your VCN, or from another host in the same VCN. Use the DB system's public IP address to
connect to the system from outside the cloud (with no VPN). You can find the IP addresses in the Oracle Cloud
Infrastructure Console as follows:
• Cloud VM clusters (new resource model): On the Exadata VM Cluster Details page, click Virtual Machines
in the Resources list.
• DB systems: On the DB System Details page, click Nodes in the Resources list.
The values are displayed in the Public IP Address and Private IP Address & DNS Name columns of the table
displaying the Virtual Machines or Nodes of the Exadata Cloud Service instance.
• For Secure Shell (SSH) access to the DB system, you'll need the full path to the file that contains the private key
associated with the public key used when the DB system was launched.
If you have problems connecting, see Troubleshooting Connection Issues on page 1397.
Database Services and Connection Strings
Database services allow you to control client access to a database instance depending on the functionality needed. For
example, you might need to access the database for administration purposes only or you might need to connect an
application to the database. Connection strings are specific to a database service.
When you provision a DB system, a default database administration service is automatically created. For 12c and
later Oracle Databases, this service is for administering the database at the CDB level. Because this service provides
limited functionality, it is not suitable for connecting an application. Oracle recommends that you create a default
application service for the initial database after you create your DB system. For 12c and later Oracle Databases,
application services connect at the PDB level. Here are some important functions an application service can provide:
• Workload identification
• Load balancing
• Application continuity and Transaction Guard
• Fast Application Notification
• Resource assignment based on the service name
For details about these and other High Availability capabilities, see Client Failover Best Practices for Highly
Available Oracle Databases.

Creating an Application Service


You use the srvctl utility to create an application service. Before you can connect to the service, you must start it.
To create an application service for a PDB or an 11g Oracle database
1. Log in to the DB system host as opc.
2. Switch to the oracle user, and set your environment to the Oracle Database you want to administer.

$ sudo su - oracle
$ . oraenv
ORACLE_SID = [oracle] ? <database_name>
The Oracle base has been set to /u01/app/oracle


3. Create the application service for the database. Include the pdb option only if you are creating an application
service for a PDB.

$ srvctl add service \
  -db <DB_unique_name> \
  -pdb <PDB_name> \
  -service <app_service_name> \
  -role PRIMARY \
  -notification TRUE \
  -session_state dynamic \
  -failovertype transaction \
  -failovermethod basic \
  -commit_outcome TRUE \
  -failoverretry 30 \
  -failoverdelay 10 \
  -replay_init_time 900 \
  -clbgoal SHORT \
  -rlbgoal SERVICE_TIME \
  -preferred <rac_node1>,<rac_node2> \
  -retention 3600

Note that the preferred option is required only for multi-node databases; use it to specify the host names of the
nodes in the RAC cluster.
4. Start the application service.

$ srvctl start service -db <DB_unique_name> -s <app_service_name>
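
To confirm that the service is running before you connect to it, you can check its status with srvctl (run as the oracle user on the DB system host):

$ srvctl status service -db <DB_unique_name> -s <app_service_name>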

For more information about services for a PDB, see Managing Services for PDBs.

Database Connection Strings


You must use the appropriate connection string to access a database administration or application service. You
can use the Console or the API to get the string for connecting to the default administration service from within
a VCN. For 12c and later Oracle Databases, this service is for administering the database at the CDB level. The
string is provided in both the Easy Connect and in the full connect descriptor (long) format. Use the long format
for the connection if hostname resolution is not available. You can also use the long format to create an alias in the
tnsnames.ora file.
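For example, an alias built from the long connect descriptor might look like the following sketch in tnsnames.ora. The host, port, and service values shown are placeholders; copy the actual values from the connection string displayed in the Console.

# Sketch: replace the placeholders with the values from your Console connection string.
MYDB_ADMIN =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = <hostname|SCAN>)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = <DB_unique_name>.<DB_domain>)
    )
  )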
For accessing a database service within the VCN, the connection string for a Real Application Cluster (RAC) DB
system uses the Single Client Access Name (SCAN), while the connection string for a single-instance DB system uses
the hostname instead.
The private SCAN name is a Round Robin DNS entry created when you launch a 2-node RAC DB system. The
private SCAN name is resolvable only within the VCN. If the client and the database are in the same VCN, the
connection mechanism is the same as an on-premises RAC database; all the features provided by VIPs and SCAN
VIPs, such as server side load balancing and VIP failover, are available.
Note:

If you manually change the DB_UNIQUE_NAME, DB_DOMAIN, or
listener port on the DB system, the connection strings you see in the Console
or API will not reflect your changes. Ensure that you use the actual values of
these parameters when you make a connection.
To get the connection strings for the default administration service
1. Open the navigation menu. Under Oracle Database, click Bare Metal, VM, and Exadata.
2. Choose your Compartment.
3. Find the DB system you're interested in, and click the name.
4. Click DB Connection.
5. Click the applicable link to view or copy the connection string.
You can derive the connection strings for other database services by replacing part of the default administration service
connection string with the applicable values.
To derive the connection string for a PDB administration service or an application service
1. Follow the procedure to get the Easy Connect string for the default administration service. That string should have
the following format:

<hostname|SCAN>:1521/<DB_unique_name>.<DB_domain>
2. Make the appropriate substitution:
• For the PDB administration service, replace <DB_unique_name> with the PDB name.

<hostname|SCAN>:1521/<PDB_name>.<DB_domain>
• For an application service, replace <DB_unique_name> with the name of the application service.

<hostname|SCAN>:1521/<app_service_name>.<DB_domain>
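
For example, with a hypothetical single-node host dbhost.sub.vcn.oraclevcn.com, domain sub.vcn.oraclevcn.com, and an application service named myappsvc (names shown for illustration only), the derived application service string would be:

dbhost.sub.vcn.oraclevcn.com:1521/myappsvc.sub.vcn.oraclevcn.com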

Connecting to a Database Service by Using SQL*Net


This section describes how to connect to a database service from a computer that has a SQL*Net client installed. Port
1521 must be open to support the SQL*Net protocol.

Connecting from Within the VCN


For security reasons, Oracle recommends that you connect to your database services from within the VCN. You can
use this method whether you are connecting to an administration service or to an application service.
To connect using SQL*Plus, you run the following command using the applicable connection string:

sqlplus system/<password>@<connection_string>

Consider the following:


• If your system is not using the VCN Resolver, ensure that the DB system's hostname (for single-node systems) or
SCAN name (for multi-node systems) can be resolved. See DNS in Your Virtual Cloud Network on page 2936
for information about DNS name resolution.
• For connecting to the administration service of a PDB, ensure that the PDB is open or the service will not be
available.
• For connecting to an application service, ensure that the service is started. For Fast Application Notification
to work, ensure that port 6200 can be reached. See Client Failover Best Practices for Highly Available Oracle
Databases for information about Fast Application Notification.

Connecting from the Internet


Although Oracle does not recommend connecting to your database from the Internet, you can connect to a database
service by using a public IP address if port 1521 is open to the public for ingress.
To use this method, you run the following command using the public IP address instead of the hostname or SCAN in
the connection string:

sqlplus system/<password>@<public_IP>:1521/<service_name>.<DB_domain>

Consider the following:


• SCANs and hostnames are not resolvable on the Internet, therefore load balancing and failover for multi-node DB
systems, which rely on these names, cannot work.
• For multi-node DB systems, which normally use SCANs, you must specify the IP address of one of the RAC hosts
to access the database.
Important:

Do not use this method to connect to the database from within the VCN.
Doing so negatively impacts performance because traffic to the database is
routed out of the VCN and back in through the public IP address.

Example: Connecting in SQL Developer Using SQL*Net


Prerequisites:
• Ensure that port 1521 is open for the Oracle default listener. (You can do this by checking the DB system's
security list.)
• If port 1521 is open only to hosts in the VCN, then you must run your SQL Developer client from a machine
that has direct access to the VCN. If you are connecting to the database from the Internet instead, then the public
IP address of your computer must be granted access to port 1521 in the security list. (Alternatively, the security
list can grant full access to port 1521, however, this is not recommended for security reasons.) You must use the
public IP address of the host because connecting from the Internet does not support SCAN name resolution.
To connect from within the VCN using a private IP address
After the prerequisites are met, start SQL Developer and create a connection by supplying the following connection
details:
• Username: sys as sysdba
• Password: The Database Admin Password that was specified in the Launch DB System dialog in the Console.
• Hostname: The hostname as it appears in the Easy Connect format of the connection string. (See Database
Connection Strings on page 1393 for help with getting the connection string and identifying the hostname.)
• Port: 1521
• Service name: The concatenated name of the service and host domain name, for example,
db1_phx1tv.example.com. You can identify this value as the last part of the Easy Connect string,
<service_name>.<DB_domain>.
Connecting to a Database with a Public IP by Using SSH Tunneling
You can access the services of DB system databases with public IP addresses by using SSH tunneling. The main
advantage of this method is that port 1521 does not need to be opened to the public internet. However, just like
accessing the database with a public IP using a SQL*Net client, load balancing and failover for multi-node DB
systems cannot work because they rely on SCANs and hostnames.
Oracle SQL Developer and Oracle SQLcl are two tools that facilitate the use of tunneling for Oracle Database
access.
To open a tunnel, and then connect to a database service by using SQLcL, you run commands like the following:

SQL> sshtunnel opc@<public_IP> -i <private_key> -L <local_port>:<private_IP>:1521
Using port:22
SSH Tunnel connected
SQL> connect system/<password>@localhost:<local_port>/<service_name>.<DB_domain>

See Oracle SQL Developer and Oracle SQLcl for information about these tools.
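If you prefer to open the tunnel with a standard OpenSSH client instead of SQLcl, the equivalent is a local port forward followed by a SQL*Plus connection through the forwarded port. This is a generic sketch rather than a documented procedure; choose any free local port:

$ ssh -i <private_key> -N -L <local_port>:<private_IP>:1521 opc@<public_IP> &
$ sqlplus system/<password>@localhost:<local_port>/<service_name>.<DB_domain>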
Connecting to a Database by Using SSH and the Bequeath Protocol
This method allows you to connect to the database without using the network listener. Use it only for administration purposes.
When connecting to a multi-node DB system, you'll SSH to each individual node in the cluster.


To connect from a UNIX-style system


Use the following SSH command to access the DB system:

$ ssh -i <private_key> opc@<DB_system_IP_address>

<private_key> is the full path and name of the file that contains the private key associated with the DB system you
want to access.
Use the DB system's private or public IP address depending on your network configuration. For more information,
see Prerequisites on page 1392.
To connect from a Windows system
1. Open putty.exe.
2. In the Category pane, select Session and enter the following fields:
• Host Name (or IP address): opc@<DB_system_IP_address>
Use the DB system's private or public IP address depending on your network configuration. For more
information, see Prerequisites on page 1392.
• Connection type: SSH
• Port: 22
3. In the Category pane, expand Connection, expand SSH, and then click Auth. Browse to and select your private key file.
4. Optionally, return to the Session category screen and save this session information for reuse later.
5. Click Open to start the session.
To access a database after you connect
1. Log in as opc and then sudo to the grid user.

login as: opc

[opc@ed1db01 ~]$ sudo su - grid


2. List all the databases on the system.

[root@ed1db01 ~]# srvctl config database -v

cdbm01 /u02/app/oracle/product/12.1.0/dbhome_2 12.1.0.2.0
exadb /u02/app/oracle/product/11.2.0/dbhome_2 11.2.0.4.0
mmdb /u02/app/oracle/product/12.1.0/dbhome_3 12.1.0.2.0
3. Connect as the oracle user and get the details about one of the databases by using the srvctl command.

[root@ed1db01 ~]# su - oracle


[oracle@ed1db01 ~]$ . oraenv
ORACLE_SID = [oracle] ? cdbm01
The Oracle base has been set to /u02/app/oracle
[oracle@ed1db01 ~]$ srvctl config database -d cdbm01
Database unique name: cdbm01 <<== DB unique name
Database name:
Oracle home: /u02/app/oracle/product/12.1.0/dbhome_2
Oracle user: oracle
Spfile: +DATAC1/cdbm01/spfilecdbm01.ora
Password file: +DATAC1/cdbm01/PASSWORD/passwd
Domain: data.customer1.oraclevcn.com
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools:
Disk Groups: DATAC1,RECOC1


Mount point paths:


Services:
Type: RAC
Start concurrency:
Stop concurrency:
OSDBA group: dba
OSOPER group: racoper
Database instances: cdbm011,cdbm012 <<== SID
Configured nodes: ed1db01,ed1db02
Database is administrator managed
4. Set the ORACLE_SID and ORACLE_UNIQUE_NAME using the values from the previous step.

[oracle@ed1db01 ~]$ export ORACLE_UNIQUE_NAME=cdbm01


[oracle@ed1db01 ~]$ export ORACLE_SID=cdbm011
[oracle@ed1db01 ~]$ sqlplus / as sysdba

SQL*Plus: Release 12.1.0.2.0 Production on Wed Apr 19 04:10:12 2017

Copyright (c) 1982, 2014, Oracle. All rights reserved.

Connected to:
Oracle Database 12c EE Extreme Perf Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage
Management, Oracle Label Security,
OLAP, Advanced Analytics and Real Application Testing options

Serial Console Access for Troubleshooting and Managing a Bare Metal or VM System
You can create and delete serial console connections to your bare metal or virtual machine DB system in the Oracle
Cloud Infrastructure Console. This allows you to manage and troubleshoot your system in single-user mode using an
SSH connection. See the following topics for more information:
• To create a serial console connection to your database system on page 1388
• To delete a serial console connection to your database system on page 1389
Using the API
For information about using the API and signing requests, see REST APIs on page 4409 and Security Credentials on
page 181. For information about SDKs, see Software Development Kits and Command Line Interface on page 4262.
Use the GetDatabase API operation to get the default administration service connection strings.
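As an illustration only, assuming the OCI command line interface is installed and configured, the GetDatabase operation corresponds to the following CLI call; the response includes the database's connection strings:

$ oci db database get --database-id <database_OCID>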
Troubleshooting Connection Issues
The following issues might occur when connecting to a DB system or database.

ORA-28365: Wallet is Not Open Error


For a 1-node DB system or 2-node RAC DB system, regardless of how you connect to the DB system, before
you use OS authentication to connect to a database (for example, sqlplus / as sysdba), be sure to set
the ORACLE_UNQNAME variable to the database unique name. Otherwise, commands that require the TDE wallet will result in the error
ORA-28365: wallet is not open.
Note that this is not an issue when using a TNS connection because ORACLE_UNQNAME is automatically set in the
database CRS resource.
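For example, before connecting with OS authentication you might set the variable to the value reported as Database unique name by srvctl config database. This is a minimal sketch; cdbm01 is a placeholder that would differ on your system:

$ export ORACLE_UNQNAME=cdbm01
$ sqlplus / as sysdba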

SSH Access Stops Working


If the DB system’s root volume becomes full, you might lose the ability to SSH to the system (the SSH command
will fail with permission denied errors). Before you copy a large amount of data to the root volume, for example, to
migrate a database, use the dbcli create-dbstorage command to set up storage on the system’s NVMe drives


and then copy the database files to that storage. For more information, see Setting Up Storage on the DB System on
page 1639.
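A quick way to check how much space remains on the root volume before copying data is a standard df check, which is not specific to the Database service:

$ df -h /

If the Use% value is close to 100%, set up storage on the NVMe drives as described above and copy the files there instead.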
What Next?
Before you begin updating your DB system, review the information in Updating a DB System on page 1398.
For information about setting up an Enterprise Manager console to monitor your databases, see Monitoring a
Database on page 1428.

Updating a DB System
Note:

This topic is not applicable to Exadata DB systems. For information on how to update an Exadata DB system, see Updating an Exadata Cloud Service Instance on page 1275.
This topic includes information and instructions on how to update the OS of a bare metal or virtual machine DB
system.
Caution:

• Review all of the information before you begin updating the system.
Updating the operating system through methods not described on this
page can cause permanent loss of access.
• Always back up your databases prior to updating your DB system's
operating system.
Bash Profile Updates
Do not add interactive commands such as oraenv, or commands that might return an error or warning message,
to the .bash_profile file for the grid or oracle users. Adding such commands can prevent Database service
operations from functioning properly.
Essential Firewall Rules
For a 1-node DB system or 2-node RAC DB system, do not remove or modify the following firewall rules in /etc/sysconfig/iptables (a read-only way to review them is shown after this list):
• The firewall rules for ports 1521, 7070, and 7060 allow the Database service to manage the DB system. Removing
or modifying them can result in the Database Service no longer operating properly.
• The firewall rules for 169.254.0.2:3260 and 169.254.0.3:80 prevent non-root users from escalating privileges and
tampering with the system’s boot volume and boot process. Removing or modifying these rules can allow non-
root users to modify the system's boot volume.
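To review these rules without changing them, you can list the relevant entries in the iptables configuration file. This is a read-only check using standard tools:

# grep -E '1521|7070|7060|169\.254\.0\.[23]' /etc/sysconfig/iptables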
OS Updates
Before you update the OS, review the following important guidelines and information:
• Back up your DB system's databases prior to attempting an OS update.
• Do not remove packages from a DB system. However, you might have to remove custom RPMs (packages that
were installed after the system was provisioned) for the update to complete successfully.
Caution:

Do not install NetworkManager on the DB system. Installing this package and rebooting the system results in severe loss of access to the system.
• Oracle recommends that you test any updates thoroughly before updating a production system.
• The image used to launch a DB system is updated regularly with the necessary patches. After you launch a DB
system, you are responsible for applying the required OS security updates published through the Oracle public
YUM server.


• To apply OS updates, the DB system's VCN must be configured to allow access to the YUM repository. For more
information, see Network Setup for DB Systems on page 1360.
To update an OL7 OS on a DB system host
You can update the OS on 2-node RAC virtual machine DB systems in a rolling fashion.
Note:

Ensure the Oracle Clusterware (CRS) is completely shut down before performing the OS kernel updates.
1. Log on to the DB system host as opc, and then sudo to the root user.

login as: opc


[opc@dbsys ~]$ sudo su -
2. If your DB system uses an image with the kernel version 4.1.12-124.27.1.el7uek (used with older images), then
change the bootefi label before updating the OS.
3. Identify the host region by running the following command:

# curl -s http://169.254.169.254/opc/v1/instance/ |grep region


4. With the region you noted from the previous step, determine the region name, and perform the following two
steps.
See Regions and Availability Domains on page 182 to look up the region name.
a. Download the repo.

# wget https://swiftobjectstorage.<region_name>.oraclecloud.com/
v1/dbaaspatchstore/DBaaSOSPatches/oci_dbaas_ol7repo -O /tmp/
oci_dbaas_ol7repo

This example output assumes the region is us-phoenix-1 (PHX).

# wget https://swiftobjectstorage.us-phoenix-1.oraclecloud.com/
v1/dbaaspatchstore/DBaaSOSPatches/oci_dbaas_ol7repo -O /tmp/
oci_dbaas_ol7repo
--2019-07-16 10:40:42-- https://swiftobjectstorage.us-
phoenix-1.oraclecloud.com/v1/dbaaspatchstore/DBaaSOSPatches/
oci_dbaas_ol7repo
Resolving swiftobjectstorage.us-phoenix-1.oraclecloud.com...
129.146.13.177, 129.146.13.180, 129.146.12.235, ...
Connecting to swiftobjectstorage.us-phoenix-1.oraclecloud.com|
129.146.13.177|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 1394 (1.4K) [binary/octet-stream]
Saving to: `/tmp/oci_dbaas_ol7repo'

100%[==============================================================================
1,394 --.-K/s in 0s


2019-07-16 10:40:42 (34.5 MB/s) - `/tmp/oci_dbaas_ol7repo' saved [1394/1394]
b. Download the version lock files.

# wget https://swiftobjectstorage.<region_name>.oraclecloud.com/
v1/dbaaspatchstore/DBaaSOSPatches/versionlock_ol7.list -O /tmp/
versionlock.list

This example output assumes the region is us-phoenix-1 (PHX).

# wget https://swiftobjectstorage.us-phoenix-1.oraclecloud.com/
v1/dbaaspatchstore/DBaaSOSPatches/versionlock_ol7.list -O /tmp/
versionlock.list
--2019-07-16 10:41:38-- https://swiftobjectstorage.us-
phoenix-1.oraclecloud.com/v1/dbaaspatchstore/DBaaSOSPatches/
versionlock_ol7.list
Resolving swiftobjectstorage.us-phoenix-1.oraclecloud.com...
129.146.12.224, 129.146.12.164, 129.146.14.172, ...
Connecting to swiftobjectstorage.us-phoenix-1.oraclecloud.com|
129.146.12.224|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 15769 (15K) [binary/octet-stream]
Saving to: `/tmp/versionlock.list'

100%[==============================================================================
15,769 --.-K/s in 0.1s

2019-07-16 10:41:39 (123 KB/s) - `/tmp/versionlock.list' saved [15769/15769]
5. Copy the repo file to the /etc/yum.repos.d directory.

cp /tmp/oci_dbaas_ol7repo /etc/yum.repos.d/ol7.repo
6. Copy and overwrite the existing version lock file.

cp /etc/yum/pluginconf.d/versionlock.list /etc/yum/pluginconf.d/
versionlock.list-`date +%Y%m%d`
cp /tmp/versionlock.list /etc/yum/pluginconf.d/versionlock.list

The initial version lock file should be empty. However, it is a good practice to back it up in case it is not and you
need to refer to it later.
7. Run the update command.

# yum update
Loaded plugins: kernel-update-handler, ulninfo, versionlock
Excluding 250 updates due to versionlock (use "yum versionlock status" to
show them)
Resolving Dependencies
--> Running transaction check
---> Package kernel-uek.x86_64 0:4.1.12-124.28.5.el7uek will be installed
---> Package kernel-uek-firmware.noarch 0:4.1.12-124.28.5.el7uek will be
installed
---> Package libtalloc.x86_64 0:2.1.10-1.el7 will be updated
---> Package libtalloc.x86_64 0:2.1.13-1.el7 will be an update
---> Package pytalloc.x86_64 0:2.1.10-1.el7 will be updated
---> Package pytalloc.x86_64 0:2.1.13-1.el7 will be an update
--> Finished Dependency Resolution


Dependencies Resolved

======================================================================================
Package Arch Version
Repository Size
======================================================================================
Installing:
kernel-uek x86_64
4.1.12-124.28.5.el7uek ol7_UEKR4 44
M
kernel-uek-firmware noarch
4.1.12-124.28.5.el7uek ol7_UEKR4 1.0
M
Updating:
libtalloc x86_64 2.1.13-1.el7
ol7_latest 31 k
pytalloc x86_64 2.1.13-1.el7
ol7_latest 16 k

Transaction Summary
======================================================================================
Install 2 Packages
Upgrade 2 Packages

Total download size: 46 M


Is this ok [y/d/N]: y
Downloading packages:
No Presto metadata available for ol7_UEKR4
No Presto metadata available for ol7_latest
(1/4): kernel-uek-firmware-4.1.12-124.28.5.el7uek.noarch.rpm
| 1.0 MB 00:00:00
(2/4): libtalloc-2.1.13-1.el7.x86_64.rpm
| 31 kB 00:00:00
(3/4): pytalloc-2.1.13-1.el7.x86_64.rpm
| 16 kB 00:00:00
(4/4): kernel-uek-4.1.12-124.28.5.el7uek.x86_64.rpm
| 44 MB 00:00:01
--------------------------------------------------------------------------------------
Total
41 MB/s | 46 MB 00:00:01
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
Warning: RPMDB altered outside of yum.
** Found 7 pre-existing rpmdb problem(s), 'yum check' output follows:
oda-hw-mgmt-19.3.0.0.0_LINUX.X64_190530-1.x86_64 has missing requires of
libnfsodm19.so()(64bit)
oda-hw-mgmt-19.3.0.0.0_LINUX.X64_190530-1.x86_64 has missing requires of
perl(GridDefParams)
oda-hw-mgmt-19.3.0.0.0_LINUX.X64_190530-1.x86_64 has missing requires of
perl(Sys::Syslog)
oda-hw-mgmt-19.3.0.0.0_LINUX.X64_190530-1.x86_64 has missing requires of
perl(s_GridSteps)
perl-RPC-XML-0.78-3.el7.noarch has missing requires of perl(DateTime) >=
('0', '0.70', None)
perl-RPC-XML-0.78-3.el7.noarch has missing requires of
perl(DateTime::Format::ISO8601) >= ('0', '0.07', None)
perl-RPC-XML-0.78-3.el7.noarch has missing requires of perl(Module::Load)
>= ('0', '0.24', None)
Installing : kernel-uek-firmware-4.1.12-124.28.5.el7uek.noarch
1/6
Updating : libtalloc-2.1.13-1.el7.x86_64
2/6


Updating : pytalloc-2.1.13-1.el7.x86_64
3/6
Installing : kernel-uek-4.1.12-124.28.5.el7uek.x86_64
4/6
Cleanup : pytalloc-2.1.10-1.el7.x86_64
5/6
Cleanup : libtalloc-2.1.10-1.el7.x86_64
6/6

Note:

• Ignore the "Error activating" message that results from running the update.
• An update occurs only if the version lock file has a valid update available to apply to the DB system.
8. Restart the system.

$ sudo su -
# reboot
9. Run the following command to validate the update:

# uname -r
4.1.12-124.28.5

In this example, the new kernel version is 4.1.12-124.28.5.


To check the kernel version
Run the following command.

$ uname -r

Example response indicating kernel version 4.1.12-124.27.1.el7uek:

4.1.12-124.27.1.el7uek.x86_64

If you have kernel version 4.1.12-124.27.1.el7uek, then proceed to change the bootefi label.
To change the bootefi label (each node)
1. Edit /etc/fstab: Change the label bootefi to BOOTEFI (uppercase).
Example:

LABEL=BOOTEFI   /boot/efi   vfat   defaults   1 2
2. Restart the DB node.
3. Run the following command to ensure that the required link is created.

$ sudo ls -lrt /etc/grub2-efi.cfg

Example response indicating that the required link exists:

lrwxrwxrwx 1 root root 31 Sep 4 11:49 /etc/grub2-efi.cfg -> ../boot/efi/EFI/redhat/grub.cfg
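After editing /etc/fstab on each node, you can also confirm that the entry now uses the uppercase label before rebooting. This is a simple read-only check:

$ grep -i bootefi /etc/fstab
LABEL=BOOTEFI   /boot/efi   vfat   defaults   1 2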

To update an OL6 OS on a DB system host


You can update the OS on 2-node RAC virtual machine DB systems in a rolling fashion.


Note:

Ensure the Oracle Clusterware (CRS) is completely shut down before performing the OS kernel updates.
1. Log on to the DB system host as opc, and then sudo to the root user.

login as: opc


[opc@dbsys ~]$ sudo su -
2. Identify the host region by running the following command:

# curl -s http://169.254.169.254/opc/v1/instance/ |grep region


3. With the region you noted from the previous step, determine the region name, and perform the following two
steps.
See Regions and Availability Domains on page 182 to look up the region name.
a. Download the repo.

# wget https://swiftobjectstorage.<region_name>.oraclecloud.com/
v1/dbaaspatchstore/DBaaSOSPatches/oci_dbaas_ol6repo -O /tmp/
oci_dbaas_ol6repo

This example output assumes the region is us-phoenix-1 (PHX).

# wget https://swiftobjectstorage.us-phoenix-1.oraclecloud.com/
v1/dbaaspatchstore/DBaaSOSPatches/oci_dbaas_ol6repo -O /tmp/
oci_dbaas_ol6repo
--2018-03-16 10:40:42-- https://swiftobjectstorage.us-
phoenix-1.oraclecloud.com/v1/dbaaspatchstore/DBaaSOSPatches/
oci_dbaas_ol6repo
Resolving swiftobjectstorage.us-phoenix-1.oraclecloud.com...
129.146.13.177, 129.146.13.180, 129.146.12.235, ...
Connecting to swiftobjectstorage.us-phoenix-1.oraclecloud.com|
129.146.13.177|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 1394 (1.4K) [binary/octet-stream]
Saving to: `/tmp/oci_dbaas_ol6repo'

100%[==============================================================================
1,394 --.-K/s in 0s

2018-03-16 10:40:42 (34.5 MB/s) - `/tmp/oci_dbaas_ol6repo' saved [1394/1394]
b. Download the version lock files.

# wget https://swiftobjectstorage.<region_name>.oraclecloud.com/
v1/dbaaspatchstore/DBaaSOSPatches/versionlock_ol6.list -O /tmp/
versionlock.list

This example output assumes the region is us-phoenix-1 (PHX).

# wget https://swiftobjectstorage.us-phoenix-1.oraclecloud.com/
v1/dbaaspatchstore/DBaaSOSPatches/versionlock_ol6.list -O /tmp/
versionlock.list
--2018-03-16 10:41:38-- https://swiftobjectstorage.us-
phoenix-1.oraclecloud.com/v1/dbaaspatchstore/DBaaSOSPatches/
versionlock_ol6.list
Resolving swiftobjectstorage.us-phoenix-1.oraclecloud.com...
129.146.12.224, 129.146.12.164, 129.146.14.172, ...


Connecting to swiftobjectstorage.us-phoenix-1.oraclecloud.com|
129.146.12.224|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 15769 (15K) [binary/octet-stream]
Saving to: `/tmp/versionlock.list'

100%[==============================================================================
15,769 --.-K/s in 0.1s

2018-03-16 10:41:39 (123 KB/s) - `/tmp/versionlock.list' saved [15769/15769]
4. Enable the repo for your region.
a. Copy the repo file to the /etc/yum.repos.d directory.

cp /tmp/oci_dbaas_ol6repo /etc/yum.repos.d/ol6.repo
b. Modify the ol6.repo file to enable the repo for your region.

vi /etc/yum.repos.d/ol6.repo

[ol6_latest_PHX]
name=Oracle Linux $releasever Latest ($basearch)
baseurl=http://yum-phx.oracle.com/repo/OracleLinux/OL6/latest/$basearch/
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-oracle
gpgcheck=1
enabled=1 <= Enabled.

[ol6_UEKR4_PHX]
name=Latest Unbreakable Enterprise Kernel Release 4 for Oracle Linux
$releasever ($basearch)
baseurl=http://yum-phx.oracle.com/repo/OracleLinux/OL6/UEKR4/$basearch/
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-oracle
gpgcheck=1
enabled=1 <= Enabled.
5. Install yum-plugin-versionlock.

$ sudo su -
# yum repolist
Loaded plugins: kernel-update-handler, security, ulninfo
ol6_UEKR4
| 1.2 kB 00:00
ol6_UEKR4/primary
| 29 MB 00:00
ol6_UEKR4

588/588
ol6_latest
| 1.4 kB 00:00
ol6_latest/primary
| 67 MB 00:00
ol6_latest

39825/39825
repo id repo name

status
ol6_UEKR4 Latest Unbreakable Enterprise Kernel
Release 4 for Oracle Linux 6Server (x86_64)
588


ol6_latest Oracle Linux 6Server Latest (x86_64)

39825
repolist: 40413
[root@jigsosupg ~]# yum install yum-plugin-versionlock
Loaded plugins: kernel-update-handler, security, ulninfo
Setting up Install Process
Resolving Dependencies
--> Running transaction check
---> Package yum-plugin-versionlock.noarch 0:1.1.30-40.0.1.el6 will be
installed
--> Finished Dependency Resolution

Dependencies Resolved

======================================================================================
Package Arch
Version Repository
Size
======================================================================================
Installing:
yum-plugin-versionlock noarch
1.1.30-40.0.1.el6 ol6_latest
32 k

Transaction Summary
======================================================================================
Install 1 Package(s)

Total download size: 32 k


Installed size: 43 k
Is this ok [y/N]: y
Downloading Packages:
yum-plugin-versionlock-1.1.30-40.0.1.el6.noarch.rpm
| 32 kB 00:00
warning: rpmts_HdrFromFdno: Header V3 RSA/SHA256 Signature, key ID
ec551f03: NOKEY
Retrieving key from file:///etc/pki/rpm-gpg/RPM-GPG-KEY-oracle
Importing GPG key 0xEC551F03:
Userid : Oracle OSS group (Open Source Software group)
<[email protected]>
Package: 6:oraclelinux-release-6Server-8.0.3.x86_64 (@odadom1)
From : /etc/pki/rpm-gpg/RPM-GPG-KEY-oracle
Is this ok [y/N]: y
Running rpm_check_debug
Running Transaction Test
Transaction Test Succeeded
Running Transaction
Warning: RPMDB altered outside of yum.
** Found 4 pre-existing rpmdb problem(s), 'yum check' output follows:
oda-hw-mgmt-12.2.0.1.0_LINUX.X64_170614.TR1221-1.x86_64 has missing
requires of /usr/local/bin/perl
oda-hw-mgmt-12.2.0.1.0_LINUX.X64_170614.TR1221-1.x86_64 has missing
requires of libnfsodm12.so()(64bit)
oda-hw-mgmt-12.2.0.1.0_LINUX.X64_170614.TR1221-1.x86_64 has missing
requires of perl(GridDefParams)
oda-hw-mgmt-12.2.0.1.0_LINUX.X64_170614.TR1221-1.x86_64 has missing
requires of perl(s_GridSteps)
Installing : yum-plugin-versionlock-1.1.30-40.0.1.el6.noarch

1/1
Verifying : yum-plugin-versionlock-1.1.30-40.0.1.el6.noarch

1/1


Installed:
yum-plugin-versionlock.noarch 0:1.1.30-40.0.1.el6

Complete!

Note:

Ignore the RPMDB warning messages that refer to oda-hw-mgmt.


6. Copy and overwrite the existing version lock file.

cp /etc/yum/pluginconf.d/versionlock.list /etc/yum/pluginconf.d/
versionlock.list-`date +%Y%m%d`
cp /tmp/versionlock.list /etc/yum/pluginconf.d/versionlock.list

The initial version lock file should be empty. However, it is a good practice to back it up in case it is not and you
need to refer to it later.
7. Run the update command.

# yum update
Loaded plugins: kernel-update-handler, security, ulninfo, versionlock
Setting up Update Process
Resolving Dependencies
--> Running transaction check
---> Package kernel-uek.x86_64 0:4.1.12-112.14.13.el6uek will be installed
---> Package kernel-uek-firmware.noarch 0:4.1.12-112.14.13.el6uek will be
installed
---> Package linux-firmware.noarch 0:20160616-44.git43e96a1e.0.12.el6 will
be updated
---> Package linux-firmware.noarch 0:20171128-56.git17e62881.0.2.el6 will
be an update
--> Finished Dependency Resolution

Dependencies Resolved

======================================================================================
Package Arch Version
Repository
Size
======================================================================================
Installing:
kernel-uek x86_64
4.1.12-112.14.13.el6uek ol6_UEKR4
51 M
kernel-uek-firmware noarch
4.1.12-112.14.13.el6uek ol6_UEKR4
2.4 M
Updating:
linux-firmware noarch
20171128-56.git17e62881.0.2.el6 ol6_UEKR4
74 M

Transaction Summary
======================================================================================
Install 2 Package(s)
Upgrade 1 Package(s)

Total download size: 128 M


Is this ok [y/N]:y
Downloading Packages:
(1/3): kernel-uek-4.1.12-112.14.13.el6uek.x86_64.rpm
| 51 MB 00:00


(2/3): kernel-uek-firmware-4.1.12-112.14.13.el6uek.noarch.rpm
| 2.4 MB 00:00
(3/3): linux-firmware-20171128-56.git17e62881.0.2.el6.noarch.rpm
| 74 MB 00:00
--------------------------------------------------------------------------------------
Total
214 MB/s | 128 MB 00:00
Running rpm_check_debug
Running Transaction Test
Transaction Test Succeeded
Running Transaction
Installing : kernel-uek-firmware-4.1.12-112.14.13.el6uek.noarch

1/4
Updating : linux-firmware-20171128-56.git17e62881.0.2.el6.noarch

2/4
Installing : kernel-uek-4.1.12-112.14.13.el6uek.x86_64

3/4
Cleanup : linux-firmware-20160616-44.git43e96a1e.0.12.el6.noarch

4/4
ol6_UEKR4/filelists
| 18 MB 00:00
Uploading /boot/vmlinuz-4.1.12-112.14.13.el6uek.x86_64 to
http://169.254.0.3/kernel
Uploading /boot/initramfs-4.1.12-112.14.13.el6uek.x86_64.img to
http://169.254.0.3/initrd
Uploading /tmp/tmp5HjrRUcmdline to http://169.254.0.3/cmdline

Error activating kernel/initrd/cmdline: 502 - <html>


<head><title>502 Bad Gateway</title></head>
<body bgcolor="white">
<center><h1>502 Bad Gateway</h1></center>
</body>
</html>

Note:

• Ignore the "Error activating" message that results from running the update.
• An update occurs only if the version lock file has a valid update available to apply to the DB system.
8. Restart the system.

$ sudo su -
# reboot
9. Run the following command to validate the update:

# uname -r
4.1.12-112.14.13

In this example, the new kernel version is 4.1.12-112.14.13.


For information about applying Oracle database patches to a DB system, see Patching a DB System on page 1408.


Configuring a DB System
This topic provides information to help you configure your DB system.
Network Time Protocol
Oracle recommends that you run a Network Time Protocol (NTP) daemon on your 1-node DB systems to keep
system clocks stable during rebooting. If you need information about an NTP daemon, see Setting Up “NTP (Network
Time Protocol) Server” in RHEL/CentOS 7.
Oracle recommends that you configure NTP on both nodes in a 2-node RAC DB system to synchronize time
across the nodes. If you do not configure NTP, then Oracle Clusterware configures and uses the Cluster Time
Synchronization Service (CTSS), and the cluster time might be out-of-sync with applications that use NTP for time
synchronization.
For information about configuring NTP on a version 12c database, see Setting Network Time Protocol for Cluster
Time Synchronization. For a version 11g database, see Network Time Protocol Setting.
Transparent Data Encryption
All user-created tablespaces in a DB system database are encrypted by default, using Transparent Data Encryption
(TDE).
• For version 12c databases, if you don’t want your tablespaces encrypted, you can set the
ENCRYPT_NEW_TABLESPACES database initialization parameter to DDL.
• On a 1- or 2-node RAC DB system, you can use the TDE Commands on page 1543 command to update the
master encryption key for a database.
• You must create and activate a master encryption key for any PDBs that you create. After creating or plugging
in a new PDB on a 1- or 2-node RAC DB System, use the dbcli update-tdekey command to create
and activate a master encryption key for the PDB. Otherwise, you might encounter the error ORA-28374:
typed master key not found in wallet when attempting to create tablespaces in the PDB. In a
multitenant environment, each PDB has its own master encryption key which is stored in a single keystore used
by all containers. For more information, see "Overview of Managing a Multitenant Environment" in the Oracle
Database Administrator’s Guide.
• For information about encryption on Exadata DB systems, see Using Tablespace Encryption in Exadata Cloud
Service.
• For information on changing an existing TDE wallet password using the Oracle Cloud Infrastructure Console, see
To manage administrator and TDE wallet passwords on page 1420.
For detailed information about database encryption, see the Oracle Database Security White Papers.
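The dbcli update-tdekey command mentioned above is the supported way to create and activate a PDB master encryption key on a DB system. For reference, creating and activating a master key in a PDB uses the standard ADMINISTER KEY MANAGEMENT SQL; the following is a generic sketch only (exact clauses vary by database version, and <pdb_name> and <wallet_password> are placeholders), not the documented DB system procedure:

$ sqlplus / as sysdba
SQL> ALTER SESSION SET CONTAINER = <pdb_name>;
SQL> ADMINISTER KEY MANAGEMENT SET KEYSTORE OPEN IDENTIFIED BY "<wallet_password>";
SQL> ADMINISTER KEY MANAGEMENT SET KEY IDENTIFIED BY "<wallet_password>" WITH BACKUP;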

Patching a DB System
This topic describes the procedures to patch bare metal and virtual machine DB systems and database homes by using
the Console, the API, or the database CLI (dbcli). For information on patching or performing a
version upgrade on databases within a bare metal or virtual machine DB system, see Patching a Database on page
1422.
Note:

This topic is not applicable to Exadata Cloud Service instances. For information and instructions on Exadata patching in Oracle Cloud Infrastructure, see the following topics:
• Patching an Exadata Cloud Service Instance on page 1285
• Patching an Exadata Cloud Service Instance Manually on page 1290.


Currently Available Patches

Version     DB System Patch               Database Patch
19.0.0.0    January 2021, October 2020    January 2021, October 2020, July 2020, April 2020
18.0.0.0    January 2021, October 2020    January 2021, October 2020, July 2020, April 2020
12.2.0.1    January 2021, October 2020    January 2021, October 2020, July 2020, April 2020
12.1.0.2    January 2021, October 2020    January 2021, October 2020, July 2020, April 2020
11.2.0.4    Not applicable                January 2021, October 2020, July 2020, April 2020

For information about operating system updates, see OS Updates on page 1398.
Required IAM Policy
You must have the required type of access in a policy to use Oracle Cloud Infrastructure, whether you're using the
Console or the REST API with an SDK, CLI, or other tool. When running a command, if you see an error message
that says you don’t have permission or are unauthorized, contact your administrator. Confirm the type of access
you've been granted, and which compartment you should work in.
For administrators: The policy in Let database admins manage Oracle Cloud database systems on page 2158 enables
the specified group to do everything with databases and related Database resources.
If you're new to policies, see Getting Started with Policies on page 2143 and Common Policies on page 2150.
Details about writing policies for databases are located in Details for the Database Service on page 2251.
About Patching DB Systems

Planning and Preparation


Patching a DB system requires a reboot, which can take several minutes. To minimize the impact on users, run
the patch at a time when the system has the fewest users. To avoid system interruption, consider implementing a
high availability strategy such as Oracle Data Guard. For more information, see Using Oracle Data Guard with the
Database CLI on page 1470.
Oracle recommends that you back up your database and test the patch on a test system before you apply the patch.
See Backing Up a Database on page 1436 for more information.
Always patch a DB system before you patch the databases within that system. The Console displays the latest DB
system patch and the previous patch. You can use either of these patches, but we recommend using the latest patch
when possible.

Patch Availability for Older Oracle Database Software Versions


For the Oracle Database and Oracle Grid Infrastructure major version releases available in Oracle Cloud
Infrastructure, patches are provided for the current version plus the two most recent older versions (N through N - 2).
For example, if an instance is using Oracle Database 19c, and the latest version of 19c offered is 19.8.0.0.0, patches
are available for versions 19.8.0.0.0, 19.7.0.0 and 19.6.0.0.


Prerequisites
DB systems require access to the Oracle Cloud Infrastructure Object Storage service, including connectivity to the
applicable Swift endpoint for Object Storage. We recommend using a service gateway with the VCN to enable this
access. For more information, see these topics:
• Network Setup for DB Systems on page 1360. This topic describes the procedure to set up your VCN for the DB
system, including the service gateway.
• https://cloud.oracle.com/infrastructure/storage/object-storage/faq. This topic explains which Swift endpoints to
use.
Important:

In addition to the prerequisites listed in this section, ensure that the following
conditions are met to avoid patching failures:
• The /u01 directory on the database host file system has at least 15 GB of
free space to execute patching processes.
• Oracle Clusterware is running on the DB system.
• All DB system nodes are running.
See Patching Failures on Bare Metal and Virtual Machine DB Systems on
page 1661 for details on problems that can result from not following these
guidelines.
Using the Console
You can use the Console to:
• View the patch history of a DB system or an individual database.
• Apply patches.
• Monitor the status of an operation.
We recommend that you use the pre-check action to ensure that your DB system or database home has met the
requirements for the patch you want to apply.
To patch a DB system
1. Open the navigation menu. Under Oracle Database, click Bare Metal, VM, and Exadata.
2. Choose your Compartment.
A list of DB systems is displayed.
3. Find the DB system that you plan to patch.
4. Click the DB system name to display details about it.
5. Click Resources > Patches.
6. Review the list of patches.
7. Click Actions (three dots) for the patch you are interested in, and then select one of the following actions:
• Pre-check: Check for any prerequisites to ensure that the patch can be successfully applied.
• Apply: Performs the pre-check, and then applies the patch.
8. Confirm when prompted.
9. In the list of patches, click the patch name to display its patch request. Then monitor the progress of the patch
operation.
While a patch is being applied, the patch status displays as Applying and the DB system status displays as
Updating. If the operation completes successfully, the patch's status changes to Applied and the DB system's
status changes to Available.
To view the patch history of a DB system
1. Open the navigation menu. Under Oracle Database, click Bare Metal, VM, and Exadata.


2. Choose your Compartment.


A list of DB systems is displayed.
3. To display details about the system you are interested in, locate the system name and click it.
4. Under Resources, click Patch History.
The history of patch operations for that DB system is displayed.
Using the API
For information about using the API and signing requests, see REST APIs on page 4409 and Security Credentials on
page 181. For information about SDKs, see Software Development Kits and Command Line Interface on page 4262.
Use the following APIs to manage patching DB systems:
• ListDbSystemPatches
• ListDbSystemPatchHistoryEntries
• GetDbSystemPatch
• GetDbSystemPatchHistoryEntry
• UpdateDbSystem
For the complete list of APIs for the Database service, see Database Service API.
Using the Database CLI
This topic explains how to use the command line interface on the DB system to patch a DB system. Patches are
available from the Oracle Cloud Infrastructure Object Storage service. You use the DBCLI commands to download
and apply patches to some or all components in your system.

Prerequisites
To connect to the DB system via SSH, you need the path to the private key associated with the public key used when the
DB system was launched.
You also need the public or private IP address of the DB system.
Use the private IP address to connect to the system from your on-premises network, or from within the virtual cloud
network (VCN). This includes connecting from an on-premises host through a VPN or FastConnect
to your VCN, or from another host in the same VCN. Use the system's public IP address to connect to the
system from outside the cloud (with no VPN). You can find the IP addresses in the Oracle Cloud Infrastructure
Console as follows:
• Cloud VM clusters (new resource model): On the Exadata VM Cluster Details page, click Virtual Machines in
the Resources list.
• DB systems: On the DB System Details page, click Nodes in the Resources list.
The values are displayed in the Public IP Address and Private IP Address & DNS Name columns of the table
displaying the Virtual Machines or Nodes of the Exadata Cloud Service instance.
To update the CLI with the latest commands
Update the CLI to ensure you have the latest patching commands (older DB systems might not include them).
1. SSH to the DB System.

ssh -i <private_key_path> opc@<db_system_ip_address>


2. Log in as opc and then sudo to the root user. Use sudo su - with a hyphen to invoke the root user's profile,
which will set the PATH to the dbcli directory (/opt/oracle/dcs/bin).

login as: opc

[opc@dbsys ~]$ sudo su -


3. Update the CLI by using the CLI Update Command on page 1484 command.

[root@dbsys ~]# cliadm update-dbcli


{
"jobId" : "dc9ce73d-ed71-4473-99cd-9663b9d79bfd",
"status" : "Created",
"message" : "Dcs cli will be updated",
"reports" : [ ],
"createTimestamp" : "January 18, 2017 10:19:34 AM PST",
"resourceList" : [ ],
"description" : "dbcli patching",
"updatedTime" : "January 18, 2017 10:19:34 AM PST"
}
4. Wait for the update job to complete successfully. Check the status of the job by using the Job Commands on page
1526 command.

[root@dbsys ~]# dbcli list-jobs

ID Description Created
Status
------------------------------------ --------------
----------------------------------- ----------
dc9ce73d-ed71-4473-99cd-9663b9d79bfd dbcli patching January 18, 2017
10:19:34 AM PST Success

To check for installed and available patches


1. SSH to the DB System.

ssh -i <private_key_path> opc@<db_system_ip_address>


2. Log in as opc and then sudo to the root user. Use sudo su - with a hyphen to invoke the root user's profile,
which will set the PATH to the dbcli directory (/opt/oracle/dcs/bin).

login as: opc

[opc@dbsys ~]$ sudo su -


3. Display the installed patch versions by using the Component Command on page 1505 command. If the
Available Version column indicates a version number for a component, you should update the component.

[root@dbsys ~]# dbcli describe-component


System Version
---------------
12.1.2.10.0

Component Name Installed Version Available Version


--------------------- -------------------- --------------------
OAK 12.1.2.10.0 up-to-date
GI 12.1.0.2.161018 up-to-date
ORADB12102_HOME1 12.1.0.2.160719 12.1.0.2.161018
4. Display the latest patch versions available in Object Storage by using the Latestpatch Command on page 1528
command.

[root@dbsys ~]# dbcli describe-latestpatch

componentType availableVersion
--------------- --------------------
gi 12.1.0.2.161018
db 11.2.0.4.161018


db 12.1.0.2.161018
oak 12.1.2.10.0

To patch server components


You can patch the Grid Infrastructure (GI) and storage management kit (OAK) server components.
1. SSH to the DB System.

ssh -i <private_key_path> opc@<db_system_ip_address>


2. Log in as opc and then sudo to the root user. Use sudo su - with a hyphen to invoke the root user's profile,
which will set the PATH to the dbcli directory (/opt/oracle/dcs/bin).

login as: opc

[opc@dbsys ~]$ sudo su -


3. Update the server components by using the Server Command on page 1541 command.

[root@dbsys ~]# dbcli update-server


{
"jobId" : "9a02d111-e902-4e94-bc6b-9b820ddf6ed8",
"status" : "Created",
"reports" : [ ],
"createTimestamp" : "January 19, 2017 09:37:11 AM PST",
"resourceList" : [ ],
"description" : "Server Patching",
"updatedTime" : "January 19, 2017 09:37:11 AM PST"
}

Note the job ID in the above example.


4. Check the job output by using the Job Commands on page 1526 command with the job ID.

[root@dbsys ~]# dbcli describe-job -i 9a02d111-e902-4e94-bc6b-9b820ddf6ed8

Job details
----------------------------------------------------------------
ID: 9a02d111-e902-4e94-bc6b-9b820ddf6ed8
Description: Server Patching
Status: Running
Created: January 19, 2017 9:37:11 AM PST
Message:

Task Name Start Time


End Time Status
----------------------------------------
----------------------------------- -----------------------------------
----------
Create Patching Repository Directories January 19, 2017 9:37:11 AM PST
January 19, 2017 9:37:11 AM PST Success
Download latest patch metadata January 19, 2017 9:37:11 AM PST
January 19, 2017 9:37:11 AM PST Success
Update System version January 19, 2017 9:37:11 AM PST
January 19, 2017 9:37:11 AM PST Success
Update Patching Repository January 19, 2017 9:37:11 AM PST
January 19, 2017 9:38:35 AM PST Success
oda-hw-mgmt upgrade January 19, 2017 9:38:35 AM PST
January 19, 2017 9:38:58 AM PST Success
Opatch updation January 19, 2017 9:38:58 AM PST
January 19, 2017 9:38:58 AM PST Success


Patch conflict check January 19, 2017 9:38:58 AM PST


January 19, 2017 9:42:06 AM PST Success
Apply cluster-ware patch January 19, 2017 9:42:06 AM PST
January 19, 2017 10:02:32 AM PST Success
Updating GiHome version January 19, 2017 10:02:32 AM PST
January 19, 2017 10:02:38 AM PST Success
5. Verify that the server components were updated successfully by using the Component Command on page 1505
command. The Available Version column should indicate up-to-date.
To patch database home components
1. SSH to the DB System.

ssh -i <private_key_path> opc@<db_system_ip_address>


2. Log in as opc and then sudo to the root user. Use sudo su - with a hyphen to invoke the root user's profile,
which will set the PATH to the dbcli directory (/opt/oracle/dcs/bin).

login as: opc

[opc@dbsys ~]$ sudo su -


3. Get the ID of the database home by using the Dbhome Commands on page 1519 command.

[root@dbsys ~]# dbcli list-dbhomes


ID Name DB Version Home
Location
------------------------------------ ----------------- ----------
------------------------------------------
b727bf80-c99e-4846-ac1f-28a81a725df6 OraDB12102_home1 12.1.0.2 /u01/app/
orauser/product/12.1.0.2/dbhome_1
4. Update the database home components by using the Dbhome Commands on page 1519 command and providing
the ID from the previous step.

[root@dbsys ~]# dbcli update-dbhome -i b727bf80-c99e-4846-


ac1f-28a81a725df6
{
"jobId" : "31b38f67-f993-4f2e-b7eb-5bccda9901ae",
"status" : "Created",
"message" : null,
"reports" : [ ],
"createTimestamp" : "January 20, 2017 10:08:48 AM PST",
"resourceList" : [ ],
"description" : "DB Home Patching: Home Id is 52e2e799-946a-4339-964b-
c203dee35328",
"updatedTime" : "January 20, 2017 10:08:48 AM PST"
}

Note the job ID in the above example.


5. Check the job output by using the Job Commands on page 1526 command with the job ID.

[root@dbsys ~]# dbcli describe-job -i 31b38f67-f993-4f2e-b7eb-5bccda9901ae

Job details
----------------------------------------------------------------
ID: 31b38f67-f993-4f2e-b7eb-5bccda9901ae
Description: DB Home Patching: Home Id is b727bf80-c99e-4846-
ac1f-28a81a725df6
Status: Success
Created: January 20, 2017 10:08:48 AM PST


Message:

Task Name Start Time


End Time Status
----------------------------------------
----------------------------------- -----------------------------------
----------
Create Patching Repository Directories January 20, 2017 10:08:49 AM PST
January 20, 2017 10:08:49 AM PST Success
Download latest patch metadata January 20, 2017 10:08:49 AM PST
January 20, 2017 10:08:49 AM PST Success
Update System version January 20, 2017 10:08:49 AM PST
January 20, 2017 10:08:49 AM PST Success
Update Patching Repository January 20, 2017 10:08:49 AM PST
January 20, 2017 10:08:58 AM PST Success
Opatch updation January 20, 2017 10:08:58 AM PST
January 20, 2017 10:08:58 AM PST Success
Patch conflict check January 20, 2017 10:08:58 AM PST
January 20, 2017 10:12:00 AM PST Success
db upgrade January 20, 2017 10:12:00 AM PST
January 20, 2017 10:22:17 AM PST Success
6. Verify that the database home components were updated successfully by using the Component Command on page
1505 command. The Available Version column should indicate up-to-date.

Creating Databases
Note:

• This topic applies only to bare metal DB systems. Virtual machine DB systems can only contain a single database, which is created when the DB system is provisioned.
• Database backups on virtual machine DB systems can only be restored to
an existing bare metal DB system or a newly-created virtual machine or
bare metal DB system.
When you launch a bare metal DB system, an initial database is created in that system. After provisioning your
system, you can create additional databases at any time by using the Console or the API. The database edition will be
the edition of the DB system in which the database is created, and each new database is created in a separate database
home. You can create an empty database or reproduce a database by using a backup.
Options for Creating a Database from a Backup
When creating a new database using a backup stored in Object Storage as the source, you have the following backup
source options:
• Daily automatic backup. Requires that you have automatic backups enabled and an available backup to use. If
you are creating a database from an automatic backup, you can choose any level 0 weekly backup, or a level 1
incremental backup created after the most recent level 0 backup. For more information on automatic backups, see
Oracle Cloud Infrastructure Managed Backup Features on page 1437.
• On-demand full backup. See To create an on-demand full backup of a database on page 1440 for information
on creating an on-demand backup.
• Standalone backup. For more information, see Standalone Backups on page 1439.
• Last archived redo log backup. Requires that you have automatic backups enabled. This backup combines data
from the most recent daily automatic backup and data from archived redo logs, and represents the most current
backup available. The time of the last archived redo log backup is visible on the database details page in the Last
Backup Time field.


• Point-in-time out of place restore. Specify a timestamp to create a new copy of the database that includes data
up to the specified point in time. The timestamp must be earlier than or equal to the Last Backup Time displayed
on the database details page. Note the following limitations when performing a point-in-time out of place restore:
• The timestamp must be within the recovery window of the database
• The timestamp must be available within the database incarnation of the available automatic backups
• The timestamp cannot fall within two overlapping database incarnations
• The create database operation will fail if the database has undergone structural changes since the specified
timestamp. Structural changes include operations such as creating or dropping a tablespace.
• The create database operation cannot be started if another point-in-time database copy operation is in progress.
For information on configuring your DB system to back up to Object Storage, see Backing Up a Database to Oracle
Cloud Infrastructure Object Storage on page 1436.
Required IAM Policy
To use Oracle Cloud Infrastructure, you must be granted security access in a policy by an administrator. This access
is required whether you're using the Console or the REST API with an SDK, CLI, or other tool. If you get a message
that you don’t have permission or are unauthorized, verify with your administrator what type of access you have and
which compartment you should work in.
For administrators: The policy in Let database admins manage Oracle Cloud database systems on page 2158 lets the
specified group do everything with databases and related Database resources.
If you're new to policies, see Getting Started with Policies on page 2143 and Common Policies on page 2150. If
you want to dig deeper into writing policies for databases, see Details for the Database Service on page 2251.
Using the Console
To create a new database in an existing DB system
Note:

• The database that you create will be the same edition as the initial
database in your bare metal DB system.
• Virtual machine DB systems do not support the creation of additional
databases after system provisioning.
1. Open the navigation menu. Under Oracle Database, click Bare Metal, VM, and Exadata.
2. Choose your Compartment.
A list of DB systems is displayed.
3. In the list of DB systems, find the DB system in which you want to create the database, and then click its name to
display details about it.
4. Click Create Database.


5. In the Create Database dialog, enter the following:


• Database name: The name for the database. The database name must begin with an alphabetic character and
can contain a maximum of eight alphanumeric characters. Special characters are not permitted.
• Database image: Determines what Oracle Database version is used for the database. You can mix database
versions on the DB system, but not editions. By default, the latest Oracle-published database software image is
selected.
Click Change Database Image to use an older Oracle-published image or a custom database software image
that you have created in advance, then select an Image Type:
• Oracle Provided Database Software Images: These images contain generally available versions of
Oracle Database software.
• Custom Database Software Images: These images are created by your organization and contain
customized configurations of software updates and patches.
After choosing a software image, click Select to return to the Create Database dialog.
• PDB name: (Optional) For version 12.1.0.2 and later, you can specify the name of the pluggable database.
The PDB name must begin with an alphabetic character, and can contain a maximum of 8 alphanumeric
characters. The only special character permitted is the underscore ( _).
• Create administrator credentials: A database administrator SYS user will be created with the password you
supply.
• Username: SYS
• Password: Supply the password for this user. The password must meet the following criteria:
A strong password for SYS, SYSTEM, TDE wallet, and PDB Admin. The password must be 9 to 30
characters and contain at least two uppercase, two lowercase, two numeric, and two special characters. The
special characters must be _, #, or -. The password must not contain the username (SYS, SYSTEM, and so
on) or the word "oracle" either in forward or reversed order and regardless of casing.
• Confirm password: Re-enter the SYS password you specified.
• Use the administrator password for the TDE wallet: When this option is checked, the password entered
for the SYS user is also used for the TDE wallet. To set the TDE wallet password manually, uncheck this
option and enter the TDE wallet password.
• Select workload type: Choose the workload type that best suits your application:
• Online Transactional Processing (OLTP) configures the database for a transactional workload, with a
bias towards high volumes of random data access.
• Decision Support System (DSS) configures the database for a decision support or data warehouse
workload, with a bias towards large data scanning operations.
• Configure database backups: Specify the settings for backing up the database to Object Storage:

• Enable automatic backup: Select the check box to enable automatic incremental backups for this
database. If you are creating a database in a security zone compartment, you must enable automatic
backups.
• Backup Retention Period: If you enable automatic backups, you can choose one of the following preset
retention periods: 7 days, 15 days, 30 days, 45 days, or 60 days. The default selection is 30 days.
• Backup Scheduling: If you enable automatic backups, you can choose a two-hour scheduling window
to control when backup operations begin. If you do not specify a window, the six-hour default window
of 00:00 to 06:00 (in the time zone of the DB system's region) is used for your database. See Backup
Scheduling for more information.
6. Click Show Advanced Options to specify the following options for the database:
• Character set: The character set for the database. The default is AL32UTF8.
• National character set: The national character set for the database. The default is AL16UTF16.
• Tags: If you have permissions to create a resource, you also have permissions to apply free-form tags to that
resource. To apply a defined tag, you must have permissions to use the tag namespace. For more information
about tagging, see Resource Tags on page 213. If you are not sure if you should apply tags, skip this option
(you can apply tags later) or ask your administrator.


7. Click Create Database.


When the database creation is complete, the status changes from Provisioning to Available.
To create a database from a backup in an existing DB system
Note:

Virtual machine DB systems do not support the creation of additional databases after system provisioning. To create a new virtual machine DB system from a backup, see To create a DB system from a backup on page 1378.
You can create a new database from a database backup. See Options for Creating a Database from a Backup on page
1415 for details on backup source options.
Before you begin, note the following:
• When you create a database from a backup, you can choose a different DB system and compartment. However,
the availability domain will be the same as where the source database is hosted.
Tip:

You can use the GetBackup API to obtain information about the
availability domain of the backup.
• The DB system you specify must support the same type as the system from which the backup was taken. For
example, if the backup is from a single-node database, then the target DB system must be a single-node shape.
• The version of the target DB system must be the same or higher than the version of the backup.
• If the backup being used to create a database is in a security zone compartment, the database cannot be created
in a compartment that is not in a security zone. See the Security Zone Policies topic for a full list of policies that
affect Database service resources.
1. Open the navigation menu. Under Oracle Database, click Bare Metal, VM, and Exadata.
2. Choose your Compartment.
A list of DB systems is displayed.
3. Navigate to the backup or standalone backup you want to use to create the new DB system:
Tip:

If you are creating a database from an automatic backup, you may choose
any level 0 weekly backup, or a level 1 incremental backup created
after the most recent level 0 backup. For more information on automatic


backups, see Oracle Cloud Infrastructure Managed Backup Features on page 1437.
To select a daily automatic backup or on-demand full backup as the source
a. Click the DB system name that contains the specific database to display the DB System Details page.
b. From the Databases list, click the database name associated with the backup you want to use.
c. Find your desired backup in the Backups list. If you don't see the backups list on the database details page,
click Backups in the Resources menu.
d. Click the Actions icon (three dots) for the backup, and then click Create Database.
To select the last archived redo log automatic backup as the source
a. Find the DB system where the database is located, and click the system name to display details about it.
b. Find the database associated with the backup you wish to use, and click its name to display details about it.
c. On the database details page, click Create Database from Last Backup.
d. In the Create Database from Backup dialog, select Create database from last backup.
To specify a timestamp for a point-in-time copy of the source
a. Click the DB system name that contains the specific database to display the DB System Details page.
b. From the Databases list, click the database name associated with the backup data you want to use as the
source for the initial database in your new DB system.
c. On the database details page, click Create Database from Last Backup.
d. In the Create Database from Backup dialog, select Create database from specified timestamp.
To select a standalone backup as the source
a. Click Standalone Backups under Bare Metal, VM, and Exadata.
b. In the list of standalone backups, find the backup you want to use to create the database.
c. Click the Actions icon (three dots) for the backup you are interested in, and then click Create Database.
4. In the Create Database from Backup dialog, enter the following:
• DB System: The DB system in which you want to create the database. You must have the Use Existing DB
System radio button selected to see the drop-down list of DB system choices.
Note:

You cannot create a new database in the same DB system in which the
database used to create the backup resides.
• Database Name: The name for the database. The database name must begin with an alphabetic character and
can contain a maximum of eight alphanumeric characters. Special characters are not permitted.
• Database Admin Password:
A strong password for SYS, SYSTEM, TDE wallet, and PDB Admin. The password must be 9 to 30
characters and contain at least two uppercase, two lowercase, two numeric, and two special characters. The
special characters must be _, #, or -. The password must not contain the username (SYS, SYSTEM, and so on)
or the word "oracle" either in forward or reversed order and regardless of casing.
• Confirm Database Admin Password: Re-enter the Database admin password you specified.
• Password for Transparent Data Encryption (TDE) Wallet or RMAN Encryption:
Enter either the TDE wallet password or the RMAN encryption password for the backup, whichever is
applicable. The TDE wallet password is the SYS password provided when the database was created by using
the Oracle Cloud Infrastructure Console, API, or CLI. The RMAN encryption password is typically required
instead if the password was subsequently changed manually.
5. Click Create Database.


Using the API


For information about using the API and signing requests, see REST APIs on page 4409 and Security Credentials on
page 181. For information about SDKs, see Software Development Kits and Command Line Interface on page 4262.
Use these API operations to create databases on bare metal DB systems.
Database homes:
• ListDbHomes
• GetDbHome
• CreateDbHome
• DeleteDbHome
Databases:
• CreateDatabase
For the complete list of APIs for the Database service, see Database Service API.

Managing Databases
This topic describes the following administrative tasks for databases in bare metal and virtual machine DB systems:
• Updating the administrator and TDE wallet passwords of a database in a bare metal or virtual machine DB system
• Deleting a database in a DB system (bare metal systems only)
Note:

Virtual machine DB systems can only contain a single database, which is


created when the DB system is provisioned. To delete a database in a virtual
machine DB system, terminate the virtual machine DB system resource.
Required IAM Policy
To use Oracle Cloud Infrastructure, you must be granted security access in a policy by an administrator. This access
is required whether you're using the Console or the REST API with an SDK, CLI, or other tool. If you get a message
that you don’t have permission or are unauthorized, verify with your administrator what type of access you have and
which compartment you should work in.
For administrators: The policy in Let database admins manage Oracle Cloud database systems on page 2158 lets the
specified group do everything with databases and related Database resources.
If you're new to policies, see Getting Started with Policies on page 2143 and Common Policies on page 2150. If
you want to dig deeper into writing policies for databases, see Details for the Database Service on page 2251.
Using the Console

To manage administrator and TDE wallet passwords


1. Open the navigation menu. Under Oracle Database, click Bare Metal, VM, and Exadata.
2. Choose your Compartment.
3. Navigate to the database:
a. Click the DB system name that contains the specific database to display the DB System Details page.
b. From the Databases list, click the database name you want to administer.
4. On the Database Details page, click More Actions, then Manage Passwords.
5. In the Manage Passwords dialog click Update Administrator Password or Update TDE Wallet Password,
depending on which password you want to update.


6. Enter the new password:



• For the administrator password, enter the new password in both the New administrator password and
Confirm administrator password fields.
• For the TDE wallet password, enter the current wallet password in the Enter existing TDE wallet password
field, then enter the new password in both the New TDE wallet password and the Confirm TDE wallet
password fields.
7. Click Apply to update your chosen password.
To terminate a database
When terminating a database in a bare metal DB system, you will be given the chance to back up the database prior
to terminating it. This creates a standalone backup that can be used to create a database later. Oracle recommends that
you create this final backup for any production (non-test) database.
Note:

Terminating a database removes all automatic incremental backups of the


database from Oracle Cloud Infrastructure Object Storage. However, all
full backups that were created on demand, including your final backup, will
persist as standalone backups.
You cannot terminate a database that has the primary role in a Data Guard association. To terminate it, first switch it over to the standby role.
For information on terminating a database contained in a virtual machine DB system, see To terminate a DB system
on page 1387. For virtual machine systems, a database can only be terminated as part of the terminate DB system
operation.
1. Open the navigation menu. Under Oracle Database, click Bare Metal, VM, and Exadata.
2. Choose your Compartment.
A list of DB systems is displayed.
3. In the list of DB systems, find the DB system that contains the database you want to terminate, and then click its
name to display details about it.
4. In the list of databases, find the database you want to terminate, and then click its name to display details about it.
5. Click Actions, and then click Terminate.
6. In the confirmation dialog, indicate whether you want to back up the database before terminating it, and type the
name of the database to confirm the termination.
7. Click Terminate Database.
The database's status indicates Terminating.
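If you are scripting around this flow, the following hedged OCI CLI sketch polls the database's lifecycle state while termination is in progress. The OCID is a placeholder, and the JSON key name follows the CLI's usual naming and should be verified with the CLI help:

# Check the lifecycle state of the database (OCID is a placeholder)
oci db database get --database-id ocid1.database.oc1..<unique_ID> --query 'data."lifecycle-state"' --raw-output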
Using the API
For information about using the API and signing requests, see REST APIs on page 4409 and Security Credentials on
page 181. For information about SDKs, see Software Development Kits and Command Line Interface on page 4262.
Use these API operations to manage databases.
Database homes:
• ListDbHomes
• GetDbHome
• UpdateDbHome
• DeleteDbHome
Databases:
• ListDatabases
• GetDatabase
• UpdateDatabase


Note:

See the DeleteDbHome API for information on deleting databases on bare metal DB systems. See the TerminateDbSystem API for information on deleting virtual machine DB systems, including the database contained in the system.
For the complete list of APIs for the Database service, see Database Service API.
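For example, the ListDatabases and GetDatabase operations listed above are also exposed through the OCI CLI. This is only a sketch: the OCIDs are placeholders, and parameter names should be confirmed with the CLI help:

# List the databases in a DB system (OCIDs are placeholders)
oci db database list --compartment-id ocid1.compartment.oc1..<unique_ID> --db-system-id ocid1.dbsystem.oc1..<unique_ID>

# Show the details of a single database
oci db database get --database-id ocid1.database.oc1..<unique_ID>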
Additional Information
See the following topics for additional information on administering database resources in Oracle Cloud
Infrastructure:
• Backing Up a Database on page 1436
• Recovering a Database on page 1447
• Patching a DB System on page 1408
• To patch a database on page 1423
• To view the patch history of a database on page 1423
• Using the Console to Clone a Virtual Machine DB System on page 1390

Patching a Database
This topic describes the procedures to apply patches to databases in bare metal and virtual machine DB systems
by using the Console and the API. For information on patching DB systems and to see a list of currently available
database patches, see Patching a DB System on page 1408.
Note:

This topic is not applicable to Exadata Cloud Service instances. For


information and instructions on Exadata patching in Oracle Cloud
Infrastructure, see the following topics:
• Patching an Exadata Cloud Service Instance on page 1285
• Patching an Exadata Cloud Service Instance Manually on page 1290.
Required IAM Policy
You must have the required type of access in a policy to use Oracle Cloud Infrastructure, whether you're using the
Console or the REST API with an SDK, CLI, or other tool. When running a command, if you see an error message
that says you don’t have permission or are unauthorized, contact your administrator. Confirm the type of access
you've been granted, and which compartment you should work in.
For administrators: The policy in Let database admins manage Oracle Cloud database systems on page 2158 enables
the specified group to do everything with databases and related Database resources.
If you're new to policies, see Getting Started with Policies on page 2143 and Common Policies on page 2150.
Details about writing policies for databases are located in Details for the Database Service on page 2251.
About Patching Databases
For database patching, always patch a DB system before you patch the databases within that system. The Console
displays the latest DB system patch and the previous patch. You can use either of these patches, but we recommend
using the latest patch when possible. See Patching a DB System on page 1408 for more information.
You can also patch your database using a custom database software image. See Oracle Database Software Images on
page 1568 for more information on creating and working with software images.
For a list of currently available database patches, see Currently Available Patches on page 1409.
Applying Interim Patches Using a Database Software Image
You can use custom database software images to easily apply interim (one-off) patches to virtual machine and bare
metal DB systems in the Console. See the following topics for more information:


• Oracle Database Software Images on page 1568: An overview of the database software image feature.
• To create a database software image on page 1569: Provides information on how to create a custom image that
includes one or more interim patches.
• To patch a database on page 1423: Once you have created a database software image with your interim patches,
follow the instructions in this topic to patch using the image.
Using the Console

To patch a database
1. Open the navigation menu. Under Oracle Database, click Bare Metal, VM, and Exadata.
2. Choose your Compartment.
A list of DB systems is displayed.
3. Find the DB system where the database is located, and click the system name to display details about it.
A list of databases is displayed.
4. Find the database on which you want to perform the patch operation, and click its name to display details about it.
5. Under Resources, click Updates.
The Oracle Provided Database Software Images tab displays generally-available Oracle Database software
images that you can use to patch your database. Oracle images that can be used for patching have the update type
of "Patch".
The Custom Database Software Images tab allows you to select a database software image that you have
created in advance. Use the Select a Compartment selector to specify the compartment that contains the database
software image. Custom images that can be used for patching have the update type of "Patch".
6. Review the list of database software images that you can use to patch your database. We recommend using the
latest database software image patch when possible.
7. Click the Actions icon (three dots) for the patch you are interested in, and then select one of the following actions:
• Precheck: Check for any prerequisites to ensure that the patch can be successfully applied.
• Apply: Performs the precheck, and then applies the patch.
8. Confirm when prompted.
9. In the list of patches, click the patch name to display its patch request and monitor the progress of the patch
operation.
While a patch is being applied, the patch's status displays as Applying and the database's status displays as
Updating. If the operation completes successfully, the patch's status changes to Applied and the database's status
changes to Available.
To view the patch history of a database
Each patch history entry represents an attempted patch operation and indicates whether the operation was successful
or failed. You can retry a failed patch operation. Repeating an operation results in a new patch history entry.
Note:

Patch history views in the Console do not show patches that were applied by
using command line tools like dbcli or the OPatch utility.
1. Open the navigation menu. Under Oracle Database, click Bare Metal, VM, and Exadata.
2. Choose your Compartment.
A list of DB systems is displayed.
3. To display details about the DB system where the database is located, click the system name.
A list of databases is displayed.
4. To display details about the database you are interested in, locate the database name and click it.
5. Under Resources, click Update History.
The history of patch and upgrade operations for that database is displayed.


Using the API


For information about using the API and signing requests, see REST APIs on page 4409 and Security Credentials on
page 181. For information about SDKs, see Software Development Kits and Command Line Interface on page 4262.
Use the following APIs to manage database patching.
• ListDbHomePatches
• ListDbHomePatchHistoryEntries
• GetDbHomePatch
• GetDbHomePatchHistoryEntry
• UpdateDbHome
• UpdateDatabase
For the complete list of APIs for the Database service, see Database Service API.
Applying Interim Patches Manually
Note:

This topic applies only to database homes in 1-node and 2-node RAC DB
systems.
To apply an interim patch (previously known as a "one-off" patch) to fix a specific defect, follow the procedure in this
section. Use the OPatch utility to apply an interim patch to a database home.
To apply an interim patch to a database home
Note:

In the procedure example, the database home directory is /u02/app/oracle/


product/12.1.0.2/dbhome_1 and the patch number is 26543344.
1. Obtain the applicable interim patch from My Oracle Support.
2. Review the information in the patch README.txt file. This file might contain additional and/or custom
instructions to follow to apply the patch successfully.
3. Use SCP or SFTP to place the patch on your target database.
4. Shut down each database that is running in the database home.

srvctl stop database -db <db name> -stopoption immediate -verbose


5. Set the Oracle home environment variable to point to the target Oracle home.

sudo su - oracle
export ORACLE_HOME=/u02/app/oracle/product/12.1.0.2/dbhome_1
6. Change to the directory where you placed the patch, and unzip the patch.

cd <work_dir_where_patch_is_stored>
unzip p26543344_122010_Linux-x86-64.zip
7. Change to the directory with the unzipped patch, and check for conflicts.

cd 26543344
$ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -ph ./
8. Apply the patch.

$ORACLE_HOME/OPatch/opatch apply


9. Verify that the patch was applied successfully.

$ORACLE_HOME/OPatch/opatch lsinventory -detail -oh $ORACLE_HOME


10. If the database home contains databases, restart them.

$ORACLE_HOME/bin/srvctl start database -db <db_name>

Otherwise, run the following command as root user.

# /u01/app/<db_version>/grid/bin/setasmgidwrap o=/u01/app/oracle/product/<db_version>/dbhome_1/bin/oracle
11. If the readme indicates that the patch has a sqlpatch component, run the datapatch command against each
database.
Before you run datapatch, ensure that all pluggable databases (PDBs) are open. To open a PDB, you can use
SQL*Plus to execute ALTER PLUGGABLE DATABASE <pdb_name> OPEN READ WRITE; against the
PDB.

$ORACLE_HOME/OPatch/datapatch
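For reference, the following is a minimal SQL*Plus sketch of the PDB check described in step 11; run it before datapatch, and note that the PDB name is a placeholder:

sqlplus / as sysdba
SQL> SELECT name, open_mode FROM v$pdbs;
SQL> ALTER PLUGGABLE DATABASE <pdb_name> OPEN READ WRITE;
SQL> EXIT

Any PDB that does not show OPEN_MODE as READ WRITE in the query output should be opened before you run datapatch.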

Upgrading a Database
This topic describes the procedures to upgrade databases in bare metal and virtual machine DB systems by using the
Console and the API. Currently upgrades to Oracle Database 19c (Long Term Release) are available.
Note:

This topic is not applicable to Exadata Cloud Service instances.


Required IAM Policy
You must have the required type of access in a policy to use Oracle Cloud Infrastructure, whether you're using the
Console or the REST API with an SDK, CLI, or other tool. When running a command, if you see an error message
that says you don’t have permission or are unauthorized, contact your administrator. Confirm the type of access
you've been granted, and which compartment you should work in.
For administrators: The policy in Let database admins manage Oracle Cloud database systems on page 2158 enables
the specified group to do everything with databases and related Database resources.
If you're new to policies, see Getting Started with Policies on page 2143 and Common Policies on page 2150.
Details about writing policies for databases are located in Details for the Database Service on page 2251.
Prerequisites
The following are required in order to upgrade a database on a bare metal or virtual machine DB system:
• The DB system must use Oracle Linux 7 (OL7)
• If your DB System uses ASM storage management software, the system must use Oracle Grid Infrastructure 19c
For databases on DB systems not meeting the minimum software version requirements, you can upgrade only after
using the backup and restore operations to restore the database to a DB system that uses OL7 and version 19c Grid
Infrastructure. See the following topics for more information on restoring a database to another DB system by using
an on-demand full backup:
• Backing Up a Database to Oracle Cloud Infrastructure Object Storage on page 1436
• To create an on-demand full backup of a database on page 1440
• To create a new database in an existing DB system on page 1416
• To create a DB system from a backup on page 1378
Your Oracle database must be configured with the following settings in order to upgrade:


• The database must be in archivelog mode


• The database must have flashback enabled
See the Oracle Database documentation for your database's release version to learn more about these settings.
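As a quick, hedged check of both settings, you can query the database from SQL*Plus on the DB system; LOG_MODE should return ARCHIVELOG and FLASHBACK_ON should return YES:

sqlplus / as sysdba
SQL> SELECT log_mode, flashback_on FROM v$database;
SQL> EXIT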
About Upgrading a Database
For database software version upgrades, note the following:
• Database upgrades involve some database downtime. Keep this in mind when scheduling your upgrade.
• Oracle recommends that you back up your database and test the new software version on a test system before you
upgrade. See Backing Up a Database on page 1436 for information on creating an on-demand manual backup.
• Oracle recommends running an upgrade precheck operation for your database prior to attempting an upgrade so
that you can discover any issues that need mitigation prior to the time you plan to perform the upgrade.
• If your database uses Data Guard, you will need to disable or remove the Data Guard association prior to
upgrading.
• An upgrade operation cannot take place while an automatic backup operation is underway. Before upgrading,
Oracle recommends disabling automatic backups and performing a manual backup. See To configure automatic
backups for a database on page 1439 and To create an on-demand full backup of a database on page 1440 for
more information.
• After upgrading, you cannot use automatic backups taken prior to the upgrade to restore the database to an earlier
point in time.
• If you are upgrading a database that uses version 11.2 software, the resulting version 19c database will be a non-container database.
• The upgrade operation cannot be performed using the dbcli utility.

How the Upgrade Operation Is Performed by the Database Service


During the upgrade process, the Database service does the following:
• Executes an automatic precheck. This allows the system to identify issues needing mitigation and to stop the
upgrade operation.
• Sets a guaranteed restore point, enabling it to perform a flashback in the event of an upgrade failure.
• Creates a new Oracle Database Home based on the specified Oracle-published or custom database software image.
• Runs the Database Upgrade Assistant (DBUA) software to perform the upgrade.

Rolling Back an Unsuccessful Upgrade (Oracle Database Enterprise Editions Only)


If your upgrade does not complete successfully on a system using one of the Enterprise software editions, you have
the option of performing a rollback. A rollback resets your database to the state prior to the upgrade. All changes
to the database made during and after the upgrade will be lost. The rollback option is provided in a banner message
displayed on the database details page of a database following an unsuccessful upgrade operation. See To roll back a
failed database upgrade on page 1428 for more information.

After Your Upgrade Is Complete


After a successful upgrade, note the following:
• Oracle recommends that you remove the old Oracle Database Home using the dbcli utility. See Dbhome
Commands on page 1519 in the dbcli reference for more information.
• Check that automatic backups are enabled for the database if you disabled them prior to upgrading. See To
configure automatic backups for a database on page 1439 for more information.
• Edit the Oracle Database COMPATIBLE parameter to reflect the new Oracle Database software version, as sketched after this list. See What Is Oracle Database Compatibility? for more information.
• On virtual machine DB Systems, ensure that the .bashrc file in the home directory of the Oracle User has been
updated to point to the 19c Database Home.
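The COMPATIBLE update mentioned above can be made from SQL*Plus; this is a hedged sketch, and '19.0.0' is an example target value:

sqlplus / as sysdba
SQL> SHOW PARAMETER compatible
SQL> ALTER SYSTEM SET COMPATIBLE = '19.0.0' SCOPE=SPFILE;
SQL> EXIT

Because COMPATIBLE can only be changed with SCOPE=SPFILE, restart the database (for example, with srvctl) for the new value to take effect, and keep in mind that COMPATIBLE cannot be lowered again after it has been raised.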


Using the Console


You can use the Console to:
• Upgrade your database
• View the update history of your database
• Roll back an unsuccessful upgrade
Oracle recommends that you use the precheck action to ensure that your database has met the requirements for the
upgrade operation.
To upgrade a database
1. Open the navigation menu. Select Bare Metal, VM, and Exadata, then select DB Systems.
2. Choose your Compartment.
A list of DB systems is displayed.
3. Find the DB system where the database is located, and click the system name to display details about it.
A list of databases is displayed.
4. Find the database you want to upgrade, and click its name to display details about it.
5. Under Resources, click Updates.
The Oracle Provided Database Software Images tab displays generally-available Oracle Database software
images that you can use to upgrade your database to a higher major release version. Oracle images that can be
used for upgrading have the update Type of "Upgrade". Note that only the most recent patch level of Oracle
Database 19c and the next-most recent patch level can be used for the upgrade operation.
The Custom Database Software Images tab allows you to select a database software image that you have
created in advance. Use the Select a Compartment selector to specify the compartment that contains the database
software image. Custom images that can be used for upgrading have the update Type of "Upgrade". Note that
only the most recent patch level of Oracle Database 19c and the next-most recent patch level can be used for the
upgrade operation.
6. Review the list of Oracle provided or custom database software images that you can use to upgrade your database,
and identify an image you want to use for the upgrade.
7. Click Actions (three dots) on the row of the image you want to use for the upgrade, and then select one of the
following actions:

• Precheck: Check for any prerequisites to ensure that the upgrade can be successfully applied. Oracle
recommends that you manually perform a precheck operation prior to upgrading to ensure that your database is
ready to be upgraded.
• Upgrade: Applies the selected database upgrade.
8. Confirm when prompted.
9. While an upgrade is being applied, the database's status displays as Upgrading. If the operation completes
successfully, the database's status changes to Available.
To view the upgrade history of a database
1. Open the navigation menu. Select Bare Metal, VM, and Exadata, then select DB Systems.
2. Choose your Compartment.
A list of DB systems is displayed.
3. To display details about the DB system where the database is located, click the system name.
A list of databases is displayed.
4. To display details about the database you are interested in, locate the database name and click it.
5. Under Resources, click Update History.
The history of patch and upgrade operations for that database is displayed.


To roll back a failed database upgrade


Note:

The upgrade rollback operation is only available for Enterprise software


edition databases that were unsuccessfully upgraded and are currently in the
"Failed" lifecycle state.
1. Open the navigation menu. Select Bare Metal, VM, and Exadata, then select DB Systems.
2. Choose your Compartment.
A list of DB systems is displayed.
3. Find the DB system where the database is located, and click the system name to display details about it.
A list of databases is displayed.
4. Find the database that was unsuccessfully upgraded, and click its name to display details about it. The database
should display a banner at the top of the details page that includes a Rollback button.
5. Click Rollback. In the Confirm rollback dialog, confirm that you want to initiate a rollback to the previous
Oracle Database version by clicking Rollback.
Using the API
For information about using the API and signing requests, see REST APIs on page 4409 and Security Credentials on
page 181. For information about SDKs, see Software Development Kits and Command Line Interface on page 4262.
Use the following APIs to manage database upgrades:
• ListDatabaseUpgradeHistoryEntries
• UpgradeDatabase
For the complete list of APIs for the Database service, see Database Service API.
Note:

When using the UpgradeDatabase API to upgrade a database on a virtual


machine or bare metal DB system, you must specify either DB_VERSION or
DB_SOFTWARE_IMAGE as the upgrade source.

Monitoring a Database
This topic explains how to set up one of the following:
• Enterprise Manager Express console to monitor a version 12.1.0.2 or later database
• Enterprise Manager Database Control console to monitor a version 11.2.0.4 database
Each console is a web-based database management tool inside the Oracle database. You can use the console to
perform basic administrative tasks such as managing user security, memory, and storage, and view performance
information.
Required IAM Policy
Some of the procedures below require permission to create or update security lists. For more information about
security list policies, see Security Lists on page 2876.
Monitoring a Database with Enterprise Manager Express
On 1- and 2-node RAC DB Systems, by default, the EM Express console is not enabled on version 18.1.0.0, 12.2.0.1,
and 12.1.0.2 databases. You can enable it for an existing database as described below, or you can enable it when you
create a database by using the command described in Database Commands on page 1506 with the -co parameter.
You must also update the security list and iptables for the DB system as described later in this topic.
When you enable the console, you'll set the port for the console. The procedure below uses port 5500, but each
additional console enabled on the same DB system will have a different port.


To enable the EM Express console and determine its port number


1. SSH to the DB system, log in as opc, sudo to the oracle user, and log in to the database as SYS.

sudo su - oracle
. oraenv
<provide the database SID at the prompt>
sqlplus / as sysdba
2. Do one of the following:
• To enable the console and set its port, use the following command.

exec DBMS_XDB_CONFIG.SETHTTPSPORT(<port>);

For example:

SQL> exec DBMS_XDB_CONFIG.SETHTTPSPORT(5500);

PL/SQL procedure successfully completed.

• To determine the port for a previously enabled console, use the following command.

select dbms_xdb_config.getHttpsPort() from dual;

For example:

SQL> select dbms_xdb_config.getHttpsPort() from dual;

DBMS_XDB_CONFIG.GETHTTPSPORT()
------------------------------
5500
3. Return to the operating system by typing exit and then confirm that the listener is listening on the port:

lsnrctl status | grep HTTP

(DESCRIPTION=(ADDRESS=(PROTOCOL=tcps)(HOST=xxx.us.oracle.com)(PORT=5500))
(Security=(my_wallet_directory=/u01/app/oracle/admin/prod/xdb_wallet))
(Presentation=HTTP)(Session=RAW))
4. If you're using a 2-node RAC DB system, see To set the required permissions on a 2-node RAC DB system on
page 1429.
5. Open the console's port as described in Opening Ports on the DB System on page 1434.
6. Update the security list for the console's port as described in Updating the Security List for the DB System on
page 1435.
To set the required permissions on a 2-node RAC DB system
If you're using a 2-node RAC DB system, you'll need to add read permissions for the asmadmin group on the wallet
directory on both nodes in the system.
1. SSH to one of the nodes in the DB system, log in as opc, and sudo to the grid user.

[opc@dbsysHost1 ~]$ sudo su - grid


[grid@dbsysHost1 ~]$ . oraenv
ORACLE_SID = [+ASM1] ?
The Oracle base has been set to /u01/app/grid
2. Get the location of the wallet directory, shown in the command output below.

[grid@dbsysHost1 ~]$ lsnrctl status | grep xdb_wallet


(DESCRIPTION=(ADDRESS=(PROTOCOL=tcps)
(HOST=dbsysHost1.sub04061528182.dbsysapril6.oraclevcn.com)(PORT=5500))
(Security=(my_wallet_directory=/u01/app/oracle/admin/dbsys12_phx3wm/
xdb_wallet))(Presentation=HTTP)(Session=RAW))
3. Return to the opc user, switch to the oracle user, and change to the wallet directory.

[opc@dbsysHost1 ~]$ sudo su - oracle


[oracle@dbsysHost1 ~]$ cd /u01/app/oracle/admin/dbsys12_phx3wm/xdb_wallet
4. List the directory contents and note the permissions.

[oracle@dbsysHost1 xdb_wallet]$ ls -ltr


total 8
-rw------- 1 oracle asmadmin 3881 Apr 6 16:32 ewallet.p12
-rw------- 1 oracle asmadmin 3926 Apr 6 16:32 cwallet.sso
5. Change the permissions:

[oracle@dbsysHost1 xdb_wallet]$ chmod 640 /u01/app/oracle/admin/dbsys12_phx3wm/xdb_wallet/*
6. Verify that read permissions were added.

[oracle@dbsysHost1 xdb_wallet]$ ls -ltr


total 8
-rw-r----- 1 oracle asmadmin 3881 Apr 6 16:32 ewallet.p12
-rw-r----- 1 oracle asmadmin 3926 Apr 6 16:32 cwallet.sso
7. Important! Repeat the steps above on the other node in the cluster.
To connect to the EM Express console
After you've enabled the console and opened its port in the security list and iptables, you can connect as follows:
1. From a web browser, connect to the console using the following URL format:

https://<ip_address>:<port>/em

For example, https://129.145.0.164:5500/em


Use the DB system's private or public IP address depending on your network configuration.
Use the private IP address to connect to the system from your on-premises network, or from within the virtual
cloud network (VCN). This includes connecting from an on-premises host through a VPN or FastConnect to your VCN, or from another host in the same VCN. Use the system's public IP address to
connect to the system from outside the cloud (with no VPN). You can find the IP addresses in the Oracle Cloud
Infrastructure Console as follows:
• Cloud VM clusters (new resource model): On the Exadata VM Cluster Details page, click Virtual Machines
in the Resources list.
• DB systems: On the DB System Details page, click Nodes in the Resources list.
The values are displayed in the Public IP Address and Private IP Address & DNS Name columns of the table
displaying the Virtual Machines or Nodes of the Exadata Cloud Service instance.


2. A login page is displayed and you can log in with any valid database credentials.

The Database Home page is displayed.

To learn more about EM Express, see Introduction to Oracle Enterprise Manager Database Express.
Note:

If you're using a 1-node DB system, and you are unable to connect to the EM
Express console, see Database Known Issues.
Monitoring a Database with Enterprise Manager Database Control
By default, the Enterprise Manager Database Control console is not enabled on version 11.2.0.4 databases. You can
enable the console:
• when you create a database by using the command described in Database Commands on page 1506 with the -co parameter
• for an existing database as described here.
Port 1158 is the default port used for the first console enabled on the DB system, but each additional console enabled
on the DB system will have a different port.


Note:

For a version 11.2.0.4 database on a 2-node RAC DB system, see To enable


the console for a version 11.2.0.4 database on a multi-node DB system on
page 1433.
To determine the port for the Enterprise Manager Database Control console
1. SSH to the DB system, log in as opc, and sudo to the oracle user.

sudo su - oracle
. oraenv
<provide the database SID at the prompt>
2. Use the following command to get the port number.

emctl status dbconsole

The port is in the URL, as shown in the following example:

[oracle@dbsys ~]$ emctl status dbconsole


Oracle Enterprise Manager 11g Database Control Release 11.2.0.4.0
Copyright (c) 1996, 2013 Oracle Corporation. All rights reserved.
https://dbprod:1158/em/console/aboutApplication
Oracle Enterprise Manager 11g is running.
------------------------------------------------------------------
Logs are generated in directory /u01/app/oracle/product/11.2.0.4/dbhome_2/
dbprod_db11/sysman/log

3. Open the console's port as described in Opening Ports on the DB System on page 1434.
4. Update the security list for the console's port as described in Updating the Security List for the DB System on
page 1435.
To connect to the Enterprise Manager Database Control console
After you've enabled the console and opened its port in the security list and iptables, you can connect as follows:
1. From a web browser, connect to the console using the following URL format:

https://<ip_address>:<port>/em

For example, https://129.145.0.164:1158/em


Use the DB system's private or public IP address depending on your network configuration.
Use the private IP address to connect to the system from your on-premises network, or from within the virtual
cloud network (VCN). This includes connecting from an on-premises host through a VPN or FastConnect to your VCN, or from another host in the same VCN. Use the system's public IP address to
connect to the system from outside the cloud (with no VPN). You can find the IP addresses in the Oracle Cloud
Infrastructure Console as follows:
• Cloud VM clusters (new resource model): On the Exadata VM Cluster Details page, click Virtual Machines
in the Resources list.
• DB systems: On the DB System Details page, click Nodes in the Resources list.
The values are displayed in the Public IP Address and Private IP Address & DNS Name columns of the table
displaying the Virtual Machines or Nodes of the Exadata Cloud Service instance.
2. A login page will be displayed and you can log in with any valid database credentials.
To learn more about Enterprise Manager Database Control, see Introduction to Oracle Enterprise Manager Database
Control.


To enable the console for a version 11.2.0.4 database on a multi-node DB system


A few extra steps are required to enable the console for a version 11.2.0.4 database on a multi-node DB system.
Configure SSH Equivalency Between the Two Nodes
You'll create SSH keys on each node and copy the key to the other node, so that each node has the keys for both
nodes. The following procedure uses the sample names node1 and node2.
1. SSH to node1, log in as opc, and sudo to the oracle user.

sudo su - oracle
2. Create a directory called .ssh, set its permissions, create an RSA key, and add the public key to the
authorized_keys file.

mkdir ~/.ssh
chmod 700 ~/.ssh
ssh-keygen -t rsa
cat ~/.ssh/id_rsa.pub > ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys
3. Repeat the previous steps on the other node in the cluster.
4. On each node, add the id_rsa.pub key for the other node to the authorized_keys file.
When you're done, you should see both keys in authorized_keys on each node.
5. On node1, create the known_hosts file by doing the following:
• SSH to node1 and reply yes to the authentication prompt.
• SSH to node2 and reply yes to the authentication prompt.
6. On node2, create the known_hosts file by doing the following:
• SSH to node2 and reply yes to the authentication prompt.
• SSH to node1 and reply yes to the authentication prompt.
7. On node1, verify that SSH equivalency is now configured by using the following Cluster Verification Utility
(CVU) command.

cluvfy stage -pre crsinst -n all -verbose

Configure the Console


1. On node1, create a file called emca.rsp with the following entries.

DB_UNIQUE_NAME=<db_unique_name>
SERVICE_NAME=<db_unique_name>.<db_domain>
PORT=<scan listener port>
LISTENER_OH=$GI_HOME
SYS_PWD=<admin password>
DBSNMP_PWD=<admin password>
SYSMAN_PWD=<admin password>
CLUSTER_NAME=<cluster name> <=== to get the cluster name, run:
$GI_HOME/bin/cemutlo -n
ASM_OH=$GI_HOME
ASM_SID=+ASM1
ASM_PORT=<asm listener port>
ASM_USER_NAME=ASMSNMP
ASM_USER_PWD=<admin password>
2. On node1, run Enterprise Manager Configuration Assistant (EMCA) using the emca.rsp file as input.

$ORACLE_HOME/bin/emca -config dbcontrol db -repos create -cluster -silent -respFile <location of response file above>


3. On node2, configure the console so the agent in node1 reports to the console in node1, and the agent in node2
reports to the console in node2.

$ORACLE_HOME/bin/emca -reconfig dbcontrol -silent -cluster -EM_NODE <node2 host> -EM_NODE_LIST <node2 host> -DB_UNIQUE_NAME <db_unique_name> -SERVICE_NAME <db_unique_name>.<db_domain>
4. On each node, verify that the console is working properly.

$ export ORACLE_UNQNAME=<db_unique_name>

$ emctl status agent


Oracle Enterprise Manager 11g Database Control Release 11.2.0.4.0
Copyright (c) 1996, 2013 Oracle Corporation. All rights reserved.
---------------------------------------------------------------
Agent Version : 10.2.0.4.5
OMS Version : 10.2.0.4.5
Protocol Version : 10.2.0.4.5
Agent Home : /u01/app/oracle/product/11.2.0.4/
dbhome_x/<host>_<db_unique_name>
Agent binaries : /u01/app/oracle/product/11.2.0.4/dbhome_x
Agent Process ID : 26194
Parent Process ID : 25835
Agent URL : https://<node host>:1831/emd/main
Repository URL : https://<node host>:5501/em/upload/
Started at : 2017-03-15 20:20:34
Started by user : oracle
Last Reload : 2017-03-15 20:27:00
Last successful upload : 2017-03-15 21:06:36
Total Megabytes of XML files uploaded so far : 22.25
Number of XML files pending upload : 0
<=== should be zero
Size of XML files pending upload(MB) : 0.00
Available disk space on upload filesystem : 42.75%
Data channel upload directory : /u01/app/oracle/
product/11.2.0.4/dbhome_x/<host>_<db_unique_name>/sysman/recv
Last successful heartbeat to OMS : 2017-03-15 21:08:45
---------------------------------------------------------------

Update iptables and Security List


1. On each node, edit iptables to open the console's port as described in Opening Ports on the DB System on page
1434.
2. Update the security list for the console's port as described in Updating the Security List for the DB System on
page 1435.
Opening Ports on the DB System
Open the following ports as needed on the DB system:
• 6200 - For Oracle Notification Service (ONS).
• 5500 - For EM Express. 5500 is the default port, but each additional EM Express console enabled on the DB
system will have a different port. If you're not sure which port to open for a particular console, see Monitoring a
Database with Enterprise Manager Express on page 1428.
• 1158 - For Enterprise Manager Database Control. 1158 is the default port, but each additional console enabled
on the DB system will have a different port. If you're not sure which port to open for a particular console, see
Monitoring a Database with Enterprise Manager Database Control on page 1431.
For important information about critical firewall rules, see Essential Firewall Rules on page 1398.


To open ports on the DB system


1. SSH to the DB System.

ssh -i <private_key_path> opc@<db_system_ip_address>


2. Log in as opc and then sudo to the root user.

login as: opc

[opc@dbsys ~]$ sudo su -


3. Save a copy of iptables as a backup.

[root@dbsys ~]# iptables-save > /tmp/iptables.orig

(If necessary, you can restore the original file by using the command iptables-restore < /tmp/
iptables.orig.)
4. Dynamically add a rule to iptables to allow inbound traffic on the console port, as shown in the following sample.
Change the port number and comment as needed.

[root@dbsys ~]# iptables -I INPUT 8 -p tcp -m state --state NEW -m tcp --dport 5500 -j ACCEPT -m comment --comment "Required for EM Express."
5. Make sure the rule was added.

[root@dbsys ~]# service iptables status


6. Save the updated file to /etc/sysconfig/iptables.

[root@dbsys ~]# /sbin/service iptables save

The change takes effect immediately and will remain in effect when the node is rebooted.
7. Update the DB system's security list as described in Updating the Security List for the DB System on page 1435.
Updating the Security List for the DB System
Review the list of ports in Opening Ports on the DB System on page 1434 and for every port you open in iptables,
update the security list used for the DB system, or create a new security list.
Note that port 1521 for the Oracle default listener is included in iptables, but should also be added to the security list.
To update an existing security list
1. Open the navigation menu. Under Oracle Database, click Bare Metal, VM, and Exadata.
2. Choose your Compartment.
A list of DB systems is displayed.
3. Locate the DB system in the list.
4. Note the DB system's Subnet name and click its Virtual Cloud Network.
5. Locate the subnet in the list, and then click its security list under Security Lists.
6. Click Edit All Rules and add an ingress rule with source type = CIDR, source CIDR=<source CIDR>,
protocol=TCP, and port=<port number or port range>.
The source CIDR should be the CIDR block that includes the client hosts that will connect on the ports you open.
For detailed information about creating or updating a security list, see Security Lists on page 2876.
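Once the iptables rule and the security list rule are both in place, a quick hedged check from a client host that can reach the VCN (the address and port are placeholders) is:

# Confirm that the console port accepts TCP connections from this client
nc -zv <db_system_ip_address> 5500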


Backing Up a Database
Backing up your DB system is a key aspect of any Oracle database environment. You can store backups in the
cloud or in local storage. Each backup destination has advantages, disadvantages, and requirements that you should
consider, as described below.
Object Storage (Recommended)
• Backups are stored in the Oracle Cloud Infrastructure Object Storage.
• Durability: High
• Availability: High
• Backup and Recovery Rate: Medium
• Advantages: High durability, performance, and availability.
Local Storage
• Backups are stored locally in the Fast Recovery Area of the DB System.
• Durability: Low
• Availability: Medium
• Backup and Recovery Rate: High
• Advantages: Optimized backup and fast point-in-time recovery.
• Disadvantages: If the DB System becomes unavailable, the backup is also unavailable.
Currently, Oracle Cloud Infrastructure does not provide the ability to attach block storage volumes to a DB System,
so you cannot back up to network attached volumes.
For 1- and 2-node RAC DB Systems, see:
• Backing Up a Database to Oracle Cloud Infrastructure Object Storage on page 1436
• Backing Up a Database to Local Storage Using the Database CLI on page 1444
Backing Up a Database to Oracle Cloud Infrastructure Object Storage
Note:

This topic is not applicable to Exadata DB systems. For Exadata DB systems,


see Managing Exadata Database Backups on page 1321.
This topic explains how to work with backups managed by Oracle Cloud Infrastructure. You do this by using the
Console or the API. (For unmanaged backups, you can use RMAN or dbcli, and you must create and manage
your own Object Storage buckets for backups. See Backing Up a Database to Object Storage Using RMAN on page
1441.)
Caution:

If you previously used RMAN or dbcli to configure backups and then


you switch to using the Console or the API for backups, a new backup
configuration is created and associated with your database. This means that
you can no longer rely on your previously configured unmanaged backups to
work.
Required IAM Policy
To use Oracle Cloud Infrastructure, you must be granted security access in a policy by an administrator. This access
is required whether you're using the Console or the REST API with an SDK, CLI, or other tool. If you get a message
that you don’t have permission or are unauthorized, verify with your administrator what type of access you have and
which compartment you should work in.
If you're new to policies, see Getting Started with Policies on page 2143 and Common Policies on page 2150.


Prerequisites
The DB system requires access to the Oracle Cloud Infrastructure Object Storage service, including connectivity
to the applicable Swift endpoint for Object Storage. Oracle recommends using a service gateway with the VCN to
enable this access. For more information, see these topics:
• Network Setup for DB Systems on page 1360: For information about setting up your VCN for the DB system,
including the service gateway.
• Can I use Oracle Cloud Infrastructure Object Storage as a destination for my on-premises backups?: For
information about the Swift endpoints to use.
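As a hedged connectivity check, you can attempt an HTTPS request to the Swift endpoint from a DB system node. The endpoint format below is an assumption to confirm against the FAQ referenced above, and <region> is a placeholder:

# Any HTTP status code in the output indicates that the route to Object Storage works
curl -sS -o /dev/null -w "%{http_code}\n" https://swiftobjectstorage.<region>.oraclecloud.com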
Important:

Note that your database and DB system must be in an “Available” state for
a backup operation to run successfully. Oracle recommends that you avoid
performing actions that could interfere with availability (such as patching
and Data Guard operations) while a backup operation is in progress. If an
automatic backup operation fails, the Database service retries the operation
during the next day’s backup window. If an on-demand full backup fails, you
can try the operation again when the DB system and database availability are
restored.
In addition to the prerequisites listed, ensure that the following conditions are
met to avoid backup failures:
• The database's archiving mode is set to ARCHIVELOG (the default).
• The /u01 directory on the database host file system has sufficient free
space for the execution of backup processes.
• The .bash_profile file for the oracle user does not include any interactive
commands (such as oraenv or one that could generate an error or
warning message).
• (For automatic backups) No changes were made to the default
WALLET_LOCATION entry in the sqlnet.ora file.
• No changes were made to RMAN backup settings by using standard RMAN
commands.
See Backup Failures on Bare Metal and Virtual Machine DB Systems on
page 1648 for details on problems that can result from not following these
guidelines.
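The following hedged shell sketch spot-checks several of the conditions above from the DB system node as the oracle user; the paths and tools follow the conventions used elsewhere in this guide, and your system may differ:

# Free space in /u01, where backup processes run
df -h /u01

# LOG_MODE should report ARCHIVELOG
echo 'SELECT log_mode FROM v$database;' | sqlplus -s / as sysdba

# Inspect the WALLET_LOCATION entry that automatic backups rely on
grep -i wallet_location ${TNS_ADMIN:-$ORACLE_HOME/network/admin}/sqlnet.ora

# Compare the current RMAN configuration against the defaults
echo 'SHOW ALL;' | rman target /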
Oracle Cloud Infrastructure Managed Backup Features
The following information applies to managed backups configured using the Oracle Cloud Infrastructure Console or
API.
Note:

Databases in a security zone compartment must have automatic backups


enabled. See the Security Zone Policies topic for a full list of policies that
affect Database service resources.

Automatic Incremental and Archived Redo Log Backups


When you enable the Automatic Backup feature for a database, the service creates the following on an on-going basis:
• Weekly level 0 backup, generally created on a specified weekend day. A level 0 backup is the equivalent of a
full backup. Note that in the Console, weekly level 0 backups appear in the list of backups with backup type
"incremental", as do the daily level 1 backups.


• Daily level 1 backups, which are incremental backups created on each day for the six days following the level 0
backup day.
Level 0 and level 1 backups are stored in Object Storage and have an assigned OCID.
• Ongoing archived redo log backups (with a minimum frequency of every 60 minutes). The Last Backup Time
field on the database details page in the Oracle Cloud Infrastructure Console displays the time of the last archived
redo logs. This backup differs from the level 0 and level 1 automatic backups in that it is based on log data and
does not have an assigned OCID. The last archived redo log backup can be used to create a new database or to
recover a database with minimal data loss.
The automatic backup process used to create level 0 and level 1 backups can run at any time within the daily
backup window (between midnight and 6:00 AM). See note for backup window time zone information. Automatic
incremental backups (level 0 and level 1) are retained in Object Storage for 30 days by default.

Backup Retention
If you choose to enable automatic backups, you can choose one of the following preset retention periods: 7 days, 15
days, 30 days, 45 days, or 60 days. The system automatically deletes your incremental backups at the end of your
chosen retention period.

Audit and Trace File Retention for Databases Using Automatic Backups
Oracle Database writes audit and trace files to your database's local storage in the /u01 directory. These files are
retained for 30 days by default, though you can change this interval. Once a day, audit and trace files older than 30
days (or the user-specified interval, if applicable) are discarded by a Oracle Scheduler job. You can also disable the
Scheduler job if you want to retain these files permanently. Use the following dbcli commands to make changes to
this Scheduler job.
• To change the retention period from the default setting of 30 days:

dbcli update-database -in <dbName> -lr <number_of_days_to_retain_files>

For example:

dbcli update-database -in inventorydb -lr 15


• To disable the daily discard Scheduler job for older audit and trace files:

dbcli update-schedule -i <schedulerID> -d

For example:

dbcli update-schedule -i 5678 -d

Backup Scheduling
The automatic backup process starts at any time during your daily backup window. You can optionally specify a
2-hour scheduling window for your database during which the automatic backup process will begin. There are 12
scheduling windows to choose from, each starting on an even-numbered hour (for example, one window runs from
4:00-6:00 AM, and the next from 6:00-8:00 AM). Backup jobs do not necessarily complete within the scheduling window.
The default backup window of 00:00 to 06:00 in the time zone of the DB system's region is assigned to your database
if you do not specify a window. Note that the default backup scheduling window is six hours long, while the windows
you specify are two hours long. See note for backup window time zone information.
Note:

• Backup Window Time Zone - Automatic backups enabled for the first
time after November 20, 2018 on any database will run between midnight and 6:00 AM in the time zone of the DB system's region. If you have
enabled automatic backups on a database before this date, the backup
window for the database will continue to be between midnight and 6:00
AM UTC. You can create a My Oracle Support service request to have
your automatic backups run in a backup window of your choice.
• Data Guard - You can enable the Automatic Backup feature on a
database with the standby role in a Data Guard association. However,
automatic backups for that database will not be created until it assumes
the primary role.
• Retention Period Changes - If you shorten your database's automatic
backup retention period in the future, existing backups falling outside the
updated retention period are deleted by the system.
• Object Storage Costs - Automatic backups incur Object Storage usage
costs.

On-Demand Full Backups


You can create a full backup of your database at any time unless your database is assuming the standby role in a Data
Guard association.

Standalone Backups
When you terminate a DB system or a database, all of its resources are deleted, along with any automatic backups.
Full backups remain in Object Storage as standalone backups. You can use a standalone backup to create a new
database.
Using the Console
You can use the Console to enable automatic incremental backups, create full backups on demand, and view the list
of managed backups for a database. The Console also allows you to delete full backups.
Note:

The list of backups you see in the Console does not include any unmanaged
backups (backups created directly by using RMAN or dbcli).
All backups are encrypted with the same master key used for Transparent
Data Encryption (TDE) wallet encryption.
To navigate to the list of standalone backups for your current compartment
1. Open the navigation menu. Under Oracle Database, click Bare Metal, VM, and Exadata.
2. Click Standalone Backups under Bare Metal, VM, and Exadata.
To configure automatic backups for a database
When you launch a DB system, you can optionally enable automatic backups for the initial database. Use this
procedure to configure or disable automatic backups after the database is created.
1. Open the navigation menu. Under Oracle Database, click Bare Metal, VM, and Exadata.
2. Choose your Compartment.
A list of DB systems is displayed.
3. Find the DB system where the database is located, and click the system name to display details about it.
A list of databases is displayed.
4. Find the database for which you want to enable or disable automatic backups, and click its name to display
database details. The details indicate whether automatic backups are enabled. When backups are enabled, the
details also indicate the chosen backup retention period.
5. Click Configure Automatic Backups.


6. In the Configure Automatic Backups dialog, check or uncheck Enable Automatic Backup, as applicable.
If you are enabling automatic backups, you can choose to configure the following:
• Backup Retention Period: If you enable automatic backups, you can choose one of the following preset
retention periods: 7 days, 15 days, 30 days, 45 days, 60 days, or 90 days. The default selection is 30 days.
• Backup Scheduling: If you enable automatic backups, you can choose a two-hour scheduling window to
control when backup operations begin. If you do not specify a window, the six-hour default window of 00:00
to 06:00 (in the time zone of the DB system's region) is used for your database. See Backup Scheduling on
page 1438 for more information.
7. Click Save Changes.
To create an on-demand full backup of a database
1. Open the navigation menu. Under Oracle Database, click Bare Metal, VM, and Exadata.
2. Choose your Compartment.
3. In the list of DB systems, click the name of the system that contains the database that you want to work with.
4. On the DB system details page, find the database you want to work with in list of databases and click the display
name of the database to view the database details.
5. Under Resources, click Backups.
A list of backups is displayed.
6. Click Create Backup.
To delete full backups from Object Storage
Note:

You cannot explicitly delete automatic backups. Unless you terminate the database, automatic backups remain in Object Storage for the configured retention period (30 days by default), after which time they are automatically deleted.
1. Open the navigation menu. Under Oracle Database, click Bare Metal, VM, and Exadata.
2. Choose your Compartment.
A list of DB systems is displayed.
3. Find the DB system where the database is located and click the DB system name to display details.
A list of databases is displayed.
4. Find the database you are interested in and click its name to display database details.
5. Under Resources, click Backups.
A list of backups is displayed.
6. Click the Actions icon (three dots) for the backup you are interested in, and then click Delete.
7. Confirm when prompted.
Using the API
For information about using the API and signing requests, see REST APIs on page 4409 and Security Credentials on
page 181. For information about SDKs, see Software Development Kits and Command Line Interface on page 4262.
Use these API operations to manage database backups:
• ListBackups
• GetBackup
• CreateBackup
• DeleteBackup
• UpdateDatabase - To enable and disable automatic backups.
For the complete list of APIs for the Database service, see Database Service API.
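For example, if you have the OCI CLI mentioned above configured, the CreateBackup and ListBackups operations can be exercised as shown in the following sketch. The database OCID is a placeholder, and you should confirm the exact parameter names with the CLI help for your version; UpdateDatabase (used to enable or disable automatic backups) is exposed through oci db database update, whose backup-related flags you can likewise check with --help.

# Create an on-demand full backup of a database (CreateBackup).
# The OCID below is a placeholder; look it up on the database details page.
oci db backup create \
    --database-id "ocid1.database.oc1..<unique_id>" \
    --display-name "OnDemandBackup1"

# List the managed backups for the same database (ListBackups).
oci db backup list --database-id "ocid1.database.oc1..<unique_id>"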

What's Next?
See Recovering a Database from Object Storage on page 1447 for information on restoring your database.
See To create a database from a backup in an existing DB system on page 1418 for information on creating a new database from a backup. You have the choice of using a daily incremental backup, a standalone backup (see Standalone Backups on page 1439), the latest archive redo log backup, or a timestamp for a point-in-time copy.
See To create a DB system from a backup on page 1378 for information on creating a new DB system using a backup as the source for the initial database. You have the same choices: a daily incremental backup, a standalone backup (see Standalone Backups on page 1439), the latest archive redo log backup, or a timestamp for a point-in-time copy.
Backing Up a Database to Object Storage Using RMAN
Note:

This topic is not applicable to Exadata DB systems. For Exadata DB systems, see Managing Exadata Database Backups by Using bkup_api on page 1324.
This topic explains how to use Recovery Manager (RMAN) to manage backups of your Bare Metal or Virtual
Machine DB system database to your own Object Storage. For backups managed by Oracle Cloud Infrastructure, see
Backing Up a Database to Oracle Cloud Infrastructure Object Storage on page 1436.
To back up to the service, you'll need to create an Object Storage bucket for the backups, generate an auth token to use as the password for the service, install the Oracle Database Cloud Backup Module, and then configure RMAN to send backups to the service.
The backup module is a system backup to tape (SBT) interface that’s tightly integrated with RMAN, so you can use
familiar RMAN commands to perform backup and recovery operations.
You'll notice Swift mentioned in the Console and in the endpoint URL for the service. That's because the backup
module is typically used to back up to the Oracle Database Backup Cloud Service, which is an OpenStack Swift
object store.
Tip:

On a 1-node DB system, you can use the database command line interface (dbcli) to back up to Object Storage. This is an alternative to installing the backup module and using RMAN for backups. For more information, see Objectstoreswift Commands on page 1534. Note that the dbcli commands are not available for a 2-node RAC DB system.
Prerequisites
You'll need the following:
• A DB system and a database to back up. For more information, see Creating Bare Metal and Virtual Machine DB
Systems on page 1371.
• The DB system's cloud network (VCN) must be configured with access to Object Storage:
• For Object Storage access in the same region as the DB system: Oracle recommends using a service gateway.
For more information, see Service Gateway for the VCN on page 1366.
• For Object Storage access in a different region than the DB system: Use an internet gateway. Note that the
network traffic between the DB system and Object Storage does not leave the cloud and never reaches the
public internet. For more information, see Internet Gateway on page 3271.
• An existing Object Storage bucket to use as the backup destination. You can use the Console or the Object Storage
API to create the bucket. For more information, see Managing Buckets on page 3426.
• An auth token generated by Oracle Cloud Infrastructure. You can use the Console or the IAM API to generate the
password. For more information, see Working with Auth Tokens.
• The user name (specified when you install and use the backup module) must have tenancy-level access to Object Storage. An easy way to do this is to add the user name to the Administrators group. However, that allows access to all of the cloud services. Instead, an administrator should create a policy like the following that limits access to only the required resources in Object Storage for backing up and restoring the database:

Allow group <group_name> to manage objects in compartment <compartment_name> where target.bucket.name = '<bucket_name>'
Allow group <group_name> to read buckets in compartment <compartment_name>

For more information about adding a user to a group, see Managing Groups on page 2438. For more information
about policies, see Getting Started with Policies on page 2143.
Installing the Backup Module on the DB System
1. SSH to the DB system, log in as opc, and sudo to the oracle user.

ssh -i <SSH_key_used_when_launching_the_DB_system>
opc@<DB_system_IP_address_or_hostname>
login as: opc
sudo su - oracle
2. Change to the directory that contains the backup module opc_install.jar file.

cd /opt/oracle/oak/pkgrepos/oss/odbcs
3. Use the following command syntax to install the backup module.

java -jar opc_install.jar -opcId <user_id> -opcPass '<auth_token>' -container <bucket_name> -walletDir ~/hsbtwallet/ -libDir ~/lib/ -configfile ~/config -host https://swiftobjectstorage.<region_name>.oraclecloud.com/v1/<object_storage_namespace>

The parameters are:

-opcId
The user name for the Oracle Cloud Infrastructure user account, for example:
-opcId <username>@<example>.com
This is the user name you use to sign in to the Console. The user name must be a member of the Administrators group, as described in Prerequisites on page 1441.
You can also specify the user name in single quotes. This might be necessary if the name contains special characters, for example:
-opcId 'j~smith@<example>.com'
Make sure to use straight single quotes and not slanted apostrophes.

-opcPass
The auth token generated by using the Console or IAM API, in single quotes, for example:
-opcPass '<password>'
Make sure to use straight single quotes and not slanted apostrophes. This is not the password for the Oracle Cloud Infrastructure user. For more information, see Managing User Credentials on page 2475.

-container
The name of an existing bucket in Object Storage to use as the backup destination, for example:
-container DBBackups

-walletDir
The directory where the install tool will create an Oracle Wallet containing the Oracle Cloud Infrastructure user name and auth token.
-walletDir ~/hsbtwallet creates the wallet in the current user (oracle) home directory.

-libDir
The directory where the SBT library is stored. The directory must already exist before you run the command. This parameter causes the latest SBT library to be downloaded.
-libDir ~/lib/ downloads the libopc.so file to the current user's home directory, for example, /home/oracle/lib/libopc.so.

-configfile
The name of the initialization parameter file that will be created by the install tool. This file will be referenced by your RMAN jobs.
-configfile ~/config creates the file in the current user's home directory, for example, /home/oracle/config.

-host
The endpoint URL to which backups are to be sent:
https://swiftobjectstorage.<region_name>.oraclecloud.com/v1/<object_storage_namespace>
where <object_storage_namespace> is your tenancy's Object Storage namespace. Do not add a slash after the Object Storage namespace. See Regions and Availability Domains on page 182 to look up the region name.

Configuring RMAN
This section describes how to configure RMAN to use the bucket as the default backup destination. The following
assumes you are still logged in to the DB system.
1. On the DB system, set the ORACLE_HOME and ORACLE_SID environment variables using the oraenv utility.

. oraenv
2. Connect to the database using RMAN.

rman target /
3. Configure RMAN to use the SBT device and point to the config file that was created when you installed the
backup module. A sample command for a version 12 database is shown here.

RMAN> CONFIGURE CHANNEL DEVICE TYPE 'SBT_TAPE' PARMS 'SBT_LIBRARY=/home/oracle/lib/libopc.so, SBT_PARMS=(OPC_PFILE=/home/oracle/config)';
4. Configure RMAN to use SBT_TAPE by default. The following sample enables the controlfile and spfile
autobackup to SBT_TAPE and configures encryption (recommended). There are other settings that may apply to
your installation such as compression, number of backup and recovery channels to use, backup retention policy,
archived log deletion policy, and more. See the Oracle Backup and Recovery documentation for your version of
Oracle for more information on choosing the appropriate settings.

RMAN> CONFIGURE DEFAULT DEVICE TYPE TO SBT_TAPE;
RMAN> CONFIGURE BACKUP OPTIMIZATION ON;
RMAN> CONFIGURE CONTROLFILE AUTOBACKUP ON;
RMAN> CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE SBT_TAPE TO '%F';
RMAN> CONFIGURE ENCRYPTION FOR DATABASE ON;

Once the RMAN configuration is complete, you can use the same RMAN commands that you regularly use for tape
backups.
Backing up the Database
This section provides examples of commonly used backup commands.
1. Set the database encryption:

RMAN> SET ENCRYPTION ON IDENTIFIED BY "password" ONLY;

Note that this setting is not permanent; you must set it for each new RMAN session.
2. Back up the database and archived redo logs. Below are some example commands. See the Oracle Backup and Recovery documentation for your version of Oracle for more information about choosing a backup procedure that meets your needs. Be sure to back up regularly to minimize potential data loss, and always include a copy of the spfile and controlfile. Note that the examples below use multi-section incremental backups, a feature introduced in 12c. When using 11g, omit the SECTION SIZE clause.

RMAN> BACKUP INCREMENTAL LEVEL 0 SECTION SIZE 512M DATABASE PLUS ARCHIVELOG;

RMAN> BACKUP INCREMENTAL LEVEL 1 SECTION SIZE 512M DATABASE PLUS ARCHIVELOG;

RMAN> BACKUP INCREMENTAL LEVEL 1 CUMULATIVE SECTION SIZE 512M DATABASE PLUS ARCHIVELOG;
3. Back up archived redo logs frequently to minimize potential data loss, and keep multiple backup copies as a precaution.

RMAN> BACKUP ARCHIVELOG ALL NOT BACKED UP 2 TIMES;

When the backup job completes, you can view the backup files in your bucket in the Console by selecting Object Storage under Storage.
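If you prefer to verify the backup pieces from the command line instead of the Console, you can query the same Swift endpoint that the backup module writes to. This is only a sketch; the user name, auth token, region, namespace, and bucket name are placeholders and must match the values used when you installed the backup module.

# List the objects (backup pieces and sbt_catalog entries) that RMAN wrote to the bucket.
curl -u '<user_name>:<auth_token>' https://swiftobjectstorage.<region_name>.oraclecloud.com/v1/<object_storage_namespace>/<bucket_name>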
What's Next?
See Recovering a Database from Object Storage on page 1447.
Backing Up a Database to Local Storage Using the Database CLI
Note:

This topic is not applicable to virtual machine DB systems because they have
no local storage. For Exadata DB systems, see Managing Exadata Database
Backups on page 1321.
This topic explains how to back up to the local Fast Recovery Area on a bare metal DB system by using the database
CLI (dbcli). Some sample dbcli commands are provided below. For complete command syntax, see the Oracle
Database CLI Reference on page 1484.
Note:

Backing up to local storage is fast and provides for fast point-in-time recovery. However, if the DB system becomes unavailable, the backup also becomes unavailable. For information about more durable backup destinations, see Backing Up a Database on page 1436.

Backing Up the Database to Local Storage


You'll use the dbcli commands to create a backup configuration, associate the backup configuration with the database,
initiate the backup operation, and then review the backup job.
1. SSH to the DB System.

ssh -i <private_key_path> opc@<db_system_ip_address>


2. Log in as opc and then sudo to the root user. Use sudo su - with a hyphen to invoke the root user's profile,
which will set the PATH to the dbcli directory (/opt/oracle/dcs/bin).

login as: opc

[opc@dbsys ~]$ sudo su -


3. Create a backup configuration by using the dbcli create-backupconfig command (see Backupconfig Commands on page 1492) and specify local disk storage as the backup destination.
The following example creates a backup configuration named prodbackup and specifies a disk backup destination
and a disk recovery window of 5 (backups and archived redo logs will be maintained in local storage for 5 days).

[root@dbsys ~]# dbcli create-backupconfig --name prodbackup --backupdestination disk --recoverywindow 5
{
"jobId" : "e7050756-0d83-48ce-9336-86592be59827",
"status" : "Success",
"message" : null,
"reports" : [ {
"taskId" : "TaskParallel_471",
"taskName" : "persisting backup config metadata",
"taskResult" : "Success",
"startTime" : 1467774813141,
"endTime" : 1467774813207,
"status" : "Success",
"taskDescription" : null,
"parentTaskId" : "TaskSequential_467",
"jobId" : "e7050756-0d83-48ce-9336-86592be59827",
"reportLevel" : "Info",
"updatedTime" : 1467774813207
} ],
"createTimestamp" : 1467774781851,
"description" : "create backup config:prodbackup",
"updatedTime" : 1467774813236
}

The example above uses full parameter names for demonstration purposes, but you can abbreviate the parameters
like this:

dbcli create-backupconfig -n prodbackup -d disk -w 5


4. Get the ID of the database you want to back up by using the dbcli list-databases command (see Database Commands on page 1506).

[root@dbsys ~]# dbcli list-databases

ID                                       DB Name    DB Version CDB   Class  Shape  Storage  Status
---------------------------------------- ---------- ---------- ----- ------ ------ -------- ----------
71ec8335-113a-46e3-b81f-235f4d1b6fde     prod       12.1.0.2   true  OLTP   odb1   ACFS     Configured

5. Get the ID of the backup configuration by using the dbcli list-backupconfigs command (see Backupconfig Commands on page 1492).

[root@dbbackup backup]# /opt/oracle/dcs/bin/dbcli list-backupconfigs

ID                                       Name                 DiskRecoveryWindow BackupDestination CreateTime
---------------------------------------- -------------------- ------------------ ----------------- ---------------------------
78a2a5f0-72b1-448f-bd86-cf41b30b64ee     prodbackup           5                  Disk              July 6, 2016 3:13:01 AM UTC
6. Associate the backup configuration ID with the database ID by using the dbcli update-database command (see Database Commands on page 1506).

[root@dbsys ~]# dbcli update-database --backupconfigid 78a2a5f0-72b1-448f-bd86-cf41b30b64ee --dbid 71ec8335-113a-46e3-b81f-235f4d1b6fde
{
"jobId" : "2b104028-a0a4-4855-b32a-b97a37f5f9c5",
"status" : "Created",
"message" : null,
"reports" : [ ],
"createTimestamp" : 1467775842977,
"description" : "update database id:71ec8335-113a-46e3-
b81f-235f4d1b6fde",
"updatedTime" : 1467775842978
}

You can view details about the update job by using the dbcli describe-job command (see Job Commands on page 1526) and specifying the job ID from the dbcli update-database command output, for example:

dbcli describe-job --jobid 2b104028-a0a4-4855-b32a-b97a37f5f9c5


7. Initiate the database backup by using the dbcli create-backup command (see Backup Commands on page 1488). The backup operation is performed immediately.
The following example creates a backup of the specified database.

[root@dbsys ~]# dbcli create-backup --dbid 71ec8335-113a-46e3-b81f-235f4d1b6fde
{
"createTimestamp": 1467792576854,
"description": "Backup service creation with db name: prod",
"jobId": "d6c9edaa-fc80-40a9-bcdd-056430cdc56c",
"message": null,
"reports": [],
"status": "Created",
"updatedTime": 1467792576855
}

Or you can abbreviate the command parameters like this:

dbcli create-backup -i 71ec8335-113a-46e3-b81f-235f4d1b6fde

You can view details about the backup job by using the dbcli describe-job command (see Job Commands on page 1526) and specifying the job ID from the dbcli create-backup command output, for example:

dbcli describe-job --jobid d6c9edaa-fc80-40a9-bcdd-056430cdc56c


8. Important! Manually back up any TDE password-based wallets to your choice of a safe location, preferably not on the DB system (a sketch follows this procedure). The wallets are required to restore the backup to a new host.

After the backup command completes, the database backup files are available in the Fast Recovery Area on the
DB system.
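A minimal sketch of the wallet copy described in step 8, assuming the wallet location used elsewhere in this guide (/opt/oracle/dcs/commonstore/wallets/tde/<db_unique_name>) and a hypothetical destination host that you control:

# As the oracle user, copy the password-based TDE wallet off the DB system to a safe location.
# The destination host and path are placeholders; any secure location not on the DB system will do.
scp /opt/oracle/dcs/commonstore/wallets/tde/<db_unique_name>/ewallet.p12 backupuser@<remote_backup_host>:/secure/wallet-backups/<db_unique_name>/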

Recovering a Database
For information on restoring a database on a bare metal or virtual machine DB system, see the following topics:
• Recovering a Database from Object Storage on page 1447
• Recovering a Database from the Oracle Cloud Infrastructure Classic Object Store on page 1453
Recovering a Database from Object Storage
Note:

This topic is not applicable to Exadata DB systems.


This topic explains how to recover a database from a backup stored in Object Storage. The service is a secure,
scalable, on-demand storage solution in Oracle Cloud Infrastructure. For information on using Object Storage as a
backup destination, see Backing Up a Database to Oracle Cloud Infrastructure Object Storage on page 1436.
You can recover a database using the Console, API, or by using RMAN.
Required IAM Policy
To use Oracle Cloud Infrastructure, you must be granted security access in a policy by an administrator. This access
is required whether you're using the Console or the REST API with an SDK, CLI, or other tool. If you get a message
that you don’t have permission or are unauthorized, verify with your administrator what type of access you have and
which compartment you should work in.
If you're new to policies, see Getting Started with Policies on page 2143 and Common Policies on page 2150.
Prerequisites
The DB system requires access to the Oracle Cloud Infrastructure Object Storage service, including connectivity
to the applicable Swift endpoint for Object Storage. Oracle recommends using a service gateway with the VCN to
enable this access. For more information, see these topics:
• Network Setup for DB Systems on page 1360: For information about setting up your VCN for the DB system,
including the service gateway.
• Can I use Oracle Cloud Infrastructure Object Storage as a destination for my on-premises backups?: For
information about the Swift endpoints to use.
Using the Console
You can use the Console to restore the database from a backup in the Object Storage that was created by using the
Console or the API. You can restore to the last known good state of the database, or you can specify a point in time or
an existing System Change Number (SCN). You can also create a new database by using a standalone backup.
Note:

The list of backups you see in the Console does not include any unmanaged
backups (backups created directly by using RMAN or dbcli).
Restoring a database with Data Guard enabled is not supported. You must
first remove the Data Guard association by terminating the standby database
before you can restore the database.

Restoring an Existing Database


To restore a database
1. Open the navigation menu. Under Oracle Database, click Bare Metal, VM, and Exadata.
2. Choose your Compartment.
A list of DB systems is displayed.

3. Find the DB system where the database is located, and click the system name to display details about it.
A list of databases is displayed.
4. Find the database you want to restore, and click its name to display details about it.
A list of backups is displayed in the default view of the database details. You can also access the list of backups
for a database by clicking on Backups under Resources.
5. Click Restore.
6. Select one of the following options, and then click Restore Database:
• Restore to the latest: Restores the database to the last known good state with the least possible data loss.
• Restore to the timestamp: Restores the database to the timestamp specified.
• Restore to System Change Number (SCN): Restores the database using the SCN specified. This SCN must
be valid.
Tip:

You can determine the SCN to use either by accessing and querying your database host (see the sketch after this procedure), or by accessing any online or archived logs.
7. Confirm when prompted.
If the restore operation fails, the database will be in a "Restore Failed" state. You can try restoring again using
a different restore option. However, Oracle recommends that you review the RMAN logs on the host and fix any
issues before reattempting to restore the database.
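To determine a valid SCN for the Restore to System Change Number (SCN) option, you can query the database host directly, as mentioned in the tip above. The following is a minimal sketch, assuming you can connect as SYSDBA on the host; the archived-log query simply shows the SCN range your archived redo logs cover.

# Query the current SCN and the SCN range covered by archived redo logs.
sqlplus -s / as sysdba <<'EOF'
SELECT current_scn FROM v$database;
SELECT MIN(first_change#) AS oldest_scn, MAX(next_change#) AS newest_scn FROM v$archived_log;
EOF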
To restore a database using a specific backup from Object Storage
1. Open the navigation menu. Under Oracle Database, click Bare Metal, VM, and Exadata.
2. Choose your Compartment.
A list of DB systems is displayed.
3. Find the DB system where the database is located, and click the system name to display details about it.
A list of databases is displayed.
4. Find the database you want to restore, and click its name to display details about it.
5. Click Restore.
6. In the Restore Database dialog, select Restore to the latest, Restore to timestamp, or Restore to System
Change Number (SCN). Specify a timestamp or System Change Number if you are using an option that requires
either.
7. Click Restore Database.

Creating a New Database from a Backup


You can use a backup to create a database in an existing DB system or to launch a new DB system. See the following
procedures for more information:
• To create a database from a backup in an existing DB system on page 1418
• To create a DB system from a backup on page 1378
Using the API
For information about using the API and signing requests, see REST APIs on page 4409 and Security Credentials on
page 181. For information about SDKs, see Software Development Kits and Command Line Interface on page 4262.
Use these API operations to recover a database:
• ListBackups
• GetBackup
• RestoreDatabase
• CreateDbHome - For creating a DB system database from a standalone backup.
For the complete list of APIs for the Database service, see Database Service API.
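As with backups, these operations are also exposed through the OCI CLI. The sketch below is illustrative only: the database OCID is a placeholder, and the flag name mirrors the RestoreDatabase request fields as an assumption, so confirm the available options (latest, timestamp, SCN) with oci db database restore --help for your CLI version.

# Restore a database to the last known good state (RestoreDatabase).
oci db database restore --database-id "ocid1.database.oc1..<unique_id>" --latest true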

Using an RMAN Backup


This topic explains how to recover a database from a Recovery Manager (RMAN) backup stored in Object Storage.

Prerequisites
You'll need the following:
• A new DB system to restore the database to (see assumptions below). For more information, see Creating Bare
Metal and Virtual Machine DB Systems on page 1371.
• The Oracle Database Cloud Backup Module must be installed on the DB system. For more information, see
Installing the Backup Module on the DB System on page 1442.

Assumptions
The procedures below assume the following:
• A new DB system has been created to host the restored database and no other database exists on the new DB
system. It is possible to restore to a DB system that has existing databases, but that is beyond the scope of this
topic.
• The original database is lost and all that remains is the latest RMAN backup. For virtual machine DB systems, the
procedure assumes the DB system (inclusive of the database) no longer exists.
Caution:

Any data not included in the most recent backup will be lost.
• The Oracle Wallet and/or encryption keys used by the original database at the time of the last backup are available.
• The RMAN backup contains a copy of the control file and spfile as of the most recent backup as well as all of the
datafile and archivelog backups needed to perform a complete database recovery.
• An RMAN catalog will not be used during the restore.

Setting Up Storage on the DB system


1. SSH to the DB System.

ssh -i <private_key_path> opc@<db_system_ip_address>


2. Log in as opc and then sudo to the root user. Use sudo su - with a hyphen to invoke the root user's profile,
which will set the PATH to the dbcli directory (/opt/oracle/dcs/bin).

login as: opc

[opc@dbsys ~]$ sudo su -


3. You can use an existing empty database home or create a new one for the restore. Use the applicable commands to
help you complete this step.
If you will be using an existing database home:
• Use the dbcli list-dbhomes command (see Dbhome Commands on page 1519) to list the database homes.

[root@dbsys ~]# dbcli list-dbhomes

ID                                       Name                 DB Version Home Location
---------------------------------------- -------------------- ---------- ---------------------------------------------
2e743050-b41d-4283-988f-f33d7b082bda     OraDB12102_home1     12.1.0.2   /u01/app/oracle/product/12.1.0.2/dbhome_1
• Use the dbcli list-databases command (see Database Commands on page 1506) to ensure the database home is not associated with any database.
If necessary, use the dbcli create-dbhome command (see Dbhome Commands on page 1519) to create a database home for the restore.
4. Use the dbcli create-dbstorage command (see Dbstorage Commands on page 1522) to set up directories for DATA, RECO, and REDO storage. The following example creates 10 GB of ACFS storage for the rectest database.

[root@dbsys ~]# dbcli create-dbstorage --dbname rectest --dataSize 10 --dbstorage ACFS

Note:

When restoring a version 11.2 database, ACFS storage must be specified.

Performing the Database Restore and Recovery


1. SSH to the DB system, log in as opc, and then become the oracle user.

sudo su - oracle
2. Create an entry in /etc/oratab for the database. Use the same SID as the original database.

db1:/u01/app/oracle/product/12.1.0.2/dbhome_1:N
3. Set the ORACLE_HOME and ORACLE_SID environment variables using the oraenv utility.

. oraenv
4. Obtain the DBID of the original database. This can be obtained from the file name of the controlfile autobackup on the backup media. The file name will include a string that contains the DBID. The typical format of the string is c-DDDDDDDDDDDD-YYYYMMDD-NN, where DDDDDDDDDDDD is the DBID, YYYYMMDD is the date the backup was created, and NN is a sequence number that makes the file name unique. The DBID in the following examples is 1508405000. Your DBID will be different.
Use the following curl syntax to perform a general query of Object Storage. The parameters are the same parameters you specified when installing the backup module, as described in Installing the Backup Module on the DB System on page 1442.

curl -u '<user_id>@<example>.com:<auth_token>' -v https://swiftobjectstorage.<region_name>.oraclecloud.com/v1/<object_storage_namespace>

See Regions and Availability Domains on page 182 to look up the region name.
For example:

curl -u '[email protected]:1cnk!d0++ptETd&C;tHR' -v https://swiftobjectstorage.<region_name>.oraclecloud.com/v1/myobjectstoragenamespace

To get the DBID from the control file name, use the following syntax:

curl -u '<user_id>@<example>.com:<auth_token>' -v https://swiftobjectstorage.<region_name>.oraclecloud.com/v1/<object_storage_namespace>/<bucket_name>?prefix=sbt_catalog/c-

For example:

curl -u '[email protected]:1cnk!d0++ptETd&C;tHR' -v https://swiftobjectstorage.<region_name>.oraclecloud.com/v1/myobjectstoragenamespace/dbbackups/?prefix=sbt_catalog/c-

In the sample output below, 1508405000 is the DBID.

{
"bytes": 1732,
"content_type": "binary/octet-stream",
"hash": "f1b61f08892734ed7af4f1ddaabae317",
"last_modified": "2016-08-11T20:28:34.438000",
"name": "sbt_catalog/c-1508405000-20160811-00/metadata.xml"
}
5. Run RMAN and connect to the target database. There is no need to create a pfile or spfile or use a backup controlfile; these will be restored in the following steps. Note that RMAN reports the target database as "not started". This is normal and expected at this point.

rman target /

Recovery Manager: Release 12.1.0.2.0 - Production on Wed Jun 22 18:36:40 2016

Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.

connected to target database (not started)

6. Set the DBID using the value obtained above.

RMAN> set dbid 1508405000;

executing command: SET DBID

7. Run the STARTUP NOMOUNT command. If the server parameter file is not available, RMAN attempts to start
the instance with a dummy server parameter file. The ORA-01078 and LRM-00109 errors are normal and can be
ignored.

RMAN> STARTUP NOMOUNT

startup failed: ORA-01078: failure in processing system parameters
LRM-00109: could not open parameter file '/u01/app/oracle/product/12.1.0.2/dbhome_1/dbs/initdb1.ora'

starting Oracle instance without parameter file for retrieval of spfile

Oracle instance started

Total System Global Area    2147483648 bytes
Fixed Size                     2944952 bytes
Variable Size                847249480 bytes
Database Buffers            1254096896 bytes
Redo Buffers                  43192320 bytes
8. Restore the server parameter file from autobackup.
The SBT_LIBRARY is the same library specified with the -libDir parameter when the Backup Module was
installed, for example /home/oracle/lib/.
The OPC_PFILE is the same file specified with the -configfile parameter when the Backup Module was
installed, for example /home/oracle/config.

set controlfile autobackup format for device type sbt to '%F';

run {
allocate channel c1 device type sbt PARMS 'SBT_LIBRARY=/home/oracle/lib/libopc.so, SBT_PARMS=(OPC_PFILE=/home/oracle/config)';
restore spfile from autobackup;
}
9. Create the directory for audit_file_dest. The default is /u01/app/oracle/admin/$ORACLE_SID/adump.
You can see the setting used by the original database by searching the spfile for the string, audit_file_dest.

strings ${ORACLE_HOME}/dbs/spfile${ORACLE_SID}.ora | grep audit_file_dest


*.audit_file_dest='/u01/app/oracle/admin/db1/adump'

mkdir -p /u01/app/oracle/admin/db1/adump
10. If block change tracking was enabled on the original database, create the directory for the block change tracking
file. This will be a directory under db_create_file_dest. Search the spfile for the name of the directory.

strings ${ORACLE_HOME}/dbs/spfile${ORACLE_SID}.ora | grep db_create_file_dest
*.db_create_file_dest='/u02/app/oracle/oradata/db1'

mkdir -p /u02/app/oracle/oradata/db1/<$ORACLE_UNQNAME if available, or database name>/changetracking
11. Restart the instance with the restored server parameter file.

STARTUP FORCE NOMOUNT;


12. Restore the controlfile from the RMAN autobackup and mount the database.

set controlfile autobackup format for device type sbt to '%F';

run {
allocate channel c1 device type sbt PARMS 'SBT_LIBRARY=/home/oracle/lib/libopc.so, SBT_PARMS=(OPC_PFILE=/home/oracle/config)';
restore controlfile from autobackup;
alter database mount;
}
13. Restore and recover the database.

RESTORE DATABASE;
RECOVER DATABASE;
14. RMAN will recover using archived redo logs until it can't find any more. It is normal for an error similar to the
one below to occur when RMAN has applied the last archived redo log in the backup and can't find any more logs.

unable to find archived log
archived log thread=1 sequence=29
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of recover command at 06/28/2016 00:57:35
RMAN-06054: media recovery requesting unknown archived log for thread 1
with sequence 29 and starting SCN of 2349563
15. Open the database with resetlogs.

ALTER DATABASE OPEN RESETLOGS;

The recovery is complete. The database will have all of the committed transactions as of the last backed up archived
redo log.
Recovering a Database from the Oracle Cloud Infrastructure Classic Object Store
Note:

This topic is not applicable to Exadata DB systems.


This topic explains how to recover a database using a backup created by the Oracle Database Backup Module and
stored in Oracle Cloud Infrastructure Object Storage Classic.
The following terms are used throughout this topic:
• Source database: The database backup in Object Storage Classic.
• Target database: The new database on a DB system in Oracle Cloud Infrastructure.
Prerequisites
You'll need the following:
• The service name, identity name, container, user name, and password for Oracle Cloud Infrastructure Object
Storage Classic.
• The backup password if password-based encryption was used when backing up to Object Storage Classic.
• The source database ID, database name, and database unique name (required for setting up storage).
• If the source database is configured with Transparent Data Encryption (TDE), you'll need a backup of the wallet and the wallet password.
• The tnsnames.ora entries needed to set up any database links.
• The output of opatch lsinventory for the source database Oracle home, for reference.
• A copy of the sqlpatch directory from the source database home. This is required for rollback in case the target
database does not include these patches.

Setting Up Storage on the DB System


1. SSH to the DB System.

ssh -i <private_key_path> opc@<db_system_ip_address>


2. Log in as opc and then sudo to the root user. Use sudo su - with a hyphen to invoke the root user's profile,
which will set the PATH to the dbcli directory (/opt/oracle/dcs/bin).

login as: opc

[opc@dbsys ~]$ sudo su -


3. Use the dbcli create-dbstorage command (see Dbstorage Commands on page 1522) to set up directories for DATA, RECO, and REDO storage. The following example creates 10 GB of ACFS storage for the tdetest database.

[root@dbsys ~]# dbcli create-dbstorage --dbname tdetest --dataSize 10 --dbstorage ACFS

Note:

When migrating a version 11.2 database, ACFS storage must be specified.


4. Use the dbcli list-dbstorages command (see Dbstorage Commands on page 1522) to list the storage ID. You'll need the ID for the next step.

[root@dbsys ~]# dbcli list-dbstorages

ID                                       Type   DBUnique Name        Status
---------------------------------------- ------ -------------------- ----------
9dcdfb8e-e589-4d5f-861a-e5ba981616ed     Acfs   tdetest              Configured
5. Use the dbcli describe-dbstorage command (see Dbstorage Commands on page 1522) with the storage ID from the previous step to list the DATA, RECO, and REDO locations.

[root@dbsys ~]# dbcli describe-dbstorage --id 9dcdfb8e-e589-4d5f-861a-e5ba981616ed
DBStorage details
----------------------------------------------------------------
ID: 9dcdfb8e-e589-4d5f-861a-e5ba981616ed
DB Name: tdetest
DBUnique Name: tdetest
DB Resource ID:
Storage Type: Acfs
DATA Location: /u02/app/oracle/oradata/tdetest
RECO Location: /u03/app/oracle/fast_recovery_area/
REDO Location: /u03/app/oracle/redo/
State: ResourceState(status=Configured)
Created: August 24, 2016 5:25:38 PM UTC
UpdatedTime: August 24, 2016 5:25:53 PM UTC
6. Note the DATA, RECO and REDO locations. You'll need them later to set the db_create_file_dest,
db_create_online_log_dest, and db_recovery_file_dest parameters for the database.
Choosing an ORACLE_HOME
Decide which ORACLE_HOME to use for the database restore and then switch to that home with the correct
ORACLE_BASE, ORACLE_HOME, and PATH settings. The ORACLE_HOME must not already be associated with
a database.

To get a list of existing ORACLE_HOMEs and to ensure that the ORACLE_HOME is empty, use the dbcli list-dbhomes and dbcli list-databases commands, respectively (see Dbhome Commands on page 1519 and Database Commands on page 1506). To create a new ORACLE_HOME, use the dbcli create-dbhome command (see Dbhome Commands on page 1519).
Copying the Source Database Wallets
Skip this section if the source database is not configured with TDE.
1. On the DB system, become the oracle user:

sudo su - oracle
2. Create the following directory, if it does not already exist:

mkdir /opt/oracle/dcs/commonstore/wallets/tde/<db_unique_name>
3. Copy the ewallet.p12 file from the source database to the directory you created in the previous step.
4. On the target host, make sure that $ORACLE_HOME/network/admin/sqlnet.ora contains the following
line:

ENCRYPTION_WALLET_LOCATION=(SOURCE=(METHOD=FILE)(METHOD_DATA=(DIRECTORY=/opt/oracle/dcs/commonstore/wallets/tde/$ORACLE_UNQNAME)))

Add the line if it doesn't exist in the file. (The line might not be there if this is a new home and no database has
been created yet on this host.)
5. Create the autologin wallet from the password-based wallet to allow auto-open of the wallet during restore and
recovery operations.
For a version 12.1 or later database, use the ADMINISTER KEY MANAGEMENT command:

$ cat create_autologin_12.sh

#!/bin/sh
if [ $# -lt 2 ]; then
  echo "Usage: $0 <dbuniquename> <remotewalletlocation>"
  exit 1;
fi

mkdir /opt/oracle/dcs/commonstore/wallets/tde/$1
cp $2/ewallet.p12* /opt/oracle/dcs/commonstore/wallets/tde/$1
rm -f autokey.ora
echo "db_name=$1" > autokey.ora
autokeystoreLog="autologinKeystore_`date +%Y%m%d_%H%M%S_%N`.log"
echo "Enter Keystore Password:"
read -s keystorePassword
echo "Creating AutoLoginKeystore -> "
sqlplus "/as sysdba" <<EOF
spool $autokeystoreLog
set echo on
startup nomount pfile=autokey.ora
ADMINISTER KEY MANAGEMENT CREATE AUTO_LOGIN KEYSTORE
FROM KEYSTORE '/opt/oracle/dcs/commonstore/wallets/tde/$1'  -- keystore location
IDENTIFIED BY "$keystorePassword";
shutdown immediate;
EOF

Adjust the cwallet.sso permissions from oracle:asmadmin to oracle:oinstall (a sketch of the command follows this step).

$ ls -ltr /opt/oracle/dcs/commonstore/wallets/tde/<db_unique_name>
total 20
-rw-r--r-- 1 oracle oinstall 5680 Jul 6 11:39 ewallet.p12
-rw-r--r-- 1 oracle asmadmin 5725 Jul 6 11:39 cwallet.sso

For a version 11.2 database, use the orapki command:

orapki wallet create -wallet wallet_location -auto_login [-pwd password]
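Returning to the permission adjustment noted above for the 12.1 case, the change can be made with a single command. This is a sketch that assumes the wallet directory shown in the listing:

# Change the group of the autologin wallet from asmadmin to oinstall.
chown oracle:oinstall /opt/oracle/dcs/commonstore/wallets/tde/<db_unique_name>/cwallet.sso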

Installing the Oracle Database Backup Module


The backup module JAR file is included on the DB system but you need to install it.
1. SSH to the DB system, log in as opc, and then become the oracle user.

ssh -i <path to SSH key used when launching the DB System> opc@<DB System IP address or hostname>

sudo su - oracle
2. Change to the directory that contains the backup module opc_install.jar file.

cd /opt/oracle/oak/pkgrepos/orapkgs/oss/<version>/
3. Use the command syntax described in Installing the Oracle Database Cloud Backup Module to install the backup
module.
Setting Environment Variables
Set the following environment variables for the RMAN and SQL*Plus sessions for the database:

ORACLE_HOME=<path of Oracle Home where the database is to be restored>
ORACLE_SID=<database instance name>
ORACLE_UNQNAME=<db_unique_name in lower case>
NLS_DATE_FORMAT="mm/dd/yyyy hh24:mi:ss"
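For example, on a DB system like the one used elsewhere in this topic, the variables might be exported as follows. The values are illustrative placeholders, not required settings; substitute the home, SID, and unique name of the database you are restoring.

# Illustrative values only.
export ORACLE_HOME=/u01/app/oracle/product/12.1.0.2/dbhome_1
export ORACLE_SID=tdetest
export ORACLE_UNQNAME=tdetest
export NLS_DATE_FORMAT="mm/dd/yyyy hh24:mi:ss"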

Allocating an RMAN SBT Channel


For each restore operation, allocate an SBT channel, set the SBT_LIBRARY parameter to the location of the libopc.so file, and set the OPC_PFILE parameter to the location of the opc_sbt.ora file, for example:

ALLOCATE CHANNEL c1 DEVICE TYPE sbt MAXPIECESIZE 2 G FORMAT '%d_%I_%U' PARMS 'SBT_LIBRARY=/tmp/oss/libopc.so ENV=(OPC_PFILE=/<ORACLE_HOME>/dbs/opc_sbt.ora)';

(For more information about these files, see Files Created When the Backup Module is Installed.)
Ensuring Decryption is Turned On
Make sure that decryption is turned on for all the RMAN restore sessions.

set decryption wallet open identified by <keystore password>;

For more information, see Providing Password Required to Decrypt Encrypted Backups.
Restoring Spfile
The following sample shell script restores the spfile. Set the $dbID variable to the dbid of the database being
restored. By default, spfile is restored to $ORACLE_HOME/dbs/spfile<sid>.ora.

rman target / <<EOF
spool log to "`date +%Y%m%d_%H%M%S_%N`_dbid_${dbID}_restore_spfile.log"
startup nomount
set echo on
run {
ALLOCATE CHANNEL c1 DEVICE TYPE sbt MAXPIECESIZE 2 G FORMAT '%d_%I_%U' PARMS 'SBT_LIBRARY=/tmp/oss/libopc.so ENV=(OPC_PFILE=/tmp/oss/opc_sbt.ora)';
SET DBID=$dbID;
RESTORE SPFILE FROM AUTOBACKUP;
}
shutdown immediate;
EOF

Setting the Database Parameters


1. Start the database in nomount mode.

startup nomount
2. Update spfile and modify the following parameters.
• If the database storage type is ACFS, use the DATA, RECO, and REDO locations obtained from the dbcli
describe-dbstorage command output, as described in Setting Up Storage on the DB System on page
1454:

alter system set db_create_file_dest='/u02/app/oracle/oradata/' scope=spfile;
alter system set db_create_online_log_dest_1='/u03/app/oracle/redo' scope=spfile;
alter system set db_recovery_file_dest='/u03/app/oracle/fast_recovery_area' scope=spfile;
• If the database storage type is ASM:

alter system set db_create_file_dest='+DATA' scope = spfile;


alter system set db_create_online_log_dest_1='+RECO' scope = spfile;
alter system set db_recovery_file_dest='+RECO' scope = spfile;
• If db_recovery_file_dest_size is not set or is set incorrectly, set it:

alter system set db_recovery_file_dest_size=<sizeG> scope=spfile;


• Set audit_file_dest to the correct value:

alter system set audit_file_dest='/u01/app/oracle/admin/<db_unique_name in lower case>/adump' scope=spfile;
3. Remove the control_files parameter. The Oracle Managed Files (OMF) parameters will be used to create
the control file.

alter system reset control_files scope=spfile;


4. Restart the database in nomount mode using the newly added parameters.

shutdown immediate
startup nomount

Restoring the Control File


Modify the following sample shell script for your environment to restore the control file. Set the $dbID variable to the dbid of the database being restored. Set SBT_LIBRARY to the location specified in the -libDir parameter when you installed the Backup Module. Set OPC_PFILE to the location specified in the -configfile parameter, which defaults to ORACLE_HOME/dbs/opcSID.ora.

rman target / <<EOF
spool log to "`date +%Y%m%d_%H%M%S_%N`_dbid_${dbID}_restore_controlfile.log"
set echo on
run {
ALLOCATE CHANNEL c1 DEVICE TYPE sbt MAXPIECESIZE 2 G FORMAT '%d_%I_%U' PARMS 'SBT_LIBRARY=/<Backup Module libDir>/libopc.so ENV=(OPC_PFILE=/<Backup Module configFile>/opcSID.ora)';
SET DBID=$dbID;
RESTORE CONTROLFILE FROM AUTOBACKUP;
alter database mount;
}

exit;
EOF

Restoring the Database


1. Preview and validate the backup. The database is now mounted and RMAN should be able to locate the backup
from the restored controlfile. This step helps ensure that the list of archivelogs is present and that the backup
components can be restored.
In the following examples, modify SBT_LIBRARY and OPC_PFILE as needed for your environment.

rman target / <<EOF
spool log to "`date +%Y%m%d_%H%M%S_%N`_restore_database_preview.log"
set echo on
run {
ALLOCATE CHANNEL c1 DEVICE TYPE sbt MAXPIECESIZE 2 G FORMAT '%d_%I_%U' PARMS 'SBT_LIBRARY=/tmp/oss/libopc.so ENV=(OPC_PFILE=/tmp/oss/opc_sbt.ora)';
ALLOCATE CHANNEL c2 DEVICE TYPE sbt MAXPIECESIZE 2 G FORMAT '%d_%I_%U' PARMS 'SBT_LIBRARY=/tmp/oss/libopc.so ENV=(OPC_PFILE=/tmp/oss/opc_sbt.ora)';
ALLOCATE CHANNEL c3 DEVICE TYPE sbt MAXPIECESIZE 2 G FORMAT '%d_%I_%U' PARMS 'SBT_LIBRARY=/tmp/oss/libopc.so ENV=(OPC_PFILE=/tmp/oss/opc_sbt.ora)';
restore database validate header preview;
}
EOF

Review the output and if there are error messages, investigate the cause of the problem.
2. Redirect the restore using set newname to restore the data files in OMF format and use switch datafile
all to allow the control file to update with the new data file copies.

rman target / <<EOF
spool log to "`date +%Y%m%d_%H%M%S_%N`_restore_database_preview.log"
set echo on
run {
ALLOCATE CHANNEL c1 DEVICE TYPE sbt MAXPIECESIZE 2 G FORMAT '%d_%I_%U' PARMS 'SBT_LIBRARY=/tmp/oss/libopc.so ENV=(OPC_PFILE=/tmp/oss/opc_sbt.ora)';
ALLOCATE CHANNEL c2 DEVICE TYPE sbt MAXPIECESIZE 2 G FORMAT '%d_%I_%U' PARMS 'SBT_LIBRARY=/tmp/oss/libopc.so ENV=(OPC_PFILE=/tmp/oss/opc_sbt.ora)';
ALLOCATE CHANNEL c3 DEVICE TYPE sbt MAXPIECESIZE 2 G FORMAT '%d_%I_%U' PARMS 'SBT_LIBRARY=/tmp/oss/libopc.so ENV=(OPC_PFILE=/tmp/oss/opc_sbt.ora)';
set newname for database to new;
restore database;
switch datafile all;
switch tempfile all;
recover database;
}
EOF

This recovery will attempt to use the last available archive log backup and then fail with an error, for example:

RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of recover command at 07/20/2016 12:09:02
RMAN-06054: media recovery requesting unknown archived log for thread 1
with sequence 22 and starting SCN of 878327
3. To complete the incomplete recovery, run a recovery using the sequence number and thread number shown in the
RMAN-06054 message, for example:

Recover database until sequence 22 thread 1;

Resetting the Logs


Reset the logs.

alter database open resetlogs;

Preparing to Register the Database


Before you register the database:
1. Make sure the database COMPATIBLE parameter value is acceptable. If the value is less than the minimum, the
database cannot be registered until you upgrade the database compatibility.
The minimum compatibility values are as follows:
• For a version 18.1 database - 18.0.0.0
• For a version 12.2 or 12.1 database - 12.1.0.2
• For a version 11.2 database - 11.2.0.4
2. Verify that the database has registered with the listener and the service name.

lsnrctl services
3. Make sure the password file was restored or created for the new database.

ls -ltr $ORACLE_HOME/dbs/orapw<oracle sid>

If the file does not exist, create it using the orapwd utility.

orapwd file=<$ORACLE_HOME/dbs/orapw<$ORACLE_SID>> password=<sys password>


4. Make sure the restored database is open in read write mode.

select open_mode from v$database;

The command output should indicate read write mode. The dbcli register-database command will attempt to run datapatch, which requires read write mode. If there are PDBs, they should also be in read write mode to ensure that datapatch runs on them (see the sketch after this list).
5. From the Oracle home of the restored database, use the following command to verify the connection to SYS:

conn sys/<password>@//<hostname>:1521/<database service name>

This connection is required to register the database later. Fix any connection issues before continuing.

6. Make sure the database is running on spfile by using the SQL*Plus command.

SHOW PARAMETERS SPFILE


7. (Optional) If you would like to manage the database backup with the dbcli command line interface, you can
associate a new or existing backup configuration with the migrated database when you register it or after you
register it. A backup configuration defines the backup destination and recovery window for the database. Use the
following commands to create, list, and display backup configurations:
• dbcli create-backupconfig (see Backupconfig Commands on page 1492)
• dbcli list-backupconfigs (see Backupconfig Commands on page 1492)
• dbcli describe-backupconfig (see Backupconfig Commands on page 1492)
8. Copy the folder $ORACLE_HOME/sqlpatch from source database to the target database. This will enable the
dbcli register-database command to roll back any conflicting patches.
Note:

If you are migrating a version 11.2 database, additional steps are required
after you register the database. For more information, see Rolling Back
Patches on a Version 11.2 Database on page 1460.
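Relating to step 4 of the list above, if the restored database is a CDB you can confirm the open mode of each PDB before registering the database. A minimal sketch, assuming a SYSDBA connection to the CDB root; any PDB that is not read write can then be opened individually.

# Check the open mode of each PDB; open any that are not READ WRITE with
# ALTER PLUGGABLE DATABASE <pdb_name> OPEN READ WRITE;
sqlplus -s / as sysdba <<'EOF'
SELECT name, open_mode FROM v$pdbs;
EOF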
Registering the Database on the DB System
The dbcli register-database command (see Database Commands on page 1506) registers the restored database with the dcs-agent so it can be managed by the dcs-agent stack.
Note:

The dbcli register-database command is not available on 2-node RAC DB systems.
As the root user, use the dbcli register-database command to register the database on the DB system, for
example:

[root@dbsys ~]# dbcli register-database --dbclass OLTP --dbshape odb1 --servicename tdetest --syspassword
Password for SYS:
{
"jobId" : "317b430f-ad5f-42ae-bb07-13f053d266e2",
"status" : "Created",
"message" : null,
"reports" : [ ],
"createTimestamp" : "August 08, 2016 05:55:49 AM EDT",
"description" : "Database service registration with db service name:
tdetest",
"updatedTime" : "August 08, 2016 05:55:49 AM EDT"
}

Updating tnsnames.ora
Check the tnsnames.ora in the backup location, check the database links used in the cloned database, and
then add any relevant connection strings to the cloned database file at $ORACLE_HOME/network/admin/
tnsnames.ora.
Rolling Back Patches on a Version 11.2 Database
For version 11.2 databases, the sqlpatch application is not automated, so any interim patches (previously known
as a "one-off" patches) applied to the source database that are not part of the installed PSU must be rolled back
manually in the target database. After registering the database, execute the catbundle.sql script and then the
postinstall.sql script with the corresponding PSU patch (or the overlay patch on top of the PSU patch), as
described below.

Tip:

Some interim patches may include files written to the $ORACLE_HOME/rdbms/admin directory as well as the $ORACLE_HOME/sqlpatch directory. Oracle recommends that you roll back these patches in the source database using the instructions in the patch read-me prior to migrating the database to the OCI environment. Contact Oracle Support if you need assistance with rolling back these patches.
1. On the DB System, use the dbcli list-dbhomes command to find the PSU patch number for the version
11.2 database home. In the following sample command output, the PSU patch number is the second number in the
DB Version column:

[root@dbsys ~]# dbcli list-dbhomes

ID                                   Name              DB Version                             Home Location                              Status
------------------------------------ ----------------- -------------------------------------- ------------------------------------------ ----------
59d9bc6f-3880-4d4f-b5a6-c140f16f8c64 OraDB11204_home1  11.2.0.4.160719 (23054319, 23054359)   /u01/app/oracle/product/11.2.0.4/dbhome_1  Configured

(The first patch number, 23054319 in the example above, is for the OCW component in the database home.)
2. Find the overlay patch, if any, by using the lsinventory command. In the following example, patch number
24460960 is the overlay patch on top of the 23054359 PSU patch.

$ $ORACLE_HOME/OPatch/opatch lsinventory
...
Installed Top-level Products (1):

Oracle Database 11g


11.2.0.4.0
There are 1 products installed in this Oracle Home.

Interim patches (5) :

Patch 24460960 : applied on Fri Sep 02 15:28:17 UTC 2016


Unique Patch ID: 20539912
Created on 31 Aug 2016, 02:46:31 hrs PST8PDT
Bugs fixed:
23513711, 23065323, 21281607, 24006821, 23315889, 22551446, 21174504
This patch overlays patches:
23054359
This patch needs patches:
23054359
as prerequisites
3. Start SQL*Plus and execute the catbundle.sql script, for example:

SQL> startup
SQL> connect / as sysdba
SQL> @$ORACLE_HOME/rdbms/admin/catbundle.sql psu apply
exit
4. Apply the sqlpatch, using the overlay patch number from the previous step, for example:

SQL> connect / as sysdba


SQL> @$ORACLE_HOME/sqlpatch/24460960/postinstall.sql

exit

Note:

If the source database has one-off patches installed and those patches are not part of the installed PSU in the cloud environment, then the SQL changes that correspond to those one-off patches need to be rolled back. To roll back the SQL changes, copy the $ORACLE_HOME/sqlpatch/<patch#>/postdeinstall.sql script from the source environment to the cloud environment and execute the postdeinstall.sql script.
Post Restore Checklist
After the database is restored and registered on the DB system, use the following checklist to verify the results and
perform any post-restore customizations.
1. Make sure the database files were restored in OMF format.
2. Make sure the database is listed in the dbcli list-databases command output (see Database Commands on page 1506).
3. Check for the following external references in the database and update them if necessary:
• External tables: If the source database uses external tables, back up that data and migrate it to the target host.
• Directories: Customize the default directories as needed for the restored database.
• Database links: Make sure all the required TNS entries are updated in the tnsnames.ora file in ORACLE_HOME.
• Email and URLs: Make sure any email addresses and URLs used in the database are still accessible from the
DB system.
• Scheduled jobs: Review the jobs scheduled in source database and schedule similar jobs as needed in the
restored database.
4. If you associated a backup configuration when you registered the database, run a test backup using the dbcli create-backup command (see Backup Commands on page 1488 and the sketch after this list).
5. If the restored database contains a CDB and PDBs, verify that patches have been applied to all PDBs.
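For item 4, the test backup uses the same dbcli command shown earlier in this chapter. A short sketch; the database ID is a placeholder that you can look up with dbcli list-databases:

# Run a test backup of the registered database.
dbcli create-backup --dbid <database_id>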

Using Oracle Data Guard


Note:

This procedure is only applicable to bare metal and virtual machine DB systems. To use Oracle Data Guard with Exadata, see Using Oracle Data Guard with Exadata Cloud Service on page 1340.
This topic explains how to use the Console to manage Oracle Data Guard associations in your DB system.
For complete information on Oracle Data Guard, see the Data Guard Concepts and Administration documentation in
the Oracle Help Center.
Required IAM Service Policy
To use Oracle Cloud Infrastructure, you must be granted security access in a policy by an administrator. This access
is required whether you're using the Console or the REST API with an SDK, CLI, or other tool. If you get a message
that you don’t have permission or are unauthorized, verify with your administrator what type of access you have and
which compartment you should work in.
If you are new to policies, then see Getting Started with Policies on page 2143 and Common Policies on page
2150.
Prerequisites
An Oracle Data Guard implementation requires two DB systems, one containing the primary database and one
containing the standby database. When you enable Oracle Data Guard for a virtual machine DB system database, a
new DB system with the standby database is created and associated with the primary database. For a bare metal DB system, the DB system with the database that you want to use as the standby must already exist before you enable Oracle Data Guard.
Tip:

An Oracle Data Guard configuration on Oracle Cloud Infrastructure is limited to one standby database for each primary database.
Requirement details are as follows:
• Both DB systems must be in the same compartment.
• The DB systems must be the same shape type (for example, if the shape of the primary database is a virtual
machine, then the shape of the standby database can be any other virtual machine shape).
• The database versions and editions must be identical. Oracle Data Guard does not support Oracle Database
Standard Edition. (Active Data Guard requires Enterprise Edition - Extreme Performance.)
• The database version determines whether Active Data Guard is enabled. If you are using the BYOL licensing
model and if your license does not include Active Data Guard, then you must either use Oracle Database
Enterprise Edition - High Performance or set up Oracle Data Guard manually. See Using Oracle Data Guard with
the Database CLI on page 1470.
• If your primary and standby databases are in the same region, then both must use the same virtual cloud network
(VCN).
• If your primary and standby databases are in different regions, then you must peer the virtual cloud networks
(VCNs) for each database. See Remote VCN Peering (Across Regions) on page 3308.
• Configure the security list ingress and egress rules for the subnets of both DB systems in the Oracle Data Guard
association to enable TCP traffic to move between the applicable ports. Ensure that the rules you create are
stateful (the default).
For example, if the subnet of the primary DB system uses the source CIDR 10.0.0.0/24 and the subnet of the
standby DB system uses the source CIDR 10.0.1.0/24, then create rules as shown in the subsequent example.
Note:

The egress rules in the example show how to enable TCP traffic only for port
1521, which is a minimum requirement for Oracle Data Guard to work. If
TCP traffic is already enabled on all of your outgoing ports (0.0.0.0/0), then
you do not need to explicitly add these specific egress rules.
Security List for Subnet on the Primary DB System

Ingress Rules:
Stateless: No
Source: 10.0.1.0/24
IP Protocol: TCP
Source Port Range: All
Destination Port Range: 1521
Allows: TCP traffic for ports: 1521

Egress Rules:

Stateless: No
Destination: 10.0.1.0/24
IP Protocol: TCP
Source Port Range: All
Destination Port Range: 1521
Allows: TCP traffic for ports: 1521

Security List for Subnet on the Standby DB System

Ingress Rules:

Stateless: No

Source: 10.0.0.0/24
IP Protocol: TCP
Source Port Range: All
Destination Port Range: 1521
Allows: TCP traffic for ports: 1521

Egress Rules:

Stateless: No
Destination: 10.0.0.0/24
IP Protocol: TCP
Source Port Range: All
Destination Port Range: 1521
Allows: TCP traffic for ports: 1521

For information about creating and editing rules, see Security Lists on page 2876.
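If you prefer to script these rules, the OCI CLI can update a security list with rule definitions supplied as JSON. The sketch below is only an illustration for the primary DB system's subnet: the security list OCID and file names are placeholders, the rule lists you pass replace the existing ones (so merge in any rules you already rely on), and you should confirm the parameter names against the current CLI reference before using them.

# ingress-1521.json (allow TCP 1521 from the standby subnet)
[ { "protocol": "6", "source": "10.0.1.0/24", "isStateless": false,
    "tcpOptions": { "destinationPortRange": { "min": 1521, "max": 1521 } } } ]

# egress-1521.json (allow TCP 1521 to the standby subnet)
[ { "protocol": "6", "destination": "10.0.1.0/24", "isStateless": false,
    "tcpOptions": { "destinationPortRange": { "min": 1521, "max": 1521 } } } ]

# Apply both rule lists to the security list used by the primary DB system's subnet
oci network security-list update --security-list-id <primary_subnet_security_list_ocid> \
  --ingress-security-rules file://ingress-1521.json \
  --egress-security-rules file://egress-1521.json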
Availability Domain and Fault Domain Considerations for Oracle Data Guard
Oracle recommends that the DB system that contains the standby database be in a different availability domain
from that of the DB system containing the primary database to improve availability and disaster recovery. If you
enable Oracle Data Guard for a database and your standby database is in the same availability domain as the
primary database (either by choice, or because you are working in a single availability domain region), then Oracle
recommends that you place the standby database in a different fault domain from that of the primary database.
Note:

If your primary and standby databases are two-node Oracle RAC databases
and both are in the same availability domain, then only one of the two nodes
of the standby database can be in a fault domain that does not include any
other nodes from either the primary or standby database. This is because
each availability domain has only three fault domains, and the primary and
standby databases have a combined total of four nodes. For more information
on availability domains and fault domains, see Regions and Availability
Domains on page 182.
Working with Oracle Data Guard
Oracle Data Guard ensures high availability, data protection, and disaster recovery for enterprise data. The Oracle
Cloud Infrastructure Database Data Guard implementation requires two databases: one in a primary role and one
in a standby role. The two databases make an Oracle Data Guard association. Most of your applications access the
primary database, while the standby database is a transactionally consistent copy of the primary database.
Oracle Data Guard maintains the standby database by transmitting and applying redo data from the primary database.
If the primary database becomes unavailable, then you can use Oracle Data Guard to switch or fail over the standby
database to the primary role.
Tip:

The standby databases in Oracle Cloud Infrastructure Database are physical
standbys.

Switchover
A switchover reverses the primary and standby database roles. Each database continues to participate in the Oracle
Data Guard association in its new role. A switchover ensures no data loss. Performing planned maintenance on a DB
system with an Oracle Data Guard association is typically done by switching the primary database to the standby role,
performing maintenance on the standby database, and then switching it back to the primary role.

Failover
A failover transitions the standby database into the primary role after the existing primary database fails or becomes
unreachable. A failover might result in some data loss when you use Maximum Performance protection mode.

Reinstate
Reinstates a database into the standby role in an Oracle Data Guard association. You can use the reinstate command
to return a failed database into service after correcting the cause of failure.
Note:

You cannot terminate a primary database that has an Oracle Data Guard
association with a standby database until you first delete the standby
database. Alternatively, you can switch over the primary database to the
standby role, and then terminate it.
You cannot terminate a DB system that includes databases that have Oracle
Data Guard enabled. To remove the Oracle Data Guard association:
• For a bare metal DB system database you can terminate the standby
database.
• For a virtual machine DB system database you can terminate the standby
DB system.
Terminating a DB System with Data Guard
If you want to terminate a DB system that has Data Guard enabled, you must terminate the standby DB system before
terminating the primary DB system. If you try to terminate a primary DB system that has a standby, the terminate
operation will not complete. See To terminate a DB system on page 1387 for instructions on terminating a DB
system.
Using the Console
Use the Console to enable an Oracle Data Guard association between databases, change the role of a database in an
Oracle Data Guard association using either a switchover or a failover operation, and reinstate a failed database.
When you enable Oracle Data Guard, a separate Oracle Data Guard association is created for the primary and the
standby databases.
To enable Oracle Data Guard on a bare metal DB system
Note:

If you do not already have bare metal DB systems with the databases to
assume the primary and standby roles, then create them as described in
Creating Bare Metal and Virtual Machine DB Systems on page 1371. A
new DB system includes an initial database.
1. Open the navigation menu. Under Oracle Database, click Bare Metal, VM, and Exadata.
2. Choose the Compartment that contains a DB system with a database for which you want to enable Oracle Data
Guard.
3. Click the name of a DB system that contains a database you want to assume the primary role.
4. On the DB System Details page, in the Databases section, click the name of the database you want to make
primary.
5. On the Database Details page, in the Resources section, click Data Guard Associations.
6. In the Data Guard Associations section, click Enable Data Guard.

7. On the Enable Data Guard page, configure the Oracle Data Guard association.
• The settings in the Data Guard association details section are read only and cannot be changed.
• Protection mode: The protection mode used for this Oracle Data Guard association is set to Maximum
Performance.
• Transport type: The redo transport type used for this Oracle Data Guard association is set to Async
(asynchronous).
• In the Select peer DB system section, provide the following information for the standby database to obtain a
list of available DB systems in which to locate the standby database:
• Region: Select a region where you want to locate the standby database. The region where the primary
database is located is selected, by default. You can choose to locate the standby database in a different
region. The hint text associated with this field tells you in which region the primary database is located.
• Availability domain: Select an availability domain for the standby database. The hint text associated with
this field tells you in which availability domain the primary database is located.
Note:

If your standby database is in a region with only one availability
domain, or if you choose to provision your standby database in the
same availability domain as your primary database, then the system
asks you to specify a number of optional fault domains from the Fault
domain drop-down for your standby database. Oracle recommends
that you locate your standby database in a different fault domain from
your primary database. For more information on fault domains, see
Regions and Availability Domains on page 182.
• Shape: Select the shape of the DB system in which to locate the standby database. The shape can be
another bare metal DB system shape.
• Peer DB system: Select a DB system in which to locate the standby database.
• In the Configure standby database section, enter the database administrator password of the primary
database in the Database password field. Use this same database administrator password for the standby
database.
8. Click Enable Data Guard.
When you create the association, the details for a database and its peer display their respective roles as Primary or
Standby.
To enable Oracle Data Guard on a virtual machine DB system
If you do not already have a virtual machine DB system with a database that will assume the primary role, then create
a database as described in Creating Bare Metal and Virtual Machine DB Systems on page 1371. A new DB system
includes an initial database. When you enable Oracle Data Guard on the primary database, a virtual machine DB
system gets created for the standby database.
1. Open the navigation menu. Under Oracle Database, click Bare Metal, VM, and Exadata.
2. Choose the Compartment that contains a DB system with a database for which you want to enable Oracle Data
Guard.
3. Click the name of a DB system that contains a database you want to assume the primary role.
4. On the DB System Details page, in the Databases section, click the name of the database you want to make
primary.
5. On the Database Details page, in the Resources section, click Data Guard Associations.
6. In the Data Guard Associations section, click Enable Data Guard.

7. On the Enable Data Guard page, configure your Oracle Data Guard association.
• The settings in the Data Guard association details section are read only and cannot be changed.
• Protection mode: The protection mode used for this Oracle Data Guard association is set to Maximum
Performance.
• Transport type: The redo transport type used for this Oracle Data Guard association is set to Async
(asynchronous).
• Specify the following values in the Create peer DB system section for the standby database:
• Display name: Enter a non-unique display name for the DB system containing the standby database.
An Oracle Cloud Identifier (OCID) will uniquely identify the DB system. Avoid entering confidential
information.
• Region: Select a region where you want to locate the standby database. The region where the primary
database is located is selected, by default. You can choose to locate the standby database in a different
region. The hint text associated with this field tells you in which region the primary database is located.
• Availability domain: Select an availability domain for the standby database. The hint text associated with
this field tells you in which availability domain the primary database is located.
Note:

If your standby database is in a region with only one availability
domain, or if you choose to provision your standby database in the
same availability domain as your primary database, then the system
asks you to specify a number of optional fault domains from the Fault
domain drop-down for your standby database. Oracle recommends
that you locate your standby database in a different fault domain from
your primary database. For more information on fault domains, see
Regions and Availability Domains on page 182.
• Select a shape: Choose a shape to use to create the standby DB system.
To specify a shape other than the default, click Change Shape and select an available virtual machine
shape from the list.
• In the Specify the network information section:
• Virtual cloud network in compartment: Select a virtual cloud network from the drop-down in which
to create the DB system containing the standby database. Click Change Compartment to select a
virtual cloud network in a different compartment.
• Subnet in compartment: The subnet to which the DB system containing the standby database attaches.
Click Change Compartment to select a subnet in a different compartment.
Do not use a subnet that overlaps with 192.168.16.16/28, which is used by the Oracle Clusterware
private interconnect on the database instance. Specifying an overlapping subnet will cause the private
interconnect to malfunction.
• Configure network security groups (NSGs): Select to add a number of network security groups,
which function as virtual firewalls, enabling you to apply a set of security rules that control inbound
and outbound traffic for the DB system containing the standby database. For more information, see
Network Security Groups on page 2867 and Network Setup for DB Systems on page 1360.
Note:

• If you choose a subnet with a security list, then the security
rules for the DB system will be a combination of the rules in the
security list and the network security groups.
• You must select a virtual cloud network to be able to assign
network security groups to your DB system.
• Hostname prefix: Enter a host name prefix for the DB system that contains the standby database.
The host name must begin with an alphabetic character, and can contain only alphanumeric characters
and hyphens (-). The maximum number of characters allowed for bare metal and virtual machine DB
systems is 16.
Important:

If the host name within the subnet is not unique, then provisioning
of the DB system fails.
• Host domain name: The domain name for the DB system. If the subnet you selected uses the Oracle-
provided internet and virtual cloud network resolver for DNS name resolution, then this field displays
the domain name for the subnet, which you cannot change. Otherwise, you can enter a domain name.
Hyphens (-) are not permitted.
• Host and domain URL: Combines the host and domain names to display the fully-qualified domain
name for the database. The maximum length is 64 characters.
• In the Configure standby database section, provide the database administrator password:
• Database Admin Password: Enter the admin password of the primary database. The same password is
used for the standby database.
• Confirm Database Admin Password: Re-enter the Database Admin Password you specified.
8. Click Enable Data Guard.
When you create the association, the details for a database and its peer display their respective roles as Primary or
Standby.
Virtual machine shapes
Virtual machine X7 shapes:
• VM.Standard2.1: Provides a 1-node DB system with 1 core.
• VM.Standard2.2: Provides a 1- or 2-node DB system with 2 cores.
• VM.Standard2.4: Provides a 1- or 2-node DB system with 4 cores.
• VM.Standard2.8: Provides a 1- or 2-node DB system with 8 cores.
• VM.Standard2.16: Provides a 1- or 2-node DB system with 16 cores.
• VM.Standard2.24: Provides a 1- or 2-node DB system with 24 cores.
Virtual machine X5 shapes:
• VM.Standard1.1: Provides a 1-node DB system with 1 core.
• VM.Standard1.2: Provides a 1- or 2-node DB system with 2 cores.
• VM.Standard1.4: Provides a 1- or 2-node DB system with 4 cores.
• VM.Standard1.8: Provides a 1- or 2-node DB system with 8 cores.
• VM.Standard1.16: Provides a 1- or 2-node DB system with 16 cores.
Note:

• X5-based shape availability is limited to monthly universal credit
customers existing on or before November 9th, 2018, in the US West
(Phoenix), US East (Ashburn), and Germany Central (Frankfurt) regions.
• VM.Standard1.1 and VM.Standard2.1 shapes cannot be used for 2-node
RAC clusters.
To perform a database switchover
You initiate a switchover operation by using the Data Guard association of the primary database.
1. Open the navigation menu. Under Oracle Database, click Bare Metal, VM, and Exadata.
2. Choose the Compartment that contains the DB system with the primary database you want to switch over.
3. Click the DB system name, and then click the name of the primary database.
4. Under Resources, click Data Guard Associations.

5. For the Data Guard association on which you want to perform a switchover, click the Actions icon (three dots),
and then click Switchover.
6. In the Switchover Database dialog box, enter the database admin password, and then click OK.
This database should now assume the role of the standby, and the standby should assume the role of the primary in
the Data Guard association.
To perform a database failover
You initiate a failover operation by using the Data Guard association of the standby database.
1. Open the navigation menu. Under Oracle Database, click Bare Metal, VM, and Exadata.
2. Choose the Compartment that contains the DB system with the primary database's peer standby you want to fail
over to.
3. Click the DB system name, and then click the name of the standby database.
4. Under Resources, click Data Guard Associations.
5. For the Data Guard association on which you want to perform a failover, click Failover.
6. In the Failover Database dialog box, enter the database admin password, and then click OK.
This database should now assume the role of the primary, and the old primary's role should display as Disabled
Standby.
To reinstate a database
After you fail over a primary database to its standby, the standby assumes the primary role and the old primary is
identified as a disabled standby. After you correct the cause of failure, you can reinstate the failed database as a
functioning standby for the current primary by using its Data Guard association.
Note:

Before you can reinstate a 12.2 database, you must perform some steps on the
database host to stop the database or start it in MOUNT mode.
Set your ORACLE_UNQNAME environment variable to the value of
the Database Unique Name (as seen in the Console), and then run these
commands:

srvctl stop database -d db-unique-name -o abort
srvctl start database -d db-unique-name -o mount

1. Open the navigation menu. Under Oracle Database, click Bare Metal, VM, and Exadata.
2. Choose the Compartment that contains the DB system with the failed database you want to reinstate.
3. Click the DB system name, and then click the database name.
4. Under Resources, click Data Guard Associations.
5. For the Data Guard association on which you want to reinstate this database, click the Actions icon (three dots),
and then click Reinstate.
6. In the Reinstate Database dialog box, enter the database admin password, and then click OK.
This database should now be reinstated as the standby in the Data Guard association.
To terminate a Data Guard association on a bare metal DB system
On a bare metal DB system, you remove a Data Guard association by terminating the standby database.
1. Open the navigation menu. Under Oracle Database, click Bare Metal, VM, and Exadata.
2. Choose the Compartment that contains the DB system that includes the standby database you want to terminate.
3. Click the DB system name.
4. For the standby database you want to terminate, click the Actions icon (three dots), and then click Terminate.
5. In the Terminate Database dialog box, enter the name of the database, and then click OK.

To terminate a Data Guard association on a virtual machine DB system


On a virtual machine DB system, you remove a Data Guard association by terminating the standby DB system.
1. Open the navigation menu. Under Oracle Database, click Bare Metal, VM, and Exadata.
2. Choose the Compartment that contains the standby DB system that you want to terminate.
3. Click the DB system name, click the Actions icon (three dots), and then click Terminate.
4. Confirm when prompted.
The DB system's icon indicates Terminating.
Using the API
For information about using the API and signing requests, see REST APIs on page 4409 and Security Credentials on
page 181. For information about SDKs, see Software Development Kits and Command Line Interface on page 4262.
Use these API operations to manage Data Guard associations:
• CreateDataGuardAssociation
• GetDataGuardAssociation
• ListDataGuardAssociations
• SwitchoverDataGuardAssociation
• FailoverDataGuardAssociation
• ReinstateDataGuardAssociation
• DeleteDbHome - To terminate a bare metal DB system Data Guard association, delete the standby database.
• TerminateDbSystem - To terminate a virtual machine DB system Data Guard association, terminate the standby
DB system.
For the complete list of APIs for the Database service, see Database Service API.
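As an illustration of how these operations map onto a tool, the following OCI CLI sketch performs a switchover; the command group and parameter names are written from memory and should be treated as assumptions to verify with oci db data-guard-association switchover --help.

# <database_ocid> is the OCID of the current primary database; <dg_association_ocid> comes from
# ListDataGuardAssociations for that database.
oci db data-guard-association switchover \
  --database-id <database_ocid> \
  --data-guard-association-id <dg_association_ocid> \
  --peer-database-admin-password <admin_password>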

Using Oracle Data Guard with the Database CLI


Oracle recommends that you use the Console instead of the database CLI to set up and work with Data Guard in
Oracle Cloud Infrastructure. See Using Oracle Data Guard on page 1462 for information and instructions.
Note:

This topic is not applicable to Exadata DB systems. You can manually
configure Data Guard on Exadata DB systems using native Oracle Database
utilities and commands; however, this topic explains how to set up primary
and standby databases using dbcli, which is not available on Exadata
DB systems. For more information, see Data Guard Concepts and
Administration for version 18.1, 12.2, 12.1, or 11.2.
This topic explains how to use the database CLI to set up Data Guard with Fast-Start Failover (FSFO) in Oracle
Cloud Infrastructure. The following sections explain how to prepare the primary and standby databases, and then
configure Data Guard to transmit redo data from the primary database and apply it to the standby database.
Note:

This topic assumes that you are familiar with Data Guard and FSFO. To learn
more about them, see documentation at the Oracle Document Portal.
Prerequisites
To perform the procedures in this topic, you'll need the following information for the primary and standby databases.
• db_name (or oracle_sid)
• db_unique_name
• oracle home directory (or database home)

To find the database information


After you've launched the primary and standby DB systems and created databases as described later in this topic, you
can use the CLI on those systems to find the needed database information.
1. SSH to the DB System.

ssh -i <private_key_path> opc@<db_system_ip_address>


2. Log in as opc and then sudo to the root user. Use sudo su - with a hyphen to invoke the root user's profile,
which will set the PATH to the dbcli directory (/opt/oracle/dcs/bin).

login as: opc

[opc@dbsys ~]$ sudo su -


3. To find the db_name (or oracle_sid) and db_uniqueName, run the dbcli list-databases -j command.

[root@dbsys ~]# dbcli list-databases -j


[ {
"id" : "80ad855a-5145-4f8f-a08f-406c5e4684ff",
"name" : "dbtst",
"dbName" : "dbtst",
"databaseUniqueName" : "dbtst_phx1cs",
"dbVersion" : "12.1.0.2",
"dbHomeId" : "2efe7af7-0b70-4e9b-ba8b-71f11c6fe287",
"instanceOnly" : false,
.
.
.
4. To find the oracle home directory (or database home), run the dbcli list-dbhomes command. If there are
multiple database homes on the DB system, use the one that matches the "dbHomeId" in the dbcli list-
databases -j command output shown above.

[root@dbtst ~]# dbcli list-dbhomes

ID                                       Name                 DB Version                             Home Location                               Status
---------------------------------------- -------------------- -------------------------------------- ------------------------------------------- ----------
2efe7af7-0b70-4e9b-ba8b-71f11c6fe287     OraDB12102_home1     12.1.0.2.160719 (23739960, 23144544)   /u01/app/oracle/product/12.1.0.2/dbhome_1   Configured
33ae99fe-5413-4392-88da-997f3cd24c0f     OraDB11204_home1     11.2.0.4.160719 (23054319, 23054359)   /u01/app/oracle/product/11.2.0.4/dbhome_1   Configured

Creating a Primary DB System


If you don't already have a primary DB system, create one as described in Creating Bare Metal and Virtual Machine
DB Systems on page 1371. The DB system will include an initial database. You can create additional databases by
using the dbcli create-database command available on the DB system (see Database Commands on page 1506).
Creating a Standby DB System
Note:

The standby database must have the same db_name as the primary database,
but it must have a different db_unique_name. If you use the same database
name for the standby and primary, you will have to delete the database
from the standby DB system by using the dbcli delete-database
command before you can run the dbcli create-database command
described below. Deleting and creating the database will take several minutes
to complete. The dbcli commands must be run as the root user.
1. Create a standby DB system as described in Creating Bare Metal and Virtual Machine DB Systems on page
1371 and wait for the DB system to finish provisioning and become available.
You can create the standby DB system in a different availability domain from the primary DB system for
availability and disaster recovery purposes (this is strongly recommended). You can create the standby DB system
in the primary DB system's cloud network so that both systems are in a single, routable network.
2. SSH to the DB System.

ssh -i <private_key_path> opc@<db_system_ip_address>


3. Log in as opc and then sudo to the root user. Use sudo su - with a hyphen to invoke the root user's profile,
which will set the PATH to the dbcli directory (/opt/oracle/dcs/bin).

login as: opc

[opc@dbsys ~]$ sudo su -


4. The DB system will include an initial database, but you'll need to create a standby database by using the dbcli
create-database command with the --instanceonly parameter. This parameter creates only the
database storage structure and starts the database in nomount mode (no other database files are created).
When using --instanceonly, both the --dbname and --adminpassword parameters are required and
they should match the dbname and admin password of the primary database to avoid confusion.
The following sample command prompts for the admin password and then creates a storage structure for a
database named dbname.

[root@dbsys ~]# dbcli create-database --dbname <same as primary dbname> \
  --databaseUniqueName <different from primary uniquename> --instanceonly --adminpassword

If you are using pluggable databases, also specify the --cdb parameter.
For complete command syntax, see Database Commands on page 1506.
5. Wait a few minutes for the dbcli create-database command to create the standby database.
You can use the dbcli list-jobs command to verify that the creation job ran successfully, and then the
dbcli list-databases command to verify that the database is configured.
Preparing the Primary DB System
To prepare the primary DB system, you'll need to configure static listeners, update tnsnames.ora, and configure some
database settings and parameters.

Configuring the Static Listeners


Create static listeners to be used by RMAN and Data Guard Broker.
1. SSH to the primary DB system, log in as the opc or root user, and sudo to the grid OS user.

sudo su - grid

2. Edit /u01/app/<version>/grid/network/admin/listener.ora and add the following content to
it. The first static listener shown here is optional. The second DGMGRL static listener is optional for version
12.1 or later databases, but required for version 11.2 databases.

SID_LIST_LISTENER=
(SID_LIST=
(SID_DESC=
(SDU=65535)
(GLOBAL_DBNAME = <primary_db_unique_name>.<primary_db_domain>)
(SID_NAME = <primary_oracle_sid>)
(ORACLE_HOME=<oracle_home_directory>)
(ENVS="TNS_ADMIN=<oracle_home_directory>/network/admin")
)
(SID_DESC=
(SDU=65535)
(GLOBAL_DBNAME = <primary_db_unique_name>_DGMGRL.<primary_db_domain>)
(SID_NAME = <primary_oracle_sid>)
(ORACLE_HOME=<oracle_home_directory>)
(ENVS="TNS_ADMIN=<oracle_home_directory>/network/admin")
)
)
3. Save your changes and then restart the listener.

$ srvctl stop listener


$ srvctl start listener

Adding Net Service Names to tnsnames.ora


As the oracle user, edit $ORACLE_HOME/network/admin/tnsnames.ora and add the standby database net
service name to it.

<standby db_unique_name> =
(DESCRIPTION =
(SDU=65535)
(ADDRESS = (PROTOCOL = TCP)(HOST = <standby_server>.<domain>) (PORT =
1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = <standby db_unique_name>.<standby db_domain>)
)
)

The sample above assumes that name resolution is working and that the <standby_server>.<domain> is
resolvable at the primary database. You can also use the private IP address of the standby server if the IP addresses
are routable within a single cloud network (VCN).
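For example, if you choose to use the standby's private IP address instead of a resolvable host name, the entry might look like the following (the address 10.0.1.24 is purely illustrative):

<standby db_unique_name> =
  (DESCRIPTION =
    (SDU=65535)
    (ADDRESS = (PROTOCOL = TCP)(HOST = 10.0.1.24) (PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = <standby db_unique_name>.<standby db_domain>)
    )
  )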

Configuring Primary Database Parameters


Tip:

If the primary and standby hosts have different directory structures, you
might need to set additional parameters that are not discussed here, such as
the log_file_name_convert parameter. See the RMAN documentation
for more information about how to create standbys for hosts with different
directory structures.
1. As the oracle user, enable automatic standby file management.

SQL> alter system set standby_file_management=AUTO;

2. Identify the Broker configuration file names and locations. The commands used for this depend on the type of
database storage. If you're not sure of the database storage type, check it by using the database CLI on the DB
system (see Database Commands on page 1506).
For ACFS database storage, use the following commands to set the Broker configuration files.

SQL> alter system set dg_broker_config_file1='/u02/app/oracle/oradata/<Primary db_unique_name>/dbs/dr1<Primary db_unique_name>.dat';
SQL> alter system set dg_broker_config_file2='/u02/app/oracle/oradata/<Primary db_unique_name>/dbs/dr2<Primary db_unique_name>.dat';

For ASM database storage, use the following commands to set the Broker configuration files.

SQL> alter system set dg_broker_config_file1='+DATA/<Primary db_unique_name>/dr1<db_unique_name>.dat';
SQL> alter system set dg_broker_config_file2='+DATA/<Primary db_unique_name>/dr2<db_unique_name>.dat';
3. Enable Broker DMON process for the database.

SQL> alter system set dg_broker_start=true;


4. Force database logging for all database transactions.

SQL> alter database force logging ;


5. Add Standby Redo Logs (SRLs), based on the Online Redo Logs (ORLs). On a newly launched DB system, there
will be three ORLs of size 1073741824, so create four SRLs of the same size.
You can use the query below to determine the number and size (in bytes) of the ORLs.

SQL> select group#, bytes from v$log;

    GROUP#      BYTES
---------- ----------
         1 1073741824
         2 1073741824
         3 1073741824

All of the ORLs must be the same size.
The SRLs must be the same size as the ORLs, but there must be at least one more SRL than there are ORLs. In the
example above, there are three ORLs, so four SRLs are required. Specify the current number of redo logs plus one, and
use the same size as the redo logs.

SQL> alter database add standby logfile thread 1 size <size>;

There should be only one member in the SRL group (by default, a DB system is created with only one member per
SRL group). To ensure this, you can name the file with the following syntax.

alter database add standby logfile thread 1 group 4 (<logfile name with
full path>) size 1073741824, group 5(<logfile name with full path>) size
1073741824 ...

For ASM/OMF configurations, the above command uses the diskgroup instead of <logfile name with full path>.

alter database add standby logfile thread 1 group 4 (+RECO) size 1073741824, group 5 (+RECO) size 1073741824 ...
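As a concrete illustration for an ASM/OMF layout with the three 1073741824-byte ORLs of a newly launched DB system (groups 1 through 3), the fully expanded command would add four SRL groups; the group numbers and the quoted '+RECO' disk group are assumptions based on the output shown above:

SQL> alter database add standby logfile thread 1
     group 4 ('+RECO') size 1073741824,
     group 5 ('+RECO') size 1073741824,
     group 6 ('+RECO') size 1073741824,
     group 7 ('+RECO') size 1073741824;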

Tip:

ORLs and SRLs should be sized so that log switches do not occur
more frequently than every 10 minutes. This requires knowledge of
the application and may need to be adjusted after deployment. For
more information, see Use Standby Redo Logs and Configure Size
Appropriately.
6. Verify that you created the correct number of SRLs.

SQL> select group#, bytes from v$standby_log;


7. Make sure the database is in ARCHIVELOG mode.

SQL> archive log list


8. Enable database FLASHBACK. The minimum recommended value for db_flashback_retention_target is 120
minutes.

SQL> alter database flashback on ;


SQL> alter system set db_flashback_retention_target=120;
9. Perform a single redo log switch to activate archiving if the database is newly created. (At least one log must be
archived prior to running the RMAN duplicate.)

SQL> alter system switch logfile;

Preparing the Standby Database


Before you prepare the standby database, make sure the database home on the standby is the same version as on the
primary. (If the primary and standby databases are both newly created with the same database version, the database
homes will be the same.) If it is not, create a database home that is the same version. You can use the dbcli
list-dbhomes command to verify the versions and the dbcli create-dbhome command to create a new
database home as needed (see Dbhome Commands on page 1519).
To prepare the standby DB system, you'll need to configure static listeners, update tnsnames.ora, configure TDE
Wallet, create a temporary password file, verify connectivity, run RMAN DUPLICATE, enable FLASHBACK, and
then create the database service.

Configuring the Static Listeners


Create static listeners to be used by RMAN and Data Guard Broker.
1. SSH to the standby DB system, log in as the opc or root user, and sudo to the grid OS user.

sudo su - grid
2. Append the following content to /u01/app/<db_version>/grid/network/admin/listener.ora.
The first static listener shown below is required for RMAN DUPLICATE. The second DGMGRL static listener is
optional for database versions 12.2.0.1 and 12.1.0.2, but required for database version 11.2.0.4.

SID_LIST_LISTENER=
(SID_LIST=
(SID_DESC=
(SDU=65535)
(GLOBAL_DBNAME = <standby db_unique_name>.<standby db_domain>)
(SID_NAME = <standby oracle_sid>)
(ORACLE_HOME=<oracle home directory>)
(ENVS="TNS_ADMIN=<oracle home directory>/network/admin")
)
(SID_DESC=
(SDU=65535)
(GLOBAL_DBNAME = <standby db_unique_name>_DGMGRL.<standby db_domain>)
(SID_NAME = <standby oracle_sid>)
(ORACLE_HOME=<oracle home directory>)
(ENVS="TNS_ADMIN=<oracle home directory>/network/admin")
)
)
3. Restart the listener.

$ srvctl stop listener


$ srvctl start listener
4. Verify that the static listeners are available. The sample output below is for database version 12.1.0.2. Note that
the ...status UNKNOWN messages are expected at this point.

$ lsnrctl status

LSNRCTL for Linux: Version 12.1.0.2.0 - Production on 29-SEP-2016 21:09:25

Copyright (c) 1991, 2014, Oracle. All rights reserved.

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER)))
STATUS of the LISTENER
------------------------
Alias LISTENER
Version TNSLSNR for Linux: Version 12.1.0.2.0 -
Production
Start Date 29-SEP-2016 21:09:19
Uptime 0 days 0 hr. 0 min. 5 sec
Trace Level off
Security ON: Local OS Authentication
SNMP OFF
Listener Parameter File /u01/app/12.1.0.2/grid/network/admin/
listener.ora
Listener Log File /u01/app/grid/diag/tnslsnr/dg2/listener/alert/
log.xml
Listening Endpoints Summary...
(DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=10.0.1.24)(PORT=1521)))
Services Summary...
Service "dg2_phx2hx.oratst.org" has 1 instance(s).
Instance "dg2", status UNKNOWN, has 1 handler(s) for this service...
Service "dg2_phx2hx_DGMGRL.oratst.org" has 1 instance(s).
Instance "dg2", status UNKNOWN, has 1 handler(s) for this service...
The command completed successfully

Adding Net Service Names to tnsnames.ora


As the oracle user, add the primary and standby database net service names to $ORACLE_HOME/network/admin/
tnsnames.ora. $ORACLE_HOME is the database home where the standby database is running.

<Primary db_unique_name> =
(DESCRIPTION =
(SDU=65535)
(ADDRESS = (PROTOCOL = TCP)(HOST = <primary_server>.<domain>) (PORT =
1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = <primary db_unique_name>.<primary db_domain>)
)
)

<Standby db_unique_name> =
(DESCRIPTION =
(SDU=65535)
(ADDRESS = (PROTOCOL = TCP)(HOST = <standby_server>.<domain>) (PORT =
1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = <standby db_unique_name>.<db_domain>)
)
)

Copying the TDE Wallets to the Standby System


Copy the TDE wallet files from the primary DB system to standby DB system using SCP. The following sample
command assumes the SCP command is being run by the oracle OS user and that the private key for oracle has been
created and exists on the host where SCP is being run.

$ scp -i <private key> primary_server:/opt/oracle/dcs/commonstore/wallets/tde/<primary db_unique_name>/* \
  standby_server:/opt/oracle/dcs/commonstore/wallets/tde/<standby db_unique_name>

Setting Up the Standby System Configuration


As the oracle user, create the following directory for database version 11.2.0.4. This step is optional for version
12.2.0.1 and version 12.1.0.2.

[oracle@dbsys ~]$ mkdir -pv /u03/app/oracle/redo/<standby db_unique_name uppercase>/controlfile

Creating the Audit File Destination


As the oracle user, create the following directory to use as the audit file destination.

[oracle@dbsys ~]$ mkdir -p /u01/app/oracle/admin/<db_name>/adump

Otherwise, the RMAN duplicate command used later will fail.

Creating a Temporary Password File


As the oracle user, create a temporary password file.

[oracle@dbsys ~]$ orapwd file=$ORACLE_HOME/dbs/orapw<standby oracle_sid> \
  password=<admin password for primary> entries=5

The password must be the same as the admin password of the primary database. Otherwise, the RMAN duplicate step
below will fail with: RMAN-05614: Passwords for target and auxiliary connections must
be the same when using active duplicate.

Verifying the Standby Database is Available


1. As the oracle user, set the environment variables.

[oracle@dbsys ~]$ . oraenv


<enter the db_name>
2. Replace $ORACLE_HOME/dbs/init<standby sid_name>.ora with the following content:

db_name=<Primary db_name>
db_unique_name=<standby db_unique_name>
db_domain=<standby db_domain>
3. Remove the spfile from the standby.

/u02/app/oracle/oradata/<standby db_unique_name>/dbs/spfile$ORACLE_SID.ora

The database needs to be started in nomount mode with no spfile specified, but the original init file contains an
spfile parameter which will prevent the RMAN duplicate step from working.
4. Set the ORACLE_UNQNAME environment variable to point to your DB_UNIQUE_NAME.

$ export ORACLE_UNQNAME=<db_unique_name>

Important:

If you do not perform this step, the wallet will not be opened, and running
the RMAN DUPLICATE command in the subsequent step will fail.
5. The dbcli create-database --instanceonly command used earlier opens the standby database as
a primary in read/write mode, so the database needs to be brought down before proceeding to the nomount step
below.

$ sqlplus / as sysdba
SQL> shutdown immediate
6. Start the database in nomount mode.

SQL> startup nomount

Verifying the Database Connections


Verify the connection between the primary and standby databases.
1. Make sure that the listener port 1521 is open in the security list(s) used for the primary and standby DB systems.
For more information, see Updating the Security List for the DB System on page 1435.

2. From the primary database, connect to standby database.

$ sqlplus sys/<password>@<standby net service name> as sysdba


3. From standby database, connect to primary database.

$ sqlplus sys/<password>@<primary net service name> as sysdba

Running the RMAN DUPLICATE Command


Run the RMAN DUPLICATE command on the standby DB system, as the oracle user.
If the primary database is large, you can allocate additional channels to improve performance. For a newly installed
database, one channel typically runs the database duplication in a couple of minutes.
Make sure that there are no errors generated by the RMAN DUPLICATE command. If errors occur, restart the
database using the init.ora file (not the spfile), because an spfile might have been generated under $ORACLE_HOME/dbs
as part of the RMAN DUPLICATE.
In the following examples, use lowercase for the <Standby db_unique_name> unless otherwise specified.
For ACFS storage layout, run the following commands.

$ rman target sys/<password>@<primary alias> auxiliary sys/<password>@<standby alias> log=rman.out

RMAN> run {
  allocate channel prim1 type disk;
  allocate auxiliary channel sby type disk;
  duplicate target database for standby from active database
    dorecover
    spfile
      parameter_value_convert '/<Primary db_unique_name>/','/<Standby db_unique_name>/','/<Primary db_unique_name uppercase>/','/<Standby db_unique_name uppercase>/'
      set db_unique_name='<Standby db_unique_name>'
      set db_create_file_dest='/u02/app/oracle/oradata/<Standby db_unique_name>'
      set dg_broker_config_file1='/u02/app/oracle/oradata/<Standby db_unique_name>/dbs/dr1<Standby db_unique_name>.dat'
      set dg_broker_config_file2='/u02/app/oracle/oradata/<Standby db_unique_name>/dbs/dr2<Standby db_unique_name>.dat'
      set dispatchers='(PROTOCOL=TCP) (SERVICE=<Standby db_unique_name>XDB)'
      set instance_name='<Standby db_unique_name>'
  ;
}

For ASM storage layout, run the following commands.

$ rman target sys/<password>@<primary alias> auxiliary sys/<password>@<standby alias> log=rman.out

RMAN> run {
  allocate channel prim1 type disk;
  allocate auxiliary channel sby type disk;
  duplicate target database for standby from active database
    dorecover
    spfile
      parameter_value_convert '/<Primary db_unique_name>/','/<Standby db_unique_name>/','/<Primary db_unique_name uppercase>/','/<Standby db_unique_name uppercase>/'
      set db_unique_name='<Standby db_unique_name>'
      set dg_broker_config_file1='+DATA/<Standby db_unique_name>/dr1<Standby db_unique_name>.dat'
      set dg_broker_config_file2='+DATA/<Standby db_unique_name>/dr2<Standby db_unique_name>.dat'
      set dispatchers='(PROTOCOL=TCP) (SERVICE=<Standby db_unique_name>XDB)'
      set instance_name='<Standby db_unique_name>'
  ;
}
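As noted above, a large primary database can benefit from additional channels. The following is only a sketch of how the extra allocation lines would look inside either run block; the channel names are arbitrary and the remaining clauses are unchanged from the examples above.

RMAN> run {
  allocate channel prim1 type disk;
  allocate channel prim2 type disk;               # extra channel against the primary
  allocate auxiliary channel sby1 type disk;
  allocate auxiliary channel sby2 type disk;      # extra channel against the standby
  duplicate target database for standby from active database
    dorecover
    spfile
      ...                                         # set clauses exactly as shown above
  ;
}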

Enabling Database FLASHBACK


1. As a Data Guard best practice, enable flashback and set db_flashback_retention_target to at least 120
minutes on both the primary and standby databases.

SQL> alter database flashback on;


SQL> alter system set db_flashback_retention_target=120;
2. Verify that the standby database is created properly.

SQL> select FORCE_LOGGING, FLASHBACK_ON, OPEN_MODE, DATABASE_ROLE, SWITCHOVER_STATUS,
     DATAGUARD_BROKER, PROTECTION_MODE from v$database;

Creating a Database Service


Oracle recommends creating a database service for the standby database by using srvctl.
For ACFS storage layout.
1. Create a shared directory and copy the spfile file to it.

$ mkdir -pv /u02/app/oracle/oradata/<Standby db_unique_name>/dbs
$ cp $ORACLE_HOME/dbs/spfile<standby oracle_sid>.ora /u02/app/oracle/oradata/<Standby db_unique_name>/dbs
2. Stop and remove the existing database service.

$ srvctl stop database -d <standby db_unique_name>

$ srvctl remove database -d <standby db_unique_name>


3. Create the database service.

$ srvctl add database -d <standby db_unique_name> -n <standby db_name> \
  -o $ORACLE_HOME -c SINGLE \
  -p '/u02/app/oracle/oradata/<standby db_unique_name>/dbs/spfile<standby db_name>.ora' \
  -x <standby hostname> -s "READ ONLY" -r PHYSICAL_STANDBY -i <db_name>
$ srvctl setenv database -d <standby db_unique_name> -t "ORACLE_UNQNAME=<standby db_unique_name>"
$ srvctl config database -d <standby db_unique_name>
4. Start the database service.

$ srvctl start database -d <standby db_unique_name>


5. Clean up the files from $ORACLE_HOME/dbs.

$ rm $ORACLE_HOME/dbs/spfile<standby oracle_sid>.ora
$ rm $ORACLE_HOME/dbs/init<standby oracle_sid>.ora

6. Create the $ORACLE_HOME/dbs/init<standby oracle_sid>.ora file to reference the new location of
the spfile file.

SPFILE='/u02/app/oracle/oradata/<standby db_unique_name>/dbs/spfile<standby db_name>.ora'
7. Stop the standby database and then start it by using srvctl.

srvctl stop database -d <standby db_unique_name>


srvctl start database -d <standby db_unique_name>

For ASM storage layout.


1. Consider generating the spfile file under +DATA.

SQL> create pfile='init<standby oracle_sid>.ora' from spfile ;


SQL> create spfile='+DATA' from pfile='init<standby oracle_sid>.ora' ;
2. Stop and remove the existing database service.

$ srvctl stop database -d <standby db_unique_name>

$ srvctl remove database -d <standby db_unique_name>


3. Create the database service.

$ srvctl add database -d <standby db_unique_name> -n <standby db_name> \
  -o $ORACLE_HOME -c SINGLE \
  -p '+DATA/<standby db_unique_name>/PARAMETERFILE/spfile.xxx.xxxxxx' \
  -x <standby hostname> -s "READ ONLY" -r PHYSICAL_STANDBY -i <db_name>
$ srvctl setenv database -d <standby db_unique_name> -t "ORACLE_UNQNAME=<standby db_unique_name>"
$ srvctl config database -d <standby db_unique_name>
4. Start the database service.

$ srvctl start database -d <standby db_unique_name>


5. Clean up the files from $ORACLE_HOME/dbs.

$ rm $ORACLE_HOME/dbs/init<standby oracle_sid>.ora
$ rm $ORACLE_HOME/dbs/spfile<standby oracle_sid>.ora
6. Create $ORACLE_HOME/dbs/init<standby oracle_sid>.ora file to reference the new location of the
spfile file.

SPFILE='+DATA/<standby db_unique_name>/PARAMETERFILE/spfile.xxx.xxxxxx'
7. Stop the database and start the standby database by using srvctl.

$ srvctl stop database -d <standby db_unique_name>
$ srvctl start database -d <standby db_unique_name>

Configuring Data Guard


Perform the following steps to complete the configuration of Data Guard and enable redo transport from the primary
database and redo apply in the standby database.

1. Run the dgmgrl command line utility from either the primary or standby DB system and connect to the primary
database using sys credentials.

DGMGRL> connect sys/<sys password>@<primary tns alias>


2. Create the Data Guard configuration, identifying the connect identifiers for the primary and standby databases.

DGMGRL> create configuration mystby as primary database is <primary db_unique_name> connect identifier is <primary tns alias>;
DGMGRL> add database <standby db_unique_name> as connect identifier is <standby tns alias> maintained as physical;
3. Enable Data Guard configuration.

DGMGRL> enable configuration;


4. Verify that Data Guard setup was done properly. Run the following SQL in both the primary and standby
databases.

SQL> select FORCE_LOGGING, FLASHBACK_ON, OPEN_MODE, DATABASE_ROLE, SWITCHOVER_STATUS,
     DATAGUARD_BROKER, PROTECTION_MODE from v$database;
5. Verify that Data Guard processes are initiated in the standby database.

SQL> select PROCESS,PID,DELAY_MINS from V$MANAGED_STANDBY;


6. Verify parameter configuration on primary and standby.

SQL> show parameter log_archive_dest_


SQL> show parameter log_archive_config
SQL> show parameter fal_server
SQL> show parameter log_archive_format
7. Verify that the Data Guard configuration is working. Specifically, make sure redo shipping and redo apply are
working and that the standby is not unreasonably lagging behind the primary.

DGMGRL> show configuration verbose


DGMGRL> show database verbose <standby db_unique_name>
DGMGRL> show database verbose <primary db_unique_name>

Any discrepancies, errors, or warnings should be resolved. You can also run a transaction on the primary and
verify that it's visible in the standby.
8. Verify that the Data Guard configuration is functioning as expected by performing switchover and failover in both
directions. Run show configuration after each operation and make sure there are no errors or warnings.
Caution:

This step is optional, based on your discretion. If for any reason the
configuration is not valid, the switchover and/or failover will fail and it
might be difficult or impossible to start the primary database. A recovery
of the primary might be required, which will affect availability.

DGMGRL> switchover to <standby db_unique_name>


DGMGRL> switchover to <primary db_unique_name>
#connect to standby before failover:

DGMGRL> connect sys/<sys password>@<standby db_unique_name>


DGMGRL> failover to <standby db_unique_name>
DGMGRL> reinstate database <primary db_unique_name>
#connect to primary before failover:

DGMGRL> connect sys/<sys password>@<primary db_unique_name>


DGMGRL> failover to <primary db_unique_name>
DGMGRL> reinstate database <standby db_unique_name>

Configuring Observer (Optional)


The best practice for high availability and durability is to run the primary, standby, and observer in separate
availability domains. The observer determines whether or not to fail over to a specific target standby database. The
server used for the observer requires the Oracle Client Administrator software, which includes Oracle SQL*Net and
the Broker.
1. Configure TNS alias names for both the primary and standby databases as described previously, and verify the
connection to both databases.
2. Change protection mode to either maxavailability or maxperformance (maxprotection is not supported for FSFO).
To enable maxavailability:

DGMGRL> edit database <standby db_unique_name> set property 'logXptMode'='SYNC';
DGMGRL> edit database <primary db_unique_name> set property 'logXptMode'='SYNC';
DGMGRL> edit configuration set protection mode as maxavailability;

To enable maxperformance:

DGMGRL> edit configuration set protection mode as maxperformance;
DGMGRL> edit database <standby db_unique_name> set property 'logXptMode'='ASYNC';
DGMGRL> edit database <primary db_unique_name> set property 'logXptMode'='ASYNC';

For maxperformance, the FastStartFailoverLagLimit property limits the maximum amount of permitted data loss; the
default is 30 seconds.
3. The following properties should also be considered. Run show configuration verbose to see their
current values.
• FastStartFailoverPmyShutdown
• FastStartFailoverThreshold
• FastStartFailoverTarget
• FastStartFailoverAutoReinstate
(Running show configuration will result in the following error until the observer is started: Warning :
ORA-16819: fast-start failover observer not started.)
4. Enable fast-start failover from Broker:

DGMGRL> Enable fast_start failover


5. Verify the fast-start failover and associated settings.

DGMGRL> show fast_start failover


6. Start the observer from Broker (it will run in the foreground, but can also be run in the background; see the
sketch after this procedure).

DGMGRL> start observer


7. Verify fast-start failover is enabled and without errors or warnings.

DGMGRL> show configuration verbose

8. Always test failover in both directions to ensure that everything is working as expected. Verify that FSFO is
running properly by performing a shutdown abort of the primary database.
The observer should start the failover to the standby database. If the protection mode is set to maxperformance, some
loss of data can occur, based on the FastStartFailoverLagLimit value.
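If you prefer to run the observer in the background rather than in the interactive DGMGRL session from step 6, one common pattern is to pass the START OBSERVER command directly to dgmgrl. This is only a hedged sketch: the connect string, log file name, and use of nohup are assumptions to adapt and verify for your environment.

$ nohup dgmgrl -silent sys/<sys password>@<primary tns alias> "start observer" > observer.log 2>&1 &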

Oracle Database CLI Reference


The database CLI (dbcli) is a command line interface available on bare metal and virtual machine DB systems. After
you connect to the DB system, you can use the database CLI to perform tasks such as creating Oracle database homes
and databases.
Note:

The database CLI is not for use on Exadata DB systems.


Operational Notes
• The database CLI commands must be run as the root user.
• dbcli is in the /opt/oracle/dcs/bin/ directory.
This directory is included in the path for the root user's environment.
• Oracle Database maintains logs of the dbcli command output in the dcscli.log and dcs-agent.log files
in the /opt/oracle/dcs/log/ directory.
• The database CLI commands and most parameters are case sensitive and should be typed as shown. A few
parameters are not case sensitive, as indicated in the parameter descriptions, and can be typed in uppercase or
lowercase.
Caution:

Oracle recommends that you avoid specifying parameter values that include
confidential information when you use the database CLI commands.
Syntax
The database CLI commands use the following syntax:

dbcli command [parameters]

where:
• command is a verb-object combination such as create-database.
• parameters include additional options for the command. Most parameter names are preceded with two dashes,
for example, --help. Abbreviated parameter names are preceded with one dash, for example, -h.
• User-specified parameter values are shown within angle brackets, for example, <db_home_id>. Omit
the angle brackets when specifying these values.
• The help parameter is available with every command.
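For example, the following call (shown only as an illustration) displays the help for a single command:

[root@dbsys ~]# dbcli create-database -h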
The remainder of this topic contains syntax and other details about the commands.
CLI Update Command
Occasionally, new commands are added to the database CLI and other commands are updated to support new
features. You can use the following command to update the database CLI:

cliadm update-dbcli
Use the cliadm update-dbcli command to update the database CLI with the latest new and updated
commands.

Note:

On RAC DB systems, execute the cliadm update-dbcli command on
each node in the cluster.

Syntax

cliadm update-dbcli [-h] [-j]

Parameters

Parameter Full Name Description


-h --help (Optional) Displays help for using
the command.
-j --json (Optional) Displays JSON output.

Example
The following command updates the dbcli:

[root@dbsys ~]# cliadm update-dbcli


{
"jobId" : "dc9ce73d-ed71-4473-99cd-9663b9d79bfd",
"status" : "Created",
"message" : "Dcs cli will be updated",
"reports" : [ ],
"createTimestamp" : "January 18, 2017 10:19:34 AM PST",
"resourceList" : [ ],
"description" : "dbcli patching",
"updatedTime" : "January 18, 2017 10:19:34 AM PST"
}

Agent Commands
The following commands are available to manage agents:
• dbcli ping-agent
• dbcli list-agentConfigParameters
• dbcli update-agentConfigParameters

dbcli ping-agent
Use the dbcli ping-agent command to test the reachability of an agent.

Syntax

dbcli ping-agent [-h] [-j]

Parameters

Parameter Full Name Description


-h --help (Optional) Displays help for using
the command.
-j --json (Optional) Displays JSON output.

dbcli list-agentConfigParameters
Use the dbcli list-agentConfigParameters command to list agent configuration parameters.

Syntax

dbcli list-agentConfigParameters [-n] [-h] [-j]

Parameters

Parameter Full Name Description


-h --help (Optional) Displays help for using
the command.
-j --json (Optional) Displays JSON output.
-n --name (Optional) Parameter name.

dbcli update-agentConfigParameters
Use the dbcli update-agentConfigParameters command to update agent configuration parameters.

Syntax

dbcli update-agentConfigParameters -n <parameter> [-v <value>] [-a] [-c] [-d] [-u] [-r] [-h] [-j]

Parameters

Parameter Full Name Description


-a --append (Optional) Appends the specified
values to the specified parameters.
Example with multiple parameter
names and values: -n p1 -v v1
-n p2 -v v2 -a
-c --comment (Optional) Adds a comment for the
parameter. Default: [ ]
-d --description (Optional) Adds a description for the
parameter. Default: [ ]
-h --help (Optional) Displays help for using
the command.
-j --json (Optional) Displays JSON output.
-n --name Parameter name. Example with
multiple parameter names and
values: -n p1 -v v1 -n p2 -v
v2 Default: [ ]
-r --reset (Optional) Resets the parameter to
the default value. Example resetting
multiple parameters: -n p1 -n
p2 -r Default: false

-u --update (Optional) Replaces the specified
parameter values as directed.
Example with multiple parameter
names and values: -n p1 -v v1
-n p2 -v v2 -u Default: false
-v --value (Optional) Parameter value. Example
with multiple parameter names and
values: -n p1 -v v1 -n p2 -v
v2 Default: [ ]
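For example, following the pattern shown in the table, the call below appends values to two parameters in a single invocation (p1, p2, v1, and v2 are placeholders, not real agent parameter names):

[root@dbsys ~]# dbcli update-agentConfigParameters -n p1 -v v1 -n p2 -v v2 -a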

Autologcleanpolicy Commands
The following commands are available to manage policies for automatic cleaning (purging) of logs.
• dbcli create-autoLogCleanPolicy
• dbcli list-autoLogCleanPolicy

dbcli create-autoLogCleanPolicy
Use the dbcli create-autoLogCleanPolicy command to create policies for automatic cleaning (purging)
of logs.

Syntax

dbcli create-autoLogCleanPolicy [-c {gi|database|dcs}] [-f <number>] [-o <number>] [-u {Day|Hour|Minute}] [-uMB <number>] [-uPer <number>] [-h] [-j]

Parameters

Parameter Full Name Description


-c --components (Optional) Components to purge.
Possible values are gi, database, and
dcs. Separate multiple values with
commas. Example: gi,dcs
-f --freeSpaceBelowPercentage (Optional) Purges logs when the free
disk space is below the specified
percentage of the total partition size.
Valid range: 20-50. Default: 20.
-h --help (Optional) Displays help for using
the command.
-j --json (Optional) Displays JSON output.
-o --olderthan (Optional) Quantity portion of time
interval. Default: 30. Cleans logs
older than the specified time interval
(-o and -u).

-u --olderThanUnit (Optional) Unit portion of time
interval. Possible values: Day, Hour,
or Minute. Default: Day. Cleans logs
older than the specified time interval
(-o and -u).
-uMB --usageOverMB (Optional) Purges logs when log
usage exceeds the specified number
of MegaBytes (MB). Valid range: 10
to 50% of total partition size.
-uPer --usageOverPercentage (Optional) Purges logs when
log usage exceeds the specified
percentage of the total partition size.
Valid range: 10-50.
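
Example
The following invocation is illustrative; the values are arbitrary. It creates a policy that purges gi and dcs logs older than 14 days:

[root@dbsys ~]# dbcli create-autoLogCleanPolicy -c gi,dcs -o 14 -u Day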

dbcli list-autoLogCleanPolicy
Use the dbcli list-autoLogCleanPolicy command to list policies for automatic cleaning of logs.

Syntax

dbcli list-autoLogCleanPolicy [-c {gi|database|dcs}] [-h] [-j]

Parameters

Parameter Full Name Description


-c --components (Optional) Components. Possible
values are gi, database, and dcs.
Separate multiple values with
commas. Example: gi,dcs
-h --help (Optional) Displays help for using
the command.
-j --json (Optional) Displays JSON output.
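
Example
The following invocation lists the automatic log clean policies defined for the database component:

[root@dbsys ~]# dbcli list-autoLogCleanPolicy -c database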

Backup Commands
The following commands are available to back up databases:
• dbcli create-backup
• dbcli getstatus-backup
• dbcli schedule-backup
Note:

Instead of using dbcli, you can use the Console or the API to manage
backing up your bare metal or virtual machine DB system databases to
Object Storage. However, if you switch from using dbcli to using managed
backups, a new backup configuration is created and associated with your
database, and backups you created by using dbcli will not be accessible
from the managed backup interfaces. For information about managed
backups, see Backing Up a Database to Oracle Cloud Infrastructure Object
Storage on page 1436.


Before you can back up a database by using the dbcli create-backup command, you'll need to:
1. Create a backup configuration by using the dbcli create-backupconfig command.
2. Associate the backup configuration with the database by using the dbcli update-database command.
After a database is associated with a backup configuration, you can use the dbcli create-backup command
in a cron job to run backups automatically. You can use a cron utility such as CronMaker to help build expressions.
For more information, see http://www.cronmaker.com.
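For example, a crontab entry along the following lines (the schedule, database name, and path to dbcli are illustrative assumptions) runs a level-0 backup every Sunday at 2:00 AM:

0 2 * * 0 /opt/oracle/dcs/bin/dbcli create-backup -in mydb -bt Regular-L0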

dbcli create-backup
Use the dbcli create-backup command to create a backup of a database.

Syntax

dbcli create-backup -in <db_name> -i <db_id> [-bt {Regular-L0|Regular-L1|Longterm|ArchiveLog}] [-c {Database|TdeWallet}] [-k <n>] [-t <tag>] [-h] [-j]

Parameters

Parameter Full Name Description


-bt --backupType (Optional) Backup type. Possible
values are Regular-L0, Regular-L1,
Longterm, and ArchiveLog. Regular-
L0 and Regular-L1 correspond to
incremental L0 and L1 backups.
Longterm corresponds to Full
backup. ArchiveLog corresponds
to archived redo logs backup. The
default value is Regular-L1. Values
are not case-sensitive. If omitted, the
default value is used.

-c --component (Optional) Component. Possible
values are Database and TdeWallet.
The default value is Database.
The value TdeWallet backs up
TDE wallets. Values are not case-
sensitive. If omitted, the default
value is used.
Note:

TDE wallets are


automatically backed
up in the following
situations:
• A database is
created with
an Object
Storage backup
configuration.
• A database that
has an Object
Storage backup
configuration is
updated.
• An Object
Storage backup
configuration is
updated.
• A backup of the
type Longterm is
created.
• The TDE key for a
database is rotated.
• A database is
backed up and
no TDE wallet
backups exist yet.

-h --help (Optional) Displays help for using


the command.
-i --dbid The ID of the database to back
up. Use the dbcli list-
databases command to get the
database's ID.
-in --dbName The name of the database to back
up. Use the dbcli list-
databases command to get the
database's name.
-j --json (Optional) Displays JSON output.

-k --keepDays (Optional) Specifies the time until
which the backup or copy must be
kept. After this time the backup is
obsolete, regardless of the backup
retention policy settings. For
Longterm backup type only.

-t --tag (Required for Longterm backup


type) Specifies a user-specified tag
name for a backup set and applies
this tag to the output files generated
by the command. This value is not
case sensitive. Valid number of
characters: 1 to 30. The characters
are limited to the characters that are
valid in file names on the target file
system. For example, ASM does
not support the use of the hyphen
(-) character in the file names it uses
internally, so weekly-incremental
is not a valid tag name for backups
in ASM disk groups. Environment
variables are not valid in the TAG
parameter.

Examples
The following command creates a backup of the specified database using the database ID.

[root@dbsys ~]# dbcli create-backup -i 573cadb2-0cc2-4c1c-9c31-595ab8963d5b

The following command creates a backup of the specified database using the database name ("mydb").

[root@dbsys ~]# dbcli create-backup -in mydb

dbcli getstatus-backup
Use the dbcli getstatus-backup command to display the status of a backup.

Syntax

dbcli getstatus-backup -t <backup_type> [-i <id>] [-in <name>] [-l] [-h] [-j]

Parameters

Parameter Full Name Description


-h --help (Optional) Displays help for using
the command.
-i --dbId (Optional) Database Resource ID.
-in --dbName (Optional) Database Resource Name.

-j --json (Optional) Displays JSON output.
-l --isLatestBackupReport (Optional) Latest backup report.
Default: true.
-t --backupType Backup type.
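
Example
The following invocation is illustrative (the database name is an assumption); it reports the status of the most recent Regular-L1 backup of the specified database:

[root@dbsys ~]# dbcli getstatus-backup -t Regular-L1 -in mydb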

dbcli schedule-backup
Use the dbcli schedule-backup command to schedule a backup of a database.

Syntax

dbcli schedule-backup -t <backup_type> -f <number> [-i <id>] [-in <name>] [-h] [-j]

Parameters

Parameter Full Name Description


-f --frequency Frequency in minutes.
-h --help (Optional) Displays help for using
the command.
-i --dbId (Optional) Database Resource ID.
-in --dbName (Optional) Database Resource Name.
-j --json (Optional) Displays JSON output.
-t --backupType Backup type.
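
Example
The following invocation is a sketch (the database name and frequency are assumptions); it schedules a Regular-L1 backup of the specified database every 1440 minutes (once a day):

[root@dbsys ~]# dbcli schedule-backup -t Regular-L1 -f 1440 -in mydb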

Backupconfig Commands
A backup configuration determines the backup destination and recovery window for database backups. You create the
backup configuration and then associate it with a database by using the dbcli update-database command.
Caution:

Backups that were configured using the Console may become unusable if
you make changes using these commands. For backups configured using the
Console, use these commands with support guidance only.
Note:

Instead of using dbcli, you can use the Console or the API to manage
backing up your bare metal or virtual machine DB system databases to Object
Storage. For information about managed backups, see Backing Up a Database
to Oracle Cloud Infrastructure Object Storage on page 1436.
After a database is associated with a backup configuration, you can use the dbcli create-backup command
in a cron job to run backups automatically. You can use a cron utility such as CronMaker to help build expressions.
For more information, see http://www.cronmaker.com.
The following commands are available to manage backup configurations:
• dbcli create-backupconfig
• dbcli list-backupconfigs
• dbcli describe-backupconfig


• dbcli update-backupconfig
• dbcli delete-backupconfig

dbcli create-backupconfig
Use the dbcli create-backupconfig command to create a backup configuration that defines the backup
destination and recovery windows.

Syntax

dbcli create-backupconfig -d {DISK|OBJECTSTORE|NONE} -c <bucket> -o <object_store_swift_id> -on <object_store_swift_name> -w <n> -n <name> [-cr|-no-cr] [-h] [-j]

Parameters

Parameter Full Name Description


-c --container The name of an existing bucket
in the Oracle Cloud Infrastructure
Object Storage service. You can use
the Console or the Object Storage
API to create the bucket. For more
information, see Managing Buckets
on page 3426.
You must also specify --
backupdestination
objectstore and the --
objectstoreswiftId
parameter.

-cr | -no-cr --crosscheck | --no-crosscheck (Optional) Indicates whether to enable the crosscheck operation. This operation determines if the files on the disk or in the media management catalog correspond to data in the RMAN repository. If omitted, the default setting is used (crosscheck is enabled by default).

-d --backupdestination The backup destination as one of the


following (these values are not case
sensitive):
DISK - The local Fast Recovery
Area.
OBJECTSTORE - The Oracle
Cloud Infrastructure Object
Storage service. You must also
specify the --container and
--objectstoreswiftId
parameters.
NONE - Disables the backup.

-h --help (Optional) Displays help for using
the command.
-j --json (Optional) Displays JSON output.
-n --name The name of the backup
configuration.
-o --objectstoreswiftId The ID of the object store
that contains the endpoint and
credentials for the Oracle Cloud
Infrastructure Object Storage
service. Use the dbcli list-
objectstoreswifts
command to get the object store
ID. Use the dbcli create-
objectstoreswift command to
create an object store.
You must also specify --
backupdestination
objectstore and the --
container parameter.

-on --objectstoreswiftName The name of the object store that


contains the endpoint and credentials
for the Oracle Cloud Infrastructure
Object Storage service.
Use the dbcli list-
objectstoreswifts
command to get the object store
ID. Use the dbcli create-
objectstoreswift command to
create an object store.
You must also specify --
backupdestination
objectstore and the --
container parameter.

-w --recoverywindow The number of days for which


backups and archived redo logs are
maintained. The interval always ends
with the current time and extends
back in time for the number of days
specified.
For a DISK backup destination,
specify 1 to 14 days.
For an OBJECTSTORE backup
destination, specify 1 to 30 days.


Example
The following command creates a backup configuration named dbbkcfg1:

[root@dbsys ~]# dbcli create-backupconfig -d Disk -w 7 -n dbbkcfg1


{
"jobId" : "4e0e6011-db53-4142-82ef-eb561658a0a9",
"status" : "Success",
"message" : null,
"reports" : [ {
"taskId" : "TaskParallel_919",
"taskName" : "persisting backup config metadata",
"taskResult" : "Success",
"startTime" : "November 18, 2016 20:21:25 PM UTC",
"endTime" : "November 18, 2016 20:21:25 PM UTC",
"status" : "Success",
"taskDescription" : null,
"parentTaskId" : "TaskSequential_915",
"jobId" : "4e0e6011-db53-4142-82ef-eb561658a0a9",
"tags" : [ ],
"reportLevel" : "Info",
"updatedTime" : "November 18, 2016 20:21:25 PM UTC"
} ],
"createTimestamp" : "November 18, 2016 20:21:25 PM UTC",
"description" : "create backup config:dbbkcfg1",
"updatedTime" : "November 18, 2016 20:21:25 PM UTC"
}

dbcli list-backupconfigs
Use the dbcli list-backupconfigs command to list all the backup configurations in the DB system.

Syntax

dbcli list-backupconfigs [-h] [-j]

Parameters

Parameter Full Name Description


-h --help (Optional) Displays help for using
the command.
-j --json (Optional) Displays JSON output.

Example
The following command lists a backup configuration:

[root@dbsys ~]# dbcli list-backupconfigs

ID Name RecoveryWindow
BackupDestination CreateTime
---------------------------------------- --------------------
------------------ ----------------- -----------------------------
ccdd56fe-a40b-4e82-b38d-5f76c265282d dbbkcfg1 7
Disk July 10, 2016 12:24:08 PM UTC


dbcli describe-backupconfig
Use the dbcli describe-backupconfig command to show details about a specific backup configuration.

Syntax

dbcli describe-backupconfig -i <id> -in <name> [-h] [-j]

Parameters

Parameter Full Name Description


-h --help (Optional) Displays help for using
the command.
-i --backupconfigid The backup configuration
ID. Use the dbcli list-
backupconfigs command to get
the ID.
-in --backupconfigname The backup configuration
name. Use the dbcli list-
backupconfigs command to get
the name.
-j --json (Optional) Displays JSON output.

Example
The following command displays details about a backup configuration:

[root@dbsys ~]# dbcli describe-backupconfig -i ccdd56fe-a40b-4e82-b38d-5f76c265282d

Backup Config details


----------------------------------------------------------------
ID: ccdd56fe-a40b-4e82-b38d-5f76c265282d
Name: dbbkcfg1
RecoveryWindow: 7
BackupDestination: Disk
CreatedTime: July 10, 2016 12:24:08 PM UTC
UpdatedTime: July 10, 2016 12:24:08 PM UTC

dbcli update-backupconfig
Use the dbcli update-backupconfig command to update an existing backup configuration.

Syntax

dbcli update-backupconfig -i <id> -in <name> -w <n> -d {DISK|OBJECTSTORE|NONE} -c <bucket> -o <object_store_swift_id> -on <object_store_swift_name> [-cr|-no-cr] [-h] [-j]


Parameters

Parameter Full Name Description


-c --container The name of an existing bucket
in the Oracle Cloud Infrastructure
Object Storage service. You can use
the Console or the Object Storage
API to create the bucket. For more
information, see Managing Buckets
on page 3426.
You must also specify --
backupdestination
objectstore and the --
objectstoreswiftId
parameter.

-cr | -no-cr --crosscheck | --no-crosscheck (Optional) Indicates whether to enable the crosscheck operation. This operation determines if the files on the disk or in the media management catalog correspond to data in the RMAN repository. If omitted, the default setting is used (crosscheck is enabled by default).

-h --help (Optional) Displays help for using


the command.
-i --backupconfigid The ID of the backup configuration
to update. Use the dbcli list-
backupconfigs command to get
the ID.
-in --backupconfigname The name of the backup
configuration to update. Use the
dbcli list-backupconfigs
command to get the name.
-j --json (Optional) Displays JSON output.
-o --objectstoreswiftId The ID of the object store
that contains the endpoint and
credentials for the Oracle Cloud
Infrastructure Object Storage
service. Use the dbcli list-
objectstoreswifts
command to get the object store
ID. Use the dbcli create-
objectstoreswift command to
create an object store.
You must also specify --
backupdestination
objectstore and the --
container parameter.

-on --objectstoreswiftname The name of the object store
that contains the endpoint and
credentials for the Oracle Cloud
Infrastructure Object Storage
service. Use the dbcli list-
objectstoreswifts
command to get the object store
ID. Use the dbcli create-
objectstoreswift command to
create an object store.
You must also specify --
backupdestination
objectstore and the --
container parameter.

-w --recoverywindow The new disk recovery window.


For a DISK backup destination,
specify 1 to 14 days.
For an OBJECTSTORE backup
destination, specify 1 to 30 days.

Example
The following command updates the recovery window for a backup configuration:

[root@dbsys ~]# dbcli update-backupconfig -i ccdd56fe-a40b-4e82-b38d-5f76c265282d -w 5
{
"jobId" : "0e849291-e1e1-4c7a-8dd2-62b522b9b807",
"status" : "Created",
"message" : null,
"reports" : [ ],
"createTimestamp" : 1468153731699,
"description" : "update backup config: dbbkcfg1",
"updatedTime" : 1468153731700
}

dbcli delete-backupconfig
Use the dbcli delete-backupconfig command to delete a backup configuration.

Syntax

dbcli delete-backupconfig -i <id> -in <name> [-h] [-j]

Parameters

Parameter Full Name Description


-h --help (Optional) Displays help for using
the command.

-i --id The backup configuration ID to
delete. Use the dbcli list-
backupconfigs command to get
the ID.
-in --backupconfigname The name of the backup
configuration to delete. Use the
dbcli list-backupconfigs
command to get the name.
-j --json (Optional) Displays JSON output.

Example
The following command deletes the specified backup configuration:

[root@dbsys ~]# dbcli delete-backupconfig -i ccdd56fe-a40b-4e82-b38d-5f76c265282d

Bmccredential Commands
The following commands are available to manage credentials configurations, which are required for downloading DB
system patches from the Oracle Cloud Infrastructure Object Storage service. For more information, see Patching a DB
System on page 1408.
• dbcli create-bmccredential
• dbcli list-bmccredentials
• dbcli describe-bmccredential
• dbcli delete-bmccredential
• dbcli update-bmccredential
Note:

The bmccredential commands are not available on 2-node RAC DB systems.

dbcli create-bmccredential
Use the dbcli create-bmccredential command to create a credentials configuration.

Prerequisites
Before you can create a credentials configuration, you'll need these items:
• An RSA key pair in PEM format (minimum 2048 bits). See How to Generate an API Signing Key on page
4216.
• The fingerprint of the public key. See How to Get the Key's Fingerprint on page 4219.
• Your tenancy's OCID and user name's OCID. See Where to Get the Tenancy's OCID and User's OCID on page
4220.
Then you'll need to upload the public key in the Console. See How to Upload the Public Key on page 4220.

Syntax

dbcli create-bmccredential -c [backup|patching|other] -t <tenant_ocid> -u <user_ocid> -f <fingerprint> -k <private_key_path> -p|-hp <passphrase> [-n <credentials_name>] [-e <object_store_url>] [-h] [-j]


Parameters

Parameter Full Name Description


-c --credentialsType The type of Object Storage
credentials configuration to create
(these values are not case sensitive):
BACKUP - Reserved for future use.
PATCHING - For downloading
patches from the service.
OTHER - Reserved for future use.

-e --objectStoreUrl (Optional) The Object Storage


endpoint URL.
Omit this parameter when --
credentialsType PATCHING
is specified. The following URL is
assumed:
https://
objectstorage.<region_name>.oraclecloud.com
See Regions and Availability
Domains for region name strings.

-f --fingerPrint The public key fingerprint. You can


find the fingerprint in the Console
by clicking your user name in the
upper right corner and then clicking
User Settings. The fingerprint looks
something like this:

-f 61:9e:52:26:4b:dd:46:dc:8c:a8:05:

-k --privateKey The path to the private key file in


PEM format, for example:

-k /root/.ssh/privkey

-h --help (Optional) Displays help for using


the command.
-j --json (Optional) Displays JSON output.
-n --name (Optional) The name for the new
credentials configuration. The
name is useful for tracking the
configuration.

-p | -hp --passPhrase The passphrase for the public/private key pair, if you specified one when creating the key pair. Specify -p (with no passphrase) to be prompted. Specify -hp <passphrase> to provide the passphrase in the command.

-t --tenantOcid Your tenancy OCID. See Where


to Find Your Tenancy's OCID on
page 200. The tenancy OCID looks
something like this:

ocid1.tenancy.oc1..<unique_ID>

-u --userOcid The user name OCID for your


Oracle Cloud Infrastructure user
account. You can find the OCID in
the Console:

Open the Profile menu ( ) and


click User Settings.
The user name OCID looks
something like this:

ocid1.user.oc1..<unique_ID>

Example
The following command creates a credentials configuration:

[root@dbsys ~]# dbcli create-bmccredential -c patching -hp mypass -t
ocid1.tenancy.oc1..aaaaaaaaba3pv6wkcr4jqae5f44n2b2m2yt2j6rx32uzr4h25vqstifsfdsq
-u ocid1.user.oc1..aaaaaaaalhdxviuxqi7xevqsksccl6edokgldvuf6raskcioq4x2z7watsfa
-f 60:9e:56:26:4b:dd:46:dc:8c:a8:05:6d:9f:0a:30:d2 -k /root/.ssh/privkey

{
"jobId" : "f8c80510-b717-4ee2-a47e-cd380480b28b",
"status" : "Created",
"message" : null,
"reports" : [ ],
"createTimestamp" : "December 26, 2016 22:46:38 PM PST",
"resourceList" : [ ],
"description" : "BMC Credentials Creation",
"updatedTime" : "December 26, 2016 22:46:38 PM PST"
}

dbcli list-bmccredentials
Use the dbcli list-bmccredentials command to list the credentials configurations on the DB system.


Syntax

dbcli list-bmccredentials [-h] [-j]

Parameters

Parameter Full Name Description


-h --help (Optional) Displays help for using
the command.
-j --json (Optional) Displays JSON output.

Example
The following command lists the credentials configurations on the DB system:

[root@dbsys ~]# dbcli list-bmccredentials


ID Name Type
End Point Status
---------------------------------------- ------------- ----------
----------------------------------------------------------- ----------
f19d7c8b-d0d5-4jhf-852b-eb2a81cb7ce5 patch1 Patching
https://objectstorage.us-phoenix-1.oraclecloud.com Configured
f1a8741c-b0c4-4jhf-239b-ab2a81jhfde4 patch2 Patching
https://objectstorage.us-phoenix-1.oraclecloud.com Configured

dbcli describe-bmccredential
Use the dbcli describe-bmccredential command to display details about a credentials configuration.

Syntax

dbcli describe-bmccredential -i <credentials_id> [-h] [-j]

Parameters

Parameter Full Name Description


-h --help (Optional) Displays help for using
the command.
-i --id The ID for the credentials
configuration. Use the dbcli list-
bmccredentials on page 1501
command to get the ID.

-j --json (Optional) Displays JSON output.

Example
The following command displays details about the specified credentials configuration:

[root@dbsys ~]# dbcli describe-bmccredential -i 09f9988e-eed5-4dde-8814-890828d1c763

BMC Credentials details


----------------------------------------------------------------


ID: 09f9988e-eed5-4dde-8814-890678d1c763
Name: patch23
Tenant OCID:
ocid1.tenancy.oc1..aaaaaaaaba3pv6wkcr4jqae5f44n2b2m2yt2j6rx32uzr4h25vqstifsfdsq
User OCID:
ocid1.user.oc1..aaaaaaaalhjhfiuxqi7xevqsksccl6edokgldvuf6raskcioq4x2z7watjhf
Credentials Type: Patching
objectStore URL: https://objectstorage.us-phoenix-1.oraclecloud.com
Status: Configured
Created: January 9, 2017 1:19:11 AM PST
UpdatedTime: January 9, 2017 1:41:46 AM PST

dbcli delete-bmccredential
Use the dbcli delete-bmccredential command to delete a credentials configuration.

Syntax

dbcli delete-bmccredential -i <credentials_id> [-h] [-j]

Parameters

Parameter Full Name Description


-h --help (Optional) Displays help for using
the command.
-i --id The ID for the credentials
configuration. Use the dbcli list-
bmccredentials on page 1501
command to get the ID.

-j --json (Optional) Displays JSON output.

Example
The following command deletes the specified credentials configuration:

[root@dbsys ~]# dbcli delete-bmccredential -i f19d7c8b-d0d5-4jhf-852b-eb2a81cb7ce5

dbcli update-bmccredential
Use the dbcli update-bmccredential command to update a credentials configuration.

Syntax

dbcli update-bmccredential -i <credentials_id> -n <credentials_name> -c [backup|patching|other] -p|-hp <passphrase> -f <fingerprint> -k <private_key_path> [-h] [-j]


Parameters

Parameter Full Name Description


-c --credentialsType The type of Object Storage
credentials configuration (these
values are not case sensitive):
BACKUP - Reserved for future use.
PATCHING - For downloading
patches from the service.
OTHER - Reserved for future use.

-i --id The ID for the credentials


configuration. Use the dbcli list-
bmccredentials on page 1501
command to get the ID.

-f --fingerPrint The public key fingerprint, for


example:

-f 61:9e:52:26:4b:dd:46:dc:8c:a8:05:

-k --privateKey The path to the private key file in


PEM format, for example:

-k /root/.ssh/privkey

-h --help (Optional) Displays help for using


the command.
-j --json (Optional) Displays JSON output.
-n --name (Optional) The name for the
credentials configuration. Use the
dbcli list-bmccredentials on page
1501 command to get the name.

-p | -hp --passPhrase The passphrase for the public/private key pair, if you specified one when creating the key pair. Specify -p (with no passphrase) to be prompted. Specify -hp <passphrase> to provide the passphrase in the command.


Example
The following command updates a credentials configuration:

[root@dbsys ~]# dbcli update-bmccredential -c OTHER -i 6f921b29-61b6-56f4-889a-ce9270621956
{
"jobId" : "6e95a69e-cf73-4e51-a444-c7e4b9631c27",
"status" : "Created",
"message" : null,
"reports" : [ ],
"createTimestamp" : "January 19, 2017 12:01:10 PM PST",
"resourceList" : [ ],
"description" : "Update BMC Credentials of object 6f921b29-61b6-48f4-889a-
ce9270621945",
"updatedTime" : "January 19, 2017 12:01:10 PM PST"

Component Command

dbcli describe-component
Tip:

Your DB system might not include this newer command. If you have trouble
running the command, use the CLI Update Command on page 1484
command to update the database CLI and then retry the command.
Note:

The dbcli describe-component command is not available on 2-node RAC DB systems. Patching 2-node systems from Object Storage is not supported.
Use the dbcli describe-component command to show the installed and available patch versions for the
server, storage, and/or database home components in the DB system.
This command requires a valid Object Storage credentials configuration. Use the Bmccredential Commands on page
1499 command to create the configuration if you haven't already done so. If the configuration is missing or invalid,
the command fails with the error: Failed to connect to the object store. Please provide
valid details.
For more information about updating the CLI, creating the credentials configuration, and applying patches, see
Patching a DB System on page 1408.

Syntax

dbcli describe-component [-s <server_group>] [-d <db_group>] [-h] [-j]

Parameters

Parameter Full Name Description


-d --dbhomes (Optional) Lists the installed and
available patch versions for only the
database home components.
-h --help (Optional) Displays help for using
the command.
-j --json (Optional) Displays JSON output.

-s --server (Optional) Lists the installed and
available patch versions for only the
server components.

Example
The following command shows the current component versions and the available patch versions in the object store:

[root@dbsys ~]# dbcli describe-component


System Version
---------------
12.1.2.10.0

Component Installed Version Available


Version
---------------------------------------- --------------------
--------------------
OAK 12.1.2.10.0 up-to-date
GI 12.1.0.2.161018 up-to-date
ORADB12102_HOME1 12.1.0.2.161018 up-to-date
ORADB12102_HOME2, ORADB12102_HOME3 12.1.0.2.160719
12.1.0.2.161018

Database Commands
The following commands are available to manage databases:
• dbcli clone-database
• dbcli create-database
• dbcli delete-database
• dbcli describe-database
• dbcli list-databases
• dbcli modify-database
• dbcli recover-database
• dbcli register-database
• dbcli update-database

dbcli clone-database
Use the dbcli clone-database command to clone a database.

Syntax

dbcli clone-database -f <name> -u <name> -n <name> [-s <shape>] [-t <type>] [-m <sys_password>] [-p <tde_password>] [-h] [-j]

Parameters

Parameter Full Name Description


-f --sourcedbname Source database name.
-h --help (Optional) Displays help for using
the command.
-j --json (Optional) Displays JSON output.

-m --syspassword (Optional) Password for SYS.
-n --dbname Database name.
-p --tdepassword (Optional) Password for source
TDE wallet.
-s --dbshape (Optional) Database shape.
Examples: odb1, odb2.
-t --dbtype (Optional) Database Type: SI
-u --databaseUniqueName Database unique name.
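
Example
The following invocation is illustrative only; the database names, shape, and password placeholders are assumptions. It clones the source database srcdb into a new single-instance database named clonedb:

[root@dbsys ~]# dbcli clone-database -f srcdb -n clonedb -u clonedb_uniq -t SI -s odb1 -m <sys_password> -p <tde_password>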

dbcli create-database
Use the dbcli create-database command to create a new database. You can create a database with a new or
existing Oracle Database home, however each database home can have only one database.
It takes a few minutes to create the database. After you run the dbcli create-database command, you can use
the dbcli list-jobs command to check the status of the database creation job.
Tip:

Wait for the database creation job to complete before you attempt to
create another database. Running multiple dbcli create-database
commands at the same time can result in some of the creation jobs not
completing.
Once the database is created, you can use the dbcli list-databases -j command to see additional
information about the database.
Note:

The dbcli create-database command is available on bare metal DB systems only.
You must create and activate a master encryption key for any PDBs that
you create. After creating or plugging in a new PDB on a 1- or 2-node
RAC DB System, use the dbcli update-tdekey command to create
and activate a master encryption key for the PDB. Otherwise, you might
encounter the error ORA-28374: typed master key not found
in wallet when attempting to create tablespaces in the PDB. In a
multitenant environment, each PDB has its own master encryption key which
is stored in a single keystore used by all containers. For more information,
see "Overview of Managing a Multitenant Environment" in the Oracle
Database Administrator’s Guide.

Syntax

dbcli create-database -dh <db_home_id> -cl {OLTP|DSS|IMDB} -n <db_name> -u <unique_name> -bi <bkup_config_id> -bn <bkup_config_name> -m -s <db_shape> -r {ACFS|ASM} -y {SI|RAC|RACOne} [-dn <name>] -io -d <pdb_admin_user> [-p <pdb>] [-ns <nlcharset>] [-cs <charset>] [-l <language>] [-dt <territory>] -v <version> [-c|-no-c] [-co|-no-co] [-h] [-j]


Parameters

Parameter Full Name Description


-bi --backupconfigid Defines the backup configuration
identifier for future use. Use the
dbcli list-backupconfigs
command to get the ID.
-bn --backupconfigname Defines the backup configuration
name for future use. Use the dbcli
list-backupconfigs command
to get the name.

-c | -no-c --cdb | --no-cdb (Optional) Indicates whether to create a Container Database. If omitted, a Container Database is not created.
-cs --characterset (Optional) Defines the character
set for the database. The default is
AL32UTF8.
-cl --dbclass Defines the database class. The
options are OLTP, DSS, or IMDB.
The default is OLTP. For Enterprise
Editions, all three classes are
supported. For Standard Edition,
only OLTP is supported.

-co | -no-co --dbconsole | --no-dbconsole (Optional) Indicates whether the Database Console is enabled. If omitted, the console is not enabled.
This parameter is not available for a version 11.2.0.4 database on a 2-node RAC DB system. For more information, see To enable the console for a version 11.2.0.4 database on a multi-node DB system on page 1433.

-d --pdbadmin Defines the name of the Pluggable


Database (PDB) Admin User. The
default value is pdbadmin.
-dn --dbdomainname (Optional) Database domain name
(indicates the logical location of
the database within the network
structure).

-dt --dbterritory (Optional) Defines the territory


for the database. The default is
AMERICA.

-dh --dbhomeid Identifies the database home in
which to create the database. The
database home must be empty
because each database home can
have only one database. You can
use the dbcli list-dbhomes
command to get the DB home ID.
If this parameter is omitted, the
database is created with a new Oracle
home.

-h --help (Optional) Displays help for the


command.
-j --json (Optional) Displays JSON output.
-l --dblanguage (Optional) Defines the language
for the database. The default is
AMERICAN.

-m --adminpassword A strong password for SYS,


SYSTEM, TDE wallet, and PDB
Admin. The password must be 9 to
30 characters and contain at least
two uppercase, two lowercase, two
numeric, and two special characters.
The special characters must be _, #,
or -. The password must not contain
the username (SYS, SYSTEM, and
so on) or the word "oracle" either
in forward or reversed order and
regardless of casing.
Specify -m (with no password) to be
prompted for the password.

-n --dbname Defines the name given to the new


database. The database name must
begin with an alphabetic character
and can contain a maximum of eight
alphanumeric characters. Special
characters are not permitted.
-ns --nationalscharacterset (Optional) Defines the national
character set for the database. The
default is AL16UTF16.

-p --pdbname (Optional) Defines a unique name
for the PDB. The PDB name must
begin with an alphabetic character
and can contain a maximum of 30
alphanumeric characters. The only
special character permitted is the
underscore ( _). The default value is
pdb1.
PDB names must be unique within a
CDB and within the listener to which
they are registered. Make sure the
PDB name is unique on the system.
To ensure uniqueness, do not use the
default name value (pdb1).

-r --dbstorage Defines the database storage, either


ACFS or ASM. The default value is
ASM.
See Usage Notes on page 1511 for
more information.

-s --dbshape Identifies the database sizing


template to use for the database.
For example, odb1, odb2, or odb3.
The default is odb1. For more
information, see Database Sizing
Templates on page 1553.
-u --databaseUniqueName Defines a unique name for the
database to ensure uniqueness within
an Oracle Data Guard group (a
primary database and its standby
databases). The unique name can
contain only alphanumeric and
underscore (_) characters. The
unique name cannot be changed.
The unique name defaults to the
name specified in the --dbname
parameter.

-v --version Defines the database version as one


of the following:
• 18.1.0.0
• 12.2.0.1
• 12.1.0.2 (the default)
• 11.2.0.4

-y --dbtype Defines the database type. Specify
SI for a 1-node instance, RAC for
a 2-node cluster, or RACOne for 1-
node instance with a second node in
cold standby mode. The default value
is RAC. These values are not case
sensitive.

Usage Notes
• You cannot mix Oracle Database Standard Edition and Enterprise Edition databases on the same DB system. (You
can mix supported database versions on the DB system, but not editions.)
• When --dbhomeid is not provided, the dbcli create-database command will create a new Oracle
Database home.
Note:

Bare metal DB systems allow only one database per database home.
• When --dbhomeid is provided, the dbcli create-database command creates the database using the
Oracle home specified. Use the dbcli list-dbhomes command to get the dbhomeid. The database home
you specify must be empty.
• Oracle Database 12.1 or later databases are supported on both Oracle Automatic Storage Management (ASM) and
Oracle ASM Cluster file system (ACFS). The default is Oracle ACFS.
• Oracle Database 11.2 is supported on Oracle ACFS.
• Each database is configured with its own Oracle ACFS file system for the datafiles and uses the following naming
convention: /u02/app/<db_user>/oradata/<db_name>. The default size of this mount point is 100G.
• Online logs are stored in the /u03/app/<db_user>/redo/ directory.
• The Oracle Fast Recovery Area (FRA) is located in the /u03/app/<db_user>/fast_recovery_area directory.

Examples
To create a database and be prompted for the password interactively:

[root@dbsys ~]# dbcli create-database -n hrdb -c -m -cl OLTP -s odb2 -p pdb1

Password for SYS,SYSTEM and PDB Admin:


{
"jobId" : "f12485f2-dcbe-4ddf-aee1-de24d37037b6",
"status" : "Created",
"message" : null,
"reports" : [ ],
"createTimestamp" : "August 08, 2016 03:54:03 AM EDT",
"description" : "Database service creation with db name: hrdb",
"updatedTime" : "August 08, 2016 03:54:03 AM EDT"
}

To create a database non-interactively, providing the password on the command line:

[root@dbsys ~]# dbcli create-database -n crmdb -hm <password> -cl OLTP -s odb2
{
"jobId" : "30b5e2a6-493b-4461-98b8-78e9a15f8cdd",
"status" : "Created",
"message" : null,


"reports" : [ ],
"createTimestamp" : "August 08, 2016 03:59:22 AM EDT",
"description" : "Database service creation with db name: crmdb",
"updatedTime" : "August 08, 2016 03:59:22 AM EDT"
}

dbcli delete-database
Use the dbcli delete-database command to delete a database.
Note:

The dbcli delete-database command is available on bare metal DB systems only.

Syntax

dbcli delete-database -i <db_id> -in <db_name> [-fd] [-j] [-h]

Parameters

Parameter Full Name Description


-h --help (Optional) Displays help for using
the command.
-fd --force (Optional) Forces the delete
operation.

-i --dbid The ID of the database to delete. Use


the dbcli list-databases
command to get the database ID.
-in --dbName The name of the database to
delete. Use the dbcli list-
databases command to get the
database name.
-j --json (Optional) Displays JSON output.

Example
The following command deletes the database with the ID 625d9b8a-baea-4994-94e7-4c4a857a17f9:

[root@dbsys ~]# dbcli delete-database -i 625d9b8a-baea-4994-94e7-4c4a857a17f9

dbcli describe-database
Use the dbcli describe-database command to display database details.

Syntax

dbcli describe-database -i <db_id> -in <db_name> [-h] [-j]


Parameters

Parameter Full Name Description


-h --help (Optional) Displays help for using
the command.
-i --dbid The ID of the database to
display. Use the dbcli list-
databases command to get the
database ID.
-in --dbName The name of the database to
display. Use the dbcli list-
databases command to get the
database name.
-j --json (Optional) Displays JSON output.

Example
The following command displays details for the database with the specified ID:

[root@dbsys ~]# dbcli describe-database -i b727bf80-c99e-4846-ac1f-28a81a725df6

dbcli list-databases
Use the dbcli list-databases command to list all databases on the DB system.

Syntax

dbcli list-databases [-h] [-j]

Parameters

Parameter Full Name Description


-h --help (Optional) Displays help for using
the command.
-j --json (Optional) Displays JSON output.

Example
The following command displays a list of databases:

[root@dbsys ~]# dbcli list-databases

ID DB Name DB Version CDB


Class Shape Storage Status


---------------------------------------- ---------- --------------------


---------- -------- -------- ---------- ----------
80ad855a-5145-4f8f-a08f-406c5e4684ff dbst 12.1.0.2
true OLTP odb2 ACFS Configured
6f4e36ae-120b-4436-b0bf-d0c4aef9f7c9 db11tsta 11.2.0.4
false OLTP odb1 ACFS Configured
d8e31790-84e6-479c-beb0-ef97207091a2 db11tstb 11.2.0.4
false OLTP odb1 ACFS Configured
cce096c7-737b-447a-baa1-f4c2a330c030 pdbtst 12.1.0.2
true OLTP odb1 ACFS Configured

The following command displays the JSON output for a database:

[root@dbsys ~]# dbcli list-databases -j


[ {
"id" : "80ad855a-5145-4f8f-a08f-406c5e4684ff",
"name" : "dbtst",
"dbName" : "dbtst",
"databaseUniqueName" : "dbtst_phx1cs",
"dbVersion" : "12.1.0.2",
"dbHomeId" : "2efe7af7-0b70-4e9b-ba8b-71f11c6fe287",
"instanceOnly" : false,
"registerOnly" : false,
"dbId" : "167525515",
"isCdb" : true,
"pdBName" : "pdb1",
"pdbAdminUserName" : "pdbuser",
"enableTDE" : true,
"dbType" : "SI",
"dbTargetNodeNumber" : "0",
"dbClass" : "OLTP",
"dbShape" : "odb2",
"dbStorage" : "ACFS",
"dbCharacterSet" : {
"characterSet" : "US7ASCII",
"nlsCharacterset" : "AL16UTF16",
"dbTerritory" : "AMERICA",
"dbLanguage" : "AMERICAN"
},
"dbConsoleEnable" : false,
"backupConfigId" : null,
"backupDestination" : "NONE",
"cloudStorageContainer" : null,
"state" : {
"status" : "CONFIGURED"
},
"createTime" : "November 09, 2016 17:23:05 PM UTC",
"updatedTime" : "November 09, 2016 18:00:47 PM UTC"
}

dbcli modify-database
Use the dbcli modify-database command to modify a database.

Syntax

dbcli modify-database -i <db_id> -dh <destination_db_home_id> [-h] [-j]


Parameters

Parameter Full Name Description


-dh --destdbhomeid Destination database home ID.
-h --help (Optional) Displays help for using
the command.
-i --databaseid Database ID.
-j --json (Optional) Displays JSON output.
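
Example
The following invocation is illustrative; the IDs are placeholders. It moves the specified database to a different database home:

[root@dbsys ~]# dbcli modify-database -i <database_id> -dh <destination_db_home_id>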

dbcli recover-database
Use the dbcli recover-database command to recover a database.

Syntax

dbcli recover-database [-br <json>] [-in <db_name>] [-i <db_id>] [-r <time>]
[-t {Latest|PITR|SCN}] [-s] [-l <location>] [-tp <tde_password>] [-h] [-j]

Parameters

Parameter Full Name Description


-br --backupReport (Optional) JSON input for backup
report.
-h --help (Optional) Displays help for using
the command.
-i --dbid (Optional) Database resource ID.
-in --dbName (Optional) Database name.
-j --json (Optional) Displays JSON output.
-l --tdeWalletLocation (Optional) TDE wallet backup
location. TDE wallet should be
backed up in tar.gz format.
-r --recoveryTimeStamp (Required when recovery type
is PITR) Recovery timestamp in
the format mm/dd/yyyy hh:mi:ss.
Default: [ ]
-s --scn (Required when recovery type is
SCN) SCN.
-t --recoverytype (Required when backup report is
provided) Recovery type. Possible
values are Latest, PITR, and SCN.
-tp --tdeWalletPassword (Optional) TDE wallet password.
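
Example
The following invocation is a sketch (the database name is an assumption); it recovers the specified database to the latest available point:

[root@dbsys ~]# dbcli recover-database -in mydb -t Latest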

dbcli register-database
Use the dbcli register-database command to register a database that has been migrated to Oracle Cloud
Infrastructure. The command registers the database to the dcs-agent so it can be managed by the dcs-agent stack.


Note:

The dbcli register-database command is not available on 2-node RAC DB systems.

Syntax

dbcli register-database -bi <bkup_config_id> -c {OLTP|DSS|IMDB} [-co|-no-co] -s {odb1|odb2|...} -t SI [-o <db_host_name>] [-tp <password>] -sn <service_name> -p [-h] [-j]

Parameters

Parameter Full Name Description


-bi --backupconfigid Defines the backup configuration
ID. Use the dbcli list-
backupconfigs command to get
the ID.
-c --dbclass Defines the database class. The
options are OLTP, DSS, or IMDB.
The default is OLTP. For Enterprise
Editions, all three classes are
supported. For Standard Edition,
only OLTP is supported.

-co | -no-co --dbconsole | --no-dbconsole (Optional) Indicates whether the Database Console is enabled. If omitted, the console is not enabled.

-h --help (Optional) Displays help for using


the command.
-j --json (Optional) Displays JSON output.
-o --hostname (Optional) Defines the database host
name. The default is Local host
name.
-p --syspassword Defines a strong password for SYS.
Specify -p with no password. You
will be prompted for the password.
If you must provide the password
in the command, for example in
a script, use -hp <password>
instead of -p.

-s --dbshape Defines the database sizing template


to use for the database. For example,
odb1, odb2, and odb3. For more
information, see Database Sizing
Templates on page 1553.

-sn --servicename Defines the database service name
used to build the EZCONNECT
string for connecting to the
database. The connect string
format is hostname:port/
servicename.
-t --dbtype (Optional) Defines the Database
Type as single node (SI). The default
value is SI.
-tp --tdeWalletPassword (Optional) Password for TDE wallet.
Required if TDE is enabled on the
migrated database.

Example
The following command registers the database with the specified database class, service name, and database sizing
template.

[root@dbsys ~]# dbcli register-database -c OLTP -s odb1 -sn crmdb.example.com -p
Password for SYS:
{
"jobId" : "317b430f-ad5f-42ae-bb07-13f053d266e2",
"status" : "Created",
"message" : null,
"reports" : [ ],
"createTimestamp" : "August 08, 2016 05:55:49 AM EDT",
"description" : "Database service registration with db service name:
crmdb.example.com",
"updatedTime" : "August 08, 2016 05:55:49 AM EDT"
}

dbcli update-database
Use the dbcli update-database command to associate a backup configuration with a database.

Syntax

dbcli update-database -i <db_id> -bi <bkup_config_id> -bin <bkup_config_name> [-id <id>] -in <name> [-no-ab] [-h] [-j]

Parameters

Parameter Full Name Description


-bi --backupconfigid Defines the backup configuration
ID. Use the dbcli list-
backupconfigs command to get
the ID.
-bin --backupconfigname Defines the backup configuration
name for future use. Use the dbcli
list-backupconfigs command
to get the name.

-id --databaseid (Optional) Specifies the
DBID, which is a unique 32-bit
identification number computed
when the database is created.
RMAN displays the DBID upon
connection to the target database.
You can obtain the DBID by
querying the V$DATABASE
view or the RC_DATABASE and
RC_DATABASE_INCARNATION
recovery catalog views.

-in --dbName Defines the database name to be


updated. Use the dbcli list-
databases command to get the
database name.

-h --help (Optional) Displays help for using


the command.
-i --dbid Defines the database ID to be
updated. Use the dbcli list-
databases command to get the
database ID.
-j --json (Optional) Displays JSON output.
-no-ab --noautobackup (Optional) Disables automatic
backups for the specified database.
Note:

Once disabled,
automatic backup
cannot be re-enabled
using the CLI. To
re-enable automatic
backup, use the
Console.

Example
The following command associates a backup configuration file with a database:

[root@dbsys ~]# dbcli update-database -bi 78a2a5f0-72b1-448f-bd86-cf41b30b64ee -i 71ec8335-113a-46e3-b81f-235f4d1b6fde
{
"jobId" : "2b104028-a0a4-4855-b32a-b97a37f5f9c5",
"status" : "Created",
"message" : null,
"reports" : [ ],
"createTimestamp" : 1467775842977,
"description" : "update database id:71ec8335-113a-46e3-b81f-235f4d1b6fde",
"updatedTime" : 1467775842978
}


Dbhome Commands
The following commands are available to manage database homes:
• dbcli create-dbhome
• dbcli describe-dbhome
• dbcli delete-dbhome
• dbcli list-dbhomes
• dbcli update-dbhome

dbcli create-dbhome
Use the dbcli create-dbhome command to create an Oracle Database Home.

Syntax

dbcli create-dbhome -v <version> [-h] [-j]

Parameters

Parameter Full Name Description


-h --help (Optional) Displays help for using
the command.
-j --json (Optional) Displays JSON output.
-v --version Defines the Database Home version.
Specify one of the supported
versions:
• 18.1.0.0
• 12.2.0.1
• 12.1.0.2
• 11.2.0.4

Example
The following command creates an Oracle Database Home version 12.1.0.2:

[root@dbsys ~]# dbcli create-dbhome -v 12.1.0.2

dbcli describe-dbhome
Use the dbcli describe-dbhome command to display Oracle Database Home details.

Syntax

dbcli describe-dbhome -i <db_home_id> [-h] [-j]

Parameters

Parameter Full Name Description


-h --help (Optional) Displays help for using
the command.

-i --dbhomeid Identifies the database home ID.
Use the dbcli list-dbhomes
command to get the ID.
-j --json (Optional) Displays JSON output.

Example
The following command displays details about the specified Oracle Database home:

[root@dbsys ~]# dbcli describe-dbhome -i 52850389-228d-4397-bbe6-102fda65922b

DB Home details
----------------------------------------------------------------
ID: 52850389-228d-4397-bbe6-102fda65922b
Name: OraDB12102_home1
Version: 12.1.0.2
Home Location: /u01/app/oracle/product/12.1.0.2/dbhome_1
Created: June 29, 2016 4:36:31 AM UTC

dbcli delete-dbhome
Use the dbcli delete-dbhome command to delete a database home from the DB system.

Syntax

dbcli delete-dbhome -i <db_home_id> [-h] [-j]

Parameters

Parameter Full Name Description


-h --help (Optional) Displays help for using
the command.
-i --dbhomeid Identifies the database home ID to
be deleted. Use the dbcli list-
dbhomes command to get the ID.
-j --json (Optional) Displays JSON output.
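
Example
The following invocation is illustrative; the ID is a placeholder. It deletes the specified database home:

[root@dbsys ~]# dbcli delete-dbhome -i <db_home_id>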

dbcli list-dbhomes
Use the dbcli list-dbhomes command to display a list of Oracle Home directories.

Syntax

dbcli list-dbhomes [-h] [-j]

Parameters

Parameter Full Name Description


-h --help (Optional) Displays help for using
the command.

-j --json (Optional) Displays JSON output.

Example
The following command displays a list of Oracle Home directories.

[root@dbsys ~]# dbcli list-dbhomes


ID Name DB Version Home
Location
------------------------------------ ----------------- ----------
------------------------------------------
b727bf80-c99e-4846-ac1f-28a81a725df6 OraDB12102_home1 12.1.0.2 /u01/app/
orauser/product/12.1.0.2/dbhome_1

dbcli update-dbhome
Tip:

Your DB system might not include this newer command. If you have trouble
running the command, use the CLI Update Command on page 1484
command to update the database CLI and then retry the command.
Use the dbcli update-dbhome command to apply the DBBP bundle patch to a database home. For more
information about applying patches, see Patching a DB System on page 1408.

Syntax

dbcli update-dbhome -i <db_home_id> -n <node> [--local] [--precheck] [-h] [-j]

Parameters

Parameter Full Name Description


-h --help (Optional) Displays help for using
the command.
-i --dbhomeid The ID of the database home. Use
the dbcli list-dbhomes
command to get the ID.
-j --json (Optional) Displays JSON output.
-n --node (Optional) Node number to be
updated. Use the dbcli list-
nodes command to get the node
number.

--local (Optional) Performs the operation


on the local node of a multi-node
high availability (HA) system. This
parameter is not needed to perform
the operation on a single-node
system.

--precheck (Optional) Runs precheck operations
to check prerequisites.

Example
The following commands update the database home and show the output from the update job:

[root@dbsys ~]# dbcli update-dbhome -i e1877dac-a69a-40a1-b65a-d5e190e671e6


{
"jobId" : "493e703b-46ef-4a3f-909d-bbd123469bea",
"status" : "Created",
"message" : null,
"reports" : [ ],
"createTimestamp" : "January 19, 2017 10:03:21 AM PST",
"resourceList" : [ ],
"description" : "DB Home Patching: Home Id is e1877dac-a69a-40a1-b65a-
d5e190e671e6",
"updatedTime" : "January 19, 2017 10:03:21 AM PST"
}

# dbcli describe-job -i 493e703b-46ef-4a3f-909d-bbd123469bea

Job details
----------------------------------------------------------------
ID: 493e703b-46ef-4a3f-909d-bbd123469bea
Description: DB Home Patching: Home Id is e1877dac-a69a-40a1-
b65a-d5e190e671e6
Status: Running
Created: January 19, 2017 10:03:21 AM PST
Message:

Task Name Start Time


End Time Status
---------------------------------------- -----------------------------------
----------------------------------- ----------
Create Patching Repository Directories January 19, 2017 10:03:21 AM PST
January 19, 2017 10:03:21 AM PST Success
Download latest patch metadata January 19, 2017 10:03:21 AM PST
January 19, 2017 10:03:21 AM PST Success
Update System version January 19, 2017 10:03:21 AM PST
January 19, 2017 10:03:21 AM PST Success
Update Patching Repository January 19, 2017 10:03:21 AM PST
January 19, 2017 10:03:31 AM PST Success
Opatch updation January 19, 2017 10:03:31 AM PST
January 19, 2017 10:03:31 AM PST Success
Patch conflict check January 19, 2017 10:03:31 AM PST
January 19, 2017 10:03:31 AM PST Running

Dbstorage Commands
The following commands are available to manage database storage:
• dbcli list-dbstorages
• dbcli describe-dbstorage
• dbcli create-dbstorage
• dbcli delete-dbstorage


dbcli list-dbstorages
Use the dbcli list-dbstorages command to list the database storage in the DB system.

Syntax

dbcli list-dbstorages [-h] [-j]

Parameters

Parameter Full Name Description


-h --help (Optional) Displays help for using
the command.
-j --json (Optional) Displays JSON output.

Example
The following command displays details about database storage:

[root@dbsys ~]# dbcli list-dbstorages

ID Type DBUnique Name Status


---------------------------------------- ------ --------------------
----------
afb4a1ce-d54d-4993-a149-0f28c9fb33a4 Acfs db1_2e56b3a9b815
Configured
d81e8013-4551-4d10-880b-d1a796bca1bc Acfs db11xp
Configured

dbcli describe-dbstorage
Use the dbcli describe-dbstorage command to show detailed information about a specific database storage
resource.

Syntax

dbcli describe-dbstorage -i <db_storage_id> [-h] [-j]

Parameters

Parameter Full Name Description


-h --help (Optional) Displays help for using
the command.
-i --id Defines the database storage ID. Use
the dbcli list-dbstorages
command to get the database storage
ID.
-j --json (Optional) Displays JSON output.


Example
The following command displays the database storage details for 105a2db2-625a-45ba-8bdd-ee46da0fd83a:

[root@dbsys ~]# dbcli describe-dbstorage -i 105a2db2-625a-45ba-8bdd-ee46da0fd83a

DBStorage details
----------------------------------------------------------------

ID: 105a2db2-625a-45ba-8bdd-ee46da0fd83a
DB Name: db1
DBUnique Name: db1
DB Resource ID: 439e7bd7-f717-447a-8046-08b5f6493df0
Storage Type:
DATA Location: /u02/app/oracle/oradata/db1
RECO Location: /u03/app/oracle/fast_recovery_area/
REDO Location: /u03/app/oracle/redo/
State: ResourceState(status=Configured)
Created: July 3, 2016 4:19:21 AM UTC
UpdatedTime: July 3, 2016 4:41:29 AM UTC

dbcli create-dbstorage
Use the dbcli create-dbstorage command to create the database storage layout without creating the
complete database. This is useful for database migration and standby database creation.

Syntax

dbcli create-dbstorage -n <db_name> [-u <db_unique_name>] [-r {ACFS|ASM}] [-s <datasize>] [-h] [-j]

Parameters

Parameter Full Name Description


-h --help (Optional) Displays help for using
the command.
-j --json (Optional) Displays JSON output.
-n --dbname Defines the database name. The
database name must begin with an
alphabetic character and can contain
a maximum of eight alphanumeric
characters. Special characters are not
permitted.
-r --dbstorage (Optional) Defines the type of
database storage as ACFS or ASM.
The default value is ASM.
-s --dataSize (Optional) Defines the data size in
GBs. The minimum size is 10GB.
The default size is 100GB.

-u --databaseUniqueName (Optional) Defines the unique name


for the database. The default is
the database name specified in --
dbname.


Example
The following command creates database storage with a storage type of ACFS:

[root@dbsys ~]# dbcli create-dbstorage -r ACFS -n testdb -u testdbname

{
"jobId" : "5884a77a-0577-414f-8c36-1e9d8a1e9cee",
"status" : "Created",
"message" : null,
"reports" : [ ],
"createTimestamp" : 1467952215102,
"description" : "Database storage service creation with db name: testdb",
"updatedTime" : 1467952215103
}

dbcli delete-dbstorage
Use the dbcli delete-dbstorage command to delete database storage that is not being used by the database.
An error occurs if the resource is in use.

Syntax

dbcli delete-dbstorage -i <dbstorageID> [-h] [-j]

Parameters

Parameter Full Name Description


-h --help (Optional) Displays help for using
the command.
-i --id The database storage ID to
delete. Use the dbcli list-
dbstorages command to get the
database storage ID.
-j --json (Optional) Displays JSON output.

Example
The following command deletes the specified database storage:

[root@dbsys ~]# dbcli delete-dbstorage -i f444dd87-86c9-4969-a72c-fb2026e7384b

{
"jobId" : "467c9388-18c6-4e1a-8655-2fd3603856ef",
"status" : "Running",
"message" : null,
"reports" : [ ],
"createTimestamp" : 1467952336843,
"description" : "Database storage service deletion with id:
f444dd87-86c9-4969-a72c-fb2026e7384b",
"updatedTime" : 1467952336856
}


Dgconfig Commands

dbcli list-dgconfigs
Use the dbcli list-dgconfigs command to list DG configurations.

Syntax

dbcli list-dgconfigs [-h] [-j]

Parameters

Parameter Full Name Description


-h --help (Optional) Displays help for using
the command.
-j --json (Optional) Displays JSON output.

Featuretracking Commands

dbcli list-featuretracking
Use the dbcli list-featuretracking command to list tracked features.

Syntax

dbcli list-featuretracking [-h] [-j]

Parameters

Parameter Full Name Description


-h --help (Optional) Displays help for using
the command.
-j --json (Optional) Displays JSON output.

Job Commands
The following commands are available to manage jobs:
• dbcli describe-job
• dbcli list-jobs

dbcli describe-job
Use the dbcli describe-job command to display details about a specific job.

Syntax

dbcli describe-job -i <job_id> [-h] [-j]


Parameters

Parameter Full Name Description


-h --help (Optional) Displays help for using
the command.
-i --jobid Identifies the job. Use the dbcli
list-jobs command to get the
jobid.
-j --json (Optional) Displays JSON output.

Example
The following command displays details about the specified job ID:

[root@dbsys ~]# dbcli describe-job -i 74731897-fb6b-4379-9a37-246912025c17

Job details
----------------------------------------------------------------
ID: 74731897-fb6b-4379-9a37-246912025c17
Description: Backup service creation with db name: dbtst
Status: Success
Created: November 18, 2016 8:33:04 PM UTC
Message:

Task Name Start Time


End Time Status
---------------------------------------- -----------------------------------
----------------------------------- ----------
Backup Validations November 18, 2016 8:33:04 PM UTC
November 18, 2016 8:33:13 PM UTC Success
validate recovery window November 18, 2016 8:33:13 PM UTC
November 18, 2016 8:33:17 PM UTC Success
Db cross check November 18, 2016 8:33:17 PM UTC
November 18, 2016 8:33:23 PM UTC Success
Database Backup November 18, 2016 8:33:23 PM UTC
November 18, 2016 8:34:22 PM UTC Success
Backup metadata November 18, 2016 8:34:22 PM UTC
November 18, 2016 8:34:22 PM UTC Success

dbcli list-jobs
Use the dbcli list-jobs command to display a list of jobs, including the job IDs, status, and the job
created date and time stamp.

Syntax

dbcli list-jobs [-h] [-j]

Parameters

Parameter Full Name Description


-h --help (Optional) Displays help for using
the command.
-j --json (Optional) Displays JSON output.


Example
The following command displays a list of jobs:

[root@dbsys ~]# dbcli list-jobs

ID Description
Created
Status
----------------------------------------
---------------------------------------------------------------------------
----------------------------------- ----------
0a362dac-0339-41b5-9c9c-4d229e363eaa Database service creation with db
name: db11 November 10, 2016 11:37:54 AM UTC
Success
9157cc78-b487-4ee9-9f46-0159f10236e4 Database service creation with db
name: jhfpdb November 17, 2016 7:19:59 PM UTC
Success
013c408d-37ca-4f58-a053-02d4efdc42d0 create backup config:myBackupConfig
November 18, 2016 8:28:14 PM UTC
Success
921a54e3-c359-4aea-9efc-6ae7346cb0c2 update database
id:80ad855a-5145-4f8f-a08f-406c5e4684ff November 18,
2016 8:32:16 PM UTC Success
74731897-fb6b-4379-9a37-246912025c17 Backup service creation with db
name: dbtst November 18, 2016 8:33:04 PM
UTC Success
40a227b1-8c47-46b9-a116-48cc1476fc12 Creating a report for database
80ad855a-5145-4f8f-a08f-406c5e4684ff November 18, 2016 8:41:39 PM
UTC Success

Latestpatch Command

dbcli describe-latestpatch
Tip:

Your DB system might not include this newer command. If you have trouble
running the command, use the CLI Update Command on page 1484
to update the database CLI, and then retry the command.
Note:

The dbcli describe-latestpatch command is not available on 2-node
RAC DB systems. Patching 2-node systems from Object Storage is not
supported.
Use the dbcli describe-latestpatch command to show the latest patches applicable to the DB system and
available in Oracle Cloud Infrastructure Object Storage.
This command requires a valid Object Storage credentials configuration. Use the Bmccredential Commands on page
1499 to create the configuration if you haven't already done so. If the configuration is missing or invalid,
the command fails with the error: Failed to connect to the object store. Please provide
valid details.
For more information about updating the CLI, creating the credentials configuration, and applying patches, see
Patching a DB System on page 1408.

Syntax

dbcli describe-latestpatch [-h] [-j]


Parameters

Parameter Full Name Description


-h --help (Optional) Displays help for using
the command.
-j --json (Optional) Displays JSON output.

Example
The following command displays patches available in the object store:

[root@dbsys ~]# dbcli describe-latestpatch

componentType availableVersion
--------------- --------------------
gi 12.1.0.2.161018
db 11.2.0.4.161018
db 12.1.0.2.161018
oak 12.1.2.10.0

Logcleanjob Commands
The following commands are available to manage log cleaning jobs:
• dbcli create-logCleanJob
• dbcli describe-logCleanJob
• dbcli list-logCleanJobs

dbcli create-logCleanJob
Use the dbcli create-logCleanJob command to create a log cleaning job.

Syntax

dbcli create-logCleanJob [-c {gi|database|dcs}] [-o <number>] [-u {Day|Hour|Minute}] [-h] [-j]

Parameters

Parameter Full Name Description


-c --components (Optional) Components. Possible
values are gi, database, and dcs.
Separate multiple values by commas.
-h --help (Optional) Displays help for using
the command.
-j --json (Optional) Displays JSON output.
-o --olderThan (Optional) Quantity portion of time
interval. Default: 30. Cleans logs
older than the specified time interval
(-o and -u).

-u --unit (Optional) Unit portion of time
interval. Possible values: Day, Hour,
or Minute. Default: Day. Cleans logs
older than the specified time interval
(-o and -u).
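
Example
The following command is a hypothetical invocation (the components and interval shown are placeholder choices) that cleans gi and database logs older than 14 days:

[root@dbsys ~]# dbcli create-logCleanJob -c gi,database -o 14 -u Day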

dbcli describe-logCleanJob
Use the dbcli describe-logCleanJob command to display the summary for a log cleaning job.

Syntax

dbcli describe-logCleanJob -i <job_id> [-h] [-j]

Parameters

Parameter Full Name Description


-h --help (Optional) Displays help for using
the command.
-i --jobid ID of log cleaning job for which to
display the summary.
-j --json (Optional) Displays JSON output.
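
Example
The following sketch displays the summary for a log cleaning job. The job ID is a placeholder; use the dbcli list-logCleanJobs command to get a real ID:

[root@dbsys ~]# dbcli describe-logCleanJob -i <job_id>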

dbcli list-logCleanJobs
Use the dbcli list-logCleanJobs command to list log cleaning jobs.

Syntax

dbcli list-logCleanJobs [-h] [-j]

Parameters

Parameter Full Name Description


-h --help (Optional) Displays help for using
the command.
-j --json (Optional) Displays JSON output.

Logspaceusage Command

dbcli list-logSpaceUsage
Use the dbcli list-logSpaceUsage command to list log space usage.

Syntax

dbcli list-logSpaceUsage [-c {gi|database|dcs}] [-h] [-j]


Parameters

Parameter Full Name Description


-c --components (Optional) Components. Possible
values: gi, database, and dcs.
Separate multiple values by commas.
-h --help (Optional) Displays help for using
the command.
-j --json (Optional) Displays JSON output.
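
Example
The following hypothetical command lists log space usage for the database component only:

[root@dbsys ~]# dbcli list-logSpaceUsage -c database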

Netsecurity Commands
The following commands are available to manage network encryption on the DB system:
• dbcli describe-netsecurity
• dbcli update-netsecurity

dbcli describe-netsecurity
Use the dbcli describe-netsecurity command to display the current network encryption setting for a
database home.

Syntax

dbcli describe-netsecurity -H <db_home_id> [-h] [-j]

Parameters

Parameter Full Name Description


-H --dbHomeId Defines the database home ID.
Use the dbcli list-dbhomes
command to get the dbhomeid.
-h --help (Optional) Displays help for using
the command.
-j --json (Optional) Displays JSON output.

Example
The following command displays the encryption setting for specified database home:

[root@dbsys ~]# dbcli describe-netsecurity -H 16c96a9c-f579-4a4c-a645-8d4d22d6889d

NetSecurity Rules
----------------------------------------------------------------
DatabaseHomeID: 16c96a9c-f579-4a4c-a645-8d4d22d6889d

Role: Server
EncryptionAlgorithms: AES256 AES192 AES128
IntegrityAlgorithms: SHA1
ConnectionType: Required

Role: Client
EncryptionAlgorithms: AES256 AES192 AES128
IntegrityAlgorithms: SHA1


ConnectionType: Required

dbcli update-netsecurity
Use the dbcli update-netsecurity command to update the Oracle Net security configuration on the DB
system.

Syntax

dbcli update-netsecurity {-c|-s} -t {REJECTED|ACCEPTED|REQUESTED|REQUIRED} -H <db_home_id>
-e {AES256|AES192|AES128} -i {SHA1|SHA512|SHA384|SHA256} [-h] [-j]

Parameters

Parameter Full Name Description


-c --client Indicates that the specified data
encryption or data integrity
configuration is for the client.
(--client and --server are
mutually exclusive.)

-e --encryptionAlgorithms Defines the algorithm to be used for
encryption. Specify either AES256,
AES192, or AES128.
-H --dbHomeId Defines the database home ID.
Use the dbcli list-dbhomes
command to get the dbHomeId.
-h --help (Optional) Displays help for using
the command.
-i --integrityAlgorithms Defines the algorithm to be used
for integrity. Specify either SHA1,
SHA512, SHA384, or SHA256.
For Oracle Database 11g, the only
accepted value is SHA1.
-j --json (Optional) Displays JSON output.
-s --server Indicates that the specified data
encryption or data integrity
configuration is for the server.
(--client and --server are
mutually exclusive.)


-t --connectionType Specifies how Oracle Net Services
data encryption or data integrity
is negotiated with clients. The
following values are listed in the
order of increasing security:
REJECTED - Do not enable data
encryption or data integrity, even if
required by the client.
ACCEPTED - Enable data
encryption or data integrity if
required or requested by the client.
REQUESTED - Enable data
encryption or data integrity if the
client permits it.
REQUIRED - Enable data
encryption or data integrity or
preclude the connection.
For detailed information
about network data encryption
and integrity, see https://
docs.oracle.com/database/121/
DBSEG/asoconfg.htm#DBSEG1047.

Example
The following command updates the connection type to ACCEPTED:

[root@dbsys ~]# dbcli update-netsecurity -H a2ffbb07-c9c0-4467-a458-bce4d3b76cd5 -t ACCEPTED

Node Command

dbcli list-nodes
Use the dbcli list-nodes command to display a list of nodes, including the node numbers.

Syntax

dbcli list-nodes [-h] [-j]

Parameters

Parameter Full Name Description


-h --help (Optional) Displays help for using
the command.
-j --json (Optional) Displays JSON output.


Example
The following command displays a list of nodes:

[root@dbsys ~]# dbcli list-nodes


node Number node Name ilom Name IP Address Subnet
Mask Gateway
---------- ---------------------------------------- --------------------
------------------ ------------------ ------------------
0 rac21 N/A N/A N/A N/A
1 rac22 N/A N/A N/A N/A

Objectstoreswift Commands
You can back up a database to an existing bucket in the Oracle Cloud Infrastructure Object Storage service by using
the dbcli create-backup on page 1489 command, but first you'll need to:
1. Create an object store on the DB system, which contains the endpoint and credentials to access Object Storage, by
using the dbcli create-objectstoreswift command.
2. Create a backup configuration that refers to the object store ID and the bucket name by using the dbcli create-
backupconfig on page 1493 command.
3. Associate the backup configuration with the database by using the dbcli update-database on page 1517
command.
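
Taken together, the three steps above might look like the following rough sketch. The placeholder values are illustrative, and the exact parameters for dbcli create-backupconfig and dbcli update-database are documented on the pages referenced above:

[root@dbsys ~]# dbcli create-objectstoreswift -n <object_store_name> -t <object_storage_namespace> -u <user_name> -e https://swiftobjectstorage.<region_name>.oraclecloud.com/v1 -p
[root@dbsys ~]# dbcli create-backupconfig ...   (refer to the object store ID returned by the previous command and the bucket name)
[root@dbsys ~]# dbcli update-database ...       (associate the backup configuration with the database)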
The following commands are available to manage object stores.
• dbcli create-objectstoreswift
• dbcli describe-objectstoreswift
• dbcli list-objectstoreswifts

dbcli create-objectstoreswift
Use the dbcli create-objectstoreswift command to create an object store.

Syntax

dbcli create-objectstoreswift -n <object_store_name> -t <object_storage_namespace> -u <user_name>
-e https://swiftobjectstorage.<region_name>.oraclecloud.com/v1 -p [-h] [-j]

where <object_storage_namespace> is your tenancy's Object Storage namespace.

Parameters

Parameter Full Name Description


-e --endpointurl The following endpoint URL:
https://swiftobjectstorage.<region_name>.oraclecloud.com/v1
See Regions and Availability
Domains for region name strings.

-h --help (Optional) Displays help for using
the command.
-j --json (Optional) Displays JSON output.
-n --name The name for the object store to be
created.



-p --swiftpassword The auth token that you generated
by using the Console or IAM API.
For information about generating an
auth token for use with Swift, see
Managing User Credentials on page
2475.
This is not the password for the
Oracle Cloud Infrastructure user.
Specify -p (with no password) to be
prompted.
Specify -hp "<password> " in
quotes to provide the password (auth
token) in the command.

-t --tenantname The Object Storage namespace of
your tenancy.
-u --username The user name for the Oracle Cloud
Infrastructure user account, for
example:
-u [email protected]
This is the user name you use to sign
in to the Console.
The user name must have tenancy-
level access to the Object Storage.
An easy way to do this is to add the
user name to the Administrators
group. However, that allows access
to all of the cloud services. Instead,
an administrator can create a policy
that allows tenancy-level access to
just Object Storage. The following is
an example of such a policy.

Allow group DBAdmins to manage buckets in tenancy
Allow group DBAdmins to manage objects in tenancy

For more information about adding
a user to a group, see Managing
Groups on page 2438. For more
information about policies, see
Getting Started with Policies on page
2143.


Example
The following command creates an object store and prompts for the Swift password:

[root@dbsys ~]# dbcli create-objectstoreswift -n r2swift -t MyObjectStorageNamespace -u [email protected]
-e https://swiftobjectstorage.<region_name>.oraclecloud.com/v1 -p
Password for Swift:
{
"jobId" : "c565bb71-f67b-4fab-9d6f-a34eae36feb7",
"status" : "Created",
"message" : "Create object store swift",
"reports" : [ ],
"createTimestamp" : "January 19, 2017 11:11:33 AM PST",
"resourceList" : [ {
"resourceId" : "8a0fe039-f5d4-426a-8707-256c612b3a30",
"resourceType" : "ObjectStoreSwift",
"jobId" : "c565bb71-f67b-4fab-9d6f-a34eae36feb7",
"updatedTime" : "January 19, 2017 11:11:33 AM PST"
} ],
"description" : "create object store:biyanr2swift",
"updatedTime" : "January 19, 2017 11:11:33 AM PST"
}

dbcli describe-objectstoreswift
Use the dbcli describe-objectstoreswift command to display details about an object store.

Syntax

dbcli describe-objectstoreswift -i <object_store_swift_id> -in <object_store_swift_name> [-h] [-j]

Parameters

Parameter Full Name Description


-h --help (Optional) Displays help for using
the command.
-i --objectstoreswiftid The object store ID. Use the dbcli
list-objectstoreswifts
command to get the ID.
-in --objectstoreswiftName The object store name.
Use the dbcli list-
objectstoreswifts command
to get the name.
-j --json (Optional) Displays JSON output.

Example
The following command displays details about an object store:

[root@dbsys ~]# dbcli describe-objectstoreswift -i 910e9e2d-25b4-49b4-b88e-ff0332f7df87
Object Store details
----------------------------------------------------------------
ID: 910e9e2d-25b4-49b4-b88e-ff0332f7df87


Name: objstrswift15
UserName: [email protected]
TenantName: CompanyABC
endpoint URL: https://swiftobjectstorage.<region_name>.oraclecloud.com/v1
CreatedTime: November 16, 2016 11:25:34 PM UTC
UpdatedTime: November 16, 2016 11:25:34 PM UTC

dbcli list-objectstoreswifts
Use the dbcli list-objectstoreswifts command to list the object stores on a DB system.

Syntax

dbcli list-objectstoreswifts [-h] [-j]

Parameters

Parameter Full Name Description


-h --help (Optional) Displays help for using
the command.
-j --json (Optional) Displays JSON output.

Example
The following command lists the object stores on the DB system:

[root@dbsys ~]# dbcli list-objectstoreswifts

ID Name UserName
TenantName Url
createTime
---------------------------------------- --------------------
-------------------- -------------- ------
----------------------------------------------------
-----------------------------------
2915bc6a-6866-436a-a38c-32302c7c4d8b swiftobjstr1
[email protected]
LargeComputers https://swiftobjectstorage.<region_name>.oraclecloud.com/v1
November 10, 2016 8:42:18 PM UTC
910e9e2d-25b4-49b4-b88e-ff0332f7df87 objstrswift15
[email protected]
LargeComputers https://swiftobjectstorage.<region_name>.oraclecloud.com/v1
November 16, 2016 11:25:34 PM UTC

Pendingjob Command

dbcli list-pendingjobs
Use the dbcli list-pendingjobs command to display a list of pending jobs.

Syntax

dbcli list-pendingjobs [-h] [-j]


Parameters

Parameter Full Name Description


-h --help (Optional) Displays help for using
the command.
-j --json (Optional) Displays JSON output.

Rmanbackupreport Commands
The following commands are available to manage RMAN backup reports:
• dbcli create-rmanbackupreport
• dbcli delete-rmanbackupreport
• dbcli describe-rmanbackupreport
• dbcli list-rmanbackupreports

dbcli create-rmanbackupreport
Use the dbcli create-rmanbackupreport command to create an RMAN backup report.

Syntax

dbcli create-rmanbackupreport -w {summary|detailed} -rn <name> [-i <db_id>] [-in <db_name>] [-h] [-j]

Parameters

Parameter Full Name Description


-h --help (Optional) Displays help for using
the command.
-i --dbid (Optional) Database resource ID.
-in --dbname (Optional) Database resource name.
-j --json (Optional) Displays JSON output.
-rn --rptname RMAN backup report name.
Maximum number of characters: 30.
Wrap name in single quotes when
special characters are used.
-w --reporttype RMAN backup report type. Possible
values: summary or detailed.
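
Example
The following sketch creates a detailed RMAN backup report for a database (the report and database names are placeholders):

[root@dbsys ~]# dbcli create-rmanbackupreport -w detailed -rn 'weekly_rman_rpt' -in testdb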

dbcli delete-rmanbackupreport
Use the dbcli delete-rmanbackupreport command to delete an RMAN backup report.

Syntax

dbcli delete-rmanbackupreport [-d <db_id>] [-dn <db_name>] [-n <number>] [-i <rpt_id>] [-in <rpt_name>] [-h] [-j]


Parameters

Parameter Full Name Description


-d --dbid (Optional) Database resource ID.
-dn --dbname (Optional) Database resource name.
-h --help (Optional) Displays help for using
the command.
-i --reportid (Optional) RMAN backup report ID
-in --rptname (Optional) RMAN backup report
name
-j --json (Optional) Displays JSON output.
-n --numofday (Optional) Number of days since
created (provided with Database ID/
Database Name)
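
Example
The following hypothetical command deletes an RMAN backup report by name (the report name is a placeholder):

[root@dbsys ~]# dbcli delete-rmanbackupreport -in 'weekly_rman_rpt'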

dbcli describe-rmanbackupreport
Use the dbcli describe-rmanbackupreport command to display details about a specific RMAN backup report.

Syntax

dbcli describe-rmanbackupreport [-i <rpt_id>] [-in <rpt_name>] [-h] [-j]

Parameters

Parameter Full Name Description


-h --help (Optional) Displays help for using
the command.
-i --id (Optional) RMAN backup report ID
-in --name (Optional) RMAN backup report
name
-j --json (Optional) Displays JSON output.
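
Example
The following hypothetical command displays an RMAN backup report by name (the report name is a placeholder):

[root@dbsys ~]# dbcli describe-rmanbackupreport -in 'weekly_rman_rpt'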

dbcli list-rmanbackupreports
Use the dbcli list-rmanbackupreports command to list RMAN backup reports.

Syntax

dbcli list-rmanbackupreports [-i <db_id>] [-in <db_name>] [-h] [-j]

Parameters

Parameter Full Name Description


-h --help (Optional) Displays help for using
the command.
-i --dbid (Optional) Database resource ID.



-in --dbName (Optional) Database resource name.
-j --json (Optional) Displays JSON output.
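
Example
The following hypothetical command lists the RMAN backup reports for a single database (the database name is a placeholder):

[root@dbsys ~]# dbcli list-rmanbackupreports -in testdb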

Schedule Commands
The following commands are available to manage schedules:
• dbcli describe-schedule
• dbcli list-schedules
• dbcli update-schedule

dbcli describe-schedule
Use the dbcli describe-schedule command to describe a schedule.

Syntax

dbcli describe-schedule -i <id> [-h] [-j]

Parameters

Parameter Full Name Description


-h --help (Optional) Displays help for using
the command.
-i --scheduleid Schedule ID.
-j --json (Optional) Displays JSON output.
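
Example
The following sketch describes a schedule. The schedule ID is a placeholder; use the dbcli list-schedules command to get the ID:

[root@dbsys ~]# dbcli describe-schedule -i <schedule_id>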

dbcli list-schedules
Use the dbcli list-schedules command to list schedules.

Syntax

dbcli list-schedules [-h] [-j]

Parameters

Parameter Full Name Description


-h --help (Optional) Displays help for using
the command.
-j --json (Optional) Displays JSON output.

dbcli update-schedule
Use the dbcli update-schedule command to update a schedule.

Syntax

dbcli update-schedule -i <id> [-x <expression>] [-t <description>] [-d] [-e] [-h] [-j]


Parameters

Parameter Full Name Description


-d --disable (Optional) Disables the schedule.
-e --enable (Optional) Enables the schedule.
-h --help (Optional) Displays help for using
the command.
-i --scheduleid Schedule ID.
-j --json (Optional) Displays JSON output.
-t --description (Optional) Description
-x --cronExpression (Optional) Cron expression. Use
cronmaker.com to generate a valid
cron expression.
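
Example
The following hypothetical commands disable a schedule and then re-enable it (the schedule ID is a placeholder):

[root@dbsys ~]# dbcli update-schedule -i <schedule_id> -d
[root@dbsys ~]# dbcli update-schedule -i <schedule_id> -e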

Scheduledexecution Command

dbcli list-scheduledExecutions
Use the dbcli list-scheduledExecutions command to list scheduled executions.

Syntax

dbcli list-scheduledExecutions [-e <execution_id>] [-i <schedule_id>] [-h] [-j]

Parameters

Parameter Full Name Description


-e --executionid (Optional) Execution ID.
-h --help (Optional) Displays help for using
the command.
-i --scheduleid (Optional) Schedule ID.
-j --json (Optional) Displays JSON output.
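
Example
The following hypothetical command lists the executions for a single schedule (the schedule ID is a placeholder):

[root@dbsys ~]# dbcli list-scheduledExecutions -i <schedule_id>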

Server Command

dbcli update-server
Tip:

Your DB system might not include this newer command. If you have trouble
running the command, use the CLI Update Command on page 1484
to update the database CLI, and then retry the command.
Use the dbcli update-server command to apply patches to the server components in the DB system. For more
information about applying patches, see Patching a DB System on page 1408.

Syntax

dbcli update-server [-n <number>] [--local] [--precheck] [-h] [-j]


Parameters

Parameter Full Name Description


-h --help (Optional) Displays help for using
the command.
-j --json (Optional) Displays JSON output.
-l --local (Optional) Performs the operation
on the local node of a multi-node
high availability (HA) system. This
parameter is not needed to perform
the operation on a single-node
system.

-n --node (Optional) Node number to be
updated. Use the dbcli list-nodes
command to get the node number.

-p --precheck (Optional) Runs precheck operations
to check prerequisites.

Examples
The following commands update the server and show the output from the update job:

[root@dbsys ~]# dbcli update-server


{
"jobId" : "9a02d111-e902-4e94-bc6b-9b820ddf6ed8",
"status" : "Created",
"reports" : [ ],
"createTimestamp" : "January 19, 2017 09:37:11 AM PST",
"resourceList" : [ ],
"description" : "Server Patching",
"updatedTime" : "January 19, 2017 09:37:11 AM PST"
}

# dbcli describe-job -i 9a02d111-e902-4e94-bc6b-9b820ddf6ed8

Job details
----------------------------------------------------------------
ID: 9a02d111-e902-4e94-bc6b-9b820ddf6ed8
Description: Server Patching
Status: Running
Created: January 19, 2017 9:37:11 AM PST
Message:

Task Name Start Time


End Time Status
---------------------------------------- -----------------------------------
----------------------------------- ----------
Create Patching Repository Directories January 19, 2017 9:37:11 AM PST
January 19, 2017 9:37:11 AM PST Success
Download latest patch metadata January 19, 2017 9:37:11 AM PST
January 19, 2017 9:37:11 AM PST Success
Update System version January 19, 2017 9:37:11 AM PST
January 19, 2017 9:37:11 AM PST Success
Update Patching Repository January 19, 2017 9:37:11 AM PST
January 19, 2017 9:38:35 AM PST Success


oda-hw-mgmt upgrade January 19, 2017 9:38:35 AM PST
January 19, 2017 9:38:58 AM PST Success
Opatch updation January 19, 2017 9:38:58 AM PST
January 19, 2017 9:38:58 AM PST Success
Patch conflict check January 19, 2017 9:38:58 AM PST
January 19, 2017 9:42:06 AM PST Success
apply clusterware patch January 19, 2017 9:42:06 AM PST
January 19, 2017 10:02:32 AM PST Success
Updating GiHome version January 19, 2017 10:02:32 AM PST
January 19, 2017 10:02:38 AM PST Success

The following command updates node 0 of the server only, with precheck:

# dbcli update-server -n 0 -p
{
"jobId" : "3e2a1e3c-83d3-4101-86b8-4d525f3f8c18",
"status" : "Created",
"message" : null,
"reports" : [ ],
"createTimestamp" : "April 26, 2019 06:07:27 AM UTC",
"resourceList" : [ ],
"description" : "Server Patching Prechecks",
"updatedTime" : "April 26, 2019 06:07:27 AM UTC"
}

System Command

dbcli describe-system
Use the dbcli describe-system command to display details about the system. On a 2-node RAC DB system,
the command provides information about the local node.

Syntax

dbcli describe-system [-b] [-d] [-h] [-j]

Parameters

Parameter Full Name Description


-b --bom (Optional) Displays BOM
information.

-d --details (Optional) Displays additional
information about the DB system,
including dcs CLI and agent version
information.
-h --help (Optional) Displays help for using
the command.
-j --json (Optional) Displays JSON output.
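
Example
The following hypothetical command displays system details, including dcs CLI and agent version information:

[root@dbsys ~]# dbcli describe-system -d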

TDE Commands
The following commands are available to manage TDE-related items (backup reports, keys, and wallets):
• dbcli list-tdebackupreports
• dbcli update-tdekey
• dbcli recover-tdewallet


dbcli list-tdebackupreports
Use the dbcli list-tdebackupreports command to list backup reports for TDE wallets.

Syntax

dbcli list-tdebackupreports [-i <db_id>] [-in <db_name>] [-h] [-j]

Parameters

Parameter Full Name Description


-h --help (Optional) Displays help for using
the command.
-i --dbResid (Optional) Displays the TDE Wallet
backup reports for the specified
database resource ID. Use the
dbcli list-databases
command to get the database
resource ID.
-in --dbResname (Optional) Displays the TDE Wallet
backup reports for the specified
database resource name. Use the
dbcli list-databases
command to get the database
resource name.
-j --json (Optional) Displays JSON output.

Example
The following command lists the backup reports for TDE wallets:

[root@dbsys ~]# dbcli list-tdebackupreports


DbResID OraDbId BackupLocation
--------------------------------------- --------------------
----------------------------------------
538ca5b1-654d-4418-8ce1-f49b6c987a60 1257156075 https://
swiftobjectstorage.us-phoenix-1.oraclecloud.com/v1/dbaasimage/
backuptest/host724007/tdewallet/Testdb5/1257156075/2017-08-17/
TDEWALLET_BMC60_2017-08-17_10-58-17.0990.tar.gz
538ca5b1-9fb2-4245-b157-6e25d7c988c5 704287483 https://
swiftobjectstorage.us-phoenix-1.oraclecloud.com/v1/dbaasimage/
backuptest/host724007/tdewallet/Testdb1/704287483/2017-08-17/
TDEWALLET_AUTO_2017-08-17_11-03-25.0953.tar.gz
538ca5b1-9fb2-4245-b157-6e25d7c988c5 704287483 https://
swiftobjectstorage.us-phoenix-1.oraclecloud.com/v1/dbaasimage/
backuptest/host724007/tdewallet/Testdb1/704287483/2017-08-17/
TDEWALLET_BMC62_2017-08-17_11-04-41.0264.tar.gz
19714ffa-de1b-4433-9188-c0592887e609 1157116855 https://
swiftobjectstorage.us-phoenix-1.oraclecloud.com/v1/dbaasimage/
backuptest/host724007/tdewallet/Testdb7/1157116855/2017-08-17/
TDEWALLET_AUTO_2017-08-17_11-57-47.0605.tar.gz


dbcli update-tdekey
Use the dbcli update-tdekey command to update the TDE encryption key inside the TDE wallet. You can
update the encryption key for Pluggable Databases (if -pdbNames are specified), and/or the Container Database (if -
rootDatabase is specified).

Syntax

dbcli update-tdekey -i <db_id> -p [-all] -n <pdbname1,pdbname2> [-r|-no-r] -t <tag_name> [-h] [-j]

Parameters

Parameter Full Name Description


-all --allPdbNames (Optional) Flag to rotate (update) all
PDB names. To update all instead
of specified PDB names, use this
parameter instead of -n. Default:
false.

-i --databaseId Defines the database ID for which to
update the key.

-p --password Defines the TDE Admin wallet
password. Specify -p with no
password. You will be prompted for
the password.
If you must provide the password
in the command, for example in
a script, use -hp <password>
instead of -p.

-n --pdbNames Defines the PDB names to be rotated
(updated).

-r --rootDatabase
-no-r --no-rootDatabase Indicates whether to rotate the
key for the root database if it is a
container database.
-t --tagName Defines the TagName used to
back up the wallet. The default is
OdaRotateKey.
-h --help (Optional) Displays help for using
the command.
-j --json (Optional) Displays JSON output.

Example
The following command updates the key for pdb1 and pdb2 only:

[root@dbsys ~]# dbcli update-tdekey -dbid ee3eaab6-a45b-4e61-a218-c4ba665503d9 -p -n pdb1,pdb2

TDE Admin wallet password:


{


"jobId" : "08e5edb1-42e1-4d16-a47f-783c0afa4778",
"status" : "Created",
"message" : null,
"reports" : [ ],
"createTimestamp" : 1467876407035,
"description" : "TDE update",
"updatedTime" : 1467876407035
}

The following command updates pdb1, pdb2, and the container database:

[root@dbsys ~]# dbcli update-tdekey -dbid ee3eaab6-a45b-4e61-a218-c4ba665503d9 -p -n pdb1,pdb2 -r

TDE Admin wallet password:


{
"jobId" : "c72385f0-cd81-42df-a8e8-3a1e7cab1278",
"status" : "Created",
"message" : null,
"reports" : [ ],
"createTimestamp" : 1467876433783,
"description" : "TDE update",
"updatedTime" : 1467876433783
}

dbcli recover-tdewallet
Use the dbcli recover-tdewallet command to recover a TDE wallet.

Syntax

dbcli recover-tdewallet -in <db_name> -tp <password> [-l <location>] [-h] [-j]

Parameters

Parameter Full Name Description


-h --help (Optional) Displays help for using
the command.
-in --dbName Database name.
-j --json (Optional) Displays JSON output.
-l --tdeWalletBackuplocation (Optional) TDE wallet backup
location. The TDE wallet should be
backed up in tar.gz format.
-tp --tdeWalletPassword Defines the TDE Admin wallet
password.
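
Example
The following sketch recovers the TDE wallet for a database (the database name and wallet password are placeholders):

[root@dbsys ~]# dbcli recover-tdewallet -in testdb -tp <wallet_password>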

Admin Commands
The following commands are available to perform administrative actions on the DB system:
• dbadmcli manage diagcollect
• dbadmcli power
• dbadmcli power disk status
• dbadmcli show controller


• dbadmcli show disk


• dbadmcli show diskgroup
• dbadmcli show env_hw (environment type and hardware version)
• dbadmcli show fs (file system details)
• dbadmcli show storage
• dbadmcli stordiag

dbadmcli manage diagcollect


Use the dbadmcli manage diagcollect command to collect diagnostic information about a DB system for
troubleshooting purposes, and for working with Oracle Support Services.

Syntax

dbadmcli manage diagcollect --storage [-h]

Parameters

Parameter Description
-h (Optional) Displays help for using the command.
--storage Collects all of the logs for any storage issues.

Example

[root@dbsys ~]# dbadmcli manage diagcollect --storage


Collecting storage log data. It will take a while, please wait...
Collecting oak data. It will take a while, please wait...
tar: Removing leading `/' from member names
tar: /opt/oracle/oak/onecmd/tmp/OakCli-Command-Output.log: file changed as
we read it

Logs are collected to : /opt/oracle/oak/log/dbsys/oakdiag/oakStorage-dbsys-20161118_2101.tar.gz

dbadmcli power
Use the dbadmcli power command to power a disk on or off.
Note:

The dbadmcli power command is not available on 2-node
RAC DB systems.

Syntax

dbadmcli power {-on|-off} <name> [-h]

Parameters

Parameter Description
-h (Optional) Displays help for using the command.

name Defines the disk resource name. The resource name
format is pd_[0..3]. Use the dbadmcli show disk
command to get the disk resource name.
-off Powers off the disk.
-on Powers on the disk.
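
Example
The following hypothetical commands power off, and then power back on, the disk named pd_01 (the disk name follows the pd_[0..3] format described above):

[root@dbsys ~]# dbadmcli power -off pd_01
[root@dbsys ~]# dbadmcli power -on pd_01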

dbadmcli power disk status


Use the dbadmcli power disk status command to display the current power status of a disk.

Syntax

dbadmcli power disk status <name> [-h]

Parameters

Parameter Description
-h (Optional) Displays help for using the command.
name Identifies a specific disk resource name. The resource
name format is pd_[0..3]. For example, pd_01.

Example

[root@dbsys ~]# dbadmcli power disk status pd_00

The disk is powered ON

dbadmcli show controller


Use the dbadmcli show controller command to display details of the controller.

Syntax

dbadmcli show controller <controller_id> [-h]

Parameter

Parameter Description
controller_id The ID number of the controller. Use the dbadmcli
show storage command to get the ID.
-h (Optional) Displays help for using the command.
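
Example
The following hypothetical command displays details for controller 0; use the dbadmcli show storage command to get a real controller ID:

[root@dbsys ~]# dbadmcli show controller 0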

dbadmcli show disk


Use the dbadmcli show disk command to display the status of a single disk or all disks on the DB system.


Syntax

dbadmcli show disk [<name>] [-shared] [-all] [-getlog] [-h]

Parameters

Parameter Description
-all (Optional) Displays detailed information for the named
disk.
-h (Optional) Displays help for using the command.
-getlog (Optional) Displays all the SMART log entries for an
NVMe disk.
name (Optional) Identifies a specific disk resource name.
The resource name format is pd_[0..3]. If omitted, the
command displays information about all disks on the
system.

-shared (Optional) Displays all the shared disks.

Examples
To display the status of all the disks on the system:

[root@dbsys ~]# dbadmcli show disk


NAME PATH TYPE STATE
STATE_DETAILS

pd_00 /dev/nvme2n1 NVD ONLINE Good


pd_01 /dev/nvme3n1 NVD ONLINE Good
pd_02 /dev/nvme1n1 NVD ONLINE Good
pd_03 /dev/nvme0n1 NVD ONLINE Good

To display the status of a disk named pd_00:

[root@dbsys ~]# dbadmcli show disk pd_00


The Resource is : pd_00
ActionTimeout : 1500
ActivePath : /dev/nvme2n1
AsmDiskList : |data_00||reco_00|
AutoDiscovery : 1
AutoDiscoveryHi : |data:70:NVD||reco:30:NVD|
CheckInterval : 300
ColNum : 0
CriticalWarning : 0
DependListOpr : add
Dependency : |0|
DiskId : 360025380144d5332
DiskType : NVD
Enabled : 1
ExpNum : 29
HbaPortNum : 10
IState : 0
Initialized : 0
IsConfigDepende : false
ModelNum : MS1PC2DD3ORA3.2T
MonitorFlag : 1
MultiPathList : |/dev/nvme2n1|


Name : pd_00
NewPartAddr : 0
OSUserType : |userType:Multiuser|
PlatformName : X5_2_LITE_IAAS
PrevState : Invalid
PrevUsrDevName :
SectorSize : 512
SerialNum : S2LHNAAH502855
Size : 3200631791616
SlotNum : 0
SmartDiskWarnin : 0
SmartTemperatur : 32
State : Online
StateChangeTs : 1467176081
StateDetails : Good
TotalSectors : 6251233968
TypeName : 0
UsrDevName : NVD_S00_S2LHNAAH502855
VendorName : Samsung
gid : 0
mode : 660
uid : 0

To display the SMART logs for an NVMe disk:

[root@dbsys ~]# dbadmcli show disk pd_00 -getlog


SMART / Health Information :
----------------------------
Critical Warning : Available Spare below Threshold : FALSE
Critical Warning : Temperature above Threshold : FALSE
Critical Warning : Reliability Degraded : FALSE
Critical Warning : Read-Only Mode : FALSE
Critical Warning : Volatile Memory Backup Device Failure : FALSE
Temperature : 32 degree
Celsius
Available Spare : 100%
Available Spare Threshold : 10%
Device Life Used : 0%
Data Units Read (in 512k byte data unit) : 89493
Data Units Written (in 512k byte data unit) : 270387
Number of Host Read Commands : 4588381
Number of Host Write Commands : 6237344
Controller Busy Time : 3 minutes
Number of Power Cycles : 227
Number of Power On Hours : 1115
Number of Unsafe Shutdowns : 218
Number of Media Errors : 0
Number of Error Info Log Entries : 0

dbadmcli show diskgroup


Use the dbadmcli show diskgroup command to list configured diskgroups or display a specific diskgroup
configuration.

Syntax
To list configured diskgroups:

dbadmcli show diskgroup [-h]


To display DATA configurations:

dbadmcli show diskgroup [DATA] [-h]

To display RECO configurations:

dbadmcli show diskgroup [RECO] [-h]

Parameters

Parameter Description
DATA (Optional) Displays the DATA diskgroup configurations.
-h (Optional) Displays help for using the command.
RECO (Optional) Displays the RECO diskgroup configurations.

Examples
To list all diskgroups:

[root@dbsys ~]# dbadmcli show diskgroup

DiskGroups
----------
DATA
RECO

To display DATA configurations:

[root@dbsys ~]# dbadmcli show diskgroup DATA

ASM_DISK PATH DISK STATE STATE_DETAILS


data_00 /dev/NVD_S00_S2LHNAAH101026p1 pd_00 ONLINE Good
data_01 /dev/NVD_S01_S2LHNAAH101008p1 pd_01 ONLINE Good

dbadmcli show env_hw


Use the dbadmcli show env_hw command to display the environment type and hardware version of the current
DB system.

Syntax

dbadmcli show env_hw [-h]

Parameter

Parameter Description
-h (Optional) Displays help for using the command.

dbadmcli show fs
Use the dbadmcli show fs command to display file system details.


Syntax

dbadmcli show fs [-h]

Parameter

Parameter Description
-h (Optional) Displays help for using the command.

dbadmcli show storage


Use the dbadmcli show storage command to show the storage controllers, expanders, and disks.

Syntax

dbadmcli show storage [-h]

To show storage errors:

dbadmcli show storage -errors [-h]

Parameters

Parameter Description
-errors (Optional) Shows storage errors.
-h (Optional) Displays help for using the command.

Example
To display storage devices:

[root@dbsys ~]# dbadmcli show storage


==== BEGIN STORAGE DUMP ========
Host Description: Oracle Corporation:ORACLE SERVER X5-2
Total number of controllers: 5
Id = 4
Pci Slot = -1
Serial Num =
Vendor =
Model =
FwVers =
strId = iscsi_tcp:00:00.0
Pci Address = 00:00.0

Id = 0
Pci Slot = 13
Serial Num = S2LHNAAH504431
Vendor = Samsung
Model = MS1PC2DD3ORA3.2T
FwVers = KPYA8R3Q
strId = nvme:25:00.00
Pci Address = 25:00.0

Id = 1
Pci Slot = 12
Serial Num = S2LHNAAH505449


Vendor = Samsung
Model = MS1PC2DD3ORA3.2T
FwVers = KPYA8R3Q
strId = nvme:27:00.00
Pci Address = 27:00.0

Id = 2
Pci Slot = 10
Serial Num = S2LHNAAH503573
Vendor = Samsung
Model = MS1PC2DD3ORA3.2T
FwVers = KPYA8R3Q
strId = nvme:29:00.00
Pci Address = 29:00.0

Id = 3
Pci Slot = 11
Serial Num = S2LHNAAH503538
Vendor = Samsung
Model = MS1PC2DD3ORA3.2T
FwVers = KPYA8R3Q
strId = nvme:2b:00.00
Pci Address = 2b:00.0

Total number of expanders: 0


Total number of PDs: 4
/dev/nvme2n1 Samsung NVD 3200gb slot: 0 pci : 29
/dev/nvme3n1 Samsung NVD 3200gb slot: 1 pci : 2
/dev/nvme1n1 Samsung NVD 3200gb slot: 2 pci : 27
/dev/nvme0n1 Samsung NVD 3200gb slot: 3 pci : 25
==== END STORAGE DUMP =========

dbadmcli stordiag
Use the dbadmcli stordiag command to collect detailed information for each disk or NVM Express (NVMe).

Syntax

dbadmcli stordiag <name> [-h]

Parameters

Parameter Description
name Defines the disk resource name. The resource name
format is pd_[0..3].
-h (Optional) Displays help for using the command.

Example
To display detailed information for NVMe pd_00:

[root@dbsys ~]# dbadmcli stordiag pd_0

Database Sizing Templates


When you create a database using the dbcli create-database command, you can specify a database sizing
template with the --dbshape parameter. The sizing templates are configured for different types of database
workloads. Choose the template that best matches the most common workload your database performs:


• Use the OLTP templates if your database workload is primarily online transaction processing (OLTP).
• Use the DSS templates if your database workload is primarily decision support (DSS) or data warehousing.
• Use the in-memory (IMDB) templates if your database workload can fit in memory, and can benefit from in-
memory performance capabilities.
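
For reference, a sizing template is selected at database creation time with the --dbshape parameter. The following sketch uses placeholder values and assumes the dbcli create-database options documented elsewhere in this guide:

[root@dbsys ~]# dbcli create-database -n testdb --dbshape odb4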
The following tables describe the templates for each type of workload.

OLTP Database Sizing Templates

Template CPU Cores SGA (GB) PGA (GB) Flash (GB) Processes Redo Log File Size (GB) Log Buffer (MB)

odb1s 1 2 1 6 200 1 16

odb1 1 4 2 12 200 1 16

odb2 2 8 4 24 400 1 16

odb4 4 16 8 48 800 1 32

odb6 6 24 12 72 1200 2 64

odb8 8 32 16 n/a 1600 2 64

odb10 10 40 20 n/a 2000 2 64

odb12 12 48 24 144 2400 4 64

odb16 16 64 32 192 3200 4 64

odb20 20 80 40 n/a 4000 4 64

odb24 24 96 48 192 4800 4 64

odb32 32 128 64 256 6400 4 64

odb36 36 128 64 256 7200 4 64

DSS Database Sizing Templates

Template CPU Cores SGA (GB) PGA (GB) Processes Redo Log File Size (GB) Log Buffer (MB)

odb1s 1 1 2 200 1 16

odb1 1 2 4 200 1 16

odb2 2 4 8 400 1 16

odb4 4 8 16 800 1 32

odb6 6 12 24 1200 2 64


odb8 8 16 32 1600 2 64

odb10 10 20 40 2000 2 64

odb12 12 24 48 2400 4 64

odb16 16 32 64 3200 4 64

odb20 20 40 80 4000 4 64

odb24 24 48 96 4800 4 64

odb32 32 64 128 6400 4 64

odb36 36 64 128 7200 4 64

In-Memory Database Sizing Templates

Template CPU Cores SGA (GB) PGA (GB) In-Memory (GB) Processes Redo Log File Size (GB) Log Buffer (MB)

odb1s 1 2 1 1 200 1 16

odb1 1 4 2 2 200 1 16

odb2 2 8 4 4 400 1 16

odb4 4 16 8 8 800 1 32

odb6 6 24 12 12 1200 2 64

odb8 8 32 16 16 1600 2 64

odb10 10 40 20 20 2000 2 64

odb12 12 48 24 24 2400 4 64

odb16 16 64 32 32 3200 4 64

odb20 20 80 40 40 4000 4 64

odb24 24 96 48 48 4800 4 64

odb32 32 128 64 64 6400 4 64

odb36 36 128 64 64 7200 4 64


Storage Scaling Considerations for Virtual Machine Databases Using Fast Provisioning
Note:

This topic applies only to 1-node virtual machine DB systems.


When you provision a virtual machine DB system using the fast provisioning option, the Available storage
(GB) value you specify during provisioning determines the maximum total storage available through scaling.
The following table details the maximum storage value available through scaling for each setting offered in the
provisioning workflow:

Initial storage specified during provisioning (GB) Maximum storage available through scaling (GB)
256 2560
512 2560
1024 5120
2048 10240
4096 20480
8192 40960

For more information on creating a virtual machine DB system, see Creating Bare Metal and Virtual Machine DB
Systems on page 1371.

Security Technical Implementation Guide (STIG) Tool for Virtual Machine DB Systems
This topic describes a Python script, referred to as the STIG tool, for Oracle Cloud Infrastructure virtual machine DB
systems provisioned using Oracle Linux 7. The STIG tool is used to ensure security compliance with DISA's Oracle
Linux 7 STIG. The script does the following:
• Makes the base image of the virtual machine DB system compliant with the Oracle Linux 7 STIG
• Embeds certain STIG rules into the system that can be activated after provisioning when required to meet
security compliance standards
• Categorizes the embedded rules, allowing you to view and monitor the rules in the following categories:
• Static: Rules included in the base image
• DoD: Rules optionally activated after provisioning when needed to meet U.S. Department of Defense
compliance standards
• Runtime: Rules activated after provisioning when needed. Intended for use by all users needing to harden
security for virtual machine DB systems (including users outside of the U.S. Department of Defense).
• Provides a rollback capability, allowing you to roll back a DB system to a state with no configuration
modifications made by the script
• Provides a compliance check capability, allowing you to see how many of the script's rules are successfully passed
by the DB system
Acquiring the STIG Tool
The STIG tool is provided for all newly-provisioned virtual machine DB systems. The STIG tool is provided in the
following OS directory location on virtual machine DB system nodes: /opt/oracle/dcs/bin/dbcsstig
Updated versions of the STIG tool will be available for download from the Oracle Technology Network (OTN).
Updated versions of the STIG tool are also provided as available when you update the DB system agent.


Using the STIG Tool


Use the following syntax for the STIG tool:

dbcsstig --<operation> <category>

For example:

dbcsstig --fix dod

Command Reference

Operations

Operation Parameter Definition


--check, -c Checks for compliance with rules included in specified
category
--fix, -f Applies fixes for rules included in specified category
--rollback, -rb Rolls back system configuration changes implemented
by the STIG tool
--version, -v Provides version information for the STIG tool script
--help, -h Provides command line help information

Rule Categories

Category Parameter Definition


static Used to specify rules included in the base image of the
virtual machine DB system
dod Used to specify rules required for compliance with
DISA's Oracle Linux 7 STIG
runtime Used to specify rules activated after provisioning for
general security hardening
all Used to specify all rules
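
For example, the following hypothetical invocation checks how many DoD-category rules the node currently passes, using the operation and category parameters described above:

dbcsstig --check dod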

Enabling FIPS, SE Linux, and STIG on Bare Metal or Virtual Machine DB System
Components
This topic describes how to add FIPS security enhancements to a bare metal or virtual machine DB system in Oracle
Cloud Infrastructure (OCI). The procedure is performed on each system node, and enables the following:
• Federal Information Processing Standards (FIPS)
• Security Enhanced (SE) Linux
• Security Technical Implementation Guide (STIG) standards

To enable FIPS, SE Linux, and STIG


1. Open an SSH session to the DB system node and switch to the root user, then navigate to /opt/oracle/dcs/
bin:

# sudo -s
# cd /opt/oracle/dcs/bin


2. Run the following command to enable FIPS:

# dbcli secure-dbsystem -se -sd -fo -fd

The system provides details on the enable job:

Job details
----------------------------------------------------------------
ID: <job_ID_number>
Description: Secure DB System
Status: Created
Created: November 8, 2020 4:12:29 PM UTC
Progress: 0%
Message:

Task Name Start Time End Time Status


3. Verify the job details as follows:

# dbcli describe-job -i <job_ID_number>

The system provides information on the progress, status, and details of the enable job. For example:

Job details
----------------------------------------------------------------
ID: <job_ID_number>
Description: Secure DB System
Status: Success
Created: November 8, 2020 4:12:29 PM UTC
Progress: 100%
Message:

Task Name Start Time End Time Status


------------------------------------------------------------------------
----------------------------------- -----------------------------------
----------
Enable SE Linux [<name>] November 8, 2020 4:12:31 PM UTC November 8, 2020
4:12:31 PM UTC Success
Enable STIG for DOD [<name>] November 8, 2020 4:12:31 PM UTC November 8,
2020 4:12:49 PM UTC Success
Enable FIPS for OS [<name>] November 8, 2020 4:12:49 PM UTC November 8,
2020 4:14:43 PM UTC Success
Enable FIPS for DB Home [<DB_home_name_1>] November 8, 2020 4:14:43 PM UTC
November 8, 2020 4:14:43 PM UTC Success
Enable FIPS for DB[<DB_name_1>] November 8, 2020 4:14:43 PM UTC November
8, 2020 4:14:46 PM UTC Success
Enable FIPS for DB Home [<DB_home_name_2>] November 8, 2020 4:14:46 PM UTC
November 8, 2020 4:14:46 PM UTC Success
Enable FIPS for DB[<DB_name_2>] November 8, 2020 4:14:46 PM UTC November
8, 2020 4:14:49 PM UTC Success
4. Once the Job details output shows the Status "Success", you must restart your DB system node using the OCI
Console. This is required because enabling FIPS and SE Linux updates the OS kernel. See To start, stop, or reboot a
database system on page 1384 for instructions.

To check a DB system node for FIPS and SE Linux configurations


To confirm that FIPS and SE Linux are enabled on your DB system node, use the following dbcli command:

# dbcli get-dbsystemsecurestatus


The system returns details as shown in the following example:

{
"isSELinuxEnabledForOS" : true,
"isFipsEnabledForOS" : true,
"fipsStatusForDBs" : [ {
"databaseResId" : "<DB_ID_number>",
"status" : true
} ]
}

External Database Service


You can manage and monitor Oracle Databases that are located outside of Oracle Cloud Infrastructure (OCI) using
OCI's External Database service. External Database allows you to use cloud-based tools such as Database Management
with your external databases.

About the Database Management Service


As a Database Administrator, you can use the Oracle Cloud Infrastructure Database Management service to monitor
and manage your Oracle Databases. Database Management supports Oracle Database versions 11.2.0.4 and later.
Using Database Management you can:
• Monitor the key performance and configuration metrics of your fleet of Oracle Databases. You can also compare
and analyze database metrics over a selected period of time.
• Group your critical Oracle Databases, which reside across compartments, into a Database Group and monitor
them.
• Create SQL jobs to perform administrative operations on a single Oracle Database or a Database Group.
• Use Performance Hub to monitor database performance and diagnose performance issues such as determining the
causes of wait time, performance degradation, and changes in database performance. For detailed information, see
Using Performance Hub to Analyze Database Performance.
For complete documentation on the Database Management service, see Database Management. The rest of the
External Database section of the documentation covers only the creation and management of external database
"handles" and the OCI external database connection resource that allows you connect your external database to a
handle in OCI.

How the External Database Service Works


To manage an external database using OCI's External Database service, you create an OCI resource known as a
"handle" that represents the external database within your tenancy. After creating a handle for your database, you
create a second resource called a database connection. The connection stores the information required for your OCI
tenancy to connect to the external database. After creating the connection resource and connecting the OCI handle
to your external database instance, you can enable the Database Management service to monitor the health and
performance of your database.

The OCI External Database Handle


You can create an OCI external database handle for the following types of external databases:
• External container databases
• External pluggable databases
• External non-container databases
The handle stores a few pieces of metadata that allow you to manage your database instance within OCI. This
metadata includes the following information related to managing the handle in OCI:
• An OCID, which allows the external database instance to be identified and managed within OCI.
• An OCI display name


• Compartment assignment information (optional)


• Tags (optional)
In addition to the OCI-related metadata, the handle stores metadata derived from the database instance. This includes
the database unique name, the Oracle Database software edition and version, and other details. All of this information
stored by the handle can be viewed in the OCI Console or retrieved using the API. Metadata derived from the external
database instance (such as database unique name) is only populated in the handle after a database connection is
established between the handle and the instance.

Scanning an External Container Database to Discover Pluggable Databases


After you create and connect an external container database handle, you can use the handle to scan the external
container database and discover pluggable databases that have not been connected to OCI. If any pluggable databases
are discovered that are not connected to Oracle Cloud Infrastructure, the connection details for these databases are
listed in the work request generated by the scan operation. See To scan an external container database for pluggable
databases on page 1563 for more information.

The OCI Database Connection Resource


The OCI database connection resource stores details about how a specific handle connects to an external Oracle
Database instance. These details include the following:
• Connection strings details (DNS hostname, port, service name, network protocol)
• Connection type and OCI agent ID
• User credentials and role

Prerequisites
To use the External Database service, you will need the following:
• An Oracle Cloud Infrastructure (OCI) tenancy. See Setting Up Your Tenancy on page 123 for information if you
do not currently use OCI.
• One or more external databases located outside of OCI. The External Database service supports container
databases, pluggable databases, and non-container databases that use the following Oracle Database software
versions: 11gR2, 12cR1, 12cR2, 18c, and 19c. You can use the External Database service with database clones
and with high-availability / disaster recovery standby databases.
• A Management Agent Cloud Service agent with source credentials. See the Management Agent documentation for
details on creating this resource in OCI.

Creating External Database Handles


This topic provides information on creating OCI external database handles using the OCI Console and API. See
External Database Service on page 1559 for an overview of the External Database service.
Required IAM Policy
To use Oracle Cloud Infrastructure, you must be granted security access in a policy by an administrator. This access
is required whether you're using the Console or the REST API with an SDK, CLI, or other tool. If you get a message
that you don’t have permission or are unauthorized, verify with your administrator what type of access you have and
which compartment you should work in.
For administrators: The policy in Let database admins manage Oracle Cloud external database resources on page
2159 lets the specified group do everything with databases and related Database resources.
If you're new to policies, see Getting Started with Policies on page 2143 and Common Policies on page 2150. For
more information about writing policies for databases, see Details for the Database Service on page 2251.
Using the Console


To create an OCI external pluggable database resource


This procedure describes the steps you take to create an OCI external pluggable database resource, also called a
"handle". The handle functions as a representation within OCI of an Oracle Database instance located outside of the
Oracle's cloud. Note: This procedure is not used to create an Oracle Database instance outside of Oracle's Cloud.
1. Open the . Under , click .
2. Choose your Compartment.
3. Under External Databases, click Pluggable Databases.
4. Click Create External Pluggable Database.
The Create an external pluggable database dialog opens.
5. Choose a compartment for the external pluggable database.
6. Enter a database display name. The display name is a user-friendly name to help you easily identify the resource.
7. Select an external container database to house the pluggable database.
8. Click Show Advanced Options to specify the following options for the database:
Tags: If you have permissions to create a resource, then you also have permissions to apply free-form tags to
that resource. To apply a defined tag, you must have permissions to use the tag namespace. For more information
about tagging, see Resource Tags on page 213. If you are not sure whether to apply tags, then skip this option (you
can apply tags later) or ask your administrator.
9. Click Create External Pluggable Database.
WHAT NEXT?
• After creating the OCI external pluggable database resource (the handle), you can configure the handle's
connection to the external pluggable database instance. See To create a connection for an OCI external pluggable
database resource on page 1565 for more information.
• After connecting the handle to an external pluggable database instance, you can enable Database Management for
your external pluggable database. See the Database Management documentation for more information.
To create an OCI external container database
This procedure describes the steps you take to create an OCI external container database resource, also called a
"handle". The handle functions as a representation within OCI of an Oracle Database instance located outside of the
Oracle's cloud. Note: This procedure is not used to create an Oracle Database instance outside of Oracle's Cloud.
1. Open the . Under , click .
2. Choose your Compartment.
3. Under External Databases, click Container Databases.
4. Click Create External Container Database.
The Create an external container database dialog opens.
5. Choose a compartment for the external container database.
6. Enter a container database display name. The display name is a user-friendly name to help you easily identify the
resource.
7. Click Show Advanced Options to specify the following options for the database:
Tags: If you have permissions to create a resource, then you also have permissions to apply free-form tags to
that resource. To apply a defined tag, you must have permissions to use the tag namespace. For more information
about tagging, see Resource Tags on page 213. If you are not sure whether to apply tags, then skip this option (you
can apply tags later) or ask your administrator.
8. Click Create External Container Database.
WHAT NEXT?
• After creating the OCI external container database resource (the handle), you can configure the resource's
connection to a container database located outside of OCI. See To create a connection for an OCI external
container database resource on page 1565 for more information.

Oracle Cloud Infrastructure User Guide 1561


Database

• After connecting the handle to an external container database, you can perform a scan of the external container
database to discover pluggable databases. See To scan an external container database for pluggable databases on
page 1563 for more information.
• After connecting the handle to an external container database, you can enable Database Management for your
external container database. See the Database Management documentation for more information.
To create an OCI external non-container database
This procedure describes the steps you take to create an OCI external non-container database resource, also called a
"handle". The handle functions as a representation within OCI of an Oracle Database instance located outside of the
Oracle's cloud. Note: This procedure is not used to create an Oracle Database instance outside of Oracle's Cloud.
1. Open the . Under , click .
2. Choose your Compartment.
3. Under External Databases, click Non-Container Databases.
4. Click Create External Non-Container Database.
The Create an external non-container database dialog opens.
5. Choose a compartment for the external non-container database.
6. Enter a non-container database display name. The display name is a user-friendly name to help you easily identify
the resource.
7. Click Show Advanced Options to specify the following options for the database:
Tags: If you have permissions to create a resource, then you also have permissions to apply free-form tags to
that resource. To apply a defined tag, you must have permissions to use the tag namespace. For more information
about tagging, see Resource Tags on page 213. If you are not sure whether to apply tags, then skip this option (you
can apply tags later) or ask your administrator.
8. Click Create External Non-Container Database.
WHAT NEXT?
• After creating the OCI external non-container database resource (the handle), you can configure the handle's
connection to the external database instance. See To create a connection for an OCI external non-container
database resource on page 1566 for more information.
• After connecting the handle to an external database instance, you can enable Database Management for the
external database. See the Database Management documentation for more information.
Using the API
For information about using the API and signing requests, see REST APIs on page 4409 and Security Credentials on
page 181. For information about SDKs, see Software Development Kits and Command Line Interface on page 4262.
Use these API operations to create OCI external database handles:
• CreateExternalContainerDatabase
• CreateExternalPluggableDatabase
• CreateExternalNonContainerDatabase
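
A minimal sketch of calling one of the operations above with the OCI Python SDK follows. It assumes the SDK's usual mapping of REST operation names to client methods (for example, CreateExternalContainerDatabase to create_external_container_database) and a valid API signing configuration in ~/.oci/config; the compartment OCID and display name are placeholders, not values from this guide.

# Sketch: create an external container database handle with the OCI Python SDK.
import oci

config = oci.config.from_file()                        # default profile in ~/.oci/config
db_client = oci.database.DatabaseClient(config)

details = oci.database.models.CreateExternalContainerDatabaseDetails(
    compartment_id="ocid1.compartment.oc1..example",   # placeholder OCID
    display_name="my-external-cdb-handle",
)

# The handle is created asynchronously; poll its lifecycle_state (or follow the
# associated work request) before creating a connection for it.
response = db_client.create_external_container_database(details)
print(response.data.id, response.data.lifecycle_state)

The external pluggable and non-container variants follow the same pattern with their corresponding details models.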

Managing External Database Handles


This topic provides information on managing Oracle Cloud Infrastructure (OCI) external database handles using
the OCI Console and API. See External Database Service on page 1559 for an overview of the External Database
service.
Note:

See the Database Management documentation for instructions on enabling


Database Management for an external database handle in the OCI Console.

Oracle Cloud Infrastructure User Guide 1562


Database

Required IAM Policy


To use Oracle Cloud Infrastructure, you must be granted security access in a policy by an administrator. This access
is required whether you're using the Console or the REST API with an SDK, CLI, or other tool. If you get a message
that you don’t have permission or are unauthorized, verify with your administrator what type of access you have and
which compartment you should work in.
For administrators: The policy in Let database admins manage Oracle Cloud external database resources on page
2159 lets the specified group do everything with databases and related Database resources.
If you're new to policies, see Getting Started with Policies on page 2143 and Common Policies on page 2150. For
more information about writing policies for databases, see Details for the Database Service on page 2251.
Using the Console

To move an OCI external database handle to another compartment


Note:

For information on using compartments, see Understanding Compartments


on page 123.
1. Open the . Under , click .
2. Choose your Compartment.
3. Under External Databases, click either Pluggable Databases, Container Databases, or Non-Container
Databases, depending on the type of external database handle you are managing.
4. In the list of external database handles, click on the name of the handle you want to move.
5. On the external database details page, click the Move Resource button.
6. In the Move Resource to a Different Compartment dialog, choose a new compartment using the drop-down
selector.
7. Click Move Resource.
To delete an OCI external database handle
This topic describes how to delete the following OCI resources:
• External pluggable database handle
• External container database handle
• External non-container database handle
Note:

• If your external database handle has associated database connection


resources, you must first delete the connections before you can delete the
external database handle.
• For external container databases, you cannot delete the OCI handle until
all of the external pluggable database handles in the container database
handle have first been deleted.
1. Open the . Under , click .
2. Choose your Compartment.
3. Under External Databases, click either Pluggable Databases, Container Databases, or Non-Container
Databases, depending on the type of external database handle you are deleting.
4. In the list of external database handles, click on the name of the handle you want to delete.
5. On the external database details page, click the Delete button.
To scan an external container database for pluggable databases
1. Open the . Under , click .
2. Choose your Compartment.
3. Under External Databases, click Container Databases.

Oracle Cloud Infrastructure User Guide 1563


Database

4. In the list of external container database handles, click on the name of the handle that is connected to the container
database you want to scan.
5. On the external container database details page, click Scan for Pluggable Databases.
If any pluggable databases are discovered that are not connected to Oracle Cloud Infrastructure, the connection
details for these databases are listed in the work request generated by the scan operation.
Using the API
For information about using the API and signing requests, see REST APIs on page 4409 and Security Credentials on
page 181. For information about SDKs, see Software Development Kits and Command Line Interface on page 4262.
Use these API operations to manage external container database resources:
• ListExternalContainerDatabases
• GetExternalContainerDatabase
• ChangeExternalContainerDatabaseCompartment
• ScanExternalContainerDatabasePluggableDatabases
• EnableExternalContainerDatabaseDatabaseManagement
• DisableExternalContainerDatabaseDatabaseManagement
• UpdateExternalContainerDatabase
• DeleteExternalContainerDatabase
Use these API operations to manage external pluggable database resources:
• ListExternalPluggableDatabases
• GetExternalPluggableDatabase
• ChangeExternalPluggableDatabaseCompartment
• EnableExternalPluggableDatabaseDatabaseManagement
• DisableExternalPluggableDatabaseDatabaseManagement
• UpdateExternalPluggableDatabase
• DeleteExternalPluggableDatabase
Use these API operations to manage external non-container database resources:
• ListExternalNonContainerDatabases
• GetExternalNonContainerDatabase
• ChangeExternalNonContainerDatabaseCompartment
• EnableExternalNonContainerDatabaseDatabaseManagement
• DisableExternalNonContainerDatabaseDatabaseManagement
• UpdateExternalNonContainerDatabase
• DeleteExternalNonContainerDatabase
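
The handle-management operations above follow the same SDK pattern. A minimal sketch with the OCI Python SDK, assuming the usual operation-to-method naming and a placeholder compartment OCID, lists the external container database handles in a compartment and reads one back; the scan, compartment-move, enable/disable Database Management, and delete operations are invoked the same way.

# Sketch: list external container database handles and fetch the details of one.
import oci

config = oci.config.from_file()
db_client = oci.database.DatabaseClient(config)

compartment_id = "ocid1.compartment.oc1..example"      # placeholder OCID

# ListExternalContainerDatabases
handles = db_client.list_external_container_databases(compartment_id=compartment_id).data
for handle in handles:
    print(handle.display_name, handle.id, handle.lifecycle_state)

# GetExternalContainerDatabase returns the full resource for a single handle.
if handles:
    cdb = db_client.get_external_container_database(handles[0].id).data
    print(cdb.display_name, cdb.time_created)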

Creating and Managing an External Database Connection


This topic provides information on managing Oracle Cloud Infrastructure (OCI) external database connections using
the OCI Console and API. The external database connection resource allows you to connect an OCI external database
handle to an Oracle Database instance located outside of OCI. See External Database Service on page 1559 for more
information about the External Database service and the database connection resource.
Note:

Currently the External Database service supports only Management Agent


Cloud Service (MACS) agents for creating a connection to your external
databases. Enterprise Manager Cloud Control Agents are not supported at this
time.

Oracle Cloud Infrastructure User Guide 1564


Database

Required IAM Policy


To use Oracle Cloud Infrastructure, you must be granted security access in a policy by an administrator. This access
is required whether you're using the Console or the REST API with an SDK, CLI, or other tool. If you get a message
that you don’t have permission or are unauthorized, verify with your administrator what type of access you have and
which compartment you should work in.
For administrators: The policy in Let database admins manage Oracle Cloud external database resources on page
2159 lets the specified group do everything with databases and related Database resources.
If you're new to policies, see Getting Started with Policies on page 2143 and Common Policies on page 2150. For
more information about writing policies for databases, see Details for the Database Service on page 2251.
Using the Console

To create a connection for an OCI external pluggable database resource


1. Open the . Under , click .
2. Choose your Compartment.
3. Under External Databases, click Pluggable Databases.
4. In the list of OCI external pluggable database resources (also called "handles"), click the display name of the
handle you want to create a connection for.
5. Click Connect to External Pluggable Database.
The Connect to an external pluggable database dialog opens.
6. Enter a connection display name. This is a user-friendly name to help you easily identify the resource.
7. Enter the DNS hostname for the database on your premises that you are connecting to Oracle Cloud.
8. Enter the port being used by the database outside of Oracle Cloud Infrastructure for database connections.
9. Enter the service name for the database outside of Oracle Cloud Infrastructure that will be used by the connection.
10. Enter the connection agent ID. See Management Agent for more information about this Oracle Cloud
Infrastructure feature.
11. Enter the Username for the database credentials that will be used by this connection.
12. Enter the Password for the database credentials that will be used by this connection.
13. Enter a Credential name prefix. This string is the first part of the full credential name. Your prefix is prepended
to a system-generated string to create the full credential name.
14. Enter the Role for the database credentials that will be used by this connection.
15. Click Show Advanced Options to specify the following options for the database:
Tags: If you have permissions to create a resource, then you also have permissions to apply free-form tags to
that resource. To apply a defined tag, you must have permissions to use the tag namespace. For more information
about tagging, see Resource Tags on page 213. If you are not sure whether to apply tags, then skip this option (you
can apply tags later) or ask your administrator.
16. Click Connect to External Pluggable Database.
To create a connection for an OCI external container database resource
1. Open the . Under , click .
2. Choose your Compartment.
3. Under External Databases, click Container Databases.
4. In the list of OCI external container database resources (also called "handles"), click the display name of the
handle you want to create a connection for.
5. Click Connect to External Container Database.
The Connect to an external container database dialog opens.
6. Enter a connection display name. This is a user-friendly name to help you easily identify the resource.
7. Enter the DNS hostname for the database on your premises that you are connecting to Oracle Cloud Infrastructure.
8. Enter the port being used by the database outside of Oracle Cloud Infrastructure for database connections.
9. Enter the service name for the database outside of Oracle Cloud Infrastructure that will be used by the connection.

Oracle Cloud Infrastructure User Guide 1565


Database

10. Enter the connection agent ID. See Management Agent for more information about this Oracle Cloud
Infrastructure feature.
11. Enter the Username for the database credentials that will be used by this connection.
12. Enter the Password for the database credentials that will be used by this connection.
13. Enter a Credential name prefix. This string is the first part of the full credential name. Your prefix is prepended
to a system-generated string to create the full credential name.
14. Enter the Role for the database credentials that will be used by this connection.
15. Click Show Advanced Options to specify the following options for the database:
Tags: If you have permissions to create a resource, then you also have permissions to apply free-form tags to
that resource. To apply a defined tag, you must have permissions to use the tag namespace. For more information
about tagging, see Resource Tags on page 213. If you are not sure whether to apply tags, then skip this option (you
can apply tags later) or ask your administrator.
16. Click Connect to External Container Database.
To create a connection for an OCI external non-container database resource
1. Open the . Under , click .
2. Choose your Compartment.
3. Under External Databases, click Non-Container Databases.
4. In the list of OCI external non-container database resources (also called "handles"), click the display name of the
handle you want to create a connection for.
5. Click Connect to External Non-Container Database.
The Connect to an external non-container database dialog opens.
6. Enter a connection display name. This is a user-friendly name to help you easily identify the resource.
7. Enter the DNS hostname for the database on your premises that you are connecting to Oracle Cloud.
8. Enter the port being used by the database outside of Oracle Cloud Infrastructure for database connections.
9. Enter the service name for the database outside of Oracle Cloud Infrastructure that will be used by the connection.
10. Enter the connection agent ID. See Management Agent for more information about this Oracle Cloud
Infrastructure feature.
11. Enter the Username for the database credentials that will be used by this connection.
12. Enter the Password for the database credentials that will be used by this connection.
13. Enter a Credential name prefix. This string is the first part of the full credential name. Your prefix is prepended
to a system-generated string to create the full credential name.
14. Enter the Role for the database credentials that will be used by this connection.
15. Click Show Advanced Options to specify the following options for the database:
Tags: If you have permissions to create a resource, then you also have permissions to apply free-form tags to
that resource. To apply a defined tag, you must have permissions to use the tag namespace. For more information
about tagging, see Resource Tags on page 213. If you are not sure whether to apply tags, then skip this option (you
can apply tags later) or ask your administrator.
16. Click Connect to External Non-Container Database.
To check the connection status of an external database connection
1. Open the . Under , click .
2. Choose your Compartment.
3. Under External Databases, click either Pluggable Databases, Container Databases, or Non-Container
Databases, depending on the type of external database you are using.
4. In the list of external database handles, click the name of the handle you want to check the connection status of.
5. On the Database Details page, under Resources, click Connections.
6. In the list of database connections, click the name of the connection you want to check the status of.
7. Click Check Connection Status. A "Check Connection Status" work request is created. Click on the work request
name to see details of the connection status.

Oracle Cloud Infrastructure User Guide 1566


Database

To update the connection credentials of an external database handle


1. Open the . Under , click .
2. Choose your Compartment.
3. Under External Databases, click either Pluggable Databases, Container Databases, or Non-Container
Databases, depending on the type of external database handle connection you are updating.
4. In the list of external database handles, click on the name of the handle associated with the connection you want to
update.
5. On the external database details page, under Resources, click Connections.
6. In the list of connections, click the name of the connection you want to update.
7. On the External Connection Details page, click Update Connection Credentials.
8. In the Update credentials dialog, enter the following information:
• Username
• Password
• Role
9. Click Update Credentials.
To update the connection strings of an external database handle
1. Open the . Under , click .
2. Choose your Compartment.
3. Under External Databases, click either Pluggable Databases, Container Databases, or Non-Container
Databases, depending on the type of external database handle connection you are updating.
4. In the list of external database handles, click on the name of the handle associated with the connection you want to
update.
5. On the external database details page, under Resources, click Connections.
6. In the list of connections, click the name of the connection you want to update.
7. On the External Connection Details page, click Update Connection Strings.
8. In the Update connection strings dialog, enter the following information:
• DNS hostname
• Port
• Service
9. Click Update Connection Strings.
To delete an external database connection
1. Open the . Under , click .
2. Choose your Compartment.
3. Under External Databases, click either Pluggable Databases, Container Databases, or Non-Container
Databases, depending on the type of external database handle whose connection you are deleting.
4. In the list of external database handles, click on the name of the handle associated with the connection you want to
delete.
5. On the external database details page, under Resources, click Connections.
6. In the list of connections, click the name of the connection you want to delete.
7. On the External Connection Details page, click Delete.
Using the API
For information about using the API and signing requests, see REST APIs on page 4409 and Security Credentials on
page 181. For information about SDKs, see Software Development Kits and Command Line Interface on page 4262.
Use these API operations to create and manage external database connections:
• CreateExternalDatabaseConnector
• ListExternalDatabaseConnectors
• GetExternalDatabaseConnector

Oracle Cloud Infrastructure User Guide 1567


Database

• CheckExternalDatabaseConnectorConnectionStatus
• UpdateExternalDatabaseConnector
• DeleteExternalDatabaseConnector

Oracle Database Software Images


This topic provides an overview of the database software image resource type, which you can use to create databases
and Oracle Database Homes, and to patch databases. Database software images give you the ability to create a
customized Oracle Database software configuration that includes your chosen updates (PSU, RU or RUR), and
optionally, a list of one-off (or interim) patches or an Oracle Home inventory file. This reduces the time required to
provision and configure your databases, and makes it easy for your organization to create an approved "gold image"
for developers and database administrators.

Using Database Software Images in Oracle Cloud Infrastructure

Creation and Storage of Database Software Images


Database software images are resources within your tenancy that you create prior to provisioning or patching a DB
system, Exadata Cloud Service instance, Database Home, or database. There is no limit on the number of database
software images you can create in your tenancy, and you can create your images with any Oracle Database software
version and update supported in Oracle Cloud Infrastructure.
Database software images are automatically stored in Oracle-managed Object Storage and can be viewed and
managed in the Oracle Cloud Infrastructure Console. Note that database software images incur Object Storage usage
costs. Database software images are regional-level resources and can be accessed from any availability domain within
their region.
See To create a database software image on page 1569 for information on creating an image.

Using a Database Software Image with a Bare Metal or Virtual Machine DB System
Provisioning: After you create a database software image, you can use it to provision the initial database in a new
bare metal or virtual machine DB system, or to provision a new database in an existing bare metal DB system. For
more information, see the following topics:
• To create a DB system on page 1373
• To create a new database in an existing DB system on page 1416
Patching: You can use a database software image to update the database software of an existing virtual machine or
bare metal database in Oracle Cloud Infrastructure. This is sometimes referred to as in-place patching. See To patch
a database on page 1423 for information on using a custom database software image to patch a database in a bare
metal or virtual machine DB system. To determine if a database has been patched with a particular database software
image, follow the instructions in To view the patch history of a database on page 1423. For Oracle Data Guard
associations, you can use a custom database software image for in-place patching on both the primary and standby
database instances to ensure that both databases have the same patches.

Using a Database Software Image with an Exadata Cloud Service Instance


Provisioning: After you create a database software image, you can use it to create an Oracle Database Home in an
Exadata Cloud Service instance. For more information, see To create a new Database Home in an existing Exadata
Cloud Service instance on page 1331.
Patching: To patch a database in an Exadata Cloud Service instance using a custom database software image, create
the Database Home using the image, and then move the database to that Database Home. For more information, see
Patching Individual Oracle Databases in an Exadata Cloud Service Instance on page 1288.
Setting up Data Guard: When creating an Oracle Data Guard association, you can use a custom database software
image to create a new Database Home for the new standby database. For more information, see To enable Oracle
Data Guard on an Exadata Cloud Service instance on page 1343.

Oracle Cloud Infrastructure User Guide 1568


Database

Required IAM Policy


To use Oracle Cloud Infrastructure, you must be granted security access in a policy by an administrator. This access
is required whether you're using the Console or the REST API with an SDK, CLI, or other tool. If you get a message
that you don’t have permission or are unauthorized, verify with your administrator what type of access you have and
which compartment you should work in.
For administrators: The policy in Let database admins manage Oracle Cloud database systems on page 2158 lets the
specified group do everything with databases and related Database resources.
If you're new to policies, see Getting Started with Policies on page 2143 and Common Policies on page 2150. If
you want to dig deeper into writing policies for databases, see Details for the Database Service on page 2251.

Using the Console

To create a database software image


1. Open the navigation menu. Under Oracle Database, click Bare Metal, VM, and Exadata.
2. Under Resources, click Database Software Images.
3. Click Create Database Software Image.
4. In the Display name field, provide a display name for your image. Avoid entering confidential information.
5. Choose your Compartment.
6. Choose a Shape family. A custom database software image is compatible with only one shape family. Available
shape families are the following:
• Bare metal and virtual machine DB systems
• Exadata Cloud Service instances
7. Choose the Database version for your image.
8. Choose the patch set update, proactive bundle patch, or release update. For information on Oracle Database
patching models, see Release Update Introduction and FAQ (Doc ID 2285040.1).
9. Optionally, you can enter a comma-separated list of one-off (interim) patch numbers.
10. Optionally, you can upload an Oracle Home inventory file from an existing Oracle Database.
11. Click Show Advanced Options to add tags to your database software image. To apply a defined tag, you must
have permissions to use the tag namespace. For more information about tagging, see Resource Tags on page 213.
If you are not sure if you should apply tags, skip this option (you can apply tags later) or ask your administrator.
12. Click Create Database Software Image.
To view the patch information of a database software image
To view the Oracle Database version, update information (PSU/BP/RU level) and included one-off (interim) patches
of a database software image, use the following instructions:
1. Open the navigation menu. Under Oracle Database, click Bare Metal, VM, and Exadata.
2. Under Resources, click Database Software Images.
3. In the list of database software images, find the image you want to view and click on the display name of the
image.
4. On Database Software Image Details page for your selected image, details about the image are displayed:
• The Oracle Database version is displayed in the General Information section. For example: 19.0.0.0
• The PSU/BP/RU field of the Patch Information section displays the update level for the image. For example:
19.5.0.0
• The One-Off Patches field displays the number of one-off patches included in the image, if any. The count
includes all patches specified when creating the image (excluding patches listed in lsinventory). To view the
included patches (if any are included), click the Copy All link and paste the list of included patches into a text
editor. The copied list of patch numbers is comma-separated and can be used to create additional database
software images.
To delete a database software image

Oracle Cloud Infrastructure User Guide 1569


Database

1. Open the navigation menu. Under Oracle Database, click Bare Metal, VM, and Exadata.
2. Under Resources, click Database Software Images.
3. In the list of database software images, find the image you want to delete and click the action icon (three dots) at
the end of the row.
4. Click Delete.

Using the API


For information about using the API and signing requests, see REST APIs on page 4409 and Security Credentials on
page 181. For information about SDKs, see Software Development Kits and Command Line Interface on page 4262.
Use these API operations to manage database software images:
• CreateDatabaseSoftwareImage
• ListDatabaseSoftwareImages
• GetDatabaseSoftwareImage
• DeleteDatabaseSoftwareImage
• ChangeDatabaseSoftwareImageCompartment
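
For reference, a hedged sketch of CreateDatabaseSoftwareImage through the OCI Python SDK follows. The field names (patch_set, image_shape_family, and the one-off patch list) are assumptions based on the REST API and should be confirmed in the SDK reference; the patch numbers and other values are illustrative only.

# Sketch: create a custom database software image. Field names and enum values are
# assumptions based on the REST API shapes; confirm them against the current SDK reference.
import oci

config = oci.config.from_file()
db_client = oci.database.DatabaseClient(config)

image_details = oci.database.models.CreateDatabaseSoftwareImageDetails(
    compartment_id="ocid1.compartment.oc1..example",    # placeholder OCID
    display_name="19c-ru-gold-image",
    database_version="19.0.0.0",
    patch_set="19.5.0.0",                                # PSU/BP/RU level for the image
    database_software_image_one_off_patches=[            # optional interim patches (illustrative)
        "12345678",
        "23456789",
    ],
    image_shape_family="VM_BM_SHAPE",                    # or the Exadata shape family value
)

image = db_client.create_database_software_image(image_details).data
print(image.id, image.lifecycle_state)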

Oracle Maximum Availability Architecture in Oracle Cloud Infrastructure


Oracle Maximum Availability Architecture is a set of best practices developed by Oracle engineers over many years
for the integrated use of Oracle High Availability technologies.

Oracle Maximum Availability Architecture and Autonomous Database Cloud


High availability is suitable for all development, test, and production databases that have high uptime requirements
and low data loss tolerance. By default, Autonomous Databases are highly available, incorporating a multi-
node configuration to protect against localized hardware failures that do not require fast disaster recovery. Each
Autonomous Database application service resides in at least one Oracle Real Application Clusters (Oracle RAC)
instance with the option to fail over to another available Oracle RAC instance using Autonomous Data Guard
for unplanned outages or planned maintenance activities, resulting in zero or near-zero downtime. Autonomous
Database's automatic backups are stored in Oracle Cloud Infrastructure Object Storage and replicated to another
availability domain, and can be restored in the event of a disaster. Major database upgrades, however, require
downtime.
The uptime service-level objective per month is 99.95% (a maximum of 22 minutes of downtime per month),
but when you use Maximum Availability Architecture best practices for continuous service, most months would
effectively have zero downtime. The uptime service-level objective does not include downtime due to customer-
initiated high availability tests, disaster recovery (such as an availability domain or regional outage), database
corruptions, or downtime due to planned maintenance that cannot be done online or through an Oracle RAC rolling
update solution, such as major database upgrades from one release to another.
The following table describes the recovery-time objectives and recovery-point objectives (data loss tolerance) for
service-level objectives.

Oracle Cloud Infrastructure User Guide 1570


Database

Default High Availability Policy Recovery Time (RTO) and Recovery Point (RPO) Service-level
Objectives

Failure and Maintenance Events: Localized events, including:
• Exadata cluster network topology failures
• Storage (disk and flash) failures
• Database instance failures
• Database server failures
• Periodic software and hardware maintenance updates
Database Downtime: Zero
Service-Level Downtime (RTO): Near-zero
Potential Service-Level Data Loss (RPO): Zero

Failure and Maintenance Events: Events requiring restoring from backup because standby database does not exist:
• Data corruptions
• Full database failures
• Complete storage failures
• Availability domain or region failures
Database Downtime: Minutes to hours
Service-Level Downtime (RTO): Minutes to hours
Potential Service-Level Data Loss (RPO): 15 minutes

In the preceding table, the amount of downtime for events requiring restoring from a backup varies due to the nature
of the failure. In the most optimistic case, a physical block corruption is detected and the block is repaired with block
media recovery in minutes. In this case, only a small portion of the database is affected with zero data loss.
In a more pessimistic case, an availability domain or data region fails, and a new cluster must be provisioned and
restored with the latest database backup, including all archives, and a complete database recovery must be run. Data
loss is limited by the last successful archive log backup, the frequency of which is every 15 minutes, by default, and
includes a log switch and subsequent archive log backup of any redo that has not been backed up to Oracle Cloud
Infrastructure Object Storage. Data loss can be seconds or, at worst, around 15 minutes.

Autonomous Data Guard for Autonomous Databases on Dedicated Exadata Infrastructure


Enable Autonomous Data Guard for mission-critical production databases that have more strict uptime requirements
than databases with the default high-availability configuration and limited data-loss tolerance considering a wider
range of potential problems, such as data corruption and database and regional site failures. Enabling Autonomous
Data Guard adds one symmetric standby database with Oracle Data Guard to an Exadata rack that is located in
another availability domain or in another region.
The primary and standby database systems are configured symmetrically to ensure that performance service levels
are maintained after Data Guard role transitions. Oracle Data Guard features asynchronous redo transport (maximum
performance mode) within the same region across availability domains, or across regions, by default. If zero data loss
is required, then you can change to synchronous redo transport (maximum availability mode).
As with databases that are not Data Guard-enabled, each Autonomous Database application service resides in at least
one Oracle RAC instance and will automatically fail over to another available Oracle RAC instance, as previously
described. The standby database provides expanded application services to offload reporting, queries, and some

Oracle Cloud Infrastructure User Guide 1571


Database

updates. The Database Backup Cloud Service schedules automated backups, which are stored in Oracle Cloud
Infrastructure Object Storage and replicated to another availability domain. Those backups can be used to restore
databases in the event of a double disaster where both primary and standby databases are lost.
Local and remote virtual cloud network peering provides a secure, high-bandwidth network across availability
domains and regions for any traffic between primary and standby servers.
The uptime service-level objective per month is 99.995% (maximum 132 seconds of downtime per month) and
recovery-time objectives (downtime) and recovery-point objectives (data loss) are low, as described in the subsequent
table, when a manual failover is initiated. When you use Maximum Availability Architecture best practices for
continuous service, most months would have an effective downtime of zero. The uptime service-level objective does
not include downtime as a result of user-initiated high availability tests, user-initiated Data Guard switchover tests, or
the time it takes to initiate a manual Data Guard failover.
Users can choose whether their database failover site is located in a different availability domain within the same
region or in a different region, contingent upon application or business requirements, and data center availability.

Autonomous Data Guard Recovery Time (RTO) and Recovery Point (RPO) Service-level Objectives

Failure and Maintenance Events: Localized events, including:
• Exadata cluster network fabric failures
• Storage (disk and flash) failures
• Database instance failures
• Database server failures
• Periodic software and hardware maintenance updates
Service-Level Downtime (RTO): Near zero
Potential Service-Level Data Loss (RPO): Zero

Failure and Maintenance Events: Events requiring failover to the standby database using Autonomous Data Guard-enabled dedicated Autonomous Databases, including:
• Data corruptions (because Data Guard has automatic block repair for physical corruptions, a failover operation is required only for logical corruptions or extensive data corruptions)
• Full database failures
• Complete storage failures
• Availability domain or region failures
Service-Level Downtime (RTO): Few seconds to two minutes
Potential Service-Level Data Loss (RPO):
• Zero for maximum availability protection mode (uses synchronous redo transport). Most commonly used for intra-region standby databases.
• Near zero for maximum performance protection mode (uses asynchronous redo transport). Most commonly used for cross-region standby databases.

Maintaining Application Uptime


Ensure that network connectivity to Oracle Cloud Infrastructure is reliable so that you can access your tenancy's
Autonomous Database resources.
Follow the guidelines in the Continuous Availability: Best Practices for Applications Using Autonomous Database -
Dedicated and Application Continuity: MAA Checklist for Preparation white papers to experience application-level
service uptime similar to that of the database uptime.

Oracle Cloud Infrastructure User Guide 1572


Database

Oracle Maximum Availability Architecture in Exadata DB Systems


Oracle Maximum Availability Architecture in Oracle Cloud Infrastructure provides inherent high availability, data
protection, and disaster recovery protection integrated with both cloud automation and lifecycle operations, enabling
Oracle Cloud Infrastructure to be the best cloud solution for enterprise databases and applications.

Oracle Maximum Availability Architecture Benefits in Oracle Cloud


• Deployment: Oracle deploys Exadata in Oracle Cloud Infrastructure using Oracle Maximum Availability
Architecture best practices, including configuration best practices for storage, network, operating system, Oracle
Grid Infrastructure, and Oracle Database. Exadata is optimized to run enterprise Oracle Databases with extreme
scalability and availability.
• Oracle Maximum Availability Architecture database templates: All cloud databases created with Oracle
Cloud automation use Oracle Maximum Availability Architecture settings optimized for the Exadata in Oracle
Cloud. Oracle does not recommend that you use custom scripts to create cloud databases.
• Backup and restore automation: When you configure automatic backup to Oracle Cloud Infrastructure Object
Storage, backup copies exist across multiple availability domains for additional protection, and RMAN validates
cloud database backups for any physical corruptions. Database backups occur daily with a full backup occurring
once per week and incremental backups occurring on all other days. Archive log backups occur frequently to
reduce potential data loss in case of disaster.
• Exadata inherent benefits: Exadata is the best Oracle Maximum Availability Architecture platform that Oracle
offers, engineered with hardware, software, database, and availability innovations to support the most mission-
critical enterprise applications. Specifically, Exadata provides unique high availability, data protection, and
quality-of-service capabilities that set Oracle apart from any other platforms or cloud vendor.
For a comprehensive list of Oracle Maximum Availability Architecture benefits for Exadata DB systems, see
Exadata Database Machine: Maximum Availability Architecture Best Practices and Deploying Oracle Maximum
Availability Architecture with Exadata Database Machine. Examples of these benefits include:
• High availability and low brownout: Fully-redundant, fault-tolerant hardware exists in the storage, network,
and database servers. Resilient, highly-available software, such as Oracle Real Application Clusters (Oracle
RAC), Oracle Clusterware, Oracle Database, Oracle Automatic Storage Management, Oracle Linux, and
Oracle Exadata Storage Server enable applications to maintain application service levels through unplanned
outages and planned maintenance events. For example, Exadata has instant failure detection that can detect
and repair database node, storage server, and network failures in less than two seconds, and resume application
and database service uptime and performance. Other platforms can experience 30 seconds, or even minutes,
of blackout and extended application brownouts for the same type of failures. Only the Exadata platform
offers a wide range of unplanned outage and planned maintenance tests to evaluate end-to-end application and
database brownouts and blackouts.
• Data protection: Exadata provides Oracle Database physical and logical block corruption prevention,
detection, and, in some cases, automatic remediation. The Exadata Hardware Assisted Resilient Data (HARD)
checks include support for server parameter files, control files, log files, Oracle data files, and Oracle Data
Guard broker files when those files are stored in Exadata storage. This intelligent Exadata storage validation
stops corrupted data from being written to disk when a HARD check fails, which eliminates a large class of
failures that the database industry had previously been unable to prevent. Examples of the Exadata HARD
checks include:
• Redo and block checksum
• Correct log sequence
• Block type validation
• Block number validation
• Oracle data structures, such as block magic number, block size, sequence number, and block header and
tail data structures
Exadata HARD checks initiate from Exadata storage software (cell services) and work transparently after
enabling a database DB_BLOCK_CHECKSUM parameter, which is enabled by default in the cloud. Exadata is
the only platform that currently supports the HARD initiative. Furthermore, Oracle Exadata Storage Server
provides non-intrusive, automatic hard disk scrub and repair. This feature periodically inspects and repairs

Oracle Cloud Infrastructure User Guide 1573


Database

hard disks during idle time. If bad sectors are detected on a hard disk, then Oracle Exadata Storage Server
automatically sends a request to Oracle Automatic Storage Management to repair the bad sectors by reading
the data from another mirror copy. Finally, Exadata and Oracle Automatic Storage Management can detect
corruptions as data blocks are read into the buffer cache and automatically repair data corruption with a good
copy of the data block on a subsequent database write. This inherent intelligent data protection makes Exadata
and Exadata Cloud the best data protection storage platform for Oracle Databases. For comprehensive data
protection, a Maximum Availability Architecture best practice is to use a standby database on a separate
Exadata to detect, prevent, and automatically repair corruptions that cannot be addressed by Exadata, alone.
The standby database also minimizes downtime and data loss for disasters that result from site, cluster, and
database failures.
• Response time quality of service: Only Exadata has end-to-end quality-of-service capabilities to ensure
that response time remains low and optimum. Database server I/O capping and Exadata storage I/O latency
capping ensures that read or write I/O can be redirected to partnered cells when response time exceeds
a certain threshold. If storage becomes unreliable (but not failed) because of poor and unpredictable
performance, then the disk or flash cache can be quarantined, offline, and later brought back online if
heuristics show that I/O performance is back to acceptable levels. Resource management can help prioritize
key database network or I/O functionality, so that your application and database perform at an optimized
level. For example, database log writes get priority over backup requests on the Exadata network and storage.
Furthermore, rapid response time is maintained during storage software updates by ensuring that partner flash
cache is warmed so flash misses are minimized.
• End-to-end testing and holistic health checks: Because Oracle owns the entire Exadata Cloud infrastructure,
end-to-end testing and optimizations benefit every Exadata customer around the world, whether hosted on
premise or in the cloud. Validated optimizations and fixes required to run any mission-critical system are
uniformly applied after rigorous testing. Health checks are designed to evaluate the entire stack. The Exadata
health check utility EXACHK is Exadata cloud-aware and highlights any configuration and software alerts that
may have occurred because of customer changes. No other cloud platform currently has this kind of end-to-end
health check available. For Oracle Autonomous Database, EXACHK runs automatically to evaluate Maximum
Availability Architecture compliance. For non-autonomous databases, Oracle recommends running EXACHK
at least once a month, and before and after any software updates, to evaluate any new best practices and alerts.
• Oracle Maximum Availability Architecture best practices paper: Oracle Maximum Availability Architecture
engineering collaborates with Oracle Cloud teams to integrate Oracle Maximum Availability Architecture
practices that are optimized for Oracle Cloud Infrastructure and security. See MAA Best Practices for the Oracle
Cloud for additional information about continuous availability, Oracle Data Guard, Hybrid Data Guard, Oracle
GoldenGate, and other Maximum Availability Architecture-related topics.
The following table lists various software updates and the impacts associated with those updates on databases and
applications.

Software Update: Network
Database Impact: Zero downtime
Application Impact: Zero to single-digit seconds
Implementation: Performed by Oracle Cloud

Software Update: Storage cells
Database Impact: Zero downtime
Application Impact: Zero to single-digit seconds
Implementation: Performed by Oracle Cloud

Software Update: Exadata Dom0
Database Impact: Zero downtime with Oracle RAC rolling updates
Application Impact: Zero downtime
Implementation: Performed by Oracle Cloud

Software Update: Exadata DomU
Database Impact: Zero downtime with Oracle RAC rolling updates
Application Impact: Zero downtime
Implementation: Performed by Oracle Cloud for Autonomous Database; performed by customer using cloud-assisted tools for non-Autonomous Database

Software Update: Oracle Database quarterly update or patch
Database Impact: Zero downtime with Oracle RAC rolling updates
Application Impact: Zero downtime
Implementation: Performed by Oracle Cloud for Autonomous Database; performed by customer using cloud-assisted tools for non-Autonomous Database

Software Update: Oracle Grid Infrastructure quarterly update, patch, or upgrade
Database Impact: Zero downtime with Oracle RAC rolling updates
Application Impact: Zero downtime
Implementation: Performed by Oracle Cloud for Autonomous Database; performed by customer using cloud-assisted tools for non-Autonomous Database

Software Update: Oracle Database upgrade
Database Impact: Minimal downtime with DBMS_ROLLING, Oracle GoldenGate replication, or with pluggable database relocate
Application Impact: Minimal downtime with DBMS_ROLLING, Oracle GoldenGate replication, or with pluggable database relocate
Implementation: Not applicable for Autonomous Database; performed by customer using generic Maximum Availability Architecture best practices

Achieving Continuous Availability for your Applications


As part of Exadata Cloud, all software updates (except for non-rolling database upgrades) can be done online or with
Oracle RAC rolling updates to achieve continuous database uptime. Furthermore, any local failures of the storage,
Exadata network, or Exadata database server are managed, automatically, and database uptime is maintained.
To achieve continuous application uptime during Oracle RAC switchover or failover events, follow these application-
configuration best practices:
• Use non-default Oracle Clusterware-managed services to connect your application.
• Use the recommended connection string with built-in timeouts, retries, and delays, so that incoming connections do
not see errors during outages (an illustrative connect descriptor follows this section).
• Configure your connections with Fast Application Notification.
• Drain and relocate services prior to any planned maintenance outage on Exadata that requires restarting any of
the Oracle RAC instances. Software updates to Exadata Dom0 or DomU are automatic. For Oracle Database and
Oracle Grid Infrastructure software updates, Exadata Cloud-assisted tools and Autonomous Database drain and
relocate services automatically.
• Leverage Application Continuity or Transparent Application Continuity to replay in-flight uncommitted
transactions transparently after failures.
For more information, see the Continuous Availability: Best Practices for Applications Using Autonomous Database -
Dedicated and Application Continuity: MAA Checklist for Preparation white papers for guidance on achieving
application-level service uptime similar to that of the database uptime.
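
To make the recommended connection string concrete, here is an illustrative connect descriptor in the style suggested by Maximum Availability Architecture guidance. The host, service name, and timeout values are placeholder assumptions rather than values taken from this guide; tune them according to the white papers referenced above.

exampledb_svc =
 (DESCRIPTION =
   (CONNECT_TIMEOUT = 90)(RETRY_COUNT = 50)(RETRY_DELAY = 3)
   (TRANSPORT_CONNECT_TIMEOUT = 3)
   (ADDRESS_LIST =
     (LOAD_BALANCE = ON)
     (ADDRESS = (PROTOCOL = TCP)(HOST = prod-scan.example.com)(PORT = 1521)))
   (CONNECT_DATA = (SERVICE_NAME = myapp_svc.example.com)))

The retry and timeout settings allow new connections to wait through a brief Oracle RAC instance failover or rolling update instead of surfacing errors to the application.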

Oracle Maximum Availability Architecture Reference Architectures in the Exadata Cloud


Exadata Cloud supports all four Oracle Maximum Availability Architecture reference architectures, providing support
for all Oracle Databases, regardless of their specific high availability, data protection, and disaster recovery service-
level agreements. See MAA Best Practices for the Oracle Cloud for more information about Oracle Maximum
Availability Architecture in the Exadata Cloud.

Oracle Cloud Infrastructure User Guide 1575


Database

Security Zone Integration


This topic describes the Database service's support of security zones. Security zones are compartments in your
tenancy created with a set of security policies called a security recipe. This topic concentrates on the Oracle-
managed Maximum Security Recipe, which provides the highest level of protection for your Database resources. The
policies of a particular security recipe are applied to any resource that is provisioned or moved into a security zone
compartment that uses the recipe. Thus, the only way to apply security zone policies is to control the compartment
assignments of your Oracle Cloud Infrastructure resources.
For a complete overview of security zones, see the Security Zone section of the Oracle Cloud Infrastructure user
guide.

Restrictions on Database Service Resources Located in Maximum Security Recipe Compartments


The Maximum Security Recipe includes all available security zone policies. For example, restrictions placed on
databases in Maximum Security Recipe compartments include:
• The database cannot allow public network access
• The database must have automatic backups enabled
• The database cannot have Data Guard associations that aren't located in security zone compartments
For a complete list of the Database restrictions implemented by the Maximum Security Recipe, see the Security Zone
Policies topic.

Supported Database Service Resources


The following Database service resources can be provisioned and managed in security zones that use the Maximum
Security Recipe:
• Autonomous Database: Databases using dedicated Exadata infrastructure and using shared Exadata infrastructure
with private endpoint access
• Bare metal and virtual machine DB systems
• Exadata Cloud DB systems
Always Free Autonomous Databases, Autonomous Databases configured with public endpoints, and the Exadata
Cloud@Customer service are not compatible with Maximum Security Recipe compartments.

DB System Time Zone


The Time Zone field in the Console and in the API allows you to launch a bare metal, virtual machine, or Exadata DB
system with a time zone other than UTC (the default). Although UTC is the recommended time zone to use, having a
common time zone for your database clients and application hosts can simplify management and troubleshooting for
the database administrator.
The time zone that you specify when you create the DB system applies to the host and to the Oracle Grid
Infrastructure (if the system has Grid Infrastructure), and controls the time zone of the database log files. The time
zone of the database itself is not affected; however, the database's time zone affects only the timestamp datatype. You
can change the database time zone manually but Oracle recommends that you keep it as UTC (the default) to avoid
data conversion and improve performance when data is transferred among databases. This configuration is especially
important for distributed databases, replication, and export and import operations.

Time Zone Options


Whether you use the Console or the API, the time zone options you can select from are represented in the named
region format, for example, America/Los_Angeles. The Console allows you to select UTC, the time zone detected in
your browser (if your browser supports time zone detection), or an alternate time zone.
To specify an alternate time zone (the Select another time zone option), you first select a value in the Region or
country field to narrow the list of time zones to select from in the Time zone field. In the America/Los_Angeles

Oracle Cloud Infrastructure User Guide 1576


Database

example, America is the time region and Los_Angeles is the time zone. The options you see in these two fields
roughly correlate with the time zones supported in both the java.util.TimeZone class and on the Linux operating
system. If you do not see the time zone you are looking for, try selecting "Miscellaneous" in the Region and country
field.
Tip:

If you are using the API and would like to see a list of supported time zones,
you can examine the time zone options in the Console. These options appear
on the Create DB System page when you show advanced options after you
select a DB system shape.

Changing Time Zones After Provisioning


Follow these steps if you need to change the time zone of the DB system host, Oracle Grid Infrastructure, or database,
after you launch the DB system:
To change the time zone of the host on DB systems that use Grid Infrastructure
1. Log on to the host system as root.
2. Stop the CRS stack on all of the compute nodes.

#Grid_Home/bin/crsctl stop crs


3. Run the following commands to check the current time zone and to change it to the time zone you choose:

$ cat /etc/sysconfig/clock
ZONE="America/New_York"
$ cp -p /etc/sysconfig/clock /etc/sysconfig/clock.20160629

$ vi /etc/sysconfig/clock
ZONE="Europe/Berlin"

$ date
Wed Jun 29 10:35:17 EDT 2016
$ ln -sf /usr/share/zoneinfo/Europe/Berlin /etc/localtime
$ date
Wed Jun 29 16:35:27 CEST 2016

In this example, the time zone was changed from America/New_York to Europe/Berlin.
Tip:

To see a list of valid time zones on the host, you can run the ls -l /
usr/share/zoneinfo command.
4. (Optional) On an Exadata DB system, you can verify that /opt/oracle.cellos/cell.conf indicates
the correct time zone. Using our example, the time zone entry in this file would be <Timezone>Europe/
Berlin</Timezone>.
5. Restart the CRS stack on all of the compute nodes.

#Grid_Home/bin/crsctl start crs

To change the time zone of the host on DB systems that use Logical Volume Manager
Use this procedure for Fast Provisioned virtual machine DB systems, which use Logical Volume Manager instead of
Grid Infrastructure for storage management.
1. Log on to the host system as root.

Oracle Cloud Infrastructure User Guide 1577


Database

2. Stop the database and the listener on all of the compute nodes.

#sqlplus / as sysdba
SQL> shutdown immediate
#lsnrctl stop
3. Stop all other running processes from the Oracle Database Home.
4. Run the following commands to check the current time zone and to change it to the time zone you choose:

$ cat /etc/sysconfig/clock
ZONE="America/New_York"
$ cp -p /etc/sysconfig/clock /etc/sysconfig/clock.20160629

$ vi /etc/sysconfig/clock
ZONE="Europe/Berlin"

$ date
Wed Jun 29 10:35:17 EDT 2016
$ ln -sf /usr/share/zoneinfo/Europe/Berlin /etc/localtime
$ date
Wed Jun 29 16:35:27 CEST 2016

In this example, the time zone was changed from America/New_York to Europe/Berlin.
Tip:

To see a list of valid time zones on the host, you can run the ls -l /
usr/share/zoneinfo command.
5. As the oracle user, restart the listener and the database on all of the compute nodes.

lsnrctl start
sqlplus / as sysdba
startup

To change the time zone of the Oracle Grid Infrastructure


The time zone of the Oracle Grid Infrastructure determines the time zone of the database log files.
You can change this time zone by updating the TZ property in the GRID_HOME/crs/install/
s_crsconfig_<node_name>_env.txt configuration file.
Note:

This procedure does not apply to Fast Provisioned virtual machine DB


systems, which use Logical Volume Manager instead of Grid Infrastructure
for storage management.
1. Ensure that you are logged onto the host as root and that the CRS stack is stopped on all of the compute nodes.
See To change the time zone of the host on DB systems that use Grid Infrastructure on page 1577.
2. Inspect the current time zone value in the GRID_HOME/crs/install/
s_crsconfig_<node_name>_env.txt file.

$ cat /u01/app/19.0.0.0/grid/crs/install/s_crsconfig_node1_env.txt
#########################################################################
#This file can be used to set values for the NLS_LANG and TZ environment
#variables and to set resource limits for Oracle Clusterware and
#Database processes.
#1. The NLS_LANG environment variable determines the language and
# characterset used for messages. For example, a new value can be
# configured by setting NLS_LANG=JAPANESE_JAPAN.UTF8
#2. The Time zone setting can be changed by setting the TZ entry to
# the appropriate time zone name. For example, TZ=America/New_York

#3. Resource limits for stack size, open files and number of processes
# can be specified by modifying the appropriate entries.
#
#Do not modify this file except as documented above or under the
#direction of Oracle Support Services.
#########################################################################
TZ=UTC
NLS_LANG=AMERICAN_AMERICA.WE8ISO8859P1
CRS_LIMIT_STACK=2048
CRS_LIMIT_OPENFILE=65536
CRS_LIMIT_NPROC=16384
TNS_ADMIN=

In this example, the time zone is set to UTC.


3. Modify the time zone value, as applicable. Perform this task for all nodes in the cluster.
4. Restart the CRS stack on all of the compute nodes.

#Grid_Home/bin/crsctl start crs

For more information about changing the time zone of the Grid Infrastructure, see How To Change Timezone for
Grid Infrastructure (Doc ID 1209444.1).
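
As an illustration of step 3, the TZ entry can be updated in place with sed. This is a sketch only, using the Grid home path and node name from the example above (adjust both for each node), and it backs up the file first:

# Run as root on each node; path and node name follow the example above.
ENV_FILE=/u01/app/19.0.0.0/grid/crs/install/s_crsconfig_node1_env.txt
cp -p "$ENV_FILE" "$ENV_FILE.$(date +%Y%m%d)"
# Replace the existing TZ entry with the new time zone.
sed -i 's|^TZ=.*|TZ=Europe/Berlin|' "$ENV_FILE"
# Verify the change before restarting the CRS stack.
grep '^TZ=' "$ENV_FILE"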
To change the time zone of a database
Use the ALTER DATABASE SET TIME_ZONE command to change the time zone of a database. This command
takes either a named region such as America/Los_Angeles or an absolute offset from UTC.
This example sets the time zone to UTC:

ALTER DATABASE SET TIME_ZONE = '+00:00';

You must restart the database for the change to take effect. For more information, see Setting the Database Time
Zone.
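
For example, to set a named region instead of an offset and then confirm the setting, a session might look like the following sketch. It assumes you can connect as SYSDBA; note that Oracle permits changing the database time zone to a named region only if no TIMESTAMP WITH LOCAL TIME ZONE column contains data.

# Change the database time zone, restart, and verify the new value.
sqlplus / as sysdba <<'EOF'
ALTER DATABASE SET TIME_ZONE = 'Europe/Berlin';
SHUTDOWN IMMEDIATE
STARTUP
SELECT DBTIMEZONE FROM dual;
EXIT
EOF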

Database Metrics
You can monitor the health, capacity, and performance of your Oracle Cloud Infrastructure Database service
resources by using metrics, alarms, and notifications. For more information, see Monitoring Overview on page 2686
and Notifications Overview on page 3378.
The Database service metrics help you measure useful quantitative data, such as CPU and storage utilization, the
number of successful and failed database logon and connection attempts, database operations, SQL queries, and
transactions, and so on. You can use metrics data to diagnose and troubleshoot problems with your Database Service
resources.
See the following topics for information about currently available database metrics:
• Autonomous Database Metrics on page 1580
• External Database Metrics on page 1591

Prerequisites
IAM policies: To monitor resources, you must be given the required type of access in a policy written by an
administrator, whether you're using the Console or the REST API with an SDK, CLI, or other tool. The policy must
give you access to the monitoring services as well as the resources being monitored. If you try to perform an action
and get a message that you don’t have permission or are unauthorized, confirm with your administrator the type of
access you've been granted and which compartment you should work in. For more information on user authorizations
for monitoring, see the Authentication and Authorization section for the related service: Monitoring or Notifications.


Using the API


For information about using the API and signing requests, see REST APIs on page 4409 and Security Credentials on
page 181. For information about SDKs, see Software Development Kits and Command Line Interface on page 4262.
Use the following APIs for monitoring:
• Monitoring API for metrics and alarms
• Notifications API for notifications (used with alarms)
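
For example, the Monitoring API can be called through the OCI CLI to list the metric definitions currently being emitted for your databases. This is a minimal sketch; the compartment OCID is a placeholder and the output depends on your tenancy:

# List metric definitions in the oci_autonomous_database namespace.
# Replace the compartment OCID with one from your tenancy.
oci monitoring metric list \
  --compartment-id ocid1.compartment.oc1..exampleuniqueid \
  --namespace oci_autonomous_database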

Autonomous Database Metrics


This topic describes the metrics emitted by the Database service in the oci_autonomous_database namespace.
Resources: Autonomous Databases.
For a complete list of available metrics for Autonomous Databases, see Available Metrics: oci_autonomous_database
on page 1580.
To view a default set of metrics charts in the Console, navigate to the Autonomous Database that you're interested in,
and then click Metrics. You also can use the Monitoring service to create custom queries.
Available Metrics: oci_autonomous_database
The metrics listed in the following table are automatically available for any Autonomous Database that you create.
You do not need to enable monitoring on the resource to get these metrics.
Note:

Valid alarm intervals are 5 minutes or greater due to the frequency at which
these metrics are emitted. See To create an alarm for details on creating
alarms.
Database service metrics for Autonomous Databases include the following dimensions:
AUTONOMOUSDBTYPE
The type of Autonomous Database, Autonomous Data Warehouse (ADW) or Autonomous Transaction
Processing (ATP).
deploymentType
The Exadata infrastructure type, shared or dedicated. When using the Console to view default metric charts
for multiple Autonomous Databases, you must specify this dimension.
DISPLAYNAME
The friendly name of the Autonomous Database.
REGION
The region in which the Autonomous Database resides.
RESOURCEID
The OCID of the Autonomous Database.
RESOURCENAME
The name of the Autonomous Database.
In the following list, metrics that are marked with an asterisk (*) can be viewed only on the Service Metrics page of the Oracle Cloud Infrastructure Console. All metrics can be filtered by the dimensions described in this topic. Note that some metrics are only available for Autonomous Databases using either shared Exadata infrastructure or dedicated Exadata infrastructure; the applicable Exadata infrastructure type is given for each metric.
Each entry lists the metric name, its display name in parentheses, the unit, the applicable Exadata infrastructure type, and a description.
• ApplyLag (Apply Lag): seconds. Dedicated only. This metric displays (in seconds) how far the standby database is behind the primary database as of the time sampled.
• BlockChanges (DB Block Changes per second): changes. Dedicated only. The average number of blocks changed per second.
• ConnectionLatency (Connection Latency): milliseconds. Shared only. The time taken to connect to an Autonomous Database that uses shared Exadata infrastructure in each region from a Compute service virtual machine in the same region.
• CpuTime* (CPU Time per second): seconds. Dedicated only. Average rate of accumulation of CPU time by foreground sessions in the database over the time interval. The CPU time component of Average Active Sessions.
• CpuUtilization (CPU Utilization): percent. Both. The CPU usage expressed as a percentage, aggregated across all consumer groups. The utilization percentage is reported with respect to the number of CPUs the database is allowed to use, which is two times the number of OCPUs.
• CurrentLogons* (Current Logons): count. Both. The number of successful logons during the selected interval.
• DBTime* (DB Time per second): seconds. Dedicated only. The amount of time database user sessions spend executing database code (CPU Time + WaitTime). DB Time is used to infer database call latency, because DB Time increases in direct proportion to both database call latency (response time) and call volume. It is calculated as the average rate of accumulation of database time by foreground sessions in the database over the time interval. Also known as Average Active Sessions.
• ExecuteCount (Execute Count): count. Both. The number of user and recursive calls that executed SQL statements during the selected interval.
• FailedConnections* (Failed Connections): count. Shared only. The number of failed database connections.
• FailedLogons (Failed Logons): count. Shared only. The number of logons that failed because of an invalid user name and/or password, during the selected interval.
• IOPS (IOPS): operations per second. Dedicated only. The average number of I/O operations per second.
• IOThroughput (IO Throughput): MB per second. Dedicated only. The average throughput in MB per second.
• LogicalBlocksRead (Logical Reads): reads per second. Dedicated only. The average number of logical block reads ("db block gets" plus "consistent gets") per second. Includes buffered and direct I/O.
• OCPUsAllocated (OCPU Allocated): integer. Dedicated only. The actual number of OCPUs allocated by the service during the selected interval of time.
• ParsesByType (Parses By Type): parses per second. Dedicated only. The number of hard or soft parses per second.
• ParseCount* (Parse Count (Total)): count. Both. The number of hard and soft parses during the selected interval.
• QueryLatency (Query Latency): milliseconds. Shared only. The time taken to display the results of a simple query on the user's screen.
• QueuedStatements (Queued Statements): count. Both. The number of queued SQL statements, aggregated across all consumer groups, during the selected interval.
• RedoSize (Redo Generated): MB per second. Dedicated only. The average amount of redo generated in MB per second.
• RunningStatements (Running Statements): count. Both. The number of running SQL statements, aggregated across all consumer groups, during the selected interval.
• Sessions (Sessions): count. Both. The number of sessions in the database.
• StorageAllocated* (Storage Space Allocated): GB. Dedicated only. Maximum amount of space allocated to the database during the interval.
• StorageAllocatedByTablespace* (Allocated Storage Space By Tablespace): GB. Dedicated only. Maximum amount of space allocated for each tablespace during the interval.
• StorageUsed* (Storage Space Used): GB. Dedicated only. Maximum amount of space used during the interval.
• StorageUsedByTablespace* (Storage Space Used By Tablespace): GB. Dedicated only. Maximum amount of space used by each tablespace during the interval.
• StorageUtilization (Storage Utilization): percent. Both. The percentage of provisioned storage capacity currently in use. Represents the total allocated space for all tablespaces.
• StorageUtilizationByTablespace* (Storage Space Utilization By Tablespace): percent. Dedicated only. The percentage of space utilized by each tablespace.
• TransactionsByStatus (Transactions By Status): transactions per second. Dedicated only. The number of committed or rolled back transactions per second.
• TransactionCount* (Transaction Count): count. Both. The combined number of user commits and user rollbacks during the selected interval.
• TransportLag (Transport Lag): seconds. Dedicated only. The approximate number of seconds of redo not yet available on the standby database as of the time sampled.
• UserCalls* (User Calls): count. Both. The combined number of logons, parses, and execute calls during the selected interval.
• WaitTime* (Wait Time per second): seconds. Dedicated only. Average rate of accumulation of non-idle wait time by foreground sessions in the database over the time interval. The wait time component of Average Active Sessions.

Using the Console


To view default metric charts for a single Autonomous Database
1. Open the navigation menu. Under Oracle Database, click Autonomous Data Warehouse, Autonomous JSON
Database, or Autonomous Transaction Processing.
2. Choose the Compartment that contains the Autonomous Database you want to view, and then click display name
of the database to view its details.
3. Under Resources, click Metrics.
The Metrics page displays a default set of charts for the current Autonomous Database. See Available Metrics:
oci_autonomous_database on page 1580 for information about the default charts.
For more information about monitoring metrics and using alarms, see Monitoring Overview on page 2686. For
information about notifications for alarms, see Notifications Overview on page 3378.
To view default metric charts for multiple Autonomous Databases
1. Open the navigation menu. Under Solutions and Platform, go to Monitoring and click Service Metrics.
2. For Compartment, select the compartment that contains the Autonomous Databases that you're interested in.
3. For Metric Namespace, select oci_autonomous_database.
The Service Metrics page dynamically updates to show charts for each metric that is emitted by the selected metric namespace.
4. For Dimensions, specify an Exadata infrastructure deployment type (shared or dedicated). Important: If you do
not specify a deployment type, no service metrics will display on the page.
Optionally, you can specify other dimensions to filter your displayed metrics. See To filter results on page 2699
and To select different resources on page 2699 in the Monitoring documentation for more information.
Tip:

If there are multiple Autonomous Databases in the compartment, the charts default to show a separate line for each Autonomous Database. You can instead show a single line aggregated across all Autonomous Databases in the compartment by selecting the Aggregate Metric Streams check box.
For more information about monitoring metrics and using alarms, see Monitoring Overview on page 2686. For
information about notifications for alarms, see Notifications Overview on page 3378.
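
The same deploymentType dimension applies when you query these metrics through the Monitoring API. The following OCI CLI sketch uses placeholder OCID and time values; it requests the mean CpuUtilization for shared-infrastructure Autonomous Databases at 5-minute resolution:

# Mean CPU utilization for shared Autonomous Databases over a one-hour window.
# The compartment OCID and the time range are placeholders.
oci monitoring metric-data summarize-metrics-data \
  --compartment-id ocid1.compartment.oc1..exampleuniqueid \
  --namespace oci_autonomous_database \
  --query-text 'CpuUtilization[5m]{deploymentType = "shared"}.mean()' \
  --start-time 2021-03-22T00:00:00Z \
  --end-time 2021-03-22T01:00:00Z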

External Database Metrics


This topic describes the metrics emitted by the Database service in the oracle_external_database
namespace.
Resources: External Databases.
To view a default set of metrics charts in the Console, navigate to the external database that you're interested in, and
then click Metrics. You also can use the Monitoring service to create custom queries.
Available Metrics: oracle_external_database
The metrics listed in the following table are automatically available for external databases.
Note:

Valid alarm intervals are 5 minutes or greater due to the frequency at which
these metrics are emitted. See To create an alarm for details on creating
alarms.
Database service metrics for external databases include the following dimensions:
DISPLAYNAME
The friendly name of the external database.
REGION
The region in which the OCI external database resource resides.
RESOURCEID
The OCID of the external database.
RESOURCENAME
The name of the external database.
In the following list, metrics that are marked with an asterisk (*) can be viewed only on the Service Metrics page of the Oracle Cloud Infrastructure Console. All metrics can be filtered by the dimensions described in this topic.
Each entry lists the metric name, its display name in parentheses, the unit, the collection frequency, and a description.
• AllocatedStorageUtilizationByTablespace* (Allocated Space Utilization By Tablespace): percent. Collected every 30 minutes. Percentage of space used by tablespace out of allocated. Not applicable to external container databases.
• BlockChanges* (DB Block Changes): changes per second. Collected every 5 minutes. The average number of blocks changed per second.
• CpuCount* (CPU Count): CPU. Collected every 5 minutes. Container and non-container databases: the value of the cpu_count parameter, if set; otherwise, the value of num_cpus for the database host. Pluggable databases: the value of the cpu_count parameter, if set; otherwise, the value of the cpu_count parameter of the associated container database (if set), or the num_cpus value of the database host if the cpu_count of the container database is not set.
• CpuTime* (CPU Time): seconds per second. Collected every 5 minutes. Average rate of accumulation of CPU time by foreground sessions in the database over the time interval. The CPU time component of Average Active Sessions.
• CpuUtilization (CPU Utilization): percent. Collected every 5 minutes. The CPU usage expressed as a percentage, aggregated across all consumer groups. The utilization percentage is reported with respect to the number of CPUs the database is allowed to use, which is two times the number of OCPUs.
• CurrentLogons (Current Logons): count. Collected every 5 minutes. The number of successful logons during the selected interval.
• DBTime (DB Time): seconds per second. Collected every 5 minutes. The amount of time database user sessions spend executing database code (CPU Time + WaitTime). DB Time is used to infer database call latency, because DB Time increases in direct proportion to both database call latency (response time) and call volume. It is calculated as the average rate of accumulation of database time by foreground sessions in the database over the time interval. Also known as Average Active Sessions.
• ExecuteCount (Execute Count): count. Collected every 5 minutes. The number of user and recursive calls that executed SQL statements during the selected interval.
• IOPS* (IOPS): operations per second. Collected every 5 minutes. The average number of I/O operations per second.
• IOThroughput* (IO Throughput): MB per second. Collected every 5 minutes. The average throughput in MB per second.
• LogicalBlocksRead* (Logical Reads): reads per second. Collected every 5 minutes. The average number of logical block reads ("db block gets" plus "consistent gets") per second. Includes buffered and direct I/O.
• MaxTablespaceSize* (Max Tablespace Size): GB. Collected every 30 minutes. Maximum possible tablespace size. Not applicable to external container databases.
• MemoryUsage* (Memory Usage): MB. Collected every 15 minutes. The total amount of memory used by the database during the collection interval. Not applicable to external pluggable databases.
• MonitoringStatus* (Monitoring Status): no unit; a failed status is displayed when monitoring is interrupted. Collected every 5 minutes. Monitoring status of the resource. If any metric collection fails, that information is captured in this metric.
• ParseCount* (Parse Count (Total)): count. Collected every 5 minutes. The number of hard and soft parses during the selected interval.
• ParsesByType* (Parses By Type): parses per second. Collected every 5 minutes. The number of hard or soft parses per second.
• RedoSize* (Redo Generated): MB per second. Collected every 5 minutes. The average amount of redo generated in MB per second.
• StorageAllocated* (Storage Space Allocated): GB. Collected every 30 minutes. Maximum amount of space allocated to the database during the interval.
• StorageAllocatedByTablespace* (Allocated Storage Space By Tablespace): GB. Collected every 30 minutes. Maximum amount of space allocated for each tablespace during the interval.
• StorageUsed* (Storage Space Used): GB. Collected every 30 minutes. Maximum amount of space used during the interval.
• StorageUsedByTablespace* (Storage Space Used By Tablespace): GB. Collected every 30 minutes. Maximum amount of space used by each tablespace during the interval. Not applicable to external container databases.
• StorageUtilization (Storage Utilization): percent. Collected every 30 minutes. The percentage of provisioned storage capacity currently in use. Represents the total allocated space for all tablespaces.
• StorageUtilizationByTablespace* (Storage Space Utilization By Tablespace): percent. Collected every 30 minutes. The percentage of space utilized by each tablespace. Not applicable to external container databases.
• TransactionCount* (Transaction Count): count. Collected every 5 minutes. The combined number of user commits and user rollbacks during the selected interval.
• TransactionsByStatus* (Transactions By Status): transactions per second. Collected every 5 minutes. The number of committed or rolled back transactions per second.
• UserCalls* (User Calls): count. Collected every 5 minutes. The combined number of logons, parses, and execute calls during the selected interval.
• WaitTime* (Wait Time): seconds per second. Collected every 5 minutes. Average rate of accumulation of non-idle wait time by foreground sessions in the database over the time interval. The wait time component of Average Active Sessions.

Using the Console


To view default metric charts for a single External Database
1. Open the navigation menu. Under Oracle Database, click External Database.
2. Choose the Compartment that contains the External Database you want to view, and then click display name of
the database to view its details.
3. Under Resources, click Metrics.
The Metrics page displays a default set of charts for the current External Database. See Available Metrics:
oracle_external_database on page 1591 for information about the default charts.
For more information about monitoring metrics and using alarms, see Monitoring Overview on page 2686. For
information about notifications for alarms, see Notifications Overview on page 3378.
To create metrics queries for all available external database metrics using the Monitoring Service's Metrics
Explorer tool
1. Open the navigation menu. Under Solutions and Platform, go to Monitoring and click Service Metrics.
2. Click Metrics Explorer. Note: external database metrics are not available in the Service Metrics tab. For
information on building queries in the Metrics Explorer tab, see Building Metric Queries on page 2726.


3. Create a metrics query:
• For Compartment, select the compartment that contains the external databases that you're interested in.
• For Metric Namespace, select oracle_external_database.
• Select a Metric name. See Available Metrics: oracle_external_database on page 1591 for definitions of each metric.
• Select an Interval. See Available Metrics: oracle_external_database on page 1591 for information on the collection frequency of each external database metric.
• Select the Statistic type. This is the aggregation function applied for converting a set of data points. Available functions include count, max, mean, rate, min, sum, and percentile.
• Select a Metric dimension. Dimensions are used to filter metric data. For example, by choosing the "resourceID" dimension, you can specify a single external database by selecting the OCID of the OCI external database resource.
4. Click Update Chart after configuring your query.
For more information about monitoring metrics and using alarms, see Monitoring Overview on page 2686. For
information about notifications for alarms, see Notifications Overview on page 3378.
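
A query equivalent to the one built in the Metrics Explorer can also be issued directly through the Monitoring API. This sketch uses placeholder OCIDs and times; it requests StorageUtilization (collected every 30 minutes) for a single external database, aggregated with the mean statistic over 1-hour intervals and filtered by the resourceId dimension:

# Storage utilization for one external database, mean statistic, 1-hour intervals.
# Both OCIDs and the time range are placeholders.
oci monitoring metric-data summarize-metrics-data \
  --compartment-id ocid1.compartment.oc1..exampleuniqueid \
  --namespace oracle_external_database \
  --query-text 'StorageUtilization[1h]{resourceId = "ocid1.externalpluggabledatabase.oc1..exampleuniqueid"}.mean()' \
  --start-time 2021-03-21T00:00:00Z \
  --end-time 2021-03-22T00:00:00Z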

Using the Oracle Database Service Overview to Manage Resources


This topic describes the Overview page in the Oracle Database section of the Oracle Cloud Infrastructure
Console. The Overview page gives you a way to view and manage all of your tenancy's Oracle Database resources
using a single dashboard tool. The Overview provides information on all of your work request activity, alarms,
announcements and more, regardless of whether the databases use Autonomous Database, bare metal, virtual
machine, Exadata Cloud, or Exadata Cloud@Customer infrastructure.

Resource Summary Section


The resource summary tiles provide you with details on how many Oracle Databases you have currently provisioned
in your tenancy. The Autonomous Database tile provides total numbers of databases by workload type and
infrastructure type.

The Databases tile provides total numbers of bare metal and virtual machine databases by database edition type, and total numbers of Exadata databases by infrastructure type.


At the bottom of each tile are messages reporting whether usage of the resources listed in the column is near or at service limits for the tenancy. For resources that are near or at service limits, you can request a service limit increase.

Operations Section
The tiles for alarms, activity, announcements, and service health allow you to quickly assess whether any of your
Oracle Database resources need attention, how normal operations are proceeding, and if there are service-related
announcements you need to know about to effectively manage your resources.

"The Alarms tile displays the Oracle Cloud Infrastructure Monitoring service alarms for your Database service
resources. You can click the alarm icon in the Alarms tile to navigate to detailed information about the alarms.
In the list view of your tenancy's alarms, you can limit the list by compartment, alarm status, and alarm severity.
For more information on creating and using alarms to manage your Oracle Database resources, see the Monitoring
documentation.
The Activity tile displays the number of in-progress work requests for your Database service resources. Click on the
tile's activity icon to navigate to a complete list of in-progress work requests in your specified compartment.
The Announcements tile displays the number of unread Oracle Database announcements for your tenancy, if any.
Click the tile's announcements icon to navigate to your unread announcements. See Console Announcements on page
264 for more information on Oracle Cloud Infrastructure announcements.
The Service Health tile displays the availability status of all Oracle Database instances in the selected region, and allows you to easily navigate to the Console's Oracle Cloud Infrastructure status page to check the availability status of all Oracle Cloud Infrastructure services, by region.


What's New and Help


The What's New list provides a view of the most recent updates to the Oracle Database service. The Help list
provides quick access to information about each type of Oracle Database cloud offering.

To see a list of alarms for your tenancy's Oracle Database resources


1. Open the navigation menu. Under Database, click Overview.
2. Under Advanced, click Alarms.
3. Optional. Limit the scope of the list using the compartment selector under List Scope.
4. Optional. Limit the list to alarms with a particular status or severity using the available list filters.
To see a list of work requests for your tenancy's Oracle Database resources
1. Open the navigation menu. Under Database, click Overview.
2. Under Advanced, click Activity.
3. Optional. Limit the scope of the list using the compartment selector under List Scope.
4. Optional. Limit the list to work requests with a particular status or operation type using the available list filters.
To see a list of announcements for your tenancy's Oracle Database resources
1. Open the navigation menu. Under Database, click Overview.
2. Under Advanced, click Announcements.
3. Optional. Limit the list by announcement status, Oracle Database service type, announcement action type, and
publication time.
To see details about your tenancy's Oracle Database resource categories that are at or near service
limits
1. Open the navigation menu. Under Database, click Overview.
2. Under Advanced, click Limits.
3. Optional. Limit the scope of the information using the compartment selector under List Scope.
4. Optional. Limit the list by Oracle Database service type using the Service Type filter. This allows you to view
limits information for either Autonomous Databases, co-managed databases, or Exadata infrastructure instances.


5. Optional. Limit the list to information about the Console's currently specified region, or to an availability domain
within the current region, using the Scope filter.

Using Performance Hub to Analyze Database Performance


This topic describes how to use Performance Hub to analyze and tune the performance of a selected Oracle Cloud
Infrastructure Autonomous Database or Oracle Database. With this tool, you can view real-time and historical
performance data.
In Database Management, Performance Hub is available from database detail pages so that users such as Database
and Fleet Administrators can monitor external databases. To access Performance Hub, click the Performance Hub
link in a database summary page in the Database Management Service.
Note:

Using Identity and Access Management (IAM), you can create a policy that
grants users access to Performance Hub while limiting actions they can take
on an Autonomous Database. For more information about policies and how to use them, see How Policies Work. The following example shows a policy
that grants access only to performance data without allowing general use
access on Autonomous Databases.

Allow group <groupname> to inspect autonomous-database-family in compartment <name>
Allow group <groupname> to use autonomous-database-family in compartment <name> where request.operation = 'RetrieveAutonomousDatabasePerformanceBulkData'

Performance Hub Features


The Performance Hub window consists of a graphical Time Range display that you use to select the time period of
all data to be displayed. It includes the following tabs that display performance data:
• ASH Analytics
• SQL Monitoring
• Workload
• Blocking Sessions
• ADDM (available for databases using shared Exadata infrastructure).
These tabs, described in detail in this topic, provide information that you can use to analyze the performance of a
selected database, including the following:
• How much of the database is waiting for a resource, such as CPU or disk I/O
• Whether database performance degraded over a given time period and what could be the likely cause
• Which specific modules may be causing a load on the system, and where most of database time is being spent on
this module
• Which SQL statements are the key contributors to changes in database performance, and which executions are
causing them
• Which user sessions are causing performance bottlenecks
• Which sessions are currently blocking and if there are outstanding requests for a lock

Time Range Selector


The time range selector is displayed at the top of the Performance Hub page. It consists of a graphically displayed
time field as shown in the following illustration. The selected time range applies to all charts and graphs in the
Performance Hub window.
Using the Time range selector, you can view real-time and historical performance data.


• In real-time mode, performance data is retrieved from in-memory views. You can display data in any time range
from within the time selection shown by the date picker.
• In historical mode, data is retrieved from the Automatic Workload Repository (AWR). You can select any time
period, provided there is sufficient data in the AWR. When you view historical data in the Performance Hub, you
are viewing statistics collected as part of the snapshots of your database.
You can hide the Activity Summary chart to save space and display only the main tab content. To do so, click the
Hide Activity Summary checkbox that is located directly above the graph.

Figure 1: Performance Hub Activity Summary

The time range field (#1 in the previous illustration) shows database activity in chart form for the specified Time
Range period. The time range is the amount of time being monitored.
Use the Quick Select selector to set the time range. The menu includes five time choices, Last Hour, Last 8 Hours,
Last 24 Hours, Last Week, and Custom. The default time range is Last Hour. You can also click the Time Range
field to specify a custom time range. This opens the Custom Time Range dialog, allowing you to specify a custom
range.
The Activity Summary graph displays the average number of active sessions broken down by CPU, User I/O, and
Wait. Maximum threads are shown as a red line above the time field.
The sliding box (circled at right in the previous illustration) on the time range chart is known as the time slider. The
time slider selects a section of the time range (#2 in the previous illustration) shown in the time range field. It shows
the time being analyzed. In the illustration, the arrows inside the time slider point to the vertical 'handle' elements on
the left and right boundaries of the slider box. The time slider works as follows:
• To change the start and end time of the analysis while keeping the same amount of time between them, left click
anywhere inside the box. Then slide the box left or right along the time range without changing its size. The
selected times are displayed below the time graph.
• To increase or decrease the length of time being analyzed, left click either one of the handles and drag it left or
right to expand or contract the box.
• To refresh the data in Performance Hub according to the time range chosen, click Refresh (upper right corner of
the window).
Note:

The time slider provides an extra display feature in the Workload tab. See the
description in the Workload Tab section of this page.


Use the Quick Select menu to set the time duration. The menu includes the following five time choices: Last Hour,
Last 8 Hours, Last 24 Hours, Last Week, and Custom. The default Time Range is Last Hour. The time slider
selects the time period of the data displayed in Performance Hub. The time slider has a different default time period
based on the selected Time Range.

Time Zone Selector


The Time Zone selector is located above the time range field, beside the Quick Select and Time Range selectors. By
default, when you open Performance Hub, the tool displays data in UTC (Coordinated Universal Time) time. You can
use the time zone selector to change the time zone to either your local web browser time, or the time zone setting of
the database you are working with. When you change the time zone, the Performance Hub reports display data in your
specified time zone.

ASH Analytics Tab


Displayed by default, the ASH (Active Session History) Analytics tab shows ASH analytics charts that you can use
to explore ASH data. You can use this tab to drill down into database performance across multiple dimensions such
as Consumer Group, Wait Class, SQL ID, and User Name. In the ASH Analytics tab, you can select an Average
Active Sessions dimension and view the top activity for that dimension for the selected time period.
The Average Active Session chart has a control at the right end of the chart to select the displayed resolution of ASH
data (low, medium, high, or maximum). For more information on ASH, see Active Session History (ASH) in Oracle
Database Concepts.
ASH Sample Resolution
The ASH Sample Resolution menu gives users the ability to control the sampling of ASH data displayed in the
Average Active Sessions chart. Data resolution means displaying more or fewer data points in the sample data in
given time period. Lower resolution displays coarser data with better performance and less impact on the database.
Higher resolution aggregates more data to display finer detail, but can have a corresponding cost in latency and
impact on the database.
The Sample Resolution menu is displayed at the right side of the chart. The data resolution selections are:
• Low - the chart displays the fewest data points in the selected data sample.
• Medium - the chart displays more data points than the Low setting.
• High - the chart displays more data points than the Medium setting.
• Maximum - the chart displays the most data points available in the selected data sample.
To use this feature, see To view the average active sessions data by a selected dimension on page 1605
Activity tables
By default, the two tables located below the Average Active Sessions graph display the top SQLs and user sessions
for the time period covered by the Average Activity Sessions graph. Use the menus at the top left of each of the two
tables to view activities by other dimensions.

SQL Monitoring Tab


The SQL Monitoring tab is not displayed by default. To view it, click SQL Monitoring on the Performance Hub
page.
SQL statements are only monitored if they have been running for at least five seconds or if they are run in parallel.
The table in this section displays monitored SQL statement executions by dimensions including Last Active Time,
CPU Time, and Database Time. The table displays currently running SQL statements and SQL statements that
completed, failed, or were terminated. The columns in the table provide information for monitored SQL statements
including Status, Duration, and SQL ID.
The Status column includes the following icons:
• A spinning icon indicates that the SQL statement is executing.


• A green check mark icon indicates that the SQL statement completed its execution during the specified time
period.
• A red cross icon indicates that the SQL statement did not complete. The icon displays when an error occurs
because the session was terminated.
• A clock icon indicates that the SQL statement is queued.
To terminate a running or queued SQL statement, click Kill Session.
You can also click an SQL ID to go to the corresponding Real-time SQL Monitoring page. This page provides extra
details to help you tune the selected SQL statement.

Workload Tab
The Workload tab graphically displays four sets of statistics that you can use to monitor the database workload and
identify spikes and bottlenecks. Each set of statistics is displayed in a separate region, as described in the following
sections.

Monitored and analyzed time indications


The time slider has more functionality in the Workload tab than it does in the Active Session History and
SQL Monitoring tabs. Note the following about the Quick Select time range options:
• Last Hour, Last 8 Hours, and Last 24 Hours - The charts in the Workload tab display data for the entire time period of the specified time range. A shadowed area is displayed in each chart that corresponds to the position of the
time slider in the time range.
• Last Week - The charts in the Workload tab display data for the selected time period of the time slider in the time
range. There is no shadowed area displayed in this case.
• Custom - The shadowed area display depends on whether the time period is up to and including 24 hours, or
greater than 24 hours.

Regions
The tab contains four regions: CPU Statistics, Wait Time Statistics, Workload Profile, and Sessions. Each region
contains one or more charts that indicate the characteristics of the workload and the distribution of the resources. The
data displayed on all the charts is for the same time period, as selected by the Time Range and time slider at the top of
the window.
• The CPU Statistics region contains two charts:
• CPU Time - This chart shows how much CPU time is being used by the foreground sessions every second. It
identifies where the CPU time is mostly spent in the workload and pinpoints any unusual CPU spikes.
• CPU Utilization (%) - This chart indicates the percentage of CPU time aggregated by consumer group as
calculated by the resource manager.
• The Wait Time Statistics region contains a chart that displays the time used in different wait classes. To see
the total average active sessions, select the DB Time check box. The activities are broken down by the 13 wait
classes.


• The Workload Profile region contains a group of charts that indicate patterns of user calls, executions,
transactions, and parses, as well as the number of running statements and queued statements. This region includes
a menu that you can use to select the data to display. It contains the following options.
• User Calls - This option displays the combined number of logons, parses, and execute calls per second.
• Executions - This option displays the combined number of user and recursive calls that executed SQL
statements per second.
• Transactions - This option displays the combined number of user commits and user rollbacks per second.
• Parses - This option displays the combined number of hard and soft parses per second.
• Running Statements - This option displays the number of running SQL statements, aggregated by consumer
group.
• Queued Statements - This option displays the number of queued parallel SQL statements, aggregated by
consumer group.
• The Sessions region contains charts that show the number of current logons and sessions. It contains a menu that
includes the following options:
• Current Logons - This option displays the number of current successful logons.
• Sessions - This option displays the number of sessions.

Blocking Sessions Tab


The Performance Hub blocking sessions tab displays the current blocking and waiting sessions in a hierarchical
display. You can view detailed information about each blocking session, and can view the sessions blocked by each
blocking session. You can also use the tab to inspect or drill down into the SQL involved, to determine the cause of
the blocking. You can perform several operations in the tab, including killing one or more of the listed sessions to
resolve a waiting session problem. Instructions for the tab functions are located in this topic under Using the Oracle
Cloud Infrastructure Console on page 1604.
The hierarchical display nests waiting sessions underneath the session that is blocking them in an easily viewable
parent-child relationship. The hierarchy can contain any number of levels to correctly represent the structure of the
sessions involved.
The sessions listed include sessions that are waiting for a resource and sessions that hold a resource that is being waited on, which creates the blocking condition.

ADDM Tab
The Performance Hub Automatic Database Diagnostic Monitor (ADDM) tab includes controls to access the
information stored by ADDM. ADDM analyzes the Automatic Workload Repository (AWR) data on a regular basis,
then locates the root causes of performance problems, provides recommendations for correcting any problems, and
identifies non-problem areas of the application. Because AWR is a repository of historical performance data, ADDM
can be used to analyze performance issues after the event, often saving time and resources that would be needed to
reproduce a problem.
ADDM provides the following benefits:
• Time-based quantification of application problem impacts and recommendation benefits
• Recommendations for treating the root causes of problems
• Identification of non-problem areas of the application
In addition to the benefits ADDM provides for production systems, it can be used on development and test systems to
provide early warnings of application performance issues.
Instructions to use the ADDM tab are located below in Using the Oracle Cloud Infrastructure Console on page
1604.


Automatic Workload Repository Reports


The Automatic Workload Repository (AWR) collects, processes, and maintains performance statistics for problem
detection and self-tuning purposes. This data is both in memory and stored in the database. From the Performance
Hub, you can generate and download a report of the gathered data.
An AWR report shows data captured between two points in time (or snapshots). AWR reports are divided into
multiple sections. The content of the report contains the workload profile of the system for the selected range of
snapshots. The HTML report includes links that you can use to navigate quickly between sections.
The statistics collected and processed by AWR include:
• Object statistics that determine both access and usage statistics of database segments
• Time model statistics based on time usage for activities, displayed in the V$SYS_TIME_MODEL and V
$SESS_TIME_MODEL views
• Some of the system and session statistics collected in the V$SYSSTAT and V$SESSTAT views
• SQL statements that are producing the highest load on the system, based on criteria such as elapsed time and CPU
time
• ASH statistics, representing the history of recent sessions activity
To generate and download an AWR report, see To download an AWR report on page 1606.

Using the Oracle Cloud Infrastructure Console


To navigate to Performance Hub in the Oracle Cloud Infrastructure Console interface of an
Autonomous Database
1. Open the navigation menu. Under Database, click Autonomous Transaction Processing or Autonomous Data
Warehouse.
2. Choose your Compartment.
3. In the list of Autonomous Databases, click the display name of the database that you want to analyze using
Performance Hub reports.
4. Click Performance Hub.
To navigate to Performance Hub in the External Database Service
This topic provides the steps to navigate to the page that explains how to use Performance Hub with external
databases.
1. Open the navigation menu. Under Oracle Database, click External Database.
2. In the left pane of the window, choose the type of database you want to use: Pluggable Databases, Container
Databases, or Non-Container Databases.
3. Choose your Compartment.
4. In the list of external databases that is shown, click the display name of the database that you want to analyze
using Performance Hub. The database details page for the selected database is displayed.
Note:

The Associated Services section of the database details page shows


whether the Database Management service is enabled for the database.
• If Database Management is Enabled, click Disable to disable it.
• If Database Management is Disabled, click Enable to enable it.
5. In the database details page, click Performance Hub.
Note:

Performance Hub is enabled only under the following conditions.


• The Database Management service must be enabled.
• The database must be an Enterprise Edition, version 12.1.0.0.0 or
higher.


To view the average active sessions data by a selected dimension


1. Go to the Performance Hub page of the Oracle Cloud Infrastructure Console for the database that you want
to manage. See To navigate to Performance Hub in the Oracle Cloud Infrastructure Console interface of an
Autonomous Database on page 1604 for more information.
• The database name is displayed at the top of the Performance Hub page.
• The time period for which information is available on the Performance Hub is displayed in the Time Range
field.
• The selected time period is indicated on the time slider graph by the adjustable time slider box.
• The ASH Analytics tab is displayed with the top activity for a selected dimension in the selected time period.
2. Use the Quick Select selector to set the exact time period for which data is displayed in the ASH Analytics tables
and graphs. By default, the last hour is selected. The time range is the total amount of time available for analysis.
3. Use the box on the time slider to further narrow down the time period for which performance data is displayed on
the ASH Analytics tab.
4. Select a dimension in the Average Active Sessions drop-down list to display ASH analytics by that dimension.
When the Consumer Group dimension is selected, the data is categorized by default to the High, Medium, or
Low service name that is associated with the database.
Optionally, you can:
• Click the Maximum Threads check box to view the number of Max CPU Threads. The red line on the chart
shows this limit.
• Click the Total Activity check box to view a black border that denotes total activity of all the components of
the selected dimension on the chart. This option is selected by default when you use the filtering capabilities to
only view the data for a particular component within a dimension. For information on filtering Average Active
Sessions data, see Filter Average Active Sessions Data.
5. Use the Sample Resolution menu to select the sampling of ASH data displayed in the Average Active Sessions
chart. To select a resolution, click Sample Resolution to display the following menu and click the desired
resolution to display the data.
• Low - the graph displays the fewest data points available in the selected data sample.
• Medium - the graph displays more data points than the Low setting.
• High - the graph displays more data points than the Medium setting.
• Maximum - the graph displays the most data points available in the selected data sample.
6. For the dimension selected in the Average Active Sessions drop-down list, you can further drill down into session
details by selecting dimensions in the two sections at the bottom of the ASH Analytics tab. By default, the
following dimensions are selected:
• SQL ID by Consumer Group, which displays the SQL statements with the top average active sessions
activity for consumer groups for the selected time period. You can right-click the bar charts to sort the SQL
statements in ascending or descending order, or click the SQL ID to go to the SQL Details page.
• User Session by Consumer Group, which displays the user sessions with the top average active sessions
activity for consumer groups for the selected time period. You can right-click the bar charts to sort the user
sessions in ascending or descending order or click the user session to go to the User Session page.
To filter average active sessions data
1. Go to the Performance Hub page of the Oracle Cloud Infrastructure Console for the database that you want
to manage. See To navigate to Performance Hub in the Oracle Cloud Infrastructure Console interface of an
Autonomous Database on page 1604 for more information.
• The database name is displayed at the top of the Performance Hub page.
• The time period for which information is available on the Performance Hub is displayed in the Time Range
field. The selected time period is indicated on the time slider graph by the adjustable time slider block.
The ASH Analytics tab is displayed with the top activity for a selected dimension in the selected time period.
2. Use the Quick Select selector to set the exact time period for which data is displayed in the ASH Analytics tables
and graphs. By default, the last hour is selected. The time range is the total amount of time available for analysis.


3. Use the adjustable time slider box to further narrow down the time period for which performance data is
displayed on the ASH Analytics tab.
4. In the ASH Analytics tab, select a dimension in the Average Active Sessions by drop-down list. By default,
Consumer Group is selected.
The chart is displayed. Each color in the chart denotes a component of the selected dimension. For example, the
Consumer Group dimension has High, Medium, and Low, which are predefined service names assigned to your
database to provide different levels of concurrency and performance.
5. Click a component in the legend. The selected component is displayed in the Applied Filters field and the chart is
updated to only display data pertaining to that component. The total activity, which includes all the components of
the dimension, is defined by a black outline and is displayed by default when you filter data.
To view the SQL Monitoring report
1. Go to the Performance Hub page of the Oracle Cloud Infrastructure Console for the database which you want
to manage. See To navigate to Performance Hub in the Oracle Cloud Infrastructure Console interface of an
Autonomous Database on page 1604 for more information.

• The database name is displayed at the top of the Performance Hub page.
• The time period for which information is available on the Performance Hub is displayed in the Time Range
field. The selected time period is indicated on the time slider graph by the adjustable time slider box.
2. Click SQL Monitoring to display the SQL monitoring tab.
3. Optionally, you can get detailed information on a specific SQL statement by clicking an ID number in the SQL ID
column. When you click an ID number, the Real-time SQL Monitoring page is displayed.
4. Click Download Report to download the report data for your selected SQL statement.
To download an AWR report

For databases version 18c and older:


1. Go to the Performance Hub page of the Oracle Cloud Infrastructure Console for the database which you want
to manage. See To navigate to Performance Hub in the Oracle Cloud Infrastructure Console interface of an
Autonomous Database on page 1604 for more information.
• The database name is displayed at the top of the Performance Hub page.
• The time period for which information is available on the Performance Hub is displayed in the Time Range
field. The selected time period is indicated on the time slider graph by the adjustable time slider box.
2. Click AWR to display the Generate AWR Report dialog.
3. You can choose to generate a report either from the two snapshots closest to the current time and date or from a custom time range of your choice.
4. If you choose to generate a report from a custom time range, then select Custom and select start and end times for your range. Click Download.
5. Oracle Database generates a report named AWRReport_date_range.html that downloads to the default download folder for your browser. View the report after the download completes.

For databases version 19c and newer:


1. Go to the Performance Hub page of the Oracle Cloud Infrastructure Console for the database which you want
to manage. See To navigate to Performance Hub in the Oracle Cloud Infrastructure Console interface of an
Autonomous Database on page 1604 for more information.
• The database name is displayed at the top of the Performance Hub page.
• The time period for which information is available on the Performance Hub is displayed in the Time Range
field. The selected time period is indicated on the time slider graph by the adjustable time slider box.
2. In the Quick Select menu, choose a time period for which an AWR report will be generated.


3. In the upper right corner, click Reports > Automatic Workload Repository.
The Generate Automatic Workload Repository dialog box is displayed.
4. Use the Start Snapshot and End Snapshot menus to select the beginning and end of the snapshot time range to
generate the report.
5. Click Download. The database generates the report named AWRReport_date_range.html. When the report is
complete, the report name is displayed at the top of the screen, and the report is automatically downloaded to the
default download folder for your browser.
6. Open the default download folder for your browser on your system and view the report from there.
To view the Workload metrics
1. Go to the Performance Hub page of the Oracle Cloud Infrastructure Console for the database that you want
to manage. See To navigate to Performance Hub in the Oracle Cloud Infrastructure Console interface of an
Autonomous Database on page 1604 for more information. The database name is displayed at the top of the
Performance Hub page.
2. Use the Quick Select selector to set the exact time period for which data is displayed in the ASH Analytics tables
and graphs. By default, the last hour is selected. The time range is the total amount of time available for analysis.
3. Use the time slider to further narrow down the time period for which performance data is displayed on the
Workload tab. All charts show data for the entire specified time range if within 24 hours.
4. Click Workload to view the Workload tab. The four regions and their associated charts are displayed.
• CPU Statistics The CPU Statistics region contains two charts, CPU Time and CPU Utilization (%).
• To display how much CPU Time is being consumed by the foreground sessions per second, select CPU Time
in the menu in this region. This identifies where the CPU time is mostly spent in the workload and pinpoints
any unusual CPU spikes. When CPU Time is selected, you can optionally click the Maximum Threads check box to show the maximum CPU time available. This shows the CPU time component of Average Active Sessions.
• To display the CPU Utilization (%) chart, select CPU Utilization (%) in the menu. This chart displays the
percentage of CPU time aggregated by consumer group, as calculated by the resource manager.
• Wait Time Statistics The Wait Time Statistics region contains one chart that displays the time used in different
wait classes. To see the total average active sessions, select the DB Time check box. The activities are broken
down by the 13 wait classes.
• Workload Profile To change the metrics displayed in the Workload Profile, click the menu and select the metric
that you want to view.
• Select User Calls to display the combined number of logons, parses, and execute calls per second.
• Select Executions to display the combined number of user and recursive calls that executed SQL statements
per second.
• Select Transactions to display the combined number of user commits and user rollbacks per second.
• Select Parses to display the combined number of hard and soft parses per second.
• Select Running Statements to display the number of running SQL statements, aggregated by consumer group.
• Select Queued Statements to display the number of queued parallel SQL statements, aggregated by consumer
group.
• Sessions To change the metrics displayed in the Sessions region, click the menu and select the metric that you
want to view:
• Select Current Logons to display the number of current successful logons.
• Select Sessions to display the number of sessions.
To view blocking and waiting sessions
1. Go to the Performance Hub page of the Oracle Cloud Infrastructure Console for the database that you want
to manage. See To navigate to Performance Hub in the Oracle Cloud Infrastructure Console interface of an
Autonomous Database on page 1604 for more information.
• The database name is displayed at the top of the Performance Hub page.
• The time period for which information is available on the Performance Hub is displayed in the Time Range
field. The selected time period is indicated on the time slider graph by the adjustable time slider box. See the
Time Range information in Performance Hub Features on page 1599 to learn how to set the duration of the
time to be monitored.
2. Click Blocking Sessions to display details about current blocking and waiting sessions. Analysis of historical
sessions is not supported.
3. Click the link in each column of the table to view the details of the listed blocking and waiting sessions, as shown
in the following table.
Note:

If you see an error message that says the server failed to get performance
details for the selected session at the selected time, try the selection again.
If the same error message is displayed, try a different time selection. If that
fails, contact Oracle Support.

Column          Description

User Name       The name of the user.

Status          Indicates whether the session is active, inactive, or expired.

Lock            The lock type for the session. Click the lock type to display a table with more
                information about the session lock. It lists the Lock Type, Lock Mode, Lock Request,
                Object Type, Subobject Type, Time, ID1, ID2, Lock Object Address, and Lock Address of
                the selected session.

User Session    Lists the Instance, SID, and Serial number of the session.

SQL ID          The ID of the SQL associated with the session.

Wait Event      The wait event for the session. Click the wait event to show additional wait event
                details.

Object Name     The name of the locked database object.

Blocking Time   The time that a blocking session has been blocking another session.

Wait Time       The time that a session has been waiting.

Setting the Minimum Wait Time


The minimum wait time works like a filter for the Blocking Sessions information. It sets the minimum time that a
session must wait before it is displayed in the tab. For example, if the minimum wait time is set to three seconds, and
a session has waited only two seconds, it is not displayed in the table. But if you change the minimum wait time to
one second, the session that waited only two seconds is added to the display.
Note:

The minimum wait time default setting is three seconds.

Killing a Session
1. Click the check box at the left of the session User Name to select a session. The Kill Session button is enabled.
2. Click Kill Session. The Kill Session confirmation dialog box is displayed.
3. Click Kill Session to end the session.

Displaying Lock Details


1. In the session Lock column, click the name of the lock type (Lock or Exclusive Lock) for the session. The
Session Lock Information table is displayed.
2. Note the information in the table and use as needed to determine any action to take.

Displaying Wait Event Information


1. In the session Wait Event column, click the name of the wait event for the selected session. The Wait Event
Details message box is displayed.
2. Note the information in the message box and use as needed to determine any action to take.

Displaying Session Details


1. In the session User Session column, click the session identifier for the session. The Performance Hub Session
Details page is displayed.
2. Optionally move the time slider to display a specific time range of the session.
3. Use the Session Details page to explore additional details about the session.

Displaying SQL Details


1. In the session SQL ID column, click the SQL ID associated with the session. The Performance Hub SQL Details
page is displayed.
2. Optionally move the time slider to display a specific time range of the session.
3. Select one or more of the following tabs, note the information in them, and take any action needed.
• Summary. This tab displays the SQL Overview and Source details.
• ASH Analytics. This tab displays the SQL average active sessions.
• Execution Statistics. This tab displays the SQL plans and plan details.
• SQL Monitoring. This tab displays information about monitored SQL executions.
• SQL Text. This tab displays the SQL.
To view ADDM data
This procedure explains how to view Automatic Database Diagnostic Monitor (ADDM) information with
Performance Hub.
1. Go to the Performance Hub page of the Oracle Cloud Infrastructure Console for the database that you want
to manage. See To navigate to Performance Hub in the Oracle Cloud Infrastructure Console interface of an
Autonomous Database on page 1604 for more information. The database name is displayed at the top of the
Performance Hub page.
2. Click the ADDM tab to open it.
3. Use the menu located below Quick Select to select a time range. The data for that time range is displayed.
4. In the Activity Summary area, just below the data, click one of the gray AWR snapshot icons to display
findings for the associated ADDM task. A white check mark in the gray icon indicates that there are problem
findings available. When selected, the gray icon changes to blue.
Note:

You can alternatively select an ADDM task from the menu below the
ADDM tab or by positioning the time slider above an icon.
Note:

When you manually change the ADDM task selection, either by clicking
the gray icon for an associated AWR snapshot, or by selecting an option
from the ADDM task menu, the time slider position and size are adjusted
to cover the analysis period for the ADDM task.

5. Hover over the icon to display a message about the AWR snapshot and ADDM task, including the number of
findings for the ADDM task. The findings are displayed in two tables:
• Findings table. When there are findings, the Findings table shows the Name of the finding, the Impact,
Number of recommendations, and Average Active Sessions for that finding. If there are no findings available,
the table displays a message that says no findings are available for the selected analysis period.
• Warnings and Information table. The Warnings and Information table is displayed below the Findings table.
It lists messages related to the findings.
• Warning messages identify issues such as missing data in the AWR that may affect the completeness or
accuracy of the ADDM analysis.
• Information messages provide information that is relevant to understanding the performance of the
database but does not represent a performance problem. This may include identification of non-problem
areas of the database and automatic database maintenance activity.
Note:

Both the Findings table and the Warnings and Information table are
collapsible to save space when many findings are found. Click the minus
icon to collapse a table. Click the plus icon to expand the table again.
6. If a finding has ADDM recommendations available, the name of the finding is displayed as a link. Click the
name of the finding to display more information about the finding, including a table of recommendations for
corrective actions. Each recommendation includes the problem area, the suggested action to take to solve it, and
the estimated benefit that will result when the action is taken.
7. Click the expand icon at the end of a row in the recommendations table to view a rationale for the
recommendation.

Migrating Databases to the Cloud


You can migrate your on-premises Oracle Database to an Oracle Cloud Infrastructure Database service database using
a number of different methods and tools. The method that applies to a given migration scenario depends on several
factors, including the version, character set, and platform endian format of the source and target databases.
Tip:

Oracle now offers the Zero Downtime Migration service, a quick and easy
way to move on-premises Oracle Databases and Oracle Cloud Infrastructure
Classic databases to Oracle Cloud Infrastructure. You can migrate databases
to the following types of Oracle Cloud Infrastructure systems: Exadata,
Exadata Cloud@Customer, bare metal, and virtual machine.
Zero Downtime Migration leverages Oracle Active Data Guard to create a
standby instance of your database in an Oracle Cloud Infrastructure system.
You switch over only when you are ready, and your source database remains
available as a standby. Use the Zero Downtime Migration service to migrate
databases individually or at the fleet level. See Move to Oracle Cloud Using
Zero Downtime Migration for more information.

Choosing a Migration Method


Not all migration methods apply to all migration scenarios. Many of the migration methods apply only if specific
characteristics of the source and destination databases match or are compatible. Moreover, additional factors can
affect which method you choose for your migration from among the methods that are technically applicable to your
migration scenario.
Some of the characteristics and factors to consider when choosing a migration method are:
• On-premises database version
• Database service database version
• On-premises host operating system and version
• On-premises database character set
• Quantity of data, including indexes
• Data types used in the on-premises database
• Storage for data staging
• Acceptable length of system outage
• Network bandwidth
To determine which migration methods are applicable to your migration scenario, gather the following information.
1. Database version of your on-premises database:
• Oracle Database 12c Release 2 version 12.2.0.1
• Oracle Database 12c Release 1 version 12.1.0.2 or higher
• Oracle Database 12c Release 1 version lower than 12.1.0.2
• Oracle Database 11g Release 2 version 11.2.0.3 or higher
• Oracle Database 11g Release 2 version lower than 11.2.0.3
2. For on-premises Oracle Database 12c Release 2 and Oracle Database 12c Release 1 databases, the architecture of
the database:
• Multitenant container database (CDB)
• Non-CDB
3. Endian format (byte ordering) of your on-premises database’s host platform
Some platforms are little endian and others are big endian. Query V$TRANSPORTABLE_PLATFORM to
identify the endian format, and to determine whether cross-platform tablespace transport is supported (a
sample query is shown after this list). The Oracle Cloud Infrastructure Database service uses the Linux
platform, which is little endian.
4. Database character set of your on-premises database and the Oracle Cloud Infrastructure Database database.
Some migration methods require that the source and target databases use compatible database character sets.
5. Database version of the Oracle Cloud Infrastructure Database database you are migrating to:
• Oracle Database 12c Release 2
• Oracle Database 12c Release 1
• Oracle Database 11g Release 2
Oracle Database 12c Release 2 and Oracle Database 12c Release 1 databases created on the Database service use
CDB architecture. Databases created using the Enterprise Edition software edition are single-tenant, and databases
created using the High Performance or Extreme Performance software editions are multitenant.
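The following is a minimal sketch of the endian-format query mentioned in item 3 of the preceding list. It assumes
you run it on the on-premises database host as the Oracle software owner and can connect with operating system
authentication (/ as sysdba); adjust the connection if you normally log in as SYSTEM or another account.

$ sqlplus -s / as sysdba <<'EOF'
-- Report the endian format of the platform hosting this database.
-- A row is returned only if the platform supports cross-platform tablespace transport.
SELECT d.platform_name, tp.endian_format
FROM   v$database d, v$transportable_platform tp
WHERE  tp.platform_name = d.platform_name;
EXIT;
EOF

A value of Little in the ENDIAN_FORMAT column indicates a little endian source platform, which matches the
Database service platform.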
After gathering this information, use the “source” and “destination” database versions as your guide to see which
migration methods apply to your migration scenario:
• Migrating from Oracle Database 11g to Oracle Database 11g in the Cloud on page 1618
• Migrating from Oracle Database 11g to Oracle Database 12c in the Cloud on page 1619
• Migrating from Oracle Database 12c CDB to Oracle Database 12c in the Cloud on page 1620
• Migrating from Oracle Database 12c Non-CDB to Oracle Database 12c in the Cloud on page 1621

Migration Connectivity Options


You have several connectivity options when migrating your on-premises databases to Oracle Cloud Infrastructure.
The options are listed below in order of preference.
1. FastConnect: Provides a secure connection between your existing network and your virtual cloud network (VCN)
over a private physical network instead of the internet. For more information, see FastConnect on page 3200.
2. IPSec VPN: Provides a secure connection between a dynamic routing gateway (DRG) and customer-premise
equipment (CPE), consisting of multiple IPSec tunnels. The IPSec connection is one of the components forming a
site-to-site VPN between a VCN and your on-premises network. For more information, see VPN Connect on page
2958.

3. Internet gateway: Provides a path for network traffic between your VCN and the internet. For more information,
see Internet Gateway on page 3271.

Migration Methods
Many methods exist to migrate Oracle databases to the Oracle Cloud Infrastructure Database service. Which of these
methods apply to a given migration scenario depends on several factors, including the version, character set, and
platform endian format of the source and target databases.
• Data Pump Conventional Export/Import on page 1623
• Data Pump Full Transportable on page 1624
• Data Pump Transportable Tablespace on page 1627
• Remote Cloning a PDB on page 1629
• Remote Cloning Non-CDB on page 1630
• RMAN Cross-Platform Transportable PDB on page 1630
• RMAN Cross-Platform Transportable Tablespace Backup Sets on page 1631
• RMAN Transportable Tablespace with Data Pump on page 1633
• RMAN DUPLICATE from an Active Database on page 1638
• RMAN CONVERT Transportable Tablespace with Data Pump on page 1635
• SQL Developer and INSERT Statements to Migrate Selected Objects on page 1646
• SQL Developer and SQL*Loader to Migrate Selected Objects on page 1646
• Unplugging/Plugging a PDB on page 1646
• Unplugging/Plugging Non-CDB on page 1647
• Zero Downtime Migration Service

Migrating an On-Premises Database to Oracle Cloud Infrastructure by Creating a Backup in the Cloud

Note:

This topic is not applicable to Exadata DB systems.


You can migrate an on-premises database to Oracle Cloud Infrastructure by creating a backup of your on-premises
database in Oracle Cloud Infrastructure's Database service.
Oracle provides a Python script to create a backup of your database. The script invokes an API call to create the
backup and then places the backup in Oracle Cloud Infrastructure. You can then use the Console or the API to create
a new database or DB system from that backup. Backups created using the instructions in this topic appear under
Standalone Backups in the console.
The Python script is bundled as a part of the Oracle Cloud Infrastructure CLI installation. Oracle provides the
migration script and associated files at no cost. Normal Object Storage charges apply for the storage of your backup
in Oracle Cloud Infrastructure.
Compatibility
The scripted migration process is compatible with the following bare metal and virtual machine DB system
configurations:

Configuration: Database Version
Version or Type: 19.x, 18.x, 12.2.0.1, 12.1.0.2, 11.2.0.4
Notes:
• For versions 19c, 18c, 12.2.0.1, and 12.1.0.2:
  • Only Container Databases (CDBs) are supported. The scripted migration process may work with non-CDB
    databases for these database versions, but Oracle does not provide support for the migration of non-CDB
    databases using the script described in this topic. For information on creating an on-premises pluggable
    database (PDB) by cloning a non-CDB in Oracle Database 19c, see About Cloning a Non-CDB. For an overview
    of multitenant architecture in Oracle Database 19c, see Introduction to the Multitenant Architecture. For
    information on creating an on-premises pluggable database (PDB) from a non-CDB database in Oracle Database
    12c Release 2 (12.2), see Upgrading a Non-CDB Oracle Database To a PDB on a CDB. For an overview of
    multitenant architecture in 12c Release 2, see Overview of Managing a Multitenant Environment.
  • The Oracle Cloud Infrastructure Database service will attempt to run datapatch, which requires read/write
    mode. If there are pluggable databases (PDBs), they should also be in read/write mode to ensure that
    datapatch runs on them.
• For version 11.2.0.4, depending on the source database patch level, you may need to roll back patches prior
  to migrating. See Rolling Back Patches on a Version 11.2 Database for more information.
• If your on-premises database has an interim patch (previously known as a one-off patch), see Applying Interim
  Patches for details on applying the patch in Oracle Cloud Infrastructure.

Configuration: Source Database Platform
Version or Type: Oracle Enterprise Linux / Red Hat Enterprise Linux 5.x; Oracle Linux / Red Hat Enterprise
Linux 6.x; Oracle Linux / Red Hat Enterprise Linux 7.x
Notes:
• The scripted migration described in this topic may work in Microsoft Windows environments, but Oracle
  currently does not provide support for this script in Windows.
• For Oracle Linux 6.x users, see Configuring Oracle Linux 6 to install Python for details on configuring the
  operating system to install a compatible version of Python. See Installing the CLI for more information
  regarding Oracle Linux 6.

Configuration: Encryption
Version or Type: TDE; Non-TDE
Notes:
• In a non-TDE configuration, the RMAN encryption password is required.
• Oracle requires that unencrypted on-premises databases be encrypted after they are restored to Oracle Cloud
  Infrastructure. The stored RMAN standalone backups are always encrypted.

Configuration: Target Database Edition
Version or Type: Standard Edition; Enterprise Edition; Enterprise Edition - High Performance; Enterprise
Edition - Extreme Performance

Configuration: Cluster
Version or Type: Single; RAC
Prerequisites
On the source database host:
• Outbound internet connectivity for installing Python packages, running yum install, and access to the Oracle
Cloud Infrastructure API and Object Storage.
• RMAN configuration to autobackup controlfile and spfile:

RMAN> CONFIGURE CONTROLFILE AUTOBACKUP ON;

Note:

RMAN configuration changes must be completed prior to running the


script. The script may modify RMAN parameters as required to complete
the backup and migration tasks.
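As a minimal sketch, you can confirm and, if necessary, set this prerequisite from the source database host as
follows. It assumes the oracle OS user and operating system authentication to the source instance; only the
CONFIGURE command is required by the prerequisite, and the SHOW command simply reports the current setting.

$ rman target / <<'EOF'
# Display the current autobackup setting, then enable it.
# With autobackup ON, the spfile is included in each control file autobackup.
SHOW CONTROLFILE AUTOBACKUP;
CONFIGURE CONTROLFILE AUTOBACKUP ON;
EOF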

To Migrate an On-Premises Database Using a Standalone Backup


Perform the following tasks on the source database host:
1. Create a directory named /home/oracle/migrate.
Tip:

You can name the migrate portion of the directory path anything you
want. If you use a different name, you must adjust all of the paths that
appear in this task accordingly. The following examples assume the name
migrate for simplicity and clarity.
2. As root, run the CLI installer in the directory you created in step 1. (For example, /home/oracle/migrate.)
See Installing the CLI for instructions on running the installer script in either Windows or the Bash environment
(for MacOS, Linux, and Unix).
The installer installs Python 3.6.0 if neither Python 2.7 nor Python 3.6 already exists on the machine. The
installer also installs the Python script required to create and migrate a standalone backup from an on-premises
database. On Oracle Linux 6, a newer version of Python (such as Python 3.6.0) is usually required. Use the
instructions in Configuring Oracle Linux 6 to install Python, later in this topic, to configure Oracle Linux 6
before running the backup script. A sample installer invocation follows this step.
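For example, on a Linux source host the download and run sequence might look like the following sketch, executed
as root. The install.sh URL shown here is the publicly documented OCI CLI installer location and is included as an
assumption; confirm the current location in Installing the CLI before you use it.

# mkdir -p /home/oracle/migrate                 # step 1: working directory
# cd /home/oracle/migrate
# curl -L -o install.sh https://raw.githubusercontent.com/oracle/oci-cli/master/scripts/install/install.sh
# bash ./install.sh                             # starts the interactive installer shown in step 4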
3. Copy the following files into the new directory:
• Oracle Database Backup Module (opc_install.jar)
• Your API *.pem key file.
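Continuing the sketch, staging the two files might look like the following; the /tmp source paths and the
oci_api_key.pem file name are placeholders for wherever you downloaded the backup module and generated your API
signing key.

# cp /tmp/opc_install.jar /home/oracle/migrate/     # Oracle Database Backup Module
# cp /tmp/oci_api_key.pem /home/oracle/migrate/     # your API signing private key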
4. Respond to the prompts as follows:

(yum install)
Is this ok [y/N]: y

===> Missing native dependencies. Continue and install the following
dependencies: gcc, libffi-devel, python36u-devel, openssl-devel? (Y/n): Y

===> In what directory would you like to place the install? (leave blank
to use '/root/lib/oracle-cli'): /home/oracle/migrate/lib/oracle-cli

===> In what directory would you like to place the 'oci' executable?
(leave blank to use '/root/bin'): /home/oracle/migrate/bin

===> In what directory would you like to place the OCI scripts? (leave
blank to use '/root/bin/oci-cli-scripts'): /home/oracle/migrate/bin/oci-cli-scripts

===> Currently supported optional packages are: ['db (will install cx_Oracle)']
What optional CLI packages would you like to be installed (comma separated
names; press enter if you don't need any optional packages)?: db

===> Modify profile to update your $PATH and enable shell/tab completion
now? (Y/n): Y

===> Enter a path to an rc file to update (leave blank to use
'/root/.bashrc'): /home/oracle/.bashrc
5. Perform the following file operations:

# mv /home/oracle/migrate/lib/oracle-cli/lib/python<version>/site-packages/oci_cli/scripts/dbaas.py \
     /home/oracle/migrate/lib/oracle-cli/lib/python<version>/site-packages/oci_cli/scripts/dbaas_orig.py

# cp /home/oracle/migrate/dbaas_0704.py \
     /home/oracle/migrate/lib/oracle-cli/lib/python<version>/site-packages/oci_cli/scripts/dbaas.py

# chown -R oracle:oinstall /home/oracle/migrate


6. Edit the /home/oracle/migrate/config.txt file so that it contains your tenancy, user, key, and region details:

[DEFAULT]
tenancy=<your_tenancy_OCID>
user=<your_user_OCID>
fingerprint=<fingerprint>
key_file=/home/oracle/migrate/<your_api_key>.pem
region=<region>

If you do not know your API signing key's fingerprint, see How to Get the Key's Fingerprint on page 4219.
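As a sketch, the fingerprint can also be computed directly from the private key file with OpenSSL; the key file
name below is a placeholder for the .pem file you copied in step 3. The colon-separated value that the command
prints is what goes in the fingerprint line of config.txt.

$ openssl rsa -pubout -outform DER -in /home/oracle/migrate/oci_api_key.pem | openssl md5 -c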
7. As the oracle user (not root), run one of the following sets of commands, depending on the type of database you
are migrating.
For a non-TDE database:

export AD=<destination_availability_domain>
export C=<destination_compartment_OCID>
export ORACLE_SID=<ORACLE_SID>
export ORACLE_HOME=<ORACLE_HOME>
export PATH=$PATH:$ORACLE_HOME/bin
export LC_ALL=en_US.UTF-8
export ORACLE_UNQNAME=<source_DB_unique_name>
rm -rf /home/oracle/migrate/onprem_upload
cd /home/oracle/migrate/bin/oci-cli-scripts/
./create_backup_from_onprem --config-file /home/oracle/migrate/config.txt \
  --display-name <example_display_name> --availability-domain $AD \
  --edition ENTERPRISE_EDITION_EXTREME_PERFORMANCE \
  --opc-installer-dir /home/oracle/migrate \
  --tmp-dir /home/oracle/migrate/onprem_upload --compartment-id $C \
  --rman-password <password>

For a TDE-enabled database:

export AD=<destination_availability_domain>
export C=<destination_compartment_OCID>
export ORACLE_SID=<ORACLE_SID>
export ORACLE_HOME=<ORACLE_HOME>
export PATH=$PATH:$ORACLE_HOME/bin
rm -rf /home/oracle/migrate/onprem_upload
cd /home/oracle/migrate/bin/oci-cli-scripts/
./create_backup_from_onprem --config-file /home/oracle/migrate/config.txt \
  --display-name <example_display_name> --availability-domain $AD \
  --edition ENTERPRISE_EDITION_EXTREME_PERFORMANCE \
  --opc-installer-dir /home/oracle/migrate \
  --tmp-dir /home/oracle/migrate/onprem_upload --compartment-id $C

See the following list of parameters used by the script for more details.
8. Create a new database or launch a new DB system using the backup you created in the preceding step. See
Creating Databases on page 1415 for information on creating a new database from a backup. See Creating Bare
Metal and Virtual Machine DB Systems on page 1371 for information on creating a new DB system from a
backup.

Configuring Oracle Linux 6 to install Python


In Oracle Linux 6, use the following /etc/yum.repos.d/ol6.repo file to ensure that the installer can install a
compatible version of Python if one is not already present. Put this file in place before attempting to run the
installer with the ./install.sh command.

[ol6_latest]
name=Oracle Linux $releasever Latest ($basearch)
baseurl=http://yum.oracle.com/repo/OracleLinux/OL6/latest/$basearch/
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-oracle
gpgcheck=1
enabled=1

Parameters used by the script

Parameter                Description                                                      Required

--config-file            The path to the oci-cli config file. The default path is         No
                         ~/.oci/config.

--profile                The profile in the config file to load. This profile will        No
                         also be used to locate any default parameter values which
                         have been specified in the OCI CLI-specific configuration
                         file. The default value is DEFAULT.

--compartment-id         The compartment OCID of the Oracle Cloud Infrastructure          Yes
                         compartment that will contain your standalone backup.

--display-name           The name of the backup, as you want it to be displayed in        Yes
                         the OCI Console under Standalone Backups. Avoid entering
                         confidential information.

--availability-domain    The availability domain where the backup is to be stored.        Yes

--edition                The edition of the Oracle Cloud Infrastructure DB system         Yes
                         that will contain the database created from the standalone
                         backup. You can choose the same edition as the on-premises
                         database, or any edition above that of the on-premises
                         database. The choices, listed from lowest to highest, are:
                         STANDARD_EDITION, ENTERPRISE_EDITION,
                         ENTERPRISE_EDITION_HIGH_PERFORMANCE,
                         ENTERPRISE_EDITION_EXTREME_PERFORMANCE.

--opc-installer-dir      The directory containing the opc_install.jar file. This is       Yes
                         the directory you created in step 1 of this procedure.

--additional-opc-args    Optional additional arguments for the opc installer.             No

--tmp-dir                Optional temporary directory for intermediate files.             No

--rman-password          The RMAN password to use for the standalone backup. The          Required if TDE is
                         password must have 8 or more characters.                         not enabled

--rman-channels          The number of RMAN channels to use. The default value is 5.      No

--help                   Displays in-line help for the script in the OCI-CLI              No
                         environment.

The script will produce a standalone backup of your on-premises database in your Oracle Cloud Infrastructure
tenancy. You can check the Console for your backup by viewing the Standalone Backups page in the Database
service, under Bare Metal, VM, and Exadata.
Tip:

To access command line help for the backup script, run the following
command in the /home/oracle/migrate/bin/oci-cli-scripts/ directory:
create_backup_from_onprem --help

Migrating from Oracle Database 11g to Oracle Database 11g in the Cloud
You can migrate Oracle Database 11g databases from on-premises to Oracle Database 11g databases in the Database
service using several different methods.
The applicability of some of the migration methods depends on the on-premises database’s character set and platform
endian format.
If you have not already done so, determine the database character set of your on-premises database, and determine
the endian format of the platform your on-premises database resides on. Use this information to help you choose an
appropriate method.
• Data Pump Conventional Export/Import
This method can be used regardless of the endian format and database character set of the on-premises database.
For the steps this method entails, see Data Pump Conventional Export/Import on page 1623.
• Data Pump Transportable Tablespace
This method can be used only if the on-premises platform is little endian, and the database character sets of your
on-premises database and the Oracle Cloud Infrastructure Database database are compatible.
For the steps this method entails, see Data Pump Transportable Tablespace on page 1627.
• RMAN Transportable Tablespace with Data Pump
This method can be used only if the on-premises platform is little endian, and the database character sets of your
on-premises database and the Oracle Cloud Infrastructure Database database are compatible.
For the steps this method entails, see RMAN Transportable Tablespace with Data Pump on page 1633.
• RMAN CONVERT Transportable Tablespace with Data Pump
This method can be used only if the database character sets of your on-premises database and the Oracle Cloud
Infrastructure Database database are compatible.
This method is similar to the Data Pump Transportable Tablespace method, with the addition of the
RMAN CONVERT command to enable transport between platforms with different endianness. Query
V$TRANSPORTABLE_PLATFORM to determine if the on-premises database platform supports cross-platform
tablespace transport and to determine the endian format of the platform. The Database service platform is little-
endian format.
For the steps this method entails, see RMAN CONVERT Transportable Tablespace with Data Pump on page
1635.

Migrating from Oracle Database 11g to Oracle Database 12c in the Cloud
You can migrate Oracle Database 11g databases from on-premises to Oracle Database 12c databases in the Database
service using several different methods.
The applicability of some of the migration methods depends on the on-premises database’s version, database
character set and platform endian format.
If you have not already done so, determine the database version and database character set of your on-premises
database, and determine the endian format of the platform your on-premises database resides on. Use this information
to help you choose an appropriate method.
• Data Pump Conventional Export/Import
This method can be used regardless of the endian format and database character set of the on-premises database.
For the steps this method entails, see Data Pump Conventional Export/Import on page 1623.
• Data Pump Transportable Tablespace
This method can be used only if the on-premises platform is little endian, and the database character sets of your
on-premises database and the Database service database are compatible.
For the steps this method entails, see Data Pump Transportable Tablespace on page 1627.
• RMAN Transportable Tablespace with Data Pump
This method can be used only if the on-premises platform is little endian, and the database character sets of your
on-premises database and the Database service database are compatible.
For the steps this method entails, see RMAN Transportable Tablespace with Data Pump on page 1633.
• RMAN CONVERT Transportable Tablespace with Data Pump
This method can be used only if the database character sets of your on-premises database and the Database service
database are compatible.
This method is similar to the Data Pump Transportable Tablespace method, with the addition of the
RMAN CONVERT command to enable transport between platforms with different endianness. Query
V$TRANSPORTABLE_PLATFORM to determine if the on-premises database platform supports cross-platform
tablespace transport and to determine the endian format of the platform. The Database service platform is little-
endian format.
For the steps this method entails, see RMAN CONVERT Transportable Tablespace with Data Pump on page
1635.
• Data Pump Full Transportable
This method can be used only if the source database release version is 11.2.0.3 or later, and the database character
sets of your on-premises database and the Database service database are compatible.
For the steps this method entails, see Data Pump Full Transportable on page 1624.

Migrating from Oracle Database 12c CDB to Oracle Database 12c in the Cloud
You can migrate Oracle Database 12c CDB databases from on-premises to Oracle Database 12c databases in the
Oracle Cloud Infrastructure Database service using several different methods.
The applicability of some of the migration methods depends on the on-premises database’s character set and platform
endian format.
If you have not already done so, determine the database character set of your on-premises database, and determine
the endian format of the platform your on-premises database resides on. Use this information to help you choose an
appropriate method.
• Data Pump Conventional Export/Import
This method can be used regardless of the endian format and database character set of the on-premises database.
For the steps this method entails, see Data Pump Conventional Export/Import on page 1623.
• Data Pump Transportable Tablespace
This method can be used only if the on-premises platform is little endian, and the database character sets of your
on-premises database and the Database service database are compatible.
For the steps this method entails, see Data Pump Transportable Tablespace on page 1627.
• RMAN Transportable Tablespace with Data Pump
This method can be used only if the on-premises platform is little endian, and the database character sets of your
on-premises database and the Oracle Cloud Infrastructure Database service database are compatible.
For the steps this method entails, see RMAN Transportable Tablespace with Data Pump on page 1633.
• RMAN CONVERT Transportable Tablespace with Data Pump
This method can be used only if the database character sets of your on-premises database and the Database
service database are compatible.
This method is similar to the Data Pump Transportable Tablespace method, with the addition of the
RMAN CONVERT command to enable transport between platforms with different endianness. Query
V$TRANSPORTABLE_PLATFORM to determine if the on-premises database platform supports cross-platform
tablespace transport and to determine the endian format of the platform. The Database service platform is little-
endian format.
For the steps this method entails, see RMAN CONVERT Transportable Tablespace with Data Pump on page
1635.
• RMAN Cross-Platform Transportable Tablespace Backup Sets
This method can be used only if the database character sets of your on-premises database and the Database service
database are compatible.
For the steps this method entails, see RMAN Cross-Platform Transportable Tablespace Backup Sets on page
1631.
• Data Pump Full Transportable
This method can be used only if the database character sets of your on-premises database and the Database service
database are compatible.
For the steps this method entails, see Data Pump Full Transportable on page 1624.
• Unplugging/Plugging (CDB)
This method can be used only if the on-premises platform is little endian, and the on-premises database and
Database service database have compatible database character sets and national character sets.
For the steps this method entails, see Unplugging/Plugging a PDB on page 1646.
• Remote Cloning (CDB)
This method can be used only if the on-premises platform is little endian, the on-premises database release
is 12.1.0.2 or higher, and the on-premises database and Database service database have compatible database
character sets and national character sets.
For the steps this method entails, see Remote Cloning a PDB on page 1629.
• RMAN Cross-Platform Transportable PDB
This method can be used only if the on-premises platform is little endian, and the database character sets of your
on-premises database and the Database service database are compatible.
For the steps this method entails, see RMAN Cross-Platform Transportable PDB on page 1630.
• SQL Developer and SQL*Loader to Migrate Selected Objects
You can use SQL Developer to create a cart into which you add selected objects to be loaded into your Oracle
Database 12c database on the cloud. In this method, you use SQL*Loader to load the data into your cloud
database.
For the steps this method entails, see SQL Developer and SQL*Loader to Migrate Selected Objects on page
1646.
• SQL Developer and INSERT Statements to Migrate Selected Objects
You can use SQL Developer to create a cart into which you add selected objects to be loaded into your Oracle
Database 12c database on the cloud. In this method, you use SQL INSERT statements to load the data into your
cloud database.
For the steps this method entails, see SQL Developer and INSERT Statements to Migrate Selected Objects on
page 1646.

Migrating from Oracle Database 12c Non-CDB to Oracle Database 12c in the Cloud
You can migrate Oracle Database 12c non-CDB databases from on-premises to Oracle Database 12c databases in
Oracle Cloud Infrastructure Database service using several different methods.
The applicability of some of the migration methods depends on the on-premises database’s character set and platform
endian format.
If you have not already done so, determine the database character set of your on-premises database, and determine
the endian format of the platform your on-premises database resides on. Use this information to help you choose an
appropriate method.
• Data Pump Conventional Export/Import
This method can be used regardless of the endian format and database character set of the on-premises database.
For the steps this method entails, see Data Pump Conventional Export/Import on page 1623.
• Data Pump Transportable Tablespace
This method can be used only if the on-premises platform is little endian, and the database character sets of your
on-premises database and the Database service database are compatible.
For the steps this method entails, see Data Pump Transportable Tablespace on page 1627.
• RMAN Transportable Tablespace with Data Pump
This method can be used only if the on-premises platform is little endian, and the database character sets of your
on-premises database and the Database service database are compatible.
For the steps this method entails, see RMAN Transportable Tablespace with Data Pump on page 1633.
• RMAN CONVERT Transportable Tablespace with Data Pump
This method can be used only if the database character sets of your on-premises database and the Database service
database are compatible.
This method is similar to the Data Pump Transportable Tablespace method, with the addition of the
RMAN CONVERT command to enable transport between platforms with different endianness. Query
V$TRANSPORTABLE_PLATFORM to determine if the on-premises database platform supports cross-platform
tablespace transport and to determine the endian format of the platform. The Database service platform is little-
endian format.
For the steps this method entails, see RMAN CONVERT Transportable Tablespace with Data Pump on page
1635.
• RMAN Cross-Platform Transportable Tablespace Backup Sets
This method can be used only if the database character sets of your on-premises database and the Database
service database are compatible.
For the steps this method entails, see RMAN Cross-Platform Transportable Tablespace Backup Sets on page
1631.
• Data Pump Full Transportable
This method can be used only if the database character sets of your on-premises database and the Database service
database are compatible.
For the steps this method entails, see Data Pump Full Transportable on page 1624.
• Unplugging/Plugging (non-CDB)
This method can be used only if the on-premises platform is little endian, and the on-premises database and
Database service database have compatible database character sets and national character sets.
You can use the unplug/plug method to migrate an Oracle Database 12c non-CDB database to Oracle Database
12c in the cloud. This method provides a way to consolidate several non-CDB databases into a single Oracle
Database 12c CDB on the cloud.
For the steps this method entails, see Unplugging/Plugging Non-CDB on page 1647.
• Remote Cloning (non-CDB)
This method can be used only if the on-premises platform is little endian, the on-premises database release
is 12.1.0.2 or higher, and the on-premises database and Database service database have compatible database
character sets and national character sets.
You can use the remote cloning method to copy an Oracle Database 12c non-CDB on-premises database to your
Oracle Database 12c database in the cloud.
For the steps this method entails, see Remote Cloning Non-CDB on page 1630.
• SQL Developer and SQL*Loader to Migrate Selected Objects
You can use SQL Developer to create a cart into which you add selected objects to be loaded into your Oracle
Database 12c database on the cloud. In this method, you use SQL*Loader to load the data into your cloud
database.
For the steps this method entails, see SQL Developer and SQL*Loader to Migrate Selected Objects on page
1646.
• SQL Developer and INSERT Statements to Migrate Selected Objects
You can use SQL Developer to create a cart into which you add selected objects to be loaded into your Oracle
Database 12c database on the cloud. In this method, you use SQL INSERT statements to load the data into your
cloud database.
For the steps this method entails, see SQL Developer and INSERT Statements to Migrate Selected Objects on
page 1646.

Data Pump Conventional Export/Import


You can use this method regardless of the endian format and database character set of the on-premises database.
To migrate an on-premises source database, tablespace, schema, or table to the database on a Database service
database deployment using Data Pump Export and Import, you perform these tasks:
1. On the on-premises database host, invoke Data Pump Export and export the on-premises database.
2. Use a secure copy utility to transfer the dump file to the Database service compute node.
3. On the Database service compute node, invoke Data Pump Import and import the data into the database.
4. After verifying that the data has been imported successfully, you can delete the dump file.
For information about Data Pump Import and Export, see these topics:
• "Data Pump Export Modes" in Oracle Database Utilities for Release 12.2, 12.1 or 11.2.
• "Data Pump Import Modes" in Oracle Database Utilities for Release 12.2, 12.1 or 11.2.
Data Pump Conventional Export/Import: Example
This example provides a step-by-step demonstration of the tasks required to migrate a schema from an on-premises
Oracle database to a Database service database.
This example illustrates a schema mode export and import. The same general procedure applies for a full database,
tablespace, or table export and import.
In this example, the on-premises database is on a Linux host.
1. On the on-premises database host, invoke Data Pump Export to export the schemas.
a. On the on-premises database host, create an operating system directory to use for the on-premises database
export files.

$ mkdir /u01/app/oracle/admin/orcl/dpdump/for_cloud
b. On the on-premises database host, invoke SQL*Plus and log in to the on-premises database as the SYSTEM
user.

$ sqlplus system
Enter password: <enter the password for the SYSTEM user>
c. Create a directory object in the on-premises database to reference the operating system directory.

SQL> CREATE DIRECTORY dp_for_cloud AS '/u01/app/oracle/admin/orcl/dpdump/for_cloud';
d. Exit from SQL*Plus.
e. On the on-premises database host, invoke Data Pump Export as the SYSTEM user or another user with the
DATAPUMP_EXP_FULL_DATABASE role and export the on-premises schemas. Provide the password for the
user when prompted.

$ expdp system SCHEMAS=fsowner DIRECTORY=dp_for_cloud

2. Use a secure copy utility to transfer the dump file to the Database service compute node.
In this example the dump file is copied to the /u01 directory. Choose the appropriate location based on the size
of the file that will be transferred.
a. On the Database service compute node, create a directory for the dump file.

$ mkdir /u01/app/oracle/admin/ORCL/dpdump/from_onprem
b. Before using the scp command to copy the export dump file, make sure the SSH private key that provides
access to the Database service compute node is available on your on-premises host.
c. On the on-premises database host, use the scp utility to transfer the dump file to the Database service compute
node.

$ scp -i private_key_file \
/u01/app/oracle/admin/orcl/dpdump/for_cloud/expdat.dmp \
oracle@IP_address_DBaaS_VM:/u01/app/oracle/admin/ORCL/dpdump/from_onprem
3. On the Database service compute node, invoke Data Pump Import and import the data into the database.
a. On the Database service compute node, invoke SQL*Plus and log in to the database as the SYSTEM user.

$ sqlplus system
Enter password: <enter the password for the SYSTEM user>
b. Create a directory object in the Database service database.

SQL> CREATE DIRECTORY dp_from_onprem AS '/u01/app/oracle/admin/ORCL/dpdump/from_onprem';
c. If they do not exist, create the tablespace(s) for the objects that will be imported.
d. Exit from SQL*Plus.
e. On the Database service compute node, invoke Data Pump Import and connect to the database. Import the data
into the database.

$ impdp system SCHEMAS=fsowner DIRECTORY=dp_from_onprem


4. After verifying that the data has been imported successfully, you can delete the expdat.dmp file.
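The following is a minimal sketch, not part of the documented procedure, of one way to spot-check the import on
the Database service compute node before deleting the dump file. It assumes operating system authentication as
the oracle user and uses the fsowner schema and dump file location from this example.

$ sqlplus -s / as sysdba <<'EOF'
-- Compare these object counts with the same query run against the source database.
SELECT object_type, COUNT(*) AS object_count
FROM   dba_objects
WHERE  owner = 'FSOWNER'
GROUP  BY object_type
ORDER  BY object_type;
EXIT;
EOF
$ rm /u01/app/oracle/admin/ORCL/dpdump/from_onprem/expdat.dmp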

Data Pump Full Transportable


You can use this method only if the source database release version is 11.2.0.3 or later, and the database character sets
of your on-premises database and the Oracle Cloud Infrastructure Database service database are compatible.
You can use the Data Pump full transportable method to copy an entire database from your on-premises host to the
database on a Database service database deployment.
To migrate an Oracle Database 11g on-premises database to the Oracle Database 12c database on a Database service
database deployment using the Data Pump full transportable method, you perform these tasks:
1. On the on-premises database host, prepare the database for the Data Pump full transportable export by placing the
user-defined tablespaces in READ ONLY mode.
2. On the on-premises database host, invoke Data Pump Export to perform the full transportable export.
3. Use a secure copy utility to transfer the Data Pump Export dump file and the datafiles for all of the user-defined
tablespaces to the Database service compute node.
4. Set the on-premises tablespaces back to READ WRITE.
5. On the Database service compute node, prepare the database for the tablespace import.
6. On the Database service compute node, invoke Data Pump Import and connect to the database.
7. After verifying that the data has been imported successfully, you can delete the dump file.

Data Pump Full Transportable: Example


This example provides a step-by-step demonstration of the tasks required to migrate an Oracle Database 11g database
to a Database service 12c database.
In this example, the source database is on a Linux host.
1. On the source database host, prepare the database for the Data Pump full transportable export.
a. On the source database host, create a directory in the operating system to use for the source export.

$ mkdir /u01/app/oracle/admin/orcl/dpdump/for_cloud
b. On the source database host, invoke SQL*Plus and log in to the source database as the SYSTEM user.

$ sqlplus system
Enter password: <enter the password for the SYSTEM user>
c. Create a directory object in the source database to reference the operating system directory.

SQL> CREATE DIRECTORY dp_for_cloud AS '/u01/app/oracle/admin/orcl/dpdump/for_cloud';
d. Determine the name(s) of the tablespaces and data files that belong to the user-defined tablespaces by querying
DBA_DATA_FILES. These files will also be listed in the export output.

SQL> SELECT tablespace_name, file_name FROM dba_data_files;


TABLESPACE_NAME FILE_NAME
--------------- --------------------------------------------------
USERS /u01/app/oracle/oradata/orcl/users01.dbf
UNDOTBS1 /u01/app/oracle/oradata/orcl/undotbs01.dbf
SYSAUX /u01/app/oracle/oradata/orcl/sysaux01.dbf
SYSTEM /u01/app/oracle/oradata/orcl/system01.dbf
EXAMPLE /u01/app/oracle/oradata/orcl/example01.dbf
FSDATA /u01/app/oracle/oradata/orcl/fsdata01.dbf
FSINDEX /u01/app/oracle/oradata/orcl/fsindex01.dbf
SQL>
e. On the source database host, set all tablespaces that will be transported (the transportable set) to READ ONLY
mode.

SQL> ALTER TABLESPACE example READ ONLY;


Tablespace altered.
SQL> ALTER TABLESPACE fsindex READ ONLY;
Tablespace altered.
SQL> ALTER TABLESPACE fsdata READ ONLY;
Tablespace altered.
SQL> ALTER TABLESPACE users READ ONLY;
Tablespace altered.
SQL>
f. Exit from SQL*Plus.
2. On the source database host, invoke Data Pump Export to perform the full transportable export. Specify FULL=y
and TRANSPORTABLE=always. Because this is an Oracle Database 11g database and full transportable is an
Oracle Database 12c feature, specify VERSION=12. Provide the password for the SYSTEM user when prompted.

$ expdp system FULL=y TRANSPORTABLE=always VERSION=12 DUMPFILE=expdat.dmp DIRECTORY=dp_for_cloud

3. Use a secure copy utility to transfer the Data Pump Export dump file and the datafiles for all of the user-defined
tablespaces to the Database service compute node.
In this example the dump file is copied to the /u01 directory. Choose the appropriate location based on the size
of the file that will be transferred.
a. On the Database service compute node, create a directory for the dump file.

$ mkdir /u01/app/oracle/admin/ORCL/dpdump/from_source
b. Before using the scp utility to copy files, make sure the SSH private key that provides access to the Database
service compute node is available on your source host.
c. On the source database host, use the scp utility to transfer the dump file and all datafiles of the transportable
set to the Database service compute node.

$ scp -i private_key_file \
/u01/app/oracle/admin/orcl/dpdump/for_cloud/expdat.dmp \
oracle@compute_node_IP_address:/u01/app/oracle/admin/ORCL/dpdump/from_source

$ scp -i private_key_file \
/u01/app/oracle/oradata/orcl/example01.dbf \
oracle@compute_node_IP_address:/u02/app/oracle/oradata/ORCL/PDB2

$ scp -i
