Oracle Database 11g Administration Works
D50102GC20
Edition 2.0
September 2009
D62542
Authors
Mark Fuller

Technical Contributors and Reviewers
Maria Billings
Herbert Bradbury
Yanti Chang
Timothy Chien
Andy Fotunak
Gerlinde Frenzen
Steve Friedberg

Editors
Raj Kumar
Daniel Milne

Graphic Designer
Rajiv Chandrabhanu

Publishers
Jobi Varghese
Veena Narasimhan

Copyright © 2009, Oracle. All rights reserved.

This document contains proprietary information and is protected by copyright and other intellectual property laws. You may copy and print this document solely for your own use in an Oracle training course. The document may not be modified or altered in any way. Except where your use constitutes "fair use" under copyright law, you may not use, share, download, upload, copy, print, display, perform, reproduce, publish, license, post, transmit, or distribute this document in whole or in part without the express authorization of Oracle.

The information contained in this document is subject to change without notice. If you find any problems in the document, please report them in writing to: Oracle University, 500 Oracle Parkway, Redwood Shores, California 94065 USA. This document is not warranted to be error-free.

Restricted Rights Notice

If this documentation is delivered to the United States Government or anyone using
Contents
I Introduction
Course Objectives I-2
Suggested Schedule I-3
Oracle Products and Services I-4
Oracle Database 11g: “g” Stands for Grid I-5
Grid Infrastructure for Single-Instance I-7
Segments, Extents, and Blocks 1-37
Tablespaces and Data Files 1-38
SYSTEM and SYSAUX Tablespaces 1-39
Automatic Storage Management 1-40
ASM Storage Components 1-41
Interacting with an Oracle Database: Memory, Processes and Storage 1-42
Quiz 1-44
Summary 1-46
Practice 1: Overview 1-47
Choosing the Database Edition 2-34
Specifying Installation Location 2-35
Choosing Operating System Groups 2-36
Performing Prerequisite Checks 2-37
Installation Summary Page 2-38
The Install Product Page 2-39
Installation Finished 2-40
Installation Option: Silent Mode 2-41
Quiz 2-42
Summary 2-44
Practice 2 Overview: Preparing the Database Environment 2-45
Simplified Initialization Parameters 4-14
Initialization Parameters: Examples 4-15
Using SQL*Plus to View Parameters 4-19
Changing Initialization Parameter Values 4-21
Changing Parameter Values: Examples 4-23
Quiz 4-24
Database Startup and Shutdown: Credentials 4-26
Starting Up an Oracle Database Instance 4-27
Starting Up an Oracle Database Instance: NOMOUNT 4-28
Starting Up an Oracle Database Instance: MOUNT 4-29
Starting Up an Oracle Database Instance: OPEN 4-30
Starting and Stopping ASM Instances Using asmcmd 5-16
Disk Group Overview 5-17
ASM Disks 5-18
Allocation Units 5-19
ASM Files 5-20
Extent Maps 5-21
Striping Granularity 5-22
Fine-Grained Striping 5-23
ASM Failure Groups 5-25
Stripe and Mirror Example 5-26
Failure Example 5-27
Naming Methods 6-20
Easy Connect 6-21
Local Naming 6-22
Directory Naming 6-23
External Naming Method 6-24
Configuring Service Aliases 6-25
Advanced Connection Options 6-26
Testing Oracle Net Connectivity 6-28
User Sessions: Dedicated Server Process 6-29
User Sessions: Shared Server Processes 6-30
SGA and PGA 6-31
Predefined Administrative Accounts 8-5
Creating a User 8-6
Authenticating Users 8-7
Administrator Authentication 8-9
Unlocking a User Account and Resetting the Password 8-10
Privileges 8-11
System Privileges 8-12
Object Privileges 8-14
Revoking System Privileges with ADMIN OPTION 8-15
Revoking Object Privileges with GRANT OPTION 8-16
Benefits of Roles 8-17
Deadlocks 9-14
Quiz 9-15
Summary 9-17
Practice 9 Overview: Managing Data and Concurrency 9-18
Oracle Audit Vault 11-23
Quiz 11-24
Summary 11-26
Practice 11 Overview: Implementing Oracle Database Security 11-27
12 Database Maintenance
Objectives 12-2
Database Maintenance 12-3
Viewing the Alert History 12-4
Terminology 12-5
Oracle Optimizer: Overview 12-6
13 Performance Management
Objectives 13-2
Performance Monitoring 13-3
Enterprise Manager Performance Page 13-4
Drilling Down to a Particular Wait Category 13-5
Performance Page: Throughput 13-6
Performance Monitoring: Top Sessions 13-7
Performance Monitoring: Top Services 13-8
Managing Memory Components 13-9
Enabling Automatic Memory Management (AMM) 13-10
Enabling Automatic Shared Memory Management (ASMM) 13-11
Archive Log Files 14-26
Archiver (ARCn) Process 14-27
Archive Log File: Naming and Destinations 14-28
Enabling ARCHIVELOG Mode 14-29
Quiz 14-30
Summary 14-32
Practice 14 Overview: Configuring for Recoverability 14-33
Loss of a System-Critical Data File in ARCHIVELOG Mode 16-13
Data Failure: Examples 16-14
Data Recovery Advisor 16-15
Assessing Data Failures 16-16
Data Failures 16-17
Listing Data Failures 16-18
Advising on Repair 16-19
Executing Repairs 16-20
Data Recovery Advisor Views 16-21
Quiz 16-22
Summary 16-24
17 Moving Data
Objectives 17-2
Moving Data: General Architecture 17-3
Oracle Data Pump: Overview 17-4
Oracle Data Pump: Benefits 17-5
Directory Objects for Data Pump 17-7
Creating Directory Objects 17-8
Data Pump Export and Import Clients: Overview 17-9
Data Pump Utility: Interfaces and Modes 17-10
Data Pump Export using Database Control 17-11
Data Pump Export Example: Basic Options 17-12
Data Pump Export Example: Advanced Options 17-13
Data Pump Export Example: Files 17-14
Data Pump Export Example: Schedule 17-16
Data Pump Export Example: Review 17-17
Data Pump Import Example: impdp 17-18
Data Pump Import: Transformations 17-19
Using Enterprise Manager to Monitor Data Pump Jobs 17-20
Migration with Data Pump Legacy Mode 17-21
Data Pump Legacy Mode 17-22
Managing File Locations 17-24
SQL*Loader: Overview 17-25
Loading Data with SQL*Loader 17-27
SQL*Loader Control File 17-28
Loading Methods 17-30
External Tables 17-31
External Table Benefits 17-32
Defining an External Table with ORACLE_LOADER 17-33
External Table Population with ORACLE_DATAPUMP 17-34
Using External Tables 17-35
Data Dictionary 17-36
Quiz 17-37
Summary 17-39
Practice 17 Overview: Moving Data 17-40
Appendix A: Practices and Solutions
F Oracle Restart
Oracle Applications Community G-15
Technical Support: My Oracle Support G-16
Oracle Database Product Page G-17
Thank You! G-18
Backup and Recovery Concepts
Categories of Failure
• Statement failure: A single database operation (select, insert, update, or delete) fails.
• User process failure: A single database session fails.
• Network failure: Connectivity to the database is lost.
• User error: A user successfully completes an operation, but the operation (dropping a
table or entering bad data) is incorrect.
• Instance failure: The database instance shuts down unexpectedly.
• Media failure: A loss of any file that is needed for database operation (that is, the files
have been deleted or the disk has failed).
Statement Failure
When a single database operation fails, DBA involvement may be necessary to correct errors
with user privileges or database space allocation. DBAs may also need to assist in troubleshooting, even for problems that are not directly in their task area. This can vary greatly from
one organization to another. For example, in organizations that use off-the-shelf applications
(that is, organizations that have no software developers), the DBA is the only point of contact
and must examine logic errors in applications.
To understand logic errors in applications, you should work with developers to understand the
scope of the problem. Oracle Database tools may provide assistance by helping to examine audit
trails or previous transactions.
Note: In many cases, statement failures are by design and desired. For example, security policies
and quota rules are often decided upon in advance. When a user gets an error while trying to
exceed his or her limits, it may be desired for the operation to fail and no resolution may be
necessary.
Network Failure
The best solution to network failure is to provide redundant paths for network connections.
Backup listeners, network connections, and network interface cards reduce the chance that
network failures will affect system availability.
User Error
Users may inadvertently delete or modify data. If they have not yet committed or exited their
program, they can simply roll back.
You can use Oracle LogMiner to query your online redo logs and archived redo logs through an
Enterprise Manager or SQL interface. Transaction data may persist in online redo logs longer
than it persists in undo segments; if you have configured archiving of redo information, redo
persists until you delete the archived files. Oracle LogMiner is discussed in the Oracle Database
Utilities reference.
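As a minimal sketch of how LogMiner is driven from SQL*Plus (the archived log file path is hypothetical, and appropriate privileges on DBMS_LOGMNR are assumed):

```sql
-- Register a redo source and start a LogMiner session using the online catalog.
BEGIN
  DBMS_LOGMNR.ADD_LOGFILE(
    logfilename => '/u01/arch/arch_1_123.arc',  -- illustrative path
    options     => DBMS_LOGMNR.NEW);
  DBMS_LOGMNR.START_LOGMNR(
    options => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);
END;
/

-- Query the mined redo and undo SQL, then end the session.
SELECT sql_redo, sql_undo FROM v$logmnr_contents;
EXECUTE DBMS_LOGMNR.END_LOGMNR;
```

The SQL_UNDO column is what makes LogMiner useful for user-error analysis: it shows statements that would reverse the mined changes.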
Users who drop a table can recover it from the recycle bin by flashing back the table to before
the drop. Flashback technologies are discussed in detail in the Oracle Database 11g:
Administration Workshop II course.
If the recycle bin has already been purged, or if the user dropped the table with the PURGE
option, the dropped table can still be recovered by using point-in-time recovery (PITR) if the
database has been properly configured. PITR is discussed in the Oracle Database 11g:
Administration Workshop II course and in the Oracle Database Backup and Recovery User’s
Guide.
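The recycle bin workflow described above can be sketched as follows (the table name is illustrative):

```sql
-- See what the recycle bin currently holds for this user.
SELECT object_name, original_name, droptime FROM recyclebin;

-- Recover a dropped table from the recycle bin.
FLASHBACK TABLE employees2 TO BEFORE DROP;

-- A drop with PURGE bypasses the recycle bin; only PITR can recover it.
DROP TABLE employees2 PURGE;
```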
Flashback Technology
The Oracle database provides Oracle Flashback technology: a group of features that support
viewing past states of data—and winding data back and forth in time—without requiring
restoring the database from backup. With this technology, you help users analyze and recover
from errors. For users who have committed erroneous changes, use the following to analyze the
errors:
• Flashback Query: View committed data as it existed at some point in the past. The
SELECT command with the AS OF clause references a time in the past through a time
stamp or SCN.
• Flashback Version Query: View committed historical data for a specific time interval.
Use the VERSIONS BETWEEN clause of the SELECT command (for performance reasons
with existing indexes).
• Flashback Transaction Query: View all database changes made at the transaction level
Possible solutions to recover from user error:
• Flashback Transaction Backout: Rolls back a specific transaction and dependent
transactions
• Flashback Table: Rewinds one or more tables to their contents at a previous time without
affecting other database objects
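The flashback features above map to SQL as sketched below (table, column, and interval values are illustrative; undo retention is assumed to cover the queried period):

```sql
-- Flashback Query: committed data as it existed 15 minutes ago.
SELECT * FROM employees
  AS OF TIMESTAMP (SYSTIMESTAMP - INTERVAL '15' MINUTE);

-- Flashback Version Query: row versions over a time interval.
SELECT versions_xid, versions_starttime, salary
  FROM employees
  VERSIONS BETWEEN TIMESTAMP
    (SYSTIMESTAMP - INTERVAL '1' HOUR) AND SYSTIMESTAMP
 WHERE employee_id = 100;

-- Flashback Table: row movement must be enabled on the table first.
ALTER TABLE employees ENABLE ROW MOVEMENT;
FLASHBACK TABLE employees
  TO TIMESTAMP (SYSTIMESTAMP - INTERVAL '15' MINUTE);
```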
Instance Failure
Instance failure occurs when the database instance is shut down before synchronizing all
database files. An instance failure can occur because of hardware or software failure or through
the use of the emergency SHUTDOWN ABORT and STARTUP FORCE shutdown commands.
Administrator involvement in recovering from instance failure is rarely required if Oracle Restart is enabled and is monitoring your database. Oracle Restart attempts to restart your database instance as soon as it fails. If manual intervention is required, there may be a more serious problem that prevents the instance from restarting, such as a memory or CPU failure.
Instance Recovery
The Oracle database automatically recovers from instance failure. All that needs to happen is for
the instance to be started normally. If Oracle Restart is enabled and configured to monitor this
database then this happens automatically. The instance mounts the control files and then
attempts to open the data files. When it discovers that the data files have not been synchronized
during shutdown, the instance uses information contained in the redo log groups to roll the data
files forward to the time of shutdown. Then the database is opened and any uncommitted
transactions are rolled back.
Media Failure
Oracle Corporation defines media failure as any failure that results in the loss or corruption of
one or more database files (data, control, or redo log file).
Recovering from media failure requires that you restore and recover the missing files. To ensure
that your database can be recovered from media failure, follow the best practices outlined in the
next few pages.
Archiver (ARCn):
• Is an optional background process
• Automatically archives online redo log files when ARCHIVELOG mode is set for the database
A user-managed scenario:
• Is a manual process of tracking backup needs and status
• Typically uses your own written scripts
• Requires that database files be put in the correct mode for
backup
• Relies on operating system commands to make backups
User-Managed Backup
A user-managed backup can be performed interactively. However, most often it entails the
writing of scripts to perform the backup. There are several scenarios that can be run, and scripts
must be written to handle them.
Some of the actions that scripts must take:
• Querying V$DATAFILE to determine the data files that need to be backed up and their
current state
• Querying V$LOGFILE to identify the online redo log files
• Querying V$CONTROLFILE to identify the control file to back up
• Placing each tablespace in online backup mode
• Querying V$BACKUP to see what data files are part of a tablespace that has been placed in
online backup mode
• Issuing operating system copy commands to copy the data files to the backup location
• Bringing each tablespace out of online backup mode
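The steps above can be sketched as one pass of a user-managed online backup script (the tablespace name and file paths are illustrative; ARCHIVELOG mode is assumed):

```sql
-- Identify the data files and their current state.
SELECT name, status FROM v$datafile;

-- Place the tablespace in online backup mode.
ALTER TABLESPACE users BEGIN BACKUP;

-- Confirm which files are now in backup mode (STATUS = 'ACTIVE').
SELECT file#, status FROM v$backup;

-- Copy the file with an operating system command.
HOST cp /u01/oradata/orcl/users01.dbf /backup/users01.dbf

-- Take the tablespace out of backup mode.
ALTER TABLESPACE users END BACKUP;
```

Each tablespace must go through the BEGIN BACKUP / copy / END BACKUP cycle; leaving a tablespace in backup mode generates extra redo.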
Terminology
Whole database backup: Includes all data files and at least one control file (Remember that all
control files in a database are identical.)
Partial database backup: May include zero or more tablespaces and zero or more data files;
may or may not include a control file
Full backup: Makes a copy of each data block that contains data and that is within the files
being backed up
Incremental backup: Makes a copy of all data blocks that have changed since a previous
backup. The Oracle database supports two levels of incremental backup (0 and 1). A level 1
incremental backup can be one of two types: cumulative or differential. A cumulative backup
backs up all changes since the last level 0 backup. A differential backup backs up all changes
since the last incremental backup (which could be either a level 0 or level 1 backup). Change
Tracking with RMAN supports incremental backups.
Offline backups (also known as “cold” or consistent backup): Are taken while the database is
not open. They are consistent because, at the time of the backup, the system change number
(SCN) in data file headers matches the SCN in the control files.
Online backups (also known as “hot” or inconsistent backup): Are taken while the database is
open. They are inconsistent because, with the database open, there is no guarantee that the data
files are synchronized with the control files.
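The two incremental levels map to RMAN commands as sketched below (assumes RMAN is already connected to the target database):

```
RMAN> BACKUP INCREMENTAL LEVEL 0 DATABASE;
RMAN> BACKUP INCREMENTAL LEVEL 1 DATABASE;             # differential (default)
RMAN> BACKUP INCREMENTAL LEVEL 1 CUMULATIVE DATABASE;  # all changes since level 0
```

A level 1 backup without the CUMULATIVE keyword is differential: it copies only blocks changed since the most recent level 0 or level 1 backup.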
Terminology (continued)
Image copies: Are duplicates of data or archived log files (similar to simply copying the files by
using operating system commands)
Backup sets: Are collections of one or more binary files that contain one or more data files,
control files, server parameter files, or archived log files. With backup sets, empty data blocks
are not stored, thereby causing backup sets to use less space on the disk or tape. Backup sets can
be compressed to further reduce the space requirements of the backup.
Image copies must be backed up to the disk. Backup sets can be sent to the disk or directly to the
tape.
The advantage of creating a backup as an image copy is improved granularity of the restore
operation. With an image copy, only the file or files need to be retrieved from your backup
location. With backup sets, the entire backup set must be retrieved from your backup location
before you extract the file or files that are needed.
The advantage of creating backups as backup sets is better space usage. In most databases, 20%
or more of the data blocks are empty blocks. Image copies back up every data block, even if the
data block is empty. Backup sets significantly reduce the space required by the backup. In most
systems, the advantages of backup sets outweigh the advantages of image copies.
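In RMAN, the choice between the two output forms is made with the AS clause; a minimal sketch (the data file number is illustrative):

```
RMAN> BACKUP AS COPY DATAFILE 6;                 # image copy (disk only)
RMAN> BACKUP AS BACKUPSET DATABASE;              # backup set (disk or tape)
RMAN> BACKUP AS COMPRESSED BACKUPSET DATABASE;   # compressed backup set
```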
Managing Backups
Select Enterprise Manager > Availability > Manage Current Backup to manage your existing
backups. On this page, you can see when a backup was completed, where it was created (disk or
tape), and whether it is still available.
At the top of the Manage Current Backups page, four buttons enable you to work with existing
backups:
• Catalog Additional Files: Although RMAN (working through Enterprise Manager) is the
recommended way to create backups, you might have image copies or backup sets that
were created by some other means or in some other environment with the result that
RMAN is not aware of them. This task identifies those files and adds them to the catalog.
• Crosscheck All: RMAN can automatically delete obsolete backups, but you can also
delete them by using operating system commands. If you delete a backup without using
RMAN, the catalog does not know whether the backup is missing until you perform a
cross-check between the catalog and what is really there.
• Delete All Obsolete: This deletes backups older than the retention policy.
• Delete All Expired: This deletes the catalog listing for any backups that are not found
when the cross-check is performed.
$ rman target /
RMAN> CONFIGURE …
RMAN> BACKUP DATABASE PLUS ARCHIVELOG;
To open a database:
• All control files must be present and synchronized
• All online data files must be present and synchronized
• At least one member of each redo log group must be
present
Opening a Database
As a database moves from the shutdown stage to being fully open, it performs internal
consistency checks with the following stages:
• NOMOUNT: For an instance to reach the NOMOUNT (also known as STARTED) status, the
instance must read the initialization parameter file. No database files are checked while the
instance enters the NOMOUNT state.
• MOUNT: As the instance moves to the MOUNT status, it checks whether all control files
listed in the initialization parameter file are present and synchronized. If even one control
file is missing or corrupt, the instance returns an error (noting the missing control file) to
the administrator and remains in the NOMOUNT state.
• OPEN: When the instance moves from the MOUNT state to the OPEN state, it does the
following:
- Checks whether all redo log groups known to the control file have at least one
member present. Any missing members are noted in the alert log.
After the database is open, it fails in the case of the loss of:
• Any control file
• A data file belonging to the system or undo tablespaces
• An entire redo log group
(As long as at least one member of the group is available,
the instance remains open.)
If a member of a redo log file group is lost and if the group still has at least one member, note the following results:
• Normal operation of the instance is not affected.
• You receive a message in the alert log notifying you that a member cannot be found.
• You can restore the missing log file by dropping the lost member and adding a new member.
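Restoring a lost redo log member by dropping and re-adding it can be sketched as follows (the file path and group number are hypothetical):

```sql
-- Remove the catalog entry for the lost member.
ALTER DATABASE DROP LOGFILE MEMBER '/u01/oradata/orcl/redo01b.log';

-- Create a replacement member in the same group.
ALTER DATABASE ADD LOGFILE MEMBER '/u01/oradata/orcl/redo01b.log' TO GROUP 1;
```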
Data Failures
Data failures are detected by checks, which are diagnostic procedures that assess the health of
the database or its components. Each check can diagnose one or more failures, which are then
mapped to a repair.
Checks can be reactive or proactive. When an error occurs in the database, reactive checks are
automatically executed. You can also initiate proactive checks (for example, by executing the
VALIDATE DATABASE command).
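The same check-advise-repair cycle is available from the RMAN command line; a minimal sketch, assuming RMAN is connected to the target database:

```
RMAN> VALIDATE DATABASE;         # proactive check for corrupt or missing files
RMAN> LIST FAILURE;              # show failures recorded by Data Recovery Advisor
RMAN> ADVISE FAILURE;            # generate manual and automated repair options
RMAN> REPAIR FAILURE PREVIEW;    # display the repair script without running it
RMAN> REPAIR FAILURE;            # execute the recommended repair
```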
In Enterprise Manager, select Availability > Perform Recovery or click the Perform Recovery
button if you find your database in a “down” or “mounted” state. Click “Advise and Recover” to
have Enterprise Manager analyze and produce recovery advice.
Advising on Repair
On the “View and Manage Failures” page, the Data Recovery Advisor generates a manual
checklist after you click the Advise button. Two types of failures can appear.
• Failures that require human intervention: An example is a connectivity failure when a disk
cable is not plugged in.
• Failures that are repaired faster if you can undo a previous erroneous action: For example,
if you renamed a data file by error, it is faster to rename it back to its previous name than
to initiate RMAN restoration from backup.
You can initiate the following actions:
• Click “Re-assess Failures” after you perform a manual repair. Resolved failures are
implicitly closed; any remaining failures are displayed on the “View and Manage Failures”
page.
• Click “Continue with Advise” to initiate an automated repair. When the Data Recovery
Advisor generates an automated repair option, it generates a script that shows how RMAN
plans to repair the failure. Click Continue if you want to execute the automated repair. If
you do not want the Data Recovery Advisor to automatically repair the failure, you can use
this script as a starting point for your manual repair.
Executing Repairs
The Data Recovery Advisor displays these pages. In the example, a successful repair is
completed in 40 seconds.
[Figure: Moving Data general architecture. The expdp and impdp clients, along with other clients, call the DBMS_DATAPUMP package, which uses the Direct Path API and the Metadata API; SQL*Loader (sqlldr) uses the Oracle Loader path.]
Data Pump offers many benefits and some new features over
earlier data movement tools, such as:
• Fine-grained object and data selection
• Explicit specification of database version
• Parallel execution
• Estimation of export job space consumption
[Figure: network-mode import, in which the impdp client moves data directly from the source database to the target database without an intermediate dump file.]
$ impdp hr DIRECTORY=DATA_PUMP_DIR \
DUMPFILE=HR_SCHEMA.DMP \
PARALLEL=1 \
CONTENT=ALL \
TABLES="EMPLOYEES" \
[Figure: SQL*Loader processing flow. Input records pass through record selection and field processing; each record is accepted, rejected, or discarded, and activity is written to a log file.]
SQL*Loader: Overview
SQL*Loader loads data from external files into tables of an Oracle database. It has a powerful
data parsing engine that puts little limitation on the format of the data in the data file.
SQL*Loader uses the following files:
Input data files: SQL*Loader reads data from one or more files (or operating system
equivalents of files) that are specified in the control file. From SQL*Loader’s perspective, the
data in the data file is organized as records. A particular data file can be in fixed record format,
variable record format, or stream record format. The record format can be specified in the
control file with the INFILE parameter. If no record format is specified, the default is stream
record format.
Control file: The control file is a text file that is written in a language that SQL*Loader
understands. The control file indicates to SQL*Loader where to find the data, how to parse and
interpret the data, where to insert the data, and so on. Although not precisely defined, a control
file can be said to have three sections.
• The first section contains such session-wide information as the following:
- Global options, such as the input data file name and records to be skipped
- INFILE clauses to specify where the input data is located
- Data to be loaded
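A minimal control file illustrating these session-wide options might look like the following (table, column, and file names are hypothetical):

```
-- Illustrative SQL*Loader control file
LOAD DATA
INFILE 'employees.dat'
BADFILE 'employees.bad'
APPEND
INTO TABLE hr.emp_stage
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
(employee_id, last_name, email, hire_date DATE "YYYY-MM-DD")
```

It would be invoked from the command line roughly as `$ sqlldr hr CONTROL=emp.ctl LOG=emp.log`.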
Comparing conventional load with direct path load:

Conventional Load                               Direct Path Load
---------------------------------------------   -----------------------------------------------
Always generates redo entries                   Generates redo only under specific conditions
Can load into clustered tables                  Does not load into clusters
Allows other users to modify tables             Prevents other users from making changes to
during the load operation                       tables during the load operation
Maintains index entries on each insert          Merges new index entries at the end of the load
External Tables
External tables access data in external sources as if it were in a table in the database. You can connect to the database and create metadata for the external table by using DDL. The DDL for an external table consists of two parts: one part that describes the Oracle Database column types, and another part that describes the mapping of the external data to the Oracle Database data columns.
An external table does not describe any data that is stored in the database. Nor does it describe
how data is stored in the external source. Instead, it describes how the external table layer must
present the data to the server. It is the responsibility of the access driver and the external table
layer to do the necessary transformations required on the data in the external file so that it
matches the external table definition. External tables are read only; therefore, no DML
operations are possible, and no index can be created on them.
There are two access drivers used with external tables. The ORACLE_LOADER access driver can only be used to read table data from an external table and load it into the database. It uses text files as the data source. The ORACLE_DATAPUMP access driver can both load table data from an external file into the database and unload data from the database into an external file. It uses binary files as the external files. These binary files have the same format as the files used by the impdp and expdp utilities and can be interchanged with them.
Data Dictionary
The data dictionary views in the slide list the following table information:
[DBA| ALL| USER]_EXTERNAL_TABLES: Specific attributes of external tables in the
database
[DBA| ALL| USER]_EXTERNAL_LOCATIONS: Data sources for external tables
[DBA| ALL| USER]_TABLES: Descriptions of the relational tables in the database
[DBA| ALL| USER]_TAB_COLUMNS: Descriptions of the columns of tables, views, and
clusters in the database
[DBA| ALL]_DIRECTORIES: Describes the directory objects in the database.
1. View critical error alerts in Enterprise Manager.
Researching an Issue
My Oracle Support provides several resources that can be used to research an issue. The
following steps outline basic troubleshooting techniques that use My Oracle Support resources:
1. Keyword search: Most issues can be resolved quickly and easily by using the keyword
search utility on My Oracle Support. Effective searches can provide much information
about a specific problem and its solutions.
2. Documentation: If keyword searching fails to yield a solution, you should review the
documentation to ensure that setup problems are not the root cause. Setup issues account
for more than one-third of all service requests; it is always good to review setups early in
the troubleshooting process. Documentation consists of user guides and implementation
manuals published in PDF format as well as product README files and installation notes
published in HTML. Both of these document types are available on My Oracle Support
and can be accessed through the self-service toolkits for each product.
Kinds of patches
• Interim patches
– For specific issues
– No regression testing
• CPUs (Critical Patch Updates)
– Critical security issues
Managing Patches
You can apply different kinds of patches at different times for different reasons.
• Interim patches (also known as one-off patches) are created to solve a specific
problem. They do not go through a full regression test. Interim patches are typically
installed with the opatch utility. The Enterprise Manager Patching Wizard can help
automate the patching process by downloading, applying, and staging the patches. This
wizard uses the opatch utility in the background.
• CPU patches (Critical Patch Update patches) include security patches and dependent non-
security patches. The CPU patches are cumulative, which means fixes from previous
Oracle security alerts and critical patch updates are included. It is not required to have
previous security patches applied before applying the CPU patches. However, you must be
on the stated patch set level. CPU patches are for a specific patch release level (such as
10.2.0.3). CPU patches are installed with the opatch utility or through EM Patching
Wizard. The CPU patches are issued quarterly. CPU patches and interim patches can also
be removed from your system with opatch rollback -id <patch id>.
Oracle does extensive testing of Critical Patch Updates with our own applications, as well
as running regression tests for the Critical Patch Updates themselves. To verify that a patch
has been applied, query the inventory with opatch lsinventory and check whether the
patch is listed.
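A typical opatch session can be sketched as follows (the patch ID and staging directory are hypothetical; the ORACLE_HOME environment is assumed to be set):

```shell
# Change to the directory where the patch was unzipped.
cd /tmp/patches/9876543

# Apply the interim or CPU patch to the current Oracle home.
opatch apply

# Verify that the patch now appears in the inventory.
opatch lsinventory

# Remove the patch later if needed.
opatch rollback -id 9876543
```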
Applying a Patch
You can find and apply a patch, CPU, or patch release by using the “Software and Support”
page.
Staging a Patch
When you click Stage Patch in the Database Software Patching section of the “Software and
Support” page, the Patch Wizard is invoked.
The first step is to select the patch either by number or by criteria.
You then select the destination. In this step, you can choose from a list of available targets.
In the third step, provide the credentials of the OS user that is to do the patching. It is
recommended that this be the user that owns the software installation.
In the next step, you can choose either to stage the patch or to stage and apply the patch.
The fifth step schedules the job.
The final step enables you to review and submit the patch job.
The staged patches are stored in the $ORACLE_HOME/EMStagedPatches_<sid>
directory on UNIX and Linux platforms, and in the
%ORACLE_HOME%\EMStagedPatches_<sid> directory on Windows platforms.
7) The _______________________process writes the redo entries to the online redo log
files.
11) The _____________________ contains data and control information for a server or
background process.
Perform the following tasks as the default oracle OS user, unless otherwise indicated.
Note: Completing this practice is critical for all following practice sessions.
2) On the Select Installation Option page, select the Install and Configure Grid
Infrastructure for a standalone server option and click Next.
3) On the Product Languages page, select all the available languages and click Next.
c) Click OK in the Execute Configuration scripts window. The OUI continues with
the remaining installation tasks.
12) Click Close on the Finish page to complete the installation of the Oracle Grid
Infrastructure for a standalone server.
13) The next step is to configure the +FRA disk group. In a terminal window, logged in
b) Start the ASM Configuration Assistant by entering asmca at the command line.
$ asmca
c) The ASM Configuration Assistant opens displaying the current disk groups for
the +ASM instance. Click Create.
2) The Configure Security Updates page is the first to appear. In your real-world
environment, you would enter your email address and My Oracle Support password;
5) Ensure that Single instance database installation is selected on the Install Type page
and click Next.
6) On the Product Languages page, select all the available languages and click Next.
b) Run the script shown in the Execute Configuration scripts window. Accept the
default for the local bin directory and do not overwrite any files (you can just
press [Enter] because the default option is to not overwrite).
2) Click Next on the Welcome page to begin the orcl database creation.
3) On the Operations page, select Create a Database, and then click Next.
4) On the Database Templates page, select the General Purpose or Transaction
Processing template.
a) Click Show Details and answer the following questions:
i) Question 1: How many control files are created?
Answer: Two
Note: The location will change later in this practice when we choose to use
ASM as our storage technique.
ii) Question 2: How many redo log groups are created?
Answer: Three
Note: The location will change later in this practice when we choose to use
ASM as our storage technique.
iii) Question 3: What is the database block size (db_block_size)?
Answer: 8 KB
iv) Question 4: What is the database character set?
Answer: WE8MSWIN1252
Note: You will change this setting later in this practice to use a Unicode
database character set.
b) Click Close to close the Template Details window.
c) Click Next on the Database Templates page to continue the database creation process.
https://_________________________________________:______/em
You will be using this URL many times throughout the remainder of the course.
b) Click the Password Management button.
c) Scroll down the Password Management page until you see the HR username.
5) Using SQL*Plus, verify that you are not able to connect as the HR user to a database
that has been shut down.
c) Note that the modes that the database goes through during startup are MOUNT and
OPEN.
d) Locate and view the text version of the alert log.
Connect to the database as the system user (password is oracle_4U) using
SQL*Plus and query the V$DIAG_INFO view. To view the text-only alert log
without the XML tags, complete these steps:
i) In the V$DIAG_INFO query results, note the path that corresponds to the
Diag Trace entry.
SQL> select * from V$DIAG_INFO;
INST_ID NAME
---------- ------------------------------------------------
VALUE
-----------------------------------------------------------
...
1 Diag Trace
/u01/app/oracle/diag/rdbms/orcl/orcl/trace
...
9) Use the SHOW PARAMETER command to verify the settings for SGA_MAX_SIZE,
DB_CACHE_SIZE, and SHARED_POOL_SIZE.
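A minimal SQL*Plus session for this check looks like the following (the values displayed depend entirely on your configuration, so none are reproduced here):

```sql
SQL> SHOW PARAMETER sga_max_size
SQL> SHOW PARAMETER db_cache_size
SQL> SHOW PARAMETER shared_pool_size
```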
c) Kill the LGWR using the kill -9 command and the process ID you determined
in the previous step. This will cause the instance to shut down.
$ kill -9 10478
STATUS
------------
OPEN
SQL>
Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 -
Production
With the Partitioning, Automatic Storage Management, OLAP,
Data Mining
and Real Application Testing options
SQL> SQL>
Table created.
SQL> SQL>
1 row created.
SQL> SQL> 2 3 4 5 6 7
PL/SQL procedure successfully completed.
6) Offline the second disk that is part of the DATA disk group, making sure that the
Disk Repair Time attribute is set to 0:
a) Navigate back to the Disk Group: DATA General page. Select the second disk
(ASMDISK02), and click Offline.
b) On the Confirmation page, change the Disk Repair Time from its default (3.6
hours) to 0.0 and click Show SQL.
ALTER DISKGROUP DATA OFFLINE DISK ASMDISK02 DROP AFTER 0.0 h
c) Click Return.
d) Navigate back to the Confirmation page. Click Yes.
7) What do you observe?
a) Navigate back to the Disk Group: DATA General page. You can see that
ASMDISK02 is now offline. Refresh your browser page until you no longer see
the offline disk. It will be renamed to something similar to this:
_DROPPED_0000_DATA
The Pending Operations will show 1 as the disk is being dropped. Click the 1 to
view the progress of the rebalance operation.
SQL> commit;
9) Add the dropped ASM disk back to the DATA disk group:
a) You now need to wipe out the dropped disk before you can add it back. You must
be root to do this:
# oracleasm listdisks
# oracleasm deletedisk ASMDISK02
# oracleasm createdisk ASMDISK02 /dev/xvdc
b) Navigate back to the Disk Group: DATA General page. Click Add.
c) On the Add Disks page, select ORCL:ASMDISK02 from the Candidate Member
Disks table. Set REBALANCE POWER to 11.
d) Click Show SQL.
ALTER DISKGROUP DATA ADD DISK 'ORCL:ASMDISK02' SIZE 2304 M
REBALANCE POWER 11
e) Click Return.
f) On the Add Disks page, click OK.
10) What do you observe?
a) Navigate back to the Disk Group: DATA General page. Click the Pending
Operations 1 link to monitor the rebalancing operation.
b) You can see that the rebalance operation runs for a while.
c) Allow the rebalance to complete. This may take several minutes.
SQL> commit;
12) Now, how would you add the offlined disk back into the DATA disk group? It is not
necessary to wipe out the dropped disk.
a) Navigate back to the Disk Group: DATA General page. Select the offline disk
and click Online.
b) On the Confirmation page, click Yes.
c) Navigate back to the Disk Group: DATA General page. You should see the disk
back at its previous usage level (around 41% full), without the need for any
rebalance operation. The disk is added back immediately.
+DATA/ORCL/:
CONTROLFILE/
DATAFILE/
ONLINELOG/
PARAMETERFILE/
TEMPFILE/
Spfileorcl.ora
ASMCMD> ls +DATA/ORCL/DATAFILE
EXAMPLE.260.630800437
SYSAUX.257.628766309
SYSTEM.256.628766309
TBSJMW.269.628767357
UNDOTBS1.258.628766309
USERS.259.628766309
2) Using ASMCMD, generate a list of all the commands that are allowed with the help
command.
ASMCMD> help
3) Navigate to the CONTROLFILE directory of the ORCL database in the DATA disk
group and use ASMCMD to copy the current control file to the /tmp directory. Use the
help cp command for syntax guidance.
ASMCMD> cd +DATA/ORCL/CONTROLFILE
ASMCMD> ls
Current.260.692183799
ASMCMD> help cp
ASMCMD> cp Current.260.692183799 /tmp
copying +DATA/ORCL/CONTROLFILE/Current.260.692183799 ->
/tmp/Current.260.692183799
5) Determine the syntax for the lsdg command, and generate a list of all disk groups.
6) Determine the syntax for the mkdg command, and create a new disk group named
DATA2 of type external redundancy, using two disks: ORCL:ASMDISK11 and
ORCL:ASMDISK12. Verify that the disk group was created successfully.
ASMCMD> help mkdg
ASMCMD> mkdg <dg name="DATA2" redundancy="external"> <dsk
string="ORCL:ASMDISK11" /> <dsk string="ORCL:ASMDISK12" />
</dg>
ASMCMD> lsdg
State    Type    Rebal  Sector  Block       AU  Total_MB  Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name
MOUNTED  NORMAL  N         512   4096  1048576     13824    10269              600            4834              0  N             DATA/
MOUNTED  EXTERN  N         512   4096  1048576      4608     4556                0            4556              0  N             DATA2/
MOUNTED  EXTERN  N         512   4096  1048576      9216     8982                0            8982              0  N             FRA/
Enter password:
Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 -
Production
SQL>
The Oracle SQL*Plus window opens. If you receive any errors or warnings,
resolve them.
c) At the SQL> prompt, enter the following command:
SQL> select instance_name, host_name from v$instance;
INSTANCE_NAME
----------------
HOST_NAME
-----------------------------------------------------------
orcl
edrsr25p1.us.oracle.com
i) Click the Add button to connect the new listener with your orcl database.
j) Enter the following values:
Option Value
Service Name orcl
Oracle Home Directory /u01/app/oracle/product/11.2.0/dbhome_1
Oracle System Identifier orcl
c) Connect to your database using the new listener using an easy connect string.
Note: This method of connecting is not a recommended approach for a production
environment; it is used in this simple classroom environment just to prove that
the newly created listener works.
$ sqlplus hr/oracle_4U@your_ip_address:1561/orcl
Your connection is through your newly created listener. Exit SQL*Plus after you
complete this step.
4) You can now stop this new LISTENER2 because you do not need it for the remainder
of the course.
$ lsnrctl stop LISTENER2
Note: Because the creation of users has not been covered, a script is provided for this
practice.
d) Enter the following command to run the script that creates the DBA1 user:
$ ./lab_07_01_01.sh
e) Leave the terminal window open. You will use it again later.
2) Use the Setup link in the top-right corner of Enterprise Manager (EM) to define the
DBA1 user as one who can perform administrative tasks in EM. When the non-SYS
user is configured, log out as the SYS user and log in as the DBA1 user. Use the
DBA1 user to perform the rest of these tasks, unless otherwise indicated.
a) In the far top-right corner of the EM window, click Setup and then on the Setup
page select Administrators.
b) Click Create to add the DBA1 user to the Administrators list. This will enable the
DBA1 user to perform management tasks by using Enterprise Manager.
c) Enter dba1 as Name and leave Email Address blank. Select Super
Administrator for the Administrator Privilege and then click Review.
Answer: SH.CUSTOMERS_PK
d) Question 4: Which segment is stored physically first in the tablespace? That is,
which one is stored right after the tablespace header?
i) Scroll to the bottom of the page, and then click the plus icon to the left of the
Extent Map label.
Answer: HR.COUNTRY_C_ID_PK
f) Click the Storage tab, and verify that Extent Allocation is Automatic, Segment
Space Management is Automatic, Compression Options is Disabled, and
Logging is set to Yes.
b) Log in to SQL*Plus as the dba1 user (with a password of oracle_4U) and run
the lab_07_02_02.sql script.
Note: Remember to use oraenv to set your environment to the orcl database,
if you have not already done so in your terminal window.
$ sqlplus dba1
Enter password:
Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 -
Production
With the Partitioning, Automatic Storage Management, OLAP,
Data Mining
and Real Application Testing options
SQL> @lab_07_02_02.sql
c) Note that the script eventually fails with error ORA-01653, stating that the table
cannot be extended. There is not enough space to accommodate all the rows to be
inserted.
SQL> commit
2 /
Commit complete.
SQL> quit
Disconnected from Oracle Database 11g Enterprise Edition
Release 11.1.0.6.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application
Testing options
3) Go to the Enterprise Manager window and increase the amount of space available for
the INVENTORY tablespace. For educational purposes, you will accomplish this
using two different methods. First, increase the size of the current datafile to 40 MB.
Then, to show that both ASM and non-ASM datafiles can exist for the same
tablespace, add a second datafile using file system storage. This second datafile
should be 30 MB in size. For both techniques use the show SQL functionality to view
the supporting SQL statements.
a) Select Server > Storage > Tablespaces.
b) Select the INVENTORY tablespace, and then click Edit.
k) Click Apply.
l) Notice that there are now two datafiles for the INVENTORY tablespace, one
that is using ASM storage and the other using file system (non-ASM) storage.
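If you used the Show SQL functionality as instructed, the statements behind the two changes should resemble the following sketch (the ASM file name here is a hypothetical example; substitute the actual datafile name shown in EM):

```sql
ALTER DATABASE DATAFILE '+DATA/orcl/datafile/inventory.271.695300001' RESIZE 40M;
ALTER TABLESPACE inventory
  ADD DATAFILE '/u01/app/oracle/oradata/orcl/inventory02.dbf' SIZE 30M;
```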
4) Go back to the terminal window and run the lab_07_02_04.sql script. It drops
the table and re-executes the original script that previously returned the space error.
a) Go to the terminal window.
b) Log in to SQL*Plus as the dba1 user (with a password of oracle_4U) and run
the lab_07_02_04.sql script.
Note: Remember to use oraenv to set your environment to the orcl database if
you have not already done so in your terminal window.
$ sqlplus dba1
Enter password:
Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 -
Production
With the Partitioning, Automatic Storage Management, OLAP,
Data Mining
and Real Application Testing options
SQL> @lab_07_02_04.sql
c) Note that the same row inserts are attempted, and this time there is no error
because of the increased size of the tablespace.
Enter password:
Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 -
Production
With the Partitioning, Automatic Storage Management, OLAP,
Data Mining
and Real Application Testing options
SQL> @lab_07_02_05.sql
cd ~/labs
. set_db.sh
exit;
EOF
$ ./lab_08_01_01.sh
Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 -
Production
With the Partitioning, Automatic Storage Management, OLAP,
Data Mining
SQL> SQL> 2
User created.
SQL> SQL>
Grant succeeded.
2) Create a profile named HRPROFILE that allows only 15 minutes idle time.
a) Invoke Enterprise Manager as the DBA1 user in the SYSDBA role for your orcl
database.
b) Click the Server tab, and then click Profiles in the Security section.
c) Click the Create button.
d) Enter HRPROFILE in the Name field.
e) Enter 15 in the Idle Time (Minutes) field.
f) Leave all the other fields set to DEFAULT.
g) Click the Password tab, and review the Password options, which are currently all
set to DEFAULT.
h) Optionally, click the Show SQL button, review your underlying SQL statement,
and then click Return.
i) Finally, click OK to create your profile.
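If you clicked Show SQL in step h, the underlying statement should be similar to this sketch (EM may also emit explicit DEFAULT clauses for the other limits):

```sql
CREATE PROFILE hrprofile LIMIT
  IDLE_TIME 15;
```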
3) Set the RESOURCE_LIMIT initialization parameter to TRUE so that your profile
limits are enforced.
a) Click the Server tab, and then click Initialization Parameters in the Database
Configuration section.
b) Enter resource_limit in the Name field, and then click Go.
c) Select TRUE from the Value drop-down list, and then click Apply.
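The same change can be made from SQL*Plus; a minimal equivalent of these EM steps is:

```sql
ALTER SYSTEM SET resource_limit = TRUE SCOPE=BOTH;
```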
Or, if you are already in SQL*Plus, use the CONNECT command. If you reconnect
as dhamby in SQL*Plus, the login and change-of-password session looks like
this:
SQL> CONNECT dhamby
Enter password: newuser <<<Password does not appear on screen
ERROR:
ORA-28001: the password has expired
Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 -
Production
With the Partitioning, Automatic Storage Management, OLAP, Data
Mining
and Real Application Testing options
SQL>
SALARY
----------
3000
c) Now attempt to delete the same record from the hr.employees table.
SQL> DELETE FROM hr.employees WHERE EMPLOYEE_ID=197;
DELETE FROM hr.employees WHERE EMPLOYEE_ID=197
*
ERROR at line 1:
ORA-01031: insufficient privileges
5) Repeat the test as the JGOODMAN user. Use oracle_4U as the new password. After
deleting the row, issue a rollback, so that you still have the original 107 rows.
a) Connect to the orcl database as the JGOODMAN user.
SQL> connect jgoodman
Enter password:
ERROR:
ORA-28001: the password has expired
<Change the password to oracle_4U as shown above>
SALARY
----------
3000
1 row deleted.
d) Roll back the delete operation (because this was just a test).
Rollback complete.
e) Confirm that you still have 107 rows in this table.
SQL> SELECT COUNT(*) FROM hr.employees;
COUNT(*)
----------
107
SQL>
Question 2: When you created the new users, you did not select a default or
temporary tablespace. What determines the tablespaces that the new users will use?
Question 3: You did not grant the CREATE SESSION system privilege to any of the
new users, but they can all connect to the database. Why?
Answer: Because Enterprise Manager automatically assigns the CONNECT role to the
new users, and CREATE SESSION is contained within that role
6) Use SQL*Plus to connect to the orcl database as the RPANDYA user. Change the
password to oracle_4U. (You must change the password, because this is the first
connection as RPANDYA.) Leave RPANDYA connected during the next lesson or until
the end of the day. HRPROFILE specifies that users whose sessions are inactive for
more than 15 minutes will automatically be logged out. Verify that the user was
automatically logged out by trying to select from the HR.EMPLOYEES table again.
Enter password:
Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 -
Production
With the Partitioning, Automatic Storage Management, OLAP,
Data Mining
and Real Application Testing options
SQL> @lab_09_01_01.sql
Creating users...
1 row updated.
c) Leave this session connected in the state that it is currently. Do not exit at this
time.
Enter password:
Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 -
Production
With the Partitioning, Automatic Storage Management, OLAP,
Data Mining
and Real Application Testing options
SQL> @lab_09_01_02.sql
Sleeping for 20 seconds to ensure first process gets the
lock first.
Sleep is finished.
Connected.
b) Notice that this session appears to be hung. Leave this session as is and move on
to the next step.
3) Using Enterprise Manager, click the Blocking Sessions link on the Performance page
and detect which session is causing the locking conflict.
a) In Enterprise Manager, click the Performance page.
b) Click Blocking Sessions in the Additional Monitoring Links area. You should
see the following:
5) Resolve the conflict in favor of the user who complained, by killing the blocking
session. What SQL statement resolves the conflict?
d) Click Return, and then click Yes to carry out the KILL SESSION command.
6) Return to the SQL*Plus command window, and note that SMAVRIS’s update has completed:
1 row updated.
Update is completed.
SQL>
7) Try issuing a SQL select statement in the NGREENBERG session. What do you see?
SQL> SELECT sysdate from dual;
SELECT sysdate from dual
*
ERROR at line 1:
ORA-03135: connection lost contact
Process ID: 7129
Session ID: 51 Serial number: 7460
SQL>
Answer: The session has been disconnected.
Close all open SQL sessions by entering exit, and then close the terminal windows.
Answer: Yes (but most likely not enough to support the required 48 hours).
2) Modify the undo retention time and calculate the undo tablespace size to support the
requested 48-hour retention.
f) This command will change the undo retention to support the 48-hour requirement.
Review the SQL statement and click Return.
g) Click Apply to make the change to undo retention.
h) Now adjust the undo tablespace size by clicking the Edit Undo Tablespace
button.
i) Scroll down to Datafiles and click Edit to make a change to the datafile file size
for the Undo tablespace.
j) Change the file size to the Minimum Required Undo Tablespace Size that was
determined when you ran the Undo Advisor (249 MB is the value in the
screenshot above) and click Continue.
k) Click Return.
l) Click Apply to change the tablespace size.
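For reference, the EM changes in this step correspond roughly to statements like these (48 hours expressed in seconds; the datafile name and size are examples, so substitute your own values):

```sql
ALTER SYSTEM SET undo_retention = 172800 SCOPE=BOTH;
ALTER DATABASE DATAFILE '+DATA/orcl/datafile/undotbs1.258.628766309' RESIZE 249M;
```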
3) Go back to the Automatic Undo Management page to see the results of the changes
you just made. You see that the undo retention time has increased to support the
48-hour requirement. Your undo tablespace size has also increased based on the
changes you made to the size of the datafile for the undo tablespace.
b) Continue with the next step when you see that the database is restarted.
3) Back in Enterprise Manager, select HR.JOBS as the audited object and DELETE,
INSERT, and UPDATE as Selected Statements. Gather audit information by session.
Because the database has been restarted, you have to log in to Enterprise Manager
again as the DBA1 user.
a) Click logout in the upper-right corner of the Enterprise Manager window.
b) Log in as the DBA1 user in the SYSDBA role for your orcl database.
c) Click the Database home page tab to ensure that Enterprise Manager has had time to
update the status of the database and its agent connections.
d) Click the Server tab, and then click Audit Settings in the Security section.
e) Click the Audited Objects tab at the bottom of the page, and then click the Add
button.
Question: Can you tell which user increased and which user decreased the
salaries?
Answer: No, the standard audit records only show which user accessed the table.
d) Click Return.
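The audit configuration chosen in this step corresponds to a statement similar to the following sketch (what Show SQL displays may differ slightly):

```sql
AUDIT INSERT, UPDATE, DELETE ON hr.jobs BY SESSION;
```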
6) Undo your audit settings for HR.JOBS, disable database auditing, and then restart the
database by using the lab_11_01_06.sh script.
a) On the Audit Settings page, click the Audited Objects tab at the bottom of the
page.
b) Enter HR as Schema, and then click Search.
c) Select all three rows, and then click Remove.
e) Review the statements, and then click Yes to confirm your removal.
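The removal and the disabling of database auditing correspond roughly to these statements (the AUDIT_TRAIL change takes effect only after the database is restarted, which the script performs):

```sql
NOAUDIT INSERT, UPDATE, DELETE ON hr.jobs;
ALTER SYSTEM SET audit_trail = NONE SCOPE=SPFILE;
```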
7) Maintain your audit trail: Because you are completely finished with this task, back up
and delete all audit files from the /u01/app/oracle/admin/orcl/adump
directory.
a) In a terminal window, enter:
$ cd /u01/app/oracle/admin/orcl/adump
$ ls
b) Create a backup of the audit trail files, and then remove the files:
$ tar -czf $HOME/audit_today.tar.z *
$ rm -f *
set echo on
exit;
END
$ ./lab_12_01_01.sh
2) Create a new SPCT user, identified by oracle_4U. Assign the TBSSPC tablespace
as the default tablespace. Assign the TEMP tablespace as the temporary tablespace.
Grant the following roles to the SPCT users: CONNECT, RESOURCE, and DBA.
Execute the lab_12_01_02.sh script to perform these tasks. In a terminal
window, enter:
$ cat lab_12_01_02.sh
…
sqlplus / as sysdba << END
set echo on
exit;
END
$ ./lab_12_01_02.sh
set echo on
exec dbms_advisor.set_default_task_parameter('ADDM','DB_ACTIVITY_MIN',30);
exec DBMS_STATS.GATHER_TABLE_STATS(-
ownname=>'SPCT', tabname=>'SPCT',-
estimate_percent=>DBMS_STATS.AUTO_SAMPLE_SIZE);
exec DBMS_WORKLOAD_REPOSITORY.CREATE_SNAPSHOT();
exit;
EOF
$ ./lab_12_01_03.sh
6) Look at the Performance Analysis findings in order of their impact. There are
several access paths to this information. The results should look similar to the
following:
c) Select the task, and then click the View Result button (alternatively, click the
name of the task).
This finding shows that there is a hot data block that belongs to the SPCT.SPCT
table. The recommendation is to investigate the application logic to find the cause.
2) You get calls from HR application users saying that a particular query is taking longer
than normal to execute. The query is in the lab_13_01_02.sql script. To run this
3) Using Enterprise Manager, locate the HR session in which the above statement was
just executed, and view the execution plan for that statement.
a) In Enterprise Manager, click the Performance tab, and then click Search Sessions
in the Additional Monitoring Links section.
b) On the Search Sessions page, change the Search criteria to “DB User,” enter HR
in the field to the right of that, and then click Go.
c) Click the SID number in the Results listing.
d) You now see the Session Details page for this session. Click the hash value link
to the right of the Previous SQL label in the Application section.
5) Now that you have seen one index with a non-VALID status, you decide to check all
indexes. Using SQL*Plus, as the HR user, find out which HR schema indexes do not
have STATUS of VALID. To do this, you can query a data dictionary view with a
condition on the STATUS column.
a) Go to the SQL*Plus session where you are still logged in as the HR user, and run
this query:
SQL> select index_name, table_name, status
from user_indexes where status <> 'VALID';
6 rows selected.
SQL>
b) You notice that the output lists six indexes, all on the EMPLOYEES table. This is a
problem you will need to fix.
6) You decide to use Enterprise Manager to reorganize all the indexes in the HR schema
that are marked as UNUSABLE.
a) In Enterprise Manager, on the page displaying the EMP_EMP_ID_PK index,
m) Click Reload on your browser until you see the job has succeeded.
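Behind the EM reorganization job, each unusable index is rebuilt with a statement of this form (sketched here for the index from the earlier step; the job covers all six indexes):

```sql
ALTER INDEX hr.emp_emp_id_pk REBUILD;
```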
b) Repeat the tasks listed in step 3 to view the execution plan for the query. Now the
icon indicates the use of an index. Click View Table. Note that the plan now uses
an index unique scan.
This script takes about 20 minutes to complete, so run it in a separate terminal
window and continue with this practice exercise while it runs.
Note: Because this script generates a fairly heavy load in terms of CPU and disk I/O,
you will notice that response time for Database Control is slower.
$ sqlplus / as sysdba
SQL> @lab_13_01_09.sql
10) Go back to Enterprise Manager and examine the performance of your database.
Question 1: In the Average Active Sessions graph, which are the two main
categories that active sessions are waiting for?
Answer: In this example, it looks like CPU Wait and User I/O are quite high.
Configuration is also showing high wait activity. Your results may differ from what is
shown here.
Answer: LGWR
c) Click Top Activity in the Additional Monitoring Links region.
b) On the Session Details page, click Kill Session, and then click Yes to confirm.
Note: If you remain on this Session Details page long enough for a few automatic
refreshes to be done, you may see a warning, “WARNING, Session has expired.” or a
SQL Error saying the session is marked for kill. This warning means you are
attempting to refresh information about a session that has already been killed. You
can ignore this warning.
2) Verify that you have at least two control files to ensure redundancy.
a) Invoke Enterprise Manager as the DBA1 user in the SYSDBA role for your orcl database.
Question 1: On the Control Files: General page, how many control files do you
have?
Answer: 2.
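You can verify the same information from SQL*Plus:

```sql
SQL> SELECT name FROM v$controlfile;
```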
3) Review the fast recovery area configuration and change the size to 8 GB.
a) In Enterprise Manager, select Availability > Recovery Settings in the Setup
section.
i) Click Apply.
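The equivalent SQL*Plus command for this size change is:

```sql
ALTER SYSTEM SET db_recovery_file_dest_size = 8G SCOPE=BOTH;
```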
4) Check how many members each redo log group has. Ensure that there are at least two
redo log members in each group. One set of members should be stored in the fast
recovery area.
a) Click Server > Redo Log Groups, and note how many members are in the “# of
Members” column.
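The same check can be made from SQL*Plus by querying V$LOG:

```sql
SQL> SELECT group#, members, archived FROM v$log;
```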
5) You notice that, for each log group, the Archived column has a value of No. This
means that your database is not retaining copies of redo logs to use for database
recovery, and in the event of a failure, you will lose all data since your last backup.
Place your database in ARCHIVELOG mode, so that redo logs are archived.
Note: You must continue with step 5, so that your changes are applied.
a) In Enterprise Manager, select Availability > Recovery Settings in the Setup
section.
b) In the Media Recovery region, select the ARCHIVELOG Mode check box.
Also, verify that Log Archive Filename Format contains %t, %s, and %r.
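For reference, enabling ARCHIVELOG mode manually from SQL*Plus requires restarting the database to the MOUNT state first, roughly as follows:

```sql
SHUTDOWN IMMEDIATE
STARTUP MOUNT
ALTER DATABASE ARCHIVELOG;
ALTER DATABASE OPEN;
```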
Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 -
Production
With the Partitioning, Automatic Storage Management, OLAP,
Data Mining
and Real Application Testing options
Now that your database is in ARCHIVELOG mode, it will continually archive a copy
of each online redo log file before reusing it for additional redo data.
Note: Remember that this consumes space on the disk and that you must regularly
back up older archive logs to some other storage.
e) Optionally, use a terminal window, logged in as the oracle user to view the
trace file name at the end of the alert log by executing the following command:
cd /u01/app/oracle/diag/rdbms/orcl/orcl/trace
tail alert_orcl.log
The following output shows only the last few lines:
$ cd /u01/app/oracle/diag/rdbms/orcl/orcl/trace
$ tail alert_orcl.log
Sat Jul 11 09:10:03 2009
SMCO started with pid=23, OS id=9837
Sat Jul 11 09:46:31 2009
ALTER DATABASE BACKUP CONTROLFILE TO TRACE
Backup controlfile written to trace file
/u01/app/oracle/diag/rdbms/orcl/orcl/trace/orcl_ora_12190.trc
Completed: ALTER DATABASE BACKUP CONTROLFILE TO TRACE
Sat Jul 11 09:46:56 2009
ALTER DATABASE BACKUP CONTROLFILE TO TRACE
Backup controlfile written to trace file
/u01/app/oracle/diag/rdbms/orcl/orcl/trace/orcl_ora_12190.trc
Completed: ALTER DATABASE BACKUP CONTROLFILE TO TRACE
$
b) Note the message under the Disk Backup Location that says the fast recovery area
is the current disk backup location.
4) Establish the backup policy to automatically back up the SPFILE and control file.
a) Click the Policy tab under the Backup Settings heading.
c) Scroll to the bottom and enter oracle as both the username and password for Host Credentials.
m) Click View Job to monitor the status of the backup job. The time for this backup
depends on your hardware and system resources.
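The autobackup policy selected in step 4 corresponds to this RMAN configuration command:

```
RMAN> CONFIGURE CONTROLFILE AUTOBACKUP ON;
```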
Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 -
Production
With the Partitioning, Automatic Storage Management, OLAP,
Data Mining
and Real Application Testing options
SQL> @lab_16_01_01.sql
Connected.
Java created.
Procedure created.
Synonym created.
Grant succeeded.
SQL>
SQL> @lab_16_02_01.sql
3) Troubleshoot and recover as necessary. The error message suggests that the
inventory02.dbf data file is corrupt or missing.
a) In Enterprise Manager, on the Home page, look in the Alerts section and notice
the Data Failure alert.
g) On the Recovery Advise page, you see the RMAN script. Click Continue.
h) On the Review page, you see the failure and the suggested solution. Click
“Submit Recovery Job.”
i) A Processing window appears, followed by the Job Activity page. You should see
a message that the job was successfully created. (Your link name is probably
different.)
COUNT(*)
----------
217368
Answer: Because recovery of system or undo data files must be done with the
database closed, whereas recovery of an application data file can be done with the
database open and available to users
2) As the oracle OS user, execute the lab_16_03_02.sh script in your labs
directory. This script deletes the system data file.
3) In Enterprise Manager, review the Database home page. If you see a message that
says the connection was refused, try re-entering the EM home page URL in the
browser. You may need to try several times before you see the Database home page.
2) The Help desk begins receiving calls saying that the database appears to be down.
Troubleshoot and recover as necessary. Use SRVCTL to try to start up the database.
a) In a terminal window, ensure that your environment is configured for your orcl
database environment using oraenv.
4) Notice that the missing control file is the one from your +FRA disk group. You know
you also have a control file on the +DATA disk group. You can perform a recovery
by restoring from the control file that is in the +DATA disk group, but you need to
know the file name. Using asmcmd, determine the name of the control file in the
+DATA disk group.
b) Start asmcmd and use the ls command to determine the name of the control file
in the +DATA disk group (this file will be in the +data/orcl/controlfile
directory).
$ asmcmd
ASMCMD> ls +data/orcl/controlfile
Current.260.695209463
ASMCMD>
c) Make a note of this name along with its full path because you will need this
information for the next step.
5) In another terminal window, connect to RMAN and use the following command to
restore your control file:
restore controlfile from
'+DATA/orcl/controlfile/yourcontrolfilename';
Then mount and open your database.
a) Set your environment for your orcl database using oraenv and then connect to
RMAN.
$ . oraenv
ORACLE_SID = [oracle] ? orcl
The Oracle base for
ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1 is
/u01/app/oracle
$ rman target /
RMAN>
RMAN>
c) Restore the control file from the existing control file on the +DATA disk group.
Note: Use the file name determined in step 4.
RMAN> restore controlfile from
'+DATA/orcl/controlfile/current.260.695209463';
RMAN>
database mounted
e) Open your database.
RMAN> alter database open;
database open
In the end, you learn that the only database for which management approves an import is
the orcl database. So you perform the import with the Data Pump Wizard, remapping
the HR schema to the DBA1 schema.
Then you receive two data load requests for which you decide to use SQL*Loader.
Tablespace created.
User created.
Role created.
Grant succeeded.
Grant succeeded.
Table altered.
Grant succeeded.
Grant succeeded.
g) Review Advanced Options (but do not change), and then click Next.
h) On the Export: Files page, select DATA_PUMP_DIR from the Directory Object
drop-down list, enter HREXP%U.DMP as File Name, and then click Next.
3) Now, import the EMPLOYEES table from the exported HR schema into the DBA1
schema. To get a feeling for the command-line interface, you can use the impdp
utility from the command line to import the EMPLOYEES table into the DBA1 user
schema.
a) Ensure that your environment is configured for the orcl database by running
oraenv.
$ . oraenv
ORACLE_SID = [oracle] ? orcl
The Oracle base for
ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1 is
/u01/app/oracle
$
b) Enter the following entire command string. Do not press [Enter] before reaching
the end of the command:
impdp dba1/oracle_4U DIRECTORY=data_pump_dir
DUMPFILE=HREXP01.DMP REMAP_SCHEMA=hr:dba1
TABLES=hr.employees LOGFILE=empimport.log
Note: You may see errors on constraints and triggers not being created because only
the EMPLOYEES table is imported and not the other objects in the schema. These
errors can be ignored.
4) Confirm that the EMPLOYEES table has been loaded into the DBA1 schema by
logging in to SQL*Plus as the DBA1 user and selecting data from the EMPLOYEES
table.
a) Log in to SQL*Plus as the DBA1 user.
Note: Remember to use oraenv to set your environment to the orcl database if
you have not already done so in your terminal window.
$ sqlplus dba1
Enter Password:
Connected to:
Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 -
Production
With the Partitioning, OLAP, Data Mining and Real Application
Testing options
SQL>
b) Select a count of the rows from the EMPLOYEES table in the DBA1 schema, for
verification of the import.
COUNT(*)
----------
107
SQL>
e) On the Load Data: Data File page, click Provide the full path and name on the
database server machine and enter
/home/oracle/labs/lab_17_02_01.dat as the data file name and path,
or use the flashlight icon to select this data file. Click Next.
g) On the Load Data: Options page, accept all defaults, but enter
/home/oracle/labs/lab_17_02_01.log as the log file name and path.
Review the advanced options if you want, but do not change any, and then click
Next.
i) On the Load Data: Review page, review the loading information and parameters,
and then click Submit Job.
Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 -
Production
With the Partitioning, Automatic Storage Management, OLAP,
Data Mining
and Real Application Testing options
SQL> @lab_18_01_02.sql
Connected.

   FILE_NO   BLOCK_NO
---------- ----------
         9        129

Note: Write down this block number; you will need to enter it when prompted.
System altered.
3) Log in to Enterprise Manager as the DBA1 user in the SYSDBA role, and then view
the alerts on the Database home page and investigate the alert details. When the
incident appears in the alerts, click the Active Incidents link.
You should see one or more critical alerts. Depending on the timing, you may see one
or more of the following:
The number of Active Incidents may not match the number of alerts immediately.
Click the Active Incidents link.
8) When the problem details page appears, notice that the Investigate and Resolve
section has two tabs that allow you to pursue the solution yourself or enlist the aid of
Oracle Support.
10) Get advice. On the Checker Findings tab, in the Data Corruption section, select
the finding with the description that starts with “Datafile …” and click Launch
Recovery Advisor.
Appendix B
Basic Linux and
vi Commands
___________________
The vi (visual) editor is the most widely used text editor in the UNIX environment. Although almost
everybody curses its unwieldy command syntax, it is still the only editor almost certain to be
included with every version of the UNIX and Linux operating systems. The following is a partial list
of available vi commands.
vi has two modes: command mode (where anything typed is taken as an editing command) and input mode
(where everything typed is treated as part of the file being edited). To enter input mode, type a, A,
i, I, o, O, c, C, s, S, r, or R. To return to command mode, press the <ESC> key. To access the vi
editor from SQL*Plus, enter the following command:
SQL>define _editor=vi
<ctrl> f - Scroll forward one page <ctrl> b - Scroll backward one page
u - Undo the most recent change. U - Restore the current line to its state before changes.
:e! - Re-edit the current file, discarding all unsaved changes
A - Append text to the end of a line (jumps to end of line and begin appending).
c - Change object C - Change from current cursor position to end of the line
i - Insert text before the current cursor position. I - Insert text at the beginning of a line.
r - Replace character at current cursor position R - Replace all characters until <ESC> is pressed
options include: g (change all occurrences on the current line) and c (confirm before each change)
x - Delete the character directly under the current cursor location.
ndd (where n is an integer) - Delete n lines starting at the current line.
J - Delete the newline at the end of the current line (join this line and the next).
COPY, CUT, and PASTE: vi uses a single buffer where the last changed or deleted text is stored. This text may be
manipulated with the following commands:
Y - Yank a copy of the current line nyy - Yank a copy of the next n lines
yw - Yank a copy of the current word yb - Yank a copy of the previous word
p - Put buffer contents after cursor P - Put buffer contents before cursor
ZZ - Save changes and exit vi. :w filename - Save changes to the specified file.
:wq - Save changes and exit vi. :q! - Quit vi without saving changes.
This appendix is meant to serve only as a quick reference while you are in class. For more
details on these commands, consult the man pages, your Linux documentation, or other
Linux command reference books.
Files and Directories
Linux Command      Description/Comments
man <command>      Find the manual entry for this <command>.
man -k <string>    Show all the manual entries that contain this <string>.
man man            Display the manual page for the man command itself.
GRUB boot loader (press c at the GRUB menu for a command line, then enter):
  kernel vmlinuz-2.4.9-13 single ro root=/dev/hda8
  initrd /initrd-2.4.9-13.img
  boot
Table 1 shows each SQL statement and its related syntax. Table 2 shows the syntax of the
subclauses found in Table 1.
See Also: Oracle Database SQL Reference for detailed information about Oracle SQL
ANALYZE ANALYZE
{ TABLE [ schema. ]table
AUDIT AUDIT
{ sql_statement_clause | schema_object_clause }
[ BY { SESSION | ACCESS } ]
[ WHENEVER [ NOT ] SUCCESSFUL ] ;
CALL CALL
{ routine_clause
| object_access_expression
}
[ INTO :host_variable
[ [ INDICATOR ] :indicator_variable ] ] ;
COMMENT COMMENT ON
{ TABLE [ schema. ]
{ table | view }
| COLUMN [ schema. ]
{ table. | view. | materialized_view. } column
| OPERATOR [ schema. ] operator
| INDEXTYPE [ schema. ] indextype
NOAUDIT NOAUDIT
PURGE PURGE
{ { TABLE table
| INDEX index
}
| { RECYCLEBIN | DBA_RECYCLEBIN }
| TABLESPACE tablespace
[ USER user ]
} ;
TRUNCATE TRUNCATE
{ TABLE [ schema. ]table
[ { PRESERVE | PURGE } MATERIALIZED VIEW LOG ]
| CLUSTER [ schema. ]cluster
}
[ { DROP | REUSE } STORAGE ] ;
Subclause Syntax
activate_standby_db_clause ACTIVATE
[ PHYSICAL | LOGICAL ]
STANDBY DATABASE
[ SKIP [ STANDBY LOGFILE ] ]
add_disk_clause ADD
[ FAILGROUP failgroup_name ]
DISK qualified_disk_clause
[, qualified_disk_clause ]...
[ [ FAILGROUP failgroup_name ]
DISK qualified_disk_clause
[, qualified_disk_clause ]...
]...
add_table_partition { add_range_partition_clause
| add_hash_partition_clause
| add_list_partition_clause
}
alter_datafile_clause DATAFILE
{ 'filename' | filenumber }
[, 'filename' | filenumber ]...
alter_external_table_clauses { add_column_clause
| modify_column_clauses
| drop_column_clause
| parallel_clause
| external_data_properties
| REJECT LIMIT { integer | UNLIMITED }
| PROJECT COLUMN { ALL | REFERENCED }
}
[ add_column_clause
| modify_column_clauses
| drop_column_clause
| parallel_clause
| external_data_properties
| REJECT LIMIT { integer | UNLIMITED }
| PROJECT COLUMN { ALL | REFERENCED }
]...
alter_index_partitioning { modify_index_default_attrs
| add_hash_index_partition
| modify_index_partition
| rename_index_partition
| drop_index_partition
| split_index_partition
| coalesce_index_partition
| modify_index_subpartition
}
alter_iot_clauses { index_org_table_clause
| alter_overflow_clause
| alter_mapping_table_clauses
| COALESCE
}
alter_mv_refresh REFRESH
{ { FAST | COMPLETE | FORCE }
| ON { DEMAND | COMMIT }
| { START WITH | NEXT } date
| WITH PRIMARY KEY
| USING
{ DEFAULT MASTER ROLLBACK SEGMENT
| MASTER ROLLBACK SEGMENT
rollback_segment
}
| USING { ENFORCED | TRUSTED } CONSTRAINTS
}
alter_overflow_clause { OVERFLOW
{ allocate_extent_clause
| deallocate_unused_clause
}
[ allocate_extent_clause
| deallocate_unused_clause
]...
| add_overflow_clause
}
alter_system_reset_clause parameter_name
[ SCOPE = { MEMORY | SPFILE | BOTH } ]
SID = 'sid'
alter_system_set_clause parameter_name =
parameter_value [, parameter_value ]...
[ COMMENT 'text' ]
[ DEFERRED ]
alter_table_partitioning { modify_table_default_attrs
| set_subpartition_template
| modify_table_partition
| modify_table_subpartition
| move_table_partition
| move_table_subpartition
| add_table_partition
| coalesce_table_partition
| drop_table_partition
| drop_table_subpartition
alter_table_properties { { physical_attributes_clause
| logging_clause
| table_compression
| supplemental_table_logging
| allocate_extent_clause
| deallocate_unused_clause
| shrink_clause
| { CACHE | NOCACHE }
| upgrade_table_clause
| records_per_block_clause
| parallel_clause
| row_movement_clause
}
[ physical_attributes_clause
| logging_clause
| table_compression
| supplemental_table_logging
| allocate_extent_clause
| deallocate_unused_clause
| shrink_clause
| { CACHE | NOCACHE }
| upgrade_table_clause
| records_per_block_clause
| parallel_clause
| row_movement_clause
]...
| RENAME TO new_table_name
}
[ alter_iot_clauses ]
autoextend_clause AUTOEXTEND
{ OFF
| ON [ NEXT size_clause ]
[ maxsize_clause ]
}
binding_clause BINDING
bitmap_join_index_clause [ schema.]table
( [ [ schema. ]table. | t_alias. ]column
[ ASC | DESC ]
[, [ [ schema. ]table. | t_alias. ]column
[ ASC | DESC ]
]...
)
FROM [ schema. ]table [ t_alias ]
[, [ schema. ]table [ t_alias ]
]...
WHERE condition
[ local_partitioned_index ]
index_attributes
check_diskgroup_clauses CHECK
{ ALL
| DISK
disk_name
[, disk_name ]...
| DISKS IN FAILGROUP
failgroup_name
[, failgroup_name ]...
| FILE
filename
[, filename ]...
}
[ CHECK
{ ALL
| DISK
disk_name
[, disk_name ]...
| DISKS IN FAILGROUP
failgroup_name
[, failgroup_name ]...
| FILE
filename
[, filename ]...
}
]...
[ REPAIR | NOREPAIR ]
column_clauses { { add_column_clause
| modify_column_clause
| drop_column_clause
}
[ add_column_clause
| modify_column_clause
| drop_column_clause
]...
| rename_column_clause
| modify_collection_retrieval
[ modify_collection_retrieval ]...
| modify_LOB_storage_clause
| alter_varray_col_properties
}
column_properties { object_type_col_properties
| nested_table_col_properties
| { varray_col_properties | LOB_storage_clause }
[ (LOB_partition_storage
[, LOB_partition_storage ]...
)
]
| XMLType_column_properties
}
[ { object_type_col_properties
| nested_table_col_properties
| { varray_col_properties |
LOB_storage_clause }
[ (LOB_partition_storage
[, LOB_partition_storage ]...
)
]
| XMLType_column_properties
}
]...
compile_type_clause COMPILE
[ DEBUG ]
constraint { inline_constraint
| out_of_line_constraint
| inline_ref_constraint
| out_of_line_ref_constraint
}
constructor_declaration [ FINAL ]
[ INSTANTIABLE ]
CONSTRUCTOR FUNCTION datatype
[ [ SELF IN OUT datatype, ]
parameter datatype
[, parameter datatype ]...
]
RETURN SELF AS RESULT
{ IS | AS } { pl/sql_block | call_spec }
constructor_spec [ FINAL ]
[ INSTANTIABLE ]
CONSTRUCTOR FUNCTION datatype
[ ([ SELF IN OUT datatype, ]
create_mv_refresh { REFRESH
{ { FAST | COMPLETE | FORCE }
| ON { DEMAND | COMMIT }
| { START WITH | NEXT } date
| WITH { PRIMARY KEY | ROWID }
| USING
{ DEFAULT [ MASTER | LOCAL ]
ROLLBACK SEGMENT
| [ MASTER | LOCAL ]
ROLLBACK SEGMENT rollback_segment
}
[ DEFAULT [ MASTER | LOCAL ]
ROLLBACK SEGMENT
| [ MASTER | LOCAL ]
ROLLBACK SEGMENT rollback_segment
]...
datafile_tempfile_spec [ 'filename' ]
[ SIZE size_clause ]
[ REUSE ]
[ autoextend_clause ]
dependent_handling_clause { INVALIDATE
| CASCADE [ { [ NOT ] INCLUDING TABLE DATA
| CONVERT TO SUBSTITUTABLE
}
]
[ [FORCE ] exceptions_clause ]
}
diskgroup_availability { MOUNT
| DISMOUNT [ FORCE | NOFORCE ]
}
diskgroup_clauses { diskgroup_name
{ rebalance_diskgroup_clause
| check_diskgroup_clauses
| diskgroup_template_clauses
| diskgroup_directory_clauses
| diskgroup_alias_clauses
| drop_diskgroup_file_clause
}
| { diskgroup_name | ALL }
diskgroup_availability
}
dml_table_expression_clause { [ schema. ]
{ table
[ { PARTITION (partition)
| SUBPARTITION (subpartition)
}
| @ dblink
]
| { view | materialized view } [ @ dblink ]
}
| ( subquery [ subquery_restriction_clause ] )
| table_collection_expression
}
drop_constraint_clause DROP
{ { PRIMARY KEY
| UNIQUE (column [, column ]...)
}
[ CASCADE ]
[ { KEEP | DROP } INDEX ]
| CONSTRAINT constraint
[ CASCADE ]
}
drop_disk_clauses DROP
{ DISK
disk_name [ FORCE | NOFORCE ]
[, disk_name [ FORCE | NOFORCE ] ]...
| DISKS IN FAILGROUP
failgroup_name [ FORCE | NOFORCE ]
[, failgroup_name [ FORCE | NOFORCE ] ]...
}
element_spec [ inheritance_clauses ]
{ subprogram_spec
| constructor_spec
| map_order_function_spec
}
[ subprogram_clause
| constructor_spec
file_specification { datafile_tempfile_spec
| diskgroup_file_spec
| redo_log_file_spec
}
for_clause FOR
{ TABLE
| ALL [ INDEXED ] COLUMNS [ SIZE integer ]
| COLUMNS [ SIZE integer ]
{ column | attribute } [ SIZE integer ]
[ { column | attribute }
[ SIZE integer ]
]...
| ALL [ LOCAL ] INDEXES
}
[ FOR
{ TABLE
| ALL [ INDEXED ] COLUMNS
[ SIZE integer ]
| COLUMNS [ SIZE integer ]
{ column | attribute } [ SIZE integer ]
[ { column | attribute }
[ SIZE integer ]
]...
| ALL [ LOCAL ] INDEXES
}
]...
fully_qualified_file_name +diskgroup_name/db_name/file_type/
file_type_tag.filenumber.incarnation_number
function_association { FUNCTIONS
[ schema. ]function [, [ schema. ]function
]...
| PACKAGES
[ schema. ]package [, [ schema. ]package
]...
| TYPES
[ schema. ]type [, [ schema. ]type ]...
| INDEXES
[ schema. ]index [, [ schema. ]index ]...
| INDEXTYPES
[ schema. ]indextype [, [ schema.
]indextype ]...
}
{ using_statistics_type
| { default_cost_clause
[, default_selectivity_clause ]
| default_selectivity_clause
[, default_cost_clause ]
}
}
general_recovery RECOVER
[ AUTOMATIC ]
[ FROM 'location' ]
{ { full_database_recovery
| partial_database_recovery
| LOGFILE 'filename'
}
[ { TEST
grant_system_privileges { system_privilege
| role
| ALL PRIVILEGES
}
[, { system_privilege
group_by_clause GROUP BY
{ expr
index_org_table_clause [ { mapping_table_clause
| PCTTHRESHOLD integer
| key_compression
}
[ mapping_table_clause
| PCTTHRESHOLD integer
| key_compression
]...
]
[ index_org_overflow_clause ]
index_properties [ { { global_partitioned_index
| local_partitioned_index
}
| index_attributes
}
[ { { global_partitioned_index
| local_partitioned_index
}
| index_attributes
}
]...
| domain_index_clause
]
individual_hash_partitions (PARTITION
[ partition partitioning_storage_clause ]
[, PARTITION
[ partition partitioning_storage_clause
]
]...
)
inner_cross_join_clause table_reference
{ [ INNER ] JOIN table_reference
{ ON condition
| USING (column [, column ]...)
}
| { CROSS
| NATURAL [ INNER ]
}
JOIN table_reference
}
interval_day_to_second INTERVAL
'{ integer | integer time_expr | time_expr }'
{ { DAY | HOUR | MINUTE }
[ (leading_precision) ]
| SECOND
[ (leading_precision
[, fractional_seconds_precision ]
)
]
}
[ TO { DAY | HOUR | MINUTE | SECOND
[ (fractional_seconds_precision) ]
}
]
LOB_storage_clause LOB
{ (LOB_item [, LOB_item ]...)
STORE AS (LOB_parameters)
| (LOB_item)
STORE AS
{ LOB_segname (LOB_parameters)
| LOB_segname
| (LOB_parameters)
}
}
local_partitioned_index LOCAL
[ on_range_partitioned_table
| on_list_partitioned_table
| on_hash_partitioned_table
| on_comp_partitioned_table
]
logfile_clause LOGFILE
[ GROUP integer ] file_specification
[, [ GROUP integer ] file_specification ]...
materialized_view_props [ column_properties ]
[ table_partitioning_clauses ]
[ CACHE | NOCACHE ]
[ parallel_clause ]
[ build_clause ]
model_clause MODEL
[ cell_reference_options ]
[ return_rows_clause ]
[ reference_model ]
[ reference_model ]...
main_model
model_rules_clause RULES
[ UPSERT | UPDATE ]
[ { AUTOMATIC | SEQUENTIAL } ORDER ]
[ ITERATE (number) [ UNTIL (condition) ] ]
([ UPDATE | UPSERT ]
cell_assignment [ order_by_clause ] = expr
[ [ UPDATE | UPSERT ]
cell_assignment [ order_by_clause ] = expr
]...
)
modify_hash_subpartition { { allocate_extent_clause
| deallocate_unused_clause
| shrink_clause
| { LOB LOB_item
| VARRAY varray
}
modify_LOB_parameters
[ { LOB LOB_item
| VARRAY varray
}
modify_LOB_parameters
]...
}
| [ REBUILD ] UNUSABLE LOCAL INDEXES
}
modify_list_subpartition { allocate_extent_clause
| deallocate_unused_clause
| shrink_clause
| { LOB LOB_item | VARRAY varray }
modify_LOB_parameters
[ { LOB LOB_item | VARRAY varray }
modify_LOB_parameters
] ...
| [ REBUILD ] UNUSABLE LOCAL INDEXES
| { ADD | DROP } VALUES (value[, value ]...)
}
modify_LOB_parameters { storage_clause
| PCTVERSION integer
| RETENTION
| FREEPOOLS integer
| REBUILD FREEPOOLS
| { CACHE
| { NOCACHE | CACHE READS } [ logging_clause ]
}
[ storage_clause
| PCTVERSION integer
| RETENTION
| FREEPOOLS integer
| REBUILD FREEPOOLS
| { CACHE
| { NOCACHE | CACHE READS } [ logging_clause
]
multiset_intersect nested_table1
MULTISET INTERSECT [ ALL | DISTINCT ]
nested_table2
multiset_union nested_table1
MULTISET UNION [ ALL | DISTINCT ]
nested_table2
number [ + | - ]
{ digit [ digit ]... [ . ] [ digit [ digit ]...
]
| . digit [ digit ]...
}
[ e [ + | - ] digit [ digit ]... ]
[ f | d ]
numeric_file_name +diskgroup_name.filenumber.incarnation_number
on_list_partitioned_table ( PARTITION
[ partition
[ { segment_attributes_clause
| key_compression
}
[ segment_attributes_clause
| key_compression
]...
]
]
[, PARTITION
[ partition
[ { segment_attributes_clause
| key_compression
}
[ segment_attributes_clause
| key_compression
]...
]
]
]...
)
on_range_partitioned_table ( PARTITION
[ partition
[ { segment_attributes_clause
| key_compression
}
[ segment_attributes_clause
| key_compression
outer_join_clause table_reference
[ query_partition_clause ]
{ outer_join_type JOIN
| NATURAL [ outer_join_type ] JOIN
}
table_reference [ query_partition_clause ]
parallel_enable_clause PARALLEL_ENABLE
[ (PARTITION argument BY
{ ANY
| { HASH | RANGE } (column [, column ]...)
}
)
[ streaming_clause ]
]
partition_attributes [ { physical_attributes_clause
| logging_clause
| allocate_extent_clause
| deallocate_unused_clause
| shrink_clause
}
password_parameters { { FAILED_LOGIN_ATTEMPTS
| PASSWORD_LIFE_TIME
| PASSWORD_REUSE_TIME
| PASSWORD_REUSE_MAX
| PASSWORD_LOCK_TIME
| PASSWORD_GRACE_TIME
qualified_template_clause template_name
ATTRIBUTES
([ MIRROR | UNPROTECTED ]
[ FINE | COARSE ]
)
query_partition_clause PARTITION BY
query_table_expression { query_name
| [ schema. ]
{ table [ { PARTITION (partition)
| SUBPARTITION (subpartition)
}
[ sample_clause ]
| [ sample_clause ]
| @ dblink
]
| { view | materialized view } [ @ dblink ]
}
| (subquery [ subquery_restriction_clause ])
| table_collection_expression
}
recovery_clauses { general_recovery
| managed_standby_recovery
| BEGIN BACKUP
| END BACKUP
}
redo_log_file_spec [ 'filename'
| ('filename' [, 'filename' ]...)
]
referencing_clause REFERENCING
{ OLD [ AS ] old
| NEW [ AS ] new
| PARENT [ AS ] parent }
[ OLD [ AS ] old
| NEW [ AS ] new
| PARENT [ AS ] parent ]...
register_logfile_clause REGISTER
[ OR REPLACE ]
[ PHYSICAL | LOGICAL ]
LOGFILE
[ file_specification
[, file_specification ]...
]
FOR logminer_session_name
resize_disk_clauses RESIZE
{ ALL [ SIZE size_clause ]
| DISK
disk_name [ SIZE size_clause ]
[, disk_name [ SIZE size_clause ] ]...
| DISKS IN FAILGROUP
resource_parameters { { SESSIONS_PER_USER
| CPU_PER_SESSION
| CPU_PER_CALL
| CONNECT_TIME
| IDLE_TIME
| LOGICAL_READS_PER_SESSION
| LOGICAL_READS_PER_CALL
| COMPOSITE_LIMIT
}
revoke_system_privileges { system_privilege
| role
| ALL PRIVILEGES
}
[, { system_privilege
| role
| ALL PRIVILEGES
}
]...
FROM grantee_clause
segment_attributes_clause { physical_attributes_clause
| TABLESPACE tablespace
| logging_clause
}
[ physical_attributes_clause
| TABLESPACE tablespace
| logging_clause
]...
select_list { *
| { query_name.*
| [ schema. ]
{ table | view | materialized view } .*
| expr [ [ AS ] c_alias ]
}
[, { query_name.*
| [ schema. ]
{ table | view | materialized view } .*
single_table_insert insert_into_clause
{ values_clause [ returning_clause ]
| subquery
}
size_clause integer [ K | M | G | T ]
standby_database_clauses { activate_standby_db_clause
| maximize_standby_db_clause
| register_logfile_clause
| commit_switchover_clause
| start_standby_clause
| stop_standby_clause
}
[ parallel_clause ]
storage_clause STORAGE
({ INITIAL integer [ K | M ]
| NEXT integer [ K | M ]
| MINEXTENTS integer
| MAXEXTENTS { integer | UNLIMITED }
| PCTINCREASE integer
| FREELISTS integer
| FREELIST GROUPS integer
| OPTIMAL [ integer [ K | M ]
| NULL
]
| BUFFER_POOL { KEEP | RECYCLE | DEFAULT }
}
[ INITIAL integer [ K | M ]
| NEXT integer [ K | M ]
| MINEXTENTS integer
| MAXEXTENTS { integer | UNLIMITED }
| PCTINCREASE integer
| FREELISTS integer
| FREELIST GROUPS integer
| OPTIMAL [ integer [ K | M ]
| NULL
]
| BUFFER_POOL { KEEP | RECYCLE | DEFAULT }
]...
)
subquery [ subquery_factoring_clause ]
SELECT
[ hint ]
[ { { DISTINCT | UNIQUE }
| ALL
}
]
select_list
FROM table_reference
[, table_reference ]...
[ where_clause ]
[ hierarchical_query_clause ]
[ group_by_clause ]
[ HAVING condition ]
[ model_clause ]
[ { UNION [ ALL ]
| INTERSECT
| MINUS
}
(subquery)
supplemental_id_key_clause DATA
({ ALL
| PRIMARY KEY
| UNIQUE
| FOREIGN KEY
}
[, { ALL
| PRIMARY KEY
| UNIQUE
| FOREIGN KEY
}
]...
)
COLUMNS
supplemental_logging_props { supplemental_log_grp_clause
| supplemental_id_key_clause
}
table_partition_description [ segment_attributes_clause ]
[ table_compression | key_compression ]
[ OVERFLOW [ segment_attributes_clause ] ]
[ { LOB_storage_clause
| varray_col_properties
}
[ LOB_storage_clause
| varray_col_properties
]...
]
[ partition_level_subpartition ]
table_partitioning_clauses { range_partitioning
| hash_partitioning
| list_partitioning
| composite_partitioning
}
table_properties [ column_properties ]
[ table_partitioning_clauses ]
[ CACHE | NOCACHE ]
[ parallel_clause ]
[ ROWDEPENDENCIES | NOROWDEPENDENCIES ]
[ enable_disable_clause ]
[ enable_disable_clause ]...
[ row_movement_clause ]
[ AS subquery ]
tablespace_logging_clauses { logging_clause
| [ NO ] FORCE LOGGING
}
tablespace_state_clauses { ONLINE
| OFFLINE [ NORMAL | TEMPORARY | IMMEDIATE ]
| READ { ONLY | WRITE }
| { PERMANENT | TEMPORARY }
}
text [ N | n ]
{ 'c [ c ]...'
| { Q | q }
'quote_delimiter c [ c ]... quote_delimiter'
}
update_index_clauses { update_global_index_clause
| update_all_indexes_clause
}
update_set_clause SET
{ { (column [, column ]...) = (subquery)
XML_attributes_clause XMLATTRIBUTES
(value_expr [ AS c_alias ]
[, value_expr [ AS c_alias ]...
)
XMLType_storage STORE AS
{ OBJECT RELATIONAL
| CLOB [ { LOB_segname [ (LOB_parameters) ]
| LOB_parameters
}
]
XMLType_view_clause OF XMLTYPE
[ XMLSchema_spec ]
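As a quick illustration of some of the statement syntax summarized in Table 1, the following is a hedged sketch; the schema, table name, and comment text are hypothetical examples, not objects from this course.

```sql
-- Hypothetical objects; a sketch of the TRUNCATE, PURGE, and COMMENT
-- syntax shown in Table 1.
TRUNCATE TABLE hr.emp_history REUSE STORAGE;  -- keep the allocated extents

PURGE RECYCLEBIN;                             -- empty the current user's recycle bin

COMMENT ON TABLE hr.emp_history IS 'Historical employee rows';
```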
Appendix D
Oracle Background
Processes
___________________
This appendix is not an exhaustive list of all background processes and is meant to serve as a
quick reference. For more details on these background processes and any that have not been
mentioned here, consult the Oracle Database Reference guide.
General Processes
Acronym Process Name          Description                                  Required for     Started by
                                                                           basic operation  default
ARCn    Archiver Process      Writes filled redo logs to the archive log   No               No
                              location(s). Possible processes include
                              ARC0–ARC9 and ARCa–ARCt.
CJQ0    Job Queue Coordinator Spawns slave processes (Jnnn) to execute     No               Yes
                              jobs in the job queue.
RMAN Processes
Acronym Process Name            Description                                Required for     Started by
                                                                           basic operation  default
CTWR    Change Tracking Writer  Writes to the RMAN block change tracking   No               No
        Process                 file, a bitmap representing the entire
                                database. The bitmap has an associated
                                SCN, which is the SCN as of the last
                                backup.
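The CTWR process exists only while block change tracking is enabled. The following is a minimal SQL sketch; the tracking file path is an example assumption, not a required location.

```sql
-- Enabling block change tracking starts the CTWR process; the file path
-- below is illustrative only.
ALTER DATABASE ENABLE BLOCK CHANGE TRACKING
  USING FILE '/u01/app/oracle/oradata/orcl/change_tracking.f';

-- Verify: STATUS shows ENABLED while CTWR is active.
SELECT status, filename FROM v$block_change_tracking;
```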
Appendix E
Acronyms and
Terms
_________________
Flashback Drop A feature that enables you to undo the effects of a DROP TABLE
statement without resorting to traditional point-in-time recovery
Flashback Table A command that enables you to recover a table and all its
dependent objects from the recycle bin
Flashback Transaction A diagnostic tool that you can use to view changes made to the
Query database at the transaction level
Flashback Versions A query syntax that provides a history of changes made to a row
Query along with the corresponding identifier of the transaction that
made the change
Format mask elements A character literal that describes the format of datetime or number data
Instance The collection of shared memory and processes used to access the
Oracle database
IPC Interprocess communication
isqlplusctl Control utility for starting and stopping iSQL*Plus listener
processes
ISV Independent software vendor
Java pool A region of memory in the SGA that is used for all
session-specific Java code and data within the Java Virtual
Machine (JVM)
JDBC Java Database Connectivity
Jnnn Job queue processes; they execute scheduled jobs.
Keep buffer cache An area of memory in the SGA used to cache data in the buffer
cache for longer periods of time
Language and Character A statistics-based utility for determining the language and
Set File Scanner character set of unknown file text
Large pool An optional memory storage area used for buffering large I/O
requests
Redo log buffer A region of memory that caches redo information until it can be
written to disk
Redo Log File Sizing A feature of Enterprise Manager that offers redo log file-sizing
Advisor advice
Resource Manager A feature of the Oracle database that gives the Oracle database
server more control over resource management decisions, thus
circumventing problems resulting from inefficient operating
system management
Resumable space A means for suspending, and later resuming, the execution of
allocation large database operations in the event of space allocation failures
RMAN Recovery Manager
Oracle Restart
Oracle Restart is designed to improve the availability of your Oracle Database. It implements a
high availability solution for single-instance (nonclustered) environments only. For Oracle Real
Application Clusters (Oracle RAC) environments, the functionality to automatically restart
components is provided by Oracle Clusterware. Oracle Restart can monitor the health of, and
automatically restart, the following components:
• Database instances
• Oracle Net listener
• Database services
• ASM instance
• ASM disk groups
• Oracle Notification Services (ONS/eONS): Service for sending Fast Application
Notification (FAN) events to integrated clients upon failover. The eONS is used by Oracle
Enterprise Manager to receive notification of change in status of components managed by
Oracle Restart.
Restarting an ASM disk group means mounting it. The ability to restart ONS is applicable only
in Oracle Data Guard installations for automatic failover of connections between primary and
standby databases through FAN.
Answers: 1, 2, and 4
http://www.oracle.com/education
Oracle University
Oracle University is the world’s largest corporate educator with education centers around the
globe. The goal is 100% student satisfaction.
Oracle certifications are tangible, industry-recognized credentials that provide measurable
benefits to IT Professionals and their employers. Numerous certification paths exist, for
example, for DBAs:
• Oracle Certified Associate (OCA)
• Oracle Certified Professional (OCP)
• Oracle Certified Master (OCM), and
• Specialty certifications, for example, Oracle 10g: Managing Oracle on Linux Certified
Expert
• Consolidating different workloads to a single grid
• Virtualizing the information platform
• Flexible physical infrastructure (including databases and storage)
[Slide graphic: an Observer initiating fast-start failover between the primary and a
physical/snapshot standby database, with changes propagated to a non-Oracle database]
Streams Overview
A stream is a flow of information either within a database or from one database to another.
Oracle Streams is a set of processes and database structures that enable you to share data and
messages in a data stream. The unit of information that is put into a stream is called an event:
• DDL or DML changes, formatted as an LCR
• User-created events
Events are staged in and propagated between queues.
Streams is often thought of as replication in which all databases can be updatable, without
platform or release restrictions. Characteristics include:
• All sites: Active and updateable
• Automatic conflict detection and optional resolution
• Supporting data transformations
• Flexible configurations: n-way, hub & spoke, and so on
• Different database platforms, releases and schemas
• Providing high availability for applications (where update conflicts can be avoided or
managed)
Security
For more information about all security-related aspects of the database, visit the “Security
Technology Center,” which is updated regularly.
• What is an OBE?
– A set of hands-on, step-by-step instructions
• Where can I find them?
– http://www.oracle.com/technology/obe
• What is available?
– Hundreds of OBE tutorials on many of the Oracle product areas
Oracle by Example
The Oracle by Example (OBE) series provides hands-on, step-by-step instructions on how to use
various new features of Oracle products. OBEs help to reduce the time spent on learning a new
product capability and enhance the users’ understanding of how the feature can be implemented
in their environment. Currently, OBEs are available for the Oracle database, Fusion Middleware,
Oracle Application Server, Oracle Enterprise Manager Grid Control, Oracle Collaboration Suite,
JDeveloper, and Business Intelligence. OBEs can be accessed at
http://www.oracle.com/technology/obe.
• Free subscription
• Oracle Magazine Archives
http://www.oracle.com/technology/oramag/index.html
Oracle Magazine
Among the many different types of resources to which you have access from OTN is Oracle
Magazine. You can receive your free subscription also by mail.
http://www.oracle.com/technology/community/apps/index.html
http://metalink.oracle.com
My Oracle Support (formerly Oracle MetaLink)
My Oracle Support is your gateway to Oracle’s Support resources. Here you find answers to the
most common issues facing Oracle administrators and developers, as well as resources to solve
many of those issues.
Like Oracle Technology Network, My Oracle Support includes the most recent headlines about
issues that affect the Oracle professional.
Thank You!
Oracle University’s mission is to enhance the adoption of Oracle technology. Our goal is to
partner with you, providing information that is pertinent, timely, and relevant to your needs.
Please take a minute to complete the end-of-course evaluation and let us know how we can serve
you better. In the U.S., feel free to e-mail our office of customer satisfaction at:
[email protected]
If you have questions about continuing your Oracle education, need help finding a class, or want
to arrange for on-site training at your company, contact Oracle Education Services for
assistance. In the U.S., dial 800.529.0165. For contact numbers outside the U.S., visit the
following Web site:
http://www.oracle.com/education/index.html?contact.html
Thanks again. We hope to see you in another class!