Oracle DBA
Oracle Database Replay, a tool that captures SQL statements and lets you replay them all in
another database to test the changes before you actually apply them on a production database;
Transaction Management using Log Miner and Flashback Data Archive to get DML statements
from redo log files;
Online Patching;
Installing and upgrading the Oracle Database server and application tools.
Allocating system storage and planning future storage requirements for the database system.
Creating primary database storage structures (tablespaces) after application developers have
designed an application.
Creating primary objects (tables, views, indexes) once application developers have designed an
application.
Modifying the database structure, as necessary, from information given by application
developers.
Security Officers
In some cases, a site assigns one or more security officers to a database. A security officer enrolls users,
controls and monitors user access to the database, and maintains system security. As a DBA, you might
not be responsible for these duties if your site has a separate security officer.
Network Administrators
Some sites have one or more network administrators. A network administrator, for example, administers
Oracle networking products, such as Oracle Net Services.
Database Users
Database users interact with the database through applications or utilities. A typical user's
responsibilities include the following tasks:
Entering, modifying, and deleting data, where permitted
Generating reports from the data
Application Developers
Application developers design and implement database applications. Their responsibilities include the
following tasks:
Application developers can perform some of these tasks in collaboration with DBAs.
Application Administrators
An Oracle Database site can assign one or more application administrators to administer a particular
application.
Each application can have its own administrator.
1.2. Tasks of a Database Administrator
The following tasks present a prioritized approach for designing, implementing, and maintaining an
Oracle
Database:
Task 1: Evaluate the Database Server Hardware
Task 2: Install the Oracle Database Software
Task 3: Plan the Database
Task 4: Create and Open the Database
Task 5: Back Up the Database
Task 6: Enroll System Users
Task 7: Implement the Database Design
Task 8: Back Up the Fully Functional Database
Task 9: Tune Database Performance
Task 10: Download and Install Patches
Task 11: Roll Out to Additional Hosts
Note: When upgrading to a new release, back up your existing production environment, both software
and database, before installation.
Task 1: Evaluate the Database Server Hardware
Evaluate how Oracle Database and its applications can best use the available computer resources.
This evaluation should reveal the following information:
How many disk drives are available to the Oracle Products?
How many, if any, dedicated tape drives are available to Oracle products?
How much memory is available to the instances of Oracle Database you will run?
Task 2: Install the Oracle Database Software
As the database administrator, you install the Oracle Database server software and any front-end tools
and database applications that access the database. In some distributed processing installations, the
database is controlled by a central computer (database server) and the database tools and applications
are executed on remote computers (clients).

Step 1: Open a Command Window

Platform            Action
UNIX and Linux      Open a terminal session
Windows             Open a Command Prompt window

Use this command window for steps 2 through 4.
Step 2: Set Operating System Environment Variables
Depending on your platform, you may have to set environment variables before starting SQL*Plus, or at
least verify that they are set properly. For example, on most platforms, ORACLE_SID and ORACLE_HOME
must be set. In addition, it is advisable to set the PATH environment variable to include the
ORACLE_HOME/bin directory. Some platforms may require additional environment variables. On the
UNIX and Linux platforms, you must set environment variables by entering operating system commands.
On the Windows platform, Oracle Universal Installer (OUI) automatically assigns values to
ORACLE_HOME and ORACLE_SID in the Windows registry. If you did not create a database upon
installation, OUI does not set ORACLE_SID in the registry; after you create your database at a later time,
you must set the ORACLE_SID environment variable from a command window. UNIX and Linux
installations come with two scripts, oraenv and coraenv that you can use to easily set environment
variables. For all platforms, when switching between instances with different Oracle homes, you must
change the ORACLE_HOME environment variable. If multiple instances share the same Oracle home, you
must change only ORACLE_SID when switching instances.
Example: Setting Environment Variables in UNIX (C Shell)
setenv ORACLE_SID orcl
setenv ORACLE_HOME /u01/app/oracle/product/11.1.0/db_1
setenv LD_LIBRARY_PATH $ORACLE_HOME/lib:/usr/lib:/usr/dt/lib:/usr/openwin/lib:/usr/ccs/lib
Example: Setting Environment Variables in Windows
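A minimal sketch of the equivalent Windows commands (the SID and Oracle home path are illustrative, not taken from this guide):

SET ORACLE_SID=orcl
SET ORACLE_HOME=C:\app\oracle\product\11.1.0\db_1

On Windows these values are normally written to the registry by OUI, so setting them in a Command Prompt window is only needed when you want to override them for that session.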
Syntax Component        Description

/                       Calls for external authentication of the connection request. A database
                        password is not used in this type of authentication. The most common form of
                        external authentication is operating system authentication, where the database
                        user is authenticated by having logged in to the host operating system with a
                        certain host user account. External authentication can also be performed with
                        an Oracle wallet or by a network service.

AS {SYSOPER | SYSDBA}   Indicates that the connection is made with the SYSOPER or SYSDBA
                        administrative privilege.

username                A valid database user name. The database authenticates the connection using
                        the password stored for that user.

connect_identifier (1)  An Oracle Net connect identifier, for a remote connection, in the easy connect
                        form host[:port][/service_name], where:
                        host is the host name or IP address of the computer hosting the remote
                        database. Both IP version 4 (IPv4) and IP version 6 (IPv6) addresses are
                        supported. IPv6 addresses must be enclosed in square brackets.
                        port is the TCP port on which the Oracle Net listener on host listens for
                        database connections. If omitted, 1521 is assumed.
                        service_name is the database service name to which to connect. It can be
                        omitted if the Net Services listener configuration on the remote host
                        designates a default service. If no default service is configured,
                        service_name must be supplied. Each database typically offers a standard
                        service with a name equal to the global database name, which is made up of
                        the DB_NAME and DB_DOMAIN initialization parameters as follows:
                        DB_NAME.DB_DOMAIN
Example:
This simple example connects to a local database as user SYSTEM. SQL*Plus prompts for the SYSTEM
user password.
connect system
Example
This example connects to a local database as user SYS with the SYSDBA privilege. SQL*Plus prompts for
the SYS user password.
connect sys as sysdba
When connecting as user SYS, you must connect AS SYSDBA.
Example
This example connects locally with operating system authentication.
connect /
Example
This example connects locally with the SYSDBA privilege with operating system authentication.
connect / as sysdba
Example
This example uses easy connect syntax to connect as user salesadmin to a remote database running on
the host db1.mycompany.com. The Oracle Net listener (the listener) is listening on the default port
(1521). The database service is sales.mycompany.com. SQL*Plus prompts for the salesadmin user
password.
connect salesadmin@db1.mycompany.com/sales.mycompany.com
Example
This example is identical to the previous one, except that the listener is listening on the non-default port number 1522.
connect salesadmin@db1.mycompany.com:1522/sales.mycompany.com
Example
This example connects remotely as user salesadmin to the database service designated by the net
service name sales1. SQL*Plus prompts for the salesadmin user password.
connect salesadmin@sales1
Example
This example connects remotely with external authentication to the database service designated by the
net service name sales1.
connect /@sales1
Example
This example connects remotely with the SYSDBA privilege and with external authentication to the
database service designated by the net service name sales1.
connect /@sales1 as sysdba
Because Oracle Database continues to evolve and can require maintenance, Oracle periodically produces
new releases. Not all customers initially subscribe to a new release or require specific maintenance for
their existing release. As many as five numbers may be required to fully identify a release. The significance
of these numbers is discussed in the sections that follow.
1.4. Release Number Format
To understand the release nomenclature used by Oracle, examine the following example of an Oracle
Database release number: 11.2.0.1.0.
Note: Starting with release 9.2, maintenance releases of Oracle Database are denoted by a change to
the second digit of a release number. In previous releases, the third digit indicated a particular
maintenance release.
Major Database Release Number
The first digit is the most general identifier. It represents a major new version of the software that
contains significant new functionality.
Database Maintenance Release Number
The second digit represents a maintenance release level. Some new features may also be included.
Application Server Release Number
The third digit reflects the release level of the Oracle Application Server (OracleAS).
Component-Specific Release Number
The fourth digit identifies a release level specific to a component. Different components can have
different numbers in this position depending upon, for example, component patch sets or interim
releases.
Platform-Specific Release Number
The fifth digit identifies a platform-specific release. Usually this is a patch set. When different platforms
require the equivalent patch set, this digit will be the same across the affected platforms.
1.4.1. Checking Your Current Release Number
To identify the release of Oracle Database that is currently installed and to see the release levels of other
database components you are using, query the data dictionary view PRODUCT_COMPONENT_VERSION. A
sample query follows. (You can also query the V$VERSION view to see component-level information.)
Other product release levels may increment independent of the database server.
SQL> SELECT VERSION, STATUS FROM PRODUCT_COMPONENT_VERSION;

VERSION      STATUS
----------   ----------
11.2.0.0.1   Production
11.2.0.0.1   Production
11.2.0.0.1   Production
It is important to convey to Oracle the results of this query when you report problems with the software.
1.5. About Database Administrator Security and Privileges
To perform the administrative tasks of an Oracle Database DBA, you need specific privileges within the
database and possibly in the operating system of the server on which the database runs. Access to a
database administrator's account should be tightly controlled.
1.5.1. The Database Administrator's Operating System Account
To perform many of the administrative duties for a database, you must be able to execute operating
system commands. Depending on the operating system on which Oracle Database is running, you might
need an operating system account or ID to gain access to the operating system. If so, your operating
system account might require operating system privileges or access rights that other database users do
not require (for example, to perform Oracle Database software installation). Although you do not need
the Oracle Database files to be stored in your account, you should have access to them.
1.5.2. Administrative User Accounts
Two administrative user accounts are automatically created when Oracle Database is installed:
Note: The SYSDBA and SYSOPER system privileges allow access to a database instance
even when the database is not open. Control of these privileges is totally outside of the
database itself. While referred to as system privileges, SYSDBA and SYSOPER can also be
thought of as types of connections (for example, you specify: CONNECT AS SYSDBA) that
enable you to perform certain database operations for which privileges cannot be granted
in any other fashion.
The manner in which you are authorized to use these privileges depends upon the method of
authentication that you use. When you connect with SYSDBA or SYSOPER privileges, you connect with a
default schema, not with the schema that is generally associated with your username. For SYSDBA this
schema is SYS; for SYSOPER the schema is PUBLIC.

Connecting with Administrative Privileges: Example
This example illustrates that a user is assigned another schema (SYS) when connecting with the SYSDBA
system privilege. Assume that the sample user oe has been granted the SYSDBA system privilege and
has issued the following statements:
CONNECT oe
CREATE TABLE admin_test (name VARCHAR2(20));
Later, user oe issues these statements:
CONNECT oe AS SYSDBA
SELECT * FROM admin_test;
User oe now receives the following error:
ORA-00942: table or view does not exist
Having connected as SYSDBA, user oe now references the SYS schema, but the table was created in the
oe schema.
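If user oe still needs to query the table while connected AS SYSDBA, the schema can be qualified explicitly; a minimal illustration using the objects from this example:

SELECT * FROM oe.admin_test;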
Operating system authentication and password files are among the methods available for authenticating
a database administrator.
These methods are required to authenticate a database administrator when the database is not started
or otherwise unavailable. (They can also be used when the database is available.)
The remainder of this section focuses on operating system authentication and password file
authentication.
Notes:
These methods replace the CONNECT INTERNAL syntax provided with earlier versions of Oracle
Database. CONNECT INTERNAL is no longer supported.
Operating system authentication takes precedence over password file authentication. If you meet
the requirements for operating system authentication, then even if you use a password file, you
will be authenticated by operating system authentication.
Your choice will be influenced by whether you intend to administer your database locally on the same
system where the database resides, or whether you intend to administer many different databases from
a single remote client. Figure 1-2 illustrates the choices you have for database administrator
authentication schemes.
Figure 1-2 Database Administrator Authentication Methods
If you are performing remote database administration, consult your Oracle Net documentation to
determine whether you are using a secure connection. Most popular connection protocols, such as TCP/IP
and DECnet, are not secure.
If the database has a password file and you have been granted the SYSDBA or SYSOPER system
privilege, then you can connect and be authenticated by a password file.

If the server is not using a password file, or if you have not been granted SYSDBA or SYSOPER
privileges and are therefore not in the password file, you can use operating system authentication.
On most operating systems, authentication for database administrators involves placing the operating
system username of the database administrator in a special group, generically referred to as OSDBA.
Users in that group are granted SYSDBA privileges. A similar group, OSOPER, is used to grant SYSOPER
privileges to users.
Oracle Universal Installer uses these default names, but you can override them. One reason to override
them is if you have multiple instances running on the same host computer. If each instance is to have a
different person as the principal DBA, you can improve the security of each instance by creating a
different OSDBA group for each instance. For example, for two instances on the same host, the OSDBA
group for the first instance could be named dba1, and OSDBA for the second instance could be
named dba2. The first DBA would be a member of dba1 only, and the second DBA would be a member
of dba2 only. Thus, when using operating system authentication, each DBA would be able to connect
only to his assigned instance.
Membership in the OSDBA or OSOPER group affects your connection to the database in the following
ways:
If you are a member of the OSDBA group and you specify AS SYSDBA when you connect to the
database, then you connect to the database with the SYSDBA system privilege.
If you are a member of the OSOPER group and you specify AS SYSOPER when you connect to
the database, then you connect to the database with the SYSOPER system privilege.
If you are not a member of either of these operating system groups and you attempt to connect
as SYSDBA or SYSOPER, the CONNECT command fails.
2. Add the account to the OSDBA or OSOPER operating system defined groups.
3. If not already created, create the password file using the ORAPWD utility.
   When you invoke Database Configuration Assistant (DBCA) as part of the Oracle Database
   installation process, DBCA creates a password file.
   Beginning with Oracle Database 11g Release 1, passwords in the password file are case-sensitive
   unless you include the IGNORECASE = Y command-line argument.
4. Connect to the database as user SYS (or as another user with the administrative privileges).
5. If the user does not already exist in the database, create the user and assign a password.
   Keep in mind that beginning with Oracle Database 11g Release 1, database passwords are
   case-sensitive. (You can disable case sensitivity and return to pre-Release 11g behavior by setting
   the SEC_CASE_SENSITIVE_LOGON initialization parameter to FALSE.)
6. Grant the SYSDBA or SYSOPER system privilege to the user.
ORAPWD Argument   Description
FILE              Name to assign to the password file. You must supply a complete path. If you supply
                  only a file name, the file is written to the current directory.
ENTRIES           (Optional) Maximum number of entries (user accounts) to permit in the file.
FORCE             (Optional) If y, permits overwriting an existing password file.
IGNORECASE        (Optional) If y, passwords are treated as case-insensitive.

Platform          Required Name         Required Location
UNIX and Linux    orapwORACLE_SID       ORACLE_HOME/dbs
Windows           PWDORACLE_SID.ora     ORACLE_HOME\database
For example, for a database instance with the SID orcldw, the password file must be
named orapworcldw on Linux and PWDorcldw.ora on Windows.
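A minimal sketch of creating such a password file for the orcldw instance on Linux (the entry count is illustrative; ORAPWD prompts for the SYS password when it is not supplied on the command line):

orapwd FILE=$ORACLE_HOME/dbs/orapworcldw ENTRIES=30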
In an Oracle Real Application Clusters environment on a platform that requires an environment
variable to be set to the path of the password file, the environment variable for each instance
must point to the same password file.
Caution:
It is critically important to the security of your system that you protect your password file and
the environment variables that identify the location of the password file. Any user with access to
these could potentially compromise the security of the connection.
ENTRIES
This argument specifies the number of entries that you require the password file to accept. This
number corresponds to the number of distinct users allowed to connect to the database as SYSDBA or
SYSOPER.
REMOTE_LOGIN_PASSWORDFILE
The REMOTE_LOGIN_PASSWORDFILE initialization parameter accepts the following values:

NONE: Setting this parameter to NONE causes Oracle Database to behave as if the password file
does not exist. That is, no privileged connections are allowed over nonsecure connections.
EXCLUSIVE: (The default) An EXCLUSIVE password file can be used with only one instance of
one database. Only an EXCLUSIVE file can be modified. Using an EXCLUSIVE password file
enables you to add, modify, and delete users. It also enables you to change the SYS password
with the ALTER USER command.
SHARED: A SHARED password file can be used by multiple databases running on the same
server, or multiple instances of an Oracle Real Application Clusters (Oracle RAC) database.
A SHARED password file cannot be modified. Therefore, you cannot add users to a SHARED
password file. Any attempt to do so or to change the password of SYS or other users with
the SYSDBA or SYSOPER privileges generates an error. All users needing SYSDBA or SYSOPER
system privileges must be added to the password file when REMOTE_LOGIN_PASSWORDFILE is set to
EXCLUSIVE. After all users are added, you can change REMOTE_LOGIN_PASSWORDFILE to SHARED,
and then share the file.
This option is useful if you are administering multiple databases or an Oracle RAC database.
Suggestion: To achieve the greatest level of security, you should set the
REMOTE_LOGIN_PASSWORDFILE initialization parameter to EXCLUSIVE immediately after
creating the password file.
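Because REMOTE_LOGIN_PASSWORDFILE is a static parameter, a sketch of setting it when the instance uses an spfile (the change takes effect after the instance is restarted) would be:

ALTER SYSTEM SET REMOTE_LOGIN_PASSWORDFILE=EXCLUSIVE SCOPE=SPFILE;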
Note:
You cannot change the password for SYS if REMOTE_LOGIN_PASSWORDFILE is set to SHARED. An
error message is issued if you attempt to do so.
1. Find all users who have been granted the SYSDBA privilege.
   SELECT USERNAME FROM V$PWFILE_USERS WHERE USERNAME != 'SYS' AND SYSDBA='TRUE';
2. Find all users who have been granted the SYSOPER privilege.
   SELECT USERNAME FROM V$PWFILE_USERS WHERE USERNAME != 'SYS' AND SYSOPER='TRUE';
To prepare to use password file authentication:
1. Follow the instructions for creating a password file as explained in "Creating a Password File with
   ORAPWD".
2. Set the REMOTE_LOGIN_PASSWORDFILE initialization parameter to EXCLUSIVE (the default).
3. Connect to the database as user SYS with the SYSDBA privilege, as shown in the following example,
   and enter the SYS password when prompted:
   CONNECT SYS AS SYSDBA
4. Start up the instance and create the database if necessary, or mount and open an existing
   database.
5. Create users as necessary. Grant SYSDBA or SYSOPER privileges to yourself and other users as
   appropriate.

Granting and Revoking SYSDBA and SYSOPER Privileges
If your server is using an EXCLUSIVE password file, use the GRANT statement to grant
the SYSDBA or SYSOPER system privilege to a user, as shown in the following example:

GRANT SYSDBA TO oe;

Do not confuse the SYSDBA and SYSOPER system privileges with the database role DBA; the DBA role
does not include the SYSDBA or SYSOPER system privileges.
Column      Description
USERNAME    This column contains the name of the user that is recognized by the password file.
SYSDBA      If the value of this column is TRUE, then the user can log on with the SYSDBA system
            privileges.
SYSOPER     If the value of this column is TRUE, then the user can log on with the SYSOPER system
            privileges.
SYSASM      If the value of this column is TRUE, then the user can log on with the SYSASM system
            privileges.
Note:
SYSASM is valid only for Oracle Automatic Storage Management instances.
Maintaining a Password File
This section describes how to expand the number of password file users if the password file becomes
full.

1. Identify the users who have been granted SYSDBA or SYSOPER privileges by querying
   the V$PWFILE_USERS view.
2. Delete the existing password file.
3. Follow the instructions for creating a new password file using the ORAPWD utility in "Creating a
   Password File with ORAPWD". Ensure that the ENTRIES parameter is set to a number larger
   than you think you will ever need.
4. Grant SYSDBA or SYSOPER privileges to the users identified in step 1.
Preinstallation Requirements
Log in as root.
Memory
RAM: At least 4 GB
swap space
The following table describes the relationship between installed RAM and the configured swap space
requirement:
RAM                       Swap Space
Between 4 GB and 8 GB
Between 8 GB and 32 GB
More than 32 GB           32 GB
To determine the size of the configured swap space, enter the following command:

# grep SwapTotal /proc/meminfo
SwapTotal:     10860752 kB
To determine the available RAM and swap space, enter the following command:

# free
             total       used       free     shared    buffers     cached
Mem:       4055260    3926916     128344          0      13892    3913024
Swap:     10860752
# mkdir /data/
# dd if=/dev/zero of=/data/swapfile.1 bs=1024 count=65536
65536+0 records in
65536+0 records out
67108864 bytes (67 MB) copied, 1.3094 seconds, 51.3 MB/s
# /sbin/mkswap /data/swapfile.1
Setting up swapspace version 1, size = 67104 kB
# /sbin/swapon /data/swapfile.1
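To have this swap file activated automatically at boot, an entry along the following lines (matching the file created above) can be added to /etc/fstab:

/data/swapfile.1        swap        swap    defaults        0 0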
The /etc/fstab file then contains entries similar to the following:

/dev/VolGroup00/LogVol00    /            ext3      defaults          1 1
LABEL=/boot                 /boot        ext3      defaults          1 2
tmpfs                       /dev/shm     tmpfs     defaults          0 0
devpts                      /dev/pts     devpts    gid=5,mode=620    0 0
sysfs                       /sys         sysfs     defaults          0 0
proc                        /proc        proc      defaults          0 0
On SUSE Linux, enter one of the following commands:

yast
yast2

To determine the page size, enter one of the following commands:

# getconf PAGESIZE
4096
# getconf PAGE_SIZE
4096
Support
If the swapon command fails with an error such as:

# /sbin/swapon /data/swapfile.1
swapon: /data/swapfile.1: Invalid argument

verify that the file (/data/swapfile.1) has been initialized as a Linux swap file with the command
/sbin/mkswap.
Installation Type       Software (GB)   Data (GB)
Enterprise Edition      4.35            1.68
Standard Edition        3.73            1.48
Minimal Kernel
2.6.9 or later
binutils-2.17.50.0.6
compat-libstdc++-33-3.2.3
compat-libstdc++-33-3.2.3 (32 bit)
elfutils-libelf-0.125
elfutils-libelf-devel-0.125
gcc-4.1.2
gcc-c++-4.1.2
glibc-2.5-24
glibc-2.5-24 (32 bit)
glibc-common-2.5
glibc-devel-2.5
glibc-devel-2.5 (32 bit)
glibc-headers-2.5
ksh-20060214
libaio-0.3.106
libaio-0.3.106 (32 bit)
The numa package link for Linux x86 is /usr/lib, and for Linux x86-64 it is /usr/lib64/.
To determine whether the required packages are installed, enter commands similar to the following:
rpm -qa | grep beginning_of_the_package_name
[root@di-rep-db Server]# rpm -qa | grep elfutils
elfutils-libelf-devel-static-0.137-3.el5
elfutils-libelf-devel-0.137-3.el5
elfutils-libelf-0.137-3.el5
If a package is not installed, then install it from the Linux distribution media or download the required
package version from the Linux vendor's Web site.
rpm -Uvh binutils*
rpm -Uvh compat-libstdc++*
rpm -Uvh elfutils-libelf*
rpm -Uvh gcc*
rpm -Uvh glibc*
rpm -Uvh ksh*
rpm -Uvh libaio*
rpm -Uvh libgcc*
rpm -Uvh libstdc++*
rpm -Uvh make*
rpm -Uvh numactl-devel*
rpm -Uvh sysstat*
Database Connectivity
Oracle ODBC Drivers
If you intend to use ODBC, then install the most recent ODBC Driver Manager for Linux. Download and
install the Driver Manager from the following URL:
http://www.unixodbc.org
On Linux x86-64:

OEL 4:
  unixODBC-2.2.11 (32 bit) or later
  unixODBC-devel-2.2.11 (64 bit) or later
  unixODBC-2.2.11 (64 bit) or later

OEL 5:
  unixODBC-2.2.11 (32 bit) or later
  unixODBC-devel-2.2.11 (64 bit) or later
  unixODBC-2.2.11 (64 bit) or later
Checks and sets kernel parameters to values required for successful installation, including:
Shared memory parameters
Semaphore parameters
Open file descriptor and UDP send/receive parameters
Sets permissions on the Oracle Inventory directory.
Reconfigures primary and secondary group memberships for the installation owner, if necessary,
for the Oracle Inventory directory, and for the operating system privileges groups.
Sets up virtual IP and private IP addresses in /etc/hosts.
Sets shell limits to required values, if necessary.
Installs the Cluster Verification Utility packages (cvuqdisk rpm).
Using fixup scripts will not ensure that all the prerequisites for installing Oracle Database are satisfied.
You must still verify that all the preinstallation requirements are met to ensure a successful installation.
Network Setup
DNS
Verify the contents of the DNS configuration file /etc/resolv.conf. The nameserver entry must either be
absent or point to a valid DNS server, and you can add the two resolver time-out parameters (options
timeout and attempts) to reduce delays from unreachable name servers.
Disable Secure Linux (SELinux)
Disable Secure Linux by editing the /etc/selinux/config file, making sure the SELINUX flag is set as
follows:
SELINUX=disabled
Alternatively, this change can be made using the SELinux GUI administration tool.
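A quick way to confirm the current SELinux mode (a reboot is needed before the change in the config file takes effect) is:

# getenforce
Disabled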
Operating System Groups
Log in as root.
Installation Groups
Create the OS groups listed in the table below.

/usr/sbin/groupadd -g 501 oinstall
/usr/sbin/groupadd -g 502 dba
/usr/sbin/groupadd -g 503 oper
oinstall (OS Group ID 501)
  OS users assigned to this group: oracle
  Description: Oracle Inventory and Software Owner

dba (OS Group ID 502)
  OS users assigned to this group: oracle, jhunter
  Oracle privilege: SYSDBA
  Oracle group name: OSDBA
  Description: Database Administrator

oper (OS Group ID 503)
  OS users assigned to this group: oracle, jhunter
  Oracle privilege: SYSOPER
  Oracle group name: OSOPER
  Description: Database Operator
OS Group Descriptions
This group must be created the first time you install Oracle software on the system.
Members of the OINSTALL group are considered the "owners" of the Oracle software and
are granted privileges to write to the Oracle central inventory (oraInventory). When you
install Oracle software on a Linux system for the first time, OUI creates
the /etc/oraInst.loc file. This file identifies the name of the Oracle Inventory group (by
default, oinstall), and the path of the Oracle Central Inventory directory.
Ensure that this group is available as a primary group for all planned Oracle software
installation owners. For the purpose of this guide, the oracle installation owner will be
configured with oinstall as its primary group.
Members of the OSDBA group can use SQL to connect to an Oracle instance
as SYSDBA using operating system authentication. Members of this group can perform
critical database administration tasks, such as creating the database and instance startup
and shutdown. The default name for this group is dba. The SYSDBA system privilege allows
access to a database instance even when the database is not open. Control of this privilege
is totally outside of the database itself.
The oracle installation owner should be a member of the OSDBA group (configured as a
secondary group) along with any other DBA user accounts (i.e. jhunter) needing access
to an Oracle instance as SYSDBA using operating system authentication.
The SYSDBA system privilege should not be confused with the database role DBA.
The DBA role does not include the SYSDBA or SYSOPER system privileges.
Members of the OSOPER group can use SQL to connect to an Oracle instance
as SYSOPER using operating system authentication. Members of this optional group have a
limited set of database administrative privileges such as managing and running backups.
The database being created in this guide will not make use of Automatic Storage
Management (ASM) and therefore will not create or assign the ASM related OS groups like
asmadmin, asmdba, and asmoper.
Create the oracle software owner account with oinstall as its primary group, dba and oper as secondary
groups, and bash as its shell:

[root@testnode1 ~]# /usr/sbin/useradd -u 501 -g oinstall -G dba,oper -s /bin/bash -c "Oracle Software Owner" oracle
[root@testnode1 ~]# passwd oracle
# ---------------------------------------------------
# .bash_profile
# ---------------------------------------------------
# OS User: oracle
# ---------------------------------------------------

# ---------------------------------------------------
# SQLPATH
# ---------------------------------------------------
# Specifies the directory or list of directories that
# SQL*Plus searches for a login.sql file.
# ---------------------------------------------------
SQLPATH=/u01/app/oracle/dba_scripts/sql; export SQLPATH
# --------------------------------------------------# ORACLE_TERM
# --------------------------------------------------# Defines a terminal definition. If not set, it
# defaults to the value of your TERM environment
# variable. Used by all character mode products.
# ---------------------------------------------------
ORACLE_TERM=xterm; export ORACLE_TERM
# --------------------------------------------------# NLS_DATE_FORMAT
# --------------------------------------------------# Specifies the default date format to use with the
# TO_CHAR and TO_DATE functions. The default value of
# this parameter is determined by NLS_TERRITORY. The
# value of this parameter can be any valid date
# format mask, and the value must be surrounded by
# double quotation marks. For example:
#
#
NLS_DATE_FORMAT = "MM/DD/YYYY"
#
# ---------------------------------------------------
NLS_DATE_FORMAT="DD-MON-YYYY HH24:MI:SS"; export NLS_DATE_FORMAT
# --------------------------------------------------# TNS_ADMIN
# --------------------------------------------------# Specifies the directory containing the Oracle Net
# Services configuration files like listener.ora,
# tnsnames.ora, and sqlnet.ora.
# ---------------------------------------------------
TNS_ADMIN=$ORACLE_HOME/network/admin; export TNS_ADMIN
# --------------------------------------------------# ORA_NLS11
# --------------------------------------------------# Specifies the directory where the language,
# territory, character set, and linguistic definition
# files are stored.
# ---------------------------------------------------
ORA_NLS11=$ORACLE_HOME/nls/data; export ORA_NLS11

# ---------------------------------------------------
# PATH
# ---------------------------------------------------
export PATH
At the end of this section, you should have the following user, groups, and directory path configuration.
A separate OSDBA group (dba), whose members include oracle, and who are granted the
SYSDBA privilege to administer the Oracle Database.
A separate OSOPER group (oper), whose members include oracle, and who are granted limited
Oracle database administrator privileges.
An Oracle Database software owner (oracle), with the oraInventory group as its primary group,
and with the OSDBA (dba) and OSOPER (oper) group as its secondary group.
OFA-compliant mount points /u01, /u02, and /u03 that will be used for the Oracle software
installation, data files, and recovery files.
During installation, OUI creates the Oracle Inventory directory in the path /u01/app/oraInventory.
This path remains owned by oracle:oinstall, to enable other Oracle software owners to write to the
central inventory.
An OFA-compliant data files directory under /u02, owned by oracle:oinstall with 775 permissions.
An OFA-compliant recovery files directory /u03/app/oracle/fast_recovery_area, owned
by oracle:oinstall with 775 permissions.
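As a minimal sketch of preparing these mount points (the directory names under /u02 and /u03 are illustrative), run the following as root:

# mkdir -p /u01/app/oracle
# mkdir -p /u02/app/oracle/oradata
# mkdir -p /u03/app/oracle/fast_recovery_area
# chown -R oracle:oinstall /u01/app /u02/app /u03/app
# chmod -R 775 /u01/app /u02/app /u03/app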
Resource                                          Shell Limit   Soft Limit          Hard Limit
Open file descriptors                             nofile        at least 1024       at least 65536
Number of processes available to a single user    nproc         at least 2047       at least 16384
Size of the stack segment of the process          stack         at least 10240 KB

Add the following lines to the /etc/security/limits.conf file:

oracle   soft   nproc    2047
oracle   hard   nproc    16384
oracle   soft   nofile   1024
oracle   hard   nofile   65536
oracle   soft   stack    10240
Add the following line to the /etc/pam.d/login file, if it does not already exist.
session    required     pam_limits.so
Depending on your shell environment, make the following changes to the default shell startup file in order to
change ulimit settings for the Oracle installation owner.
For the Bourne, Bash, or Korn shell, add the following lines to the /etc/profile file.
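A typical form of these lines (a sketch, assuming the oracle account name used throughout this guide) is:

if [ $USER = "oracle" ]; then
    if [ $SHELL = "/bin/ksh" ]; then
        ulimit -p 16384
        ulimit -n 65536
    else
        ulimit -u 16384 -n 65536
    fi
fi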
Kernel Parameters
The kernel parameters presented in this section are only recommended values as documented by Oracle.
For production database systems, Oracle recommends that you tune these values to optimize the
performance of the system.
Verify that the kernel parameters described in this section are set to values greater than or equal to the
recommended values. Also note that when setting the four semaphore values that all four values need to be
entered on one line.
Oracle Database 11g Release 2 for Linux requires the kernel parameter settings shown below. The values
given are minimums, so if your system uses a larger value, do not change it.
kernel.shmmax = 4294967295
kernel.shmall = 2097152
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576
fs.file-max = 6815744
fs.aio-max-nr = 1048576
RHEL/OL/CentOS 6 already comes configured with default values defined for the following
kernel parameters.
kernel.shmmax
kernel.shmall
The default values for these two kernel parameters should be overwritten with the
recommended values defined in this guide.
# +---------------------------------------------------------+
# | KERNEL PARAMETERS FOR ORACLE DATABASE 11g R2 ON LINUX    |
# +---------------------------------------------------------+

# +---------------------------------------------------------+
# | SHARED MEMORY                                            |
# +---------------------------------------------------------+

# Maximum size (in bytes) of a single shared memory segment
kernel.shmmax = 4294967295

# Total amount of shared memory (in pages) available system-wide
kernel.shmall = 2097152

# Maximum number of shared memory segments system-wide
kernel.shmmni = 4096

# +---------------------------------------------------------+
# | SEMAPHORES                                               |
# +---------------------------------------------------------+

# kernel.sem = SEMMSL_value SEMMNS_value SEMOPM_value SEMMNI_value
kernel.sem = 250 32000 100 128

# +---------------------------------------------------------+
# | NETWORKING                                               |
# +---------------------------------------------------------+
# Defines the local port range that is used by TCP and UDP
# traffic to choose the local port
net.ipv4.ip_local_port_range = 9000 65500
# Default setting in bytes of the socket "receive" buffer which
# may be set by using the SO_RCVBUF socket option
net.core.rmem_default = 262144
# Maximum setting in bytes of the socket "receive" buffer which
# may be set by using the SO_RCVBUF socket option
net.core.rmem_max = 4194304
# Default setting in bytes of the socket "send" buffer which
# may be set by using the SO_SNDBUF socket option
net.core.wmem_default = 262144
# Maximum setting in bytes of the socket "send" buffer which
# may be set by using the SO_SNDBUF socket option
net.core.wmem_max = 1048576
# +---------------------------------------------------------+
# | FILE HANDLES                                             |
# +---------------------------------------------------------+
# Maximum number of file-handles that the Linux kernel will allocate
fs.file-max = 6815744
# Maximum number of allowable concurrent asynchronous I/O requests
fs.aio-max-nr = 1048576
Placing the kernel parameters in the /etc/sysctl.conf startup file persists the required kernel
parameters through reboots. Linux allows modification of these kernel parameters to the current system
while it is up and running, so there's no need to reboot the system after making kernel parameter changes.
To activate the new kernel parameter values for the currently running system, run the following as root:

# /sbin/sysctl -p
RAM                       Swap Space
Between 1 GB and 2 GB     1.5 times the size of RAM
Between 2 GB and 16 GB    Equal to the size of RAM
More than 16 GB           16 GB
Use the following command to determine the size of the configured swap space:

# grep SwapTotal /proc/meminfo
If necessary, additional swap space can be configured by creating a temporary swap file and adding it to the
current swap. This way you do not have to use a raw device or even more drastic, rebuild your system.
1. As root, make a file that will act as additional swap space, let's say about 500MB.
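Mirroring the earlier 64 MB example, a roughly 500 MB swap file could be created and enabled as follows (the file name is illustrative):

# dd if=/dev/zero of=/data/swapfile.2 bs=1024 count=512000
# /sbin/mkswap /data/swapfile.2
# /sbin/swapon /data/swapfile.2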
# free
             total       used       free     shared    buffers     cached
Mem:       3786740     975208    2811532          0      49456     925752
Swap:      6258680
Network Configuration
During the Linux OS install, we already configured the IP address and host name for the database node.
This sections contains additional network configuration steps that will prepare the machine to run the Oracle
database.
Note that the Oracle database server should have a static IP address configured for the public network
(eth0 for this guide). Do not use DHCP naming for the public IP address; you need a static IP address.
Confirm the Node Name is Not Listed in Loopback Address
Ensure that the node name (testnode1) is not included in the loopback address entry in the /etc/hosts
file. If the machine name is listed in the loopback address entry, as below:

127.0.0.1   testnode1 localhost localhost.localdomain localhost4 localhost4.localdomain4

remove it from that entry and give the node its own entry of the form:

<IP-address>   <fully-qualified-machine-name>   <machine-name>

For example:

127.0.0.1       localhost localhost.localdomain localhost4 localhost4.localdomain4
192.168.1.106   testnode1.idevelopment.info testnode1
Oracle Database 11g Release 2 (11.2.0.1), from OTN / eDelivery / MOS:
  linux.x64_11gR2_database_1of2.zip
  linux.x64_11gR2_database_2of2.zip

Oracle Database 11g Release 2 Examples (11.2.0.1), from OTN / eDelivery / MOS:
  linux.x64_11gR2_examples.zip

Oracle Database 11g Release 2 Patch Set (11.2.0.2), from MOS, patch 10098816:
  p10098816_112020_Linux-x86-64_1of7.zip
  p10098816_112020_Linux-x86-64_2of7.zip

Oracle Database 11g Release 2 Examples (11.2.0.2), from MOS, patch 10098816:
  p10098816_112020_Linux-x86-64_6of7.zip

Oracle Database 11g Release 2 Patch Set (11.2.0.3), from MOS, patch 10404530:
  p10404530_112030_Linux-x86-64_1of7.zip
  p10404530_112030_Linux-x86-64_2of7.zip

Oracle Database 11g Release 2 Examples (11.2.0.3), from MOS, patch 10404530:
  p10404530_112030_Linux-x86-64_6of7.zip
[oracle@testnode1 ~]$ id
uid=501(oracle) gid=501(oinstall) groups=501(oinstall),502(dba),503(oper)
[oracle@testnode1 ~]$ cd /home/oracle/software/oracle/database
[oracle@testnode1 database]$ ./runInstaller
At any time during installation, if you have a question about what you are being asked to do, click
the Help button on the OUI page.
The prerequisite checks will fail for the following version-dependent reasons. As mentioned at
the beginning of this guide, RHEL6 and OL6 are not certified or supported for use with any Oracle Database
version at the time of this writing.
11.2.0.1: The installer shows multiple "missing package" failures because it does not recognize
several of the newer version packages that were installed. These "missing package" failures can be
ignored as the packages are present. The failure for the "pdksh" package can be ignored because it
is no longer part of RHEL6 and we installed the "ksh" package in its place.
11.2.0.2: The installer should only show a single "missing package" failure for the "pdksh" package.
The failure for the "pdksh" package can be ignored because it is no longer part of RHEL6 and we
installed the "ksh" package in its place.
Configure Security Updates
To stay informed with the latest security issues, enter your e-mail address, preferably your
My Oracle Support e-mail address or user name in the Email field. You can select the "I wish
to receive security updates via My Oracle Support" check box to receive security updates.
Enter your My Oracle Support password in the "My Oracle Support Password" field.
For the purpose of this example, un-check the security updates check-box and click
the [Next] button to continue.
Acknowledge the warning dialog indicating you have not provided an email address by
clicking the [Yes] button.
Installation Option
Grid Options
Product Languages
Database Edition
Installation Location
Specify the Oracle base and Software location (Oracle home) as follows.
Oracle Base: /u01/app/oracle
Software Location: /u01/app/oracle/product/11.2.0/dbhome_1
Create Inventory
Since this is the first install on the host, you will need to create the Oracle
Inventory. Use the default values provided by the OUI.
Inventory Directory: /u01/app/oraInventory
oraInventory Group Name: oinstall
Operating System Groups
Prerequisite Checks
The installer will run through a series of checks to determine if the machine
and OS configuration meet the minimum requirements for installing the
Oracle Database software.
Starting with 11g Release 2, if any checks fail, the installer (OUI) will create
shell script programs called fixup scripts to resolve many incomplete system
configuration requirements. If OUI detects an incomplete task that is
marked "fixable", then you can easily fix the issue by generating the fixup
script by clicking the [Fix & Check Again] button.
The fixup script is generated during installation. You will be prompted to run
the script as root in a separate terminal session. When you run the script,
it raises kernel values to required minimums, if necessary, and completes other operating system
configuration tasks.
Summary
Install Product
After the installation completes, you will be prompted to run
the /u01/app/oraInventory/orainstRoot.sh and /u01/app/oracle/product/11.2.0/dbhome_1/root.sh
scripts. Open a new console window as the root user account and execute the orainstRoot.sh script.
Execute Configuration Scripts
At the end of the installation, click the [Close] button to exit the OUI.
SQL> archive log list

Database log mode              Archive Mode
Automatic archival             Enabled
Archive destination            USE_DB_RECOVERY_FILE_DEST
Oldest online log sequence     74
Next log sequence to archive   76
Current log sequence           76
Tablespace Name     TS Type      Ext. Mgt.
------------------  -----------  ---------
EXAMPLE             PERMANENT    LOCAL
SYSAUX              PERMANENT    LOCAL
SYSTEM              PERMANENT    LOCAL
TEMP                TEMPORARY    LOCAL
UNDOTBS1            UNDO         LOCAL
USERS               PERMANENT    LOCAL

avg                 70
sum                 2,153,775,104   1,765,015,552

6 rows selected.
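For reference, a simple query that returns the same tablespace name, type, and extent management information is:

SELECT tablespace_name, contents, extent_management
FROM   dba_tablespaces
ORDER BY tablespace_name;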
To obtain a list of all available Oracle DBA scripts while logged into SQL*Plus, run the help.sql script.
SQL> @help.sql
========================================
Automatic Shared Memory Management
========================================
asmm_components.sql
========================================
Automatic Storage Management
========================================
asm_alias.sql
asm_clients.sql
asm_diskgroups.sql
asm_disks.sql
asm_disks_perf.sql
Client Machine/Webserver:
The client is the end user who accesses the database to retrieve information.
Clients can access the database in various ways: SQL*Plus, SQL Developer, other third-party tools such
as TOAD and PL/SQL Developer, or a web URL. A client can be remote or local to the database server,
which means that the webserver and middleware layers are optional and the database can be accessed
directly from its own server. In complex and critical application setups, a multi-tier approach is followed
to make administration, security enforcement, patching/upgrades, backup, restoration, monitoring,
license management, and hardware management of every component more efficient.
Middleware Application:
This layer sits between the client and the database and consists of data retrieval policies, functions,
application/Java/PL/SQL code, the user interface, and so on. Oracle CRM, Oracle Fusion Middleware,
and other vendors' application products are found in this layer.
Database Server:
This is the Oracle Database itself, located on a server running any supported platform such as Windows,
Solaris, AIX, HP-UX, or Linux.
The following section explains the correlation between the basic database components and their usage,
with reference to the Oracle architecture described above. (From 8i to 11g, Oracle has added many new
components; only the most common ones are covered here, to keep the explanation simple.)
The figure shows a one-to-one correspondence between the User and Server Processes. This is called
a Dedicated Server connection. An alternative configuration is to use a Shared Server where more
than one User Process shares a Server Process.
Sessions: When a user connects to an Oracle server, this is termed a session. The User Global
Area is session memory and these memory structures are described later in this document. The session
starts when the Oracle server validates the user for connection. The session ends when the user logs out
(disconnects) or if the connection terminates abnormally (network failure or client computer failure).
A user can typically have more than one concurrent session, e.g., the user may connect using SQL*Plus
and also connect using Internet Developer Suite tools at the same time. The limit of concurrent session
connections is controlled by the DBA.
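As a quick illustration, current sessions (and the programs that opened them) can be listed with a query such as:

SELECT sid, serial#, username, program
FROM   v$session
WHERE  username IS NOT NULL;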
If a system user attempts to connect and the Oracle Server is not running, the system user receives
the Oracle Not Available error message.
As was noted above, an Oracle database consists of physical files. The database itself has:
Datafiles: these contain the organization's actual data.
Redo log files: these contain a chronological record of changes made to the database, and enable
recovery when failures occur.
Control files: these are used to synchronize all database activities and are covered in more detail in
a later module.
Other key files as noted above include:
Password file: specifies which *special* users are authenticated to startup/shut down an Oracle
Instance.
Archived redo log files: these are copies of the redo log files and are necessary for recovery in an
online, transaction-processing environment in the event of a disk failure.
Oracle Database Memory Management
The SGA is a read/write memory area that stores information shared by all database processes and by
all users of the database (sometimes it is called the Shared Global Area).
This information includes both organizational data and control information used by the Oracle Server.
The SGA is allocated in memory and virtual memory.
The size of the SGA can be established by a DBA by assigning a value to the parameter
SGA_MAX_SIZE in the parameter file; this is an optional parameter.
The SGA is allocated when an Oracle instance (database) is started up based on values specified in the
initialization parameter file (either PFILE or SPFILE).
Java Pool
Streams Pool
Other structures (for example, lock and latch management, statistical data)
Size
SGA_MAX_SIZE: This optional parameter is used to set a limit on the amount of virtual
memory allocated to the SGA; a typical setting might be 1 GB. However, if the value for
SGA_MAX_SIZE in the initialization parameter file or server parameter file is less than the sum of the
memory allocated for all components, either explicitly in the parameter file or by default, at the time the
instance is initialized, then the database ignores the setting for SGA_MAX_SIZE. For optimal
performance, the entire SGA should fit in real memory to eliminate paging to/from disk by the operating
system.
DB_CACHE_SIZE: This optional parameter is used to tune the amount memory allocated to the
Database Buffer Cache in standard database blocks. Block sizes vary among operating systems. The
DBORCL database uses 8 KB blocks. The total blocks in the cache defaults to 48 MB on LINUX/UNIX
and 52 MB on Windows operating systems.
LOG_BUFFER: This optional parameter specifies the number of bytes allocated for the Redo Log
Buffer.
SHARED_POOL_SIZE: This optional parameter specifies the number of bytes of memory allocated
to shared SQL and PL/SQL. The default is 16 MB. If the operating system is based on a 64
bit configuration, then the default size is 64 MB.
LARGE_POOL_SIZE: This is an optional memory object; the size of the Large Pool defaults to
zero. If the init.ora parameter PARALLEL_AUTOMATIC_TUNING is set to TRUE, then the default size
is automatically calculated.
JAVA_POOL_SIZE: This is another optional memory object. The default is 24 MB of memory.
The size of the SGA cannot exceed the parameter SGA_MAX_SIZE minus the combination of the size of
the additional parameters DB_CACHE_SIZE, LOG_BUFFER, SHARED_POOL_SIZE, LARGE_POOL_SIZE,
and JAVA_POOL_SIZE.
Memory is allocated to the SGA as contiguous virtual memory in units termed granules. Granule size
depends on the estimated total size of the SGA, which as was noted above, depends on the
SGA_MAX_SIZE parameter. Granules are sized as follows:
If the SGA is less than 1 GB in total, each granule is 4 MB.
If the SGA is greater than 1 GB in total, each granule is 16 MB.
Granules are assigned to the Database Buffer Cache, Shared Pool, Java Pool, and other memory
structures, and these memory components can dynamically grow and shrink. Using contiguous memory
improves system performance. The actual number of granules assigned to one of these memory
components can be determined by querying the database view named V$BUFFER_POOL.
Granules are allocated when the Oracle server starts a database instance in order to provide memory
addressing space to meet the SGA_MAX_SIZE parameter. The minimum is 3 granules: one each for the
fixed SGA, Database Buffer Cache, and Shared Pool. In practice, you'll find the SGA is allocated much
more memory than this. The SELECT statement shown below shows a current_size of 1,152 granules.
SELECT name, block_size, current_size, prev_size, prev_buffers
FROM v$buffer_pool;
For additional information on the dynamic SGA sizing, enroll in Oracle's Oracle11g Database Performance
Tuning course.
A PGA is:
a nonshared memory region that contains data and control information exclusively for use by an
Oracle process.
A PGA is created by Oracle Database when an Oracle process is started.
One PGA exists for each Server Process and each Background Process. It stores data and control
information for a single Server Process or a single Background Process.
It is allocated when a process is created and the memory is scavenged by the operating system when
the process terminates. This is NOT a shared part of memory one PGA to each process only.
The collection of individual PGAs is the total instance PGA, or instance PGA.
Database initialization parameters set the size of the instance PGA, not individual PGAs.
The Program Global Area is also termed the Process Global Area (PGA) and is a part of memory
allocated that is outside of the Oracle Instance.
The content of the PGA varies, but as shown in the figure above, generally includes the following:
Private SQL Area: Stores information for a parsed SQL statement, such as bind variable values and
runtime memory allocations. A user session issuing SQL statements has a Private SQL Area that may be
associated with a Shared SQL Area if the same SQL statement is being executed by more than one
system user. This often happens in OLTP environments where many users are executing and using the
same application program.
In a Dedicated Server environment, the Private SQL Area is located in the Program Global Area.
In a Shared Server environment, the Private SQL Area is located in the System Global Area.
Session Memory: Memory that holds session variables and other session information.
SQL Work Areas: Memory allocated for sort, hash-join, bitmap merge, and bitmap create types of
operations.
Oracle 9i and later versions enable automatic sizing of the SQL Work Areas by setting the
WORKAREA_SIZE_POLICY = AUTO parameter (this is the default!) and PGA_AGGREGATE_TARGET
= n (where n is some amount of memory established by the DBA). However, the DBA can let the Oracle
DBMS determine the appropriate amount of memory.
A session that loads a PL/SQL package into memory has the package state stored in the
UGA. The package state is the set of values stored in all the package variables at a specific time. The
state changes as program code changes the variables. By default, package variables are unique to and
persist for the life of the session.
The OLAP page pool is also stored in the UGA. This pool manages OLAP data pages, which are equivalent
to data blocks. The page pool is allocated at the start of an OLAP session and released at the end of the
session. An OLAP session opens automatically whenever a user queries a dimensional object such as
a cube.
Note: Oracle OLAP is a multidimensional analytic engine embedded in Oracle Database 11g. Oracle
OLAP cubes deliver sophisticated calculations using simple SQL queries - producing results with speed of
thought response times.
The UGA must be available to a database session for the life of the session. For this reason, the UGA
cannot be stored in the PGA when using a shared server connection because the PGA is specific to a
single process. Therefore, the UGA is stored in the SGA when using shared server connections, enabling
any shared server process access to it. When using a dedicated server connection, the UGA is stored in
the PGA.
Automatic Shared Memory Management (10g)
Prior to Oracle 10G, a DBA had to manually specify SGA Component sizes through the initialization
parameters, such as SHARED_POOL_SIZE, DB_CACHE_SIZE, JAVA_POOL_SIZE, and LARGE_POOL_SIZE
parameters.
Automatic Shared Memory Management enables a DBA to specify the total SGA memory available
through the SGA_TARGET initialization parameter. The Oracle Database automatically distributes this
memory among various subcomponents to ensure most effective memory utilization.
The DBORCL database SGA_TARGET is set in the initDBORCL.ora file: sga_target=1610612736
With automatic SGA memory management, the different SGA components are flexibly sized to adapt to
the SGA available. Setting a single parameter simplifies the administration task: the DBA specifies only
the amount of SGA memory available to an instance and can forget about the sizes of individual
components. No out-of-memory errors are generated unless the system has actually run out of
memory. No manual tuning effort is needed.
The SGA_TARGET initialization parameter reflects the total size of the SGA and includes memory for the
following components:
Fixed SGA and other internal allocations needed by the Oracle Database instance
The log buffer
The shared pool
The Java pool
The buffer cache
The keep and recycle buffer caches (if specified)
Nonstandard block size buffer caches (if specified)
The Streams Pool
If SGA_TARGET is set to a value greater than SGA_MAX_SIZE at startup, then the SGA_MAX_SIZE
value is bumped up to accommodate SGA_TARGET.
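As a sketch (the value is illustrative), the target can be adjusted while the instance is running with:

ALTER SYSTEM SET SGA_TARGET = 1200M;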
When you set a value for SGA_TARGET, Oracle Database 11g automatically sizes the most commonly
configured components, including:
The shared pool (for SQL and PL/SQL execution)
The Java pool (for Java execution state)
The large pool (for large allocations such as RMAN backup buffers)
The buffer cache
There are a few SGA components whose sizes are not automatically adjusted. The DBA must specify the
sizes of these components explicitly, if they are needed by an application. Such components are:
Keep/Recycle buffer caches (controlled by DB_KEEP_CACHE_SIZE and DB_RECYCLE_CACHE_SIZE)
Additional buffer caches for non-standard block sizes (controlled by DB_nK_CACHE_SIZE, n =
{2, 4, 8, 16, 32})
Streams Pool (controlled by the new parameter STREAMS_POOL_SIZE)
The granule size that is currently being used for the SGA for each component can be viewed in the
view V$SGAINFO. The size of each component and the time and type of the last resize operation
performed on each component can be viewed in the view V$SGA_DYNAMIC_COMPONENTS.
SQL> select * from v$sgainfo;
More...
Shared Pool
The Shared Pool is a memory structure that is shared by all system users.
It caches various types of program data. For example, the shared pool stores parsed SQL, PL/SQL
code, system parameters, and data dictionary information.
The shared pool is involved in almost every operation that occurs in the database. For example, if a
user executes a SQL statement, then Oracle Database accesses the shared pool.
It consists of both fixed and variable structures.
The variable component grows and shrinks depending on the demands placed on memory size by
system users and application programs.
Memory can be allocated to the Shared Pool by the parameter SHARED_POOL_SIZE in the parameter
file. The default value of this parameter is 8MB on 32-bit platforms and 64MB on 64-bit platforms.
Increasing the value of this parameter increases the amount of memory reserved for the shared pool.
You can alter the size of the shared pool dynamically with the ALTER SYSTEM SET command. An
example command is shown in the figure below. You must keep in mind that the total memory allocated
to the SGA is set by the SGA_TARGET parameter (and may also be limited by the SGA_MAX_SIZE if it
is set), and since the Shared Pool is part of the SGA, you cannot exceed the maximum size of the
SGA. It is recommended to let Oracle optimize the Shared Pool size.
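A minimal sketch of such a command (the size is illustrative) is:

ALTER SYSTEM SET SHARED_POOL_SIZE = 64M;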
Library Cache
Memory is allocated to the Library Cache whenever an SQL statement is parsed or a program unit is
called. This enables storage of the most recently used SQL and PL/SQL statements. If the Library Cache
is too small, the Library Cache must purge statement definitions in order to have space to load new SQL
and PL/SQL statements. Actual management of this memory structure is through a Least-RecentlyUsed (LRU) algorithm. This means that the SQL and PL/SQL statements that are oldest and least
recently used are purged when more storage space is needed.
The Library Cache is composed of two memory subcomponents:
Shared SQL: This stores/shares the execution plan and parse tree for SQL statements, as well as
PL/SQL statements such as functions, packages, and triggers. If a system user executes an identical
statement, then the statement does not have to be parsed again in order to execute the statement.
Private SQL Area: With a shared server, each session issuing a SQL statement has a private SQL
area in its PGA.
o Each user that submits the same statement has a private SQL area pointing to the same shared SQL
area.
o Many private SQL areas in separate PGAs can be associated with the same shared SQL area.
o This figure depicts two different client processes issuing the same SQL statement; the parsed solution is
already in the Shared SQL Area and is reused.
Data Dictionary Cache
The Data Dictionary Cache is a memory structure that caches data dictionary information that has been
recently used.
This cache is necessary because the data dictionary is accessed so often.
Information accessed includes user account information, datafile names, table descriptions, user
privileges, and other information.
The database server manages the size of the Data Dictionary Cache internally and the size depends on
the size of the Shared Pool in which the Data Dictionary Cache resides. If the size is too small, then the
data dictionary tables that reside on disk must be queried often for information and this will slow down
performance.
Server Result Cache
The Server Result Cache holds result sets and not data blocks. The server result cache contains the SQL query result
cache and PL/SQL function result cache, which share the same infrastructure.
Database Buffer Cache
Cache hit/miss: When an Oracle process requests a block and the block is already found in the database
buffer cache, this is known as a cache hit; otherwise the block must be fetched from the data file into the
buffer cache, which is considered a cache miss.
The database buffer cache also holds two optional pools: the KEEP pool (db_keep_cache_size) and the
RECYCLE pool (db_recycle_cache_size).
Data blocks of the segments allocated to the KEEP buffer cache are retained in memory.
Data blocks of the segments allocated to the RECYCLE pool are wiped out of memory as soon as they are
no longer needed, making room for other RECYCLE segment blocks.
The DEFAULT buffer pool holds segment blocks that are not assigned to either of the above buffer pools.
By default, segments are allocated to the DEFAULT buffer pool.
Oracle also supports non-default db block sizes in database buffer 2K, 4K, 8K, 16K, 32K by parameters
DB_2K_CACHE_SIZE,
DB_4K_CACHE_SIZE,
DB_8K_CACHE_SIZE,
DB_16K_CACHE_SIZE,
DB_32K_CACHE_SIZE
A flush of the database buffer cache occurs when:
- A checkpoint occurs, or is forced with ALTER SYSTEM CHECKPOINT;
- The dirty buffer list is full and no free buffer is available for an incoming block;
- ALTER SYSTEM FLUSH BUFFER_CACHE; is executed.
Moving a segment from the DEFAULT pool to KEEP:
select owner, segment_name, buffer_pool
from dba_segments
where owner = 'SEBS'
and segment_name = 'CDRV_RIC_PART';
alter table SEBS.CDRV_RIC_PART storage (buffer_pool keep);
The Database Buffer Cache is a fairly large memory object that stores the actual data blocks that are
retrieved from datafiles by system queries and other data manipulation language commands.
The purpose is to optimize physical input/output of data.
When Database Smart Flash Cache (flash cache) is enabled, part of the buffer cache can reside in
the flash cache.
This buffer cache extension is stored on a flash disk device, which is a solid state storage device
that uses flash memory.
The database can improve performance by caching buffers in flash memory instead of reading from
magnetic disk.
Database Smart Flash Cache is available only on Solaris and Oracle Enterprise Linux.
A query causes a Server Process to look for data.
The first look is in the Database Buffer Cache to determine if the requested information happens to
already be located in memory thus the information would not need to be retrieved from disk and this
would speed up performance.
If the information is not in the Database Buffer Cache, the Server Process retrieves the information
from disk and stores it to the cache.
Keep in mind that information read from disk is read a block at a time, NOT a row at a time,
because a database block is the smallest addressable storage space on disk.
Database blocks are kept in the Database Buffer Cache according to a Least Recently Used (LRU)
algorithm and are aged out of memory if a buffer cache block is not used in order to provide space for
the insertion of newly needed database blocks.
The block size for a database is set when a database is created and is determined by the init.ora
parameter file parameter named DB_BLOCK_SIZE.
Typical block sizes are 2KB, 4KB, 8KB, 16KB, and 32KB.
The size of blocks in the Database Buffer Cache matches the block size for the database.
The DBORCL database uses an 8KB block size.
This figure shows that the use of non-standard block sizes results in multiple database buffer cache
memory allocations.
Because tablespaces that store oracle tables can use different (non-standard) block sizes, there can be
more than one Database Buffer Cache allocated to match block sizes in the cache with the block sizes in
the non-standard tablespaces. The size of the Database Buffer Caches can be controlled by the
parameters DB_CACHE_SIZE and DB_nK_CACHE_SIZE to dynamically change the memory allocated
to the caches without restarting the Oracle instance.
You can dynamically change the size of the Database Buffer Cache with the ALTER SYSTEM command like
the one shown here:
ALTER SYSTEM SET DB_CACHE_SIZE = 96M;
You can have the Oracle Server gather statistics about the Database Buffer Cache to help you size it to
achieve an optimal workload for the memory allocation. This information is displayed from the
V$DB_CACHE_ADVICE view. In order for statistics to be gathered, you can dynamically alter the
system by using the ALTER SYSTEM SET DB_CACHE_ADVICE = {OFF | ON | READY} command.
However, gathering statistics on system performance always incurs some overhead that will slow down
system performance.
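A minimal sketch of turning the advisor on and reading its estimates (the column choice is illustrative):
SQL> ALTER SYSTEM SET DB_CACHE_ADVICE = ON;
SQL> SELECT size_for_estimate, size_factor, estd_physical_reads
     FROM v$db_cache_advice
     WHERE name = 'DEFAULT' AND advice_status = 'ON';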
Redo Log Buffer
The Redo Log Buffer memory object stores images of all changes made to database blocks.
Database blocks typically store several table rows of organizational data. This means that if a single
column value from one row in a block is changed, the block image is stored. Changes include INSERT,
UPDATE, DELETE, CREATE, ALTER, or DROP.
LGWR writes redo sequentially to disk while DBWn performs scattered writes of data blocks to disk.
Scattered writes tend to be much slower than sequential writes.
Because LGWR enables users to avoid waiting for DBWn to complete its slow writes, the database delivers
better performance.
The Redo Log Buffer is a circular buffer that is reused over and over. As the buffer fills up, copies of the
images are stored to the Redo Log Files that are covered in more detail in a later module.
Large Pool
The Large Pool is an optional memory structure that primarily relieves the memory burden placed on
the Shared Pool. The Large Pool is used for the following tasks if it is allocated:
Allocating space for session memory requirements from the User Global Area where a Shared Server is
in use.
Transactions that interact with more than one database, e.g., a distributed database scenario.
Backup and restore operations by the Recovery Manager (RMAN) process.
RMAN uses this only if the BACKUP_DISK_IO = n and BACKUP_TAPE_IO_SLAVES =
TRUE parameters are set.
If the Large Pool is too small, memory allocation for backup will fail and memory will be allocated from
the Shared Pool.
Parallel execution message buffers for parallel server operations. The
PARALLEL_AUTOMATIC_TUNING = TRUE parameter must be set.
The Large Pool size is set with the LARGE_POOL_SIZE parameter. The Large Pool does not use an LRU
list to manage memory.
Java Pool
The Java Pool is an optional memory object, but is required if the database has Oracle Java installed and
in use for Oracle JVM (Java Virtual Machine).
The size is set with the JAVA_POOL_SIZE parameter that defaults to 24MB.
The Java Pool is used for memory allocation to parse Java commands and to store data associated with
Java commands.
Storing Java code and data in the Java Pool is analogous to SQL and PL/SQL code cached in the Shared
Pool.
Streams Pool
This pool stores data and control structures to support the Oracle Streams feature of Oracle Enterprise
Edition.
Oracle Streams manages sharing of data and events in a distributed environment.
It is sized with the parameter STREAMS_POOL_SIZE.
If STREAMS_POOL_SIZE is not set or is zero, the size of the pool grows dynamically.
Processes
You need to understand three different types of Processes:
User Process: Starts when a database user requests to connect to an Oracle Server.
Server Process: Establishes the connection to an Oracle Instance when a User Process requests a
connection; it makes the connection on behalf of the User Process.
Background Processes: These start when an Oracle Instance is started up.
Client Process
In order to use Oracle, you must connect to the database. This must occur whether you're using
SQL*Plus, an Oracle tool such as Designer or Forms, or an application program. The client process is also
termed the user process in some Oracle documentation.
This generates a User Process (a memory object) that makes programmatic calls through your user
interface (SQL*Plus, Integrated Developer Suite, or application program), creates a session, and
causes the generation of a Server Process that is either dedicated or shared.
Server Process
A Server Process is the go-between for a Client Process and the Oracle Instance.
In a Dedicated Server environment there is a single Server Process to serve each Client Process.
In a Shared Server environment a Server Process can serve several User Processes, although with some
performance reduction.
Allocation of server process in a dedicated environment versus a shared environment is covered in
further detail in the Oracle11g Database Performance Tuning course offered by Oracle Education.
The first components of the Oracle instance that we will examine are the Oracle background
processes. These processes run in the background of the operating system and are not interacted with
directly. Each process is highly specialized and has a specific function in the overall operation of the
Oracle kernel. While these processes accomplish the same functions regardless of the host operating
system, their implementation is significantly different. On Unix-based systems, owing to Unix's
multiprocess architecture, each Oracle process runs as a separate operating system process. Thus, we
can actually see the processes themselves from within the operating system.
For instance, we can use the ps command on Linux to see these processes, as shown in the following
screenshot. We've highlighted a few of them that we will examine in depth. Note that our background
processes are named in the format ora_processtype_SID. Since the SID for our database is ORCL, that
name forms a part of the full process name:
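The screenshot is not reproduced here. As a rough equivalent from inside the database, the background
processes that are currently running can be listed with a query such as this (the PADDR filter is a
common idiom for excluding processes that are defined but not started):
SQL> SELECT name, description
     FROM v$bgprocess
     WHERE paddr <> '00'
     ORDER BY name;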
PMON
PMON primarily cleans up client-side (user process) failures. If a background process fails, the instance
most likely cannot continue and will be shut down.
The core process of the Oracle architecture is the PMON process, the Process Monitor. The
PMON is tasked with monitoring and regulating all other Oracle-related processes. This includes
not only background processes but server processes as well. Most databases run in a dedicated
server mode. In this mode, any user that connects to the database is granted a server process
with which to do work. In Linux systems, this process can actually be viewed at the server level
with the ps -ef command. When the user connects over the network, the process will be labeled
with LOCAL=NO in the process description. Privileged users such as database administrators can
also make an internal connection to the database, provided that we are logging in from the
server that hosts the database. When an internal connection is made, the process is labeled
with LOCAL=YES. We see an example of each in the following screenshot of the ps -ef
command on a Linux machine hosting Oracle:
Under ordinary circumstances, when a user properly disconnects his or her session from the database by
exiting the tool used to connect to it, the server process given to that user terminates cleanly. However,
what if instead of disconnecting the connection properly, the machine that the user was connected to was
rebooted? In situations like these, the server process on the database is left running since it hasn't
received the proper instructions to terminate. When this occurs, it is the job of PMON to monitor sessions
and clean up orphaned processes. The PMON normally "wakes up" every 3 seconds to check these
processes and clean them up. In addition to this primary function, PMON is also responsible for
registering databases with network listeners.
SMON
If an Oracle Instance fails, all information in memory not written to disk is lost. SMON is responsible for
recovering the instance when the database is started up again. It does the following:
Rolls forward to recover data that was recorded in a Redo Log File, but that had not yet been recorded
to a datafile by DBWn. SMON reads the Redo Log Files and applies the changes to the data blocks. This
recovers all transactions that were committed because these were written to the Redo Log Files prior to
system failure.
Opens the database to allow system users to logon.
Rolls back uncommitted transactions.
SMON also does limited space management. It combines (coalesces) adjacent areas of free space in the
database's datafiles for tablespaces that are dictionary managed.
It also deallocates temporary segments to create free space in the datafiles.
The SMON, or System Monitor process, has several very important duties. Chiefly SMON is responsible
for instance recovery. Under normal circumstances, databases are shut down using the proper
commands to do so. When this occurs, all of the various components, mainly the datafiles, are properly
recorded and synchronized so that the database is left in a consistent state. However, if the database
crashes for some reason (the database's host machine loses power, for instance), this synchronization
cannot occur. When the database is restarted, it will begin from an inconsistent state. Every time the
instance is started, SMON will check for these marks of synchronization. In a situation where the
database is in an inconsistent state, SMON will perform instance recovery to resynchronize these
inconsistencies. Once this is complete, the instance and database can open correctly. Unlike database
recovery, where some data loss has occurred, instance recovery occurs without intervention from the
DBA. It is an automatic process that is handled by SMON.
The SMON process is also responsible for various cleanup operations within the datafiles themselves.
tempfiles are the files that hold the temporary data that is written when an overflow from certain
memory caches occurs. This temporary data is written in the form of temporary segments within the
tempfile. When this data is no longer needed, SMON is tasked with removing them. The SMON process
can also coalesce data within datafiles, removing gaps, which allows the data to be stored more
efficiently.
DBWn
The Database Writer writes modified blocks from the database buffer cache to the datafiles.
The purpose of DBWn is to improve system performance by caching writes of database blocks from
the Database Buffer Cache back to datafiles.
Blocks that have been modified and that need to be written back to disk are termed "dirty blocks."
The DBWn also ensures that there are enough free buffers in the Database Buffer Cache to service
Server Processes that may be reading data from datafiles into the Database Buffer Cache.
Performance improves because by delaying writing changed database blocks back to disk, a Server
Process may find the data that is needed to meet a User Process request already residing in memory!
DBWn writes to datafiles when one of the events illustrated in the figure below occurs.
For all of the overhead duties of processes such as PMON and SMON, we can probably intuit that there
must be a process that actually reads and writes data from the datafiles. Until later versions, that
process was named DBWR, the Database Writer process. The DBWR is responsible for reading and
writing the data that services user operations, but it doesn't do it in the way that we might expect.
In Oracle, almost no operation is executed directly on the disk. The Oracle processing paradigm is to
read data into memory, complete a given operation while the data is still in memory, and write it back to
the disk. We will cover the reason for this in greater depth when we discuss memory caches, but for now
let's simply say it is for performance reasons. Thus, the DBWR process will read a unit of data from the
disk, called a database block, and place it into a specialized memory cache. If data is changed using
an UPDATE statement, for instance, it is changed in memory. After some time, it is written back to the
disk in its new state. If we think about it, it should be obvious that the amount of reading and writing in
a database would constitute a great deal of work for one single process. It is certainly possible that a
single DBWR process would become overloaded and begin to affect performance. That's why, in more
recent versions of Oracle, we have the ability to instantiate multiple database writer processes. So we
can refer to DBWR as DBWn, where "n" is a given instantiation of a database writer process. If our
instance is configured to spawn three database writers, they would be dbw0, dbw1, and dbw2. The
number of the DBWn processes that are spawned is governed by one of our initialization parameters,
namely, db_writer_processes.
Let's take a closer look at how the value for db_writer_processes affects the database writer processes
that we can see in the Linux operating system. We won't go into great depth with the commands used here.
From the Linux command line, we use the ps -ef command along with the grep command, which searches
through the processes in the system with the string dbw in their names. This restricts our output to only
those processes that contain dbw, which will be the database writer processes. As we can see in the
preceding screenshot, there is only one database writer process named ora_dbw0_orcl.
As mentioned, the number of the database writer processes is determined by an initialization parameter.
The name of that parameter is db_writer_processes. We can determine the value of this parameter by
logging into the database using SQL*Plus (the command sqlplus / as sysdba) and showing its value using
the show parameter command, as in the following screenshot:
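The screenshot is not reproduced here; the command and a typical result look roughly like the following
(the value shown is only an example):
SQL> show parameter db_writer_processes

NAME                  TYPE     VALUE
--------------------- -------- -----
db_writer_processes   integer  1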
Since we've already determined that we only have a single dbw0 process, it should come as no surprise
that the value for our parameter is 1. However, if we wish to add more database writers, it is simple to
do so. From the SQL*Plus command line, we issue the following command, followed by the shutdown
immediate and startup commands to shut down and start up the database:
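The command itself is not shown in the text; a sketch of what it would look like follows. SCOPE=SPFILE
is used because db_writer_processes is a static parameter and cannot be changed in the running instance:
SQL> ALTER SYSTEM SET db_writer_processes = 4 SCOPE=SPFILE;
SQL> SHUTDOWN IMMEDIATE
SQL> STARTUP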
The alter system command instructs Oracle to set the db_writer_processes parameter to 4. The
change is recognized when the database is restarted. From here, we type exit to leave SQL*Plus and
return to the Linux command line. We then issue our ps command again and view the results:
As we can see in the preceding screenshot, there are four database writer processes,
called ora_dbw0_orcl, ora_dbw1_orcl, ora_dbw2_orcl, and ora_dbw3_orcl, that align with our
value for db_writer_processes. We now have four database writer processes with which to read and write
data.
LGWR
The Log Writer (LGWR) writes contents from the Redo Log Buffer to the Redo Log File that is in use.
These are sequential writes since the Redo Log Files record database modifications based on the actual
time that the modification takes place.
LGWR actually writes before the DBWn writes and only confirms that a COMMIT operation has
succeeded when the Redo Log Buffer contents are successfully written to disk.
LGWR can also call the DBWn to write contents of the Database Buffer Cache to disk.
The LGWR writes according to the events illustrated in the figure shown below.
CKPT
The Checkpoint (CKPT) process writes information to update the database control files and headers of
datafiles.
A checkpoint identifies a point in time with regard to the Redo Log Files where instance recovery is to
begin should it be necessary.
It can tell DBWn to write blocks to disk.
A checkpoint is taken, at a minimum, once every three seconds.
Think of a checkpoint record as a starting point for recovery. DBWn will have completed writing all
buffers from the Database Buffer Cache to disk prior to the checkpoint, thus those records will not
require recovery. This does the following:
Ensures modified data blocks in memory are regularly written to disk CKPT can call the DBWn
process in order to ensure this and does so when writing a checkpoint record.
Reduces Instance Recovery time by minimizing the amount of work needed for recovery since only
Redo Log File entries processed since the last checkpoint require recovery.
Causes all committed data to be written to datafiles during database shutdown.
We mentioned in the preceding section that the purpose of the DBWn process is to move data in and out
of memory. Once a block of data is moved into memory, it is referred to as a buffer. When a buffer in
memory is changed using an UPDATE statement, for instance, it is called a dirty buffer. Dirty buffers
can remain in memory for a time and are not automatically flushed to disk. The event that signals the
writing of dirty buffers to disk is known as a checkpoint. The checkpoint ensures that memory is kept
available for other new buffers and establishes a point for recovery. In earlier versions of Oracle, the type
of checkpoint that occurred was known as a full checkpoint. This checkpoint will flush all dirty buffers
back to the datafiles on the disk. While full checkpoints represent a complete flush of the dirty buffers,
they are expensive in terms of performance. Since Version 8i, the Oracle kernel makes use of an
incremental checkpoint that intelligently flushes only part of the available dirty buffers when needed. Full
checkpoints only occur now during a shutdown of the database or on demand, using a command.
The process in the instance that orchestrates checkpointing is the CKPT process. The CKPT process uses
incremental checkpoints at regular intervals to ensure that dirty buffers are written out and any changes
recorded in the redo logs are kept consistent for recovery purposes. Unlike the DBWn process, there is
only one CKPT process. Although the incremental checkpoint method is used by CKPT, we can also force
a full checkpoint using the command shown in the following screenshot:
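The screenshot is not reproduced here; the command that forces a full checkpoint is:
SQL> ALTER SYSTEM CHECKPOINT;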
PURPOSE OF CHECKPOINTS
Database blocks are temporarily stored in Database buffer cache. As blocks are read, they are stored
in DB buffer cache so that if any user accesses them later, they are available in memory and need not be
read from the disk. When we update any row, the buffer in DB buffer cache corresponding to the block
containing that row is updated in memory. Record of the change made is kept in redo log buffer. On
commit, the changes we made are written to the disk thereby making them permanent. But where are
those changes written? To the datafiles containing data blocks? No!!! The changes are recorded in online
redo log files by flushing the contents of redo log buffer to them. This is called write ahead logging. If
the instance crashed right now, the DB buffer cache will be wiped out but on restarting the database,
Oracle will apply the changes recorded in redo log files to the datafiles.
Why doesn't Oracle write the changes to datafiles right away when we commit the transaction? The
reason is simple. If it chose to write directly to the datafiles, it would have to physically locate the data
block in the datafile first and then update it, which means that after committing, the user would have to
wait until DBWR searches for the block and writes it before issuing the next command. This would bring
down performance drastically. That is where the role of redo logs comes in. The writes to the redo logs
are sequential writes: LGWR simply dumps the information in the redo log buffer to the log files
sequentially and synchronously, so that the user does not have to wait for long. Moreover, DBWR always
writes in units of Oracle blocks, whereas LGWR writes only the changes made. Hence, write-ahead
logging also improves performance by reducing the amount of data written synchronously. When will the
changes be applied to the data blocks in the datafiles? The data blocks in the datafiles will be updated by
DBWR asynchronously in response to certain triggers. These triggers are called checkpoints.
Checkpoint is a synchronization event at a specific point in time which causes some / all dirty blocks to
be written to disk thereby guaranteeing that blocks dirtied prior to that point in time get written.
Whenever dirty blocks are written to datafiles, it allows Oracle:
- to reuse a redo log: A redo log can't be reused until DBWR writes all the dirty blocks protected by
that logfile to disk. If we attempt to reuse it before DBWR has finished its checkpoint, we get the
following message in the alert log: "Checkpoint not complete".
- to reduce instance recovery time: As the memory available to a database instance increases, it is
possible to have database buffer caches as large as several million buffers. It requires that the database
checkpoint advance frequently to limit recovery time, since infrequent checkpoints and large buffer
caches can exacerbate crash recovery times significantly.
- to free buffers for reads: Dirtied blocks can't be used to read new data into them until they are
written to disk. Thus DBWR writes dirty blocks from the buffer cache, to make room in the cache.
Various types of checkpoints in Oracle:
- Full checkpoint
- Thread checkpoint
- File checkpoint
- Parallel Query checkpoint
- Object checkpoint
- Log switch checkpoint
- Incremental checkpoint
RECO
The Recoverer Process (RECO) is used to resolve failures of distributed transactions in a distributed
database.
Consider a database that is distributed on two servers, one in St. Louis and one in Chicago.
Further, the database may be distributed on servers of two different operating systems, e.g. LINUX
and Windows.
The RECO process of a node automatically connects to other databases involved in an in-doubt
distributed transaction.
When RECO reestablishes a connection between the databases, it automatically resolves all in-doubt
transactions, removing from each database's pending transaction table any rows that correspond to the
resolved transactions.
Optional Background Processes
Optional Background Process Definition:
ARCn: Archiver One or more archiver processes copy the online redo log files to archival storage
when they are full or a log switch occurs.
CJQ0: Coordinator Job Queue This is the coordinator of job queue processes for an instance. It
monitors the JOB$ table (table of jobs in the job queue) and starts job queue processes (Jnnn) as
needed to execute jobs. The Jnnn processes execute job requests created by the DBMS_JOB package.
Dnnn: Dispatcher number "nnn", for example, D000 would be the first dispatcher process
Dispatchers are optional background processes, present only when the shared server configuration is
used. Shared server is discussed in your readings on the topic "Configuring Oracle for the Shared
Server".
FBDA: Flashback Data Archiver Process This archives historical rows of tracked tables into Flashback
Data Archives. When a transaction containing DML on a tracked table commits, this process stores the
pre-image of the rows into the Flashback Data Archive. It also keeps metadata on the current
rows. FBDA automatically manages the flashback data archive for space, organization, and retention.
Of these, you will most often use ARCn (archiver) when you automatically archive redo log file
information (covered in a later module).
ARCn
While the Archiver (ARCn) is an optional background process, we cover it in more detail because it is
almost always used for production systems storing mission critical information.
The ARCn process must be used to recover from loss of a physical disk drive for systems that are
"busy" with lots of transactions being completed.
It performs the tasks listed below.
When a Redo Log File fills up, Oracle switches to the next Redo Log File.
The DBA creates several of these and the details of creating them are covered in a later module.
If all Redo Log Files fill up, then Oracle switches back to the first one and uses them in a round-robin
fashion by overwriting ones that have already been used.
Overwritten Redo Log Files have information that, once overwritten, is lost forever.
ARCHIVELOG Mode:
If ARCn is in what is termed ARCHIVELOG mode, then as the Redo Log Files fill up, they are
individually written to Archived Redo Log Files.
LGWR does not overwrite a Redo Log File until archiving has completed.
Committed data is not lost forever and can be recovered in the event of a disk failure.
Only the contents of the SGA will be lost if an Instance fails.
In NOARCHIVELOG Mode:
The Redo Log Files are overwritten and not archived.
Recovery can only be made to the last full backup of the database files.
All committed transactions after the last full backup are lost, and you can see that this could cost the
firm a lot of $$$.
Innn: I/O slave processes -- simulate asynchronous I/O for systems and devices that do not support it.
In asynchronous I/O, there is no timing requirement for transmission, enabling other processes to start
before the transmission has finished.
For example, assume that an application writes 1000 blocks to a disk on an operating system that
does not support asynchronous I/O.
Each write occurs sequentially and waits for a confirmation that the write was successful.
With asynchronous I/O, the application can write the blocks in bulk and perform other work while
waiting for a response from the operating system that all blocks were written.
Parallel Query Slaves -- In parallel execution or parallel processing, multiple processes work together
simultaneously to run a single SQL statement.
By dividing the work among multiple processes, Oracle Database can run the statement more quickly.
For example, four processes handle four different quarters in a year instead of one process handling all
four quarters by itself.
Parallel execution reduces response time for data-intensive operations on large databases such as data
warehouses. Symmetric multiprocessing (SMP) and clustered systems gain the largest performance
benefits from parallel execution because statement processing can be split up among multiple CPUs.
Parallel execution can also benefit certain types of OLTP and hybrid systems.
Logical Structure
It is helpful to understand how an Oracle database is organized in terms of a logical structure that is
used to organize physical objects.
Tablespace: An Oracle database must always consist of at least two tablespaces (SYSTEM and SYSAUX),
although a typical Oracle database will have multiple tablespaces.
A tablespace is a logical storage facility (a logical container) for storing objects such as tables, indexes,
sequences, clusters, and other database objects.
Each tablespace has at least one physical datafile that actually stores the tablespace at the operating
system level. A large tablespace may have more than one datafile allocated for storing objects assigned
to that tablespace.
A tablespace belongs to only one database.
Tablespaces can be brought online and taken offline for purposes of backup and management, except
for the SYSTEM tablespace that must always be online.
Tablespaces can be in either read-only or read-write status.
Datafile: Tablespaces are stored in datafiles which are physical disk objects.
A datafile can only store objects for a single tablespace, but a tablespace may have more than one
datafile. This happens when a disk drive device fills up and a tablespace needs to be expanded; it is then
expanded onto a new disk drive.
The DBA can change the size of a datafile to make it smaller or larger. The file can also grow in size
dynamically as the tablespace grows.
Segment: When logical storage objects are created within a tablespace, for example, an employee
table, a segment is allocated to the object.
Obviously a tablespace typically has many segments.
A segment cannot span tablespaces but can span datafiles that belong to a single tablespace.
Extent: Each object has one segment which is a physical collection of extents.
Extents are simply collections of contiguous disk storage blocks. A logical storage object such as
a table or index always consists of at least one extent; ideally the initial extent allocated to an object
will be large enough to store all data that is initially loaded.
As a table or index grows, additional extents are added to the segment.
A DBA can add extents to segments in order to tune performance of the system.
An extent cannot span a datafile.
Block: The Oracle Server manages data at the smallest unit in what is termed a block or data
block. Data are actually stored in blocks.
A physical block is the smallest addressable location on a disk drive for read/write operations.
An Oracle data block consists of one or more physical blocks (operating system blocks) so the data block,
if larger than an operating system block, should be an even multiple of the operating system block size,
e.g., if the Linux operating system block size is 2K or 4K, then the Oracle data block should be 2K, 4K,
8K, 16K, etc in size. This optimizes I/O.
The data block size is set at the time the database is created and cannot be changed. It is set with
the DB_BLOCK_SIZE parameter. The maximum data block size depends on the operating system.
Thus, the Oracle database architecture includes both logical and physical structures as follows:
Physical: Control files; Redo Log Files; Datafiles; Operating System Blocks.
Logical: Tablespaces; Segments; Extents; Data Blocks.
Processing a query:
Parse:
Search for identical statement in the Shared SQL Area.
Check syntax, object names, and privileges.
Lock objects used during parse.
Create and store execution plan.
Bind: Obtains values for variables.
The execution of DDL (Data Definition Language) statements differs from the execution of DML (Data
Manipulation Language) statements and queries, because the success of a DDL statement requires write
access to the data dictionary.
For these statements, parsing actually includes parsing, data dictionary lookup, and
execution. Transaction management, session management, and system management SQL statements
are processed using the parse and execute stages. To re-execute them, simply perform another execute.
1. The Oracle server process scans the Library Cache to see if a cached version of this statement already
exists.
Parsing (resolution) means that, before executing the statement, the server must work out its meaning
and its execution plan:
What is emp? Is it a table, a synonym, or a view? Does the emp object exist? Does the sal column of emp
exist? Does this user have permission to view or modify it?
To understand the true meaning of the statement, the server must also determine how to execute it in
the best way. Is there an index on the id column? If so, is using the index faster, or is a full table scan
faster?
To get this information, the Oracle server must query the data dictionary. When querying the data
dictionary, the server process first scans the Data Dictionary Cache to see whether the dictionary
information is already there; if it is, it is reused directly, otherwise it is read from disk into the Data
Dictionary Cache and cached there for future use.
2. After SQL parsing, execution of the statement begins. Assume the value of sal before the modification
is 80.
First, the server process checks whether the data block of emp containing the row with id = 1 is already
in the database buffer cache; if not, the data block is loaded from disk into the database buffer cache (a
data block may contain multiple rows, and the affected rows may also be spread across several data
blocks).
An undo segment is assigned to serve the transaction, and the blocks of that undo segment are also
loaded into the database buffer cache. Note: a transaction can only be served by one undo segment; the
undo data generated by one transaction cannot be spread across multiple undo segments, but one undo
segment can serve multiple transactions.
The before-image of the data in the data block is saved in the undo segment.
Both the data block modification and the undo operation are recorded in the redo log buffer.
On commit, the redo generated in the redo log buffer is marked with a commit marker containing the
current SCN and a timestamp. Note: writing the commit record into the redo log buffer is not enough for
the user to receive the commit-success feedback; the LGWR process must first write the contents of the
redo log buffer to the redo log file, and only then can the user receive confirmation that the commit
succeeded.
6. After the user receives the commit-success feedback, the transaction has executed successfully.
At this point, the value of sal in the emp data block on disk for id = 1 could still be 80, or 50, or 1; it is
not known, and it does not matter. When other transactions operate on this data block, they read the
committed value from the database buffer cache, that is, sal = 1. If the instance crashes before the
change is synchronized to disk, then on the next startup the SMON background process will apply the
records extracted from the redo log files to recover the database.
Environment Variables
Operating System Environment Variables
Oracle makes use of environment variables on the server and client computers in both LINUX and
Windows operating systems in order to:
The catalog.sql script contains definitions of the views and public synonyms for the dynamic performance
views.
You must run catalog.sql to create these views and synonyms. After installation, only user SYS or
anyone with SYSDBA role has access to the dynamic performance tables.
V$ Views
The actual dynamic performance views are identified by the prefix V_$. Public synonyms for these views
have the prefix V$. Database administrators and other users should access only the V$ objects, not
the V_$ objects.
GV$ Views
For almost every V$ view described in this chapter, Oracle has a corresponding GV$ (global V$) view. In
Real Application Clusters, querying a GV$ view retrieves the V$ view information from all qualified
instances. In addition to the V$ information, each GV$ view contains an extra column named
INST_ID of datatype NUMBER. The INST_ID column displays the instance number from which the
associated V$ view information was obtained. The INST_ID column can be used as a filter to
retrieve V$ information from a subset of available instances. For example, the following query retrieves
the information from the V$LOCK view on instances 2 and 5:
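The query itself is missing from the text; based on the description it would be along these lines:
SELECT * FROM GV$LOCK WHERE INST_ID IN (2, 5);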
The views, also sometimes called V$ views because their names begin with V$, contain information such
as the following:
SQL execution
DATA DICTIONARY VIEWS: An important part of an Oracle database is its data dictionary, which is a
read-only set of tables and views that provides information about the database. A data dictionary
contains information such as the following:
The definitions of every schema object in the database, including default values for columns
and integrity constraint information
The amount of space allocated for and currently used by the schema objects
The names of Oracle Database users, privileges and roles granted to users, and auditing
information related to users
The data dictionary is a central part of data management for every Oracle database. For
example, the database performs the following actions:
Accesses the data dictionary to find information about users, schema objects, and storage
structures
Modifies the data dictionary every time that a DDL statement is issued
Because Oracle Database
stores data dictionary data in tables, just like other data, users can query the data with SQL. For
example, users can run SELECT statements to determine their privileges, which tables exist in
their schema, which columns are in these tables, whether indexes are built on these columns,
and so on.
Base tables
These underlying tables store information about the database. Only Oracle Database should write
to and read these tables. Users rarely access the base tables directly because they are
normalized and most data is stored in a cryptic format.
Views
These views decode the base table data into useful information, such as user or table names,
using joins and WHERE clauses to simplify the information. These views contain the names and
description of all objects in the data dictionary. Some views are accessible to all database users,
whereas others are intended for administrators only.
Views with the Prefix DBA_
Views with the prefix DBA_ show all relevant information in the entire database. DBA_ views are
intended only for administrators.
For example, the following query shows information about all objects in the database:
SELECT OWNER, OBJECT_NAME, OBJECT_TYPE FROM DBA_OBJECTS
ORDER BY OWNER, OBJECT_NAME;
Views with the Prefix ALL_
Views with the prefix ALL_ refer to the user's overall perspective of the database. They return
information about schema objects to which the user has access, for example through the ALL_OBJECTS
view.
Because the ALL_ views obey the current set of enabled roles, query results depend on which roles are
enabled, as shown in the following example:
SQL> SET ROLE ALL;
Role set.
SQL> SELECT COUNT(*) FROM ALL_OBJECTS;
COUNT(*)
----------
68295
SQL> SET ROLE NONE;
Role set.
SQL> SELECT COUNT(*) FROM ALL_OBJECTS;
COUNT(*)
----------
53771
Application developers should be cognizant of the effect of roles when using ALL_ views in a stored
procedure, where roles are not enabled by default.
Views with the Prefix USER_
The views most likely to be of interest to typical database users are those with the prefix USER_. These
views:
Refer to the user's private environment in the database, including metadata about schema
objects created by the user, grants made by the user, and so on
Display only rows pertinent to the user, returning a subset of the information in the ALL_ views
Have columns identical to the other views, except that the column OWNER is implied
For example, the following query returns all the objects contained in your schema:
SELECT OBJECT_NAME, OBJECT_TYPE FROM USER_OBJECTS
ORDER BY OBJECT_NAME;
During database operation, Oracle Database reads the data dictionary to ascertain that schema objects
exist and that users have proper access to them. Oracle Database also updates the data dictionary
continuously to reflect changes in database structures, auditing, grants, and data.
For example, if user hr creates a table named interns, then new rows are added to the data dictionary
that reflect the new table, columns, segment, extents, and the privileges that hr has on the table. This
new information is visible the next time the dictionary views are queried.
Public Synonyms for Data Dictionary Views
Oracle Database creates public synonyms for many data dictionary views so users can access them
conveniently. The security administrator can also create additional public synonyms for schema objects
that are used system wide. Users should avoid naming their own schema objects with the same names
as those used for public synonyms.
Cache the Data Dictionary for Fast Access
Much of the data dictionary information is in the data dictionary cache because the database
constantly requires the information to validate user access and verify the state of schema
objects. Parsing information is typically kept in the caches. The COMMENTS columns describing the
tables and their columns are not cached in the dictionary cache, but may be cached in the database
buffer cache.
Other Programs and the Data Dictionary
Other Oracle Database products can reference existing views and create additional data dictionary tables
or views of their own. Application developers who write programs that refer to the data dictionary should
refer to the public synonyms rather than the underlying tables. Synonyms are less likely to change
between releases.
Static parameter file: This has always existed and is known as the PFILE and is commonly
referred to as the init.ora file. The actual naming convention used is to name the
file initSID.ora where SID is the system identifier (database name) assigned to the
database.
Server (persistent) parameter file: This is the SPFILE (also termed the server
parameter file) and is commonly referred to as the spfileSID.ora.
There are two types of parameters:
Implicit parameters. These have no entries in the parameter file and Oracle uses default
values.
Initialization parameter files include the following:
Instance parameters.
The file is only read during database startup so any modifications take effect the next time the database
is started up. This is an obvious limitation since shutting down and starting up an Oracle database is not
desirable in a 24/7 operating environment.
The naming convention followed is to name the file initSID.ora where SID is the system identifier. For
example, the PFILE for the departmental SOBORA2 server for the database named DBORCL is
named initDBORCL.ora.
When Oracle software is installed, a sample init.ora file is created. You can create one for your
database by simply copying the init.ora sample file and renaming it. The sample command shown here
creates an init.ora file for a database named USER350. Here the file was copied to the
user's HOME directory and named initUSER350.ora.
$ cp $ORACLE_HOME/dbs/init.ora $HOME/initUSER350.ora
You can also create an init.ora file by typing commands into a plain text file using an editor such as
Notepad.
NOTE: For a Windows operating system, the default location for the init.ora file
is C:\Oracle_Home\database.
This is a listing of the initDBORCL.ora file for the database named DBORCL. We will cover these
parameters in our discussion below.
The example below shows the format for specifying values: keyword = value.
Each parameter has a default value that is often operating system dependent.
Generally parameters can be specified in any order.
Comment lines can be entered and marked with the # symbol at the beginning of the
comment.
Enclose parameters in quotation marks to include literals.
Usually operating systems such as LINUX are case sensitive so remember this in specifying file
names.
There are about 255 basic initialization parameters; the actual number changes with each version of
Oracle. Most are optional, and Oracle will use default settings for them if you do not assign values to
them. Here the most commonly specified parameters are sorted according to their category.
DB_BLOCK_SIZE (mandatory) specifies the size of the default Oracle block in the
database. At database creation time, the SYSTEM, TEMP, and SYSAUX tablespaces are created
with this block size. An 8KB block size is about the smallest you should use for any database
although 2KB and 4KB block sizes are legal values.
The flash recovery area contains multiplexed copies of current control files and online
redo logs, as well as archived redo logs, flashback logs, and RMAN backups.
Specifying this parameter without also specifying the DB_RECOVERY_FILE_DEST_SIZE initialization
parameter is not allowed.
CONTROL_FILES (mandatory) tells Oracle the location of the control files to be read
during database startup and operation. The control files are typically multiplexed (multiple
copies).
#Control File Configuration
CONTROL_FILES =
("/u01/student/dbockstd/oradata/USER350control01.ctl",
"/u02/student/dbockstd/oradata/USER350control02.ctl")
So, which parameters should you include in your PFILE when you create a database? I suggest a simple
init.ora file initially - you can add to it as time goes on in this course.
SPFILE
The SPFILE is a binary file. You must NOT manually modify the file and it must always reside on the
server. After the file is created, it is maintained by the Oracle server.
The SPFILE enables you to make changes that are termed persistent across startup and shutdown
operations. You can make dynamic changes to Oracle while the database is running and this is the main
advantage of using this file. The default location is in the $ORACLE_HOME/dbs directory with a default
name of spfileSID.ora. For example, a database named USER350 would have an SPFILE with a name
of spfileUSER350.ora.
You can create an SPFILE from an existing PFILE by typing a command like the one shown below while
using SQL*Plus. Note that the filenames are enclosed in single-quote marks.
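A sketch of such a command, using an illustrative path for the PFILE (when no target filename is given,
Oracle writes the SPFILE to its default location):
SQL> CREATE SPFILE FROM PFILE='/u01/student/dbockstd/initUSER350.ora';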
Recreating a PFILE
You can also create a PFILE from an SPFILE by exporting the contents through use of
the CREATE command. You do not have to specify file names as Oracle will use the spfile associated
with the ORACLE_SID for the database to which you are connected.
CREATE PFILE FROM SPFILE;
You would then edit the PFILE and use the CREATE command to create a new SPFILE from the
edited PFILE.
The STARTUP Command
The STARTUP command is used to startup an Oracle database. You have learned about two different
initialization parameter files. There is a precedence to which initialization parameter file is read when an
Oracle database starts up as only one of them is used.
Oracle uses the priorities listed below to decide which parameter file to use during startup.
STARTUP
First Priority: the spfileSID.ora on the server side is used to start up the instance.
Second Priority: If the spfileSID.ora is not found, the default SPFILE on the server side is
used to start the instance.
Third Priority: If the default SPFILE is not found, the initSID.ora on the server side will be
used to start the instance.
A specified PFILE can override the use of the default SPFILE to start an instance. Examples:
STARTUP PFILE=$ORACLE_HOME/dbs/initUSER350.ora
Or
STARTUP PFILE=$HOME/initUSER350.ora
Here is an example coding script within SQL*Plus that demonstrates how to display current parameter
values and to alter these values.
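A minimal sketch of such a script, using shared_pool_size purely as an illustrative parameter:
SQL> SHOW PARAMETER shared_pool_size
SQL> ALTER SYSTEM SET shared_pool_size = 256M SCOPE=BOTH;
SQL> SHOW PARAMETER shared_pool_size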
You can also use the ALTER SYSTEM RESET command to delete a parameter setting or revert to a default
value for a parameter.
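For example, assuming the parameter had been set in the SPFILE, a reset could look like this:
SQL> ALTER SYSTEM RESET shared_pool_size SCOPE=SPFILE SID='*';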
Starting Up a Database
Instance Stages
Databases can be started up in various states or stages. The diagram shown below illustrates the stages
through which a database passes during startup and shutdown.
NOMOUNT: This stage is only used when first creating a database or when it is necessary to recreate a
database's control files. Startup in this stage includes the following tasks.
Reading the initialization parameter file, allocating the SGA, and starting the background processes.
Opening a log file named alert_SID.log and any trace files specified in the initialization
parameter file.
Example startup commands for creating the Oracle database and for the database belonging
to USER350 are shown here.
SQL> STARTUP NOMOUNT PFILE=$ORACLE_HOME/dbs/initDBORCL.ora
SQL> STARTUP NOMOUNT PFILE=$HOME/initUSER350.ora
MOUNT: This stage is used for specific maintenance operations. The database is mounted, but not
open. You can use this option if you need to:
Rename datafiles.
Example startup commands for maintaining the Oracle database and for the database
belonging to USER350 are shown here.
SQL> STARTUP MOUNT PFILE=$ORACLE_HOME/dbs/initDBORCL.ora
SQL> STARTUP MOUNT PFILE=$HOME/initUSER350.ora
OPEN: This stage is used for normal database operations. Any valid user can connect to the
database. Opening the database includes opening datafiles and redo log files. If any of these files are
missing, Oracle will return an error. If errors occurred during the previous database shutdown, the
SMON background process will initiate instance recovery. An example command to startup the database
in OPEN stage is shown here.
SQL> STARTUP PFILE=$ORACLE_HOME/dbs/initDBORCL.ora
SQL> STARTUP PFILE=$HOME/initUSER350.ora
If the database initialization parameter file is in the default location at $ORACLE_HOME/dbs, then you
can simply type the command STARTUP and the database associated with the current value
of ORACLE_SID will startup.
Startup Command Options:
You can force a restart of a running database that aborts the current Instance and starts a new normal
instance with the FORCE option.
SQL> STARTUP FORCE PFILE=$HOME/initUSER350.ora
Sometimes you will want to startup the database, but restrict connection to users with the RESTRICTED
SESSION privilege so that you can perform certain maintenance activities such as exporting or importing
part of the database.
SQL> STARTUP RESTRICT PFILE=$HOME/initUSER350.ora
You may also want to begin media recovery when a database starts where your system has suffered a
disk crash.
SQL> STARTUP RECOVER PFILE=$HOME/initUSER350.ora
On a LINUX server, you can automate startup/shutdown of an Oracle database by making entries in a
special operating system file named oratab located in the /var/opt/oracle directory.
Restricted Mode
Earlier you learned to startup the database in a restricted mode with the RESTRICT option. If the
database is open, you can change to a restricted mode with the ALTER SYSTEM command as shown
here. The first command restricts logon to users with restricted privileges. The second command
enables all users to connect.
SQL> ALTER SYSTEM ENABLE RESTRICTED SESSION;
SQL> ALTER SYSTEM DISABLE RESTRICTED SESSION;
One of the tasks you may perform during restricted session is to kill current user sessions prior to
performing a task such as the export of objects (tables, indexes, etc.). The ALTER SYSTEM KILL
SESSION 'integer1, integer2' command is used to do this. The values of integer1 and integer2 are
obtained from the SID and SERIAL# columns in the V$SESSION view. The first six SID values shown
below are for background processes and should be left alone! Notice that the
users SYS and USER350 are connected. We can kill the session for user account name DBOCKSTD.
Now when DBOCKSTD attempts to select data, the following message is received.
When a session is killed, PMON will rollback the user's current transaction and release all table and row
locks held and free all resources reserved for the user.
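A sketch of the sequence, with the SID and SERIAL# values purely illustrative:
SQL> SELECT sid, serial#, username FROM v$session WHERE username = 'DBOCKSTD';

       SID    SERIAL# USERNAME
---------- ---------- ------------
       145       2034 DBOCKSTD

SQL> ALTER SYSTEM KILL SESSION '145, 2034';

System altered.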
READ ONLY Mode
You can open a database as read-only provided it is not already open in read-write mode. This is useful
when you have a standby database that you want to use to enable system users to execute queries while
the production database is being maintained.
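A sketch of the commands to open a database read-only, assuming the instance is currently shut down:
SQL> STARTUP MOUNT
SQL> ALTER DATABASE OPEN READ ONLY;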
An oracle database can be started in various modes. Each mode is used by the DBA's to perform some
specific operation in the database.
To start the database there are 3 modes.
NOMOUNT ==> MOUNT ==> OPEN
After STARTUP NOMOUNT, V$INSTANCE reports:
STATUS
-----------
STARTED
STARTUP MOUNT MODE: (Maintenance phase)
Mounting a database into mount includes the following tasks:
Locating and opening the control file specified in the parameter file.
Reading the control file to obtain the name, status and destination of DATA FILES AND ONLINE
REDO LOG FILES
To perform special maintenance operations
Renaming data files (Data files for an offline tablespace can be renamed when the database is
open)
Enabling and disabling online redo log file archiving, flashback options.
To open the database, the Oracle server first opens all the data files and the online
redo log files, and verifies that the database is consistent. If the database isn't
consistent (for example, if the SCNs in the control files don't match some of the SCNs in
the data file headers), the background process will automatically perform an instance
recovery before opening the database. If media recovery rather than instance recovery is
needed, Oracle will signal that a database recovery is called for and won't open the
database until you perform the recovery.
Opening a database includes the following tasks:
Open online data files
Open online redo log files
Command to open the database from the mount state:
SQL> ALTER DATABASE OPEN;
Database altered.
SQL> SELECT STATUS FROM V$INSTANCE;
STATUS
-----------
OPEN
If we start an oracle database in restricted mode then only those users who have restricted session
privilege will be able to connect to the database.
Startup restrict includes the following tasks:
It opens the database in restricted mode, where only users with the RESTRICTED SESSION privilege can
connect.
The DBA uses STARTUP RESTRICT mode to:
Perform an export or import of database data
Perform a data load (with SQL*Loader)
Temporarily prevent typical users from using data
During certain migration and upgrade operations
Command to start Database in STARTUP RESTRICT mode.
SQL> startup restrict
ORACLE instance started.
Total System Global Area 504366872 bytes
Fixed Size 743192 bytes
Variable Size 285212672 bytes
Database Buffers 218103808 bytes
Redo Buffers 307200 bytes
Database mounted.
Database opened.
After database open in restricted mode the alter system command can be used to put the database in
and out of restricted session once it is open:
SQL> alter system enable restricted session;
system altered
SQL> alter system disable restricted session;
system altered
CONCLUSION:
STARTUP NOMOUNT MODE - The nomount state is used by the DBA to create a new Oracle database.
STARTUP MOUNT MODE - The mount state is used by the DBA to perform recovery.
STARTUP OPEN MODE - The open state is used by the DBA and programmers to work with the database in a normal way.
STARTUP FORCE MODE - The force state is used by the DBA in worst-case scenarios when you are not able to shut down the database using the NORMAL/IMMEDIATE options.
STARTUP RESTRICT MODE - Starts the database in restricted mode; only those users who have the RESTRICTED SESSION privilege can connect.
IMPORTANT NOTE:
When shutting down a database in a normal way (SHUTDOWN NORMAL), the Oracle server waits for all users to disconnect before completing the shutdown. With SHUTDOWN IMMEDIATE, connected clients are disconnected and SQL statements in process are not completed.
alert - A new alert directory for the plain text and XML versions of the alert log.
trace - A replacement for the ancient background dump (bdump) and user dump (udump)
destination. This is where the alert_SID.log is stored.
cdump - The old core dump directory retains its Oracle 10g name.
Oracle 11g writes two alert logs.
One is written as a plain text file and is named alert_SID.log (for example, a database named USER350 would have an alert log named alert_USER350.log).
It will be stored in the location specified by DIAGNOSTIC_DEST if you set that parameter. I found the DBORCL alert log, named alert_DBORCL.log, located at /u01/app/oracle/diag/rdbms/dborcl/DBORCL/trace. This location directory was generated based on a setting of DIAGNOSTIC_DEST = '/u01/app/oracle'.
You can access the alert log via standard SQL using the new V$DIAG_INFO view:
column name format a22;
column value format a55;
select name, value from v$diag_info;
NAME                   VALUE
---------------------- -------------------------------------------------------
Diag Enabled           TRUE
ADR Base               /u01/app/oracle
ADR Home               /u01/app/oracle/diag/rdbms/dborcl/DBORCL
Diag Trace             /u01/app/oracle/diag/rdbms/dborcl/DBORCL/trace
Diag Alert             /u01/app/oracle/diag/rdbms/dborcl/DBORCL/alert
Diag Incident          /u01/app/oracle/diag/rdbms/dborcl/DBORCL/incident
Diag Cdump             /u01/app/oracle/diag/rdbms/dborcl/DBORCL/cdump
Health Monitor         /u01/app/oracle/diag/rdbms/dborcl/DBORCL/hm
Default Trace File     /u01/app/oracle/diag/rdbms/dborcl/DBORCL/trace/DBORCL_ora_25119.trc
Active Problem Count   1
Active Incident Count  2

11 rows selected.
You can enable or disable user tracing with the ALTER SESSION command as shown here.
ALTER SESSION SET SQL_TRACE = TRUE
You can also set the SQL_TRACE = TRUE parameter in the initialization parameter files.
PFILE:
The pfile is a static, text-based initialization parameter file.
If we start the database using a pfile, then the System Global Area will be static, i.e. we cannot modify the System Global Area at runtime.
We can create a pfile from an spfile using the SQL> create pfile from spfile; command.
SPFILE:
The spfile is a server parameter file.
If we start the database using an spfile, then the System Global Area will be dynamic, i.e. we can modify the System Global Area at runtime without shutting down the database.
We can create an spfile from a PFILE using the SQL> create spfile from pfile; command.
An SPFILE doesn't need a local copy of the pfile to start Oracle from a remote machine; this eliminates configuration problems.
The SPFILE is a binary file, and modifications to it can only be made through the ALTER SYSTEM SET command.
As the SPFILE is maintained by the server, human errors can be eliminated because parameters are checked before modification in the SPFILE.
It is easy to locate the SPFILE as it is stored in a central location.
Changes to parameters in the SPFILE take immediate effect without a restart of the instance, i.e. dynamic change of parameters is possible.
The SPFILE can be backed up by RMAN.
SQL> create pfile from spfile;
NOTE
1) To fire the above command your database should at least be connected to an idle instance of the target database.
2) The spfile should be present at the default location [/u01/app/oracle/product/10.2.0/db_1/dbs].
To create an SPFILE at the default location [/u01/app/oracle/product/10.2.0/db_1/dbs] fire the following command:
SQL> create spfile from pfile;
NOTE
1) To fire the above command your database should at least be connected to an idle instance of the target database.
2) The pfile should be present at the default location [/u01/app/oracle/product/10.2.0/db_1/dbs].
STORAGE MANAGEMENT
At the finest level of granularity, Oracle stores data in data blocks (also called logical blocks, Oracle
blocks, or pages). One data block corresponds to a specific number of bytes of physical database space
on disk.
The next level of logical database space is an extent. An extent is a specific number of contiguous data
blocks allocated for storing a specific type of information.
The level of logical database storage greater than an extent is called a segment. A segment is a set of
extents, each of which has been allocated for a specific data structure and all of which are stored in the
same tablespace. For example, each table's data is stored in its own data segment, while each index's
data is stored in its own index segment. If the table or index is partitioned, each partition is stored in its
own segment.
Oracle allocates space for segments in units of one extent. When the existing extents of a segment are
full, Oracle allocates another extent for that segment. Because extents are allocated as needed, the
extents of a segment may or may not be contiguous on disk.
A segment and all its extents are stored in one tablespace. Within a tablespace, a segment can include
extents from more than one file; that is, the segment can span datafiles. However, each extent can
contain data from only one datafile.
Although you can allocate additional extents, the blocks themselves are allocated separately. If you
allocate an extent to a specific instance, the blocks are immediately allocated to the free list. However, if
the extent is not allocated to a specific instance, then the blocks themselves are allocated only when the
high water mark moves. The high water mark is the boundary between used and unused space in a
segment.
Overview of Data Blocks
Oracle manages the storage space in the datafiles of a database in units called data blocks. A data block
is the smallest unit of data used by a database. In contrast, at the physical, operating system level, all
data is stored in bytes. Each operating system has a block size. Oracle requests data in multiples of
Oracle data blocks, not operating system blocks. The standard block size is specified by the
DB_BLOCK_SIZE initialization parameter
Data Block Format
Segment Types
Objects in an Oracle database such as tables, indexes, clusters, sequences, etc., are comprised
of segments. There are several different types of segments.
Table: Data are stored in tables. When a table is created with the CREATE TABLE command, a table
segment is allocated to the new object.
The DBA has almost no control over the location of rows in a table.
Cluster: Rows in a cluster segment are stored based on key value columns. Clustering is sometimes
used where two tables are related in a strong-weak entity relationship.
All of the tables in a cluster belong to the same segment and have the same storage
parameters.
Index: Tables may have more than one index, and each index has its own segment.
Each index segment has a single purpose: to speed up the process of locating rows in a table
or cluster.
Index-Organized Table: This special type of table has data stored within the index based on primary
key values. All data is retrievable directly from the index structure (a tree structure).
Index Partition: Just as a table can be partitioned, so can an index. The purpose of using a
partitioned index is to minimize contention for the I/O path by spreading index input-output across more
than one I/O path.
Temporary Segment: When a sort operation cannot complete entirely in memory, intermediate results of sort actions are written to disk so that the sort operation can continue; this allows information to swap in and out of memory by writing/reading to/from disk.
LOB Segment: Large objects (LOBs) are not stored in the table; they are stored as separate segment objects.
The table with the LOB column actually has a "pointer" (locator) value stored in the column that points to
the location of the LOB.
NestedTable: A column in one table may consist of another table definition. The inner table is called a
"nested table" and is stored as a separate segment. This would be done for a SALES_ORDER table that
has the SALES_DETAILS (order line rows) stored as a nested table.
Bootstrap Segment: This is a special cache segment created by the sql.bsq script that runs when a
database is created.
Locally Managed Tablespaces use bitmaps to track used and free space. Locally
managed is the default for non-SYSTEM permanent tablespaces when the type of extent
management is not specified at the time a tablespace is created.
o Tablespace extents for Locally Managed are either (1) Uniform, specified with
the UNIFORM clause, or (2) variable extent sizes determined by the system with
the AUTOALLOCATE clause.
Uniform: all extents in the tablespace are the same size, as specified with the UNIFORM SIZE clause.
Dictionary Managed Tablespaces: tables in the data dictionary track space utilization.
Facts about storage parameters:
Segment storage parameters can override the tablespace level defaults with the exception of
two parameters. You cannot override the MINIMUM EXTENT or UNIFORM SIZE tablespace
parameters.
If you do not specify segment storage parameters, then a segment will inherit the tablespace
default parameters.
If tablespace default storage parameters are not set, the Oracle server system default
parameters are used.
Locally managed tablespaces cannot have the storage parameters INITIAL, NEXT, PCTINCREASE, and MINEXTENTS specified; however, these parameters can be specified at the segment level.
When storage parameters of a segment are modified, the modification only applies to extents
that are allocated after the modification takes place.
Extents
Extents are allocated in chunks that are not necessarily uniform in size, but the space allocated is
contiguous on the disk drive as is shown in this figure.
When a database object such as a table grows, additional disk space is allocated to its
segment of the tablespace in the form of an extent.
This figure shows two extents of different sizes for the Department table segment.
In order to develop an understanding of extent allocation to segments, review this CREATE
TABLESPACE command.
CREATE TABLESPACE data
DATAFILE '/u01/student/dbockstd/oradata/USER350data01.dbf'
SIZE 20M
EXTENT MANAGEMENT LOCAL UNIFORM SIZE 40K;
In a dictionary-managed tablespace, the DEFAULT STORAGE clause is used to specify the size of extents allocated to segments created within the tablespace; its parameters are described below.
INITIAL specifies the initial extent size (the first extent allocated).
o A size that is too large here can cause failure of the database if there is not any area on
the disk drive with sufficient contiguous disk space to satisfy the INITIAL parameter.
o When a database is built to store information from an older system that is being
converted to Oracle, a DBA may have some information about how large initial extents
need to be in general and may specify a larger size as is done here at 128K.
NEXT specifies the size of the next extent (2nd, 3rd, etc).
o This is termed an incremental extent.
o This can also cause failure if the size is too large.
o Usually a smaller value is used, but if the value is too small, segment fragmentation can
result.
o This must be monitored periodically by a DBA which is why dictionary managed
tablespaces are NOT preferred.
PCTINCREASE can be very troublesome.
o If you set this very high, e.g. 50% as is shown here, the segment extent size can
increase by 7,655% over just 10 extents.
o Best solution: a single INITIAL extent of the correct size followed by a small value
for NEXT and a value of 0 (or a small value such as 5) for PCTINCREASE.
Use smaller default INITIAL and NEXT values for a dictionary-managed tablespace's default storage
clauses as these defaults can be over-ridden during the creation of individual objects (tables, indexes,
etc.) where the STORAGE clause is used in creating the individual objects.
MINEXTENTS and MAXEXTENTS parameters specify the minimum and maximum number of
extents allocated by default to segments that are part of the tablespace.
The default storage parameters can be overridden when a segment is created as is illustrated in this next
section.
Example of a CREATE TABLE Command
This shows the creation of a table named Orders in the Data01 tablespace.
The storage parameters specified here override the storage parameters for the Data01 tablespace.
Ship_Date      DATE,
Amount_Due     NUMBER(10,2),
Amount_Paid    NUMBER(10,2) )
This figure represents a Locally Managed tablespace where the Locally Managed tablespace's
extent size is specified by the EXTENT MANAGEMENT LOCAL AUTOALLOCATE clause; recall
that AUTOALLOCATE enables Oracle to decide the appropriate extent size for a segment. In an
older Oracle database, it could also represent a Dictionary Managed tablespace.
As segments are dropped, altered, or truncated, extents are released to become free extents
available for reallocation.
The first extent is allocated to a segment, even though the data blocks may be empty.
Oracle formats the blocks for an extent only as they are used - they can actually contain old
data.
Extents for a segment must always be in the same tablespace, but can be in different datafiles.
The first data block of every segment contains a directory of the extents in the segment.
If you delete data from a segment, the extents/blocks are not returned to the tablespace for
reuse. Deallocation occurs when:
o You DROP a segment.
o You use an online segment shrink to reclaim fragmented space in a segment.
ALTER TABLE employees ENABLE ROW MOVEMENT;
ALTER TABLE employees SHRINK SPACE CASCADE;
The 2 main views to find segments and extents information are: dba_segments and dba_extents
SQL> desc dba_segments
 Name                         Null?    Type
 ---------------------------- -------- ----------------
 OWNER                                 VARCHAR2(30)
 SEGMENT_NAME                          VARCHAR2(81)
 PARTITION_NAME                        VARCHAR2(30)
 SEGMENT_TYPE                          VARCHAR2(18)
 SEGMENT_SUBTYPE                       VARCHAR2(10)
 TABLESPACE_NAME                       VARCHAR2(30)
 HEADER_FILE                           NUMBER
 HEADER_BLOCK                          NUMBER
 BYTES                                 NUMBER
 BLOCKS                                NUMBER
 EXTENTS                               NUMBER
 INITIAL_EXTENT                        NUMBER
 NEXT_EXTENT                           NUMBER
 MIN_EXTENTS                           NUMBER
 MAX_EXTENTS                           NUMBER
 MAX_SIZE                              NUMBER
 RETENTION                             VARCHAR2(7)
 MINRETENTION                          NUMBER
 PCT_INCREASE                          NUMBER
 FREELISTS                             NUMBER
 FREELIST_GROUPS                       NUMBER
 RELATIVE_FNO                          NUMBER
 BUFFER_POOL                           VARCHAR2(7)
 FLASH_CACHE                           VARCHAR2(7)
 CELL_FLASH_CACHE                      VARCHAR2(7)

SQL> desc dba_extents
 Name                         Null?    Type
 ---------------------------- -------- ----------------
 OWNER                                 VARCHAR2(30)
 SEGMENT_NAME                          VARCHAR2(81)
 PARTITION_NAME                        VARCHAR2(30)
 SEGMENT_TYPE                          VARCHAR2(18)
 TABLESPACE_NAME                       VARCHAR2(30)
 EXTENT_ID                             NUMBER
 FILE_ID                               NUMBER
 BLOCK_ID                              NUMBER
 BYTES                                 NUMBER
 BLOCKS                                NUMBER
 RELATIVE_FNO                          NUMBER
Another important concept to understand, in the case of table segments, is the "High Water Mark" (HWM). It
defines the position of the last formatted block of the segment. This means that in the case of a full table scan
(FTS - i.e. SELECT * FROM table1;) Oracle will go through all the segment's blocks up to the HWM position.
Database Block
The Database Block or simply Data Block, as you have learned, is the smallest size unit for
input/output from/to disk in an Oracle database.
A data block may be equal to an operating system block in terms of size, or may be larger in
size, and should be a multiple of the operating system block.
The DB_BLOCK_SIZE parameter sets the size of a database's standard blocks at the time
that a database is created.
DB_BLOCK_SIZE has to be a multiple of the physical block size allowed by the operating
system for a server's storage devices.
If DB_BLOCK_SIZE is not set, then the default data block size is operating system-specific.
The standard data block size for a database is 4KB or 8KB.
Oracle also supports the creation of databases that have more than one block size. This is
primarily done when you need to specify tablespaces with different block sizes in order to
maximize I/O performance.
You've already learned that a database can have up to four nonstandard block
sizes specified.
Block sizes must be sized as a power of two between 2K and 32K in size,
e.g., 2K, 4K, 8K, 16K, or 32K.
A sub cache of the Database Buffer Cache is configured by Oracle for each nonstandard block
size.
Standard Block Size: The DB_CACHE_SIZE parameter specifies the size of the Database Buffer
Cache. However, if SGA_TARGET is set and DB_CACHE_SIZE is not, then Oracle decides how much
memory to allocate to the Database Buffer Cache. The minimum size for DB_CACHE_SIZE must be
specified as follows:
One granule where a granule is a unit of contiguous virtual memory allocation in RAM.
If the total System Global Area (SGA) based on SGA_MAX_SIZE is less than 128MB, then a
granule is 4MB.
If the total SGA is greater than 128MB, then a granule is 16MB.
The default value for DB_CACHE_SIZE is 48MB rounded up to the nearest granule size.
Nonstandard Block Size: If a DBA wishes to specify one or more nonstandard block sizes, the
following parameters are set.
The data block sizes should be a multiple of the operating system's block size within the
maximum limit to avoid unnecessary I/O.
Oracle data blocks are the smallest units of storage that Oracle can use or allocate.
Do not use the specified DB_BLOCK_SIZE value to set nonstandard block sizes.
For example, if the standard block size is 8K, do not use the DB_8K_CACHE_SIZE parameter.
Here the nonstandard block size specified with the BLOCKSIZE clause is 32K.
This command will not execute unless the DB_32K_CACHE_SIZE parameter has already
been specified because buffers of size 32K must already be allocated in the Database Buffer
Cache as part of a sub cache.
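A minimal sketch of the command being described (the tablespace name and datafile path are assumptions):
-- requires DB_32K_CACHE_SIZE to have been set, e.g. DB_32K_CACHE_SIZE=16M
CREATE TABLESPACE graphics_data
DATAFILE '/u02/oradata/DBORCL/graphics_data01.dbf' SIZE 100M
BLOCKSIZE 32K;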
There are some additional rules regarding the use of multiple block sizes:
If an object is partitioned and resides in more than one tablespace, all of the tablespaces
where the object resides must be the same block size.
Temporary tablespaces must be the standard block size. This also applies to permanent
tablespaces that have been specified as default temporary tablespaces for system users.
What Block Size To Use?
Use the largest block size available with your operating system for a new database.
Using a larger database block size should improve almost every performance factor.
If the database has excessive buffer busy waits (due to a large # of users performing updates
and inserts), then increase the freelists parameter setting for the table or other busy objects.
Data Block Contents
This figure shows the components of a data block. This is the structure regardless of the type of
segment to which the block belongs.
Block header contains common and variable components including the block address, segment type,
and transaction slot information.
The block header also includes the table directory and row directory.
On average, the fixed and variable portions of block overhead total 84 to 107 bytes.
Table Directory - used to track the tables to which row data in the block belongs.
o Data from more than one table may be in a single block if the data are clustered.
o The Table Directory is only used if data rows from more than one table are
stored in the block, for example, a cluster.
Row Directory - used to track which rows from a table are in this block.
o The Row Directory includes an entry for each row or row fragment in the row data area.
o When space is allocated in the Row Directory to store information about a row, this
space is not reclaimed upon deletion of a row, but is reclaimed when new rows are
inserted into the block.
o A block can be empty of rows, but if it once contained rows, then data will be allocated in
the Row Directory (2 bytes per row) for each row that ever existed in the block.
Transaction Slots are space that is used when transactions are in progress that will modify
rows in the block.
The block header grows from top down.
Data space (Row Data) stores row data that is inserted from the bottom up.
Free space in the middle of a block can be allocated to either the header or data space, and is
contiguous when the block is first allocated.
Free space is allocated to allow variable character and numeric data to expand and contract as
data values in existing rows are modified.
Free space may fragment as rows in the block are modified or deleted.
Oracle (the SMON background process) automatically and transparently coalesces the free space of a
data block periodically only when the following conditions are true:
An INSERT or UPDATE statement attempts to use a block that contains sufficient free space
to contain a new row piece.
The free space is fragmented so that the row piece cannot be inserted in a contiguous section
of the block.
After coalescing, the amount of free space is identical to the amount before the operation, but the space
is now contiguous. This figure shows before and after coalescing free space.
Table Data in a Segment: Table data is stored in the form of rows in a data block.
The figures below show the block header then the data space (row data) and the free space.
The storage overhead is in the form of "hidden" columns accessible by the DBMS that specify
the length of each succeeding column.
Rows are stored right next to each other with no spaces in between.
Column values are stored right next to each other in a variable length format.
The length of a field indicates the length of each column value (variable length - Note the
Length Column 1, Length Column 2, etc., entries in the figure).
Two circumstances cause row chaining or migration:
The row is too large to fit into one data block when it is first inserted, or the table
contains more than 255 columns (the maximum for a row piece).
o In this case, Oracle stores the data for the row in a chain of data blocks (one or more)
reserved for that segment.
o Row chaining most often occurs with large rows, such as rows that contain a column of
datatype LONG or LONG RAW.
o Row chaining in these cases is unavoidable.
A row that originally fit into one data block has one or more columns updated so that the
overall row length increases, and the block's free space is already completely filled.
o In this case, Oracle migrates the data for the entire row to a new data block, assuming
the entire row can fit in a new block.
o Oracle preserves the original row piece of a migrated row to point to the new block
containing the migrated row.
o The rowid of a migrated row does not change.
When a row is chained or migrated, I/O performance associated with this row decreases because Oracle
must scan more than one data block to retrieve the information for the row.
Manual Data Block Free Space Management -- Database Block Space Utilization Parameters
Manual data block management requires a DBA to specify how block space is used and when a block is
available for new row insertions.
This is the default method for data block management for dictionary managed
tablespace objects (another reason for using locally managed tablespaces with UNIFORM
extents).
Database block space utilization parameters are used to control space allocation for data and
index segments.
The INITRANS parameter:
specifies the initial number of transaction slots created when a database block is initially
allocated to either a data or index segment.
These slots store information about the transactions that are making changes to the block at a
given point in time.
If you set INITRANS to 2, then there are 46 bytes (2 * 23) pre-allocated in the header, etc.
If a DBA specifies INITRANS at 4, for example, this means that 4 transactions can be
concurrently making modifications to the database block.
Also, setting this to a figure that is larger than the default can eliminate the processing
overhead that occurs whenever additional transaction slots have to be allocated to a block's
header when the number of concurrent transactions exceeds the INITRANS setting.
The MAXTRANS parameter:
specifies the maximum number of concurrent transactions that can modify rows in a database
block.
This parameter is set to guarantee that there is sufficient space in the block to store data or
index data.
Example: Suppose a DBA sets INITRANS at 4 and MAXTRANS at 10. Initially, 4 transaction slots
are allocated in the block header. If 6 system users process concurrent transactions for a given block,
then the number of transaction slots increases by 2 slots to 6 slots. Once this space is allocated in the
header, it is not deallocated.
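A minimal sketch of setting these parameters at table creation (the table name, columns, and values are assumptions; MAXTRANS is deprecated in recent releases but is still accepted):
CREATE TABLE order_lines (
  line_id   NUMBER,
  order_id  NUMBER,
  qty       NUMBER )
INITRANS 4
MAXTRANS 10;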
PCTFREE is the only space parameter used for Automatic Segment Space Management.
The parameter guarantees that at least PCTFREE space is reserved for updates to existing
data rows. PCTFREE reserves space for growth of existing rows through the modification of data
values.
This figure shows the situation where the PCTFREE parameter is set to 20 (20%).
New rows can be added to a data block as long as the amount of space remaining is at or
greater than PCTFREE.
After PCTFREE is met (this means that there is less space available than
the PCTFREE setting), Oracle considers the block full and will not insert new rows to the block.
PCTUSED: The parameter PCTUSED is used to set the level at which a block can again be considered
by Oracle for insertion of new rows. It is like a low water mark whereas PCTFREE is a high water
mark. The PCTUSED parameter sets the minimum percentage of a block that can be used for row data
plus overhead before new rows are added to the block.
After a data block is filled to the limit determined by PCTFREE, Oracle Database considers the
block unavailable for the insertion of new rows until the percentage of that block falls beneath
the parameter PCTUSED.
As free space grows (the space allocated to rows in a database block decreases due to
deletions or updates), the block can again have new rows inserted but only if the percentage of
the data block in use falls below PCTUSED.
Example: if PCTUSED is set at 40, once PCTFREE is hit, the percentage of block space
used must drop to 39% or less before row insertions are again made.
Oracle tries to keep a data block at least PCTUSED full before using new blocks.
The PCTUSED parameter is not set when Automatic Segment Space Management is
enabled. This parameter only applies when Manual Segment Space Management is in use.
This figure depicts the situation where PCTUSED is set to 40 and PCTFREE is set to 20 (40% and 20%
respectively).
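A minimal sketch of specifying these values when creating a table (the table name, columns, and values are assumptions; PCTUSED applies only with manual segment space management):
CREATE TABLE invoices (
  invoice_id   NUMBER,
  customer_id  NUMBER,
  amount       NUMBER(10,2) )
PCTFREE 20
PCTUSED 40;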
Both PCTFREE and PCTUSED are calculated as percentages of the available data space; Oracle deducts
the space allocated to the block header from the total block size when computing these parameters.
A high PCTFREE has these effects:
There is a lot of space for the growth of existing rows in a data block.
Performance is improved since data blocks do not need to be reorganized very frequently.
Storage space within a data block may not be used efficiently as there is always some empty
space in the data blocks.
A low PCTFREE has these effects (basically the opposite effect of high PCTFREE):
Performance may suffer due to the need to reorganize data in data blocks more frequently:
o Oracle may need to migrate a row that will no longer fit into a data block due to
modification of data within the row.
o If the row will no longer fit into a single database block, as may be the case for very large
rows, then database blocks are chained together logically with pointers. This also
causes a performance hit. This may also cause a DBA to consider the use of a
nonstandard block size. In these situations, I/O performance will degrade.
o Examine the extent of chaining or migrating with the ANALYZE command. You may
resolve row chaining and migration by exporting the object (table), dropping the object,
and then importing the object.
A high PCTUSED has these effects:
Decreases performance because data blocks may experience more migrated and chained rows.
Reduces wasted storage space by filling each data block more fully.
With a low PCTUSED, storage space usage is not as efficient due to more unused space in data blocks.
Guidelines for setting PCTFREE and PCTUSED:
If data for an object tends to be fairly stable (doesn't change in value very much), not much free space is
needed (as little as 5%). If changes occur extremely often and data values are very volatile, you may
need as much as 40% free space. Once this parameter is set, it cannot be changed without at least
partially recreating the object affected.
Update activity with high row growth - the application uses tables that are frequently
updated, affecting row size. Set PCTFREE moderately high and PCTUSED moderately low to
allow space for row growth.
PCTFREE = 20 to 25
PCTUSED = 35 to 40
(100 - PCTFREE) - PCTUSED = 35 to 45
Insert activity with low row growth - the application has more insertions of new rows with
very little modification of existing rows. Set PCTFREE low and PCTUSED at a moderate
level. This will avoid row chaining. Each data block has its space well utilized, but once new row
insertion stops, there are no more row insertions until a lot of storage space is again
available in a data block; this minimizes migration and chaining.
PCTFREE = 5 to 10
PCTUSED = 50 to 60
(100 - PCTFREE) - PCTUSED = 30 to 45
Performance is of primary importance and disk space is readily available - when disk space
is abundant and performance is the critical issue, a DBA must ensure minimal migration or
chaining occurs by using very high PCTFREE and very low PCTUSED settings. A lot of storage
space will be wasted to minimize migration and chaining.
PCTFREE = 30
PCTUSED = 30
(100 - PCTFREE) - PCTUSED = 40
Disk space usage is important and performance is secondary - the application uses
large tables and disk space usage is critical. Here PCTFREE should be very low
while PCTUSED is very high; the tables will experience some data row migration and chaining
with a performance hit.
PCTFREE = 5
PCTUSED = 90
(100 - PCTFREE) - PCTUSED = 5
Free lists: With Manual Segment Space Management, when a segment is created, it is created with
a Free List that is used to track the blocks allocated to the segment that are available for row
insertions.
A segment can have more than one free list if the FREELISTS parameter is specified in the
storage clause when an object is created.
If a block has free space that falls below PCTFREE, that block is removed from the free list.
Oracle improves performance by not considering blocks that are almost full as candidates for
row insertions.
Automatic Segment Space Management
Free space can be managed either automatically or manually.
Automatic simplifies the management of the PCTUSED, FREELISTS, and FREELIST GROUPS parameters.
Automatic generally provides better space utilization where objects may vary considerably in
terms of row size.
This can also yield improved concurrent access handling for row insertions.
A restriction is that you cannot use this approach if a tablespace will contain LOBs.
The free and used space for a segment is tracked with bitmaps instead of free lists.
The bitmap is stored in the header section of the segment, in a separate set of blocks
called bitmapped blocks.
The bitmap tracks the status of each block in a segment with respect to available space.
Think of an individual bit as either being "on" to indicate the block is available or "off" to
indicate the block is not available.
When a new row needs to be inserted into a segment, the bitmap is searched for a candidate
block. This search occurs much more rapidly than can be done with a Free List because
a Bit Map Index can often be entirely stored in memory and the use of a Free List requires
searching a chain data structure (linked list).
Automatic segment management can only be enabled at the tablespace level, and only if the tablespace
is locally managed. An example CREATE TABLESPACE command is shown here.
CREATE TABLESPACE user_data
DATAFILE '/u01/student/dbockstd/oradata/USER350data01.dbf' SIZE 20M
EXTENT MANAGEMENT LOCAL UNIFORM SIZE 40K
SEGMENT SPACE MANAGEMENT AUTO;
The SEGMENT SPACE MANAGEMENT AUTO clause specifies the creation of the bitmapped segments.
Automatic segment space management offers the following benefits:
Ease of use.
Better space utilization, especially for the objects with highly varying size rows.
Two types of statements can increase the free space of one or more data blocks:
UPDATE statements that update a column value to a smaller value than was previously
required.
INSERT statements, but only if the tablespace allows for compression and the INSERT causes
data to be compressed, thereby freeing up some space in a block.
Both of these statements release space that can be used subsequently by an INSERT
statement.
Released space may or may not be contiguous with the main area of free space in a data
block.
Oracle coalesces the free space of a data block only when:
An INSERT or UPDATE statement attempts to use a block that contains enough free space to
contain a new row piece, or
the free space is fragmented so the row piece cannot be inserted in a contiguous section of
the block.
Oracle performs this coalescing only in such situations, because otherwise the performance of a database
system would decrease due to continuous coalescing of the free space in data blocks.
Using the Data Dictionary to Manage Storage
Periodically you will need to obtain information from the data dictionary about storage parameter
settings. The following views are useful.
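For example, queries such as the following sketch against DBA_TABLESPACES and DBA_SEGMENTS report current storage settings and space usage (the column selection is illustrative):
SELECT tablespace_name, initial_extent, next_extent, pct_increase
FROM dba_tablespaces;

SELECT segment_name, tablespace_name, extents, blocks
FROM dba_segments
WHERE owner = 'USER350';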
TABLESPACE MANAGEMENT:
A tablespace belongs to only one database, and has at least one datafile that is used to store
data for the associated tablespace.
The term "tablespaces" is misleading because a tablespace can store tables, but can also store
many other database objects such as indexes, views, sequences, etc.
Because disk drives have a finite size, a tablespace can span disk drives when datafiles from
more than one disk drive are assigned to a tablespace. This enables systems to be very, very
large.
Datafiles are always assigned to only one tablespace and, therefore, to only one database.
As is shown in the figure below, a tablespace can span datafiles.
Tablespace Types
There are three types of tablespaces: (1) permanent, (2) undo, and (3) temporary.
Permanent These tablespaces store objects in segments that are permanent that persist
beyond the duration of a session or transaction.
Undo These tablespaces store segments that may be retained beyond a transaction, but are
basically used to:
o Provide read consistency for SELECT statements that access tables that have rows that
are in the process of being modified.
o Provide the ability to rollback a transaction that fails to commit.
Temporary This tablespace stores segments that are transient and only exist for the
duration of a session or a transaction. Mostly, a temporary tablespace stores rows for sort and
join operations.
Beginning with Oracle 10g, the smallest Oracle database is two tablespaces. This applies to Oracle 11g.
o SYSTEM stores the data dictionary.
o SYSAUX stores data for auxiliary applications (covered in more detail later in these notes).
In reality, a typical production database has numerous tablespaces. These include SYSTEM and non-SYSTEM tablespaces.
SYSTEM - a tablespace that is always used to store SYSTEM data that includes data about tables,
indexes, sequences, and other objects; this metadata comprises the data dictionary.
Every Oracle database has to have a SYSTEM tablespace; it is the first tablespace created
when a database is created.
The SYSTEM tablespace could store user data, but this is not normally done; a good rule to
follow is to never allow the storage of user segments in the SYSTEM tablespace.
Like the SYSTEM tablespace, SYSAUX requires a higher level of security and it cannot be
dropped or renamed.
Do not allow user objects to be stored in SYSAUX. This tablespace should only store system
specific objects.
MINIMUM EXTENT: Every used extent for the tablespace will be a multiple of this integer
value. Use either T, G, M or K to specify terabytes, gigabytes, megabytes, or kilobytes.
BLOCKSIZE: This specifies a nonstandard block size. This clause can only be used if the
DB_CACHE_SIZE parameter is used and at least one DB_nK_CACHE_SIZE parameter is set, and
the integer value for BLOCKSIZE must correspond with one of the DB_nK_CACHE_SIZE parameter
settings.
LOGGING: This is the default; all tables, indexes, and partitions within a tablespace have
modifications written to the online redo logs.
NOLOGGING: This option is the opposite of LOGGING and is used most often when large
direct loads of clean data are done during database creation for systems that are being ported
from another file system or DBMS to Oracle.
DEFAULT storage_clause: This specifies default parameters for objects created inside the
tablespace. Individual storage clauses can be used when objects are created to override the
specified DEFAULT.
TEMPORARY: A temporary tablespace can hold temporary database objects, e.g., segments
created during sorts as a result of ORDER BY clauses or JOIN views of multiple tables. A
temporary tablespace cannot be specified for EXTENT MANAGEMENT LOCAL or have the
BLOCKSIZE clause specified.
extent_management_clause: This clause specifies how the extents of the tablespace are
managed and is covered in detail later in these notes.
segment_management_clause: This specifies how Oracle will track used and free space in
segments in a tablespace that is using free lists or bitmap objects.
MAXSIZE: Specifies the maximum disk space allocated to the tablespace. Usually set in
megabytes, e.g., 400M or specified as UNLIMITED.
Tablespace Space Management
Tablespaces can be either Locally Managed or Dictionary Managed. Dictionary managed tablespaces
have been deprecated (are no longer used - are obsolete) with Oracle 11g; however, you may encounter
them when working at a site that is using Oracle 10g.
When you create a tablespace, if you do not specify extent management, the default is locally managed.
Locally Managed
The extents allocated to a locally managed tablespace are managed through the use of bitmaps.
The bitmap value (on or off) corresponds to whether or not an extent is allocated or free for
reuse.
Local management is the default for the SYSTEM tablespace beginning with Oracle 10g.
When the SYSTEM tablespace is locally managed, the other tablespaces in the database must
also be either locally managed or read-only.
Local management reduces contention for the SYSTEM tablespace because space allocation
and deallocation operations for other tablespaces do not need to use data dictionary tables.
The LOCAL option is the default so it is normally not specified.
With the LOCAL option, you cannot specify any DEFAULT STORAGE, MINIMUM EXTENT,
or TEMPORARY clauses.
Extent Management
Advantages of Local Management: Basically all of these advantages lead to improved system
performance in terms of response time, particularly the elimination of the need to coalesce free extents.
Local management avoids recursive space management operations. This can occur in
dictionary managed tablespaces if consuming or releasing space in an extent results in another
operation that consumes or releases space in an undo segment or data dictionary table.
Because locally managed tablespaces do not record free space in data dictionary tables, they
reduce contention on these tables.
Local management of extents automatically tracks adjacent free space, eliminating the need to
coalesce free extents.
The sizes of extents that are managed locally can be determined automatically by the system.
Changes to the extent bitmaps do not generate undo information because they do not update
tables in the data dictionary (except for special cases such as tablespace quota information).
Example: CREATE TABLESPACE command - this creates a locally managed Inventory tablespace with
AUTOALLOCATE management of extents.
CREATE TABLESPACE inventory
DATAFILE '/u02/student/dbockstd/oradata/USER350invent01.dbf' SIZE 50M
EXTENT MANAGEMENT LOCAL AUTOALLOCATE;
Example: CREATE TABLESPACE command - this creates a locally managed Inventory tablespace with
UNIFORM management of extents with extent sizes of 128K; see the sketch below.
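A minimal sketch consistent with the description above (the datafile name is an assumption and must differ from the file used in the earlier AUTOALLOCATE example):
CREATE TABLESPACE inventory
DATAFILE '/u02/student/dbockstd/oradata/USER350invent02.dbf' SIZE 50M
EXTENT MANAGEMENT LOCAL UNIFORM SIZE 128K;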
PCTINCREASE - percent by which each extent after the second extent grows.
o SMON periodically coalesces free space in a dictionary-managed tablespace, but only if
the PCTINCREASE setting is NOT zero.
o Use ALTER TABLESPACE <tablespacename> COALESCE to manually coalesce
adjacent free extents.
ORA-12913: Cannot create dictionary managed tablespace - raised when you attempt to create a dictionary-managed tablespace while the SYSTEM tablespace is locally managed.
As noted, the keyword AUTOALLOCATE explains why extents are allocated with different sizes: AUTOALLOCATE
specifies that extent sizes are system generated. Most likely our tablespace will be an AUTOALLOCATE LMT.
Anyone who wants to create extents with the same size needs to specify UNIFORM.
Let's start with two users, each assigned a different tablespace. User ROSE is assigned
to the test tablespace (UNIFORM). User SONA is assigned to the samp tablespace (AUTOALLOCATE).
When creating tablespace test, I mentioned UNIFORM, so all extent sizes are the same. We can see
this in the BYTES column in the following screenshot.
When creating tablespace samp, I did NOT mention UNIFORM, so the extent sizes are NOT the same. We can
see this in the BYTES column in the following screenshot.
An LMT can use either AUTOALLOCATE or UNIFORM; the choice is all about how new extents are allocated when
space pressure increases in the tablespace.
UNIFORMLY SIZED EXTENTS - UNIFORM
AUTO SIZED EXTENTS - AUTOALLOCATE
AUTOALLOCATE
Means that the extent sizes are managed by Oracle. It will choose the optimal next size for the extents,
starting with 64KB. As the segments grow and more extents are needed, Oracle starts allocating larger
and larger sizes, moving to 1MB, 8MB, and ultimately to 64MB extents. We can make the initial extent size
greater than 64KB; Oracle will then allocate extents of at least that amount of space.
UNIFORM
Creates the extents the same size by specifying the size when creating the tablespace, i.e. UNIFORM
specifies that the tablespace is managed with uniform extents of SIZE bytes (use K or M to specify the
extent size).
In 10g, if EXTENT MANAGEMENT DICTIONARY is not specified, the tablespace will automatically be created as
LOCALLY MANAGED.
The SYSTEM tablespace should be DICTIONARY MANAGED; otherwise you cannot create dictionary managed
tablespaces.
Tablespace     OFFLINE   RENAME   DROP
-----------    -------   ------   ----
SYSTEM         NO        NO       NO
SYSAUX         YES       NO       NO
TEMPORARY      NO        YES      NO
UNDO           NO        YES      YES
PROPERTY_VALUE - shows whether SMALLFILE or BIGFILE is already set as the default tablespace type.
To find the DEFAULT_PERMANENT_TABLESPACE, query DATABASE_PROPERTIES, as sketched below.
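A minimal sketch of both lookups against the DATABASE_PROPERTIES view:
SELECT property_name, property_value
FROM database_properties
WHERE property_name IN ('DEFAULT_TBS_TYPE', 'DEFAULT_PERMANENT_TABLESPACE');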
PERMANENT TABLESPACE
Permanent tablespaces can be either smallfile tablespaces or bigfile tablespaces. A smallfile tablespace can be
made up of a number of data files. A bigfile tablespace will only be made up of one data file, and this file can get
extremely large. We cannot add a datafile to a bigfile tablespace.
A bigfile tablespace with 8K blocks can contain a 32 terabyte datafile. A bigfile tablespace with 32K
blocks can contain a 128 terabyte datafile. The maximum number of datafiles in an Oracle Database is
limited (usually to 64K files). We can specify SIZE in kilobytes (K), megabytes (M), gigabytes (G), or
terabytes (T).
Bigfile tablespaces are supported only for locally managed tablespaces with automatic segment space
management.
ORA-12905: default temporary tablespace cannot be brought OFFLINE.
The DEFAULT TEMPORARY TABLESPACE cannot be taken offline, and it cannot be dropped until another one is
created. We cannot change a default temporary tablespace into a permanent tablespace. Oracle 10g introduced a
new feature that creates the temp file automatically when we restart the database.
If we take a temporary tablespace offline, all of its associated temp files go to OFFLINE status. Even if the
tablespace is offline, a temp file can be added under this tablespace and it defaults to ONLINE status. Even if a
tablespace is offline, it can be set as the default temporary tablespace.
UNDO Tablespace
The Undo tablespace is used for automatic undo management. Note the required use of
the UNDO clause within the CREATE command shown in the figure here.
More than one UNDO tablespace can exist, but only one can be active at a time.
WHAT IS UNDO?
The word UNDO means to reverse or erase a change. In the Oracle world, UNDO allows a transaction to be
reversed so that the database looks as it did before the transaction started. It also provides a read consistent
image of data, i.e. if, during a transaction, one session is changing data while at the same time another
session wants to read that data, then UNDO will provide the state of the data before the transaction
started.
USE OF UNDO
There are a few main usages of UNDO in an Oracle database:
* UNDO is used to reverse an uncommitted transaction when a ROLLBACK command is issued.
* It is used to provide a read consistent image of a record.
* It is used during database recovery to roll back any uncommitted transactions that the redo logs applied to the datafiles.
* Flashback Query also uses UNDO to get the image of data back in time.
UNDO MANAGEMENT
Oracle needs an UNDO tablespace to create undo segments. If an undo tablespace is not created, Oracle will
use the SYSTEM tablespace for undo, which is not recommended. To create an UNDO tablespace you have to use
the UNDO keyword in the CREATE TABLESPACE command. The most common configuration of UNDO_MANAGEMENT is
AUTO (the default is MANUAL in 8i and 9i).
will not generate any redo entry. Hence IMU improves undo header contention and undo block
contention by doing UNDO management in memory, in structures called IMU nodes. Rather than writing a change to
the undo buffer, undo is written to an IMU node. On the other hand, these memory structures require latches to
maintain serial execution, so make sure that enough IMU latches are created by adjusting the PROCESSES
parameter. Oracle uses the "In memory undo latch" to access IMU structures in the shared pool. So if you have
high waits on this latch, then you can increase the number of latches by increasing the PROCESSES parameter,
or switch off in-memory undo by setting in_memory_undo to false. (A related hidden parameter enables Oracle's
own SQL to use IMU; its default is FALSE.)
_db_writer_flush_imu - Allows Oracle the freedom to artificially age a transaction for increased automatic
cache management. Default is TRUE.
There is no parameter that allows us to change the memory allocation for IMU nodes, but changing the
shared_pool_size parameter can help adjust the IMU memory allocation.
To find out how much memory is allocated to IMU, run a query such as the sketch below (this was run on a UAT database).
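A minimal sketch, assuming the IMU pool appears in V$SGASTAT under the name 'KTI-UNDO' (that pool name is an assumption from common practice, not from these notes):
SELECT pool, name, bytes
FROM v$sgastat
WHERE name = 'KTI-UNDO';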
There are a few reasons that you might have contention with IMU:
* Fewer latches than actually required.
* Not enough memory allocated for IMU nodes. Check the v$latch_children view.
* Not enough CPU cores available, i.e. your system is already suffering from high CPU utilisation.
A few suggestions to try in these cases:
* Increase the PROCESSES parameter so that you can increase the number of latches. Be careful
and understand the other impacts of increasing the PROCESSES parameter.
* Increase the shared_pool_size. Again, also understand the impact of increasing shared_pool_size on
overall database performance.
UNDO provides consistent reads for other users. Oracle uses snapshot isolation for transactions by
default; the undo tablespace stores the uncommitted data, so other users can still read a consistent image of it.
undo_management: It is recommended to set this to AUTO and let Oracle manage the space.
undo_retention: The number is in seconds. It controls how long Oracle keeps the data in the undo
extents after it is committed. A higher number helps avoid the "snapshot too old" error and lets
flashback queries see older data. However, the higher it is set, the more space is used.
undo_tablespace: defines the undo tablespace name.
We can add more data files to the undo tablespace, but there is only one active undo tablespace per database.
Change the undo tablespace
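A minimal sketch of switching to a new undo tablespace (the tablespace name myundo1 comes from the example below; the datafile path and size are assumptions):
CREATE UNDO TABLESPACE myundo1
DATAFILE '/u02/oradata/DBORCL/myundo1_01.dbf' SIZE 100M;

ALTER SYSTEM SET UNDO_TABLESPACE = myundo1;
-- the previous undo tablespace can be dropped once no active transactions are using it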
Since I specified the path for the datafile, the file is not named and managed by OMF; hence DROP
TABLESPACE would not remove the file. We have to remove it manually using an OS command such as rm.
Alternatively, we can use the syntax below:
alter tablespace myundo1 add datafile size 10M;
ORA-30013: undo tablespace is currently in use - raised if you try to drop or take offline an undo tablespace while active transactions are still using it.
COLUMNS                    VALUES
-------------------------  --------------------------
CONTENTS                   PERMANENT, UNDO, TEMPORARY
STATUS                     ONLINE, OFFLINE, READ ONLY
EXTENT_MANAGEMENT          DICTIONARY, LOCAL
ALLOCATION_TYPE            SYSTEM, UNIFORM, USER
SEGMENT_SPACE_MANAGEMENT   MANUAL, AUTO
TEMPORARY Tablespace
A TEMPORARY tablespace is used to manage space for sort operations. Sort operations generate
segments, sometimes large segments or lots of them depending on the sort required to satisfy the
specification in a SELECT statement's WHERE clause.
Sort operations are also generated by SELECT statements that join rows from within tables and between
tables.
Note the use of the TEMPFILE instead of a DATAFILE specification for a temporary tablespace in the
figure shown below.
You cannot take a default temporary tablespace offline. Taking a tablespace offline is done only for system
maintenance or to restrict access to a tablespace temporarily; neither of these activities applies to
default temporary tablespaces.
A single user can have more than one temporary tablespace in use by assigning the temporary
tablespace group as the default to the user instead of a single temporary tablespace.
Example: Suppose two temporary tablespaces named TEMP01 and TEMP02 have been
created. This code assigns the tablespaces to a group named TEMPGRP.
SQL> ALTER TABLESPACE temp01 TABLESPACE GROUP tempgrp;
Tablespace altered.
SQL> ALTER TABLESPACE temp02 TABLESPACE GROUP tempgrp;
Tablespace altered.
Example continued: This code changes the database's default temporary tablespace
to TEMPGRP (see the sketch below). You use the same command that would be used to assign a temporary
tablespace as the default, because temporary tablespace groups are treated logically the same as
an individual temporary tablespace.
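A sketch of the command being described:
SQL> ALTER DATABASE DEFAULT TEMPORARY TABLESPACE tempgrp;
Database altered.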
To drop a tablespace group, first drop all of its members. Drop a member by assigning the
temporary tablespace to a group with an empty string (effectively removing it from the group), as sketched below.
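A sketch of removing TEMP02 from the group:
SQL> ALTER TABLESPACE temp02 TABLESPACE GROUP '';
Tablespace altered.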
To assign a temporary tablespace group to a user, the CREATE USER SQL command is the
same as for an individual tablespace. In this example (sketched below) user350 is assigned the temporary
tablespace group TEMPGRP.
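A minimal sketch (the password shown is a placeholder assumption):
CREATE USER user350 IDENTIFIED BY secret_password
TEMPORARY TABLESPACE tempgrp;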
Tablespace groups allow users to use more than one tablespace to store temporary segments. It
contains only temporary tablespace. It is created implicitly when the first temporary tablespace is
assigned to it, and is deleted when the last temporary tablespace is removed from the group.
Benefits:
-It allows the user to use multiple temporary tablespaces in different sessions at the same time.
-It allows a single SQL operation to use multiple temporary tablespaces for sorting.
USERS, DATA and INDEXES Tablespaces
Most Oracle databases will have a USERS permanent tablespace.
This tablespace is used to store objects created by individual users of the database.
At SIUE we use the USERS tablespace as a storage location for tables, indexes, views, and
other objects created by students.
A DATA tablespace is also permanent and is used to store application data tables such as
ORDER ENTRY or INVENTORY MANAGEMENT applications.
For large applications, it is often a practice to create a special DATA tablespace to store data
for the application. In this case the tablespace may be named whatever name is appropriate to
describe the objects stored in the tablespace accurately.
Oracle databases having a DATA (or more than one DATA) tablespace will also have an accompanying
INDEXES tablespace.
The purpose of separating tables from their associated indexes is to improve I/O efficiency.
The DATA and INDEXES tablespaces will typically be placed on different disk drives thereby
providing an I/O path for each so that as tables are updated, the indexes can also be updated
simultaneously.
Bigfile Tablespaces
A Bigfile tablespace is best used with a server that uses a RAID storage device with disk striping. A
single datafile is allocated, and it can be up to 8EB (exabytes; an exabyte is a million terabytes) in size with up
to 4G blocks.
Normal tablespaces are referred to as Smallfile tablespaces.
Why are Bigfile tablespaces important?
The maximum number of datafiles in an Oracle database is limited (usually to 64K files).
Think big here; think about a database for the Internal Revenue Service.
o A Bigfile tablespace with 8K blocks can contain a 32 terabyte datafile.
o A Bigfile tablespace with 32K blocks can contain a 128 terabyte datafile.
o These sizes enhance the storage capacity of an Oracle database.
o These sizes can also reduce the number of datafiles to be managed.
Bigfile tablespaces can only be locally managed with automatic segment space management
except for locally managed undo tablespaces, temporary tablespaces, and the SYSTEM
tablespace.
If a Bigfile tablespace is used for automatic undo or temporary segments, the segment space
management must be set to MANUAL.
Bigfile tablespaces save space in the SGA and control file because fewer datafiles need to be
tracked.
ALTER TABLESPACE commands on a Bigfile tablespace do not reference a datafile because
only one datafile is associated with each Bigfile tablespace.
Example this example creates a Bigfile tablespace named Graph01 (to store data that is graphical in
nature and that consumes a lot of space). Note use of the BIGFILE keyword.
CREATE BIGFILE TABLESPACE graph01
DATAFILE '/u03/student/dbockstd/oradata/USER350graph01.dbf' SIZE 10g;
Example continued: This resizes the Bigfile tablespace to increase the capacity from 10
gigabytes to 40 gigabytes (first command in the sketch below).
Example continued: This sets the AUTOEXTEND option on to enable the tablespace to extend
in size 10 gigabytes at a time (second command in the sketch below).
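A sketch of both commands (tablespace-level RESIZE and AUTOEXTEND are valid here because a Bigfile tablespace has a single datafile):
ALTER TABLESPACE graph01 RESIZE 40G;

ALTER TABLESPACE graph01 AUTOEXTEND ON NEXT 10G;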
The keyword DEFAULT is used to specify compression when followed by the compression type.
You can override the type of compression used when creating a table in the tablespace.
Compression has these advantages:
Compression saves disk space, reduces memory use in the database buffer cache, and can
significantly speed query execution during reads.
Compression has a cost in CPU overhead for data loading and DML. However, this cost might
be offset by reduced I/O requirements.
This example creates a compressed tablespace named COMP_DATA. Here the Compress for
OLTP clause specifies the type of compression. You can study the other types of compression on your
own from your readings.
CREATE TABLESPACE comp_data
DATAFILE '/u02/oradata/DBORCL/DBORCLcomp_data.dbf' SIZE 50M
DEFAULT COMPRESS FOR OLTP
EXTENT MANAGEMENT LOCAL
SEGMENT SPACE MANAGEMENT AUTO;
Tablespace created.
Encrypted Tablespaces
Only permanent tablespaces can be encrypted.
Purpose is to protect sensitive data from unauthorized access through the operating system file
system.
Partitioned tables/indexes can have both encrypted and non-encrypted segments in different
tablespaces.
The database must have the COMPATIBLE parameter set to 11.1.0 or higher.
Several encryption algorithms can be specified with the ENCRYPTION USING clause; if none is specified, a default algorithm (AES128) is used.
This example creates an encrypted tablespace named SECURE_DATA that uses 256-bit keys.
CREATE TABLESPACE secure_data
DATAFILE '/u02/oradata/DBORCL/DBORCLsecure_data.dbf' SIZE 50M
ENCRYPTION USING 'AES256'
EXTENT MANAGEMENT LOCAL
DEFAULT STORAGE(ENCRYPT);
Tablespace created.
You cannot encrypt an existing tablespace with the ALTER TABLESPACE statement. You would need to
export the data from an unencrypted tablespace and then import it into an encrypted tablespace.
Read Only Tablespaces
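A sketch of placing a tablespace in read-only mode and returning it to read-write (the tablespace name is an assumption):
ALTER TABLESPACE application_data READ ONLY;

ALTER TABLESPACE application_data READ WRITE;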
Offline tablespace backup - a tablespace can be backed up while online, but taking it offline first makes the backup
faster.
If DBWn fails to write to a datafile after several attempts, Oracle will automatically take the
associated tablespace offline; the DBA will then recover the datafile.
Tablespace Sizing
Normally over time tablespaces need to have additional space allocated. This can be accomplished by
setting the AUTOEXTEND option to enable a tablespace to increase automatically in size.
This can be dangerous if a runaway process or application generates data and consumes all
available storage space.
An advantage is that applications will not ABEND because a tablespace runs out of storage
capacity.
This can be accomplished when the tablespace is initially created or by using the ALTER
TABLESPACE command at a later time.
CREATE TABLESPACE application_data
DATAFILE '/u01/student/dbockstd/oradata/USER350data01.dbf' SIZE 200M
AUTOEXTEND ON NEXT 48K MAXSIZE 500M;
This query uses the DBA_DATA_FILES view to determine if AUTOEXTEND is enabled for selected
tablespaces in the SIUE DBORCL database.
SELECT tablespace_name, autoextensible
FROM dba_data_files;
TABLESPACE_NAME                AUT
------------------------------ ---
SYSTEM                         NO
SYSAUX                         NO
UNDOTBS1                       YES
USERS                          NO
ALTER DATABASE
DATAFILE '/u01/student/dbockstd/oradata/USER350data01.dbf'
AUTOEXTEND ON MAXSIZE 600M;
This command looks similar to the above command, but this one resizes a datafile while the above
command sets the maxsize of the datafile.
ALTER DATABASE
DATAFILE '/u01/student/dbockstd/oradata/USER350data01.dbf'
RESIZE 600M;
Dropping Tablespaces
Occasionally tablespaces are dropped due to database reorganization. A tablespace that contains data
cannot be dropped unless the INCLUDING CONTENTS clause is added to the DROP command. Since
tablespaces will almost always contain data, this clause is almost always used.
A DBA cannot drop the SYSTEM tablespace or any tablespace with active segments. Normally you should
take a tablespace offline to ensure no active transactions are being processed.
An example command set that drops the compressed tablespace COMP_DATA created earlier is:
ALTER TABLESPACE comp_data OFFLINE;
DROP TABLESPACE comp_data
INCLUDING CONTENTS AND DATAFILES
CASCADE CONSTRAINTS;
The AND DATAFILES clause causes the datafiles to also be deleted. Otherwise, the tablespace is
removed from the database as a logical unit, and the datafiles must be deleted with operating system
commands.
Nonstandard Block Sizes
A block size is nonstandard if it differs from the size specified by the DB_BLOCK_SIZE initialization parameter.
In order for this to work, you must have already set DB_CACHE_SIZE and at least
one DB_nK_CACHE_SIZE initialization parameter values to correspond to the nonstandard
block size to be used.
Note that the DB_nK_CACHE_SIZE parameter corresponding to the standard block size
cannot be used; it would be invalid. Instead, use the DB_CACHE_SIZE parameter for the
standard block size.
Example: these parameters specify a standard block size of 8K with a cache for standard block size
buffers of 12M. The 2K and 16K caches will each be configured with 8M of cache buffers.
DB_BLOCK_SIZE=8192
DB_CACHE_SIZE=12M
DB_2K_CACHE_SIZE=8M
DB_16K_CACHE_SIZE=8M
Example: this creates a tablespace with a blocksize of 2K (assume the standard block size for the
database was 8K).
CREATE TABLESPACE inventory
DATAFILE '/u01/student/dbockstd/oradata/USER350data01.dbf'
SIZE 50M
EXTENT MANAGEMENT LOCAL UNIFORM SIZE 128K
BLOCKSIZE 2K;
Managing Tablespaces with Oracle Managed Files
As you learned earlier, when you use an OMF approach, the DB_CREATE_FILE_DEST parameter in the
parameter file specifies that datafiles are to be created and defines their location. The DATAFILE clause
to name files is not used because filenames are automatically generated by the Oracle Server, for
example, ora_tbs1_2xfh990x.dbf.
You can also use the ALTER SYSTEM command to dynamically set this parameter in the SPFILE
parameter file.
ALTER SYSTEM SET
DB_CREATE_FILE_DEST = '/u02/student/dbockstd/oradata';
Additional tablespaces are specified with the CREATE TABLESPACE command shown here that specifies
not the datafile name, but the datafile size. You can also add datafiles with the ALTER
TABLESPACE command.
CREATE TABLESPACE application_data DATAFILE SIZE 100M;
ALTER TABLESPACE application_data ADD DATAFILE;
Setting the DB_CREATE_ONLINE_LOG_DEST_n parameter prevents log files and control files from
being located with datafiles; this reduces I/O contention.
When OMF tablespaces are dropped, their associated datafiles are also deleted at the operating system
level.
Let's add a datafile to the USERS tablespace and then try to add the same datafile to the UNDOTBS
tablespace, to demonstrate that one datafile can be associated with ONLY one tablespace (see the sketch
below). Trying to add a datafile that already belongs to one tablespace to another tablespace fails with
ORA-01537 - ... file already part of database.
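A minimal sketch of that demonstration (datafile path, sizes, and the undo tablespace name are assumptions, not values from the MyDB database):

ALTER TABLESPACE users ADD DATAFILE '/u01/oradata/MyDB/users02.dbf' SIZE 50M;
ALTER TABLESPACE undotbs ADD DATAFILE '/u01/oradata/MyDB/users02.dbf' SIZE 50M REUSE;
-- The second statement fails:
-- ORA-01537: cannot add file '/u01/oradata/MyDB/users02.dbf' - file already part of database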
Let's create a new tablespace in the MyDB database created on my virtual Linux server.
To create or alter a tablespace, the user must have the CREATE TABLESPACE or ALTER TABLESPACE system
privilege. (To see the system privileges granted to a user, query the DBA_SYS_PRIVS view; since I
connected to the database as SYS, the user here is SYS.)
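For example, a query along these lines (illustrative; adjust the grantee as needed) shows whether a user holds those privileges:

SELECT grantee, privilege
FROM dba_sys_privs
WHERE grantee = 'SYS'
AND privilege LIKE '%TABLESPACE%';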
Bigfile Tablespaces
- If Database is created by mentioning Bigfile as default for TBs creation, then CREATE TABLESPACE..
statement creates the tablespace as Bigfile Tablespace
- Bigfile tablespaces are by default EXTENT MANAGEMENT LOCAL and SEGMENT SPACE
MANAGEMENT AUTO.
- If you specify EXTENT MANAGEMENT DICTIONARY and SEGMENT SPACE MANAGEMENT MANUAL,
then the TBs creation will error out
- A bigfile tablespace with 8K blocks can contain a 32 terabyte datafile. A bigfile tablespace
with 32K blocks can contain a 128 terabyte datafile. The maximum number of datafiles in an Oracle
Database is limited (usually to 64K files). Therefore, bigfile tablespaces can significantly enhance the
storage capacity of an Oracle Database.
- The default tablespace type with which the database was created can be found by querying the
DATABASE_PROPERTIES view (see the query after this list).
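A sketch of that query, plus an example of creating a bigfile tablespace (the tablespace name and datafile path are assumptions):

SELECT property_name, property_value
FROM database_properties
WHERE property_name = 'DEFAULT_TBS_TYPE';

CREATE BIGFILE TABLESPACE big_data
DATAFILE '/u02/oradata/MyDB/big_data01.dbf' SIZE 1G;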
Encrypted Tablespaces
- TBs encryption is applicable to Permanent TBs ONLY
- Any user who is granted privileges on objects stored in an encrypted tablespace can access those
objects without providing any kind of additional password or key
- Data from an encrypted tablespace is automatically encrypted when written to the undo tablespace,
to the redo logs, and to any temporary tablespace. There is no need to explicitly create encrypted undo
or temporary tablespaces, and in fact, you cannot specify encryption for those tablespace types.
- Transparent data encryption supports industry-standard encryption algorithms, including the
following Advanced Encryption Standard (AES) and Triple Data Encryption Standard (3DES) algorithms:
3DES168
AES128(default when USING keyword is not mentioned)
AES192
AES256
- You cannot encrypt an existing tablespace with an ALTER TABLESPACE statement. However, you can
use Data Pump or SQL statements such as CREATE TABLE AS SELECT or ALTER TABLE MOVE to move
existing table data into an encrypted tablespace.
- The encryption algorithm implemented for a tablespace can be determined by querying
V$ENCRYPTED_TABLESPACES (see the query after this list).
- Tablespace encryption uses the transparent data encryption feature of Oracle Database, which
requires that you create an Oracle wallet to store the master encryption key for the database. The wallet
must be open before you can create the encrypted tablespace and before you can store or retrieve
encrypted data.
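A sketch of the query mentioned above, joined to V$TABLESPACE to show tablespace names:

SELECT t.name, e.encryptionalg, e.encryptedts
FROM v$encrypted_tablespaces e, v$tablespace t
WHERE e.ts# = t.ts#;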
If you try to create an encrypted tablespace without first creating and opening the Oracle wallet,
ORA-28365: wallet is not open is thrown.
To correct the error, create a directory named wallet under $ORACLE_HOME/admin/$ORACLE_SID/ and
reference it in the sqlnet.ora file (see the sketch below).
Then shut down and restart the instance, open the Oracle wallet with the ALTER SYSTEM command, and
create the encrypted tablespace.
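A sketch of the sqlnet.ora entry and the wallet commands (the wallet directory and password are placeholders, not values from the original notes):

ENCRYPTION_WALLET_LOCATION =
  (SOURCE = (METHOD = FILE)
    (METHOD_DATA = (DIRECTORY = /u01/app/oracle/admin/MyDB/wallet)))

-- After restarting the instance:
ALTER SYSTEM SET ENCRYPTION KEY IDENTIFIED BY "wallet_password";      -- creates the wallet and master key
ALTER SYSTEM SET ENCRYPTION WALLET OPEN IDENTIFIED BY "wallet_password";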
Undo Purpose
Undo records are used to roll back transactions, recover the database, and provide read consistency.
If all statements succeed, SQL*Plus or the application program issues a COMMIT to make the
database changes permanent.
The ROLLBACK command is used to cancel (not commit) a transaction that is in progress.
SET TRANSACTION: transaction boundaries can be defined with the SET TRANSACTION command.
There is no performance benefit from setting transaction boundaries, but doing so enables
defining a savepoint.
DML statements issued since the last savepoint are rolled back with the ROLLBACK TO
SAVEPOINT savepoint_name command.
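An illustrative transaction showing a savepoint (the table and column names are hypothetical):

UPDATE employees SET salary = salary * 1.05 WHERE department_id = 10;
SAVEPOINT before_bonus;
UPDATE employees SET bonus = 500 WHERE department_id = 10;
ROLLBACK TO SAVEPOINT before_bonus;   -- undoes only the second UPDATE
COMMIT;                               -- makes the first UPDATE permanent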
Undo vs. Rollback
In earlier versions of Oracle, the term rollback was used instead of undo, and instead of
managing undo segments, the DBA was responsible for managing rollback segments.
Rollback segments were one of the primary areas where problems often arose; thus, the
conversion to automatic undo management is a significant improvement.
You will see parts of the data dictionary and certain commands still use the term Rollback for
backward compatibility.
Undo Segments
There are two methods for managing undo data:
(1) Automatic undo management: this is the preferred method.
This is the type of undo management used when you create an UNDO tablespace and
specify use of automatic undo management.
Automatic undo management is the default for a new Oracle 11g database.
(2) Manual undo management: this is the only method available for Oracle 8i and earlier
versions of Oracle and is the type of management that involves the use of rollback segments.
Undo data: old data values from tables are saved as undo data by writing a copy of the image from a
data block on disk to an undo segment. This also stores the location of the data as it existed before
modification.
Undo segment header: this stores a transaction table where information about the current transactions
using this particular segment is kept.
A serial transaction uses only one undo segment to store all of its undo data.
Transaction Rollback: Old images of modified columns are saved as undo data to undo segments.
Redo Logs bring both committed and uncommitted transactions forward to the point of
instance failure.
Undo data is used to undo any transactions that were not committed at the point of failure.
Other users should be shielded from modifications to the database that have not yet been
committed.
Also, if a system user begins a program statement execution, the statement should not see
any changes that are committed after the transaction begins.
Old values stored in undo segments are provided to system users accessing table rows that are
in the process of being changed by another system user, in order to provide a read-consistent image of the data.
In the figure shown below, an UPDATE command has a lock on a data block from
the EMPLOYEE table and an undo image of the block is written to the undo segment. The update
transaction has not yet committed, so any concurrent SELECT statement by a different system user will
result in data being displayed from the undo segment, not from the EMPLOYEE table. This read-consistent image is constructed by the Oracle Server.
SYSTEM undo segments are used for modifications to objects stored in the SYSTEM
tablespace.
This type of Undo Segment works identically in both manual and automatic mode.
Databases with more than one tablespace must have at least one non-SYSTEM undo segment for
manual mode or a separate Undo tablespace for automatic mode.
Manual Mode: A non-SYSTEM undo segment is created by a DBA and is used for changes to objects
in a non-SYSTEM tablespace. There are two types of non-SYSTEM undo segments: (1) Private and
(2) Public.
Private Undo Segments: These are brought online by an instance if they are listed in the parameter
file.
Prior to Oracle 9i, undo segments were named rollback segments and the command has
not changed.
Public Undo Segments: These are used with Oracle Real Application Clusters as a pool of undo segments
available to any of the Real Application Cluster instances.
You can learn more about public undo segments by studying the Oracle Real Application
Clusters and Administration manual.
Deferred Undo Segments: These are maintained by the Oracle Server so a DBA does not have to
maintain them.
They can be created when a tablespace is brought offline (immediate, temporary, or recovery).
They are used for undo transactions when the tablespace is brought back online.
They are dropped by the Oracle Server automatically when they are no longer needed.
Automatic Undo Management
The objective is a "set it and forget it" approach to Undo Management.
Oracle allows a DBA to allocate one active Undo tablespace per Oracle Instance.
The Oracle Server automatically maintains undo data in the Undo tablespace.
With 11g, there is no need to set the UNDO_MANAGEMENT initialization parameter to AUTO
because AUTO is the default.
Oracle will automatically use the single Undo Tablespace when in AUTO mode.
If more than one Undo tablespace exists (so they can be switched if necessary, but only one
can be active), the UNDO_TABLESPACE parameter in the initialization file is used to specify the
name of the Undo tablespace to be used by Oracle Server when an Oracle Instance starts up.
If no Undo tablespace exists, Oracle will start up a database and use the SYSTEM tablespace
undo segment for undo.
An alert message will be written to the alert file to warn that no Undo tablespace is available.
If you use the UNDO_TABLESPACE parameter and the tablespace referenced does not exist,
the STARTUP command will fail.
Examples:
UNDO_MANAGEMENT=AUTO or UNDO_MANAGEMENT=MANUAL
UNDO_TABLESPACE=UNDO01
You can alter the system to change the active Undo tablespace that is in use as follows:
ALTER SYSTEM SET undo_tablespace = UNDO02;
Creating the Undo Tablespace: There are two methods of creating an undo tablespace manually.
1. Include the UNDO TABLESPACE clause in the CREATE DATABASE statement.
2. Issue a CREATE UNDO TABLESPACE statement after the database has been created.
In the example referred to here, the Undo tablespace is named UNDO01.
If the Undo tablespace cannot be created, the entire CREATE DATABASE command fails.
You can also create an Undo tablespace with the CREATE UNDO TABLESPACE command.
This is the same as the normal CREATE TABLESPACE command but with the UNDO keyword
added.
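A sketch of such a statement (the datafile path and size are assumptions):

CREATE UNDO TABLESPACE undo01
DATAFILE '/u02/oradata/DBORCL/DBORCLundo01.dbf' SIZE 100M
AUTOEXTEND ON;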
Use the ALTER SYSTEM command to switch between Undo tablespaces; remember, only one Undo
tablespace can be active at a time.
ALTER SYSTEM SET UNDO_TABLESPACE=undo03;
The switch operation fails if the new tablespace does not exist, is not an undo tablespace, or is already being used by another instance (in an Oracle RAC environment only).
The database is online while the switch operation is performed, and user transactions can be executed
while this command is being executed.
When the switch operation completes successfully, all transactions started after the switch
operation began are assigned to transaction tables in the new undo tablespace.
The switch operation does not wait for transactions in the old undo tablespace to commit.
If there are any pending transactions in the old undo tablespace, the old undo tablespace
enters into a PENDING OFFLINE mode (status).
In this mode, existing transactions can continue to execute, but undo records for new user
transactions cannot be stored in this undo tablespace.
The DROP TABLESPACE command can be used to drop an Undo tablespace that is no longer needed;
it cannot be the active undo tablespace.
DROP TABLESPACE undo02
INCLUDING CONTENTS AND DATAFILES;
Undo Retention
After a transaction is committed, undo data is no longer needed for rollback or transaction recovery
purposes. However, for consistent read purposes, long-running queries may require this old undo
information for producing older images of data blocks.
Several Oracle Flashback features can also depend upon the availability of older undo
information.
For these reasons, it is desirable to retain the old undo information for as long as possible.
Automatic undo management always uses a specified undo retention period.
This is the minimum amount of time that Oracle Database attempts to retain old undo
information before overwriting it.
Old (committed) undo information that is older than the current undo retention period is said
to be expired and its space is available to be overwritten by new transactions.
Old undo information with an age that is less than the current undo retention period is said to
be unexpired and is retained for consistent read and Oracle Flashback operations.
Oracle Database automatically tunes the undo retention period based on undo tablespace size and
system activity.
You can optionally specify a minimum undo retention period (in seconds) by setting
the UNDO_RETENTION initialization parameter.
You can set this parameter in the initialization file or you can dynamically alter it with
the ALTER SYSTEM command:
ALTER SYSTEM SET UNDO_RETENTION = 43200;
The above command will retain undo segment data for 720 minutes (12 hours); the default
value is 900 seconds (15 minutes).
This sets the minimum undo retention period.
If the tablespace is too small to store Undo Segment data for 720 minutes, then the data is
not retained; instead, space is recovered by the Oracle Server to be allocated to new active
transactions.
Oracle 11g automatically tunes undo retention by collecting database use statistics whenever AUTOEXTEND is on.
The RETENTION GUARANTEE clause of the CREATE UNDO TABLESPACE statement can
guarantee retention of Undo data to support DML operations, but it may cause DML operations to fail if
the Undo tablespace is not large enough, because unexpired Undo data segments are not overwritten.
Query the RETENTION column of the DBA_TABLESPACES view to determine the setting for
the Undo tablespace; possible values are GUARANTEE, NOGUARANTEE, and NOT APPLY (for
tablespaces other than Undo).
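For example:

SELECT tablespace_name, retention
FROM dba_tablespaces;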
Sizing and Monitoring an Undo Tablespace
Three types of Undo data exist in an Undo tablespace:
Active (unexpired): these segments are needed for read consistency even after a transaction
commits.
Expired: these segments store undo data that has been committed, all queries for the
data are complete, and the undo retention period has been reached.
Unused: these segments have space that has never been used.
After the system stabilizes, if you decide to use a fixed-size Undo tablespace, then Oracle
recommends setting the Undo tablespace maximum size to about 10% more than the current
size.
The Undo Advisor software available in Oracle Enterprise Manager can be used to calculate
the amount of Undo retention disk space a database needs.
Undo Data Statistics
The V$UNDOSTAT view displays statistical data to show how well a database is performing.
Each row in the view represents statistics collected for a 10-minute interval.
You can use this to estimate the amount of undo storage space needed for the current
workload.
If workloads vary considerably throughout the day, then a DBA should conduct estimations
during peak workloads.
The column SSOLDERRCNT displays the number of queries that failed with a "Snapshot too
old" error.
SELECT TO_CHAR(end_time, 'yyyy-mm-dd hh24:mi') end_time, undoblks, ssolderrcnt
FROM v$undostat;
In order to size an Undo tablespace, a DBA needs three pieces of information. Two are obtained from the
initialization file: UNDO_RETENTION and DB_BLOCK_SIZE. The third piece of information is
obtained by querying the database: the number of undo blocks generated per second.
SELECT (SUM(undoblks))/SUM((end_time-begin_time) * 86400)
FROM v$undostat;
(SUM(UNDOBLKS))/SUM((END_TIME-BEGIN_TIME)*86400)
-------------------------------------------------
                                       .063924708
In the query above, the END_TIME and BEGIN_TIME columns are DATE data, and subtracting them gives a
result in days; converting days to seconds is done by multiplying by 86,400, the number of seconds in
a day. This value needs to be multiplied by the size of an undo block, which is the same size as the
database block defined by the DB_BLOCK_SIZE parameter.
The number of bytes of Undo tablespace storage needed is calculated by this query:
SELECT (UR * (UPS * DBS)) + (DBS * 24) AS "Bytes"
FROM (SELECT value AS UR
      FROM v$parameter
      WHERE name = 'undo_retention'),
     (SELECT (SUM(undoblks)/SUM(((end_time - begin_time) * 86400))) AS UPS
      FROM v$undostat),
     (SELECT value AS DBS
      FROM v$parameter
      WHERE name = 'db_block_size');
Undo Quotas
A DBA can use the Database Resource Manager to limit the amount of undo space consumed by a group
of users (a resource consumer group). This may become necessary when long transactions or poorly
written transactions consume limited database resources.
If the database has no resource bottlenecks, then the allocation of quotas can be ignored.
Sometimes undo data space is a limited resource. A DBA can limit the amount of undo data space used
by a group by setting the UNDO_POOL parameter, which defaults to unlimited.
If the group exceeds the quota, then new transactions are not processed until old ones
complete.
The group members will receive the ORA-30027: Undo quota violation - failed to get %s
(bytes) error message.
Resource plans are covered in more detail in a later set of notes.
Undo Segment Information
The following views provide information about undo segments:
DBA_ROLLBACK_SEGS
V$ROLLNAME -- the dynamic performance views only show data for online segments.
V$ROLLSTAT
V$UNDOSTAT
V$SESSION
V$TRANSACTION
This query lists information about undo segments in the SIUE DBORCL database. Note the two
segments in the SYSTEM tablespace and the remaining segments in the UNDO tablespace.
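A query along these lines produces that listing (shown as a sketch; the SQL*Plus COLUMN formatting commands from the original are omitted):

SELECT owner, segment_name, tablespace_name, status
FROM dba_rollback_segs;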
The owner column above specifies the type of undo segment. SYS means a private undo segment.
This query is a join of the V$ROLLSTAT and V$ROLLNAME views to display statistics on undo
segments currently in use by the Oracle Instance. The usn column is a sequence number.
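A sketch of such a join (the columns selected here are illustrative):

SELECT n.usn, n.name, s.extents, s.rssize, s.xacts, s.status
FROM v$rollname n, v$rollstat s
WHERE n.usn = s.usn;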
This query checks the use of an undo segment by any currently active transaction by joining
the V$TRANSACTION and V$SESSION views.
SELECT s.username, t.xidusn, t.ubafil, t.ubablk, t.used_ublk
FROM v$session s, v$transaction t
WHERE s.saddr = t.ses_addr;
Flashback Features
Flashback features allow DBAs and users to access database information from a previous point in time.
Do not run discrete transactions while sensitive queries or transactions are running, unless you
are confident that the data sets required are mutually exclusive.
Schedule long running queries and transactions out of hours, so that the consistent gets will not
need to rollback changes made since the snapshot SCN. This also reduces the work done by the
server, and thus improves performance.
Code long running processes as a series of restartable steps.
Shrink all rollback segments back to their optimal size manually before running a sensitive query
or transaction to reduce risk of consistent get rollback failure due to extent deallocation.
Use a large optimal value on all rollback segments, to delay extent reuse.
Don't fetch across commits. That is, don't fetch on a cursor that was opened prior to the last
commit, particularly if the data queried by the cursor is being changed in the current session.
Use a large database block size to maximize the number of slots in the rollback segment
transaction tables, and thus delay slot reuse.
Commit less often in tasks that will run at the same time as the sensitive query, particularly in
PL/SQL procedures, to reduce transaction slot reuse.
If necessary, add extra rollback segments (undo logs) to make more transaction slots available.
What Is Undo?
Oracle Database creates and manages information that is used to roll back, or undo, changes to the
database. Such information consists of records of the actions of transactions, primarily before they are
committed. These records are collectively referred to as undo.
Undo records are used to:
When a ROLLBACK statement is issued, undo records are used to undo changes that were made to the
database by the uncommitted transaction. During database recovery, undo records are used to undo any
uncommitted changes applied from the redo log to the datafiles. Undo records provide read consistency
by maintaining the before image of the data for users who are accessing the data at the same time that
another user is changing it.
Overview of Automatic Undo Management
Oracle provides a fully automated mechanism, referred to as automatic undo management, for managing
undo information and space. With automatic undo management, the database manages undo segments
in an undo tablespace. Beginning with Release 11g, automatic undo management is the default mode for
a newly installed database. An auto-extending undo tablespace named UNDOTBS1 is automatically
created when you create the database with Database Configuration Assistant (DBCA).
When the instance starts, the database automatically selects the first available undo tablespace. If no
undo tablespace is available, the instance starts without an undo tablespace and stores undo records in
the SYSTEM tablespace. This is not recommended, and an alert message is written to the alert log file to
warn that the system is running without an undo tablespace.
If the database contains multiple undo tablespaces, you can optionally specify at startup that you want to
use a specific undo tablespace. This is done by setting the UNDO_TABLESPACE initialization parameter,
as shown in this example:
UNDO_TABLESPACE = undotbs_01
Note:
Space management for rollback segments is complex. Oracle strongly recommends leaving the database
in automatic undo management mode.
The following is a summary of the initialization parameters for undo management:
Initialization
Parameter
UNDO_MANAGEMENT
UNDO_TABLESPACE
Description
If AUTO or null, enables automatic undo management. If MANUAL, sets manual undo
management mode. The default is AUTO.
Optional, and valid only in automatic undo management mode. Specifies the name
of an undo tablespace. Use only when the database has multiple undo tablespaces
and you want to direct the database instance to use a particular undo tablespace.
When automatic undo management is enabled, if the initialization parameter file contains parameters
relating to manual undo management, they are ignored.
Note:
Earlier releases of Oracle Database default to manual undo management mode. To change to automatic
undo management, you must first create an undo tablespace and then change
the UNDO_MANAGEMENT initialization parameter to AUTO. If your Oracle Database is release 9i or
later and you want to change to automatic undo management, see the Oracle Database Upgrade Guide
for instructions.
A null UNDO_MANAGEMENT initialization parameter defaults to automatic undo management mode in
Release 11g and later, but defaults to manual undo management mode in earlier releases. You must
therefore use caution when upgrading a previous release to Release 11g.
About the Undo Retention Period
After a transaction is committed, undo data is no longer needed for rollback or transaction recovery
purposes. However, for consistent read purposes, long-running queries may require this old undo
information for producing older images of data blocks. Furthermore, the success of several Oracle
Flashback features can also depend upon the availability of older undo information. For these reasons, it
is desirable to retain the old undo information for as long as possible.
When automatic undo management is enabled, there is always a current undo retention period, which
is the minimum amount of time that Oracle Database attempts to retain old undo information before
overwriting it. Old (committed) undo information that is older than the current undo retention period is
said to be expired and its space is available to be overwritten by new transactions. Old undo information
with an age that is less than the current undo retention period is said to be unexpired and is retained for
consistent read and Oracle Flashback operations.
Oracle Database automatically tunes the undo retention period based on undo tablespace size and
system activity. You can optionally specify a minimum undo retention period (in seconds) by setting
the UNDO_RETENTION initialization parameter. The exact impact of this parameter on undo retention is
as follows:
The UNDO_RETENTION parameter is ignored for a fixed size undo tablespace. The database
always tunes the undo retention period for the best possible retention, based on system activity
and undo tablespace size.
For an undo tablespace with the AUTOEXTEND option enabled, the database attempts to honor
the minimum retention period specified by UNDO_RETENTION. When space is low, instead of
overwriting unexpired undo information, the tablespace auto-extends. If the MAXSIZE clause is
specified for an auto-extending undo tablespace, when the maximum size is reached, the
database may begin to overwrite unexpired undo information. The UNDOTBS1 tablespace that is
automatically created by DBCA is auto-extending.
If the undo tablespace is configured with the AUTOEXTEND option, the database dynamically
tunes the undo retention period to be somewhat longer than the longest-running active query on
the system. However, this retention period may be insufficient to accommodate Oracle Flashback
operations. Oracle Flashback operations resulting in snapshot too old errors are the indicator
that you must intervene to ensure that sufficient undo data is retained to support these
operations. To better accommodate Oracle Flashback features, you can either set
the UNDO_RETENTION parameter to a value equal to the longest expected Oracle Flashback
operation, or you can change the undo tablespace to fixed size.
If the undo tablespace is fixed size, the database dynamically tunes the undo retention period for
the best possible retention for that tablespace size and the current system load. This best
possible retention time is typically significantly greater than the duration of the longest-running
active query.
If you decide to change the undo tablespace to fixed-size, you must choose a tablespace size
that is sufficiently large. If you choose an undo tablespace size that is too small, the following
two errors could occur:
DML could fail because there is not enough space to accommodate undo for new
transactions.
Long-running queries could fail with a snapshot too old error, which means that there
was insufficient undo data for read consistency.
Note:
Automatic tuning of undo retention is not supported for LOBs. This is because undo information for LOBs
is stored in the segment itself and not in the undo tablespace. For LOBs, the database attempts to honor
the minimum undo retention period specified by UNDO_RETENTION. However, if space becomes low,
unexpired LOB undo information may be overwritten.
Retention Guarantee
To guarantee the success of long-running queries or Oracle Flashback operations, you can enable
retention guarantee. If retention guarantee is enabled, the specified minimum undo retention is
guaranteed; the database never overwrites unexpired undo data even if it means that transactions fail
due to lack of space in the undo tablespace. If retention guarantee is not enabled, the database can
overwrite unexpired undo when space is low, thus lowering the undo retention for the system. This
option is disabled by default.
WARNING:
Enabling retention guarantee can cause multiple DML operations to fail. Use with caution.
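For example, retention guarantee can be enabled or disabled on an existing undo tablespace (tablespace name assumed):

ALTER TABLESPACE undotbs1 RETENTION GUARANTEE;
ALTER TABLESPACE undotbs1 RETENTION NOGUARANTEE;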
UNDO_RETENTION = 1800
To use the Undo Advisor, you first estimate these two values:
The length of your expected longest-running query.
The longest interval that you will require for Oracle Flashback operations.
For example, if you expect to run Oracle Flashback queries for up to 48 hours in the past, your
Oracle Flashback requirement is 48 hours.
You then take the maximum of these two values and use that value as input to the Undo Advisor.
Running the Undo Advisor does not alter the size of the undo tablespace. The advisor just returns a
recommendation. You must use ALTER DATABASE statements to change the tablespace datafiles to
fixed sizes.
The following example assumes that the undo tablespace has one auto-extending datafile
named undotbs.dbf. The example changes the tablespace to a fixed size of 300MB.
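A sketch of those statements, assuming a datafile path that is not given in the original:

ALTER DATABASE DATAFILE '/u01/oradata/DBORCL/undotbs.dbf' RESIZE 300M;
ALTER DATABASE DATAFILE '/u01/oradata/DBORCL/undotbs.dbf' AUTOEXTEND OFF;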
After you have created the advisor task, you can view the output and recommendations in the Automatic
Database Diagnostic Monitor in Enterprise Manager. This information is also available in
the DBA_ADVISOR_* data dictionary views (DBA_ADVISOR_TASKS, DBA_ADVISOR_OBJECTS,
DBA_ADVISOR_FINDINGS, DBA_ADVISOR_RECOMMENDATIONS, and so on).
Managing Undo Tablespaces
This section describes the various steps involved in undo tablespace management and contains the
following sections:
Adding a datafile
Renaming a datafile
Dropping an undo tablespace with DROP TABLESPACE...INCLUDING CONTENTS
A switch to a new undo tablespace fails if the tablespace is already being used by another instance (in a RAC environment only).
The database is online while the switch operation is performed, and user transactions can be executed
while this command is being executed. When the switch operation completes successfully, all transactions
started after the switch operation began are assigned to transaction tables in the new undo tablespace.
View           Description
V$UNDOSTAT     Contains statistics for monitoring and tuning undo space. Use this view to
               help estimate the amount of undo space required for the current workload.
               The database also uses this information to help tune undo usage in the
               system. This view is meaningful only in automatic undo management mode.
Controlfile Structure
Information about the database is stored in different sections of the control file. Each section is a set of
records about an aspect of the database. For example, one section in the control file tracks data files and
contains a set of records, one for each data file. Each section is stored in multiple logical control file
blocks. Records can span blocks within a section.
The control file contains the following types of records:
Circular reuse records
These records contain noncritical information that is eligible to be overwritten if needed. When all
available record slots are full, the database either expands the control file to make room for a new record
or overwrites the oldest record. Examples include records about:
LOG HISTORY
OFFLINE RANGE
ARCHIVED LOG
BACKUP SET
BACKUP PIECE
BACKUP DATAFILE
BACKUP REDOLOG
DATAFILE COPY
BACKUP CORRUPTION
COPY CORRUPTION
DELETED OBJECT
PROXY COPY
Noncircular reuse records
These records contain critical information that does not change often and cannot be overwritten.
Examples of information include tablespaces, data files, online redo log files, and redo threads. Oracle
Database never reuses these records unless the corresponding object is dropped from the tablespace.
Examples of non-circular controlfile sections (the ones that can only expand)
DATABASE (info)
CKPT PROGRESS (Checkpoint progress)
REDO THREAD, REDO LOG (Logfile)
DATAFILE (Database File)
FILENAME (Datafile Name)
TABLESPACE
A Control File is a small binary file that stores information needed to startup an Oracle database and to
operate the database.
The control file(s) are created at the same time the database is created, based on
the CONTROL_FILES parameter in the PFILE.
If all copies of the control files for a database are lost/destroyed, then database recovery must
be accomplished before the database can be opened.
An Oracle database reads only the first control file listed in the PFILE; however, it writes
continuously to all of the control files (where more than one exists).
You must never attempt to modify a control file; only the Oracle Server should modify this
file.
While control files are small, the size of the file is affected by the following CREATE
DATABASE or CREATE CONTROLFILE command parameters if they have large values.
o MAXLOGFILES
o MAXLOGMEMBERS
o MAXLOGHISTORY
o MAXDATAFILES
o MAXINSTANCES
Contents of a Control File
Control files record the following information:
Database name: recorded as specified by the initialization parameter DB_NAME or the name
used in the CREATE DATABASE statement.
Names and locations of datafiles and online redo log files: this information is updated if a
datafile or redo log is added to, renamed in, or dropped from the database.
If you are using an SPFILE, you can use the steps specified in the figure shown here. The difference is
you name the control file in the first step and create the copy in step 3.
You must create a new control file if all control files for the database have been permanently
damaged and you do not have a control file backup.
Note: You can change the database name and DBID (internal database identifier) using
the DBNEWID utility.
The CREATE CONTROLFILE statement can potentially damage specified datafiles and redo
log files.
It is only issued as a command in NOMOUNT stage.
Omitting a filename can cause loss of the data in that file, or loss of access to the entire
database.
If the database had forced logging enabled before creating the new control file, and you
want it to continue to be enabled, then you must specify the FORCE LOGGING clause in
the CREATE CONTROLFILE statement.
If you did not perform recovery, or you performed complete, closed database
recovery in step 8, open the database normally.
ALTER DATABASE OPEN;
If you specified RESETLOGS when creating the control file, use the ALTER
DATABASE statement, indicating RESETLOGS.
ALTER DATABASE OPEN RESETLOGS;
What if a Disk Drive Fails? Recovering a Control File
Use the following steps to recover from a disk drive failure when one of the database's control files
is located on the failed drive.
Copy a control file from one of the other disk drives to the new disk drive; here we assume
that u02 is the new disk drive and control02.ctl is the damaged file.
$ cp /u01/oracle/oradata/control01.ctl /u02/oracle/oradata/control02.ctl
Restart the instance. If the new media (disk drive) does not have the same disk drive name as
the damaged disk drive or if you are creating a new copy while awaiting a replacement disk
drive, then alter the CONTROL_FILES parameter in the PFILE prior to restarting the database.
If you are awaiting a new disk drive, you can alter the CONTROL_FILES parameter to remove
the name of the control file on the damaged disk drive; this enables you to restart the
database.
Backup Control Files and Create Additional Control Files
Oracle recommends backing up control files every time the physical database structure changes, including: adding, dropping, or renaming datafiles; adding or dropping a tablespace, or altering the read/write state of a tablespace; and adding or dropping redo log files or groups.
V$CONTROLFILE gives the names and status of control files for an Oracle Instance.
SHOW PARAMETER CONTROL_FILES command lists the name, status, and location of
control files.
V$BACKUP
V$DATAFILE,
V$TEMPFILE
V$TABLESPACE
V$ARCHIVE
V$LOG
V$LOGFILE
SQL> SHOW PARAMETER control

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
control_file_record_keep_time        integer     7
control_files                        string      /data/oracle/app/oracle/oradat
                                                 a/CLONEDB/control01.ctl, /data
                                                 /oracle/app/oracle/oradata/CLO
                                                 NEDB/control02.ctl
control_management_pack_access       string      DIAGNOSTIC+TUNING
SQL> shut immediate;
ORA-01013: user requested cancel of current operation
SQL> shut abort;
ORACLE instance shut down.
SQL> startup nomount;
ORACLE instance started.
Total System Global Area  601272320 bytes
Fixed Size                  2230712 bytes
Variable Size             276825672 bytes
Database Buffers          310378496 bytes
Redo Buffers               11837440 bytes
Checkpoint information
The control file must be available for writing by the Oracle Database server whenever the database is
open. Without the control file, the database cannot be mounted and recovery is difficult.
The control file of an Oracle Database is created at the same time as the database. By default, at least
one copy of the control file is created during database creation. On some operating systems the default is
to create multiple copies. You should create two or more copies of the control file during database
creation. You can also create control files later, if you lose control files or want to change particular
settings in the control files.
Guidelines for Control Files
This section describes guidelines you can use to manage the control files for a database, and contains the
following topics:
If you are not using Oracle-managed files, then the database creates a control file and uses a
default filename. The default name is operating system specific.
If you are using Oracle-managed files, then the initialization parameters you set to enable that
feature determine the name and location of the control files, as described in Chapter 15, "Using
Oracle-Managed Files".
If you are using Automatic Storage Management, you can place incomplete ASM filenames in
the DB_CREATE_FILE_DEST and DB_RECOVERY_FILE_DEST initialization parameters. ASM
then automatically creates control files in the appropriate places. See the sections "About ASM
Filenames" and "Creating a Database That Uses ASM" in Oracle Database Storage Administrator's
Guide for more information.
The database writes to all filenames listed for the initialization parameter CONTROL_FILES in
the database initialization parameter file.
The database reads only the first file listed in the CONTROL_FILES parameter during database
operation.
If any of the control files become unavailable during database operation, the instance becomes
inoperable and should be aborted.
Note:
Oracle strongly recommends that your database has a minimum of two control files and that they
are located on separate physical disks.
One way to multiplex control files is to store a control file copy on every disk drive that stores members
of redo log groups, if the redo log is multiplexed. By storing control files in these locations, you minimize
the risk that all control files and all groups of the redo log will be lost in a single disk failure.
The size of the control file changes between some releases of Oracle Database, as well as when the
number of files specified in the control file changes. Configuration parameters such
as MAXLOGFILES, MAXLOGMEMBERS, MAXLOGHISTORY, MAXDATAFILES,
and MAXINSTANCES affect control file size.
You can subsequently change the value of the CONTROL_FILES initialization parameter to add more
control files or to change the names or locations of existing control files.
1.
Shut down the database.
2.
Copy an existing control file to a new location, using operating system commands.
3.
Edit the CONTROL_FILES parameter in the database initialization parameter file to add the new
control file name, or to change the existing control filename.
4.
Restart the database.
You must create new control files in situations such as the following:
All control files for the database have been permanently damaged and you do not have a control
file backup.
Note:
You can change the database name and DBID (internal database identifier) using the DBNEWID
utility. See Oracle Database Utilities for information about using this utility.
The compatibility level is set to a value that is earlier than 10.2.0, and you must make a change
to an area of database configuration that relates to any of the following parameters from
the CREATE DATABASE or CREATE CONTROLFILE commands: MAXLOGFILES, MAXLOGMEMBERS,
MAXLOGHISTORY, and MAXINSTANCES. If compatibility is 10.2.0 or later, you do
not have to create new control files when you make such a change; the control files
automatically expand, if necessary, to accommodate the new configuration information.
For example, assume that when you created the database or recreated the control files, you
set MAXLOGFILES to 3. Suppose that now you want to add a fourth redo log file group to the
database with the ALTER DATABASE command. If compatibility is set to 10.2.0 or later, you can
do so and the control files automatically expand to accommodate the new log file information.
However, with compatibility set earlier than 10.2.0, your ALTER DATABASE command would
generate an error, and you would have to first create new control files.
For information on compatibility level, see "About The COMPATIBLE Initialization Parameter".
The CREATE CONTROLFILE statement can potentially damage specified datafiles and redo log
files. Omitting a filename can cause loss of the data in that file, or loss of access to the entire
database. Use caution when issuing this statement and be sure to follow the instructions in
"Steps for Creating New Control Files".
If the database had forced logging enabled before creating the new control file, and you want it
to continue to be enabled, then you must specify the FORCE LOGGING clause in the CREATE
CONTROLFILE statement. See "Specifying FORCE LOGGING Mode".
Make a list of all datafiles and redo log files of the database.
If you follow recommendations for control file backups as discussed in "Backing Up Control
Files" , you will already have a list of datafiles and redo log files that reflect the current structure
of the database. However, if you have no such list, executing the following statements will
produce one.
SELECT MEMBER FROM V$LOGFILE;
SELECT NAME FROM V$DATAFILE;
SELECT VALUE FROM V$PARAMETER WHERE NAME = 'control_files';
If you have no such lists and your control file has been damaged so that the database cannot be
opened, try to locate all of the datafiles and redo log files that constitute the database. Any files
not specified in step 5 are not recoverable once a new control file has been created. Moreover, if
you omit any of the files that make up the SYSTEM tablespace, you might not be able to recover
the database.
2.
Shut down the database normally if possible.
3.
Back up all datafiles and redo log files of the database.
4.
Start up a new instance, but do not mount or open the database. Use the STARTUP NOMOUNT command.
5.
Create a new control file for the database using the CREATE CONTROLFILE statement.
When creating a new control file, specify the RESETLOGS clause if you have lost any redo log
groups in addition to control files. In this case, you will need to recover from the loss of the redo
logs (step 8). You must specify the RESETLOGS clause if you have renamed the
database. Otherwise, select the NORESETLOGS clause.
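A heavily abbreviated sketch of such a statement (database name, limits, and file names are illustrative only; a real statement must list every datafile and every online redo log file in the database, and is typically generated with ALTER DATABASE BACKUP CONTROLFILE TO TRACE):

CREATE CONTROLFILE REUSE DATABASE "DBORCL" NORESETLOGS NOARCHIVELOG
    MAXLOGFILES 16
    MAXLOGMEMBERS 3
    MAXDATAFILES 100
    MAXINSTANCES 8
    MAXLOGHISTORY 292
LOGFILE
    GROUP 1 '/u01/oradata/DBORCL/redo01.log' SIZE 64M,
    GROUP 2 '/u01/oradata/DBORCL/redo02.log' SIZE 64M
DATAFILE
    '/u01/oradata/DBORCL/system01.dbf',
    '/u01/oradata/DBORCL/sysaux01.dbf',
    '/u01/oradata/DBORCL/users01.dbf'
CHARACTER SET AL32UTF8;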
6.
Store a backup of the new control file on an offline storage device. See "Backing Up Control
Files" for instructions for creating a backup.
7.
Edit the CONTROL_FILES initialization parameter for the database to indicate all of the control
files now part of your database as created in step 5 (not including the backup control file). If you
are renaming the database, edit the DB_NAME parameter in your instance parameter file to
specify the new name.
8.
Recover the database if necessary. If you are not recovering the database, skip to step 9.
If you are creating the control file as part of recovery, recover the database. If the new control
file was created using the NORESETLOGS clause (step 5), you can recover the database with
complete, closed database recovery.
If the new control file was created using the RESETLOGS clause, you must specify USING
BACKUP CONTROL FILE. If you have lost online or archived redo logs or datafiles, use the
procedures for recovering those files.
9.
If you did not perform recovery, or you performed complete, closed database recovery in
step 8, open the database normally.
ALTER DATABASE OPEN;
To back up the control file, use the ALTER DATABASE BACKUP CONTROLFILE statement. You have two options:
Back up the control file to a binary file (duplicate of existing control file) using the following
statement:
ALTER DATABASE BACKUP CONTROLFILE TO '/oracle/backup/control.bkp';
Produce SQL statements that can later be used to re-create your control file:
ALTER DATABASE BACKUP CONTROLFILE TO TRACE;
This command writes a SQL script to a trace file where it can be captured and edited to
reproduce the control file. View the alert log to determine the name and location of the trace file.
With the instance shut down, use an operating system command to overwrite the bad control file
with a good copy:
% cp /u03/oracle/prod/control03.ctl /u02/oracle/prod/control02.ctl
With the instance shut down, use an operating system command to copy the current copy of the
control file to a new, accessible location:
% cp /u01/oracle/prod/control01.ctl /u04/oracle/prod/control03.ctl
Edit the CONTROL_FILES parameter in the initialization parameter file to replace the bad
location with the new location:
CONTROL_FILES = (/u01/oracle/prod/control01.ctl,
/u02/oracle/prod/control02.ctl,
/u04/oracle/prod/control03.ctl)
If you have multiplexed control files, you can get the database started up quickly by editing
the CONTROL_FILES initialization parameter. Remove the bad control file from the CONTROL_FILES
setting and you can restart the database immediately. Then you can perform the reconstruction of the
bad control file and, at some later time, shut down and restart the database after editing
the CONTROL_FILES initialization parameter to include the recovered control file.
To drop a control file:
1.
Shut down the database.
2.
Edit the CONTROL_FILES parameter in the database initialization parameter file to delete the
old control file name.
3.
Restart the database.
View                               Description
V$DATABASE                         Displays database information from the control file
V$CONTROLFILE                      Lists the names of control files
V$CONTROLFILE_RECORD_SECTION       Displays information about control file record sections
SHOW PARAMETER CONTROL_FILES       Displays the names of control files as specified in
                                   the CONTROL_FILES initialization parameter
Redo Log Files enable the Oracle Server or DBA to redo transactions if a database failure occurs. This
is their ONLY purpose: to enable recovery.
Transactions are written synchronously to the Redo Log Buffer in the System Global Area.
As the Redo Log Buffer fills, the contents are written to Redo Log Files.
This includes uncommitted transactions, undo segment data, and schema/object management
information.
During database recovery, information in Redo Log Files enable data that has not yet been
written to datafiles to be recovered.
Redo Thread
If a database is accessed by multiple instances, a redo log is called a redo thread.
Having a separate thread for each instance avoids contention when writing to what would
otherwise be a single set of redo log files - this eliminates a performance bottleneck.
Redo Log File Organization Multiplexing
The figure shown below provides the general approach to organizing on-line Redo Log Files. Initially
Redo Log Files are created when a database is created, preferably in groups to provide for
multiplexing. Additional groups of files can be added as the need arises.
Each Redo Log File within a Group is an identical copy of the others (however, different Groups
do not have to have the same number of Redo Log Files).
If you have Redo Log Files in Groups, you must have at least two Groups. The Oracle Server
needs a minimum of two on-line Redo Log Groups for normal database operation.
The LGWR concurrently writes identical information to each Redo Log File in a Group.
Thus, if Disk 1 crashes as shown in the figure above, none of the Redo Log Files are truly lost
because there are duplicates.
The log sequence number is assigned by the Oracle Server as it writes to a log group,
and the current log sequence number is stored in the control files and in the header
information of all Datafiles; this enables synchronization between Datafiles and Redo
Log Files.
If the group has more members, you need more disk drives in order for the use of
multiplexed Redo Log Files to be effective.
A Redo Log File stores Redo Records (also called redo log entries).
These enable the protection of rollback information as well as the ability to roll forward for
recovery.
Each time a Redo Log Record is written from the Redo Log Buffer to a Redo Log File, a System
Change Number (SCN) is assigned to the committed transaction.
Where to Store Redo Log Files and Archive Log Files
Guidelines for storing On-line Redo Log Files versus Archived Redo Log Files:
1. Separate members of each Redo Log Group on different disks as this is required to ensure
multiplexing enables recovery in the event of a disk drive crash.
2. If possible, separate On-line Redo Log Files from Archived Log Files as this reduces
contention for the I/O path between the ARCn and LGWR background processes.
3. Separate Datafiles from On-line Redo Log Files as this reduces LGWR and DBWn contention. It
also reduces the risk of losing both Datafiles and Redo Log Files if a disk crash occurs.
You will not always be able to accomplish all of the above guidelines; your ability to meet these
guidelines will depend on the availability of a sufficient number of independent physical disk drives.
Redo Log File Usage
Redo Log Files are used in a circular fashion.
One log file is written in sequential fashion until it is filled, and then the second redo log
begins to fill. This is known as a Log Switch.
When the last redo log is written, the database begins overwriting the first redo log again.
The Redo Log file to which LGWR is actively writing is called the current log file.
Log files required for instance recovery are categorized as active log files.
Log files no longer needed for instance recovery are categorized as inactive log files.
Active log files cannot be overwritten by LGWR until ARCn has archived the data when
archiving is enabled.
If LGWR cannot write to a Redo Log Group because it is pending archiving, database
operation halts until the Redo Log Group becomes available (which could be through turning off
archiving) or is archived.
If a Redo Log Group is unavailable due to media failure, Oracle generates an error message
and the database instance shuts down. During media recovery, if the database did not archive
the bad Redo Log, use this command to disable archiving so the bad Redo Log can be dropped:
ALTER DATABASE CLEAR UNARCHIVED LOGFILE GROUP <group#>;
If a Redo Log Group fails while LGWR is writing to the members, Oracle generates an error
message and the database instance shuts down. Check to see if the disk drive needs to be
turned back on or if media recovery is required. In this situation, just turn on the disk drive and
Oracle will perform automatic instance recovery.
Sometimes a Redo Log File in a Group becomes corrupted while a database instance is in operation.
Clear the Redo Log Files in a Group (here Group #2) with the statement:
ALTER DATABASE CLEAR LOGFILE GROUP 2;
How large should Redo Log Files be, and how many Redo Log Files are enough?
The size of the redo log files can influence performance, because the behavior of the DBWn and ARCn
processes (but not the LGWR process) depend on the redo log sizes.
It may not always be possible to provide a specific size recommendation for redo log files, but
redo log files in the range of a hundred megabytes to a few gigabytes are considered reasonable.
Size your online redo log files according to the amount of redo your system generates. A rough
guide is to switch logs at most once every twenty minutes; however, more frequent switches are
common when using Data Guard for primary and standby databases.
It is also good for the file size to be such that a filled group can be archived to a single offline
storage unit when such an approach is used.
If the LGWR generates trace files and an alert file entry that Oracle is waiting because a
checkpoint is not completed or a group has not been archived, then test adding another redo log
group (with its files).
This provides facts and guidelines for sizing Redo Log files.
The file size depends on the size of transactions that process in the database.
o Large batch update transactions require larger Redo Log Files, 5MB or more in size.
o Databases that primarily support on-line, transaction-processing (OLTP) can work
successfully with smaller Redo Log Files.
Set the size large enough so that the On-line Redo Log Files switch about once every 20
minutes.
o If your Log Files are 4MB in size and switches are occurring on the average of once
every 10 minutes, then double their size!
o You can set the log switch interval to 20 minutes (a typical value) with
the init.ora entry shown here, which sets the ARCHIVE_LAG_TARGET parameter in
seconds (there are 1,200 seconds in 20 minutes).
ARCHIVE_LAG_TARGET = 1200
or set the parameter dynamically:
ALTER SYSTEM SET ARCHIVE_LAG_TARGET = 1200;
Determine if LGWR has to wait (meaning you need more groups) by:
o Check the LGWR trace files the trace files will provide information about LGWR waits.
o Check the alert_SID.log file for messages indicating that LGWR has to wait for a group
because a checkpoint has not completed or a group has not been archived.
The parameter MAXLOGFILES in the CREATE DATABASE command specifies the maximum number of
Redo Log Groups you can have; group numbers range from 1 to MAXLOGFILES.
When MAXLOGFILES is not specified, the CREATE DATABASE command uses a default value
specific to each operating system; check the operating system documentation.
With Oracle 11g, if you exceed the maximum number of Redo Log Groups, Oracle
automatically causes the control file to expand in size to accommodate the new maximum
number.
LGWR writes from the Redo Log Buffer to the current Redo Log File when:
a transaction commits
There is more than 1MB of changed rows in the Redo Log Buffer
Prior to DBWn writing modified blocks from the Database Buffer Cache to Datafiles.
Checkpoints also affect Redo Log File usage.
During a checkpoint the DBWn background process writes dirty database buffers (buffers that
have modified data) from the Database Buffer Cache to datafiles.
The CKPT background process updates the control file to reflect that a checkpoint has been
successfully completed.
If a log switch occurs as a result of a checkpoint, then the CKPT process updates the headers
of the datafiles.
Checkpoints can occur for all datafiles in the database or only for specific datafiles. A checkpoint occurs,
for example, in the following situations:
when an Oracle Instance is shut down with the normal, transactional, or immediate option.
Beginning with Oracle 10g, the database self-tunes checkpointing to achieve good recovery
times with low impact on normal throughput.
This method reduces the time required for cache recovery and makes the recovery bounded
and predictable by limiting the number of dirty buffers and the number of redo records generated
between the most recent redo record and the last checkpoint.
DBAs specify a target (bounded) time to complete the cache recovery phase of recovery with
the FAST_START_MTTR_TARGET initialization parameter, and Oracle automatically varies the
incremental checkpoint writes to meet that target.
Checkpoint frequency is affected by several factors, including log file size and the setting of the
FAST_START_MTTR_TARGET initialization parameter.
o If the FAST_START_MTTR_TARGET parameter is set to limit the instance recovery
time, Oracle automatically tries to checkpoint as frequently as necessary.
o Under this condition, the size of the log files should be large enough to avoid
additional checkpointing due to under sized log files.
o The optimal size can be obtained by querying the OPTIMAL_LOGFILE_SIZE column
from the V$INSTANCE_RECOVERY view. The value shown is expressed in
megabytes.
SQL> SELECT OPTIMAL_LOGFILE_SIZE FROM V$INSTANCE_RECOVERY;
OPTIMAL_LOGFILE_SIZE
--------------------
                 256
o
You can also obtain sizing advice on the Redo Log Groups page of Oracle Enterprise
Manager Database Control.
On-line redo log files are written in 512-byte blocks by default. Some newer disk drives use a 4KB
sector size; Oracle automatically detects this and uses a 4KB default for those disk drives.
This can result in significant disk space wastage. You can check this with this SQL statement.
SELECT name, value
FROM v$sysstat
WHERE name = 'redo wastage';
Result:
NAME                          VALUE
----------------------------- ----------
redo wastage                    17941684
With Oracle 11g Release 2 you can specify a block size for online redo log files with
the BLOCKSIZE keyword in the CREATE DATABASE, ALTER DATABASE,
and CREATE CONTROLFILE statements. The permissible block sizes are 512, 1024, and 4096.
This example shows use of the BLOCKSIZE parameter to create redo log files with 512-byte blocks.
ALTER DATABASE orcl ADD LOGFILE
GROUP 4 ('/u01/logs/orcl/redo04a.log','/u01/logs/orcl/redo04b.log')
SIZE 100M BLOCKSIZE 512 REUSE;
This query shows the blocksize for your database.
SQL> SELECT BLOCKSIZE FROM V$LOG;
BLOCKSIZE
---------
      512
Log Switches and Checkpoints
This figure shows commands used to cause Redo Log File switches and Checkpoints.
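The commands involved are the standard ones for forcing a log switch and a checkpoint:

ALTER SYSTEM SWITCH LOGFILE;
ALTER SYSTEM CHECKPOINT;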
If the file to be added already exists and is being reused, it must have the same size and you
must use the REUSE option in the command immediately after the filename specification.
1. Check the status of each Redo Log Group; only groups with status INACTIVE can be dropped:

STATUS
----------------
INACTIVE
INACTIVE
CURRENT
INACTIVE
2. Drop one or more of the inactive Redo Log Groups keeping at least two current On-line Redo Log
Groups.
3. Use operating system commands to delete the files that stored the dropped Redo Log Files.
4. Recreate the groups with larger file sizes. Continue this sequence until all groups have been resized.
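A minimal sketch of this sequence, assuming hypothetical file names, a new size of 100M, and that group 1 has become inactive:

-- Add a replacement group with the larger size
ALTER DATABASE ADD LOGFILE GROUP 4
('/u01/logs/orcl/redo04a.log', '/u01/logs/orcl/redo04b.log') SIZE 100M;

-- Force switches and a checkpoint so the old group becomes INACTIVE
ALTER SYSTEM SWITCH LOGFILE;
ALTER SYSTEM CHECKPOINT;

-- Drop the old, smaller group, then delete its files at the OS level
ALTER DATABASE DROP LOGFILE GROUP 1;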
Obtaining Redo Log Group and File Information
Two views, V$LOG and V$LOGFILE, are used to store information about On-line Redo Log files. The
following example queries display information from SIUE's DBORCL database. The files in each group
are 64M in size.
SELECT group#, sequence#, bytes/1024, members, status
FROM v$log;

GROUP#  SEQUENCE#  BYTES/1024  MEMBERS  STATUS
------  ---------  ----------  -------  ----------------
     1         31       65536        2  INACTIVE
     2         32       65536        2  CURRENT
     3         30       65536        2  INACTIVE
Unused - the Redo Log Group has never been used; this status only occurs for a newly added Redo Log Group.
Active - the Redo Log Group is active but is not the current group; it is needed for crash recovery and may be in use for block recovery. It may not yet be archived.
Clearing - the log is being recreated after an ALTER DATABASE CLEAR LOGFILE command.
Clearing_Current - the current Redo Log Group is being cleared of a closed thread.
Invalid - the file cannot be accessed and needs to be dropped and recreated.
Stale - the contents of the file are incomplete; drop it and recreate it.
Deleted - the file is no longer in use; you can use operating system commands to delete the associated operating system file.
A Redo Log File is the file in which LGWR (Log Writer) records the contents of the Redo Log Buffer. Changes to the database are first recorded in the Redo Log Buffer so that they can be used to recover data in the event of a future failure. The role of the Redo Log File while changes make their way to the datafiles is as follows.
Data changed by a DML statement is recorded in the Redo Log Buffer and in the Database Buffer Cache; it is written to the datafiles later.
When a COMMIT is issued, LGWR attaches an SCN (system change number) to the changes held in the Redo Log Buffer and writes them to the Redo Log File. Once the data has been written to the Redo Log File, the corresponding entries in the Redo Log Buffer can be reused.
LGWR also records the SCN assigned to each commit in the control file, so the control file tracks the latest commit SCN written to the Redo Log File.
Current
o The Redo Log File to which LGWR is currently writing the contents of the Redo Log Buffer is in the Current state.
Active
o When a log switch fills a file and LGWR moves on to another file, the previous file leaves the Current state; while it still contains information that has not yet been written to the datafiles, it is in the Active state.
Inactive
o Once the contents recorded in the Redo Log File have been written to the datafiles, the file becomes Inactive.
o Only a Redo Log File in this status can be deleted.
commit SCN
o Each time LGWR writes the contents of the Redo Log Buffer to the Redo Log File on commit, an SCN is assigned so that the commit recorded in the Redo Log File and the commit SCN recorded in the control file stay synchronized; the commit SCN is updated at every commit.
checkpoint SCN
o The checkpoint SCN is updated not at commit time but when, on a checkpoint signal, the changed data held in the Database Buffer Cache is written to the datafiles; it shows how far the datafiles have been brought up to date.
Instance recovery proceeds using the SCN information recorded in the control file and in the datafile headers. As described above, the control file records the SCN of the most recent data written to the Redo Log File, while each datafile records the SCN of the data currently stored in it. If the SCN in the control file is greater than the SCN in a datafile, committed data exists in the Redo Log File that has not yet been written to that datafile, and Oracle can recover the missing data from the Redo Log File.
* If, during startup, the SCN recorded in the control file is instead found to be smaller, an error is raised indicating an old version of the control file.
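A minimal sketch of how to look at these SCNs yourself, using the standard V$DATABASE and V$DATAFILE_HEADER views:

SELECT checkpoint_change# FROM v$database;

SELECT file#, checkpoint_change# FROM v$datafile_header;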
Oracle requires a minimum of two Redo Log Groups, with at least one member per group.
* On a production system it is recommended to have at least three groups with two or more members per group.
A log switch generates a checkpoint, and the groups are used cyclically (in round-robin fashion).
The members of a group have the same size and the same contents.
Placing the members of a group on different disks is safer for administration and plays an important role in database recovery.
The members of a group are written in parallel when possible; if members are on the same disk, however, they are written serially.
When you add a group, the file names of the members to be included in the group are enclosed in parentheses ().
... SIZE 5M means the size of each member file in the group is set to 5MB (this clause can be omitted).
The number of the group to be added can be specified at the end of the ALTER DATABASE statement.
After dropping group 3 you can verify that the group is gone. However, the ALTER DATABASE statement only removes the Redo Log File information recorded in the control file; the actual operating system files are not deleted and remain on disk.
You must therefore delete the files manually, as follows:
SQL> ! rm /home/oracle/disk5/redo03_d.log
SQL> ! rm /home/oracle/disk5/redo03_c.log
SQL> ! rm /home/oracle/disk4/redo03_b.log
SQL> ! rm /home/oracle/disk3/redo03_a.log
As noted above, a database needs a minimum of two Redo Log Groups. If you try to drop a group when only two groups would remain, the drop fails with an error.
Next, the redo03_a.log member file is removed from group 3 in the list above.
Each Redo Log Group must keep at least one member: when you try to remove the last member file (redo03.log) of group 3, an error occurs.
To drop the current group you must first change the state of the log file by forcing a log switch.
If the Redo Log Files are in the state shown above, running the following command changes the status.
Redo Log
- In case a failure occurs, Oracle records all of the details of every data change, including the contents before and after the change.
- In memory this information is held in the Redo Log Buffer; on disk it is held in the Redo Log Files.
Write Ahead Log - the redo describing a change is recorded before the change is applied to the actual data.
Log Force at Commit - when a commit request arrives, all redo records related to that transaction are saved to the redo log file before the commit completes and control is returned to the user.
After a commit the data can be rolled forward, but rollback information must also be preserved: if the instance is killed before an uncommitted transaction has been rolled back, the database must still be able to roll that transaction back completely during recovery.
For this reason the undo change vectors are also recorded in the redo.
The change vectors created in the PGA are copied, one row at a time and in redo record format, into the Redo Log Buffer.
2) After the change vectors have been created in the PGA and the space needed in the Redo Log Buffer has been calculated, the server process must acquire a latch.
All memory resources (shared pool, Database Buffer Cache, and so on) are protected by their own latches.
The Redo Log Buffer is no different: to write its contents into the Redo Log Buffer, a process must first obtain a Redo Copy Latch. If many server processes change data at the same time, obtaining a Redo Copy Latch can become a point of contention.
A process must hold a Redo Copy Latch for the whole time it is copying its change vectors into the Redo Log Buffer, which is why multiple Redo Copy Latches exist.
The number of Redo Copy Latches can be adjusted with the hidden parameter "_log_simultaneous_copies" (the default is CPU count x 2).
3) After securing a Redo Copy Latch, the server process must also obtain the Redo Allocation Latch to allocate space for its information in the Redo Log Buffer.
Starting with 9i the Redo Log Buffer can be divided into several spaces (the Shared Redo Strand feature); a Redo Allocation Latch is assigned to each space, and the number of strands can be set with the LOG_PARALLELISM parameter (default: 1).
Beginning with 10g the Shared Redo Strand concept was extended with the Private Redo Strand feature.
Starting with 10g each server process creates its change vectors in a Private Redo Strand, a space from which LGWR can write directly to the Redo Log File when necessary.
The introduction of Private Redo Strands further improved performance by reducing latch contention; this mechanism is also referred to as Zero Copy Redo.
Since 10g the LOG_PARALLELISM parameter has been replaced by _LOG_PARALLELISM_DYNAMIC (a hidden parameter); if this value is set to TRUE, Oracle manages the number of strands automatically (the recommended value is CPU count / 8).
4) Under certain conditions LGWR writes some of the information held in the Redo Log Buffer to the Redo Log File:
- Every 3 seconds
The LGWR process normally sleeps on the 'rdbms ipc message' wait event with a timeout of 3 seconds; each time it times out it wakes up and checks whether there is content in the Redo Log Buffer that should be written to the Redo Log File. If there is content to flush, that part of the Redo Log Buffer is written to the Redo Log File.
When any of the above conditions occurs, LGWR writes the contents of the Redo Log Buffer to the Redo Log File; that is, the recorded information is flushed to the Redo Log File.
LGWR, like DBWR, writes in block units when it writes the contents of the Redo Log Buffer to the Redo Log File. The block size DBWR writes is determined by DB_BLOCK_SIZE, but the block size LGWR uses for the log buffer is not DB_BLOCK_SIZE; it is the OS block size and may differ depending on the OS type.
Redo configuration
As shown above, by physically mirroring LOG1 onto another disk, you gain availability benefits; the two redo members are managed together as a group.
GROUP 1: A_LOG1, B_LOG1
GROUP 2: A_LOG2, B_LOG2
Redo Management
- Redo Log rename (used when changing a file's location or name)
ALTER DATABASE RENAME FILE '/diska/logs/log1a.log', '/diska/logs/log2a.log'
TO '/diskc/logs/log1c.log', '/diskc/logs/log2c.log';
The online redo logs are crucial for recovery operations of an Oracle database. The online redo log
consists of two or more pre-allocated files that store all changes made to the database. Each instance of an
Oracle database has an associated online redo log to protect the database in case of a crash.
2-1 Threads dedicated to redo
Each instance of a database has its own redo log groups. These redo log files, multiplexed or not, are called the instance's redo thread and, when Oracle Parallel Server is not used, are managed by a single process: LGWR (Log Writer).
2-2 The contents of redo log files
Redo records are placed in circular fashion into the redo log buffer in the SGA of an Oracle instance and are written to the redo log files by the LGWR background process (the Oracle Log Writer background process). When a transaction is committed, the LGWR process writes the transaction's redo records from the redo log buffer in the SGA to a redo log file, and an SCN (system change number) is assigned to identify the redo records of each committed transaction.
It is only when all redo records associated with a given transaction have been written to disk in the redo log files
that the user process is notified that the transaction is committed.
Redo records can also be written to a redo log file before the corresponding transaction is committed. If
the redo log buffer is full, or another transaction commits, the LGWR process flushes all of the redo log
buffer entries to a redo log file, even though some of those redo records may not yet be committed. If necessary, Oracle
can reverse these changes.
2-3- Writing in the redo log files
The online redo log consists of at least two online redo log files. Oracle requires a minimum of
two files to ensure that one redo log file is always available for writing while the other is being archived (if
ARCHIVELOG mode is active).
The LGWR process writes to the redo log files in circular mode: when the current redo log file is filled, LGWR
begins writing to the next redo log file. When the last redo log file is filled, LGWR returns to the first redo
log file and writes to it, starting a new cycle.
A filled redo log file becomes available to LGWR for reuse depending on whether ARCHIVELOG mode is active:
If archiving is not enabled (NOARCHIVELOG mode), a filled redo log file is available once the
changes recorded in the latter were written in the data files.
If archiving is enabled (ARCHIVELOG mode), a filled redo log file is available to process LGWR
once the changes recorded in the latter were written in the data file and once the redo log file was
archived.
The implementation of the redo log files multiplexing is to create redo log file groups. Each redo log file
in a group is called a member. Members in a group of redo log files must have the same size.
According to the example given in the diagram, the LGWR process writes simultaneously in A_LOG1 and
B_LOG1 members of Group1 group of redo log files and then in the A_LOG2 and B_LOG2 members
Group2 group of redo log files after a log switch.
It is recommended to place the members of a group of redo log files on different disks.
The MAXLOGFILES parameter used in the CREATE DATABASE command determines the maximum number of redo
log file groups. The only way to change this limit is to recreate the database or recreate its
control files. If the parameter MAXLOGFILES is not specified, Oracle applies a default value dependent on
the operating system.
The initialization parameter LOG_FILES (in the initialization file of the instance) can temporarily
decrease the maximum number of redo log file groups without exceeding the setting MAXLOGFILES .
The parameter MAXLOGMEMBERS used in the command CREATE DATABASE determines the
maximum number of members in a group of redo log files. As with the parameter MAXLOGFILES, the only
way to increase this value is to recreate the database or recreate the control files. If the
parameter MAXLOGMEMBERS is not specified, Oracle applies a default value dependent on the
operating system.
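The output below appears to come from V$CONTROLFILE_RECORD_SECTION; a sketch of such a query (the record TYPE value 'REDO LOG' is assumed here):

SELECT records_total
FROM v$controlfile_record_section
WHERE type = 'REDO LOG';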
records_total
-------------
           32

In this practical case: at most 32 redo log file groups can be created.
3- Case study
In the practical case that follows, the main commands for handling Oracle redo log files are used to perform a reorganization of the redo log files.
The initial command used to create the practice database is recalled below:
The CREATE DATABASE command allows no more than 32 redo log file groups to be created. Each group
will contain no more than 2 members, that is 2 redo log files.
When the GSC database was created, 3 redo log file groups were created, each group containing only one redo log
file. After the reorganization, there will still be three redo log file groups, but each group will contain two
redo log files as shown in the following diagram:
In this practical case, the members of the same group cannot be placed on different disks.
3-1- Views V$LOG and V$LOGFILE to collect information
The views V$LOG and V$LOGFILE provide information on redo log file groups. These views are based
on the information in the control files.
3-1-1- View V$LOG
The view V$LOG gives precise information about redo log files:
select group#,
       thread#,
       sequence#,
       bytes,
       members,
       archived,
       status,
       first_change#,
       to_char(first_time, 'dd/mm/yy hh:mi:ss') as FIRST_TIME
from v$log;
The view V$LOG shows that there are three redo log file groups, each with only one member or
redo log file (MEMBERS = 1). Each redo log file has a size of 1 MB.
The active redo log file is the file in group 1, for which the status is CURRENT (STATUS column).
The log sequence number (SEQUENCE#) is 1066 for the first group of redo log files, 1064 for the second group
and 1065 for the third group, which makes perfect sense given the circular use of the redo
log files.
The column FIRST_CHANGE# indicates the first SCN (system change number) in the group of redo log files,
and the FIRST_TIME column gives the time corresponding to that SCN.
3-1-2- View V$LOGFILE
The view V$LOGFILE shows the physical makeup and locations of the members of the redo log file
groups:
The status is INVALID when a redo log file in a group cannot be accessed.
The status is STALE when Oracle suspects that a redo log file is incomplete or incorrect, until the redo log
file in question again becomes a member of the active group.
3-2- manually Force a log switch
The command alter system allows file group switcher redo log:
The V $ LOG view actually confirms the switch log at the end of the command: the redo log file group
becomes active group 2 with a log sequence number incremented by 1.
The alert file of the proceedings confirms the switch made manual:
it is imperative to ensure that there will be at least two redo log file groups available after the
deletion.
an error message appears when attempting to remove a member of an active group of redo log
files. A log switch must be performed beforehand.
a member of a group of redo log files may only be removed provided that it is not the sole and last
member of the group (the ORA-00361 message is displayed otherwise: cannot remove last log
member).
the physical file is not deleted from the disk.
In this practical case: group 1 is not active and has only one member, therefore only the following syntax
can be used.
The command alter database add logfile allows the user to specify a number for the group.
alter database add logfile group 1
('/sdata/oracle/v8/TSTT1ORA/redolog/redo1_01.log') size 1M;
The redo1_02.log member will then be added to the recreated group 1 with the command alter database add
logfile member.
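A minimal sketch of that command, assuming the new member is placed alongside the existing one (the path is illustrative):

alter database add logfile member
'/sdata/oracle/v8/TSTT1ORA/redolog/redo1_02.log' to group 1;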
The V$LOG view now shows two members in redo log file group 1, and since it is a newly recreated group of
redo log files, its status shown in the V$LOG view is UNUSED:
select group#,
       thread#,
       sequence#,
       bytes,
       members,
       archived,
       status,
       first_change#,
       to_char(first_time, 'dd/mm/yy hh:mi:ss') as FIRST_TIME
from v$log;
Group 2 will be recreated in turn by adding and removing members (log switches are made in between):
The log sequence numbers are also recorded in the V$LOGHIST view, which gives the SCN (system change number)
starting a sequence (FIRST_CHANGE#) and the SCN corresponding to the log switch (SWITCH_CHANGE#).
For example, log sequence 1068 started on 14/01/2005 at 01:50:01: its first SCN is 312770
(FIRST_CHANGE#) and its last is SCN 312820, since a log switch was made at SCN 312820
(SWITCH_CHANGE#). Log sequence 1068 therefore contains 50 redo records.
The parameter MAXLOGHISTORY, set when creating the database, governs the maximum number of log switch
records retained in the control files. To change this setting, either a control file must be recreated
or the database must be rebuilt.
The view V$CONTROLFILE_RECORD_SECTION allows you to check the MAXLOGHISTORY setting:
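A sketch of such a query (the record TYPE value 'LOG HISTORY' is assumed here):

SELECT records_total
FROM v$controlfile_record_section
WHERE type = 'LOG HISTORY';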
records_total
-------------
         1815

In this practical case: 1815 log switches are kept in the history available through V$LOGHIST.
Online redo log files play the important role of recording data transactions, and they provide the recovery
mechanism that ensures data integrity.
Online redo log files have those characteristics:
A set of identical copies of online redo log files is called an online redo log file group.
The LGWR background process concurrently writes the same information to all online redo log
files in a group.
The Oracle server needs a minimum of two online redo log file groups for the normal operation of
a database.
Online Redo Log File Members
MAXLOGFILES: this parameter of the CREATE DATABASE command specifies the absolute
maximum number of online redo log file groups.
The maximum and default value for MAXLOGFILES depends on the operating system.
MAXLOGMEMBERS: this parameter of the CREATE DATABASE command determines the maximum
number of members per redo log file group.
You need to understand how Online Redo Log Files work in order to use them to ensure
data availability.
The Oracle server sequentially records all changes made to the database in the Redo Log Buffer. The redo
entries are written from the Redo Log Buffer to the current online redo log file group by the LGWR
process.
LGWR writes under the following situations:
DBWn writes a number of dirty database buffers, which are covered by the log that is being
checkpointed, to the data files.
The checkpoint background process CKPT updates the control file to reflect that it has completed
a checkpoint successfully. If the checkpoint is caused by a log switch, CKPT also updates the headers of
the data files.
A checkpoint occurs in the following situations:
There are a coupld of dynamic views for online redo log files that you can retrive information of online
redo log files,ex:
V$LOG
V$LOGFILE
Archived Redo Log Files
Each group has at least one redo file. A database must have at least two distinct groups of redo files, each containing at least one member: if you had only one redo file, Oracle would eventually overwrite it and all transactions would be lost. Each database has its own redo file groups. These groups, multiplexed or not, are called the instance's redo thread. In typical configurations, only one instance accesses the Oracle database, so only one thread is present. In a RAC environment, two or more instances simultaneously access a single database and each instance has its own redo thread.
Redo files are filled with redo records. A redo record, also called a redo entry, is composed of a group of change vectors, each of which is a description of a change made to a single block in the database. For example, if you change the value of an employee's salary in a table, you generate a redo record containing change vectors that describe the changes to the table's data segment block, the undo segment data block, and the transaction table of the undo segment.
The redo entries record data that can be used to reconstruct all changes made to the database, undo segments included. The redo log therefore also protects the undo data used when rolling back the database.
Redo records can also be written to the redo log file before the corresponding transaction is committed. If the redo log buffer is full or another transaction commits, LGWR flushes all of the entries in the redo log buffer to a redo log file, even though some of the redo records may not yet be committed. If necessary, Oracle can reverse these changes.
Figure 1
If archiving is disabled (the database is in NOARCHIVELOG mode), a filled redo file is available for reuse once the
changes recorded in it have been written to the data files.
If archiving is enabled (the database is in ARCHIVELOG mode), a filled redo file is available to
LGWR once the changes recorded in it have been written to the data files and the file has been archived.
Current (active) and Inactive redo files
The views V$THREAD, V$LOG, V$LOGFILE and V$LOG_HISTORY provide information on redo files.
V$THREAD gives information about the current redo thread.
SQL> desc v$thread
 Name                           Null?    Type
 ------------------------------ -------- ------------
 THREAD#                                 NUMBER
 STATUS                                  VARCHAR2(6)
 ENABLED                                 VARCHAR2(8)
 GROUPS                                  NUMBER
 INSTANCE                                VARCHAR2(80)
 OPEN_TIME                               DATE
 CURRENT_GROUP#                          NUMBER
 SEQUENCE#                               NUMBER
 CHECKPOINT_CHANGE#                      NUMBER
 CHECKPOINT_TIME                         DATE
 ENABLE_CHANGE#                          NUMBER
 ENABLE_TIME                             DATE
 DISABLE_CHANGE#                         NUMBER
 DISABLE_TIME                            DATE
 LAST_REDO_SEQUENCE#                     NUMBER
 LAST_REDO_BLOCK                         NUMBER
 LAST_REDO_CHANGE#                       NUMBER
 LAST_REDO_TIME                          DATE
The view V $ LOG provides information by reading the control file instead of reading the data dictionary.
SQL> desc v$log
 Name                           Null?    Type
 ------------------------------ -------- ------------
 GROUP#                                  NUMBER
 THREAD#                                 NUMBER
 SEQUENCE#                               NUMBER
 BYTES                                   NUMBER
 MEMBERS                                 NUMBER
 ARCHIVED                                VARCHAR2(3)
 STATUS                                  VARCHAR2(16)
 FIRST_CHANGE#                           NUMBER
 FIRST_TIME                              DATE
select sequence#, bytes, archived, status, first_change#, first_time
from v$log;

SEQUENCE#      BYTES ARC STATUS           FIRST_CHANGE# FIRST_TIME
--------- ---------- --- ---------------- ------------- ----------
       41   52428800 NO  INACTIVE               1867281 18/09/05
       42   52428800 NO  CURRENT                1889988 18/09/05
       40   52428800 NO  INACTIVE               1845207 18/09/05
SQL> desc v$logfile
 Name                           Null?    Type
 ------------------------------ -------- ---------------
 GROUP#                                  NUMBER
 STATUS                                  VARCHAR2(7)
 TYPE                                    VARCHAR2(7)
 MEMBER                                  VARCHAR2(513)
 IS_RECOVERY_DEST_FILE                   VARCHAR2(3)
To create a new redo file group or member, you must have the ALTER DATABASE system privilege. The
database can have a maximum of MAXLOGFILES groups.
I.3.1. Creating redo groups
To create a new group of redo files, use the ALTER DATABASE statement with the ADD LOGFILE clause.
For example:
ALTER DATABASE ADD LOGFILE ('/oracle/dbs/log1c.rdo', '/oracle/dbs/log2c.rdo') SIZE
500K;
You must specify the full path and name for the new members, otherwise they will be
created in the default directory or in the current directory of the OS.
The use of group numbers facilitates the administration of redo file groups. The group number must be between
1 and MAXLOGFILES. Do not skip group numbers (for example 10, 20, 30), otherwise space in the control files
will be consumed unnecessarily.
I.3.2. Creating redo file members
In some cases it is not necessary to create a complete group of redo files. The group may already exist
but be incomplete because one or more members have been removed (for example following a disk failure). In this case, you can
add new members to the existing group.
The database can have a maximum of MAXLOGMEMBERS members per group.
To create a new redo file member for an existing group, use ALTER DATABASE ADD LOGFILE with the
MEMBER clause. In the following example we add a new member to redo group number 2:
Note that the file name must be given, but its size is not required. The size of the new member is
determined from the size of the existing members.
If the group number is not known, you can alternatively identify the target group by listing all the
other group members in the TO clause, as shown in the following example:
You can use OS commands to move the redo files, then use ALTER DATABASE to make their new
names (locations) known to the database. This procedure is necessary, for example, if the disk currently
used for some redo files is to be removed, or if the data files and some redo files are on the same disk and
should be separated to minimize contention.
To rename redo file members, you must have the ALTER DATABASE system privilege.
In addition, you must also have the operating system privileges to copy files to the desired directory and the privileges to
open and back up the database.
Before moving the redo files, or making any other change to the basic structures, back up the database
completely. As a precaution, after renaming or moving a set of redo files, immediately back up the
control file.
1. Stop the database.
SHUTDOWN IMMEDIATE
2. Copy the redo files to the new location.
The HOST command can be used to run OS commands without leaving SQL*Plus. On some
OS a character is used instead of HOST; for example, on UNIX the exclamation point
(!).
The following example uses OS commands (UNIX) to move members of the redo files to a new location:
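A minimal sketch of those OS commands, assuming the file names used in the RENAME FILE statement below:

SQL> ! mv /diska/logs/log1a.rdo /diskc/logs/log1c.rdo
SQL> ! mv /diska/logs/log2a.rdo /diskc/logs/log2c.rdo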
3. Connect as SYSDBA and mount the database.
CONNECT / AS SYSDBA
STARTUP MOUNT
4. Rename the redo file members.
ALTER DATABASE
RENAME FILE '/diska/logs/log1a.rdo', '/diska/logs/log2a.rdo'
TO '/diskc/logs/log1c.rdo', '/diskc/logs/log2c.rdo';
5. Open the database normally.
The change to the redo files takes effect when the database is opened.
Deleting a Group
To delete a group of redo files, it must have the ALTER DATABASE system privilege.
Before deleting a group of redo files, you need to consider the following restrictions and precautions:
An instance requires at least two groups of redo files, regardless of the number of members in
the group. (A group contains one or more members.)
You can delete a group of redo files only if it is inactive. If you need to remove the current
group, first force a log switch.
Make sure the redo file group is archived (if archiving is enabled) before deleting it.
SQL> ALTER DATABASE DROP LOGFILE GROUP 3;
When a group is deleted from the database and OMF is not used, the OS files are not deleted
from the disk. You must use OS commands to remove them physically.
When using OMF, the OS files are cleaned up automatically.
I.6. Deleting redo file members
To remove a member from a redo file group, you must have the ALTER DATABASE system privilege.
To remove an inactive member of a redo file group, use the ALTER DATABASE DROP LOGFILE clause with
MEMBER.
Deleting a member:
ALTER DATABASE DROP LOGFILE MEMBER 'filename';
When a redo log member is deleted, the OS file is not deleted from the disk.
To remove a member of an active group, you must first force a log switch.
I.7. Forcing Log Switches
A log switch occurs when LGWR stops writing to one log group and begins writing to another. By
default, a log switch occurs automatically when the current redo file group is full.
You can force a log switch so that the current group becomes inactive and available for maintenance on the redo
files. For example, you may want to delete the currently active group but cannot remove it while it is active. You
may also need to force a log switch if the currently active group must be archived at a specific time before
the members of the group are completely filled. This is useful in configurations where the redo files are
quite large and take a long time to fill.
To force a log switch, you must have the ALTER SYSTEM privilege. Use the ALTER SYSTEM command with
the SWITCH LOGFILE clause.
The following command forces a log switch:
ALTER SYSTEM SWITCH LOGFILE;
Initializing (clearing) a group
ALTER DATABASE CLEAR LOGFILE GROUP n;
This command can be used if you cannot delete the redo files. There are two situations:
If the corrupted redo file has not yet been archived, use the UNARCHIVED keyword.
ALTER DATABASE CLEAR UNARCHIVED LOGFILE GROUP n;
This command initializes the corrupt redo files and prevents their archiving.
If the cleared redo file was needed for recovery from a backup, that backup can no longer be used for a restore.
If you initialize a non-archived redo file, you should take another backup of the database.
To initialize a non-archived redo file that is needed to bring an offline tablespace online, use the
UNRECOVERABLE DATAFILE clause in the ALTER DATABASE CLEAR LOGFILE statement.
If you initialize a redo file needed to bring an offline tablespace online, you will be unable to bring the
tablespace online again, and you will be obliged to drop the tablespace or perform an incomplete recovery. Note
that a tablespace taken offline normally does not need recovery.
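A minimal sketch of such a statement, assuming hypothetically that group 3 is the damaged, unarchived group:

ALTER DATABASE CLEAR UNARCHIVED LOGFILE GROUP 3
UNRECOVERABLE DATAFILE;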
A set of review questions (T1 to T20), with solutions, covers the material above: the number of redo threads in a typical single-instance configuration, the difference between undo and redo and between redo records and change vectors, the process that writes redo records to the redo log files (LGWR), the V$THREAD and V$LOGFILE views, the valid range of group numbers (1 to MAXLOGFILES) and of members per group (1 to MAXLOGMEMBERS), the ALTER DATABASE privilege required for redo log maintenance, redo log statuses such as INACTIVE, and clearing non-archived log files needed to bring an offline tablespace online.
We cannot resize redo log files; we must drop the redo log files and recreate them. This is the only method
to resize the redo log files. A database requires at least two groups of redo log files, regardless of the
number of members. We cannot drop a redo log group while its status is CURRENT or ACTIVE; we have to
wait for the status to change to "inactive", and only then can we drop it.
When a redo log member is dropped from the database, the operating system file is not deleted from
disk. Rather, the control files of the associated database are updated to drop the member from the
database structure. After dropping a redo log file, make sure that the drop completed successfully, and
then use the appropriate operating system command to delete the dropped redo log file. In my case I
have four redo log groups, each 50MB in size; I will resize them to 100MB. Below are the steps to resize
the redo log files.
Step 1 : Check the Status of Redo Logfile
SQL> select group#,sequence#,bytes,archived,status from v$log;
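Only the first step is shown above; a minimal sketch of the remaining steps, assuming hypothetical file paths and that the old groups become INACTIVE after a few switches:

Step 2 : Add new groups with the larger size
SQL> ALTER DATABASE ADD LOGFILE GROUP 5 ('/u01/oradata/orcl/redo05.log') SIZE 100M;

Step 3 : Force log switches and a checkpoint until the old groups show INACTIVE in V$LOG
SQL> ALTER SYSTEM SWITCH LOGFILE;
SQL> ALTER SYSTEM CHECKPOINT;

Step 4 : Drop the old 50MB groups and delete their files at the operating system level
SQL> ALTER DATABASE DROP LOGFILE GROUP 1;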
A redo log file records all changes to the database, in most cases before the changes are written
to the datafiles.
To recover from an instance or a media failure, redo log information is required to roll datafiles
forward to the last committed transaction.
Ensuring that you have at least two members for each redo log file group dramatically reduces
the likelihood of data loss because the database continues to operate if one member of a redo log
file is lost.
Redo Log File Architecture
Online redo log files are filled with redo records. A redo record, also called a redo entry, is made
up of a group of change vectors, each of which describes a change made to a single block in the
database.
Redo entries record data that you can use to reconstruct all changes made to the database,
including the undo segments. When you recover the database by using redo log files, Oracle
reads the change vectors in the redo records and applies the changes to the relevant blocks.
The LGWR process writes redo information from the redo log buffer to the online redo log files
under a variety of circumstances:
o When a user commits a transaction, even if this is the only transaction in the log
buffer.
o When the redo log buffer becomes one-third full.
o When the buffer contains approximately 1MB of changed records. This total does not
include deleted or inserted records.
Characteristics of Redo Log File.
Redo log files have the following characteristics:
Redo log files provide the means to redo transactions in the event of a database failure. Every
transaction is written synchronously to the Redo Log Buffer, then gets flushed to the redo log files
in order to provide a recovery mechanism in case of media failure.
This includes transactions that have not yet been committed, undo segment information, and
schema and object management statements.
Redo log files are used in a situation such as an instance failure to recover committed data that
has not been written to the datafiles.
Online Redo Log Contents
A redo record, also called a redo entry, is made up of a group of change vectors, each of which is
a description of a change made to a single block in the database.
For example : If you change a salary value in an employee table, you generate a redo record containing
change vectors that describe changes to the data segment block for the table, the rollback segment data
block, and the transaction table of the rollback segments.
Redo entries record data that you can use to reconstruct all changes made to the database,
including the rollback segments. Therefore, the online redo log also protects rollback data.
When you recover the database using redo data, Oracle reads the change vectors in the redo
records and applies the changes to the relevant blocks.
How Oracle Writes to the Online Redo Log
The online redo log of a database consists of two or more online redo log files.
Oracle requires a minimum of two files to guarantee that one is always available for writing while
the other is being archived (if in ARCHIVELOG mode).
LGWR writes to online redo log files in a circular fashion. When the current online redo log file
fills, LGWR begins writing to the next available online redo log file.
When the last available online redo log file is filled, LGWR returns to the first online redo log file
and writes to it, starting the cycle again.
The above figure illustrates the circular writing of the online redo log file. The numbers next to
each line indicate the sequence in which LGWR writes to each online redo log file.
NOARCHIVELOG mode : If archiving is disabled a filled online redo log file is available once the
changes recorded in it have been written to the datafiles.
ARCHIVELOG mode : If archiving is enabled a filled online redo log file is available to LGWR
once the changes recorded in it have been written to the datafiles and once the file has been
archived.
Oracle provides the capability to multiplex an instance's online redo log files to safeguard against
damage to its online redo log files.
When multiplexing online redo log files, LGWR concurrently writes the same redo log information
to multiple identical online redo log files, thereby eliminating a single point of redo log failure.
Note:
Oracle recommends that you multiplex your redo log files. The loss of the log file data can be
catastrophic if recovery is required.
Adding Online Redo Log File Groups
In some cases you might need to create additional log file groups. For example, adding groups can
solve availability problems.
To create a new group of online redo log files, use the following SQL command:
ALTER DATABASE [database]
ADD LOGFILE [GROUP integer] filespec [, [GROUP integer] filespec]...
You specify the name and location of the members with the file specification. The value of the
GROUP parameter can be selected for each redo log file group. If you omit this parameter, the
Oracle server generates its value automatically.
Example:
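A minimal sketch of such a statement (the group number, file names, and size are illustrative):

ALTER DATABASE ADD LOGFILE GROUP 3
('/DISK3/log3a.rdo', '/DISK4/log3b.rdo')
SIZE 1M;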
Use the fully specified name of the log file members; otherwise the files are created in a default
directory of the database server.
If the file already exists, it must have the same size, and you must specify the REUSE option. You
can identify the target group either by specifying one or more members of the group or by
specifying the group number.
Example:
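A minimal sketch of such a statement (the file names and group numbers are illustrative):

ALTER DATABASE ADD LOGFILE MEMBER
'/DISK3/log1b.rdo' TO GROUP 1,
'/DISK4/log2b.rdo' TO GROUP 2;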
Oracle Database lets you save filled groups of redo log files to one or more offline destinations,
known collectively as the archived redo log, or more simply the archive log.
The process of turning redo log files into archived redo log files is called archiving.
This process is only possible if the database is running in ARCHIVELOG mode. You can choose
automatic or manual archiving
An archived redo log file is a copy of one of the filled members of a redo log group.
It includes the redo entries and the unique log sequence number of the identical member of the
redo log group.
For example :
If you are multiplexing your redo log, and if group 1 contains identical member files a_log1 and b_log1,
then the archiver process (ARCn) will archive one of these member files. Should a_log1 become
corrupted, then ARCn can still archive the identical b_log1.
The archived redo log contains a copy of every group created since you enabled archiving.
When the database is running in ARCHIVELOG mode, the log writer process (LGWR) cannot reuse
and hence overwrite a redo log group until it has been archived.
The background process ARCn automates archiving operations when automatic archiving is
enabled.
The database starts multiple archiver processes as needed to ensure that the archiving of filled
redo logs does not fall behind.
Uses of Archived Redo Log Files
You can use archived redo logs to:
Recover a database
Get information about the history of a database using the LogMiner utility.
Running a Database in NOARCHIVELOG Mode
When you run your database in NOARCHIVELOG mode, you disable the archiving of the redo log.
The database control file indicates that filled groups are not required to be archived.
Therefore, when a filled group becomes inactive after a log switch, the group is available for
reuse by LGWR.
NOARCHIVELOG mode protects a database from instance failure but not from media failure.
Only the most recent changes made to the database, which are stored in the online redo log
groups, are available for instance recovery.
If a media failure occurs while the database is in NOARCHIVELOG mode, you can only restore
the database to the point of the most recent full database backup.
In NOARCHIVELOG mode you cannot perform online tablespace backups, nor can you use online
tablespace backups taken earlier while the database was in ARCHIVELOG mode.
To restore a database operating in NOARCHIVELOG mode, you can use only whole database
backups taken while the database is closed.
Therefore, if you decide to operate a database in NOARCHIVELOG mode, take whole database
backups at regular, frequent intervals.
When you run a database in ARCHIVELOG mode, you enable the archiving of the redo log.
The database control file indicates that a group of filled redo log files cannot be reused by LGWR
until the group is archived.
A filled group becomes available for archiving immediately after a redo log switch occurs.
Changing the Database Archiving Mode
To change the archiving mode of the database, use the ALTER DATABASE statement with the
ARCHIVELOG or NOARCHIVELOG clause. To change the archiving mode, you must be connected
to the database with administrator privileges (AS SYSDBA).
The following steps switch the database archiving mode from NOARCHIVELOG to ARCHIVELOG:
Shut down the database instance.
SHUTDOWN
An open database must first be closed and any associated instances shut down before you can
switch the database archiving mode.
You cannot change the mode from ARCHIVELOG to NOARCHIVELOG if any datafiles need media
recovery.
Back up the database.
Before making any major change to a database, always back up the database to protect against
any problems.
This will be your final backup of the database in NOARCHIVELOG mode and can be used if
something goes wrong during the change to ARCHIVELOG mode.
Edit the initialization parameter file to include the initialization parameters that specify the
destinations for the archive log files .
Start a new instance and mount, but do not open, the database.
STARTUP MOUNT
To enable or disable archiving, the database must be mounted but not open.
Change the database archiving mode. Then open the database for normal operations.
ALTER DATABASE ARCHIVELOG;
ALTER DATABASE OPEN;
NOARCHIVELOG mode:
o The Redo Log Files are overwritten each time a log switch occurs, but the files are never
archived.
o When a Redo Log File (group) becomes inactive it is available for reuse by LGWR.
o This mode protects a database from instance failure, but NOT from media failure.
o In the event of media failure, database recovery can only be accomplished to the last full
backup of the database!
o You cannot perform tablespace backups in NOARCHIVELOG mode.
ARCHIVELOG mode
o Full On-line Redo Log Files are written by the ARCn process to specified archive locations,
either disk or tape; you can create more than one archiver process to improve
performance.
o A database control file tracks which Redo Log File groups are available for reuse (those
that have been archived).
o The DBA can use the last full backup and the Archived Log Files to recover the database.
o A Redo Log File that has not been archived cannot be reused until the file is archived; if
the database stops while awaiting archiving to complete, add an additional Redo Log Group.
This figure shows the archiving of log files by the ARCn process as log files are reused by LGWR.
Usually the parameter does not need to be set or changed - Oracle starts
additional ARCn processes as necessary to keep from falling behind on archiving.
Use additional ARCn processes to ensure automatic archiving of filled redo log files does not
fall behind.
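The parameter referred to above is presumably LOG_ARCHIVE_MAX_PROCESSES (an assumption based on the context); a minimal sketch of adjusting it dynamically:

ALTER SYSTEM SET LOG_ARCHIVE_MAX_PROCESSES = 4;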
Archiving to a single destination was once accomplished by specifying the LOG_ARCHIVE_DEST initialization parameter in the init.ora file; it has since been replaced in favor of the LOG_ARCHIVE_DEST_n parameter (see next bullet).
Multiplexing can be specified for up to 31 locations by using the LOG_ARCHIVE_DEST_n parameters (where n is a number from 1 to 31). This can also be used to duplex the files by specifying a value for the LOG_ARCHIVE_DEST_1 and LOG_ARCHIVE_DEST_2 parameters.
When multiplexing, you can specify remote disk drives if they are available to the server.
These examples show setting the init.ora parameters for the possible archive destination specifications:
Example of Multiplexing Three Archive Log Destinations (for those DBAs that are very
risk averse):
LOG_ARCHIVE_DEST_1 = 'LOCATION = /u01/student/dbockstd/oradata/archive'
LOG_ARCHIVE_DEST_2 = 'LOCATION = /u02/student/dbockstd/oradata/archive'
LOG_ARCHIVE_DEST_3 = 'LOCATION = /u03/student/dbockstd/oradata/archive'
The LOCATION keyword specifies an operating system specific path name.
Note: If you use a LOG_ARCHIVE_DEST_n parameter, then you cannot use the LOG_ARCHIVE_DEST or LOG_ARCHIVE_DUPLEX_DEST parameters.
Specify the naming pattern to use for naming Archive Redo Log Files with the LOG_ARCHIVE_FORMAT parameter in the init.ora file.
LOG_ARCHIVE_FORMAT = arch_%t_%s_%r.arc
where %t = thread number.
%s = log sequence number.
%r = reset logs ID (a timestamp value).
This example shows a sequence of Archive Redo Log files generated using the LOG_ARCHIVE_FORMAT setting above to name the files. All of the logs are for thread 1, with log sequence numbers of 100, 101, and 102, and reset logs ID 509210197, indicating the files are from the same database.
/disk1/archive/arch_1_100_509210197.arc,
/disk1/archive/arch_1_101_509210197.arc,
/disk1/archive/arch_1_102_509210197.arc
/disk2/archive/arch_1_100_509210197.arc,
/disk2/archive/arch_1_101_509210197.arc,
/disk2/archive/arch_1_102_509210197.arc
/disk3/archive/arch_1_100_509210197.arc,
/disk3/archive/arch_1_101_509210197.arc,
/disk3/archive/arch_1_102_509210197.arc
Viewing Information on Archive Redo Log Files
Information about the status of the archiving can be obtained from the V$INSTANCE dynamic
performance view. This shows the status for the DBORCL database.
SELECT archiver FROM v$instance;
ARCHIVE
-------
STARTED
Several dynamic performance views contain useful information about archived redo logs, as summarized
in the following table.

Dynamic Performance View    Description
V$DATABASE                  Shows if the database is in ARCHIVELOG or NOARCHIVELOG mode and if MANUAL (archiving mode) has been specified.
V$ARCHIVED_LOG              Displays historical archived log information from the control file.
V$ARCHIVE_DEST              Describes the current instance, all archive destinations, and the current value, mode, and status of these destinations.
V$ARCHIVE_PROCESSES         Displays information about the state of the various archive processes for an instance.
V$BACKUP_REDOLOG            Contains information about any backups of archived logs.
V$LOG                       Displays all redo log groups for the database and indicates which need to be archived.
V$LOG_HISTORY               Contains log history information such as which logs have been archived and the SCN range for each archived log.
A final caution about automatic archiving: Archive Redo Log files can consume a large quantity of disk
space. As you dispose of old copies of database backups, dispose of the associated Archive Redo Log
files.
Difference of having database in ARCHIVE and NOARCHIVELOG MODE
ARCHIVELOG and NOARCHIVELOG Mode Comparison ARCHIVELOG MODE:
Advantages:
1. You can perform hot backups (backups when the database is online).
2. The archive logs and the last full backup (offline or online) or an older backup can completely recover the
database without losing any data because all changes made in the database are stored in the log file.
Disadvantages:
1. It requires additional disk space to store archived log files. However, the agent offers the option to purge the
logs after they have been backed up, giving you the opportunity to free disk space if you need it.
NOARCHIVELOG MODE:
Advantages:
1. It requires no additional disk space to store archived log files.
Disadvantages:
1. If you must recover a database, you can only restore the last full offline backup. As a result, any changes made
to the database after the last full offline backup are lost.
2. Database downtime is significant because you cannot back up the database online. This limitation becomes a
very serious consideration for large databases.
Important!!!
NOARCHIVELOG mode does not guarantee Oracle database PITR (Point-in-Time-Recovery) recovery if there is
a disaster.
If the Oracle database is expected to remain in NOARCHIVELOG mode, then you must back up the full set of Oracle
database files while the database is offline, and the database can be restored only to the time of the last full offline
backup time (all changes after that backup are lost).
When backups scheduled using RMAN utility, ensure that the database runs in ARCHIVELOG mode
The choice of whether to enable the archiving of filled groups of redo log files depends on the availability
and reliability requirements of the application running on the database. If you cannot afford to lose any
data in your database in the event of a disk failure, use ARCHIVELOG mode. The archiving of filled redo
log files can require you to perform extra administrative operations.
A database backup, together with online and archived redo log files, guarantees that you can
recover all committed transactions in the event of an operating system or disk failure.
If you keep an archived log, you can use a backup taken while the database is open and in
normal system use.
You can keep a standby database current with its original database by continuously applying the
original archived redo logs to the standby.
You can configure an instance to archive filled redo log files automatically, or you can archive manually.
For convenience and efficiency, automatic archiving is usually best. Figure 11-1 illustrates how the
archiver process (ARC0 in this illustration) writes filled redo log files to the database archived redo log.
If all databases in a distributed database operate in ARCHIVELOG mode, you can perform coordinated
distributed database recovery. However, if any database in a distributed database is
in NOARCHIVELOG mode, recovery of a global distributed database (to make all databases consistent)
is limited by the last full backup of any database operating in NOARCHIVELOG mode.
Figure 11-1 Redo Log File Use in ARCHIVELOG Mode
Controlling Archiving
This section describes how to set the archiving mode of the database and how to control the archiving
process. The following topics are discussed:
Start a new instance and mount, but do not open, the database.
STARTUP MOUNT
To enable or disable archiving, the database must be mounted but not open.
Change the database archiving mode. Then open the database for normal operations.
ALTER DATABASE ARCHIVELOG;
ALTER DATABASE OPEN;
Method  Initialization Parameter                Host             Example
------  --------------------------------------  ---------------  ------------------------------------------
1       LOG_ARCHIVE_DEST_n                      Local or remote  LOG_ARCHIVE_DEST_1 = 'LOCATION=/disk1/arc'
        (where n is an integer from 1 to 10)                     LOG_ARCHIVE_DEST_2 = 'SERVICE=standby1'
2       LOG_ARCHIVE_DEST and                    Local only       LOG_ARCHIVE_DEST = '/disk1/arc'
        LOG_ARCHIVE_DUPLEX_DEST                                  LOG_ARCHIVE_DUPLEX_DEST = '/disk2/arc'

With the LOG_ARCHIVE_DEST_n parameter, LOCATION indicates a local file system location and SERVICE indicates remote archival through an Oracle Net service name.
If you use the LOCATION keyword, specify a valid path name for your operating system. If you
specify SERVICE, the database translates the net service name through the tnsnames.ora file to a
connect descriptor. The descriptor contains the information necessary for connecting to the remote
database. The service name must have an associated database SID, so that the database correctly
updates the log history of the control file for the standby database.
Perform the following steps to set the destination for archived redo logs using
the LOG_ARCHIVE_DEST_n initialization parameter:
1.
Use SQL*Plus to shut down the database.
2.
Set the LOG_ARCHIVE_DEST_n initialization parameter to specify from one to ten archiving
locations. The LOCATION keyword specifies an operating system specific path name. For
example, enter:
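A sketch of such settings (the paths are illustrative):

LOG_ARCHIVE_DEST_1 = 'LOCATION = /disk1/archive'
LOG_ARCHIVE_DEST_2 = 'LOCATION = /disk2/archive'
LOG_ARCHIVE_DEST_3 = 'LOCATION = /disk3/archive'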
Note:
If the COMPATIBLE initialization parameter is set to 10.0.0 or higher, the database requires
the specification of the resetlogs ID (%r) when you include
the LOG_ARCHIVE_FORMAT parameter. The default for this parameter is operating system
dependent. For example, this is the default format for UNIX:
LOG_ARCHIVE_FORMAT=%t_%s_%r.dbf
The incarnation of a database changes when you open it with the RESETLOGS option.
Specifying %r causes the database to capture the resetlogs ID in the archived redo log file
name.
The following example shows a setting of LOG_ARCHIVE_FORMAT:
LOG_ARCHIVE_FORMAT = arch_%t_%s_%r.arc
This setting will generate archived logs as follows for thread 1; log sequence numbers 100, 101,
and 102; resetlogs ID 509210197. The identical resetlogs ID indicates that the files are all from
the same database incarnation:
/disk1/archive/arch_1_100_509210197.arc,
/disk1/archive/arch_1_101_509210197.arc,
/disk1/archive/arch_1_102_509210197.arc
/disk2/archive/arch_1_100_509210197.arc,
/disk2/archive/arch_1_101_509210197.arc,
/disk2/archive/arch_1_102_509210197.arc
/disk3/archive/arch_1_100_509210197.arc,
/disk3/archive/arch_1_101_509210197.arc,
/disk3/archive/arch_1_102_509210197.arc
2.
Specify destinations for the LOG_ARCHIVE_DEST and LOG_ARCHIVE_DUPLEX_DEST parameters
(you can also specify LOG_ARCHIVE_DUPLEX_DEST dynamically using the ALTER SYSTEM statement). For
example, enter:
LOG_ARCHIVE_DEST = '/disk1/archive'
LOG_ARCHIVE_DUPLEX_DEST = '/disk2/archive'
Valid/Invalid: indicates whether the disk location or service name information is specified and
valid
Enabled/Disabled: indicates the availability state of the location and whether the database can
use the destination
Several combinations of these characteristics are possible. To obtain the current status and other
information about each destination for an instance, query the V$ARCHIVE_DEST view.
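A minimal sketch of such a query (DEST_NAME, STATUS, and DESTINATION are standard columns of V$ARCHIVE_DEST):

SELECT dest_name, status, destination
FROM v$archive_dest;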
The characteristics determining a location's status that appear in the view are shown in Table 11-1. Note
that for a destination to be used, its characteristics must be valid, enabled, and active.
Table 11-1 Destination Status

STATUS      Valid   Enabled   Active
----------  ------  --------  ------
VALID       True    True      True
INACTIVE    False   n/a       n/a
ERROR       True    True      False
FULL        True    True      False
DEFERRED    True    False     True
DISABLED    True    False     False
BAD PARAM   n/a     n/a       n/a
The availability state of the destination is DEFER, unless there is a failure of its parent destination, in
which case its state becomes ENABLE.
If you are operating your standby database in managed recovery mode, you can keep your standby
database synchronized with your source database by automatically applying transmitted archived redo
logs.
To transmit files successfully to a standby database, either ARCn or a server process must do the
following:
Transmit the archived logs in conjunction with a remote file server (RFS) process that resides
on the remote server
Each ARCn process has a corresponding RFS for each standby destination. For example, if three
ARCn processes are archiving to two standby databases, then Oracle Database establishes six RFS
connections.
Creating file names on the standby database by using the STANDBY_ARCHIVE_DEST parameter
Updating the standby database control file (which Recovery Manager can then use for recovery)
Archived redo logs are integral to maintaining a standby database, which is an exact replica of a
database. You can operate your database in standby archiving mode, which automatically updates a
standby database with archived redo logs from the original database.
Omitting the MANDATORY attribute for a destination is the same as specifying OPTIONAL.
You must have at least one local destination, which you can declare OPTIONAL or MANDATORY.
If you DEFER a MANDATORY destination, and the database overwrites the online log without
transferring the archived log to the standby site, then you must transfer the log to the standby
manually.
If you are duplexing the archived logs, you can establish which destinations are mandatory or optional by
using the LOG_ARCHIVE_DEST and LOG_ARCHIVE_DUPLEX_DEST parameters. The following rules
apply:
Any destination declared by LOG_ARCHIVE_DUPLEX_DEST is optional
if LOG_ARCHIVE_MIN_SUCCEED_DEST = 1 and mandatory if LOG_ARCHIVE_MIN_SUCCEED_DEST = 2.
The meanings of the values you can declare for LOG_ARCHIVE_MIN_SUCCEED_DEST in this scenario are as follows:

Value          Meaning
1              The database can reuse log files only if at least one of the OPTIONAL destinations succeeds.
2              The database can reuse log files only if at least two of the OPTIONAL destinations succeed.
3              The database can reuse log files only if all of the OPTIONAL destinations succeed.
4 or greater
This scenario shows that even though you do not explicitly set any of your destinations
to MANDATORY using the LOG_ARCHIVE_DEST_n parameter, the database must successfully archive
to one or more of these locations when LOG_ARCHIVE_MIN_SUCCEED_DEST is set to 1, 2, or 3.
Scenario for Archiving to Both Mandatory and Optional Destinations
Consider a case in which:
Value          Meaning
1              The database ignores the value and uses the number of MANDATORY destinations (in this example, 2).
2              The database can reuse log files even if no OPTIONAL destination succeeds.
3              The database can reuse logs only if at least one OPTIONAL destination succeeds.
4              The database can reuse logs only if both OPTIONAL destinations succeed.
5 or greater
This case shows that the database must archive to the destinations you specify as MANDATORY,
regardless of whether you set LOG_ARCHIVE_MIN_SUCCEED_DEST to archive to a smaller number of
destinations.
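As a sketch of how such a configuration is expressed (the directory path and service name are illustrative), the destinations and minimum-success count can be set with initialization parameters like these:

LOG_ARCHIVE_DEST_1 = 'LOCATION=/u01/arch MANDATORY'
LOG_ARCHIVE_DEST_2 = 'SERVICE=standby1 OPTIONAL'
LOG_ARCHIVE_MIN_SUCCEED_DEST = 1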
Change the destination by deferring the destination, specifying the destination as optional, or
changing the service.
ARCn reopens a destination only when starting an archive operation from the beginning of the
log file, never during a current operation. ARCn always retries the log copy from the beginning.
If you specified REOPEN, either with a specified interval or with the default, ARCn checks whether
the time of the recorded error plus the REOPEN interval is less than the current time. If it is,
ARCn retries the log copy; otherwise, the destination remains in the error state.
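A sketch of specifying a REOPEN interval as part of a destination definition (the destination number, service name, and 60-second interval are illustrative):

ALTER SYSTEM SET LOG_ARCHIVE_DEST_2 = 'SERVICE=standby1 OPTIONAL REOPEN=60';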
Level  Meaning
-----  ---------------------------------------------------------------
0      Disable archivelog tracing. This is the default.
1      Track archival of redo log file.
2      Track archival status for each archivelog destination.
4      Track archival operational phase.
8      Track archivelog destination activity.
16     Track detailed archivelog destination activity.
32     Track archivelog destination parameter modifications.
64     Track ARCn process state activity.
128    Track FAL (fetch archived log) server related activities.
256    Supported in a future release.
512    Tracks asynchronous LGWR activity.
1024   RFS physical client tracking.
2048   ARCn/RFS heartbeat tracking.
4096   Track real-time apply.
8192   Track redo apply activity (media recovery or physical standby).
You can combine tracing levels by specifying a value equal to the sum of the individual levels that you
would like to trace. For example, setting LOG_ARCHIVE_TRACE=12 will generate trace level 8 and 4
output. You can set different values for the primary and any standby database.
The default value for the LOG_ARCHIVE_TRACE parameter is 0. At this level, the archivelog process
generates appropriate alert and trace entries for error conditions.
You can change the value of this parameter dynamically using the ALTER SYSTEM statement. The
database must be mounted but not open. For example:
ALTER SYSTEM SET LOG_ARCHIVE_TRACE=12;
Changes initiated in this manner will take effect at the start of the next archiving operation.
Viewing Information About the Archived Redo Log
You can display information about the archived redo log using dynamic performance views or
the ARCHIVE LOG LIST command.
The following dynamic performance views display this information:

View                   Description
---------------------  ----------------------------------------------------------------------
V$DATABASE             Shows if the database is in ARCHIVELOG or NOARCHIVELOG mode and if MANUAL (archiving mode) has been specified.
V$ARCHIVED_LOG         Displays historical archived log information from the control file. If you use a recovery catalog, the RC_ARCHIVED_LOG view contains similar information.
V$ARCHIVE_DEST         Describes the current instance, all archive destinations, and the current value, mode, and status of these destinations.
V$ARCHIVE_PROCESSES    Displays information about the state of the various archive processes for an instance.
V$BACKUP_REDOLOG       Contains information about any backups of archived logs. If you use a recovery catalog, the RC_BACKUP_REDOLOG view contains similar information.
V$LOG                  Displays all redo log groups for the database and indicates which need to be archived.
V$LOG_HISTORY          Contains log history information such as which logs have been archived and the SCN range for each archived log.
For example, the following query displays which redo log group requires archiving:
SELECT GROUP#, ARCHIVED FROM SYS.V$LOG;
GROUP# ARC
------ ---
     1 YES
     2 NO
The oldest filled redo log group has a sequence number of 11160.
The next filled redo log group to archive has a sequence number of 11163.
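These values come from output of the ARCHIVE LOG LIST command, which looks along these lines (the destination path is illustrative; the sequence numbers match those quoted above):

SQL> ARCHIVE LOG LIST
Database log mode              Archive Mode
Automatic archival             Enabled
Archive destination            /u01/app/oracle/arch
Oldest online log sequence     11160
Next log sequence to archive   11163
Current log sequence           11163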
USER MANAGEMENT
(1) Authentication
o You can tie authentication to third-party providers like Kerberos or DCE (called network authentication); this needs EE + the Security Pack.
o You can provide authentication from the middle tier (called multitier authentication).
(2) Authorization
With Virtual Private Databases (VPDs), Oracle allows column masking to hide columns.
When you select the row, Oracle will only display NULL for the secure columns.
If you're securing at the row level and column level, it's probably easier to just implement VPDs and not the
secure views.
What is a Virtual Private Database (VPD)?
A VPD is just asking Oracle to put a where clause on DML against an object with a security policy on it.
A security policy is defined with DBMS_RLS package.
A security policy is normally defined in a CONTEXT (a piece of data that says how the where clause should
be built).
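A minimal sketch of defining a VPD policy with DBMS_RLS (the table, function, policy, and application-context names are illustrative assumptions; SEC_RELEVANT_COLS with DBMS_RLS.ALL_ROWS gives the column-masking behavior described above):

-- Policy function: returns the WHERE-clause predicate Oracle appends to statements
CREATE OR REPLACE FUNCTION hr.emp_policy_fn (
  p_schema IN VARCHAR2,
  p_object IN VARCHAR2)
RETURN VARCHAR2
IS
BEGIN
  RETURN 'department_id = SYS_CONTEXT(''emp_ctx'', ''dept_id'')';
END;
/

-- Register the policy on the table; SALARY is masked (shown as NULL) for rows
-- that do not satisfy the predicate
BEGIN
  DBMS_RLS.ADD_POLICY(
    object_schema         => 'HR',
    object_name           => 'EMPLOYEES',
    policy_name           => 'EMP_DEPT_POLICY',
    function_schema       => 'HR',
    policy_function       => 'EMP_POLICY_FN',
    statement_types       => 'SELECT',
    sec_relevant_cols     => 'SALARY',
    sec_relevant_cols_opt => DBMS_RLS.ALL_ROWS);
END;
/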
(4) Audit
o Although most predefined accounts are locked, you should secure them by changing their passwords.
o You can expire passwords of unused accounts.
o Also make sure that unused accounts are locked.
o Password aging, expiration rules and history can be managed using profiles (see below).
ALTER USER HR ACCOUNT LOCK;
Limits can be imposed at the user session level, or for each database call.
You can define limits on CPU time, number of logical reads, number of concurrent sessions for each user,
session idle time, session elapsed connect time and the amount of private SGA space for a session.
Use AUDIT SESSION to gather information about limits such as CONNECT_TIME and LOGICAL_READS_PER_SESSION.
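A sketch of enabling session auditing and reviewing the recorded resource usage afterwards (this assumes traditional database auditing is active via AUDIT_TRAIL; the columns shown come from the DBA_AUDIT_SESSION view):

AUDIT SESSION;

SELECT username, logoff_time, logoff_lread, session_cpu
FROM   dba_audit_session
ORDER BY logoff_time;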
About Profiles
RESOURCE_NAME              RESOURCE_TYPE LIMIT
-------------------------- ------------- ----------
PASSWORD_VERIFY_FUNCTION   PASSWORD      NULL
PASSWORD_LOCK_TIME         PASSWORD      1
PASSWORD_LIFE_TIME         PASSWORD      180
FAILED_LOGIN_ATTEMPTS      PASSWORD      10
PASSWORD_GRACE_TIME        PASSWORD      7
CONNECT_TIME               KERNEL        UNLIMITED
COMPOSITE_LIMIT            KERNEL        DEFAULT
CPU_PER_SESSION            KERNEL        DEFAULT
CPU_PER_CALL               KERNEL        DEFAULT
LOGICAL_READS_PER_SESSION  KERNEL        DEFAULT
LOGICAL_READS_PER_CALL     KERNEL        DEFAULT
PRIVATE_SGA                KERNEL        DEFAULT
PASSWORD_VERIFY_FUNCTION   PASSWORD      DEFAULT
PASSWORD_LIFE_TIME         PASSWORD      30
PASSWORD_REUSE_TIME        PASSWORD      120
PASSWORD_GRACE_TIME        PASSWORD      3
PASSWORD_LOCK_TIME         PASSWORD      5
SESSIONS_PER_USER          KERNEL        1
PASSWORD_REUSE_MAX         PASSWORD      10
CONNECT_TIME               KERNEL        600
FAILED_LOGIN_ATTEMPTS      PASSWORD      3
IDLE_TIME                  KERNEL        20
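The profile whose limits are listed above could have been created with a statement along these lines (a sketch; the name my_profile and the exact clause list are assumed from the limits shown and from part (b) below):

CREATE PROFILE my_profile LIMIT
  SESSIONS_PER_USER      1
  CONNECT_TIME           600
  IDLE_TIME              20
  FAILED_LOGIN_ATTEMPTS  3
  PASSWORD_LIFE_TIME     30
  PASSWORD_REUSE_TIME    120
  PASSWORD_REUSE_MAX     10
  PASSWORD_LOCK_TIME     5
  PASSWORD_GRACE_TIME    3;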
(b) Assign profile to user and check the users' resource constraints:
SQL> alter user scott profile my_profile;
SQL> conn scott/pwd
SQL> select * from user_resource_limits;
RESOURCE_NAME                    LIMIT
-------------------------------- ----------
COMPOSITE_LIMIT                  UNLIMITED
SESSIONS_PER_USER                1
CPU_PER_SESSION                  UNLIMITED
CPU_PER_CALL                     UNLIMITED
LOGICAL_READS_PER_SESSION        UNLIMITED
LOGICAL_READS_PER_CALL           UNLIMITED
IDLE_TIME                        20
CONNECT_TIME                     600
PRIVATE_SGA                      UNLIMITED
Oracle User Accounts
User Account Creation
The CREATE USER command creates a system user as shown here.
CREATE USER Scott IDENTIFIED BY Tiger;
The user Scott is a standard "dummy" user account found on many Oracle systems for the
purposes of system testing; it should be disabled to remove a potential hacker access route.
The IDENTIFIED BY clause specifies the user password.
In order to create a user, a DBA must have the CREATE USER system privilege.
Users also have a privilege domain; initially the user account has NO privileges (it is empty).
In order for a user to connect to Oracle, you must grant the user the CREATE SESSION
system privilege.
Each username must be unique within a database. A username cannot be the same as the
name of a role (roles are described in a later module).
Each user has a schema for the storage of objects within the database (see the figure below).
Two users can name objects identically because the objects are referred to globally by using a
combination of the username and object name.
Example: User350.Employee. Each user account can have a table named Employee
because each table is stored within the user's schema.
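A sketch that ties these points together (the user name, password, and quota are illustrative): the account is created, allowed to connect, and given a quota so that it can own objects in its schema.

CREATE USER user350 IDENTIFIED BY secret
  DEFAULT TABLESPACE users
  QUOTA 10M ON users;

GRANT CREATE SESSION TO user350;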
Database Authentication
Database authentication involves the use of a standard user account and password. Oracle performs
the authentication.
Each password must be made up of single-byte characters, even if the database uses a multibyte character set.
Advantages:
o User accounts and all authentication are controlled by the database. There is no reliance
on anything outside of the database.
o Oracle provides strong password management features to enhance security when using
database authentication.
o It is easier to administer when there are small user communities.
Oracle recommends using password management that includes password aging/expiration, account
locking, password history, and password complexity verification.
External Authentication
External Authentication requires the creation of user accounts that are maintained by
Oracle. Passwords are administered by an external service such as the operating system or a network
service (Oracle Net; network authentication through the network is covered in the
course Oracle Database Administration Fundamentals II). This option is generally useful when a user
logs on directly to the machine where the Oracle server is running.
In order for the operating system to authenticate users, a DBA sets the init.ora
parameter OS_AUTHENT_PREFIX to some set value; the default value is OPS$ in
order to provide for backward compatibility to earlier versions of Oracle.
This prefix is prepended to the user's operating system account username to form the database username.
You can also use a NULL string (a set of empty double quotes: "" ) for the prefix so that the
Oracle username exactly matches the Operating System user name. This eliminates the need for
any prefix.
#init.ora parameter
OS_AUTHENT_PREFIX=OPS$
#create user command
CREATE USER OPS$Scott
IDENTIFIED EXTERNALLY
DEFAULT TABLESPACE users
TEMPORARY TABLESPACE temp
QUOTA UNLIMITED ON Users;
When Scott attempts to connect to the database, Oracle will check to see if there is a database user
named OPS$Scott and allow or deny the user access as appropriate. Thus, to use SQL*Plus to log on to
the system, the LINUX/UNIX user Scott enters the following command from the operating system:
$ sqlplus /
All references in commands that refer to a user that is authenticated by the operating system must
include the defined prefix OPS$.
Oracle allows operating-system authentication only for secure connections; this is the default. This
precludes use of Oracle Net or a shared server configuration and prevents a remote user from
impersonating another operating system user over a network.
The REMOTE_OS_AUTHENT parameter can be set to force acceptance of a client operating system user
name from a nonsecure connection.
Changes in the parameter take effect the next time the instance starts and the database is
mounted.
Global Authentication
Central authentication can be accomplished through the use of Oracle Advanced Security software for a
directory service.
Global users termed Enterprise Users are authenticated by SSL (secure socket layers) and the user
accounts are managed outside of the database.
Global Roles are defined in a database and known only to that database and authorization for the roles
is done through the directory service. The roles can be used to provide access privileges
Enterprise Roles can be created to provide access across multiple databases. They can consist of one
or more global roles and are essentially containers for global roles.
Creating a Global User Example:
Most users don't need their own schemas; this approach separates users from databases.
CREATE USER inventory_schema IDENTIFIED GLOBALLY AS '';
In the directory create multiple enterprise users and a mapping object to tell the database how
to map users DNs to the shared schema.
Middle-tier server authenticates itself with the database server and client an application user
or another application.
Client (a database user) is not authenticated by the middle-tier server; instead, the identity
and database password are passed through the middle-tier server to the database server for
authentication.
Global users are authenticated by the middle-tier server and it passes either a Distinguished
Name (DN) or Certificate through the middle-tier for retrieval of a client user name.
The middle-tier server proxies a client through the GRANT CONNECT THROUGH clause of the ALTER
USER statement.
ALTER USER Scott GRANT CONNECT THROUGH Proxy_Server
WITH ROLE ALL EXCEPT Inventory;
Users with privileges to create certain types of objects can create those objects in the
specified tablespace.
Oracle Database limits the amount of space that can be allocated for storage of a user's
objects within the specified tablespace to the amount of the quota.
By default, a user has no quota on any tablespace in the database.
If the user has the privilege to create a schema object, then you must assign a quota to allow
the user to create objects.
Minimally, assign users a quota for the default tablespace, and additional quotas for other
tablespaces in which they can create objects.
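For example (a sketch; the user, tablespaces, and sizes are illustrative), quotas on the default tablespace and an additional data tablespace might be assigned like this:

ALTER USER scott
  QUOTA 50M ON users
  QUOTA 100M ON data01;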
Temporary Tablespace
The default Temporary Tablespace for a user is also the SYSTEM tablespace.
Allowing this situation to exist for system users will guarantee that user processing will cause
contention with access to the data dictionary.
Generally a DBA will create a TEMP tablespace that will be shared by all users for processing
that requires sorting and joins.
Tablespace Quotas
Assigning a quota ensures that users with privileges to create objects can create those objects in the
tablespace.
A quota also ensures that the amount of space allocated for storage by an individual user is not
exceeded. The default is NO QUOTA on any tablespace, so a quota must be set or else the Oracle user
account cannot be used to create any objects.
Assigning Other Tablespace Quotas: You can assign a quota on tablespaces other than
the DEFAULT and TEMPORARY tablespaces for users.
This is often done for senior systems analysts and programmers who are authorized to create
objects in a DATA tablespace.
If you change a quota and the new quota is smaller than the old one, then the following rules apply:
For users who have already exceeded the new quota, new objects cannot be created, and
existing objects cannot be allocated more space until the combined space of the user's objects is
within the new quota.
For users who have not exceeded the new quota, user objects can be allocated additional
space up to the new quota.
Granting the UNLIMITED TABLESPACE privilege to a user account overrides all quota settings for all
tablespaces.
Revoking Tablespace Access
A DBA can revoke tablespace access by setting the user's quota to zero for the tablespace through use
of the ALTER USER command. This example alters the user named SCOTT for the USERS tablespace.
ALTER USER Scott QUOTA 0 ON Users;
Existing objects for the user will remain within the tablespace, but cannot be allocated additional disk
space.
Alter User Command
Users can use the ALTER USER command to change their own password.
To make any other use of the command, a user must have the ALTER USER system privilege, something the DBA should not give to individual users.
Changing a user's security setting with the ALTER USER command changes future sessions, not a
current session to which the user may be connected.
Example ALTER USER command:
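The example statement itself is not reproduced above; a sketch of a typical ALTER USER command (the values are illustrative) is:

ALTER USER scott
  IDENTIFIED BY new_password
  DEFAULT TABLESPACE users
  TEMPORARY TABLESPACE temp
  QUOTA 20M ON users;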
Dropping a user causes the user and the user schema to be immediately deleted from the
database.
If the user has created objects within their schema, it is necessary to use the CASCADE option
in order to drop a user.
If you fail to specify CASCADE when user objects exist, an error message is generated and the
user is not dropped.
In order for a DBA to drop a user, the DBA must have the DROP USER system privilege.
CAUTION: You need to exercise caution with the CASCADE option to ensure that you don't drop a user
where views or procedures exist that depend upon tables that the user created. In those cases,
dropping a user requires a lot of detailed investigation and careful deletion of objects.
If you want to deny access to the database, but do not want to drop the user and the user's objects, you
should revoke the CREATE SESSION privilege for the user temporarily.
You cannot drop a user who is connected to the database - you must first terminate the user's session
with the ALTER SYSTEM KILL SESSION command.
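A sketch of the full sequence (the user and session identifiers are illustrative): find the session, kill it, then drop the user together with the objects in its schema.

SELECT sid, serial# FROM v$session WHERE username = 'SCOTT';

ALTER SYSTEM KILL SESSION '49,23';

DROP USER scott CASCADE;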
Data Dictionary Tables for User Accounts
The only data dictionary table used by a DBA for user account information is DBA_USERS.
COLUMN username FORMAT A15;
COLUMN account_status FORMAT A20;
COLUMN default_tablespace FORMAT A19;
SELECT username, account_status, default_tablespace
FROM dba_users;
USERNAME        ACCOUNT_STATUS       DEFAULT_TABLESPACE
--------------- -------------------- -------------------
OUTLN           OPEN                 SYSTEM
USER350         OPEN                 USERS
DBOCK           OPEN                 DATA01
SYS             OPEN                 SYSTEM
SYSTEM          OPEN                 SYSTEM
USER349         EXPIRED              SYSTEM
SCOTT           EXPIRED              USERS
TSMSYS          EXPIRED & LOCKED     SYSTEM
DIP             EXPIRED & LOCKED     SYSTEM
DBSNMP          EXPIRED & LOCKED     SYSAUX
ORACLE_OCM      EXPIRED & LOCKED     SYSTEM

11 rows selected.
Site Licensing
One of the DBA's responsibilities is to ensure that the Oracle Server license agreement is maintained.
A DBA can track and limit session access for users concurrently accessing the database through use of
the LICENSE_MAX_SESSIONS,LICENSE_SESSIONS_WARNING,
and LICENSE_MAX_USERS parameters in the PFILE. If an organization's license is unlimited, these
parameters may have their value set to 0.
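A sketch of the kind of query that produces the output below (this assumes the V$LICENSE view; the column aliases match the headings shown):

SELECT sessions_warning   AS s_warning,
       sessions_current   AS s_current,
       sessions_highwater AS s_high,
       users_max
FROM   v$license;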
S_WARNING  S_CURRENT  S_HIGH  USERS_MAX
---------  ---------  ------  ---------
       80         65      82         50
Privileges
General
Authentication means to authenticate a system user account ID for access to an Oracle database.
Authorization means to verify that a system user account ID has been granted the right, called
a privilege, to execute a particular type of SQL statement or to access objects belonging to another
system user account.
System privileges allow a system user to perform a specific type of operation or set of
operations. Typical operations are creating objects, dropping objects, and altering objects.
Schema Object privileges allow a system user to perform a specific type of operation on a
specific schema object. Typical objects include tables, views, procedures, functions, sequences,
etc.
Table privileges are schema object privileges specifically applicable to Data Manipulation
Language (DML) operations and Data Definition Language (DDL) operations for tables.
View privileges apply to the use of view objects that reference base tables and other views.
Type privileges apply to the creation of named types such as object types, VARRAYs, and
nested tables.
System Privileges
As Oracle has matured as a product, the number of system privileges has grown. The current number
is over 100. A complete listing is available by querying the view named SYSTEM_PRIVILEGE_MAP.
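For example, the complete listing of system privilege names can be retrieved with a query such as:

SELECT name FROM system_privilege_map ORDER BY name;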
System privileges fall into three general categories:
Those enabling system wide operations, for example, CREATE SESSION, CREATE TABLESPACE.
Those enabling the management of an object that is owned by the system user, for example,
CREATE TABLE.
Those enabling the management of an object that is owned by any system user, for example,
CREATE ANY TABLE.
If you can create an object (for example, via the CREATE TABLE privilege), then you
can also drop the objects you create.
Some examples of system privileges include:
Category    Privilege
----------  -----------------------------------------------------------------------------
SESSION     Create Session, Alter Session
TABLESPACE  Create Tablespace, Alter Tablespace, Drop Tablespace, Unlimited Tablespace
TABLE       Create Table, Create Any Table, Alter Any Table, Drop Any Table, Select Any Table
INDEX       Create Any Index, Alter Any Index
Some privileges that you might expect to exist, such as CREATE INDEX, do not exist since if you
can CREATE TABLE, you can also create the indexes that go with it and use the ANALYZE command.
Some privileges, such as UNLIMITED TABLESPACE cannot be granted to a role (roles are covered in
Module 14-3)
Granting System Privileges
The command to grant a system privilege is the GRANT command. Some example GRANT commands
are shown here.
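A sketch of typical GRANT commands (the user and role names are illustrative):

GRANT CREATE SESSION TO user350;
GRANT CREATE TABLE TO account_mgr;              -- grant to a role
GRANT CREATE SESSION TO PUBLIC;                 -- available to every system user
GRANT CREATE VIEW TO user350 WITH ADMIN OPTION;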
In general, you can grant a privilege to either a user or to a role. You can also grant a privilege
to PUBLIC - this makes the privilege available to every system user.
The WITH ADMIN OPTION clause enables the grantee (person receiving the privilege) to grant the
privilege or role to other system users or roles; however, you cannot use this clause unless you have,
yourself, been granted the privilege with this clause.
The GRANT ANY PRIVILEGE system privilege also enables a system user to grant or revoke privileges.
The GRANT ANY ROLE system privilege is a dangerous one that you don't give to the average system
user since then the user could grant any role to any other system user.
SYSDBA and SYSOPER Privileges
SYSDBA and SYSOPER are special privileges that should only be granted to a DBA.
This table lists example privileges associated with each of these special privileges.
Privilege  Operations Authorized
---------  --------------------------------------------------------------
SYSOPER    STARTUP and SHUTDOWN, ALTER DATABASE OPEN | MOUNT,
           RECOVER DATABASE, ALTER DATABASE ARCHIVELOG,
           RESTRICTED SESSION, ALTER DATABASE BEGIN/END BACKUP
SYSDBA     SYSOPER privileges WITH ADMIN OPTION, CREATE DATABASE
When you allow database access through a password file using
the REMOTE_LOGIN_PASSWORDFILE parameter that was discussed in an earlier module, you can
add users to this password file by granting them SYSOPER or SYSDBA system privileges.
You cannot grant the SYSDBA or SYSOPER privileges by using the WITH ADMIN OPTION. Also, you
must have these privileges in order to grant/revoke them from another system user.
Displaying System Privileges
You can display system privileges by querying the DBA_SYS_PRIVS view. Here is the result of a query
of the SIUE Oracle database.
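A sketch of the query behind this output (DBA_SYS_PRIVS has GRANTEE, PRIVILEGE, and ADMIN_OPTION columns; the grantee name is illustrative):

SELECT privilege, admin_option
FROM   dba_sys_privs
WHERE  grantee = 'DBOCK';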
PRIVILEGE                  ADM
-------------------------- ---
DROP TABLESPACE            NO
ALTER TABLESPACE           NO
You can view the users who have SYSOPER and SYSDBA privileges by querying v$pwfile_users.
Note: Your student databases will display "no rows selected"; this output comes from the DBORCL database.
SELECT * FROM v$pwfile_users;
USERNAME        SYSDB SYSOP
--------------- ----- -----
INTERNAL        TRUE  TRUE
SYS             TRUE  TRUE
DBOCK           TRUE  FALSE
JAGREEN         TRUE  TRUE
The view SESSION_PRIVS gives the privileges held by a user for the current logon session.
Revoking System Privileges
The REVOKE command can be used to revoke privileges from a system user or from a role.
Only privileges granted directly with a GRANT command can be revoked.
There are no cascading effects when a system privilege is revoked. For example, if the DBA grants
SELECT ANY TABLE WITH ADMIN OPTION to system user1, and system user1 then grants
SELECT ANY TABLE to system user2, then when system user1 has the privilege revoked,
system user2 still has the privilege.
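A sketch of revoking a system privilege from a user and from a role (the names are illustrative):

REVOKE CREATE TABLE FROM user350;
REVOKE SELECT ANY TABLE FROM account_mgr;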
System Privilege Restrictions
Oracle provides for data dictionary protection by enabling the restriction of access to dictionary objects
to the SYSDBA and SYSOPER roles.
For example, if this protection is in place, the SELECT ANY TABLE privilege to allow a user to access
views and tables in other schemas would not enable the system user to access dictionary objects.
The appropriate init.ora parameter is O7_DICTIONARY_ACCESSIBILITY. When it is set to FALSE,
system privileges allowing access to objects in other schemas do not allow access to the dictionary
schema. If it is set to TRUE, then access to the SYS schema is allowed (this is the behavior of Oracle 7).
Schema Object Privileges
Schema object privileges authorize the system user to perform an operation on the object, such as
selecting or deleting rows in a table.
A user account automatically has all object privileges for schema objects created within his/her
schema. Any privilege owned by a user account can be granted to another user account or to a role.
The following table provided by Oracle Corporation gives a map of object privileges and the type of
object to which a privilege applies.
OBJECT PRIVILEGE  Table  View  Sequence  Procedure
----------------  -----  ----  --------  ---------
ALTER             XXX          XXX
DELETE            XXX    XXX
EXECUTE                                  XXX
INDEX             XXX
INSERT            XXX    XXX
REFERENCES        XXX
SELECT            XXX    XXX   XXX
UPDATE            XXX    XXX
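A sketch of GRANT statements matching the examples described in the following paragraphs (the exact original statements are assumed; object and user names follow the descriptions, and the 4th example is omitted because its details are unclear):

GRANT SELECT, ALTER ON user350.orders TO PUBLIC;
GRANT SELECT ON user350.order_details TO user349 WITH GRANT OPTION;
GRANT UPDATE (price, description) ON user350.order_details TO user349;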
Here the SELECT and ALTER privileges were granted for the Orders table belonging to the system
user User350. These two privileges were granted to allsystem users through the PUBLIC specification.
In the 3rd example, User349 receives the SELECT privilege on User350's Order_Details table and
can also grant that privilege to other system users via the WITH GRANT OPTION.
In the 4th example, the privilege granted is associated with the Order_Details table.
In the 5th example UPDATE privilege is allocated for only two columns (Price and Description) of
the Order_Details table.
Notice the difference between WITH ADMIN OPTION and WITH GRANT OPTION - the first applying to
System privileges (these are administrative in nature), the second applying to Object privileges.
Revoking Schema Object Privileges
Object privileges are revoked the same way that system privileges are revoked.
Several example REVOKE commands are shown here. Note the use of ALL (to revoke all object privileges
granted to a system user) and ON (to identify the object).
REVOKE SELECT ON dbock.orders FROM User350;
REVOKE ALL on User350.Order_Details FROM User349;
REVOKE ALL on User350.Order_Details FROM User349 CASCADE CONSTRAINTS;
In the latter example, the CASCADE CONSTRAINTS clause would drop referential integrity constraints
defined by the revocation of ALL privileges.
There is a difference in how the revocation of object privileges affects other users. If user1 grants
SELECT on a table WITH GRANT OPTION to user2, and user2 grants SELECT on the table to user3, then if
the SELECT privilege is revoked from user2 by user1, user3 also loses the SELECT privilege. This
is a critical difference.
Table Privileges
Table privileges are schema object privileges specifically applicable to Data Manipulation Language
(DML) operations and Data Definition Language (DDL) operations for tables.
DML Operations
As was noted earlier, privileges to DELETE, INSERT, SELECT, and UPDATE for a table or view should
only be granted to a system user account or role that need to query or manipulate the table data.
INSERT and UPDATE privileges can be restricted for a table to specific columns.
A selective INSERT causes a new row to have values inserted for columns that are specified in
a privilege; all other columns store NULL or pre-defined default values.
Users attempting DDL on a table may need additional system or object schema privileges, e.g.,
to create a table trigger, the user requires the CREATE TRIGGER system privilege as well as
the ALTER TABLE object privilege.
View Privileges
As you've learned, a view is a virtual table that presents data from one or more tables in a database.
Views show the structure of underlying tables and are essentially a stored query.
Views store no actual data; the data displayed is derived from the tables (or views) upon
which the view is based.
Your account must have been granted appropriate SELECT, INSERT, UPDATE,
or DELETE object privileges on base objects underlying the view, or
been granted the SELECT ANY TABLE, INSERT ANY TABLE, UPDATE ANY TABLE,
or DELETE ANY TABLE system privileges.
To grant other users access to your view, you must have object privileges on the underlying
objects with the GRANT OPTION clause or system privileges with the ADMIN OPTION clause.
To use a view, a system user account only requires appropriate privileges on the view itself; privileges
on the underlying base objects are NOT required.
Procedure Privileges
EXECUTE and EXECUTE ANY PROCEDURE
The EXECUTE privilege is the only schema object privilege for procedures.
Grant this privilege only to system users that will execute a procedure or compile another
procedure that calls a procedure.
The EXECUTE ANY PROCEDURE system privilege provides the ability to execute any procedure in a
database.
Roles can be used to grant privileges to users.
Definer and Invoker Rights
In order to grant EXECUTE to another user, the procedure owner must have all necessary object (or
system) privileges for objects referenced by the procedure. The individual user account
granting EXECUTE on a procedure is termed the Definer.
A user of a procedure requires only the EXECUTE privilege on the procedure, and does NOT require
privileges on underlying objects. A user of a procedure is termed the Invoker.
At runtime, the privileges of the Definer are checked; if required privileges on referenced objects have
been revoked, then neither the Definer nor any Invoker granted EXECUTE on the procedure can
execute the procedure.
Other Privileges
CREATE PROCEDURE or CREATE ANY PROCEDURE system privileges must be granted to a user
account in order for that user to create a procedure.
To alter a procedure (manually recompile), a user must own the procedure or have the ALTER ANY
PROCEDURE system privilege.
Procedure owners must have appropriate schema object privileges for any objects referenced in the
procedure body; these must be explicitly granted and cannot be obtained through a role.
Type Privileges
Type privileges are typically system privileges for named types that include object types, VARRAYs, and
nested tables. The system privileges in this area are detailed in this table.
Privilege          Description
-----------------  ------------------------------------------------
CREATE TYPE        Create named types in the grantee's own schema.
CREATE ANY TYPE    Create named types in any schema.
ALTER ANY TYPE     Alter named types in any schema.
DROP ANY TYPE      Drop named types in any schema.
EXECUTE ANY TYPE   Use and reference named types in any schema.
The CONNECT and RESOURCE roles are granted the CREATE TYPE system privilege and the DBA role
includes all of the above privileges.
Object Privileges
To define a table or column that uses a type owned by another user, you must have the EXECUTE privilege on that type. The following example involves three users: User1, User2, and User3.
User1 performs the following DDL in his schema:
CREATE TYPE Type1 AS OBJECT (
Attribute_1 NUMBER);
CREATE TYPE Type2 AS OBJECT (
Attribute_2 NUMBER);
GRANT EXECUTE ON Type1 TO User2;
GRANT EXECUTE ON Type2 TO User2 WITH GRANT OPTION;
User2 performs the following DDL in his schema:
CREATE TABLE Tab1 OF User1.Type1;
CREATE TYPE Type3 AS OBJECT (
Attribute_3 User1.Type2);
CREATE TABLE Tab2 (
Column_1 User1.Type2);
The following statements succeed because User2 has EXECUTE privilege on User1's Type2 with
the GRANT OPTION:
GRANT EXECUTE ON Type3 TO User3;
GRANT SELECT on Tab2 TO User3;
However, the following grant fails because User2 does not have EXECUTE privilege
on User1's Type1 with the GRANT OPTION:
GRANT SELECT ON Tab1 TO User3;
Roles
General
The Role database object is used to improve the management of various system objects, such as tables,
indexes, and clusters by granting privileges to access these objects to roles. As you learned in earlier
studies, there are two types of privileges, System and Object. Both types of privileges can be allocated
to roles.
The concept of a role is a simple one: a role is created as a container for groups of privileges that are
granted to system users who perform similar, typical tasks in a business.
Example: A system user fills the position of Account_Manager. This is a business role. The role is
created as a database object and privileges are allocated to the role. In turn the role is allocated to all
employees that work as account managers, and all account managers thereby inherit the privileges
needed to perform their duties.
This figure shows privileges being allocated to roles, and the roles being allocated to two types of system
users Account_Mgr and Inventory_Mgr.
Role Benefits
Easier privilege management: Use roles to simplify privilege management. Rather than
granting the same set of privileges to several users, you can grant the privileges to a role, and
then grant that role to each user.
Dynamic privilege management: If the privileges associated with a role are modified, all
the users who are granted the role acquire the modified privileges automatically and
immediately.
Selective availability of privileges: Roles can be enabled and disabled to turn privileges on
and off temporarily. Enabling a role can also be used to verify that a user has been granted that
role.
Can be granted through the operating system: Operating system commands or utilities
can be used to assign roles to users in the database.
Predefined Roles
Numerous predefined roles are created as part of a database. These are listed and described in the
following table.
The first three roles are provided to maintain compatibility with previous versions of Oracle and may not
be created automatically in future versions of Oracle. Oracle Corporation recommends that you design
your own roles for database security, rather than relying on these roles.
ROLE                     Script to Create Role  DESCRIPTION
-----------------------  ---------------------  ------------------------------------------------
CONNECT                  SQL.BSQ                Includes system privileges such as ALTER SESSION.
RESOURCE                 SQL.BSQ
DBA                      SQL.BSQ
EXP_FULL_DATABASE        CATEXP.SQL
IMP_FULL_DATABASE        CATEXP.SQL
DELETE_CATALOG_ROLE      SQL.BSQ
EXECUTE_CATALOG_ROLE     SQL.BSQ
SELECT_CATALOG_ROLE      SQL.BSQ
RECOVERY_CATALOG_OWNER   CATALOG.SQL
HS_ADMIN_ROLE            CATHS.SQL
AQ_ADMINISTRATOR_ROLE
We grant this role to students that need to design with the Internet Developer Suite that
includes Oracle Designer, Reports, Forms and other rapid application development software.
Normally the RESOURCE role would not be granted to organizational members who are not
information technology professionals.
You should design your own roles to provide data security.
Commands for Creating, Altering, and Dropping Roles
Creating Roles
Sample commands to create roles are shown here. You must have the CREATE ROLE system privilege.
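A sketch of such commands (the role names and the package name are illustrative), one for each of the authorization methods discussed next:

CREATE ROLE account_mgr;                                    -- not identified
CREATE ROLE inventory_mgr IDENTIFIED BY inv_pass;           -- protected by a database password
CREATE ROLE hr_app_role IDENTIFIED USING hr.app_security;   -- application role, enabled only by an authorized package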
A role can be authorized in several ways:
By the database using a password: a role authorized by the database can be protected by an
associated password. If you are granted a role protected by a password, you can enable or
disable the role by supplying the proper password for the role in a SET ROLE statement.
However, if the role is made a default role and enabled at connect time, the user is not required
to enter a password.
By an application using a specified package: the IDENTIFIED USING package_name clause lets you
create an application role, which is a role that can be enabled only by applications using an
authorized package.
Externally, by the operating system, network, or other external source: the following
statement creates a role named ACCTS_REC and requires that the user be authorized by an
external source before it can be enabled:
CREATE ROLE Accts_Rec IDENTIFIED EXTERNALLY;
Altering Roles
Use the ALTER ROLE command as is shown in these examples.
ALTER ROLE Account_Mgr IDENTIFIED BY <password>;
ALTER ROLE Inventory_Mgr NOT IDENTIFIED;
Granting Roles
General facts about roles:
To grant a privilege to a role, you must be granted a system privilege with the ADMIN
OPTION or have the GRANT ANY PRIVILEGE system privilege.
To grant a role, you must have been granted the role yourself with the ADMIN OPTION or
have the GRANT ANY ROLE system privilege.
You cannot grant a role that is IDENTIFIED GLOBALLY as global roles are controlled entirely
by the enterprise directory service.
Use the GRANT command to grant a role to a system user or to another role, as is shown in these
examples.
GRANT Account_Mgr TO User150;
GRANT Inventory_Mgr TO Account_Mgr, User151;
GRANT Inventory_Mgr TO User152 WITH ADMIN OPTION;
A user granted a system privilege or role with the ADMIN OPTION:
Can grant or revoke the system privilege or role to or from any user or other database role.
Can further grant the system privilege or role with ADMIN OPTION.
To grant or revoke an object privilege, you must either:
Have the GRANT ANY OBJECT PRIVILEGE system privilege (to grant/revoke privileges on
behalf of the object owner), or
Have been granted an object privilege by the owner with the WITH GRANT OPTION clause.
You cannot grant system privileges and roles with object privileges in the same GRANT statement.
Example: This grants SELECT, INSERT, and DELETE privileges for all columns of the EMPLOYEE table
to two user accounts.
GRANT SELECT, INSERT, DELETE ON Employee TO User350, User349;
Example: This grants all object privileges on a table to a user by use of the ALL keyword.
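A sketch of such a statement (the table and user names are illustrative):

GRANT ALL ON employee TO user350;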
When an object privilege is granted WITH GRANT OPTION:
The grantee can grant object privileges to other users and roles in the database.
The grantee can grant corresponding privileges on the views to other users and roles.
The grantee CANNOT use the WITH GRANT OPTION when granting object privileges to a
role.
GRANT SELECT, INSERT, DELETE ON Employee TO User350 WITH GRANT OPTION;
Granting Column Privileges
Use this approach to control privileges on individual table columns.
Before granting an INSERT privilege for a column, determine if any columns have NOT
NULL constraints.
Granting an INSERT privilege on a column where other columns are specified NOT
NULL prevents inserting any table rows.
Example: This grants the INSERT and UPDATE privileges on selected columns (including First_Name) of the Employee table.
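A sketch of such a column-level grant (the second column name, Employee_ID, is an assumption for illustration):

GRANT INSERT (employee_id, first_name), UPDATE (employee_id, first_name)
ON employee TO user349;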
Users with GRANT ANY ROLE can also revoke any role.
You cannot revoke the ADMIN OPTION for a role or system privilege you must revoke the
privilege or role and then grant it again without the ADMIN OPTION.
REVOKE Account_Mgr FROM User151;
REVOKE Account_Mgr FROM Inventory_Mgr;
REVOKE Access_MyBank_Acct FROM PUBLIC;
The second example revokes the role Account_Mgr from the role Inventory_Mgr. The third example
revokes the role Access_MyBank_Acct from PUBLIC.
When revoking object privileges:
To revoke an object privilege you must have previously granted the object privilege to the user
or role, or you have the GRANT ANY OBJECT PRIVILEGE system privilege.
You can only revoke object privileges you directly granted, not grants made by others to whom
you granted the GRANT OPTION; but there is a cascading effect: object privilege grants
propagated with the GRANT OPTION are revoked if the grantor's object privilege is revoked.
Example: You are the original grantor, this REVOKE will revoke the specified privileges from the users
specified.
REVOKE SELECT, INSERT, DELETE ON Employee FROM User350, Inventory_Mgr;
To revoke a privilege that was granted on specific columns, you must revoke it for all columns, then issue
a new GRANT for the columns you want.
Revoking a system privilege has no cascading effect. Example:
You as the DBA grant the CREATE VIEW system privilege to User350 WITH ADMIN
OPTION. User350 in turn grants CREATE VIEW to User349, who creates a view named Special_Inventory.
If you later revoke CREATE VIEW from User350,
User349 still has the CREATE VIEW system privilege and the Special_Inventory view
continues to exist.
Cascading revoke effects do occur for system privileges related to DML operations.
Example:
User350 creates a procedure that updates the Employee table, but User350 has not received
specific privileges on the Employee table; the procedure relies on a DML-related system privilege,
so revoking that privilege prevents the procedure from being executed.
When a role is dropped:
Oracle revokes the role from all system users and roles.
The role is automatically removed from all user default role lists.
There is NO impact on objects created, such as tables, because the creation of objects does not
depend on privileges received through a role.
In order to drop a role, you must have been granted the role with the ADMIN OPTION or have
the DROP ANY ROLE system privilege.
DROP ROLE Account_Mgr;
Guidelines for Creating Roles
Role names are usually an application task or job title because a role has to include the privileges
needed to perform a task or work in a specific job. The figure shown here uses both application tasks
and
job titles for role names.
Use the following steps to create, assign, and grant users roles:
1. Create a role for each application task. The name of the application role corresponds to a task in the
application, such as PAYROLL.
2. Assign the privileges necessary to perform the task to the application role.
3. Create a role for each type of user. The name of the user role corresponds to a job title, such
as PAY_CLERK.
4. Grant application roles to user roles.
5. Grant user roles to users.
The PAY_CLERK role has been granted all of the privileges that are necessary to perform the
payroll clerk function.
The PAY_CLERK_RO (RO for read only) role has been granted only SELECT privileges on
the tables required to perform the payroll clerk function.
The user can log in to SQL*Plus to perform queries, but cannot modify any of the data,
because the PAY_CLERK is not a default role, and the user does not know the password
for PAY_CLERK.
When the user logs in to the payroll application, it enables the PAY_CLERK by providing the
password. It is coded in the program; the user is not prompted for it.
Role Data Dictionary Views
The following views provide information about roles that are useful for managing a database.
 SID  SERIAL#  SPID  USERNAME  PROGRAM
----  -------  ----  --------  ---------------------------------------
  55       19  4616  VEYSI     sqlplus@ora.localdomain (TNS V1-V3)
   1       91  4920  SYS       sqlplus@ora.localdomain (TNS V1-V3)
  49       23  4923  VEY       sqlplus@ora.localdomain (TNS V1-V3)
  51       38  5375            oracle@ora.localdomain (J000)
  47       37  5377            oracle@ora.localdomain (J001)
Profiles
Restrict database usage by a system user: profiles restrict users from performing operations
that exceed reasonable resource utilization. Examples of resources that need to be managed:
o Disk storage space.
o I/O bandwidth to run queries.
o CPU power.
o Connect time.
Enforce password practices how user passwords are created, reused, and validated.
Profiles are assigned to users as part of the CREATE USER or ALTER USER commands (User
creation is covered in Module 14).
o User accounts can have only a single profile.
o A default profile can be created; a default already exists within Oracle
named DEFAULT, and it is applied to any user not assigned another profile.
Profiles only take effect when resource limits are "turned on" for the database as a whole.
Profile Specifications
Profile specifications include:
Password history
Account locking
CPU time
Idle time
Connect time
Concurrent sessions
System users not assigned a specific profile are automatically assigned
the DEFAULT profile. The DEFAULT profile has only one significant restriction: it doesn't specify a
password verification function.
This query lists the resource limits for the DEFAULT profile.
COLUMN profile FORMAT A10;
COLUMN resource_name FORMAT a30;
COLUMN resource FORMAT a8;
COLUMN limit FORMAT a15;
SELECT * FROM DBA_PROFILES
WHERE PROFILE = 'DEFAULT';
PROFILE  RESOURCE_NAME              RESOURCE LIMIT
-------- -------------------------- -------- ---------
DEFAULT  COMPOSITE_LIMIT            KERNEL   UNLIMITED
DEFAULT  SESSIONS_PER_USER          KERNEL   UNLIMITED
DEFAULT  CPU_PER_SESSION            KERNEL   UNLIMITED
DEFAULT  CPU_PER_CALL               KERNEL   UNLIMITED
DEFAULT  LOGICAL_READS_PER_SESSION  KERNEL   UNLIMITED
DEFAULT  LOGICAL_READS_PER_CALL     KERNEL   UNLIMITED
DEFAULT  IDLE_TIME                  KERNEL   UNLIMITED
DEFAULT  CONNECT_TIME               KERNEL   UNLIMITED
DEFAULT  PRIVATE_SGA                KERNEL   UNLIMITED
DEFAULT  FAILED_LOGIN_ATTEMPTS      PASSWORD 10
DEFAULT  PASSWORD_LIFE_TIME         PASSWORD UNLIMITED
DEFAULT  PASSWORD_REUSE_TIME        PASSWORD UNLIMITED
DEFAULT  PASSWORD_REUSE_MAX         PASSWORD UNLIMITED
DEFAULT  PASSWORD_VERIFY_FUNCTION   PASSWORD NULL
DEFAULT  PASSWORD_LOCK_TIME         PASSWORD UNLIMITED
DEFAULT  PASSWORD_GRACE_TIME        PASSWORD UNLIMITED

16 rows selected.
Creating a Profile
A DBA creates a profile with the CREATE PROFILE command.
A DBA must have the CREATE PROFILE system privilege in order to use this command.
Example:
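The example command is not shown above; based on the Accountant profile that is re-created later in these notes (see Step 2 under resource limits), it presumably resembled:

CREATE PROFILE Accountant LIMIT
  SESSIONS_PER_USER          4
  CPU_PER_SESSION            unlimited
  CPU_PER_CALL               6000
  LOGICAL_READS_PER_SESSION  unlimited
  LOGICAL_READS_PER_CALL     100
  IDLE_TIME                  30
  CONNECT_TIME               480
  PASSWORD_REUSE_TIME        1
  PASSWORD_LOCK_TIME         7
  PASSWORD_REUSE_MAX         3;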
Resource limits that are not specified for a new profile inherit the limit set in
the DEFAULT profile. These clauses are covered in detail later in these notes.
Assigning Profiles
Profiles can only be assigned to system users if the profile has first been created. Each system user is
assigned only one profile at a time. When a profile is assigned to a system user who already has a
profile, the new profile replaces the old one; the current session, if one is taking place, is not affected,
but subsequent sessions are affected. Also, you cannot assign a profile to a role or another profile (Roles
are covered in Module 16).
As was noted above, profiles are assigned with the CREATE USER and ALTER USER command. An
example CREATE USER command is shown here this command is covered in more detail in Module 14.
CREATE USER USER349
IDENTIFIED BY secret
PROFILE Accountant
PASSWORD EXPIRE;
User created.
SELECT username, profile FROM dba_users WHERE username = 'USER349';
USERNAME        PROFILE
--------------- -----------------
USER349         ACCOUNTANT
Altering Profiles
Profiles can be altered with the ALTER PROFILE command.
A DBA must have the ALTER PROFILE system privilege to use this command.
When a profile limit is adjusted, the new setting overrides the previous setting for the limit,
but these changes do not affect current sessions in process.
Example:
ALTER PROFILE Accountant LIMIT
CPU_PER_CALL default
LOGICAL_READS_PER_SESSION 20000
SESSIONS_PER_USER 1;
Test this limit by trying to connect
twice with the account user349.
Dropping a Profile
Profiles no longer required can be dropped with the DROP PROFILE command.
The CASCADE clause revokes the profile from any user account to which it was assigned;
the CASCADE clause MUST BE USED if the profile has been assigned to any user account.
When a profile is dropped, any user account with that profile is reassigned
the DEFAULT profile.
Examples:
DROP PROFILE Accountant;
ERROR at line 1:
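Because the profile is assigned to user accounts, the plain DROP fails; a sketch of the statement that succeeds (the affected users revert to the DEFAULT profile):

DROP PROFILE Accountant CASCADE;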
Changes that result from dropping a profile only apply to sessions that are created after the
change; current sessions are not modified.
Password Management
Password management can be easily controlled by a DBA through the use of profiles.
Enabling Password Management
Password management is enabled by creating a profile and assigning the profile to system users
when their account is created or by altering system user profile assignments.
Password limits set in this fashion are always enforced. When password management is in use, an
existing user account can be locked or unlocked by the ALTER USER command.
Password Account Locking: This option automatically locks a system user account if the user fails to
execute proper login account name/password entries after a specified number of login attempts.
Password Expiration/Aging: Specifies the lifetime of a password; after the specified period, the
password must be changed.
Password History: This option ensures that a password is not reused within a specified period of time
or number of password changes.
This is implemented by use of a password verification function. A DBA can write such a
function or can use the default function named VERIFY_FUNCTION.
The function that is used for password complexity verification is specified with the profile
parameter, PASSWORD_VERIFY_FUNCTION.
If NULL is specified (the default), no password verification is performed.
The default VERIFY_FUNCTION has the characteristics shown in the figure below.
When a DBA connected as the user SYS executes the utlpwdmg.sql script
(located at $ORACLE_HOME/rdbms/admin/utlpwdmg.sql), the Oracle Server creates
the VERIFY_FUNCTION. The script also executes the ALTER PROFILE command given below; the
command modifies the DEFAULT profile.
Example of executing the utlpwdmg.sql script.
SQL> Connect SYS as SYSDBA
SQL> start $ORACLE_HOME/rdbms/admin/utlpwdmg.sql
Function created.
Profile altered.
This ALTER PROFILE command is part of the utlpwdmg.sql script and does not need to be executed
separately.
-- This script alters the default parameters for Password Management
-- This means that all the users on the system have Password Management
-- enabled and set to the following values unless another profile is
-- created with parameter values set to different value or UNLIMITED
-- is created and assigned to the user.
ALTER PROFILE DEFAULT LIMIT
PASSWORD_LIFE_TIME 60
PASSWORD_GRACE_TIME 10
PASSWORD_REUSE_TIME 1800
PASSWORD_REUSE_MAX UNLIMITED
FAILED_LOGIN_ATTEMPTS 3
PASSWORD_LOCK_TIME 1/1440
PASSWORD_VERIFY_FUNCTION Verify_Function;
When setting password parameters to values of less than a day, express the value as a fraction of a day
(for example, 1/1440 is one minute, as used for PASSWORD_LOCK_TIME above).
The resource limit parameters are described below:

Resource                   Description
-------------------------  ------------------------------------------------------------------------
CPU_PER_SESSION            Total CPU time measured in hundredths of seconds.
CPU_PER_CALL               Maximum CPU time allowed for a statement parse, execute, or fetch operation, in hundredths of a second.
SESSIONS_PER_USER          Maximum number of concurrent sessions allowed for each user name.
CONNECT_TIME               Maximum total elapsed connect time measured in minutes.
IDLE_TIME                  Maximum continuous inactive time in a session measured in minutes when a query or other operation is not in progress.
LOGICAL_READS_PER_SESSION  Number of data blocks (physical and logical reads) read per session from either memory or disk.
LOGICAL_READS_PER_CALL     Maximum number of data blocks read for a statement parse, execute, or fetch operation.
COMPOSITE_LIMIT            Total resource cost, in service units, as a composite weighted sum of CPU_PER_SESSION, CONNECT_TIME, LOGICAL_READS_PER_SESSION, and PRIVATE_SGA.
PRIVATE_SGA                Maximum amount of memory a session can allocate in the shared pool of the SGA, measured in bytes, kilobytes, or megabytes (applies to Shared Server only).
Profile limits enforced at the session level are enforced for each connection where a system
user can have more than one concurrent connection.
If a session-level limit is exceeded, then the Oracle Server issues an error message such
as ORA-02391: exceeded simultaneous SESSIONS_PER_USER limit, and then disconnects
the system user.
Resource limits can also be set at the Call-level, but this applies to PL/SQL programming
limitations and we do not cover setting these Call-level limits in this course.
Step 2. Create a new profile or modify an existing profile to use
a COMPOSITE_LIMIT parameter. Here the Accountant profile is recreated based on the
command given earlier in these notes, then altered to set the COMPOSITE_LIMIT to 300. We
also ensure that user349 is assigned this profile.
CREATE PROFILE Accountant LIMIT
SESSIONS_PER_USER 4
CPU_PER_SESSION unlimited
CPU_PER_CALL 6000
LOGICAL_READS_PER_SESSION unlimited
LOGICAL_READS_PER_CALL 100
IDLE_TIME 30
CONNECT_TIME 480
PASSWORD_REUSE_TIME 1
PASSWORD_LOCK_TIME 7
PASSWORD_REUSE_MAX 3;
ALTER PROFILE Accountant LIMIT
COMPOSITE_LIMIT 300;
Profile altered.
ALTER USER user349 PROFILE Accountant;
User altered.
Step 3. Test the new limit. The composite cost is computed as a weighted sum of the resources a
session uses (CPU_PER_SESSION, CONNECT_TIME, LOGICAL_READS_PER_SESSION, and PRIVATE_SGA).
This table compares high/low values for CPU and CONNECT usage and indicates whether the resource limit is exceeded.
Scenario                 CPU (Seconds)  Connect (Seconds)  Exceeded Limit of 300?
-----------------------  -------------  -----------------  ----------------------
High CPU, High Connect   0.06           250                Yes
Medium CPU, Low Connect  0.05           40                 No
Low CPU, Medium Connect  0.02           175                No
Low CPU, Low Connect     0.02           40                 No
Excessive overhead from operating system context switching between Oracle Database server
processes when the number of server processes is high.
Inefficient scheduling because the O/S may deschedule database servers while they hold
latches, which is inefficient.
Inappropriate allocation of resources by not prioritizing tasks properly among active processes.
Inability to manage database-specific resources, such as parallel execution servers and active
sessions
Example: Allocate 80% of available CPU resources to online users leaving 20% for batch users and
jobs.
The Resource Manager enables you to classify sessions into groups based on session attributes,
and to then allocate resources to those groups in a way that optimizes hardware utilization for your
application environment.
The elements of the Resource Manager include:
Resource consumer group: sessions grouped together based on the resources that they
require; the Resource Manager allocates resources to consumer groups, not individual sessions.
Resource plan: a database object that acts as a container for resource directives on how
resources should be allocated.
Resource plan directive: associates a resource consumer group with a resource plan.
You can use the DBMS_RESOURCE_MANAGER PL/SQL package to create and maintain these
elements. The objects created are stored in the data dictionary.
Some special consumer groups always exist in the data dictionary and cannot be modified or deleted:
SYS_GROUP: the initial consumer group for all sessions created by SYS or SYSTEM.
OTHER_GROUPS: contains all sessions not assigned to a consumer group. Any
resource plan must always have a directive for OTHER_GROUPS.
It allocates CPU resources among three resource consumer groups named OLTP, REPORTING, and OTHER_GROUPS.
Oracle provides a predefined procedure named CREATE_SIMPLE_PLAN so that a DBA can create
simple resource plans.
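A sketch of using CREATE_SIMPLE_PLAN to build a plan like the one described above (the plan name, group names, and percentages are illustrative; the procedure adds the SYS_GROUP and OTHER_GROUPS directives automatically):

BEGIN
  DBMS_RESOURCE_MANAGER.CREATE_SIMPLE_PLAN(
    simple_plan     => 'DAYTIME',
    consumer_group1 => 'OLTP',      group1_percent => 80,
    consumer_group2 => 'REPORTING', group2_percent => 20);
END;
/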
A resource plan can reference subplans. This figure illustrates a top plan and all descending plans and
groups.
In order to administer the Resource Manager, a DBA must have
the ADMINISTER_RESOURCE_MANAGER system privilege; this privilege is part of the DBA role
along with the ADMIN option.
The DBA can grant privileges to the user named HR an internal user for Oracle human
resources software.
The Resource Manager is not enabled by default. This command (or init.ora file parameter) issued by the DBA
activates the Resource Manager and sets the top plan:
RESOURCE_MANAGER_PLAN = DAYTIME
Activate or deactivate the Resource Manager dynamically or change plans with the ALTER SYSTEM
command.
ALTER SYSTEM SET RESOURCE_MANAGER_PLAN = Alternate_Plan;
ALTER SYSTEM SET RESOURCE_MANAGER_PLAN = '';  -- an empty string deactivates the Resource Manager
DBA_USERS
DBA_PROFILES
COLUMN username FORMAT A15;
COLUMN password FORMAT A20;
COLUMN account_status FORMAT A30;
SELECT username, password, account_status
FROM dba_users;
USERNAME        PASSWORD             ACCOUNT_STATUS
--------------- -------------------- ------------------------------
OUTLN           4A3BA55E08595C81     OPEN
USER350         2D5E5DB47A5419B2     OPEN
DBOCK           0D25D10037ACDC6A     OPEN
SYS             DCB748A5BC5390F2     OPEN
SYSTEM          EED9B65CCECDB2E9     OPEN
USER349         E6677904C9407D8A     EXPIRED
TSMSYS          3DF26A8B17D0F29F     EXPIRED & LOCKED
DIP             CE4A36B8E06CA59C     EXPIRED & LOCKED
DBSNMP          E066D214D5421CCC     EXPIRED & LOCKED
ORACLE_OCM      6D17CF1EB1611F94     EXPIRED & LOCKED

10 rows selected.
COLUMN profile FORMAT A16;
COLUMN resource_name FORMAT A26;
COLUMN resource_type FORMAT A13;
COLUMN limit FORMAT A10;
SELECT profile, resource_name, resource_type, limit
FROM dba_profiles
WHERE resource_type = 'PASSWORD';
PROFILE    RESOURCE_NAME              RESOURCE_TYPE LIMIT
---------- -------------------------- ------------- ----------
ACCOUNTANT FAILED_LOGIN_ATTEMPTS      PASSWORD      DEFAULT
DEFAULT    FAILED_LOGIN_ATTEMPTS      PASSWORD      3
ACCOUNTANT PASSWORD_LIFE_TIME         PASSWORD      DEFAULT
DEFAULT    PASSWORD_LIFE_TIME         PASSWORD      60
ACCOUNTANT PASSWORD_REUSE_TIME        PASSWORD      1
DEFAULT    PASSWORD_REUSE_TIME        PASSWORD      1800
ACCOUNTANT PASSWORD_REUSE_MAX         PASSWORD      3
DEFAULT    PASSWORD_REUSE_MAX         PASSWORD      UNLIMITED
ACCOUNTANT PASSWORD_VERIFY_FUNCTION   PASSWORD      DEFAULT
DEFAULT    PASSWORD_VERIFY_FUNCTION   PASSWORD      VERIFY_FUN
ACCOUNTANT PASSWORD_LOCK_TIME         PASSWORD      7
DEFAULT    PASSWORD_LOCK_TIME         PASSWORD      .0006
ACCOUNTANT PASSWORD_GRACE_TIME        PASSWORD      DEFAULT
DEFAULT    PASSWORD_GRACE_TIME        PASSWORD      10

14 rows selected.
Initialization Parameter
Description
DB_CREATE_FILE_DEST
Defines the location of the default file system directory or ASM disk group
where the database creates datafiles or tempfiles when no file specification is
given in the create operation. Also used as the default location for redo log and
control files if DB_CREATE_ONLINE_LOG_DEST_n are not specified.
DB_CREATE_ONLINE_LOG_DEST_n
Defines the location of the default file system directory or ASM disk group for
redo log files and control file creation when no file specification is given in the
create operation. By changing n, you can use this initialization parameter
multiple times, where n specifies a multiplexed copy of the redo log or control
file. You can specify up to five multiplexed copies.
DB_RECOVERY_FILE_DEST
Defines the location of the flash recovery area, which is the default file system
directory or ASM disk group where the database creates RMAN backups when
no format option is used, archived logs when no other local destination is
configured, and flashback logs.
The file system directory specified by either of these parameters must already exist: the database does
not create it. The directory must also have permissions to allow the database to create the files in it. The
default location is used whenever a location is not explicitly specified for the operation creating the file.
The database creates the filename, and a file thus created is an Oracle-managed file. Both of these
initialization parameters are dynamic, and can be set using the ALTER SYSTEM or ALTER SESSION
statement.
Setting the DB_CREATE_FILE_DEST Initialization Parameter
Include the DB_CREATE_FILE_DEST initialization parameter in your initialization parameter file to identify
the default location for the database server to create:
Datafiles
Tempfiles
Redo log files
Control files
Block change tracking files
You specify the name of a file system directory that becomes the default location for the creation of the
operating system files for these entities. The following example sets /u01/oradata as the default
directory to use when creating Oracle-managed files:
DB_CREATE_FILE_DEST = '/u01/oradata'
Setting the DB_RECOVERY_FILE_DEST Parameter
Include the DB_RECOVERY_FILE_DEST and DB_RECOVERY_FILE_DEST_SIZE parameters in your
initialization parameter file to identify the default location in which Oracle Database should create:
Control files
RMAN backups (datafile copies, control file copies, backup pieces, control file autobackups)
Archived logs
Flashback logs
You specify the name of file system directory that becomes the default location for creation of the
operating system files for these entities. For example:
DB_RECOVERY_FILE_DEST = '/u01/oradata'
DB_RECOVERY_FILE_DEST_SIZE = 20G
Include the DB_CREATE_ONLINE_LOG_DEST_n initialization parameter in your initialization parameter
file to identify the default location for the database server to create:
Redo log files
Control files
You specify the name of a file system directory that becomes the default location for the creation of the
operating system files for these entities. You can specify up to five multiplexed locations. For the creation
of redo log files and control files only, this parameter overrides any default location specified in the
DB_CREATE_FILE_DEST and DB_RECOVERY_FILE_DEST initialization parameters. If you do not specify a
DB_CREATE_FILE_DEST parameter, but you do specify the DB_CREATE_ONLINE_LOG_DEST_n
parameter, then only redo log files and control files can be created as Oracle-managed files. It is
recommended that you specify at least two parameters. For example:
DB_CREATE_ONLINE_LOG_DEST_1 = '/u02/oradata'
DB_CREATE_ONLINE_LOG_DEST_2 = '/u03/oradata'
This allows multiplexing, which provides greater fault-tolerance for the redo log and control file if one of
the destinations fails.
Creating Oracle-Managed Files
If you have met any of the following conditions, then Oracle Database creates Oracle-managed files for
you, as appropriate, when no file specification is given in the creation operation:
You have included any of the DB_CREATE_FILE_DEST, DB_RECOVERY_FILE_DEST, or
DB_CREATE_ONLINE_LOG_DEST_n initialization parameters in your initialization parameter file.
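As an illustration, once DB_CREATE_FILE_DEST is set, a tablespace can be created with no file
specification at all; a minimal sketch (the tablespace names are hypothetical):
-- datafile name and location are generated by the database (Oracle-managed files)
CREATE TABLESPACE omf_demo;
CREATE TABLESPACE omf_demo2 DATAFILE SIZE 100M;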
Files of one database type are easily distinguishable from other database types.
Files are clearly associated with important attributes specific to the file type.
For example, a datafile name may include the tablespace name to allow for easy association of datafile to
tablespace, or an archived log name may include the thread, sequence, and creation date. No two
Oracle-managed files are given the same name. The name that is used for creation of an Oracle-managed file is constructed from three sources:
A file name template that is chosen based on the type of the file. The template also depends on
the operating system platform and whether or not automatic storage management is used.
A unique string created by Oracle Database or the operating system. This ensures that file
creation does not damage an existing file and that the file cannot be mistaken for some other
file.
As a specific example, filenames for Oracle-managed files have the following format on a Solaris file
system:
<destination_prefix>/o1_mf_%t_%u_.dbf
Where:
<destination_prefix> is <destination_location>/<db_unique_name>/<datafile>, where <db_unique_name> is
the globally unique name (DB_UNIQUE_NAME initialization parameter) of the target database. If there is
no DB_UNIQUE_NAME parameter, then the DB_NAME initialization parameter value is used.
Names for other file types are similar. Names on other platforms are also similar, subject to the
constraints of the naming rules of the platform.
The examples on the following pages use Oracle-managed file names as they might appear with a
Solaris file system as an OMF destination.
Scenario 2:
Create and Manage a Database with Database and Flash Recovery Areas. In this
scenario, a DBA creates a database where the control files and redo log files are multiplexed. Archived
logs and RMAN backups are created in the flash recovery area. The following tasks are involved in
creating and maintaining this database:
1. Setting the initialization parameters
The DBA includes the following generic file creation defaults:
DB_CREATE_FILE_DEST = '/u01/oradata'
DB_RECOVERY_FILE_DEST_SIZE = 10G
DB_RECOVERY_FILE_DEST = '/u02/oradata'
LOG_ARCHIVE_DEST_1 = 'LOCATION = USE_DB_RECOVERY_FILE_DEST'
The DB_CREATE_FILE_DEST parameter sets the default file system directory for datafiles, tempfiles,
control files, and redo logs. The DB_RECOVERY_FILE_DEST parameter sets the default file system
directory for control files, redo logs, and RMAN backups. The LOG_ARCHIVE_DEST_1 configuration
'LOCATION=USE_DB_RECOVERY_FILE_DEST' redirects archived logs to the DB_RECOVERY_FILE_ DEST
location. The DB_CREATE_FILE_DEST and DB_RECOVERY_FILE_DEST parameters set the default
directory for log file and control file creation. Each redo log and control file is multiplexed across the two
directories.
2. Creating a database
3. Managing control files
4. Managing the redo log
5. Managing tablespaces
Tasks 2, 3, 4, and 5 are the same as in Scenario 1, except that the control files and redo logs are
multiplexed across the DB_CREATE_FILE_DEST and DB_RECOVERY_FILE_DEST locations.
6. Archiving redo log information
Archiving online logs is no different for Oracle-managed files than it is for unmanaged files. The archived
logs are created in DB_RECOVERY_FILE_DEST and are Oracle-managed files.
7. Backup, restore, and recover
An Oracle-managed file is compatible with standard operating system files, so you can use operating
system utilities to back up or restore Oracle-managed files. All existing methods for backing up, restoring,
and recovering the database work for Oracle-managed files. When no format option is specified, all disk
backups by RMAN are created in the DB_RECOVERY_FILE_DEST location. The backups are Oracle-managed files.
An E-Commerce Architecture
This figure shows a typical Internet architecture.
The organization has an Intranet that connects client computers to one or more Database
Servers.
The client computers also connect to the Internet through an Application Web Server.
Oracle Net
Oracle Net Services is Oracle's solution for providing enterprise wide connectivity in distributed,
heterogeneous computing environments.
Objective is for Oracle Net Services to make it easy to manage network configurations while
maximizing performance and enabling network diagnostic capabilities when problems arise.
Oracle protocol support that maps the foundation layer's technology to industry-standard
protocols.
Oracle supports Java client applications that access an Oracle database with a Java Database
Connectivity (JDBC) Driver. This is a standard Java interface to connect to a relational DBMS. Oracle
offers the following drivers:
JDBC OCI Driver used for clients with Oracle client software installed.
JDBC Thin Driver used for clients without an Oracle installation that use applets.
Web clients can run programs that access Oracle databases directly without a Web Server.
A database can accept HTTP, FTP, or WebDAV protocol connections that can connect to Oracle
XML DB in an Oracle database instance.
The figure shows a client with an HTTP connection that connects through a web server like Apache.
This figure shows a client using a Web Browser such as Internet Explorer with a JDBC Thin driver that
uses a Java version of Oracle Net called JavaNet to communicate with the Oracle database server that is
configured with Oracle Net.
Location Transparency
Many companies have more than one database, often distributed, that support different client
applications.
Each database is represented in Oracle Net by one or more services.
Client computers use the service name to identify the database to be accessed.
The information about the database service and its location in the network is transparent to
the client because the information needed for a connection is stored in a repository.
Oracle Net Services offer several types of naming methods that support localized configuration
on each client computer, or centralized configuration that can be accessed by all clients in the
network.
Easy-to-use graphical user interfaces enable you to manage data stored in the naming
methods.
Naming Methods Centralized Configuration and Management
One approach to establishing network connectivity is to centralize the management of a repository of
service names by the use of a Directory Server as is shown in the figure below.
This approach provides network administrators the ability to configure and manage the
network of databases with a central facility.
It authenticates database access and eliminates the need for any client and server
configuration files.
Oracle Net and Oracle software are scalable, meaning that an organization can maximize the use of
system resources. One way this is done is through a shared server architecture that allows many client
computers to connect to a server.
The shared server approach:
Client computers communicate their requests for data by routing requests through one or
more dispatcher processes.
When a server process becomes idle, it will select the next client to serve in the queue.
Server processes are pooled and a small pool of server processes can share a large number
of client computers.
In contrast, with a dedicated server approach, one server process starts and is dedicated to each
client connection until the connection is completed. This introduces a small processing delay to
create the server process in memory.
Shared server works better than dedicated server if there are a large number of connections
because it reduces server memory requirements.
Shared server enables a database server to time out an idle web session and assign the connection to an
active session.
The idle session remains open and the connection can be reestablished when the session
becomes active with a data request.
The listener process handles client requests and hands the request off to the appropriate
server.
A listener process can listen for more than one database instance.
Client computers are configured with protocol addresses that enable them to send connection
requests to a listener.
After a connection is established, the client computer and Oracle Database Server
communicate directly.
Database Service and Database Instance Identification
An Oracle database is a service to a client computer that runs on a server (In a Windows server, you can
see these services quite easily through the Control Panel).
A database can have more than one service associated with it although one is typical.
For example, one service might be dedicated to system users accessing financial data while
another one is dedicated to system users accessing warehouse data.
Using more than one service can enable a DBA to allocate system resources.
Service Name:
The SERVICE_NAMES init.ora parameter specifies the service name in the database's initialization
parameter file.
The service name defaults to the global database name when it is not specified; this is a
name that comprises the database name from the DB_NAME parameter and the domain name
from the DB_DOMAIN parameter.
The SERVICE_NAMES parameter in the initialization parameter file (init.ora) can specify
more than one service entry, as shown below.
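A minimal sketch of such an entry (the service names are taken from the examples that follow; treat the
exact values as illustrative):
SERVICE_NAMES = sobora1.siue.edu, sobora2.siue.edu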
This also enables a DBA to limit resource allocations for clients requesting a service.
For example, a pool of multi-threaded server dispatchers could be configured to service clients
requesting sobora1.siue.edu, while a different dispatcher or pool of dispatchers could be
configured to service sobora2.siue.edu.
Instance Name:
INSTANCE_NAME parameter in the initialization parameter file specifies the instance name.
This figure shows two database servers, each connected to a single database that is opened as
two separate instances, each with a unique parameter file called an instance parameter file
(ifile).
Accessing a Service
The connect description describes the database location and database service name.
Includes the HOST= specification of the database server (the specification can be the
host name, e.g., sobora2.siue.edu, or the IP address, e.g., 146.163.252.41).
Includes the PROTOCOL= specification for the network protocol (TCP).
Includes the PORT= specification. The standard listener port is 1521 for Oracle software;
other ports can be used as long as no other service is using the port on the server. An
alternative port, such as 1523, could be assigned if port 1521 was already in use for another
service on the host.
The listener process for a database instance knows the services for which it can handle
connection requests, because an Oracle database dynamically registers this information with the
listener.
This process of registration is called service registration.
Service registration provides a listener process with information about the database instances
and the service handlers available for each instance.
INSTANCE_NAME parameter:
Can be added to the connect descriptor to listen for a specific instance of a database where
multiple instances may be in use.
DBORCL =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = sobora2.isg.siue.edu)(PORT = 1521))
    )
    (CONNECT_DATA =
      (SERVICE_NAME = DBORCL)
      (INSTANCE_NAME = DBORCL_repository)
    )
  )
SERVER= parameter: another approach is to specify a particular service handler as part of the connect
descriptor.
DBORCL =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = sobora2.isg.siue.edu)(PORT=1521))
)
(CONNECT_DATA=(SERVICE_NAME=DBORCL)
(SERVER=shared)
)
)
This figure shows more detail with a Listener and a Dispatcher for a Shared Server Process.
The Listener hands the connection request to the Dispatcher for future communication. The
steps are:
1. The listener receives a client connection request.
2. The listener hands the connect request directly to the dispatcher.
3. The client is now connected to the dispatcher.
This figure shows more detail with a Listener for a Dedicated Server Process.
The Listener passes a connection request to a dedicated server process -- first it starts the
process. The steps are:
1. The listener receives a client connection request.
2. The listener starts a dedicated server process.
3. The listener provides the location of the dedicated server process to the client in a redirect
message.
4. The client connects directly to the dedicated server.
CONNECT dbock/password@(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)
(HOST=sobora2.siue.edu)(PORT=1521))
(CONNECT_DATA=(SERVICE_NAME=DBORCL)))
Example: This example uses a simple net service name of DBORCL as the connect
identifier.
o The net service name is mapped to the proper connect descriptor by using a repository
of connection information that is accessed through one of Oracle's naming methods.
CONNECT dbock/password@dborcl
Oracle Net supports the following naming methods:
Local Naming.
o With this approach a local configuration file named tnsnames.ora is stored on each client
computer.
o Net service names are stored in the tnsnames.ora file as was described above.
o The file can be configured for individual client machines and client needs. This is the
approach taken at SIUE.
o Local naming is most appropriate for simple distributed networks with a small number of
services that change infrequently.
Directory Naming.
o This approach was described earlier in these notes.
o Service addresses and net service names are stored in a Lightweight Directory Access
Protocol (LDAP)-compliant directory server.
External Naming.
o A third-party naming service already configured for your environment is used.
After a naming method is configured, the client computers must be enabled for the naming method
following three steps:
1. The client contacts a naming method
o This step converts the connect identifier to a connect descriptor.
o With local naming for a Windows computer, this is accomplished by storing
the tnsnames.ora file in the $ORACLE_HOME/network/admin directory specified for
the client machine when the Oracle software was initially loaded onto the machine.
2. Based on the identified connect descriptor, the client forwards a request to the listener
address given in the connect descriptor.
3. The client connection is accepted by the listener (usually over the TCP/IP protocol). If the
client information received in the connect descriptor matches client information in the
database and in its listener configuration file (named listener.ora), a connection is made;
otherwise, an error message is returned.
Configuring the Local Naming Method
Client Configuration
Local Naming configuration requires storing a tnsnames.ora file on each client computer.
The local naming method adds net service names to the tnsnames.ora file.
The tnsnames.ora file specifies connect descriptors for one or more databases.
Provides a wizard interface that prompts for information needed to build a tnsnames.ora file
automatically.
If you select Custom Installation as an option when configuring your network connection, you
can select the naming method to use.
If you select Directory Naming or any other method other than Local Naming, the naming
method has to already be set up.
You can also configure the tnsnames.ora file manually by adding service names to the file by using a
simple text editor like Notepad.
Listener Configuration on the Server
Listener service configured to listen for one or more databases.
Includes one or more listening protocol addresses and associated destination service
information.
A listener name alias can be resolved through a tnsnames.ora file located on the server (NOT
the client tnsnames.ora file).
We do not use this approach at SIUE, but if we did, an example entry in the tnsnames.ora file
would be:
# tnsnames.ora Network Configuration File:
# /u01/app01/oracle1/product/11.2.0.3/dbhome_1/network/admin/tnsnames.ora
# Generated by Oracle configuration tools.
DBORCL.SIUE.EDU =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = sobora2.isg.siue.edu)(PORT = 1521))
)
(CONNECT_DATA =
(SID = DBORCL)
)
)
EMTEST =
(DESCRIPTION =
)
LISTENER_DBACLASS =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = 146.163.252.41)(PORT = 1523))
)
The LISTENER_DBACLASS alias specified above can be used to enable any Oracle software
to connect to a student database through the listener.
A configured listener can be managed with the Listener Control Utility (LSNRCTL).
Ensure software release of the listener is appropriate for the Oracle database software
release, e.g., use a listener designed for Oracle 11g, 10g or 9i as appropriate.
The screen shot below gives an example of using the lsnrctl command in a Linux
environment.
dbock/@sobora2.isg.siue.edu=>lsnrctl
LSNRCTL for Linux: Version 10.2.0.4.0 - Production on 22-JUL-2009 11:12:36
Copyright (c) 1991, 2007, Oracle. All rights reserved.
Welcome to LSNRCTL, type "help" for information.
LSNRCTL>
The status of services for which a listener is listening can be checked with the
listener SERVICES command.
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=sobora2.isg.siue.edu)
(PORT=1521)))
Services Summary...
Service "DBORCL.siue.edu" has 2 instance(s).
Instance "DBORCL", status UNKNOWN, has 1 handler(s) for this service...
Handler(s):
"DEDICATED" established:25 refused:3
LOCAL SERVER
Instance "DBORCL", status READY, has 1 handler(s) for this service...
Handler(s):
IPC protocol.
(address=(protocol=ipc)(key=PNPKEY))
When a listener service is contacted by a client, one of these actions is performed as is shown in this
figure.
If the database service is running a dispatcher service, then the listener hands the request to
the dispatcher, the process that manages the connection of many clients to the same server in a
multi-threaded server environment.
If a dispatcher is not in use, the listener can spawn a dedicated server process or allocate a
pre-spawned dedicated server process and pass the client connection to this dedicated server process
(one server per client, as we have discussed in earlier lectures).
The options here are Add (to add a new listener), Reconfigure (an existing listener), Delete (an
existing listener), and Rename (an existing listener).
Choose Add and click Next.
The important thing is that you must select TCP here. Click Next.
The existing port is 1521; however, use a different port for a different listener, for example 1522.
Click Finish.
As you can see, there are two listener entries in the script: the first and the second listener.
The listener listens for incoming network requests from clients and forwards them to the Oracle instance.
PMON registers the instance with the listener. It generally takes about one minute for PMON to register with
the listener. Until PMON has registered with the listener, users cannot connect to the database remotely and
get ORA-12514: TNS: listener does not currently know of service requested in connect descriptor.
We can use ALTER SYSTEM REGISTER to force PMON to register with the listener.
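For example:
-- Force PMON to register the instance's services with the listener immediately
ALTER SYSTEM REGISTER;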
Default Listener
By default, we do not need to configure which listener the Oracle database registers with. It
registers with the default listener named LISTENER.
Listener command
lsnrctl start
lsnrctl stop
lsnrctl status
lsnrctl services
If there is no listener running, users cannot connect remotely and get ORA-12541 TNS: no listener.
---- The section below is optional, but if you do set LOCAL_LISTENER in the parameter file, the
tnsnames.ora file must have the corresponding entry. If there is no SID_LIST defined in
listener.ora, then LOCAL_LISTENER and tnsnames.ora must both be configured; otherwise the instance
does not know which listener to register with.
tnsnames.ora
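A minimal sketch of such a configuration, reusing the LISTENER_DBACLASS alias shown earlier (the address
values come from that example and should be adjusted for your site):
-- init.ora / spfile: point the instance at the non-default listener alias
ALTER SYSTEM SET LOCAL_LISTENER = 'LISTENER_DBACLASS' SCOPE = BOTH;

-- tnsnames.ora on the server: the alias the instance resolves when registering
LISTENER_DBACLASS =
  (ADDRESS = (PROTOCOL = TCP)(HOST = 146.163.252.41)(PORT = 1523))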
DATABASE LINKS
The central concept in a distributed database system is the database link. A dblink allows (client) users to
access data on a remote database. It is a connection from one database to another, whether on the same
host or between two physical database servers (i.e., from one Oracle database server to another database
server).
POINTS TO NOTE:
When many users require an access path to a remote Oracle database, Oracle recommends creating a
PUBLIC database link for all users.
When Oracle uses a directory server, an administrator can easily manage global database links for all
databases (DB LINK is centralized).
Database Users of DB LINKS - (Security Context)
When creating the db link, you need to determine which user should connect to the remote database to
access the data.
DB links connect to the remote database using one of three methods:
FIXED USER, CURRENT USER, or CONNECTED USER.
FIXED USER LINK
The USERID/PASSWORD is part of the link definition.
Users connect using the USERNAME/PASSWORD referenced.
Every time, the link connects with the same USERID/PASSWORD.
The SAMP table exists in the ORCLTEST database. I want to access the SAMP table from the ORCLPROD
database using a dblink, so I create a dblink in ORCLPROD pointing to ORCLTEST.
In the ORCLPROD database:
user1 (which exists in the ORCLPROD database) tries to access the SAMP table in the ORCLTEST database
using the dblink.
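A minimal sketch of this setup (the link name, credentials, and the ORCLTEST tnsnames alias are
assumptions for illustration):
-- In ORCLPROD: create a fixed user link that points at ORCLTEST
CREATE DATABASE LINK orcltest_link
  CONNECT TO scott IDENTIFIED BY tiger
  USING 'ORCLTEST';

-- Query remote tables through the link
SELECT * FROM samp@orcltest_link;
SELECT * FROM dept@orcltest_link;   -- returns output like the DEPT listing below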
DEPTNO  DNAME        LOC
------  -----------  ---------
    10  ACCOUNTING   NEW YORK
    20  RESEARCH     DALLAS
    30  SALES        CHICAGO
    40  OPERATIONS   BOSTON
Privilege                      Database   Required For
CREATE DATABASE LINK           Local      Creation of private database links
CREATE PUBLIC DATABASE LINK    Local      Creation of public database links
CREATE SESSION                 Remote     Creation of any type of database link
A database link is a pointer that defines a one-way communication path from an Oracle Database server
to another database server. The link pointer is actually defined as an entry in a data dictionary table. To
access the link, you must be connected to the local database that contains the data dictionary entry.
A database link connection is one-way in the sense that a client connected to local database A can use a
link stored in database A to access information in remote database B, but users connected to database B
cannot use the same link to access data in database A. If local users on database B want to access data
on database A, then they must define a link that is stored in the data dictionary of database B.
A database link connection allows local users to access data on a remote database. For this connection to
occur, each database in the distributed system must have a unique global database name in the
network domain. The global database name uniquely identifies a database server in a distributed system.
The below figure shows an example of user scott accessing the emp table on the remote database with
the global name hq.acme.com: Database links are either private or public. If they are private, then only
the user who created the link has access; if they are public, then all database users have access.
One principal difference among database links is the way that connections to a remote database occur.
Users access a remote database through the following types of links:
Type of Link          Description
Connected user link   Users connect as themselves, which means that they must have an account
                      on the remote database with the same username and password as their
                      account on the local database.
Fixed user link       Users connect using the username and password referenced in the link. For
                      example, if Jane uses a fixed user link that connects to the hq database
                      with the username and password scott/tiger, then she connects as scott.
                      Jane has all the privileges in hq granted to scott directly, and all the
                      default roles that scott has been granted in the hq database.
Current user link     A user connects as a global user. A local user can connect as a global user
                      in the context of a stored procedure, without storing the global user's
                      password in a link definition. For example, Jane can access a procedure
                      that Scott wrote, accessing Scott's account and Scott's schema on
                      the hq database. Current user links are an aspect of Oracle Advanced
                      Security.
Create database links using the CREATE DATABASE LINK statement. After a link is created, you can
use it to specify schema objects in SQL statements.
What Are Shared Database Links?
A shared database link is a link between a local server process and the remote database. The link is
shared because multiple client processes can use the same link simultaneously.
When a local database is connected to a remote database through a database link, either database can
run in dedicated or shared server mode. The following table illustrates the possibilities:
Different users accessing the same schema object through a database link can share a network
connection.
When a user needs to establish a connection to a remote server from a particular server process,
the process can reuse connections already established to the remote server. The reuse of the
connection can occur if the connection was established on the same server process with the
same database link, possibly in a different session. In a non-shared database link, a connection
is not shared across multiple sessions.
When you use a shared database link in a shared server configuration, a network connection is
established directly out of the shared server process in the local server. For a non-shared
database link on a local shared server, this connection would have been established through the
local dispatcher, requiring context switches for the local dispatcher, and requiring data to go
through the dispatcher.
For example, assume that employees submit expense reports to Accounts Payable (A/P), and further
suppose that a user using an A/P application needs to retrieve information about employees from
the hq database. The A/P users should be able to connect to the hq database and execute a stored
procedure in the remote hq database that retrieves the desired information. The A/P users should not
need to be hq database users to do their jobs; they should only be able to access hq information in a
controlled way as limited by the procedure.
mfg.division3.acme_tools.com
While several databases can share an individual name, each database must have a unique global
database name. For example, the network domains us.americas.acme_auto.com and
uk.europe.acme_auto.com each contain a sales database. The global database naming system
distinguishes the sales database in the americas division from the sales database in the europe
division as follows:
sales.us.americas.acme_auto.com
sales.uk.europe.acme_auto.com
Database links are created as private, public, or global. The owner of a link is the user who created
it; view ownership data through the DBA_DB_LINKS, ALL_DB_LINKS, and USER_DB_LINKS views.
A private link is created in a specific schema of the local database. Only the owner of a private
database link or PL/SQL subprograms in the schema can use this link to access database objects in the
corresponding remote database.
Determining the type of database links to employ in a distributed database depends on the specific
requirements of the applications using the system. Consider these features when making your choice:
Private: This link is more secure than a public or global link, because only the owner of the
private link, or subprograms within the same schema, can use the link to access the
remote database.
Public: When many users require an access path to a remote Oracle Database, you can create
a single public database link for all users in a database.
Global: When an Oracle network uses a directory server, an administrator can conveniently
manage global database links for all databases in the system. Database link
management is centralized and simple.
Connected user: A local user accessing a database link in which no fixed username and password have
been specified. If SYSTEM accesses a public link in a query, then the connected user is SYSTEM, and
the database connects to the SYSTEM schema in the remote database.
Current user: sample link creation statement:
CREATE PUBLIC DATABASE LINK hq CONNECT TO CURRENT_USER USING 'hq';
Fixed user: sample link creation statement:
CREATE PUBLIC DATABASE LINK hq CONNECT TO jane IDENTIFIED BY doe USING 'hq';
If the current user database link is not accessed from within a stored object, then the current
user is the same as the connected user accessing the link. For example, if scott issues
a SELECT statement through a current user link, then the current user is scott.
When executing a stored object such as a procedure, view, or trigger that accesses a database
link, the current user is the user that owns the stored object, and not the user that calls the
object. For example, if jane calls procedure scott.p (created by scott), and a current user link
appears within the called procedure, then scott is the current user of the link.
If the stored object is an invoker-rights function, procedure, or package, then the invoker's
authorization ID is used to connect as a remote user. For example, if user jane calls
procedure scott.p (an invoker-rights procedure created by scott), and the link appears inside
procedure scott.p, then jane is the current user.
You cannot connect to a database as an enterprise user and then use a current user link in a
stored procedure that exists in a shared, global schema. For example, if user jane accesses a
stored procedure in the shared schema guest on database hq, she cannot use a current user
link in this schema to log on to a remote database.
Creation of Database Links: Examples
Create database links using the CREATE DATABASE LINK statement. The table gives examples of SQL
statements that create database links in a local database to the remote
sales.us.americas.acme_auto.com database:
For example:
CREATE DATABASE LINK sales.us.americas.acme_auto.com USING 'sales_us';
connects to the sales database using net service name sales_us as the connected user (a private
connected user link). The combinations are:

Connects To Database                     Connects As                         Link Type
sales using net service name sales_us    Connected user                      Private connected user
sales using service name am_sls          Current user                        Private current user
sales using net service name sales_us    scott using password tiger          Private fixed user
sales using net service name rev         scott using password tiger          Public fixed user
sales using net service name sales       scott using password tiger,         Shared public fixed user
                                         authenticated as anupam using
                                         password bhide
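The remaining link types in the table could be created with statements along these lines; treat them as
a sketch reconstructed from the Connects As and Link Type columns above rather than the exact
statements from the notes:
CREATE DATABASE LINK sales.us.americas.acme_auto.com
  CONNECT TO CURRENT_USER USING 'am_sls';                      -- private current user

CREATE DATABASE LINK sales.us.americas.acme_auto.com
  CONNECT TO scott IDENTIFIED BY tiger USING 'sales_us';       -- private fixed user

CREATE PUBLIC DATABASE LINK sales
  CONNECT TO scott IDENTIFIED BY tiger USING 'rev';            -- public fixed user

CREATE SHARED PUBLIC DATABASE LINK sales.us.americas.acme_auto.com
  CONNECT TO scott IDENTIFIED BY tiger
  AUTHENTICATED BY anupam IDENTIFIED BY bhide USING 'sales';   -- shared public fixed user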
schema_object is a logical data structure like a table, index, view, synonym, procedure,
package, or a database link.
global_database_name is the name that uniquely identifies a remote database. This name
must be the same as the concatenation of the remote database initialization
parameters DB_NAME and DB_DOMAIN, unless the parameter GLOBAL_NAMES is set
to FALSE, in which case any name is acceptable.
For example, using a database link to database sales.division3.acme.com, a user or application can
reference remote data as follows:
SELECT * FROM scott.emp@sales.division3.acme.com; # emp table in scott's schema
SELECT loc FROM scott.dept@sales.division3.acme.com;
If GLOBAL_NAMES is set to FALSE, then you can use any name for the link to
sales.division3.acme.com. For example, you can call the link foo. Then, you can access the remote
database as follows:
SELECT name FROM scott.emp@foo; # link name different from global name
When using database links, you cannot:
Execute DESCRIBE operations on some remote objects. The following remote objects, however,
do support DESCRIBE operations:
Tables
Views
Procedures
Functions
Obtain nondefault roles on a remote database. For example, if jane connects to the local
database and executes a stored procedure that uses a fixed user link connecting
as scott, jane receives scott's default roles on the remote database. Jane cannot issue SET
ROLE to obtain a nondefault role.
Use a current user link without authentication through SSL, password, or NT native
authentication
Materialized Views
Materialized views in Oracle
Oracle materialized views were first introduced in Oracle8.
Materialized views are schema objects that can be used to summarize, precompute, replicate and
distribute data.
In mview, the query result is cached as a concrete table that may be updated from the original base
tables from time to time. This enables much more efficient access, at the cost of some data being
potentially out-of-date. It is most useful in data warehousing scenarios, where frequent queries of the
actual base tables can be extremely expensive.
Oracle uses materialized views (also known as snapshots in prior releases) to replicate data to
non-master sites in a replication environment and to cache expensive queries in a data warehouse
environment. A materialized view is a database object that contains the results of a query. They are local
copies of data located remotely, or are used to create summary tables based on aggregations of a table's
data.
A materialized view is a replica of a target master from a single point in time. We can define a
materialized view on a base/master table (at a master site), partitioned table, view, synonym, or a
master materialized view (at a materialized view site). Materialized views are of three types:
1. Read only
Cannot be updated and complex materialized views are supported.
2. Updateable
Can be updated even when disconnected from the master site.
Are refreshed on demand.
Consumes fewer resources.
Requires Advanced Replication option to be installed.
3. Writeable
Created with the for update clause.
Changes are lost when view is refreshed.
Requires Advanced Replication option to be installed.
Note: For read-only, updatable, and writeable materialized views, the defining query of the materialized
view must reference all of the primary key columns in the master.
Read-Only Materialized Views
We can make a materialized view read-only during creation by omitting the FOR UPDATE clause or
disabling the equivalent option in the Replication Management tool. Read-only materialized views use
many of the same mechanisms as updatable materialized views, except that they do not need to belong
to a materialized view group.
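A minimal sketch of creating a read-only materialized view over a database link (the link name hq and the
scott.emp master are illustrative; a complete refresh is used so that no materialized view log is required
on the master):
CREATE MATERIALIZED VIEW emp_mv
  REFRESH COMPLETE START WITH SYSDATE NEXT SYSDATE + 1
  AS SELECT * FROM scott.emp@hq;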
Note:
1. Do not use column aliases when we are creating an updatable materialized view. Column aliases
cause an error when we attempt to add the materialized view to a materialized view group using the
CREATE_MVIEW_REPOBJECT procedure.
2. An updatable materialized view based on a master table or master materialized view that has
defined column default values does not automatically use the master's default values.
3. Updatable materialized views do not support the DELETE CASCADE constraint.
The following types of materialized views cannot be masters for updatable materialized views:
ROWID materialized views
Complex materialized views
Read-only materialized views
However, these types of materialized views can be masters for read-only materialized views.
Additional Restrictions for Updatable Materialized Views Based on Materialized Views, those must:
Belong to a materialized view group that has the same name as the materialized view group at
its master materialized view site.
Reside in a different database than the materialized view group at its master materialized view
site.
Be based on another updatable materialized view or other updatable materialized views, not on a
read-only materialized view.
Be based on a materialized view in a materialized view group that is owned by PUBLIC at the
master materialized view site.
Writeable Materialized Views
A writeable materialized view is one that is created using the FOR UPDATE clause but is not part of a
materialized view group. Users can perform DML operations on a writeable materialized view, but if we
refresh the materialized view, then these changes are not pushed back to the master and the changes
are lost in the materialized view itself. Writeable materialized views are typically allowed wherever
fast-refreshable read-only materialized views are allowed.
Note: writeable materialized views are rarely used.
Materialized Views Types
Both the master site and the materialized view site must have compatibility level (COMPATIBLE
initialization parameter) 9.0.1 or higher to replicate user-defined types and any objects on which they
are based.
We cannot create refresh-on-commit materialized views based on a master with user-defined
types. Refresh-on-commit materialized views are those created using the ON COMMIT REFRESH clause in
the CREATE MATERIALIZED VIEW statement.
Advanced Replication does not support type inheritance.
Materialized View Log
Updatable Materialized View Log
Materialized View Groups
A materialized view group in a replication system maintains a partial or complete copy of the objects at
the target replication group at its master site or master materialized view site. Materialized view groups
cannot span the boundaries of the replication group at the master site or master materialized view site.
Group A at the materialized view site contains only some of the objects in the corresponding Group A at
the master site. Group B at the materialized view site contains all objects in Group B at the master site.
Under no circumstances, however, could Group B at the materialized view site contain objects from
Group A at the master site. A materialized view group has the same name as the master group on which
the materialized view group is based. For example, a materialized view group based on a personnel
master group is also named personnel.
In addition to maintaining organizational consistency between materialized view sites and their master
sites or master materialized view sites, materialized view groups are required for supporting updatable
materialized views. If a materialized view does not belong to a materialized view group, then it must be a
read-only or writeable materialized view.
Refresh Groups
Managing MVs is much easier in Oracle 10g with the introduction of the powerful new tuning advisors
that can tell us a lot about the design of the MVs. Tuning recommendations can generate a complete
script that can be implemented quickly, saving significant time and effort. The ability to force rewriting or
abort the query can be very helpful in decision-support systems where resources must be conserved, and
where a query that is not rewritten should not be allowed to run amok inside the database.
Related Views
DBA_MVIEWS
DBA_MVIEW_LOGS
DBA_MVIEW_KEYS
DBA_REGISTERED_MVIEWS
DBA_REGISTERED_MVIEW_GROUPS
DBA_MVIEW_REFRESH_TIMES
DBA_MVIEW_ANALYSIS
Related Package/Procedures
DBMS_MVIEW package
REFRESH
REFRESH_ALL
REFRESH_ALL_MVIEWS
REFRESH_DEPENDENT
REGISTER_MVIEW
UNREGISTER_MVIEW
PURGE_LOG
DBMS_REPCAT package
DBMS_REFRESH package
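As an illustration of the DBMS_MVIEW package, a materialized view can be refreshed on demand (the view
name is the hypothetical one used earlier):
-- 'C' requests a complete refresh; use 'F' for fast or '?' for force
EXECUTE DBMS_MVIEW.REFRESH('EMP_MV', 'C');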
Materialized View Log
ADR
Automatic Diagnostic Repository (ADR)
In an effort to make trouble resolution easier for the DBA, Oracle 11g introduced the Fault Diagnosability
Infrastructure. The Fault Diagnosability Infrastructure assists in preventing, detecting, diagnosing, and
resolving database related problems. Problems such as database bugs and various forms of corruption
are made easier to support with the Fault Diagnosability Infrastructure. A number of changes come with
the Fault Diagnosability Infrastructure such as where the alert log is generated.
alert - This is the location of the XML-formatted alert log.
cdump - This is the location of the core dumps for the database.
trace - This contains trace files generated by the system, as well as a text copy of the alert log.
incident - This directory contains multiple subdirectories, one for each incident.
There is a lot of Metadata to be stored with regards to ADR. Each Oracle database (and ASM instance)
has a V$DIAG_INFO view that provides information on the various ADR directories and other metadata
related to ADR, such as active incidents. Here is an example of a query against the V$DIAG_INFO view:
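A minimal sketch of such a query (V$DIAG_INFO exposes NAME/VALUE pairs such as ADR Base, ADR Home,
Diag Trace, and Active Incident Count):
SELECT name, value FROM v$diag_info;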
ADR is a special repository that is automatically maintained by Oracle 11g to hold diagnostic data about
critical errors. Diagnostic data is first captured in memory by an always-on tracing facility and is then
persisted in the ADR on disk.
ADRCI is a command-line utility introduced in Oracle Database Release 11g. ADRCI enables:
Viewing diagnostic data within the Automatic Diagnostic Repository (ADR).
Viewing Health Monitor reports.
Packaging of incident and problem information into a zip file for transmission to Oracle Support.
ADR is made up of a directory structure like the following.
/u01/app/oracle/diag/rdbms/orcl/orcl/alert
/u01/app/oracle/diag/rdbms/orcl/orcl/cdump
/u01/app/oracle/diag/rdbms/orcl/orcl/hm
/u01/app/oracle/diag/rdbms/orcl/orcl/incident
/u01/app/oracle/diag/rdbms/orcl/orcl/trace
Automatic Diagnostic Repository (ADR) is a file-based repository that aids the DBA in identifying,
diagnosing, and resolving problems. Oracle's stated goals for ADR are:
Providing first-failure diagnosis
Allowing for problem prevention
Limiting damage and interruptions after a problem is detected
Reducing problem diagnostic time
Reducing problem resolution time
Simplifying customer interaction with Oracle Support
ADR accomplishes this with new features like an always-on, memory-based tracing system that captures
diagnostic information from many different database components when a problem is detected, similar to
an aircraft's black box.
Another new feature, Incident Packaging Service (IPS), simplifies the task of collecting diagnostic data
(traces, dumps, log files) related to a critical error. ADR assigns an incident number to a detected error
and adds it to all diagnostic information that is related to it. A DBA can then easily package all related
information into a zip file to upload to Oracle Support. ADR defines a problem as an error such as an
ORA-00600 internal error. Problems are tracked inside of ADR by a problem key, which consists of a text
string, an error code, and parameters that describe the problem.
An incident is a specific occurrence of a problem. ADR assigns a unique number for each incident, writes
an entry in the alert log, sends an alert to OEM, gathers diagnostic information, and stores that
information in an ADR sub-folder.
Using the ADRCI command-line application, you can then see the information saved for an incident, add
or remove files from the incident inventory, and save all the related files into a zip file.
To use ADRCI, you just need execute permissions. Since ADR is outside of the database, you can access
it without having the instance available.
Data                         Old Location (parameter)                ADR Location
Foreground process traces    USER_DUMP_DEST                          $ADR_HOME/trace
Background process traces    BACKGROUND_DUMP_DEST                    $ADR_HOME/trace
Alert log data               BACKGROUND_DUMP_DEST                    $ADR_HOME/alert and $ADR_HOME/trace
Core dumps                   CORE_DUMP_DEST                          $ADR_HOME/cdump
Incident dumps               USER_DUMP_DEST / BACKGROUND_DUMP_DEST   $ADR_HOME/incident/incdir_n
In Oracle 11g, trace and alert files are not saved in the *_DUMP_DEST directories even if you set those
parameters in init.ora. 11g ignores *_DUMP_DEST and stores the data in a new format; the directory
structure is given below.
diag (ADR root)
  |
  rdbms
    |
    <Database Name>
      |
      <SID>   <-- ADR_HOME (user-defined environment variable)
        |
        trace  alert  cdump  hm  incpkg  incident  stage  sweep  metadata  lck
Note: ADR_HOME is a user-defined variable; I have defined this variable to make life easier.
The ADR root is where the ADR directory structure starts. The new 11g initialization parameter
DIAGNOSTIC_DEST decides the location of the ADR root; by default it is set to $ORACLE_BASE.
In 11g the alert file is saved in two locations: one in the alert directory (in XML format) and an
old-style text alert file in the trace directory. Within the ADR base, there can be many ADR homes,
where each ADR home is the root directory for all diagnostic data for a particular instance. The
location of an ADR home for a database is shown in the graphic above.
The retention policies specify how long to keep the data. We can change the retention policy using
adrci; MMON automatically purges ADR data that has passed its retention period.
adrci> show control

ADR Home = /u01/app/oracle
*************************************************************************
ADRID        SHORTP_POLICY   LONGP_POLICY   LAST_MOD_TIME                       LAST_AUTOPRG_TIME
3667832353   720             8760           2008-07-02 13:24:01.088681 -07:00   2008-07-22 00:20:04.113441
1 rows fetched

adrci>
Change Retention
adrci> set control (SHORTP_POLICY = 360)
adrci> set control (LONGP_POLICY = 4380)
No username/password is needed to log in to ADRCI; ADRCI interacts with the file system, and ADR data
is secured only by operating system permissions on the ADR directories.
$ adrci
adrci> set editor vi
adrci> show alert            ( it will open the alert log in the vi editor )
adrci> show alert -tail      ( similar to the Unix tail command )
adrci> show alert -tail 200  ( similar to the Unix command tail -200 )
adrci> show alert -tail -f   ( similar to the Unix command tail -f )
Since the alert log is saved in XML format (log.xml), you can query the XML file as well.
Below is an example of checking for all ORA- errors in the alert log.
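A sketch using ADRCI's predicate option (adjust the pattern as needed):
adrci> show alert -p "MESSAGE_TEXT LIKE '%ORA-%'"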
You can spool ADRCI output using the SPOOL command, the same as we use in SQL*Plus.
adrci> SHOW INCIDENT

ADR Home = /u01/app/oracle/diag/rdbms/orcl2/orcl2:
******************************************************************
INCIDENT_ID   PROBLEM_KEY                CREATE_TIME
------------  -------------------------  ----------------------------------
9817          ORA 600 [kcidr_reeval_3]   2008-05-14 18:41:03.609077 +05:30
1 incident info records fetched
We can use the IPS CREATE PACKAGE command to create a logical package for the above incident.
You can add additional files if needed, but the files must be inside the ADR; in the example below we
add the alert log to the package.
Log an SR and upload the resulting zip file to Oracle Support for diagnosis and resolution.
IPS in Summary
$ adrci
adrci> help ips
adrci> show incident
( For example above command show incident No 9817 for ORA-600 [XYZ] )
adrci> ips create package incident 9817 <= ( it will give package No.)
adrci> ips create package incident 9817
Created package 4 based on incident id 9817, correlation level typical
adrci> ips add incident 9817 package 4
Added incident 9817 to package 4
adrci>
adrci>>ips add file
/u01/app/oracle/diag/rdbms/orcl2/orcl2/trace/alert_orcl2.log package 4
Added file /u01/app/oracle/diag/rdbms/orcl2/orcl2/trace/alert_orcl2.log to
package 4
adrci>>ips generate package 4 in /tmp
Generated package 4 in file /tmp/ORA600kci_20080814184516_COM_1.zip, mode
complete
adrci>>
Reactive: The Fault Diagnosability Infrastructure can invoke Health Monitor checks automatically
in response to critical errors.
Manual: The DBA can run Health Monitor health checks manually.
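A minimal sketch of running a check manually from SQL (the check name is one of the standard Health
Monitor checks listed in V$HM_CHECK, and the run name is illustrative):
BEGIN
  DBMS_HM.RUN_CHECK(check_name => 'Dictionary Integrity Check',
                    run_name   => 'my_dict_run');
END;
/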
$ adrci
adrci> show hm_run

For each checker run, fields such as RUN_ID, RUN_NAME, CHECK_NAME, NAME_ID, MODE, START_TIME,
RESUME_TIME, END_TIME, MODIFIED_TIME, TIMEOUT, FLAGS, STATUS, SRC_INCIDENT_ID, NUM_INCIDENTS,
ERR_NUMBER, and REPORT_FILE are listed. In this example two runs are reported: run 1 (HM_RUN_1,
a Database Cross Check started 2008-08-05 04:01:56) and run 21 (HM_RUN_21).
2 rows fetched
Create HM Report
adrci> CREATE REPORT HM_RUN HM_RUN_21
You can create and view Health Monitor checker reports using the ADRCI utility. Make sure that the Oracle
environment variables are set properly. The ADRCI utility starts and displays its prompt as shown above.
You then enter the SHOW HM_RUN command to list all the checker runs registered in the ADR repository.
Locate the checker run for which you want to create a report and note the checker run name from the
corresponding RUN_NAME field. You can then generate the report using the CREATE REPORT HM_RUN
command. You view the report using the SHOW REPORT HM_RUN command or by running
DBMS_HM.GET_RUN_REPORT at the SQL prompt.
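For example, from SQL*Plus (the run name comes from the SHOW HM_RUN output above):
SET LONG 100000
SELECT DBMS_HM.GET_RUN_REPORT('HM_RUN_21') FROM dual;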
FLASHBACK TECHNOLOGY
Oracle Flashback Technology is a group of Oracle Database features that let us view past
states of database objects or return database objects to a previous state without using
point-in-time media recovery. Flashback Database is a part of the backup and recovery enhancements in
Oracle 10g Database that are called Flashback Features.
Flashback Database enables us to wind our entire database backward in time, reversing the effects of
unwanted database changes within a given time window. The effects are similar to database point-in-time
recovery. It is similar to conventional point-in-time recovery in its effects, allowing us to return a
database to its state at a time in the recent past.
Flashback Database can be used to reverse most unwanted changes to a database, as long as the
datafiles are intact. Oracle Flashback Database lets us quickly recover an Oracle database to a previous
time to correct problems caused by logical data corruptions or user errors.
What are the Benefits?
According to many studies and reports, Human Error accounts for 30-35% of data loss episodes. This
makes Human Errors one of the biggest single causes of downtime. With Flashback Database feature
Oracle is trying to fight against user and operator errors in an extremely fast and effective way.
In most cases, a disastrous logical failure caused by human error can be solved by performing a
Database Point-in-Time Recovery (DBPITR). Before 10g, the only way to do a DBPITR was incomplete
media recovery. Media recovery is a slow and time-consuming process that can take many hours. By
contrast, using Flashback Database a DBPITR can be done extremely quickly: 25 to 105 times faster than
usual incomplete media recovery, and as a result it can reduce the downtime significantly.
In 10g R2, Oracle combines the fixed SGA area and the redo buffer together. If there is free space after
Oracle puts the combined buffers into a granule, that space is added to the redo buffer. The sizing of the
redo log buffer is fully controlled by Oracle. According to the SGA and its atomic sizing by granules, Oracle
automatically calculates the size of the log buffer depending on the current granule size. For a smaller SGA
and 4 MB granules, it is possible for the redo log buffer size plus the fixed SGA size to be a multiple of the
granule size. For SGAs bigger than 128 MB, the granule size is 16 MB. We can see the current size of the
redo log buffer, fixed SGA, and granule by querying the V$SGAINFO view, and can query the V$SGASTAT
view to display detailed information on the SGA and its structures.
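For example:
SELECT name, bytes FROM v$sgainfo;
SELECT pool, name, bytes FROM v$sgastat;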
To find the current size of the flashback buffer, we can use the following query:
SQL> SELECT * FROM v$sgastat WHERE NAME = 'flashback generation buff';
There is no official information from Oracle that confirms the relation between the 'flashback generation
buff' structure in the SGA and the real flashback buffer structure. This is only a suggestion. A similar
message is written to the alert<SID>.log file during opening of the database:
Allocated 3981204 bytes in shared pool for flashback generation buffer Starting background process
RVWR RVWR started with pid=16, OS id=5392
RVWR periodically writes the flashback buffer contents to the flashback database logs. It is an asynchronous
process and we don't have control over it. All available sources say that RVWR writes to the flashback logs
periodically. The explanation for this behavior is that Oracle is trying to reduce the I/O and CPU
overhead that can be an issue in many production environments.
Flashback log files can be created only under the Flash Recovery Area (which must be configured before
enabling the Flashback Database functionality). RVWR creates flashback log files in a directory named
FLASHBACK under the FRA. The size of every generated flashback log file is again under Oracle's control.
In the current Oracle environment, during normal database activity flashback log files have a size
of 8200192 bytes, which is very close to the current redo log buffer size. The size of a generated
flashback log file can differ during shutdown and startup database activities. Flashback log file sizes can
also differ during periods of high write activity.
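A minimal sketch of preparing the FRA and enabling Flashback Database (paths, size, and retention are
assumptions; on older releases the database must be mounted, not open, when flashback is switched on):
ALTER SYSTEM SET DB_RECOVERY_FILE_DEST_SIZE = 10G;
ALTER SYSTEM SET DB_RECOVERY_FILE_DEST = '/u02/oradata';
ALTER SYSTEM SET DB_FLASHBACK_RETENTION_TARGET = 1440;  -- minutes
SHUTDOWN IMMEDIATE
STARTUP MOUNT
ALTER DATABASE FLASHBACK ON;
ALTER DATABASE OPEN;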
Flashback log files can be written only under the FRA (Flash Recovery Area). The FRA is closely related to
and is built on top of Oracle Managed Files (OMF). OMF is a service that automates naming, location,
creation, and deletion of database files. By using OMF and the FRA, Oracle easily manages flashback log
files. They are created with automatically generated names with the extension .FLB. For instance, this is
the name of one flashback log file: O1_MF_26ZYS69S_.FLB
By its nature flashback logs are similar to redo log files. LGWR writes contents of the redo log buffer to
online redo log files, RVWR writes contents of the flashback buffer to flashback database log files. Redo
log files contain all changes that are performed in the database, that data is needed in case of media or
instance recovery. Flashback log files contain only changes that are needed in case of flashback
operation. The main differences between redo log files and flashback log files are :
Flashback log files are never archived - they are reused in a circular manner.
Redo log files are used to forward changes in case of recovery while flashback log files are used
to backward changes in case of flashback operation.
Flashback log files can be compared with UNDO data (contained in UNDO tablespaces) as well.
While UNDO data contains changes at the transaction level, flashback log files contain UNDO data at the
data block level. While the UNDO tablespace doesn't record all operations performed on the database (for
instance, DDL operations), flashback log files record that data as well. In a few words, flashback log files
contain the UNDO data for our database.
To Summarize :
UNDO data doesn't contain all changes that are performed in the database, while flashback logs
contain all altered blocks in the database.
UNDO data is used to backward changes at the transaction level while flashback logs are used to
backward changes at the database level .
We can query the V$FLASHBACK_DATABASE_LOGFILE to find detailed info about our flashback log files.
Although this view is not documented it can be very useful to check and monitor generated flashback
logs.
There is a new record section within the control file header that is named FLASHBACK LOGFILE
RECORDS. It is similar to LOG FILE RECORDS section and contains info about the lowest and highest SCN
contained in every particular flashback database log file .
***************************************************************************
FLASHBACK LOGFILE RECORDS
***************************************************************************
(size = 84, compat size = 84, section max = 2048, section in-use = 136,
last-recid= 0, old-recno = 0, last-recno = 0)
(extent = 1, blkno = 139, numrecs = 2048)
FLASHBACK LOG FILE #1:
(name #4) E:\ORACLE\FLASH_RECOVERY_AREA\ORCL102\FLASHBACK\O1_MF_26YR1CQ4_.FLB
Thread 1 flashback log links: forward: 2 backward: 26
size: 1000 seq: 1 bsz: 8192 nab: 0x3e9 flg: 0x0 magic: 3 dup: 1
Low scn: 0x0000.f5c5a505 05/20/2006 21:30:04
High scn: 0x0000.f5c5b325 05/20/2006 22:00:38
What does a Flashback Database operation do?
When we perform a flashback operation, Oracle needs all flashback logs from now back to the desired time.
They will be applied consecutively, starting from the newest to the oldest. For instance, if we want to
flash back the database to SCN 4123376440, Oracle will read the flashback logfile section in the control
file and will check the availability of all needed flashback log files. The last needed flashback log is
the one whose Low scn and High scn values bracket the desired SCN 4123376440.
In the current environment this is the file named O1_MF_26YSTQ6S_.FLB, with values of:
Low SCN : 4123374373
High SCN : 4123376446
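A sketch of the corresponding operation from SQL*Plus (the SCN is the one from the example above;
OPEN RESETLOGS is required afterwards):
SHUTDOWN IMMEDIATE
STARTUP MOUNT
FLASHBACK DATABASE TO SCN 4123376440;
ALTER DATABASE OPEN RESETLOGS;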
Note: If we want to perform a flashback operation successfully, we will always need to have available at
least one archived (or online redo) log file. This is a particular file that contains redo log information
about changes around the desired flashback point in time (SCN 4123376440). In this case, this is the
archived redo log with name: ARC00097_0587681349.001 that has values of:
First change#: 4123361850
Next change#: 4123380675
The flashback operation will not succeed without this particular archived redo log. The reason for
this: flashback log files contain information about before-images of data blocks, related to some SCN
(System Change Number). When we perform a flashback operation to SCN 4123376440, Oracle cannot
apply all needed flashback logs and complete the operation successfully using before-images alone.
Oracle needs to restore each data block copy (by applying flashback log files) to its state at the
closest possible point in time before SCN 4123376440. This guarantees that the subsequent redo apply
operation will roll the database forward to SCN 4123376440 and the database will be in a consistent
state. After applying the flashback logs, Oracle performs a forward operation by applying all needed
archived log files (in this case, redo information from the file ARC00097_0587681349.001) that roll
the database forward to the desired SCN.
Oracle cannot start applying redo log files until it is sure that all data blocks have been returned to their
state before the desired point in time. So, if the desired restore point in time is 10:00 AM and the oldest
restored data block is from 09:47 AM, then we will need all archived log files that contain redo data for
the time interval between 09:47 AM and 10:00 AM. Without that redo data, the flashback operation
cannot succeed. When a database is restored to its state at some past target time using Flashback
Database, each block changed since that time is restored from the copy of the block in the flashback logs.
A flashback log is created whenever necessary to satisfy the flashback retention target, as long
as there is enough space in the flash recovery area.
A flashback log can be reused once it is old enough that it is no longer needed to satisfy the
flashback retention target.
If the database needs to create a new flashback log and the flash recovery area is full or there is
no disk space, then the oldest flashback log is reused instead.
If the flash recovery area is full, then an archived redo log may be automatically deleted by the
flash recovery area to make space for other files. In such a case, any flashback logs that would require
the use of that redo log file for FLASHBACK DATABASE are also deleted.
Note : Re-using the oldest flashback log shortens the flashback database window. If enough flashback
logs are reused due to a lack of disk space, the flashback retention target may not be satisfied.
Limitations of Flashback Database :
Since Flashback Database works by undoing changes to the datafiles that exist at the moment
that we run the command, it has the following limitations: Flashback Database can only undo changes to
a datafile made by an Oracle database. It cannot be used to repair media failures, or to recover
from accidental deletion of datafiles.
We cannot use Flashback Database to undo a shrink datafile operation.
If the database control file is restored from backup or re-created, all accumulated flashback log
information is discarded. We cannot use FLASHBACK DATABASE to return to a point in time before the
restore or re-creation of a control file.
When using Flashback Database with a target time at which a NOLOGGING operation was in progress,
block corruption is likely in the database objects and datafiles affected by the NOLOGGING operation. For
example, if we perform a direct-path INSERT operation in NOLOGGING mode, and that operation runs
from 9:00 to 9:15, and we later need to use Flashback Database to return to the target time 09:07 on
that date, the objects and datafiles updated by the direct-path INSERT may be left with block corruption
after the Flashback Database operation completes.
If possible, avoid using Flashback Database with a target time or SCN that coincides with a NOLOGGING
operation. Also, perform a full or incremental backup of the affected datafiles immediately after any
NOLOGGING operation to ensure recoverability to points in time after the operation. If we expect to use
Flashback Database to return to a point in time during an operation such as a direct-path INSERT,
consider performing the operation in LOGGING mode.
Always write down the current SCN and/or create a restore point (10g R2) before starting a
flashback operation.
Flashback Database is the only flashback operation that can undo the result of
a TRUNCATE command (FLASHBACK DROP, FLASHBACK TABLE, or FLASHBACK QUERY cannot be used for
this).
Dropping a tablespace cannot be reversed with Flashback Database. After such an operation, the
flashback database window begins at the time immediately following that operation.
Shrinking a datafile cannot be reversed with Flashback Database. After such an operation, the
flashback database window begins at the time immediately following that operation.
Resizing a datafile cannot be reversed with Flashback Database. After such an operation, the
flashback database window begins at the time immediately following that operation. If we need to
perform a flashback operation into that time period, we must take the datafile offline before performing
the flashback operation.
Recreating or restoring the control file prevents using Flashback Database to go before that point in
time.
We can flashback the database to a point in time before a RESETLOGS operation. This feature is
available from 10g R2 because the flashback log files are not deleted after a RESETLOGS operation. We
cannot do this in 10g R1 because old flashback logs are deleted immediately after a RESETLOGS
operation.
Don't exclude the SYSTEM tablespace from flashback logging. Otherwise we will not be able to
flashback the database.
The DB_FLASHBACK_RETENTION_TARGET parameter is a TARGET parameter. It doesn't
guarantee the flashback database window; proper configuration of Flashback Database should
guarantee it.
Regularly monitor the size of the FRA and the generated flashback logs to ensure that there is no
space pressure and that the flashback log data covers the desired flashback window.
Oracle Flashback features use Automatic Undo Management to obtain metadata and historical transaction
data. Undo data is persistent and survives a database shutdown.
You can use the Flashback options to:
recover data from user errors,
compare table data at two points in time,
view transaction actions (the set of actions performed in a given transaction),
undo table drops,
revert the entire database to a previous point in time.
Current settings (from SHOW PARAMETER output):
db_flashback_retention_target   integer       1440
db_recovery_file_dest           string        D:\oracle\product\10.2.0\flash_recovery_area
db_recovery_file_dest_size      big integer   5G
If the database Flashback feature is off then follow the below steps :
1.) The Database must be started through SPFILE.
SQL> show parameter spfile
NAME       TYPE      VALUE
---------  --------  ----------------------------------------------
spfile     string    D:\ORACLE\PRODUCT\10.2.0\DB_1\DATABASE\SPFILENOIDA.ORA
2.) The Database must be in Archive log mode.
SQL> shutdown immediate
SQL> startup mount
SQL> alter database archivelog;
SQL> alter database open;
4.) Set the recovery file destination (the location of the flash recovery area).
SQL> alter system set db_recovery_file_dest='D:\oracle\product\10.2.0\flash_recovery_area' scope=both;
5.) Set the recovery file destination size. This is the hard limit on the total space to be used by
target database recovery files created in the flash recovery area .
SQL> alter system set db_recovery_file_dest_size=5G scope=both;
System altered.
6.) Set the flashback retention target. This is the upper limit (in minutes) on how far back in
time the database may be flashed back. How far back one can flash back a database depends
on how much flashback data Oracle has kept in the flash recovery area.
SQL> alter system set db_flashback_retention_target=1440 scope=both;
System altered.
7.) Convert the Database to FLASHBACK ON state.
SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup mount;
ORACLE instance started.
Total System Global Area  830472192 bytes
Fixed Size                  2074760 bytes
Variable Size             213911416 bytes
Database Buffers          608174080 bytes
Redo Buffers                6311936 bytes
Database mounted.
SQL> ALTER DATABASE FLASHBACK ON;
Database altered.
SQL> alter database open;
Database altered.
SQL> select name, flashback_on from v$database;

NAME      FLASHBACK_ON
--------- ------------
The main application of Flashback Technologies is to diagnose logical errors and undo erroneous
changes without performing point-in-time recovery. There are various technologies that come under
the Flashback umbrella; each of them is discussed and demonstrated in this tutorial.
First of all, set the Undo Retention to 1 Hour and Retention Guarantee to avoid lower limit
errors.
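A minimal sketch of that setup (assuming the undo tablespace is named UNDOTBS1; adjust to your environment):
SQL> ALTER SYSTEM SET undo_retention=3600 SCOPE=BOTH;
SQL> ALTER TABLESPACE undotbs1 RETENTION GUARANTEE;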
1) Flashback Drop
In earlier database releases if a table was accidentally dropped, one had to recover the database using
point-in-time recovery. While this would restore the table, it would also revert all other database objects
to that same point. Alternately, one could import the table back into the database if an appropriate
export file happened to exist. But invariably none of these alternatives was well suited to the desired
task. This has been vastly simplified and improved with Flashback Drop. It simply reverses the effects of a
DROP TABLE operation.
Note: Only tables which are in locally-managed (as opposed to dictionary-managed) tablespaces and
those not contained within the SYSTEM tablespace may be the subject of a Flashback Drop operation.
Other objects which are excluded from a Flashback Drop include partitioned index-organized tables (IOTs),
and those to which fine-grained auditing (FGA) and virtual private database (VPD) policies have been
applied.
To support Flashback Drop, a structure called the recycle bin exists within the database.
It is used to undrop dropped tables. The undrop uses a LIFO method, and after the undrop the
table is renamed back to its original name, while its related indexes, triggers, etc. still keep their
system-generated names and cannot be reverted to their original names automatically.
RECYCLEBIN=ON
Prior to Oracle 10g, a DROP command permanently removed objects from the database. In Oracle 10g, a
DROP command places the object in the recycle bin. The extents allocated to the segment are not
reallocated until we purge the object. We can restore the object from the recycle bin at any time. This
feature eliminates the need to perform a point-in-time recovery operation. Therefore, it has minimal
impact on other database users.
In Oracle 10g the default action of a DROP TABLE command is to move the table to the recycle bin (or
rename it), rather than actually dropping it. The PURGE option can be used to permanently drop a table.
The recycle bin is a logical collection of previously dropped objects, with access tied to the DROP
privilege. The contents of the recycle bin can be shown using the SHOW RECYCLEBIN command and
purged using the PURGE TABLE command. As a result, a previously dropped table can be recovered from
the recycle bin.
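As an illustrative sketch of this cycle (the table name is hypothetical):
SQL> DROP TABLE flashback_drop_test;
SQL> SHOW RECYCLEBIN
SQL> FLASHBACK TABLE flashback_drop_test TO BEFORE DROP;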
Recycle Bin: The recycle bin contains all dropped database objects until:
There is no room in the tablespace for new rows or updates to existing rows.
We can view the dropped objects in the recycle bin from two dictionary views:
user_recyclebin -- lists all dropped objects owned by the current user.
dba_recyclebin  -- lists all dropped objects system-wide.
If an object is dropped and recreated multiple times, all dropped versions will be kept in the recycle bin,
subject to space. Where multiple versions are present, it is best to reference the tables via the
recyclebin_name. For any reference to the ORIGINAL_NAME, it is assumed that the reference is to the most
recently dropped version. During the flashback operation the table can be renamed:
FLASHBACK TABLE flashback_drop_test TO BEFORE DROP RENAME TO flashback_drop_test_old;
Several purge options exist:
PURGE TABLE tablename;                     -- Specific table.
PURGE INDEX indexname;                     -- Specific index.
PURGE TABLESPACE ts_name;                  -- All tables in a specific tablespace.
PURGE TABLESPACE ts_name USER username;    -- All tables in a specific tablespace for a specific user.
PURGE RECYCLEBIN;                          -- The current user's entire recycle bin.
PURGE DBA_RECYCLEBIN;                      -- The whole recycle bin.
Tables with Fine Grained Access policies are not protected by the recycle bin.
Partitioned index-organized tables are not protected by the recycle bin.
The recycle bin does not preserve referential integrity .
Flashback Database
The RECYCLE BIN concept has been introduced from Oracle 10g onwards. It is conceptually similar to the
Windows Recycle Bin, and dropped objects are retained so that they can be flashed back.
The Recycle Bin is a virtual container where all dropped objects reside. Underneath the covers, the
objects are occupying the same space as when they were created. If table EMP was created in the USERS
tablespace, the dropped table EMP remains in the USERS tablespace. Dropped tables and any associated
objects such as indexes, constraints, nested tables, and other dependant objects are not moved, they
are simply renamed with a name that begins with the prefix BIN$. You can continue to access the data in a
dropped table or even use Flashback Query against it. Each user has the same rights and privileges on
recycle bin objects as before they were dropped. You can view your dropped tables by querying the new
RECYCLEBIN view. Objects in the Recycle Bin will remain in the database until the owner of the dropped
objects decides to permanently remove them using the new PURGE command. The Recycle Bin objects
are counted against a user's quota. But Flashback Drop is a non-intrusive feature. Objects in the Recycle
Bin will be automatically purged by the space reclamation process if there is space pressure in the
tablespace (see the discussion of automatic cleanup under space pressure below).
There are no issues with dropping the table, behaviour-wise; it is the same as in 8i/9i. The space is not
released immediately and is accounted for within the same tablespace/schema after the drop.
When we drop a tablespace or a user, there is NO recycling of the objects.
About The Recycle Bin
Previously, when a table was dropped in the Oracle database the space used by the table and its
dependent objects was immediately reclaimed for free space within the tablespace. In current releases of
the database a DROP TABLE will not immediately reclaim the space, although, if you query the
DBA_FREE_SPACE or similar data dictionary views it appears to have done so. Instead, when objects are
dropped they are placed in the recycle bin from where they may be restored. This concept is similar to
the recycle bin you would find in an MS Windows as well as other environments. When objects are
moved to the recycle bin, in actuality they are just renamed but otherwise remain in the same state in
which they existed just prior to the drop. They also remain in the same tablespace. After an object is
dropped, it no longer appears in the object views of the data dictionary, such as the administrator views
DBA_OBJECTS or DBA_TABLES. However, since it continues to occupy the same space within the
tablespace, it will be visible in a view such as DBA_SEGMENTS, in the form of its recycle-bin generated
new object name.
Both individual users and administrators have their own view of the recycle bin. In the first example
below, notice the user view once a table has been dropped. A superficial look at the recycle bin is
available via the SHOW RECYCLEBIN command while a more comprehensive one is obtained by querying
the USER_RECYCLEBIN view or RECYCLEBIN synonym.
While the object remains in the recycle bin, one can even query it or perform Flashback Query upon the
object.
The administrator view into the bin is available from the data dictionary view DBA_RECYCLEBIN. This
likewise maintains the relationships between bin-resident objects and their original names.
Any given object may be dropped several times. Therefore the bin must have the ability to uniquely
identify each instance. You will notice that the renamed form of an object as it exists in the bin
follows the basic form BIN$unique_id$version, where:
Version -- a version number, as the same schema object could be dropped several times before the
bin is purged.
If a database object already exists in the database with the same name, an error is returned unless you
also specify the RENAME TO clause. Since the dependent objects keep their name you may need to
rename them before performing the flashback with the RENAME clause.
If the same object was dropped multiple times, the instance that was most recently moved to the recycle
bin is recovered. To restore an older version of that object, use the system-generated name. To illustrate,
the following query indicates that there are several instances of the database object within the bin. Using
the appropriate technique, we can restore the oldest one to the schema.
In other cases, a more focused purge may be performed. The examples shown next purge previous
incarnations of the CUSTOMERS table or the CUSTOMERS_INDEX index.
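A sketch of such focused purges, using the object names mentioned above, could look like this:
SQL> PURGE TABLE customers;
SQL> PURGE INDEX customers_index;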
One could also refer to an object as part of the PURGE command by using its BIN$ bin-resident name as
well. If one has sufficient privileges, all recycled objects previously stored in a given tablespace, or the
entire database, may be purged, as shown next.
Bypassing the Recycle Bin
One can bypass the recycle bin and permanently and immediately drop a table and its dependent
objects. If you issue the DROP TABLE ... PURGE command, it will not move the objects to the recycle bin.
About Implicitly Dropped Objects: Objects which are implicitly dropped as a result of DROP
TABLESPACE ... INCLUDING CONTENTS, DROP CLUSTER or DROP USER ... CASCADE commands are never
moved to the recycle bin. Such objects cannot be recovered using FLASHBACK DROP.
Example recycle bin object names: BIN$zbjrBdpw==$0, BIN$zbjra9wy==$0
Automatic cleanup under space pressure: While objects are in the recycle bin,
their corresponding space is also reported in DBA_FREE_SPACE because their
space is automatically reclaimable. The free space in a particular tablespace is
then consumed in the following order:
1. Free space that does not correspond to any recycle bin object.
2. Free space corresponding to recycle bin objects. In this case, recycle bin objects are automatically
purged from the recycle bin using a first in, first out (FIFO) algorithm.
3.
Free space automatically allocated if the tablespace is auto-extensible. Suppose that you create a
new table inside the TBS1 tablespace. If there is free space allocated to this tablespace that does
not correspond to a recycle bin object, this free space is used as a first step. If this is not enough,
free space is used that corresponds to recycle bin objects that reside inside TBS1. If the free
space of some recycle bin objects is used, these objects are purged automatically from the
recycle bin. At this time, you can no longer recover these objects by using the Flashback Drop
feature. As a last resort, the TBS1 tablespace is extended (if possible) if the space requirement is
not yet satisfied.
Using the AS OF TIMESTAMP clause within a SQL statement to flashback to a specific point-in-time.
Using the AS OF SCN clause within a SQL statement to flashback to a database SCN.
Explicitly calling the DBMS_FLASHBACK () package for a session to perform similar flashback
operations at the entire session level.
SELECT ... AS OF TIMESTAMP
To illustrate a brief initial example, notice in the query below the average value for ListPrice within the
Products table as of the current point-in-time.
Thereafter, a 10% price increase is implemented for all products and this is reflected in a new average
value. The transaction is committed and the update made permanent to the database.
Suppose a sophisticated sales analysis application running at a later point in time has noticed a
significant decline in sales as of certain time. Management might inquire as to what the average list price
of products was at that point, as compared with the current average. Now notice how a query can include
the AS OF TIMESTAMP clause to satisfy this request. This clause allows one to specify a timestamp value,
often with the help of the TO_TIMESTAMP () system-supplied function. The undo data is read and the
query results reflect the prior point-in-time.
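An illustrative sketch of such a query (the timestamp value is hypothetical):
SQL> SELECT AVG(listprice)
     FROM products
     AS OF TIMESTAMP TO_TIMESTAMP('2013-05-01 09:00:00','YYYY-MM-DD HH24:MI:SS');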
With a little bit of logic, any table can thus revert to an earlier state using this feature as well. Notice the
following example. At the same time, we have other far more elegant means of actually undoing an
update using flashback technology, but this example illustrates the underlying capability.
This means that user or application errors which inadvertently deleted rows that should not have been
deleted, or erroneously performed other database updates, may be undone. Bear in mind that the
flashback operation actually pertains to the object and not the query or the database as a whole. This
becomes clear when performing a join operation. In the example below, the Products table is flashed back
to a prior point but is joined with the Members table in its current state.
By using the AS OF TIMESTAMP clause on all tables within the query, one can create a hybrid query which
uses different states of the tables within the same query.
Using our hypothetical scenario above, one could revert the PRODUCTS table to a prior point in time, and
flashback the SALES table to a later point in time. One could then examine what the net sales value
would have been over that period without the list price increase.
Flashback Query
Use to query all data at a specified point in time.
SELECT ... AS OF SCN
One may alternately flashback a query to a particular SCN. There are various methods by which the
desired SCN might be computed. One helpful method involves the use of the pseudo-column
ORA_ROWSCN. This pseudo-column refers to the most recent COMMIT SCN which resulted in the row
being updated.
Regardless of the method used to determine the desired SCN point, this example shows another update
to the table is issued and committed. The query, however, flashes back to the SCN when the rows were
still present.
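A sketch of such a query (the SCN literal is hypothetical; the current SCN can be obtained as shown in the first statement):
SQL> SELECT dbms_flashback.get_system_change_number FROM dual;
SQL> SELECT * FROM products AS OF SCN 7632541;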
Using DBMS_FLASHBACK()
Using the DBMS_FLASHBACK() system-supplied package the entire database session, or perhaps just a
transaction, may be flashed back to a prior point. This allows all queries, PL/SQL program units, and so on
to operate in that state without changes. In fact, perhaps using a logon system event trigger, one could
implicitly set one or more database sessions to a prior point in time and then have the applications
operate for the session as if the application was running at a prior point. In this way an application user
could use their application and issue transactions as if the time period were a point in the past. Or
consider a PL/SQL application which opens a cursor while in flashback mode, and then opens a cursor on
the same database objects while in normal mode, with the results of the two compared. Operating in this
mode involves the following simple steps:
The transaction must first enable flashback query to a specific point in time or SCN point using
the ENABLE_AT_TIME() or ENABLE_AT_SYSTEM_CHANGE_NUMBER() program units.
The transaction must complete by disabling flashback queries, using the DISABLE() program unit, as sketched below.
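A minimal sketch of these steps (the timestamp is hypothetical; only queries should be issued while the session is in flashback mode):
SQL> EXEC DBMS_FLASHBACK.ENABLE_AT_TIME(TO_TIMESTAMP('2013-05-01 09:00:00','YYYY-MM-DD HH24:MI:SS'));
-- run the desired queries here; they see the data as of the specified time
SQL> EXEC DBMS_FLASHBACK.DISABLE;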
The FLASHBACK object privilege is required in order to perform flashback queries on an object, as shown
above. (Of course the SELECT object privilege would also be required). Like other object privileges, this is
implicitly available to the owner but must be explicitly granted to other users. Also, the
DBMS_FLASHBACK() package, like other system-supplied packages, is owned by SYS, so the
EXECUTE privilege on it must be granted to each user who will employ it.
Next, the database session returns to using the latest state of the production database.
Note: Flashback query does not apply to such objects as data dictionary fixed tables, dynamic
performance tables, external tables, and so on. Part of what this means is that system functions and
pseudo-columns like SYSDATE and others will retain their current values even if the transaction or session
is operating in flashback mode.
Alternately one may flashback a transaction or session to an SCN. Notice this similar example shown
next.
Example: Consider the initial setup for this example. We obtain the current SCN number as an initial reference
point. Thereafter, several updates on the Teams table are performed, including the insertion and then
deletion of a Team row. These updates are part of committed transactions.
The current SCN is determined and this is the reference point that we will use. The clause VERSIONS
BETWEEN SCN MINVALUE AND MAXVALUE will reference undo data within the range specified. If explicit
SCN values are not used, the keywords MINVALUE and MAXVALUE will use the full range of undo data
available. The clause AS OF SCN xxxx provides a reference point for which the row versions should be
evaluated. If this is omitted, then the most recent SCN is used.
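An illustrative version of that query (column and table names follow the narrative; the AS OF clause is omitted here, so the most recent SCN is used):
SQL> SELECT versions_startscn, versions_endscn, versions_operation, name
     FROM teams
     VERSIONS BETWEEN SCN MINVALUE AND MAXVALUE;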
The results of the below query can be interpreted as follows:
The first row corresponds to the version of the row that was deleted. Given that
VERSIONS_ENDSCN is null, it means that the row still existed as of that VERSIONS_STARTSCN
number.
The second row corresponds to the inserted row with Name value of Support. The
VERSIONS_ENDSCN value indicates this version of the row no longer existed as of that SCN.
The third row corresponds to the row with a Name value HR when it was inserted. It also still
exists as of the current SCN.
Flashback Transaction Query is complementary to the Flashback Versions Query feature. Using a Versions
Query, one might identify all of the versions of a given row within a table, as you have just seen. Next,
using Flashback Transaction Query, one can use Versions Query information to query a view named
FLASHBACK_TRANSACTION_QUERY. The FLASHBACK_TRANSACTION_QUERY view indicates the transaction
which created the row version and the SQL code necessary to undo each of the changes made by that
transaction. By invoking that SQL code, one could undo the changes, thereby reverting one or more
tables to their original state.
Database Configuration
Flashback Transaction Query requires that supplemental redo log data be added to the standard redo
processing of the database. While more extensive options of this feature are required for other database
facilities such as standby databases using Oracle Data Guard, Flashback Transaction Query only requires
that minimal supplemental redo logging be enabled. This is done with the following command:
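SQL> ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;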
When this option is first enabled, all existing shared SQL cursors within the SQL cache are invalidated,
meaning that a temporary performance loss will occur until the cache is reloaded over the course of
time. A query to the V$DATABASE view can confirm that minimal supplemental redo logging is enabled.
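For example:
SQL> SELECT supplemental_log_data_min FROM v$database;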
Querying FLASHBACK_TRANSACTION_QUERY
In addition to a properly configured database, to query the view FLASHBACK_TRANSACTION_QUERY one
must have the SELECT ANY TRANSACTION system privilege. The first example shown here returns
information about all transactions, both active and committed, for the TEAMS table.
This next example identifies all of the database updates which were part of a given transaction.
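Illustrative sketches of both queries (the XID value is hypothetical):
SQL> SELECT xid, start_scn, commit_scn, operation, table_name, undo_sql
     FROM flashback_transaction_query
     WHERE table_name = 'TEAMS';

SQL> SELECT operation, undo_sql
     FROM flashback_transaction_query
     WHERE xid = HEXTORAW('07000A0082020000');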
5) Flashback Transaction
With Flashback Transaction, you can reverse a transaction and its dependent transactions. It uses the
DBMS_FLASHBACK package to back out a transaction.
Enable Supplemental Logging:
SQL> ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;
SQL> ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY) COLUMNS;
Grant necessary Privileges to User:
SQL> GRANT EXECUTE ON DBMS_FLASHBACK TO HR;
SQL> GRANT SELECT ANY TRANSACTION TO HR;
Back-out a Transaction:
SQL> EXEC DBMS_FLASHBACK.TRANSACTION_BACKOUT(NUMTXNS=>1,
XIDS=>SYS.XID_ARRAY('07000A0082020000'));
(Hint: Use DESC DBMS_FLASHBACK to see all procedures & their parameters)
One of following options can be specified to fine tune back-out operations.
NOCASCADE: Default. Backs out specified transactions, which are expected to have no dependent
transactions.
CASCADE: Backs out specified transactions and all dependent transactions in a postorder fashion (that
is, children are backed out before parents are backed out).
NOCASCADE_FORCE: Backs out specified transactions, ignoring dependent transactions. Server
executes undo SQL statements for specified transactions in reverse order of commit times.
NONCONFLICT_ONLY: Backs out changes only to the non-conflicting rows of the specified transactions. The
database remains consistent, but transaction atomicity is lost.
6) Flashback Table
Use to recover tables to a specific point in time. Requires undo data, and row movement must be enabled
for the respective table. There are two distinct table-related flashback features in Oracle: flashback
table, which relies on undo segments, and flashback drop, which relies on the recycle bin, not the undo
segments.
Flashback table lets us recover a table to a previous point in time. We don't have to take the tablespace
offline during a recovery; however, Oracle acquires exclusive DML locks on the table or tables that we are
recovering, although the tables continue to be online. When using flashback table, Oracle does not preserve the
ROWIDs when it restores the rows in the changed data blocks of the tables, since it uses DML operations
to perform its work. We must have enabled row movement in the tables that we are going to flashback;
only flashback table requires us to enable row movement. If the data is no longer in the undo segments, then
the table cannot be flashed back.
This feature allows one to permanently flashback one or more tables to a specific point-in-time or SCN. It
is most useful to recover from user or application error. For example, suppose that a serious application
logic bug was found indicating that updates performed over a recent period of time were all erroneous
and must be permanently undone. This must take place while the application continues to operate. The
Flashback Table operation would be the ideal solution. The source of the original data for the flashback
table operation is also the undo data. The undo data is read online and the table restored to the point
designated. Previously, one might need to take a portion of the database offline and perform a
complicated point-in-time recovery operation. Or, a more intricate set of steps would be needed using
only Flashback Query. However, this task is simpler and more efficient using Flashback Table. While
Flashback Table primarily restores tables, it also automatically maintains dependent objects such as
indexes (either standard indexes or partitioned indexes in the case of partitioned tables), triggers, and
constraints. Furthermore, if the table had been replicated as part of a distributed database configuration,
the replicated objects are maintained during the flashback operation too. Once performed, this statement
is executed as a single transaction. This means that either all updates must be flashed back successfully
or the entire flashback transaction is rolled back. The flashback operation may itself be undone, reverting
the table to a different point in time if necessary.
Restrictions on flashback table recovery: We cannot use flashback table on SYS objects. We cannot
flashback a table that has had preceding DDL operations on it, such as table structure changes or
dropped columns. The flashback must succeed entirely or it will fail; if flashing back multiple tables, all
tables must be flashed back or none. Any constraint violations will abort the flashback operation. We
cannot flashback a table that has had any shrink or storage changes (PCTFREE, INITRANS and
MAXTRANS). The following example creates a table, inserts some data and flashbacks to a point prior to the
data insertion. Finally it flashbacks to the time after the data insertion.
To perform Flashback Table, the following prerequisites are needed:
You must have been granted the FLASHBACK ANY TABLE system privilege or have the FLASHBACK object privilege on the table.
You must also have the SELECT, INSERT, DELETE and ALTER privileges on the table.
Row movement must be enabled on the table by means of the ALTER TABLE ... ENABLE ROW
MOVEMENT statement.
To determine the appropriate flashback time, you can use Flashback Versions Query and Flashback
Transaction Query. Both allow you to establish the specific time to flashback the table. Once the proper
flashback time is determined, the FLASHBACK TABLE command can be used to flashback one or more
tables either to a point-in-time or a SCN.
If this prerequisite step has not been taken then a flashback operation will result in the following
error: ORA-08189: cannot flashback the table because row movement is not enabled
Prepare Your Tables For Flashback
One cannot flashback a table to a point prior to its ability to support row movement. In other words, if
one wishes to flashback a table and is prevented from doing so because row movement was not enabled,
simply enabling row movement will not allow that same flashback operation to then be performed. One
may only flashback a table to a point after row movement has been enabled.
Next, perform the Flashback Table operation. This first example uses a time stamp to flashback the
CUSTOMERS table. You can use either of the methods shown to specify the timestamp:
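A sketch of both variants (the timestamp expressions are hypothetical; row movement must already be enabled):
SQL> ALTER TABLE customers ENABLE ROW MOVEMENT;
SQL> FLASHBACK TABLE customers TO TIMESTAMP SYSTIMESTAMP - INTERVAL '15' MINUTE;
SQL> FLASHBACK TABLE customers
     TO TIMESTAMP TO_TIMESTAMP('2013-05-01 09:00:00','YYYY-MM-DD HH24:MI:SS');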
The structure of the table must be stable and must have existed at a time consistent with the timestamp
indicated. Otherwise an error such as the following would occur: ORA-01466: unable to read data - table definition has changed. This next example uses an SCN number to flashback the tables.
Typically, a SCN number will be used if a referential integrity constraint exists. In this case referential
integrity exists between the CUSTOMERS and SALES tables, thus a FLASHBACK TABLE statement will be
used to group the tables within the same operation. By default, the triggers are disabled when executing
this statement. However, if you need to override the default behavior, use the ENABLE TRIGGER clause.
In the following example, the triggers are enabled throughout the Flashback operation:
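A sketch of such a statement (the SCN value is hypothetical):
SQL> FLASHBACK TABLE customers, sales TO SCN 7632541 ENABLE TRIGGERS;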
Rewinds the database. Uses flashback logs to perform its operations. Enable flashback logs as already
mentioned in 4) above.
The Flashback Database command is a fast alternative to performing an incomplete recovery. In order to
flashback the database we must have the SYSDBA privilege, and the flash recovery area must have been
prepared in advance. The database can be taken back in time by reversing all work done sequentially. The
database must be opened with RESETLOGS, as if an incomplete recovery had happened. This is ideal when
widespread logical errors must be undone across the entire database.
There are several components within the database that support this feature. A description of each
component appears in the table below. As well, you will find an illustration of the Flashback Database
architecture following the table.
About target parameters: Parameters such as DB_FLASHBACK_RETENTION_TARGET are, as the name
implies, parameters that specify target values and not absolute values. This means that while the
database will endeavor to achieve the target, it is not guaranteed and is dependent upon other factors. In
the case of DB_FLASHBACK_RETENTION_TARGET, the actual retention time also depends upon the
flashback area having sufficient space, as directed by the parameter DB_RECOVERY_FILE_DEST_SIZE.
By default, flashback logs are generated for all permanent tablespaces. If you have a tablespace for
which you do not want to log flashback data, you can execute the ALTER TABLESPACE command to
exclude a tablespace. Such a tablespace must be taken offline prior to flashing back the database. This
next example excludes the SIDERISUSERS tablespace from participating in the flashback of the database:
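SQL> ALTER TABLESPACE siderisusers FLASHBACK OFF;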
To determine which tablespaces are to be excluded from participating in the flashback of the database,
query the V$TABLESPACE as displayed in the following example.
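For example:
SQL> SELECT name, flashback_on FROM v$tablespace;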
Using the EM graphical interface, which automatically creates and executes an RMAN script
Assuming that the flash recovery area has been configured and the database placed in flashback mode,
then one need only know the SCN or point-in-time to which the database should be flashed back, and the
operation may be launched.
Any valid timestamp expressions or literal values may also be stated instead if one wishes to perform a
point-in-time flashback. Notice this example.
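A sketch of both forms, issued with the database in MOUNT mode (the SCN and timestamp values are hypothetical):
SQL> FLASHBACK DATABASE TO SCN 7632541;
SQL> FLASHBACK DATABASE TO TIMESTAMP TO_TIMESTAMP('2013-05-01 09:00:00','YYYY-MM-DD HH24:MI:SS');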
Thereafter, the database must be opened. Generally one will open it with the RESETLOGS option.
About Restore Points
Restore points are simply alias or mnemonic names assigned to SCNs. In this way, rather than recording
tedious SCN numbers and potentially causing a serious database recovery error due to a typographical
error, one can instead refer to an easily recognizable restore point name. A restore point is created at any
time using the CREATE RESTORE POINT command. The current SCN is associated with this label.
If the database is operating in ARCHIVELOG mode and the flash recovery area has been configured, then
one may define a guaranteed restore point. This will ensure that the flashback logs are maintained for as
long as necessary so as to support a flashback database operation to that point.
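A sketch of both kinds of restore point (the names are hypothetical):
SQL> CREATE RESTORE POINT before_upgrade;
SQL> CREATE RESTORE POINT before_upgrade_guar GUARANTEE FLASHBACK DATABASE;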
The V$RESTORE_POINT view will list the current set of restore points and which are guaranteed. In the
case of guaranteed restore points, it will also indicate the amount of flashback log storage currently
required to maintain this point.
The V$FLASHBACK_DATABASE_LOG data dictionary view likewise reports useful information. It reveals
the SCN and point-in-time currently supported by the flashback area. If the point-in-time does not match
the number of minutes specified by RETENTION_TARGET then one may need to find additional space for
the recovery area.
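For example:
SQL> SELECT oldest_flashback_scn, oldest_flashback_time, retention_target,
            flashback_size, estimated_flashback_size
     FROM v$flashback_database_log;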
Two other important pieces of information are FLASHBACK_SIZE and ESTIMATED_FLASHBACK_SIZE. The
first reveals the current size of the flashback data while the second indicates the size actually needed,
based upon current transaction history, to satisfy the retention target. In the example above it is
expected that much more space will eventually be needed for the flash recovery area. The view
V$FLASHBACK_DATABASE_STAT maintains statistics to compute the amount of flashback space needed.
At various sample points, usually hourly, it indicates the amount of flashback log bytes written, data file
bytes read and written, and redo bytes written. Data file I/O is more resource-consumptive since it is
random, while the log writes are sequential in nature.
V$SYSSTAT reveals the number of operations, rather than the bytes, which utilize the flashback logs. The
number of flashback log writes is indicative of the amount of block changes made by transactions. It also
records the number of physical reads for flashback data performed during a flashback database operation.
The RETENTION clause permits the keyword designations YEAR, MONTH and DAY. Most of the attributes
of a flashback archive may be modified using the ALTER FLASHBACK ARCHIVE command. In this example
we expand the quota permitted for the tablespace and decide to allow additional archive space to be
taken from another tablespace.
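A sketch of such changes (the archive and tablespace names are hypothetical):
SQL> ALTER FLASHBACK ARCHIVE fla1 MODIFY TABLESPACE fla_ts1 QUOTA 10G;
SQL> ALTER FLASHBACK ARCHIVE fla1 ADD TABLESPACE fla_ts2 QUOTA 5G;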
For the most part one will rely upon the database to retain the data archive for the duration specified. On
occasion one might want to manually purge this data. This is permitted, as you can see next. The clauses
PURGE BEFORE SCN xxx and PURGE BEFORE TIMESTAMP (TimeStamp) are also supported. Once the data
is purged from the archive, then the historical row state information is only available if it exists within the
undo data, and this is almost certainly not sufficient to support the retention period within our scenario.
Of course, a flashback archive which is no longer needed and no longer in use may be dropped.
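Sketches of a manual purge and of dropping an archive (the archive name and interval are hypothetical):
SQL> ALTER FLASHBACK ARCHIVE fla1 PURGE BEFORE TIMESTAMP (SYSTIMESTAMP - INTERVAL '1' YEAR);
SQL> DROP FLASHBACK ARCHIVE fla1;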
The data dictionary maintains metadata for the flashback archives defined. General information is
available from the view DBA_FLASHBACK_ARCHIVE.
Tablespace and quota information for each flashback archive is likewise maintained within the view
DBA_FLASHBACK_ARCHIVE_TS.
Note: The system privilege FLASHBACK ARCHIVE ADMINISTER is required in order to administer
flashback archives within the database.
Default Flashback Archive
The next step is to enable flashback archiving for selected tables. We may designate which flashback
archive is appropriate for each table in question, or a default flashback archive can be designated for use
when a specific one is not selected. First, in order to designate one of the flashback archives as the
default, this would be done as shown here:
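For example (the archive name fla1 is hypothetical; setting the default archive requires SYSDBA privileges):
SQL> ALTER FLASHBACK ARCHIVE fla1 SET DEFAULT;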
The STATUS column within DBA_FLASHBACK_ARCHIVE will indicate if a default archive has been
established for the database.
The table owner may now manage archiving on individual tables, utilizing the attributes of each one to
which they have access. In this example a table is associated with a specific archive.
In this case archiving is enabled for a table, but the default flashback archive is implicitly selected.
Archiving may be disabled for a table, which will no longer consume space allocated to the archive and
will therefore be dependent upon undo data for any flashback queries issued against it.
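Sketches of these three variants (the table and archive names are hypothetical):
SQL> ALTER TABLE employees FLASHBACK ARCHIVE fla1;   -- use a specific archive
SQL> ALTER TABLE employees FLASHBACK ARCHIVE;        -- use the default archive
SQL> ALTER TABLE employees NO FLASHBACK ARCHIVE;     -- disable archiving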
Note: Nearly all DDL operations which affect the logical structure of the table will be forbidden
once archiving is enabled for a table. The only exception is the ALTER TABLE ... ADD COLUMN command,
which is permitted. If this were not the case, then one could contravene the purpose and intention of
archiving by modifying its logical structure. Illegal DDL operations attempted on such tables will generate
the error ORA-55610: Invalid DDL statement on history-tracked table.
Once archiving is enabled, an internal object is used within the designated tablespace to support the
archive records. The administrator may view these internal objects from the view
DBA_FLASHBACK_ARCHIVE_TABLES.
Database Cloning
What is Cloning?
Database Cloning is a procedure that can be used to create an identical copy of the existing Oracle
database. DBAs sometimes need to clone databases to test backup and recovery strategies or export a
table that was dropped from the production database and import it back into the production database.
Cloning can be done on separate hosts or on the same host and is different from standby database.
Reason for Cloning
In every Oracle development and production environment there will come a need to transport the
entire database from one physical machine to another. This copy may be used for development,
production testing, beta testing, etc, but rest assured that this need will arise and management will ask
you to perform this task quickly.
Cloning can be performed using the following methods:
Cold Cloning
Hot Cloning
RMAN Cloning
Here is a brief explanation of how to perform cloning using each of these three methods.
Cold Cloning
Find out the path and names of the datafiles, control files, and redo log files. If the
database is using a pfile, use an OS command to copy the pfile to a backup location.
Shut down the RIS database; since we are doing cold cloning, we need to stop all DB services.
SQL> shutdown
Copy all data files, control files, and redo log files of RIS database to a target database location.
$ mkdir /u02/RISCLON/oradata
$ cp /u01/RIS/oradata/* /u02/RISCLON/oradata/
Create appropriate directory structure in clone database for dumps and specify them in the parameter
file.
$ mkdir -p /u02/RISCLON/{bdump,udump}
Edit the clone database parameter file and make necessary changes to the clone database
$ cd /u02/RISCLON/
$ vi initRISCLON.ora
db_name=RISCLON
control_files=/u02/RISCLON/oradata/cntrl01.ctl
background_dump_dest=/u02/RISCLON/bdump
user_dump_dest=/u02/RISCLON/udump
.
.
:wq!
Start the clone instance in NOMOUNT mode and re-create the control file for the clone database. Once
the control files are successfully created, open the database with the RESETLOGS option.
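For example:
SQL> alter database open resetlogs;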
If the database is using a pfile, use an OS command to copy the pfile to a backup location.
NOTE: The db_file_name_convert and log_file_name_convert parameters are required only if the source
database directory structure and the clone database directory structure differ.
4. Configure the listener using listener.ora file and start the listener
SID_LIST_LISTENER =
(SID_LIST =
(SID_DESC =
(GLOBAL_DBNAME = RIS)
(ORACLE_HOME = /u01/oracle/product/10.2.0/db_1/)
(SID_NAME =RIS)
)
(SID_DESC =
(GLOBAL_DBNAME = RISCLON)
(ORACLE_HOME = /u02/oracle/product/10.2.0/db_1/)
(SID_NAME =RISCLON)
)
)
5. Add the following information to the tnsnames.ora file.
con_RISCLON =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = 200.168.1.22)(PORT = 1521))
)
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = RISCLON)
)
)
6. Startup the database in NOMOUNT stage and exit.
Are they the same? Well, let's have a look at the following points, which help in finding the differences between
them.
What is a Database Clone?
* A database clone is an activity/procedure which is performed by every DBA on a regular basis, or when
there is a requirement or request to do so from different departments, i.e. test/development teams.
* Cloning is nothing but creating a copy of the production system in a test or development environment,
i.e. having an exact image of the production database in the test area.
* Cloning is a procedure for preparing and creating test or development servers with a copy of the Oracle
production database, for testing upgrades or migrating an existing system to new hardware.
* A cloning process includes a copy of the Oracle Home (directories and binaries) backup and a database
(database-related files) backup to prepare the instance on another server.
* Though it is possible to clone a database on the same server, Oracle doesn't suggest cloning a
database on the same server where the production database is running.
What is a Database Refresh?
* A database refresh is also referred to as a database clone. However, we don't clone the Oracle Home;
rather we clone only the database, as a refresh.
* Refreshing a database is something like applying the changes or updates of the production database to a
database that has already been cloned, i.e. let's say you cloned a database a month back,
and now you are asked to do a refresh of the database; then you will perform a backup of the database
and prepare the clone instance again on the test server. This is nothing but refreshing.
* Refreshing of a particular table, group of tables, schema, or tablespace will be done using traditional
export/import, transportable tablespaces, or Data Pump methods.
* When an Oracle patch is applied on the production system, or in doubt, you have to prepare and clone the
database again with a copy of the Oracle Home (directories and binaries) backup and database (database-related
files) backup to prepare the instance.
* The difference between cloning and refreshing is that the cloning process includes the Oracle Home and
database clone, whereas the refreshing process only includes the database clone.
* In practice, the words clone and refresh are used interchangeably for the sake of convenience.
When and why we Clone a Database?
* Generally the production (PROD) database is cloned for various reasons and needs, i.e. when something is
to be tested or developed before being moved to production.
* It is normal and quite common that whenever there is any change or update to be performed whose impact
or effect on production (PROD) is not known, it is required to be applied and tested on a *NON*-production
database first (TEST or DEV); after confirmation of the change's success, given by the users, the change is
then moved to production.
* A cloned test instance (TEST) for the testing team/environment is used exclusively for testing changes
or issues which would be severe on production. Oracle Support gives the solution as a fix when there is
an issue in the database, and this fix needs to be applied first on the test/development databases.
* A cloned development instance (DEV) for the development team/environment is used for developing
new changes and then deploying them on production.
* A cloned patch instance is used for patching, to know the impact and the time required to apply the
same on production.
Definition of an Oracle Patch: Patches are software programs for individual bug fixes. Oracle
issues product fix software, usually called patches, used to fix a particular problem (bugs,
security weaknesses, performance issues, etc.).
Patches are associated with particular versions of Oracle products. When we apply a patch to the Oracle
software, a small collection of files is replaced to fix certain bugs, and the database version number
doesn't change.
Overview of CPU
CPUs were introduced in January 2005 to provide security fixes.
CPUs are sets of patches containing fixes for security vulnerabilities.
Critical Patch Updates are collections of security fixes for Oracle products. They are available to
customers with valid support contracts.
CPU patches are always cumulative, which means that fixes from previous Oracle security alerts and
Critical Patch Updates are included in the current patch. However, each advisory describes only the security
fixes added since the previous Critical Patch Update advisory. (It is not required to have previous security
patches applied before applying the latest patches.)
See Critical Patch Updates and Security Alerts for information about Oracle Security Advisories.
CPU patches are collections of patches applied to fix multiple security vulnerabilities. If bugs occur after
applying the latest patchset for the current release, Oracle releases CPU patches at regular intervals to
fix those bugs. A CPU patch is based on the latest patchset.
Overview of PSU
PSUs were introduced in July 2009.
A PSU is limited to 25 to 100 new bug fixes.
PSUs are also better tested by Oracle compared to one-off patches.
PSUs are patch sets, but with some major differences with respect to regular patch sets.
Oracle introduced a new method for patching, i.e. Patch Set Updates, or PSUs.
PSUs are cumulative and include all of the security fixes from CPU patches, plus additional fixes. An
Oracle PSU contains recommended bug fixes and "proactive" cumulative patches, so the DBA can
choose to apply all patches in the PSU patch bundle (which includes additional fixes).
Once a PSU patch is applied, we cannot apply a CPU patch (until the database is upgraded to a new base version).
In 10.2.0.4.1, the final 1 indicates the PSU number.
If we have 10.2.0.4, then it will contain all fixes in 10.2.0.3.
So, the fifth number of the database version is incremented for each PSU. All PSUs are denoted by the last
digit (10.2.0.4.1, 10.2.0.4.2). The initial PSU is version 10.2.0.4.1, the next PSU for the release will be
10.2.0.4.2, and so on.
If we choose to apply a CPU, then the last digit will indicate the CPU.
If we choose to apply a PSU, then the last digit will indicate the PSU.
Once a PSU is applied, only PSUs can be applied in future quarters until the database is
upgraded to a new base version.
How can we check the applied patch?
CPU: select * from registry$history;
PSU: opatch lsinventory -bugs_fixed | grep -i PSU
     and/or
     opatch lsinventory -bugs_fixed | grep -i 'DATABASE PSU'
PSUs are referenced by their 5th place in the Oracle version number, which makes them easier to track (e.g.
10.2.0.3.1), and they do not change the version of the Oracle binaries (such as sqlplus, exp/imp, etc.).
Patching is one of the most common tasks performed by DBAs in day-to-day life. Here, we
will discuss the various types of patches which are provided by Oracle. Oracle issues
product fixes for its software, called patches. When we apply a patch to our Oracle software
installation, it updates the executable files, libraries, and object files in the software home directory.
The patch application can also update configuration files and Oracle-supplied SQL schemas. Patches are
applied by using OPatch (a utility supplied by Oracle), OUI, or Enterprise Manager Grid Control.
Oracle patches are of various kinds. Here, we broadly categorize them into two groups.
1.) Patchset:
2.) Patchset Updates:
1.) Patchset: A group of patches forms a patch set. Patchsets are applied by invoking OUI (Oracle
Universal Installer). Patchsets are generally applied for upgrade purposes. This results in a version
change for our Oracle software, for example, from Oracle Database 11.2.0.1.0 to Oracle Database
11.2.0.3.0. We will cover this topic later.
2.) Patchset Updates: Patch Set Updates are proactive cumulative patches containing recommended
bug fixes that are released on a regular and predictable schedule. Oracle categorizes them as follows:
i.) Critical Patch Update (CPU) now refers to the overall release of security fixes each quarter
rather than the cumulative database security patch for the quarter. Think of the CPU as the
overarching quarterly release and not as a single patch .
ii.) Patch Set Updates (PSU) are the same cumulative patches that include both the security fixes and
priority fixes. The key with PSUs is they are minor version upgrades (e.g., 11.2.0.1.1 to
11.2.0.1.2). Once a PSU is applied, only PSUs can be applied in future quarters until the
database is upgraded to a new base version.
iii.) Security Patch Update (SPU) terminology is introduced in the October 2012 Critical Patch
Update as the term for the quarterly security patch. SPU patches are the same as previous CPU
patches, just with a new name. For the database, SPUs cannot be applied once PSUs have been
applied until the database is upgraded to a new base version.
iv.) Bundle Patches are the quarterly patches for Windows and Exadata which include both the
quarterly security patches as well as recommended fixes.
PSUs (Patch Set Updates), CPUs (Critical Patch Updates) and SPUs are applied via the OPatch utility.
How to get Oracle Patches :
We obtain patches and patch sets from My Oracle Support (MOS) . The ability to download a
specific patch is based on the contracts associated to the support identifiers in our My Oracle
Support account. All MOS users are able to search for and view all patches, but we will be
prevented from downloading certain types of patches based on our contracts.
While applying Patchset or patchset upgrades , basically there are two entities in the Oracle Database
environment
i. ) Oracle Database Software
ii.) Oracle Database
Here, we will cover the OPatch utility in detail, along with examples.
OPatch is the recommended (Oracle-supplied) tool that customers are supposed to use in order
to apply or roll back patches. OPatch is platform-specific, and its release is based on the Oracle Universal
Installer version. OPatch resides in $ORACLE_HOME/OPatch. OPatch supports the following:
Applying an interim patch.
Rolling back the application of an interim patch.
Detecting conflict when applying an interim patch after previous interim patches have
been applied. It also suggests the best options to resolve a conflict .
Reporting on installed products and interim patches.
The patch metadata exists in the inventory.xml and action.xml files under
<stage_area>/<patch_id>/etc/config/.
The inventory.xml file has the following information:
Bug number
Unique Patch ID
Date of patch year
Required and Optional components
OS platforms ID
Instance shutdown is required or not
Patch can be applied online or not
Actions
1.) Download the patch:
Login to MetaLink (My Oracle Support).
Click "Patches & Updates" link on top menu.
On the patch search section enter patch number and select the platform of your database.
Click search.
On the search results page, download the zip file.
2.) Opatch version :
Oracle recommends that we use the latest released OPatch, which is available for download
from My Oracle Support. OPatch is compatible only with the version of Oracle Universal Installer
that is installed in the Oracle home. We can list all OPatch commands by using the opatch help
command.
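For example, a couple of commonly used OPatch commands (assuming $ORACLE_HOME/OPatch is in the PATH):
$ opatch version
$ opatch lsinventory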
3.) Stop all the Oracle services :
Before applying a patch, make sure all the Oracle services are down. If they are not down, then
stop the Oracle-related services. Let's crosscheck it:
/u01/app/oracle/oradata
/u01/app/oraInventary/orinv-b4-ptch.tar.gz
/u01/app/oraInventary
SQL> select * from dba_registry_history;
Notes :
i.) If we are using a Data Guard physical standby database, we must install this patch on both the
primary database and the physical standby database.
ii.) While applying the patch, take care of the mount point status. There should be sufficient space.
Compatibility Matrix
Database upgrades are a common but risky task for a DBA if not done properly. Here, I am listing a detailed method of
upgrade with verification and validation.
Below are the minimum versions of the Oracle database software that can be directly upgraded to Oracle 11g Release 2, so before
an upgrade the DBA needs to check this.
Source Database        Target Database
9.2.0.8 or higher      11.2.x
10.1.0.5 or higher     11.2.x
10.2.0.2 or higher     11.2.x
11.1.0.6 or higher     11.2.x
The following database software versions require an indirect upgrade path. In this case the DBA needs to
put in double the effort, because two upgrades are needed.
Source Database ---> Upgrade Path for Target Database--->Target Database
7.3.3 (or lower)   ----> 7.3.4   ---> 9.2.0.8  ----> 11.2.x
8.0.5 (or lower)   ----> 8.0.6   ---> 9.2.0.8  ----> 11.2.x
8.1.7 (or lower)   ----> 8.1.7.4 ---> 10.2.0.4 ----> 11.2.x
9.0.1.3 (or lower) ----> 9.0.1.4 ---> 10.2.0.4 ----> 11.2.x
Oracle Enterprise Manager (OEM) is a web-based tool to manage the Oracle Database. OEM is used to perform
administrative tasks and view performance statistics.
How to use Database Control
a) ORACLE_HOME/bin/emctl start dbconsole [To start DB Control]
b) ORACLE_HOME/bin/emctl status dbconsole [To check status of DB Control]
c) ORACLE_HOME/bin/emctl stop dbconsole   [To stop DB Control]
If you didn't install OEM during the Oracle Database 11g installation, then you need to download and
install it yourself; that will be covered in a separate article.
This article covers the starting and stopping options of OEM.
First, go to the OEM directory $ORACLE_HOME/bin/.
Status OEM
Status OEM
[oracle@orcl bin]$ emctl status dbconsole
Oracle Enterprise Manager 11g Database Control Release 11.2.0.1.0
Copyright (c) 1996, 2009 Oracle Corporation. All rights
reserved.https://orcl.localdomain:1158/em/console/aboutApplication
Oracle Enterprise Manager 11g is not running.
------------------------------------------------------------------
Logs are generated in directory /u01/app/oracle/product/11.2.0/db_1/orcl.localdomain_orcl/sysman/log
Starting OEM
emctl start dbconsole
Oracle Enterprise Manager 11g Database Control Release 11.2.0.1.0
Copyright (c) 1996, 2009 Oracle Corporation. All rights reserved.
https://orcl.localdomain:1158/em/console/aboutApplication
Starting Oracle Enterprise Manager 11g Database Control............ started.
------------------------------------------------------------------
Logs are generated in directory /u01/app/oracle/product/11.2.0/db_1/orcl.localdomain_orcl/sysman/log
Stopping OEM
emctl stop dbconsole
Oracle Enterprise Manager 11g Database Control Release 11.2.0.1.0
Copyright (c) 1996, 2009 Oracle Corporation. All rights reserved.
https://orcl.localdomain:1158/em/console/aboutApplication
Stopping Oracle Enterprise Manager 11g Database Control ...
... Stopped.