Oracle Demo
When a database is started on a database server, Oracle allocates a memory area called the System Global Area (SGA) and starts one or more Oracle processes. This combination of the SGA and the Oracle processes is called an Oracle instance.

Oracle instance background processes:
- Database Writer (DBWn)
- Log Writer (LGWR)
- Checkpoint (CKPT)
- System Monitor (SMON)
- Process Monitor (PMON)
- Archiver (ARCn)
- Recoverer (RECO)

SGA components:
- Database buffer cache
- Redo log buffer
- Shared pool
- Java pool
- Large pool (optional)
- Data dictionary cache
- Other miscellaneous information
From Oracle9i we can change the SGA configuration while the instance is running. With the dynamic SGA infrastructure, the sizes of the a) database buffer cache, b) shared pool and c) large pool can be changed without shutting down the instance.
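As a sketch of the dynamic SGA in action, a component can be resized on the fly (the sizes below are purely illustrative, not recommendations):

```sql
-- Resize dynamic SGA components while the instance is running (Oracle9i+).
ALTER SYSTEM SET db_cache_size = 8M;
ALTER SYSTEM SET shared_pool_size = 64M;

-- Verify the current settings:
SELECT name, value
  FROM v$parameter
 WHERE name IN ('db_cache_size', 'shared_pool_size', 'large_pool_size');
```

Note that the combined size of all SGA components cannot exceed SGA_MAX_SIZE.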
The database buffer cache is divided into:
a) Dirty list: buffers that have been modified and are waiting to be written to disk, and
b) Least recently used (LRU) list: unmodified buffers that can be reused as free buffers.

Oracle9i supports multiple block sizes in a database. The standard block size is the block size used for the SYSTEM tablespace; you specify it by setting the initialization parameter DB_BLOCK_SIZE. Legitimate values are from 2K to 32K. To specify the size of the standard block size cache, you set the initialization parameter DB_CACHE_SIZE. Sizes of the non-standard block size caches are specified by the following parameters:
DB_2K_CACHE_SIZE
DB_4K_CACHE_SIZE
DB_8K_CACHE_SIZE
DB_16K_CACHE_SIZE
DB_32K_CACHE_SIZE

The shared pool portion of the SGA contains three memory structures: a) library cache, b) data dictionary cache and c) control structures. The size of the shared pool can be customized using the parameter SHARED_POOL_SIZE.
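A minimal sketch of using a non-standard block size cache, assuming the standard block size is not 16K (the tablespace name and datafile path below are invented for illustration):

```sql
-- Reserve a cache for 16K blocks first (needs free room under SGA_MAX_SIZE).
ALTER SYSTEM SET db_16k_cache_size = 8M;

-- A tablespace with a non-standard 16K block size can then be created:
CREATE TABLESPACE ts_demo_16k
  DATAFILE '/disk1/oradata/DBWIL/demo16k01.dbf' SIZE 10M
  BLOCKSIZE 16K;
```

Creating a tablespace with a non-standard block size fails unless the matching DB_nK_CACHE_SIZE cache has been configured.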
ROWID

Oracle uses a ROWID datatype to store the address (rowid) of every row in the database. Physical rowids store the addresses of rows in ordinary tables (excluding index-organized tables), clustered tables, table partitions and subpartitions, indexes, and index partitions and subpartitions. Logical rowids store the addresses of rows in index-organized tables.
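Physical rowids can be seen by querying the ROWID pseudocolumn; here is a sketch against the EMP demo table used later in these notes:

```sql
-- Each row's rowid encodes its data object number, relative file,
-- block and row slot.
SELECT rowid, empno, ename
  FROM emp
 WHERE ROWNUM <= 3;
```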
Trace Files:

Each server and background process can write to an associated trace file. When a process detects an internal error, it dumps information about the error to its trace file. If an internal error occurs and information is written to a trace file, the administrator should contact Oracle Support. When one of the Oracle background processes (such as DBWR, LGWR, PMON or SMON) encounters an exception, it writes to a trace file, and the event is also recorded in the alert log. Trace files are also created for diagnostic dump events; an ORA-00600 error, for example, produces a trace file.

Alert Log File:

Each database also has an alert log. The alert log (alert.log) is a chronological record of messages and errors arising from day-to-day database operation, and it contains pointers to trace files and dump files.
These messages include:
- Startups and shutdowns of the instance
- Messages to the operator console
- Errors causing trace files
- CREATE, ALTER and DROP SQL statements on databases, tablespaces and rollback segments
- Errors when a materialized view is refreshed
- ORA-00600 (internal) errors
- ORA-01578 errors (block corruption)
- ORA-00060 errors (deadlocks)

alert.log is a text file that can be opened with any text editor. The directory where it is found is given by the BACKGROUND_DUMP_DEST initialization parameter:

SQL> select value from v$parameter where name = 'background_dump_dest';
If the BACKGROUND_DUMP_DEST parameter is not specified, Oracle writes the alert.log into the $ORACLE_HOME/rdbms/trace directory. Oracle does not consider a transaction committed until LGWR successfully writes the transaction's redo entries and a commit record to the redo log.

UROWID

A single datatype called the universal rowid, or UROWID, supports both logical and physical rowids, as well as rowids of foreign tables such as non-Oracle tables accessed through a gateway. A column of the UROWID datatype can store all kinds of rowids. The value of the COMPATIBLE initialization parameter must be set to 8.1 or higher to use UROWID columns.

You can also influence checkpoints with SQL statements.
SQL> ALTER SYSTEM CHECKPOINT;

directs Oracle to record a checkpoint for the node, while

SQL> ALTER SYSTEM CHECKPOINT GLOBAL;

directs Oracle to record a checkpoint for every node in a cluster. SQL-induced checkpoints are heavyweight: Oracle records the checkpoint in a control file shared by all the redo threads, and also updates the datafile headers.
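A related sketch: checkpoint activity can also be triggered indirectly by forcing a log switch, since Oracle performs checkpoint work at every log switch:

```sql
ALTER SYSTEM CHECKPOINT;      -- explicit, heavyweight checkpoint
ALTER SYSTEM SWITCH LOGFILE;  -- forces a log switch; checkpoint work follows
```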
DEMO ON DATABASE CREATION

Before creating a database we need to set the ENVIRONMENT VARIABLES. These variables can also be set at the $ prompt using the export command, but whenever the user logs out of the session they lose their values. When a user logs into the UNIX account, the local profile is read first and exports all the default environment variables, so declare the environment for Oracle in the .bash_profile file.

$ cd
$ vi .bash_profile
ORACLE_SID=DBWIL                        (an SID should be unique per server)
ORACLE_HOME=/oraeng/app/oracle/product/9.0.1
PATH=$ORACLE_HOME/bin:$PATH
export ORACLE_SID ORACLE_HOME PATH
:wq!                                    (to save the file)

After saving, execute the file to set the environment variables:

$ . .bash_profile

To observe the environment variables for your current session:

$ env
PWD=/users/FmiliB21
ORACLE_SID=DBWIL
USERID=FmiliB21
REMOTEHOST=192.168.0.3
HOSTNAME=oradba12
choice=3
PVM_RSH=/usr/bin/rsh
QTDIR=/usr/lib/qt-2.3.1
LESSOPEN=|/usr/bin/lesspipe.sh %s
dir_1=milinda
dir_2=milinda
XPVM_ROOT=/usr/share/pvm3/xpvm
KDEDIR=/usr
USER=FmiliB21
LS_COLORS=
MACHTYPE=i386-redhat-linux-gnu
scripts=/.exam/TEST/SCRIPTS
d1=disk2
SQLPATH=/.exam/TEST/SCRIPTS
d2=disk3
MAIL=/var/spool/mail/FmiliB21
INPUTRC=/etc/inputrc
LANG=en_US

To create the database, at least the instance should be started (i.e., the SGA should be created and the background processes started). Some PARAMETERS are required to start the INSTANCE for a database. The location of the default PARAMETER file [init.ora] is /oraeng/app/oracle/product/9.0.1/dbs.

NOTE: we have to copy this default file to init<SID>.ora:

$ cd /oraeng/app/oracle/product/9.0.1/dbs
$ cp init.ora initDBWIL.ora
$ vi initDBWIL.ora

##################################################################
# replace DEFAULT with your database name
db_name=DBWIL
instance_name=DBWIL
db_files = 80                               # SMALL
# db_files = 400                            # MEDIUM
# db_files = 1500                           # LARGE
db_file_multiblock_read_count = 8           # SMALL
# db_file_multiblock_read_count = 16        # MEDIUM
# db_file_multiblock_read_count = 32        # LARGE
db_cache_size = 4m
# db_block_buffers = 100                    # SMALL
# db_block_buffers = 550                    # MEDIUM
# db_block_buffers = 3200                   # LARGE
shared_pool_size = 8000000                  # SMALL
# shared_pool_size = 5000000                # MEDIUM
# shared_pool_size = 9000000                # LARGE
log_buffer = 32768                          # SMALL
# log_buffer = 32768                        # MEDIUM
# log_buffer = 163840                       # LARGE
# audit_trail = true                        # if you want auditing
# timed_statistics = true                   # if you want timed statistics
max_dump_file_size = 10240                  # limit trace file size to 5 Meg each
# Uncommenting the line below will cause automatic archiving if archiving has
# been enabled using ALTER DATABASE ARCHIVELOG.
# log_archive_start = true
# log_archive_dest = disk$rdbms:[oracle.archive]
# log_archive_format = "T%TS%S.ARC"

# If using private rollback segments, place lines of the following
# form in each of your instance-specific init.ora files:
# rollback_segments = (name1, name2)

# If using public rollback segments, define how many
# rollback segments each instance will pick up, using the formula
#   # of rollback segments = transactions / transactions_per_rollback_segment
# In this example each instance will grab 40/5 = 8:
# transactions = 40
# transactions_per_rollback_segment = 5

# Global Naming -- enforce that a dblink has same name as the db it connects to
global_names = TRUE
db_domain=ORADBA12

# Edit and uncomment the following line to provide the suffix that will be
# appended to the db_name parameter (separated with a dot) and stored as the
# global database name when a database is created. If your site uses
# Internet Domain names for e-mail, then the part of your e-mail address after
# the '@' is a good candidate for this parameter value.
# global database
# FOR DEVELOPMENT ONLY, ALWAYS TRY TO USE SYSTEM BACKING STORE
# vms_sga_use_gblpagfil = TRUE

# FOR BETA RELEASE ONLY. Enable debugging modes. Note that these can
# adversely affect performance. On some non-VMS ports the db_block_cache_*
# debugging modes have a severe effect on performance.
#_db_block_cache_protect = true                       # memory protect buffers
#event = "10210 trace name context forever, level 2"  # data block checking
#event = "10211 trace name context forever, level 2"  # index block checking
#event = "10235 trace name context forever, level 1"  # memory heap checking
#event = "10049 trace name context forever, level 2"  # memory protect cursors
# define parallel server (multi-instance) parameters
#ifile = ora_system:initps.ora

# define two control files by default
control_files = (/disk1/oradata/DBWIL/controlDBWIL1.ctl,
                 /disk2/oradata/DBWIL/controlDBWIL2.ctl)
background_dump_dest = /disk1/oradata/DBWIL/bdump
user_dump_dest = /disk1/oradata/DBWIL/udump
core_dump_dest = /disk1/oradata/DBWIL/cdump
# Uncomment the following line if you wish to enable the Oracle Trace product
# to trace server activity. This enables scheduling of server collections
# from the Oracle Enterprise Manager Console.
# Also, if the oracle_trace_collection_name parameter is non-null,
# every session will write to the named collection, as well as enabling you
# to schedule future collections from the console.
# oracle_trace_enable = TRUE

# Uncomment the following line, if you want to use some of the new 8.1
# features. Please remember that using them may require some downgrade
# actions if you later decide to move back to 8.0.
compatible = 9.2.0
distributed_transactions = 0
undo_management=AUTO    (undo management is done by Oracle itself; if it is set
                        to undo_management=MANUAL we need to specify the rollback
                        segments in the rollback_segments parameter)
undo_tablespace=UNDO_TBS
undo_retention=900      (undo_retention tells the instance how long, in seconds,
                        committed undo data should be retained in the undo
                        segments; before commit the undo segments hold the old
                        image of the data, which is used for read consistency)
##################################################################
Some of the PARAMETERS you have to set in this file for your database instance:

db_name=DBWIL
control_files=(/disk1/oradata/DBWIL/contDBWIL1.ctl,
               /disk1/oradata/DBWIL/contDBWIL2.ctl)
background_dump_dest=/disk1/oradata/DBWIL/bdump
core_dump_dest=/disk1/oradata/DBWIL/cdump
user_dump_dest=/disk1/oradata/DBWIL/udump
undo_management=AUTO
undo_tablespace=UNDO_TBS
undo_retention=900

According to your parameters you have to create the directory structure:

$ cd /disk1/oradata
$ mkdir DBWIL
$ cd DBWIL
$ mkdir bdump cdump udump

************************ Directories Created ************************

You can see the directories using:

$ ls -l /disk1/oradata/DBWIL
drwxrwxrwx    2 FmiliB21 dba      4096 Dec 26 20:36 bdump
drwxrwxrwx    2 FmiliB21 dba      4096 Dec 26 20:36 cdump
drwxrwxrwx    2 FmiliB21 dba      4096 Dec 26 20:36 udump

NOTE: come back to your home directory using the command

$ cd {enter}

As the parameter file is configured, now we can go ahead and create the database.

$ sqlplus / as sysdba
SQL> startup nomount     (Oracle reads the parameter file and starts the instance accordingly)
SQL> CREATE DATABASE DBWIL
       DATAFILE '/disk1/oradata/DBWIL/system01.dbf' SIZE 90M
       UNDO TABLESPACE UNDO_TBS
         DATAFILE '/disk1/oradata/DBWIL/undotbs01.dbf' SIZE 10M
       DEFAULT TEMPORARY TABLESPACE temp
         TEMPFILE '/disk1/oradata/DBWIL/temp01.dbf' SIZE 5M
       LOGFILE GROUP 1 ('/disk1/oradata/DBWIL/redo1a.log') SIZE 300K,
               GROUP 2 ('/disk1/oradata/DBWIL/redo2a.log') SIZE 300K
       CONTROLFILE REUSE;

Total System Global Area   46494708 bytes
Fixed Size                   279540 bytes
Variable Size              41943040 bytes
Database Buffers            4194304 bytes
Redo Buffers                  77824 bytes
Database is under creation. Wait...

CREATE DATABASE DBWIL
*
ERROR at line 1:
ORA-01501: CREATE DATABASE failed
ORA-01101: database being created currently mounted by some other instance

[ORA-01101 means another instance has already mounted a database with this name. Make sure your ORACLE_SID/db_name is unique on the server and that no other instance is using these files, then retry.]
The database is created successfully. After the database creation we need to execute the following scripts for proper usage of the database.

SQL> @$ORACLE_HOME/rdbms/admin/catalog.sql
SQL> @$ORACLE_HOME/rdbms/admin/catproc.sql
SQL> connect system/manager
SQL> @/oraeng/app/oracle/product/9.0.1/sqlplus/admin/pupbld.sql
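Once the scripts complete, the new database can be sanity-checked from the data dictionary; a quick sketch:

```sql
-- Confirm the database name, creation time and archiving mode:
SELECT name, created, log_mode FROM v$database;

-- Confirm the instance is open:
SELECT instance_name, status FROM v$instance;
```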
*****************************************DONE WITH DATABASE CREATION********************************************
DEMO ON TABLESPACE MANAGEMENT

Selecting status and contents of the Tablespaces.

SQL> select tablespace_name, block_size, extent_management,
     segment_space_management, status, contents
     from dba_tablespaces;

TABLESPACE_NAME  BLOCK_SIZE EXTENT_MAN SEGMEN STATUS  CONTENTS
---------------- ---------- ---------- ------ ------- ---------
SYSTEM                 2048 DICTIONARY MANUAL ONLINE  PERMANENT
UNDOTBS                2048 LOCAL      MANUAL ONLINE  UNDO
TEMP                   2048 LOCAL      MANUAL ONLINE  TEMPORARY
USER_DATA              2048 LOCAL      MANUAL ONLINE  PERMANENT
SAMPLE                 2048 DICTIONARY MANUAL ONLINE  PERMANENT
RCVCAT                 2048 LOCAL      MANUAL ONLINE  PERMANENT
UNDOTBS1               2048 LOCAL      MANUAL ONLINE  UNDO
UNDOTBS_02             2048 LOCAL      MANUAL ONLINE  UNDO
UNDOTBS_01             2048 LOCAL      MANUAL ONLINE  UNDO

9 rows selected.
Creating Tablespaces (USERS, INDEX, TEMP, RBS) for storing different types of segments.

Creating Users Tablespace.

SQL> create tablespace TS_MILIND_USERS
     datafile '/disk2/oradata/milinda/milind_users01.dbf' size 2m;

Tablespace created.

Creating Index Tablespace, DICTIONARY managed.

SQL> create tablespace TS_MILIND_INDEX
     datafile '/disk2/oradata/milinda/milind_index01.dbf' size 2m
     default storage ( maxextents 100 )
     extent management DICTIONARY;

Tablespace created.

Creating Temp Tablespace.

SQL> create temporary tablespace TS_milind_TEMP
     tempfile '/disk2/oradata/milinda/milind_temp01.dbf' size 2m;

Tablespace created.

Creating RBS Tablespace.

SQL> create tablespace TS_MILIND_RBS
     datafile '/disk2/oradata/milinda/milind_rbs01.dbf' size 2m;

Tablespace created.

Select the Tablespace information.

SQL> select tablespace_name, initial_extent, next_extent,
     min_extents, max_extents, pct_increase,
     extent_management, segment_space_management
     from dba_tablespaces;

TABLESPACE_NAME  INITIAL_EXTENT NEXT_EXTENT MIN_EXTENTS MAX_EXTENTS PCT_INCREASE EXTENT_MAN SEGMEN
---------------- -------------- ----------- ----------- ----------- ------------ ---------- ------
TS_MILIND_USERS           65536                       1  2147483645              LOCAL      MANUAL
TS_MILIND_TEMP          1048576     1048576           1                        0 LOCAL      MANUAL
TS_MILIND_INDEX           10240       10240           1         100           50 DICTIONARY MANUAL
TS_MILIND_RBS             65536                       1  2147483645              LOCAL      MANUAL
[remaining tablespaces omitted]

13 rows selected.
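As an aside, a locally managed tablespace with uniform extents is an alternative to the dictionary-managed TS_MILIND_INDEX created above; a hypothetical sketch (the tablespace name and datafile are invented for illustration):

```sql
-- Every extent in this tablespace will be exactly 128K; free space is
-- tracked in a bitmap in the datafile instead of the data dictionary.
CREATE TABLESPACE ts_milind_uniform
  DATAFILE '/disk2/oradata/milinda/milind_uniform01.dbf' SIZE 2M
  EXTENT MANAGEMENT LOCAL UNIFORM SIZE 128K;
```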
Selecting information about DATAFILES.

SQL> select file_name, file_id, tablespace_name, bytes
     from dba_data_files order by file_id;
Adding another datafile to the existing Tablespace (USERS). SQL> alter tablespace TS_MILIND_USERS add datafile '/disk3/oradata/milinda/milind_users02.dbf' size 2m reuse; Tablespace altered. Selecting Datafiles and their sizes of a particular Tablespace. SQL> select tablespace_name, file_name, bytes from dba_data_files where tablespace_name='TS_MILIND_USERS'; TABLESPACE_NAME FILE_NAME BYTES --------------- --------------------------------------- ---------TS_MILIND_USERS /disk2/oradata/milinda/milind_users01.dbf 2097152 TS_MILIND_USERS /disk3/oradata/milinda/milind_users02.dbf 2097152 2 rows selected. Making USERS Tablespace offline. SQL> alter tablespace TS_MILIND_USERS offline; Tablespace altered. Selecting status and contents of the Tablespace. SQL> select tablespace_name, block_size, extent_management, segment_space_management, status, contents from dba_tablespaces;
TABLESPACE_NAME  BLOCK_SIZE EXTENT_MAN SEGMEN STATUS  CONTENTS
---------------- ---------- ---------- ------ ------- ---------
SYSTEM                 2048 DICTIONARY MANUAL ONLINE  PERMANENT
UNDOTBS                2048 LOCAL      MANUAL ONLINE  UNDO
TEMP                   2048 LOCAL      MANUAL ONLINE  TEMPORARY
USER_DATA              2048 LOCAL      MANUAL ONLINE  PERMANENT
SAMPLE                 2048 DICTIONARY MANUAL ONLINE  PERMANENT
RCVCAT                 2048 LOCAL      MANUAL ONLINE  PERMANENT
UNDOTBS1               2048 LOCAL      MANUAL ONLINE  UNDO
UNDOTBS_02             2048 LOCAL      MANUAL ONLINE  UNDO
UNDOTBS_01             2048 LOCAL      MANUAL ONLINE  UNDO
TS_MILIND_USERS        2048 LOCAL      MANUAL OFFLINE PERMANENT
TS_MILIND_TEMP         2048 LOCAL      MANUAL ONLINE  TEMPORARY
TS_MILIND_INDEX        2048 DICTIONARY MANUAL ONLINE  PERMANENT
TS_MILIND_RBS          2048 LOCAL      MANUAL ONLINE  PERMANENT

13 rows selected.

Till Oracle8i we could make the TEMP tablespace temporary:

SQL> alter tablespace TS_milind_TEMP temporary;

[From Oracle9i we can't change TEMPORARY to PERMANENT or vice versa when the tablespace extent management is LOCAL.]
Making INDEX Tablespace read-only.

SQL> alter tablespace TS_MILIND_INDEX read only;

Tablespace altered.

Selecting status and contents of the Tablespaces.

SQL> select tablespace_name, block_size, extent_management,
     segment_space_management, status, contents
     from dba_tablespaces;

TABLESPACE_NAME  BLOCK_SIZE EXTENT_MAN SEGMEN STATUS    CONTENTS
---------------- ---------- ---------- ------ --------- ---------
SYSTEM                 2048 DICTIONARY MANUAL ONLINE    PERMANENT
UNDOTBS                2048 LOCAL      MANUAL ONLINE    UNDO
TEMP                   2048 LOCAL      MANUAL ONLINE    TEMPORARY
USER_DATA              2048 LOCAL      MANUAL ONLINE    PERMANENT
SAMPLE                 2048 DICTIONARY MANUAL ONLINE    PERMANENT
RCVCAT                 2048 LOCAL      MANUAL ONLINE    PERMANENT
UNDOTBS1               2048 LOCAL      MANUAL ONLINE    UNDO
UNDOTBS_02             2048 LOCAL      MANUAL ONLINE    UNDO
UNDOTBS_01             2048 LOCAL      MANUAL ONLINE    UNDO
TS_MILIND_USERS        2048 LOCAL      MANUAL OFFLINE   PERMANENT
TS_MILIND_TEMP         2048 LOCAL      MANUAL ONLINE    TEMPORARY
TS_MILIND_INDEX        2048 DICTIONARY MANUAL READ ONLY PERMANENT
TS_MILIND_RBS          2048 LOCAL      MANUAL ONLINE    PERMANENT

13 rows selected.

Changing the name of a Datafile in a Tablespace
First select the Tablespace name and its related datafiles to be renamed.

SQL> select tablespace_name, file_name, bytes from dba_data_files
     where tablespace_name='TS_MILIND_USERS';

TABLESPACE_NAME FILE_NAME                                     BYTES
--------------- ----------------------------------------- ---------
TS_MILIND_USERS /disk2/oradata/milinda/milind_users01.dbf
TS_MILIND_USERS /disk3/oradata/milinda/milind_users02.dbf

2 rows selected.

[When the Tablespace is OFFLINE, the BYTES column shows NULL]

A. Make the desired Tablespace offline.

SQL> alter tablespace TS_MILIND_USERS offline;
*
ERROR at line 1:
ORA-01539: tablespace 'TS_MILIND_USERS' is not online

[Don't worry! The Tablespace is already offline.]

B. Copy or move the desired datafile to the new location at OS level.

$ cp /disk3/oradata/milinda/.dbf \
     /disk2/oradata/milinda/.dbf

Copied.
Page 21
C. Issue the alter tablespace command.

SQL> alter tablespace TS_MILIND_USERS rename datafile
     '/disk3/oradata/milinda/milind_users02.dbf' to
     '/disk2/oradata/milinda/milind_users02.dbf';

Tablespace altered.

D. Bring the Tablespace online.

SQL> alter tablespace TS_MILIND_USERS online;

Tablespace altered.

Select the Tablespace name and its related datafiles after renaming.

SQL> select tablespace_name, file_name, bytes from dba_data_files
     where tablespace_name='TS_MILIND_USERS';

TABLESPACE_NAME FILE_NAME                                     BYTES
--------------- ----------------------------------------- ---------
TS_MILIND_USERS /disk2/oradata/milinda/milind_users01.dbf   2097152
TS_MILIND_USERS /disk2/oradata/milinda/milind_users02.dbf   2097152

2 rows selected.

Changing the SIZE of a Datafile.

Select the datafile and its size to be changed.

SQL> select tablespace_name, file_name, bytes from dba_data_files
     where tablespace_name='TS_MILIND_USERS';
TABLESPACE_NAME FILE_NAME                                     BYTES
--------------- ----------------------------------------- ---------
TS_MILIND_USERS /disk2/oradata/milinda/milind_users01.dbf   2097152
TS_MILIND_USERS /disk2/oradata/milinda/milind_users02.dbf   2097152

2 rows selected.

Changing the size of a Datafile.

SQL> alter database datafile
     '/disk2/oradata/milinda/milind_users01.dbf' resize 3m;

Database altered.

Select the Datafile and its size, which has changed.

SQL> select tablespace_name, file_name, bytes from dba_data_files
     where tablespace_name='TS_MILIND_USERS';

TABLESPACE_NAME FILE_NAME                                     BYTES
--------------- ----------------------------------------- ---------
TS_MILIND_USERS /disk2/oradata/milinda/milind_users01.dbf   3145728
TS_MILIND_USERS /disk2/oradata/milinda/milind_users02.dbf   2097152

2 rows selected.

Extending the size of a Datafile automatically.

SQL> alter database datafile
     '/disk2/oradata/milinda/milind_users02.dbf'
     autoextend on next 1m maxsize 5m;

Database altered.

Selecting more info about Datafiles.

SQL> select file_name, bytes, autoextensible, increment_by
     from dba_data_files;

FILE_NAME                              AUT INCREMENT_BY
-------------------------------------- --- ------------
/disk1/oradata/UDEMO/system01.dbf      NO             0
/disk1/oradata/UDEMO/UDEMO_index01.dbf NO             0
/disk1/oradata/UDEMO/undotbs01.dbf     NO             0
/disk1/oradata/UDEMO/UDEMO_users01.dbf NO             0
/disk1/oradata/UDEMO/UDEMO_rbs01.dbf   NO             0
/disk1/oradata/UDEMO/UDEMO_users02.dbf YES          512

6 rows selected.

Select the Tablespace information.

SQL> select tablespace_name, initial_extent, next_extent,
     min_extents, max_extents, pct_increase, status
     from dba_tablespaces;
TS-NAME            I-EXT     N-EXT  MIN      MAX-E  PCT STATUS
--------------- --------- --------- ---- ---------- ---- ---------
SYSTEM              10240     10240    1        121   50 ONLINE
UNDOTBS             65536              1 2147483645      ONLINE
TEMP              1048576   1048576    1               0 ONLINE
TS_UDEMO_USERS      65536              1 2147483645      ONLINE
TS_UDEMO_TEMP     1048576   1048576    1               0 ONLINE
TS_UDEMO_INDEX      10240     10240    1        100   50 READ ONLY
TS_UDEMO_RBS        65536              1 2147483645      ONLINE

7 rows selected.
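Autoextension can later be switched off for the same file; a minimal sketch:

```sql
-- Stop the datafile from growing automatically; manual RESIZE still works.
ALTER DATABASE DATAFILE
  '/disk2/oradata/milinda/milind_users02.dbf'
  AUTOEXTEND OFF;
```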
DEMO ON EXTENT MANAGEMENT Select the info about TABLESPACES and their DATAFILES. SQL> select file_name,tablespace_name from dba_data_files; File-Name TsName --------------------------------------------- -------------/disk1/oradata/UDEMO/system01.dbf SYSTEM /disk1/oradata/UDEMO/UDEMO_users01.dbf TS_UDEMO_USERS /disk1/oradata/UDEMO/undotbs01.dbf UNDOTBS
Selecting the STORAGE parameters info about the TS_MY_USERS Tablespace.

SQL> select tablespace_name, initial_extent, next_extent,
     min_extents, max_extents, pct_increase, status
     from dba_tablespaces where tablespace_name='TS_MY_USERS';

TS-NAME            I-EXT     N-EXT  MIN MAX-E  PCT STATUS
--------------- --------- --------- ---- ----- ---- ------
TS_UDEMO_USERS      10240     10240    1   121   50 ONLINE

CREATE table JUNK1 in the TS_MY_USERS Tablespace.

SQL> create table JUNK1 (eno number(4), name varchar2(20))
     tablespace TS_MY_USERS;

Table created.

Try to see the STORAGE parameters info about the TABLESPACE (TS_MY_USERS) and its SEGMENTS from DBA_TABLESPACES and DBA_SEGMENTS.

SQL> select tablespace_name, initial_extent, next_extent,
     min_extents, max_extents, pct_increase, status
     from dba_tablespaces where tablespace_name='TS_MY_USERS';

SQL> select segment_name, tablespace_name, initial_extent,
     next_extent, min_extents, max_extents, pct_increase
     from dba_segments where tablespace_name = 'TS_MY_USERS';
TS-NAME            I-EXT     N-EXT  MIN MAX-E  PCT STATUS
--------------- --------- --------- ---- ----- ---- ------
TS_UDEMO_USERS      10240     10240    1   121   50 ONLINE

SEGMENT-NAME TS-NAME            I-EXT     N-EXT  MIN MAX-E  PCT
------------ --------------- --------- --------- --- ----- ----
BONUS        TS_UDEMO_USERS      10240     10240   1   121   50
SALGRADE     TS_UDEMO_USERS      10240     10240   1   121   50
DUMMY        TS_UDEMO_USERS      10240     10240   1   121   50
JUNK1        TS_UDEMO_USERS      10240     10240   1   121   50
EMP          TS_UDEMO_USERS      10240     10240   1   121   50
DEPT         TS_UDEMO_USERS      10240     10240   1   121   50

6 rows selected.
Try to CREATE another table TEST1 with your own STORAGE parameters.

SQL> create table TEST1 (eno number(4), name varchar2(20))
     storage (initial 100k next 100k minextents 2
              maxextents 30 pctincrease 0)
     tablespace TS_MY_USERS;
Table created.

Now try to see the STORAGE parameters info about the TABLESPACE (TS_MY_USERS) and its SEGMENTS from DBA_TABLESPACES and DBA_SEGMENTS.

SQL> select tablespace_name, initial_extent, next_extent,
     min_extents, max_extents, pct_increase, status
     from dba_tablespaces where tablespace_name='TS_MY_USERS';

SQL> select segment_name, tablespace_name, initial_extent,
     next_extent, min_extents, max_extents, pct_increase
     from dba_segments where tablespace_name = 'TS_MY_USERS';

TS-NAME            I-EXT     N-EXT  MIN MAX-E  PCT STATUS
--------------- --------- --------- ---- ----- ---- ------
TS_UDEMO_USERS      10240     10240    1   121   50 ONLINE

SEGMENT-NAME TS-NAME            I-EXT     N-EXT  MIN MAX-E  PCT
------------ --------------- --------- --------- --- ----- ----
BONUS        TS_UDEMO_USERS      10240     10240   1   121   50
SALGRADE     TS_UDEMO_USERS      10240     10240   1   121   50
DUMMY        TS_UDEMO_USERS      10240     10240   1   121   50
JUNK1        TS_UDEMO_USERS      10240     10240   1   121   50
TEST1        TS_UDEMO_USERS     102400    102400   2    30    0
EMP          TS_UDEMO_USERS      10240     10240   1   121   50
DEPT         TS_UDEMO_USERS      10240     10240   1   121   50

7 rows selected.

Try to INSERT some rows in both tables (JUNK1, TEST1).

SQL> begin
       for i in 1..2000 loop
         insert into junk1 values (i, 'wilshire');
         insert into test1 values (i, 'wilshire');
       end loop;
     end;
     /

Wait...

PL/SQL procedure successfully completed.

Selecting information about these created TABLES.

SQL> select segment_name, bytes, blocks, extents, initial_extent,
     next_extent, min_extents, max_extents
     from dba_segments where tablespace_name='TS_MY_USERS';
SEGMENT-NAME      BYTES  BLKS  EXTS     I-EXT N-EXT  MIN M-EXTS
------------ ---------- ----- ----- --------- ----- ---- ------
BONUS             10240     5     1     10240 #####    1    121
SALGRADE          10240     5     1     10240 #####    1    121
DUMMY             10240     5     1     10240 #####    1    121
JUNK1            112640    55     5     10240 #####    1    121
TEST1            204800   100     2    102400 #####    2     30
EMP               10240     5     1     10240 #####    1    121
DEPT              10240     5     1     10240 #####    1    121
7 rows selected.

Selecting EXTENT info about these SEGMENTS.

SQL> select segment_name, extent_id, file_id, block_id, bytes, blocks
     from dba_extents where tablespace_name='TS_MY_USERS'
     order by extent_id, block_id;

SEGMENT-NAME  EXTENT_ID  FILE_ID  BLOCK_ID     BYTES  BLOCKS
------------ ---------- -------- --------- --------- -------
BONUS                 0        3        12     10240       5
SALGRADE              0        3        17     10240       5
DUMMY                 0        3        22     10240       5
EMP                   0        3         2     10240       5
TEST1                 0        3        32    102400      50
JUNK1                 0        3        27     10240       5
DEPT                  0        3         7     10240       5
JUNK1                 1        3       132     10240       5
TEST1                 1        3        82    102400      50
JUNK1                 2        3       137     20480      10
JUNK1                 3        3       147     30720      15
JUNK1                 4        3       162     40960      20
12 rows selected.

Select the info about the JUNK1 table from DBA_SEGMENTS.

SQL> select segment_name, bytes, blocks, extents, initial_extent,
     next_extent, min_extents, max_extents, pct_increase
     from dba_segments where segment_name = 'JUNK1';

SEGMENT-NAME      BYTES BLOCKS  EXTS     I-EXT N-EXT  MIN MAX-E  PCT
------------ ---------- ------ ----- --------- ----- ---- ----- ----
JUNK1            112640     55     5     10240 55296    1   121   50

Now if we INSERT some more rows in this table, how much FREE SPACE is required?

INSERT some rows in the JUNK1 table segment.

SQL> insert into junk1 select * from junk1;

Inserting the rows.
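One way to answer that question is to compare the segment's NEXT_EXTENT with the largest contiguous free chunk in its tablespace; a sketch:

```sql
-- If largest_free_chunk < next_extent, the next extent allocation
-- will fail (or force the datafile to autoextend).
SELECT s.segment_name,
       s.next_extent,
       (SELECT MAX(f.bytes)
          FROM dba_free_space f
         WHERE f.tablespace_name = s.tablespace_name) AS largest_free_chunk
  FROM dba_segments s
 WHERE s.segment_name = 'JUNK1';
```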
Selecting EXTENT info about the SEGMENTS.

SQL> select segment_name, extent_id, file_id, block_id, bytes, blocks
     from dba_extents where tablespace_name='TS_MY_USERS'
     order by extent_id, block_id;

SEGMENT-NAME  EXTENT_ID  FILE_ID  BLOCK_ID     BYTES  BLOCKS
------------ ---------- -------- --------- --------- -------
BONUS                 0        3        12     10240       5
SALGRADE              0        3        17     10240       5
DUMMY                 0        3        22     10240       5
TEST1                 0        3        32    102400      50
DEPT                  0        3         7     10240       5
EMP                   0        3         2     10240       5
JUNK1                 0        3        27     10240       5
JUNK1                 1        3       132     10240       5
TEST1                 1        3        82    102400      50
JUNK1                 2        3       137     20480      10
JUNK1                 3        3       147     30720      15
JUNK1                 4        3       162     40960      20
JUNK1                 5        3       182     61440      30
JUNK1                 6        3       212     92160      45
14 rows selected.

See the FREE SPACE info about the TS_MY_USERS Tablespace.

SQL> select tablespace_name, file_id, block_id, bytes, blocks
     from dba_free_space where tablespace_name='TS_MY_USERS'
     order by block_id;

TS-NAME          FILE_ID  BLOCK_ID     BYTES  BLOCKS
--------------- -------- --------- --------- -------
TS_UDEMO_USERS         3       257   3670016    1792

Now try to INSERT some rows in the TEST1 table.

SQL> insert into test1 select * from test1;

Inserting the rows. Wait....

5000 rows created.

Selecting EXTENT info about the SEGMENTS.

SQL> select segment_name, extent_id, file_id, block_id, bytes, blocks
     from dba_extents
     where tablespace_name='TS_MY_USERS'
     order by extent_id, block_id;

SEGMENT-NAME  EXTENT_ID  FILE_ID  BLOCK_ID     BYTES  BLOCKS
------------ ---------- -------- --------- --------- -------
BONUS                 0        3        12     10240       5
SALGRADE              0        3        17     10240       5
DUMMY                 0        3        22     10240       5
TEST1                 0        3        32    102400      50
DEPT                  0        3         7     10240       5
EMP                   0        3         2     10240       5
JUNK1                 0        3        27     10240       5
JUNK1                 1        3       132     10240       5
TEST1                 1        3        82    102400      50
JUNK1                 2        3       137     20480      10
TEST1                 2        3       257    102400      50
JUNK1                 3        3       147     30720      15
JUNK1                 4        3       162     40960      20
JUNK1                 5        3       182     61440      30
JUNK1                 6        3       212     92160      45
15 rows selected.

Now we drop the JUNK1 table segment and see the FREE SPACE info.

SQL> drop table JUNK1;

Table dropped.

SQL> select tablespace_name, file_id, block_id, bytes, blocks
     from dba_free_space where tablespace_name='TS_MY_USERS'
     order by block_id;

TS-NAME          FILE_ID  BLOCK_ID     BYTES  BLOCKS
--------------- -------- --------- --------- -------
TS_UDEMO_USERS         3        27     10240       5
TS_UDEMO_USERS         3       132    256000     125
TS_UDEMO_USERS         3       307   3567616    1742

Now create another table with different STORAGE parameters; we will see how the EXTENTS are ALLOCATED.

SQL> create table junk2 (eno number(5), name varchar2(20))
     storage (initial 50k next 50k maxextents 10)
     tablespace TS_MY_USERS;

Table created.

We select info about EXTENT ALLOCATION for the segment and FREE SPACE in the TABLESPACE.
SQL> select segment_name, extent_id, file_id, block_id, bytes, blocks
     from dba_extents where tablespace_name='TS_MY_USERS'
     order by extent_id, block_id;

SQL> select tablespace_name, file_id, block_id, bytes, blocks
     from dba_free_space where tablespace_name='TS_MY_USERS'
     order by block_id;

SEGMENT-NAME  EXTENT_ID  FILE_ID  BLOCK_ID     BYTES  BLOCKS
------------ ---------- -------- --------- --------- -------
EMP                   0        3         2     10240       5
DEPT                  0        3         7     10240       5
BONUS                 0        3        12     10240       5
SALGRADE              0        3        17     10240       5
DUMMY                 0        3        22     10240       5
TEST1                 0        3        32    102400      50
JUNK2                 0        3       307     51200      25
TEST1                 1        3        82    102400      50
TEST1                 2        3       257    102400      50

9 rows selected.

TS-NAME          FILE_ID  BLOCK_ID     BYTES  BLOCKS
--------------- -------- --------- --------- -------
TS_UDEMO_USERS         3        27     10240       5
TS_UDEMO_USERS         3       132    256000     125
TS_UDEMO_USERS         3       332   3516416    1717

Now we see how the EXTENTS are allocated while an INSERT is going on. From one SESSION we keep INSERTING rows into table JUNK2:

SQL> begin
       for i in 1..15000 loop
         insert into junk2 values (i, 'wilshire');
       end loop;
     end;
     /

From ANOTHER SESSION we watch the EXTENT ALLOCATION. Here we are spooling the output to alloc.log.

SQL> SPOOL alloc.log
SQL> select segment_name, bytes, blocks, extents, initial_extent,
     next_extent, max_extents, pct_increase
     from dba_segments where segment_name='JUNK2';
SQL> select segment_name, extent_id, file_id, block_id, bytes, blocks
     from dba_extents where segment_name='JUNK2'
     order by extent_id, block_id;
SQL> select tablespace_name, file_id, block_id, bytes, blocks from dba_free_space where tablespace_name='TS_MY_USERS' order by block_id; Segment-Name BYTES Blocks Extents I-Ext N-Ext Max-E Pct ------------ ---------- ------- ------- --------- -------- -------- ---JUNK2 65536 32 1 51200 ######## Segment-Name EXTENT_ID FILE_ID BLOCK_ID BYTES Blocks ------------ ---------- ---------- ---------- --------- ------JUNK2 0 5 193 65536 32 JUNK2 1 5 353 65536 32 TABLESPACE_NAME FILE_ID BLOCK_ID BYTES Blocks ------------------------------ ---------- ------------------- ------USER_DATA 3 1185 2818048 1376 Segment-Name BYTES Blocks Extents I-Ext N-Ext Max-E Pct ------------ ---------- ------- ------- --------- -------- -------- ---JUNK2 327680 160 5 51200 ########
Page 38
Segment-Name  EXTENT_ID    FILE_ID   BLOCK_ID     BYTES Blocks
------------ ---------- ---------- ---------- --------- ------
JUNK2                 0          5        193     65536     32
JUNK2                 1          5        353     65536     32
JUNK2                 2          5        385     65536     32
JUNK2                 3          5        417     65536     32
JUNK2                 4          5        449     65536     32
JUNK2                 5          5        481     65536     32

TABLESPACE_NAME    FILE_ID   BLOCK_ID      BYTES Blocks
---------------- --------- ---------- ---------- ------
USER_DATA                3       1185    2818048   1376

Segment-Name      BYTES Blocks Extents     I-Ext    N-Ext    Max-E Pct
------------ ---------- ------ ------- --------- -------- -------- ---
JUNK2            393216    192       6     51200          ########

Segment-Name  EXTENT_ID    FILE_ID   BLOCK_ID     BYTES Blocks
------------ ---------- ---------- ---------- --------- ------
JUNK2                 0          5        193     65536     32
JUNK2                 1          5        353     65536     32
JUNK2                 2          5        385     65536     32
JUNK2                 3          5        417     65536     32
JUNK2                 4          5        449     65536     32
JUNK2                 5          5        481     65536     32

TABLESPACE_NAME    FILE_ID   BLOCK_ID      BYTES Blocks
---------------- --------- ---------- ---------- ------
USER_DATA                3       1185    2818048   1376

Segment-Name      BYTES Blocks Extents     I-Ext    N-Ext    Max-E Pct
------------ ---------- ------ ------- --------- -------- -------- ---
JUNK2            393216    192       6     51200          ########

Segment-Name  EXTENT_ID    FILE_ID   BLOCK_ID     BYTES Blocks
------------ ---------- ---------- ---------- --------- ------
JUNK2                 0          5        193     65536     32
JUNK2                 1          5        353     65536     32
JUNK2                 2          5        385     65536     32
JUNK2                 3          5        417     65536     32
JUNK2                 4          5        449     65536     32
JUNK2                 5          5        481     65536     32
TABLESPACE_NAME    FILE_ID   BLOCK_ID      BYTES Blocks
---------------- --------- ---------- ---------- ------
USER_DATA                3       1185    2818048   1376

Segment-Name      BYTES Blocks Extents     I-Ext    N-Ext    Max-E Pct
------------ ---------- ------ ------- --------- -------- -------- ---
JUNK2            393216    192       6     51200          ########

Segment-Name  EXTENT_ID    FILE_ID   BLOCK_ID     BYTES Blocks
------------ ---------- ---------- ---------- --------- ------
JUNK2                 0          5        193     65536     32
JUNK2                 1          5        353     65536     32
JUNK2                 2          5        385     65536     32
JUNK2                 3          5        417     65536     32
JUNK2                 4          5        449     65536     32
JUNK2                 5          5        481     65536     32

TABLESPACE_NAME    FILE_ID   BLOCK_ID      BYTES Blocks
---------------- --------- ---------- ---------- ------
USER_DATA                3       1185    2818048   1376
Segment-Name  EXTENT_ID    FILE_ID   BLOCK_ID     BYTES Blocks
------------ ---------- ---------- ---------- --------- ------
JUNK2                 0          5        193     65536     32
JUNK2                 1          5        353     65536     32
JUNK2                 2          5        385     65536     32
JUNK2                 3          5        417     65536     32
JUNK2                 4          5        449     65536     32
JUNK2                 5          5        481     65536     32

TABLESPACE_NAME    FILE_ID   BLOCK_ID      BYTES Blocks
---------------- --------- ---------- ---------- ------
USER_DATA                3       1185    2818048   1376

Segment-Name      BYTES Blocks Extents     I-Ext    N-Ext    Max-E Pct
------------ ---------- ------ ------- --------- -------- -------- ---
JUNK2            393216    192       6     51200          ########
$ less spool.log

Segment-Name      Bytes Blocks Extents     I-Ext    N-Ext Max-E Pct
------------ ---------- ------ ------- --------- -------- ----- ---
JUNK2             51200     25       1     51200    51200    10  50

Segment-Name  EXTENT_ID    FILE_ID   BLOCK_ID     Bytes Blocks
------------ ---------- ---------- ---------- --------- ------
JUNK2                 0          3        307     51200     25

Segment-Name      Bytes Blocks Extents     I-Ext    N-Ext Max-E Pct
------------ ---------- ------ ------- --------- -------- ----- ---
JUNK2             51200     25       1     51200    51200    10  50

Segment-Name  EXTENT_ID    FILE_ID   BLOCK_ID     Bytes Blocks
------------ ---------- ---------- ---------- --------- ------
JUNK2                 0          3        307     51200     25

Segment-Name      Bytes Blocks Extents     I-Ext    N-Ext Max-E Pct
------------ ---------- ------ ------- --------- -------- ----- ---
JUNK2             51200     25       1     51200    51200    10  50
Segment-Name  EXTENT_ID    FILE_ID   BLOCK_ID     Bytes Blocks
------------ ---------- ---------- ---------- --------- ------
JUNK2                 0          3        307     51200     25

Segment-Name      Bytes Blocks Extents     I-Ext    N-Ext Max-E Pct
------------ ---------- ------ ------- --------- -------- ----- ---
JUNK2             51200     25       1     51200    51200    10  50

Segment-Name  EXTENT_ID    FILE_ID   BLOCK_ID     Bytes Blocks
------------ ---------- ---------- ---------- --------- ------
JUNK2                 0          3        307     51200     25

Segment-Name      Bytes Blocks Extents     I-Ext    N-Ext Max-E Pct
------------ ---------- ------ ------- --------- -------- ----- ---
JUNK2             51200     25       1     51200    51200    10  50

Segment-Name  EXTENT_ID    FILE_ID   BLOCK_ID     Bytes Blocks
------------ ---------- ---------- ---------- --------- ------
JUNK2                 0          3        307     51200     25
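The extent sizes reported above follow the dictionary-managed storage arithmetic: the first extent is INITIAL bytes, the second is NEXT, and each later extent is the previous one grown by PCT_INCREASE percent. A minimal sketch of that rule (a simplification — real Oracle rounds each size up to whole database blocks):

```python
def extent_sizes(initial, next_, pct_increase, n_extents):
    """Model dictionary-managed extent sizing: INITIAL, then NEXT,
    then each further extent grows by PCT_INCREASE percent.
    (Simplified: Oracle actually rounds each size up to whole blocks.)"""
    sizes = []
    for i in range(n_extents):
        if i == 0:
            sizes.append(initial)
        elif i == 1:
            sizes.append(next_)
        else:
            sizes.append(int(sizes[-1] * (1 + pct_increase / 100)))
    return sizes

# JUNK2 in the demo above: initial 51200, next 51200, pctincrease 50
print(extent_sizes(51200, 51200, 50, 4))   # [51200, 51200, 76800, 115200]
```

This also shows why a locally managed tablespace with UNIFORM allocation behaves differently: there every extent would have the same size, regardless of these parameters.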
SQL> select segment_name, tablespace_name, owner, file_id, block_id, status
     from dba_rollback_segs;

Segment-Name Ts-Name  Ownr     FILE_ID  BLOCK_ID Status
------------ -------- ------- -------- --------- -------
SYSTEM       SYSTEM   SYS            1         2 ONLINE
_SYSSMU1$    UNDOTBS  PUBLIC         2        33 OFFLINE
_SYSSMU2$    UNDOTBS  PUBLIC         2        97 OFFLINE
_SYSSMU3$    UNDOTBS  PUBLIC         2       161 OFFLINE
_SYSSMU4$    UNDOTBS  PUBLIC         2       225 OFFLINE
_SYSSMU5$    UNDOTBS  PUBLIC         2       289 OFFLINE
_SYSSMU6$    UNDOTBS  PUBLIC         2       353 OFFLINE
_SYSSMU7$    UNDOTBS  PUBLIC         2       417 OFFLINE
_SYSSMU8$    UNDOTBS  PUBLIC         2       481 OFFLINE
_SYSSMU9$    UNDOTBS  PUBLIC         2       545 OFFLINE
_SYSSMU10$   UNDOTBS  PUBLIC         2       609 OFFLINE

11 rows selected.
Querying more info about Tablespaces.
SQL> select tablespace_name, initial_extent, next_extent, min_extents,
     max_extents, pct_increase, extent_management, allocation_type
     from dba_tablespaces order by 7;

Ts-Name            I-Ext    N-Ext Min      Max-E Pct EXTENT_MAN Alloc-T
--------------- -------- -------- --- ---------- --- ---------- -------
SYSTEM             10240    10240   1        121  50 DICTIONARY USER
TS_UDEMO_USERS     10240    10240   1        121  50 DICTIONARY USER
UNDOTBS            65536            1 2147483645     LOCAL      SYSTEM
TS_UDEMO_TEMP    1048576  1048576   1              0 LOCAL      UNIFORM
TEMP             1048576  1048576   1              0 LOCAL      UNIFORM

Create a DICTIONARY-managed Tablespace for storing ROLLBACK info.
SQL> create tablespace TS_MY_RBS
     datafile '/disk2/oradata/my/my_rbs01.dbf' size 5m reuse
     default storage (initial 100k next 100k maxextents 50)
     extent management dictionary;
Tablespace created.

We create a ROLLBACK segment (my_RBS1) without any STORAGE parameters.
SQL> create rollback segment my_RBS1 tablespace RBS_DATA;
Rollback segment created.

We try to query more info about Rollback Segments.
SQL> select segment_name, owner, tablespace_name, initial_extent, next_extent,
     min_extents, max_extents, status
     from dba_rollback_segs;

Segment-Name Ownr    Ts-Name           I-Ext   N-Ext Min  Max-E Status
------------ ------- --------------- ------- ------- --- ------ -------
SYSTEM       SYS     SYSTEM            51200   51200   2    121 ONLINE
_SYSSMU1$    PUBLIC  UNDOTBS          131072           2  32765 OFFLINE
_SYSSMU2$    PUBLIC  UNDOTBS          131072           2  32765 OFFLINE
_SYSSMU3$    PUBLIC  UNDOTBS          131072           2  32765 OFFLINE
_SYSSMU4$    PUBLIC  UNDOTBS          131072           2  32765 OFFLINE
_SYSSMU5$    PUBLIC  UNDOTBS          131072           2  32765 OFFLINE
_SYSSMU6$    PUBLIC  UNDOTBS          131072           2  32765 OFFLINE
_SYSSMU7$    PUBLIC  UNDOTBS          131072           2  32765 OFFLINE
_SYSSMU8$    PUBLIC  UNDOTBS          131072           2  32765 OFFLINE
_SYSSMU9$    PUBLIC  UNDOTBS          131072           2  32765 OFFLINE
_SYSSMU10$   PUBLIC  UNDOTBS          131072           2  32765 OFFLINE
UDEMO_RBS1   SYS     TS_UDEMO_RBS     102400  102400   2     50 OFFLINE
We try to create another Rollback segment, this time with STORAGE parameters.
SQL> create rollback segment my_RBS2
     tablespace TS_MY_RBS
     storage (initial 150k next 150k minextents 3 maxextents 30);
Rollback segment created.

Selecting STORAGE information about the ROLLBACK segments.
SQL> SELECT segment_name, owner, tablespace_name, initial_extent, next_extent,
     min_extents, max_extents, status
     FROM dba_rollback_segs;

Segment-Name Ownr    Ts-Name           I-Ext   N-Ext Min  Max-E Status
------------ ------- --------------- ------- ------- --- ------ -------
SYSTEM       SYS     SYSTEM            51200   51200   2    121 ONLINE
_SYSSMU1$    PUBLIC  UNDOTBS          131072           2  32765 OFFLINE
_SYSSMU2$    PUBLIC  UNDOTBS          131072           2  32765 OFFLINE
_SYSSMU3$    PUBLIC  UNDOTBS          131072           2  32765 OFFLINE
_SYSSMU4$    PUBLIC  UNDOTBS          131072           2  32765 OFFLINE
_SYSSMU5$    PUBLIC  UNDOTBS          131072           2  32765 OFFLINE
_SYSSMU6$    PUBLIC  UNDOTBS          131072           2  32765 OFFLINE
_SYSSMU7$    PUBLIC  UNDOTBS          131072           2  32765 OFFLINE
_SYSSMU8$    PUBLIC  UNDOTBS          131072           2  32765 OFFLINE
_SYSSMU9$    PUBLIC  UNDOTBS          131072           2  32765 OFFLINE
_SYSSMU10$   PUBLIC  UNDOTBS          131072           2  32765 OFFLINE
UDEMO_RBS1   SYS     TS_UDEMO_RBS     102400  102400   2     50 OFFLINE
UDEMO_RBS2   SYS     TS_UDEMO_RBS     153600  153600   3     30 OFFLINE

13 rows selected.
When a rollback segment is created, it is OFFLINE by DEFAULT. We have to bring it ONLINE.
SQL> alter rollback segment my_RBS1 ONLINE;
Rollback segment altered.

SQL> alter rollback segment my_RBS2 ONLINE;
Rollback segment altered.

NOTE: To bring rollback segments online automatically, set the ROLLBACK_SEGMENTS parameter in the init<SID>.ora file:
rollback_segments = (my_RBS1,my_RBS2)
Otherwise, every time you STARTUP your database you have to bring the ROLLBACK SEGMENTS ONLINE manually.

We try to change the STORAGE parameters of rollback segment my_RBS1. This time we add another parameter called OPTIMAL.
SQL> alter rollback segment my_RBS1 storage (optimal 300k maxextents 100);
Rollback segment altered.

Now we look at some STATISTICS about the Rollback Segments from V$ROLLSTAT.
SQL> select usn, xacts, extents, rssize, optsize, shrinks, hwmsize
     from v$rollstat;

       USN      XACTS Exts     RSSIZE    OPTSIZE   SHRINKS    HWMSIZE
---------- ---------- ---- ---------- ---------- --------- ----------
         0          0    8     407552                    0     407552
        11          0    2     202752     307200         0     202752
        12          0    3     458752                    0     458752
Selecting STORAGE information about the ROLLBACK segments.
SQL> select segment_name, owner, tablespace_name, initial_extent, next_extent,
     min_extents, max_extents, status
     from dba_rollback_segs;
Segment-Name Ownr    Ts-Name           I-Ext   N-Ext Min  Max-E Status
------------ ------- --------------- ------- ------- --- ------ -------
SYSTEM       SYS     SYSTEM            51200   51200   2    121 ONLINE
_SYSSMU1$    PUBLIC  UNDOTBS          131072           2  32765 OFFLINE
_SYSSMU2$    PUBLIC  UNDOTBS          131072           2  32765 OFFLINE
_SYSSMU3$    PUBLIC  UNDOTBS          131072           2  32765 OFFLINE
_SYSSMU4$    PUBLIC  UNDOTBS          131072           2  32765 OFFLINE
_SYSSMU5$    PUBLIC  UNDOTBS          131072           2  32765 OFFLINE
_SYSSMU6$    PUBLIC  UNDOTBS          131072           2  32765 OFFLINE
_SYSSMU7$    PUBLIC  UNDOTBS          131072           2  32765 OFFLINE
_SYSSMU8$    PUBLIC  UNDOTBS          131072           2  32765 OFFLINE
_SYSSMU9$    PUBLIC  UNDOTBS          131072           2  32765 OFFLINE
_SYSSMU10$   PUBLIC  UNDOTBS          131072           2  32765 OFFLINE
UDEMO_RBS1   SYS     TS_UDEMO_RBS     102400  102400   2     50 ONLINE
UDEMO_RBS2   SYS     TS_UDEMO_RBS     153600  153600   3     30 ONLINE

13 rows selected.
We can explicitly assign a ROLLBACK SEGMENT to a transaction.
SQL> SET TRANSACTION USE ROLLBACK SEGMENT my_RBS1;
NOTE: It must be the FIRST statement of the transaction.

Observing the usage of the OPTIMAL setting. Here we need a minimum of 2 SESSIONS. From one SESSION we do some DML operations as user user1, committing in between (so that other rollback segments get used). From another SESSION we watch how the statements use the ONLINE ROLLBACK segments and how a segment AUTO-SHRINKS down to its OPTIMAL size.

From the first session do some DML operations (user1).
SQL> delete from emp_roll where rownum<3501;
SQL> commit;
SQL> insert into emp_roll select * from emp_roll where rownum<4501;
SQL> commit;
SQL> delete from emp_roll where rownum<4501;
SQL> commit;
SQL> insert into emp_roll select * from emp_roll where rownum<3501;

At the same time, from the second session query the info from the V$ROLLSTAT view.
SQL> select a.name, b.xacts, b.extents, b.rssize, b.optsize,
     b.shrinks, b.hwmsize
     from v$rollname a, v$rollstat b
     where a.usn=b.usn;

[Here we execute this statement continuously and record the output into a SPOOL file (rbs_stat.log).]

Name            A-Tr    EXTENTS     RSSIZE   OPTSIZE    SHRINKS    HWMSIZE
--------------- ----- ---------- ---------- --------- ---------- ----------
SYSTEM              0          9     458752                    0     458752
MY_RBS1             0          9     919552    307200          1    1124352
MY_RBS2             0          9    1380352                    0    1380352

Name            A-Tr    EXTENTS     RSSIZE   OPTSIZE    SHRINKS    HWMSIZE
--------------- ----- ---------- ---------- --------- ---------- ----------
SYSTEM              0          9     458752                    0     458752
MY_RBS1             0          9     919552    307200          1    1124352
MY_RBS2             0          9    1380352                    0    1380352

Name            A-Tr    EXTENTS     RSSIZE   OPTSIZE    SHRINKS    HWMSIZE
--------------- ----- ---------- ---------- --------- ---------- ----------
SYSTEM              0          9     458752                    0     458752
MY_RBS1             0          9     919552    307200          1    1124352
MY_RBS2             0          9    1380352                    0    1380352
NOTE: For rollback segment my_RBS1 we saw that whenever a new transaction hits the segment, it first SHRINKS down toward its OPTIMAL size and then grows again as the transaction needs space. The other rollback segment (my_RBS2), which has no OPTIMAL setting, never releases its extents back to the Tablespace, even after the transactions complete.
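The OPTIMAL behaviour can be modelled very simply: inactive trailing extents are deallocated as long as the remaining segment size stays at or above OPTIMAL; with no OPTIMAL set, nothing is ever released. A hedged sketch of that idea (illustrative only — not Oracle's actual algorithm, and `shrinks` here counts released extents, whereas V$ROLLSTAT.SHRINKS counts shrink operations):

```python
def shrink_to_optimal(extent_sizes, optimal):
    """Release trailing extents while the remaining total size stays
    >= optimal; return (kept_extents, released_count).
    With optimal=None (no OPTIMAL set), no extents are released."""
    if optimal is None:
        return list(extent_sizes), 0
    kept = list(extent_sizes)
    released = 0
    while len(kept) > 1 and sum(kept) - kept[-1] >= optimal:
        kept.pop()
        released += 1
    return kept, released

# my_RBS1-like: grown to 9 extents of 100K with OPTIMAL 300K
kept, n = shrink_to_optimal([102400] * 9, 307200)
print(len(kept), n)   # 3 extents kept (307200 bytes), 6 released

# my_RBS2-like: no OPTIMAL, so nothing is given back
kept2, n2 = shrink_to_optimal([153600] * 9, None)
print(len(kept2), n2)   # 9 extents kept, 0 released
```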
See the Rollback Segment statistics from V$ROLLSTAT.
SQL> select a.name, b.xacts, b.extents, b.rssize, b.optsize,
     b.shrinks, b.hwmsize
     from v$rollname a, v$rollstat b
     where a.usn=b.usn;

Name               A-Tr  Exts     RSSIZE    OPTSIZE   SHRINKS    HWMSIZE
------------------ ----- ----- ---------- ---------- --------- ----------
SYSTEM                 0     8     407552                    0     407552
UDEMO_RBS1             0    12    1226752     307200         1    1226752
UDEMO_RBS2             0     9    1380352                    0    1380352

You can also MANUALLY shrink a rollback segment.
SQL> alter rollback segment my_RBS2 SHRINK to 200k;
Rollback segment altered.
(If you do not specify a size, the segment shrinks to its OPTIMAL size if one is set; otherwise to MINEXTENTS.)

See the Rollback Segment statistics from V$ROLLSTAT.
SQL> select a.name, b.xacts, b.extents, b.rssize, b.optsize,
     b.shrinks, b.hwmsize
     from v$rollname a, v$rollstat b
     where a.usn=b.usn;

Name               A-Tr  Exts     RSSIZE    OPTSIZE   SHRINKS    HWMSIZE
------------------ ----- ----- ---------- ---------- --------- ----------
SYSTEM                 0     8     407552                    0     407552
UDEMO_RBS1             0    12    1226752     307200         1    1226752
UDEMO_RBS2             0     2     305152                    1    1380352

Observing PENDING OFFLINE status. When you take a Rollback Segment offline, it does not actually go offline until all ACTIVE transactions in it have completed.

1) Do a transaction from a user session (user1) and leave it in Active mode.
SQL> conn user1/user1
SQL> delete from EMP_ROLL where rownum<100;
99 rows deleted.
SQL> host     (host is the same as !)

From another session take the ACTIVE Rollback segment OFFLINE.
SQL> alter rollback segment my_RBS1 OFFLINE;
Rollback segment altered.

Log in to another session and look at the status of the Rollback Segments in the DBA_ROLLBACK_SEGS and V$ROLLSTAT views.
SQL> conn system/manager
SQL> select segment_name, tablespace_name, status from dba_rollback_segs;
Segment-Name Ts-Name         Status
------------ --------------- -------
SYSTEM       SYSTEM          ONLINE
_SYSSMU1$    UNDOTBS         OFFLINE
_SYSSMU2$    UNDOTBS         OFFLINE
_SYSSMU3$    UNDOTBS         OFFLINE
_SYSSMU4$    UNDOTBS         OFFLINE
_SYSSMU5$    UNDOTBS         OFFLINE
_SYSSMU6$    UNDOTBS         OFFLINE
_SYSSMU7$    UNDOTBS         OFFLINE
_SYSSMU8$    UNDOTBS         OFFLINE
_SYSSMU9$    UNDOTBS         OFFLINE
_SYSSMU10$   UNDOTBS         OFFLINE
UDEMO_RBS1   TS_UDEMO_RBS    OFFLINE
UDEMO_RBS2   TS_UDEMO_RBS    ONLINE

13 rows selected.
Selecting the info from the V$ROLLSTAT view.
SQL> select usn, xacts, rssize, status from v$rollstat;

       USN      XACTS     RSSIZE Status
---------- ---------- ---------- ---------------
         0          0     407552 ONLINE
        12          1     305152 ONLINE
Now we observe a DEFERRED ROLLBACK segment. When a datafile or Tablespace that holds a TABLE segment with an ACTIVE transaction is taken offline, Oracle creates a DEFERRED rollback segment in the SYSTEM Tablespace. This segment holds the transaction info that Oracle could not apply to the offline Tablespace containing the TABLE.
Try to select the segment and Tablespace info for a particular TABLE segment (EMP_ROLL).
SQL> select segment_name, tablespace_name from user_segments
     where segment_name='EMP_ROLL';

SEGMENT_NAME  Ts-Name
------------- ---------------
EMP_ROLL      TS_UDEMO_USERS

To see a DEFERRED ROLLBACK segment you have to:

1) Do a transaction on the EMP_ROLL segment from a user session (user1).
SQL> conn user1/user1
SQL> delete from emp_roll where rownum<100;
99 rows deleted.
SQL> commit;
Commit complete.

2) From another session take the datafile or Tablespace offline in which the EMP_ROLL segment is stored.
SQL> alter tablespace TS_MY_USERS offline;
Tablespace altered.

Now see the info from DBA_SEGMENTS.
SQL> select segment_name, segment_type, tablespace_name from dba_segments
     where segment_type='DEFERRED ROLLBACK';
SQL> create undo tablespace TS_MY_UNDO
     datafile '/disk2/oradata/my/my_undots01.dbf' size 2m reuse;
Tablespace created.

Try to see the Undo parameters for your instance.
SQL> show parameter undo_

NAME                  TYPE    VALUE
--------------------- ------- -----------
undo_management       string  AUTO
undo_retention        integer 9000
undo_suppress_errors  boolean FALSE
undo_tablespace       string  TS_MY_UNDO
Querying more info about the undo segments.
SQL> select segment_name, tablespace_name, initial_extent, next_extent,
     min_extents, max_extents, status
     from dba_rollback_segs order by tablespace_name;
Segment-Name Ts-Name           I-Ext   N-Ext Min      Max-E Status
------------ --------------- ------- ------- --- ---------- -------
SYSTEM       SYSTEM            51200   51200   2        121 ONLINE
_SYSSMU1$    UNDOTBS          131072           2      32765 OFFLINE
_SYSSMU2$    UNDOTBS          131072           2      32765 OFFLINE
_SYSSMU3$    UNDOTBS          131072           2      32765 OFFLINE
_SYSSMU4$    UNDOTBS          131072           2      32765 OFFLINE
_SYSSMU5$    UNDOTBS          131072           2      32765 OFFLINE
_SYSSMU6$    UNDOTBS          131072           2      32765 OFFLINE
_SYSSMU7$    UNDOTBS          131072           2      32765 OFFLINE
_SYSSMU8$    UNDOTBS          131072           2      32765 OFFLINE
_SYSSMU9$    UNDOTBS          131072           2      32765 OFFLINE
_SYSSMU10$   UNDOTBS          131072           2      32765 OFFLINE
_SYSSMU11$   TS_UDEMO_UNDO    131072           2      32765 ONLINE
_SYSSMU12$   TS_UDEMO_UNDO    131072           2      32765 ONLINE
_SYSSMU13$   TS_UDEMO_UNDO    131072           2      32765 ONLINE
_SYSSMU14$   TS_UDEMO_UNDO    131072           2      32765 ONLINE
_SYSSMU15$   TS_UDEMO_UNDO    131072           2      32765 ONLINE
_SYSSMU16$   TS_UDEMO_UNDO    131072           2      32765 ONLINE
_SYSSMU17$   TS_UDEMO_UNDO    131072           2      32765 ONLINE
_SYSSMU18$   TS_UDEMO_UNDO    131072           2      32765 ONLINE
_SYSSMU19$   TS_UDEMO_UNDO    131072           2      32765 ONLINE
_SYSSMU20$   TS_UDEMO_UNDO    131072           2      32765 ONLINE
_SYSSMU21$   TS_UDEMO_UNDOTS  131072           2 2147483645 OFFLINE
_SYSSMU22$   TS_UDEMO_UNDOTS  131072           2 2147483645 OFFLINE
_SYSSMU23$   TS_UDEMO_UNDOTS  131072           2 2147483645 OFFLINE
_SYSSMU24$   TS_UDEMO_UNDOTS  131072           2 2147483645 OFFLINE
_SYSSMU25$   TS_UDEMO_UNDOTS  131072           2 2147483645 OFFLINE
_SYSSMU26$   TS_UDEMO_UNDOTS  131072           2 2147483645 OFFLINE
_SYSSMU27$   TS_UDEMO_UNDOTS  131072           2 2147483645 OFFLINE
_SYSSMU28$   TS_UDEMO_UNDOTS  131072           2 2147483645 OFFLINE
_SYSSMU29$   TS_UDEMO_UNDOTS  131072           2 2147483645 OFFLINE
_SYSSMU30$   TS_UDEMO_UNDOTS  131072           2 2147483645 OFFLINE

31 rows selected.
The UNDO_TABLESPACE parameter is dynamic; we can change it using ALTER SYSTEM.
SQL> alter system set undo_tablespace=TS_MY_UNDOTS;
System altered.

Querying more info about the undo segments.
SQL> select segment_name, tablespace_name, initial_extent, next_extent,
     min_extents, max_extents, status
     from dba_rollback_segs order by tablespace_name;

Segment-Name Ts-Name           I-Ext   N-Ext Min      Max-E Status
------------ --------------- ------- ------- --- ---------- -------
SYSTEM       SYSTEM            51200   51200   2        121 ONLINE
_SYSSMU1$    UNDOTBS          131072           2      32765 OFFLINE
_SYSSMU2$    UNDOTBS          131072           2      32765 OFFLINE
_SYSSMU3$    UNDOTBS          131072           2      32765 OFFLINE
_SYSSMU4$    UNDOTBS          131072           2      32765 OFFLINE
_SYSSMU5$    UNDOTBS          131072           2      32765 OFFLINE
_SYSSMU6$    UNDOTBS          131072           2      32765 OFFLINE
_SYSSMU7$    UNDOTBS          131072           2      32765 OFFLINE
_SYSSMU8$    UNDOTBS          131072           2      32765 OFFLINE
_SYSSMU9$    UNDOTBS          131072           2      32765 OFFLINE
_SYSSMU10$   UNDOTBS          131072           2      32765 OFFLINE
_SYSSMU11$   TS_UDEMO_UNDO    131072           2      32765 OFFLINE
_SYSSMU12$   TS_UDEMO_UNDO    131072           2      32765 OFFLINE
_SYSSMU13$   TS_UDEMO_UNDO    131072           2      32765 OFFLINE
_SYSSMU14$   TS_UDEMO_UNDO    131072           2      32765 OFFLINE
_SYSSMU15$   TS_UDEMO_UNDO    131072           2      32765 OFFLINE
_SYSSMU16$   TS_UDEMO_UNDO    131072           2      32765 OFFLINE
_SYSSMU17$   TS_UDEMO_UNDO    131072           2      32765 OFFLINE
_SYSSMU18$   TS_UDEMO_UNDO    131072           2      32765 OFFLINE
_SYSSMU19$   TS_UDEMO_UNDO    131072           2      32765 OFFLINE
_SYSSMU20$   TS_UDEMO_UNDO    131072           2      32765 OFFLINE
_SYSSMU21$   TS_UDEMO_UNDOTS  131072           2      32765 ONLINE
_SYSSMU22$   TS_UDEMO_UNDOTS  131072           2      32765 ONLINE
_SYSSMU23$   TS_UDEMO_UNDOTS  131072           2      32765 ONLINE
_SYSSMU24$   TS_UDEMO_UNDOTS  131072           2      32765 ONLINE
_SYSSMU25$   TS_UDEMO_UNDOTS  131072           2      32765 ONLINE
_SYSSMU26$   TS_UDEMO_UNDOTS  131072           2      32765 ONLINE
_SYSSMU27$   TS_UDEMO_UNDOTS  131072           2      32765 ONLINE
_SYSSMU28$   TS_UDEMO_UNDOTS  131072           2      32765 ONLINE
_SYSSMU29$   TS_UDEMO_UNDOTS  131072           2      32765 ONLINE
_SYSSMU30$   TS_UDEMO_UNDOTS  131072           2      32765 ONLINE

31 rows selected.
NOTE: The switch operation does not wait for transactions in the old undo Tablespace to commit. If there are any pending transactions in the old undo Tablespace, the old undo Tablespace is placed in PENDING OFFLINE status.
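The switch behaviour described in the note can be sketched as a small state transition: segments in the new undo tablespace come ONLINE immediately, while segments in the old one go OFFLINE, or PENDING OFFLINE if they still hold active transactions. A toy model (purely illustrative, not Oracle internals):

```python
def switch_undo_tablespace(segments, old_ts, new_ts):
    """Model ALTER SYSTEM SET undo_tablespace=new_ts.
    `segments` maps name -> (tablespace, active_txn_count, status);
    returns the same mapping with updated statuses."""
    out = {}
    for name, (ts, active, status) in segments.items():
        if ts == new_ts:
            out[name] = (ts, active, "ONLINE")
        elif ts == old_ts:
            # old segments with live transactions cannot go fully offline yet
            new_status = "PENDING OFFLINE" if active else "OFFLINE"
            out[name] = (ts, active, new_status)
        else:
            out[name] = (ts, active, status)
    return out

segs = {"_SYSSMU11$": ("TS_UDEMO_UNDO", 0, "ONLINE"),
        "_SYSSMU12$": ("TS_UDEMO_UNDO", 1, "ONLINE"),   # active transaction
        "_SYSSMU21$": ("TS_UDEMO_UNDOTS", 0, "OFFLINE")}
after = switch_undo_tablespace(segs, "TS_UDEMO_UNDO", "TS_UDEMO_UNDOTS")
print(after["_SYSSMU12$"][2], "/", after["_SYSSMU21$"][2])
```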
Select EXTENTS info for the UNDO segments.
SQL> select segment_name, tablespace_name, extent_id, file_id, block_id, blocks
     from dba_undo_extents where tablespace_name='TS_MY_UNDOTS';

Segment-Name Ts-Name          EXTENT_ID    FILE_ID  BLOCK_ID Blocks
------------ --------------- ---------- ---------- --------- ------
_SYSSMU30$   TS_UDEMO_UNDOTS          0          5       610     31
_SYSSMU30$   TS_UDEMO_UNDOTS          1          5       641     32
_SYSSMU29$   TS_UDEMO_UNDOTS          0          5       546     31
_SYSSMU29$   TS_UDEMO_UNDOTS          1          5       577     32
_SYSSMU28$   TS_UDEMO_UNDOTS          0          5       482     31
_SYSSMU28$   TS_UDEMO_UNDOTS          1          5       513     32
_SYSSMU27$   TS_UDEMO_UNDOTS          0          5       418     31
_SYSSMU27$   TS_UDEMO_UNDOTS          1          5       449     32
_SYSSMU26$   TS_UDEMO_UNDOTS          0          5       354     31
_SYSSMU26$   TS_UDEMO_UNDOTS          1          5       385     32
_SYSSMU25$   TS_UDEMO_UNDOTS          0          5       290     31
_SYSSMU25$   TS_UDEMO_UNDOTS          1          5       321     32
_SYSSMU24$   TS_UDEMO_UNDOTS          0          5       226     31
_SYSSMU24$   TS_UDEMO_UNDOTS          1          5       257     32
_SYSSMU23$   TS_UDEMO_UNDOTS          0          5       162     31
_SYSSMU23$   TS_UDEMO_UNDOTS          1          5       193     32
_SYSSMU22$   TS_UDEMO_UNDOTS          0          5        98     31
_SYSSMU22$   TS_UDEMO_UNDOTS          1          5       129     32
_SYSSMU21$   TS_UDEMO_UNDOTS          0          5        34     31
_SYSSMU21$   TS_UDEMO_UNDOTS          1          5        65     32

20 rows selected.
Now we look at some STATISTICS about the UNDO segments from V$ROLLSTAT.
SQL> select usn, xacts, extents, rssize, optsize, shrinks, hwmsize
     from v$rollstat;

       USN      XACTS Exts     RSSIZE    OPTSIZE   SHRINKS    HWMSIZE
---------- ---------- ---- ---------- ---------- --------- ----------
         0          0    8     407552                    0     407552
        21          0    2     129024                    0     129024
        22          0    2     129024                    0     129024
        23          0    2     129024                    0     129024
        24          0    2     129024                    0     129024
        25          0    2     129024                    0     129024
        26          0    2     129024                    0     129024
        27          0    2     129024                    0     129024
        28          0    2     129024                    0     129024
        29          0    2     129024                    0     129024
        30          0    2     129024                    0     129024

11 rows selected.
Selecting STORAGE information about the UNDO segments.
SQL> select segment_name, tablespace_name, initial_extent, next_extent,
     min_extents, max_extents, status
     from dba_rollback_segs;
Segment-Name Ts-Name           I-Ext   N-Ext Min      Max-E Status
------------ --------------- ------- ------- --- ---------- -------
SYSTEM       SYSTEM            51200   51200   2        121 ONLINE
_SYSSMU1$    UNDOTBS          131072           2      32765 OFFLINE
_SYSSMU2$    UNDOTBS          131072           2      32765 OFFLINE
_SYSSMU3$    UNDOTBS          131072           2      32765 OFFLINE
_SYSSMU4$    UNDOTBS          131072           2      32765 OFFLINE
_SYSSMU5$    UNDOTBS          131072           2      32765 OFFLINE
_SYSSMU6$    UNDOTBS          131072           2      32765 OFFLINE
_SYSSMU7$    UNDOTBS          131072           2      32765 OFFLINE
_SYSSMU8$    UNDOTBS          131072           2      32765 OFFLINE
_SYSSMU9$    UNDOTBS          131072           2      32765 OFFLINE
_SYSSMU10$   UNDOTBS          131072           2      32765 OFFLINE
_SYSSMU11$   TS_UDEMO_UNDO    131072           2      32765 OFFLINE
_SYSSMU12$   TS_UDEMO_UNDO    131072           2      32765 OFFLINE
_SYSSMU13$   TS_UDEMO_UNDO    131072           2      32765 OFFLINE
_SYSSMU14$   TS_UDEMO_UNDO    131072           2      32765 OFFLINE
_SYSSMU15$   TS_UDEMO_UNDO    131072           2      32765 OFFLINE
_SYSSMU16$   TS_UDEMO_UNDO    131072           2      32765 OFFLINE
_SYSSMU17$   TS_UDEMO_UNDO    131072           2      32765 OFFLINE
_SYSSMU18$   TS_UDEMO_UNDO    131072           2      32765 OFFLINE
_SYSSMU19$   TS_UDEMO_UNDO    131072           2      32765 OFFLINE
_SYSSMU20$   TS_UDEMO_UNDO    131072           2      32765 OFFLINE
_SYSSMU21$   TS_UDEMO_UNDOTS  131072           2      32765 ONLINE
_SYSSMU22$   TS_UDEMO_UNDOTS  131072           2      32765 ONLINE
_SYSSMU23$   TS_UDEMO_UNDOTS  131072           2      32765 ONLINE
_SYSSMU24$   TS_UDEMO_UNDOTS  131072           2      32765 ONLINE
_SYSSMU25$   TS_UDEMO_UNDOTS  131072           2      32765 ONLINE
_SYSSMU26$   TS_UDEMO_UNDOTS  131072           2      32765 ONLINE
_SYSSMU27$   TS_UDEMO_UNDOTS  131072           2      32765 ONLINE
_SYSSMU28$   TS_UDEMO_UNDOTS  131072           2      32765 ONLINE
_SYSSMU29$   TS_UDEMO_UNDOTS  131072           2      32765 ONLINE
_SYSSMU30$   TS_UDEMO_UNDOTS  131072           2      32765 ONLINE

31 rows selected.
Observing the usage of the UNDO SEGMENTS. Here we need a minimum of 2 SESSIONS. From one SESSION we do some DML operations as user SCOTT, committing in between (so that other undo segments get used).
From another SESSION we watch how the UNDO SEGMENTS acquire EXTENTS whenever they need more space.

From the first session do some DML operations (SCOTT).
SQL> delete from emp_roll where rownum<3501;
SQL> commit;
SQL> insert into emp_roll select * from emp_roll where rownum<4501;
SQL> commit;
SQL> delete from emp_roll where rownum<4501;
SQL> commit;
SQL> insert into emp_roll select * from emp_roll where rownum<3501;

At the same time, from the second session query the info from the V$ROLLSTAT view.
SQL> select a.name, b.xacts, b.extents, b.rssize, b.optsize,
     b.shrinks, b.hwmsize
     from v$rollname a, v$rollstat b
     where a.usn=b.usn;

Here we execute this statement continuously and record the output into a SPOOL file (stat.log).

Name         A-Tr  Exts     RSSIZE    OPTSIZE   SHRINKS    HWMSIZE
------------ ----- ----- ---------- ---------- --------- ----------
SYSTEM           0     8     407552                    0     407552
_SYSSMU21$       0     2     129024                    0     129024
_SYSSMU22$       0     2     129024                    0     129024
_SYSSMU23$       0     2     129024                    0     129024
_SYSSMU24$       0     2     129024                    0     129024
_SYSSMU25$       0     2     129024                    0     129024
_SYSSMU26$       0     2     129024                    0     129024
_SYSSMU27$       0     2     129024                    0     129024
_SYSSMU28$       1    10     653312                    0     653312
_SYSSMU29$       0     2     129024                    0     129024
_SYSSMU30$       0     2     129024                    0     129024

11 rows selected.
Now we open the log file and look at the usage of the undo segments.
$ less stat.log

Name         A-Tr  Exts     RSSIZE    OPTSIZE   SHRINKS    HWMSIZE
------------ ----- ----- ---------- ---------- --------- ----------
SYSTEM           0     8     407552                    0     407552
_SYSSMU21$       0     2     129024                    0     129024
_SYSSMU22$       0     2     129024                    0     129024
_SYSSMU23$       0     2     129024                    0     129024
_SYSSMU24$       0     2     129024                    0     129024
_SYSSMU25$       0     2     129024                    0     129024
_SYSSMU26$       0     2     129024                    0     129024
_SYSSMU27$       1     2     129024                    0     129024
_SYSSMU28$       0     2     129024                    0     129024
_SYSSMU29$       0     2     129024                    0     129024
_SYSSMU30$       0     2     129024                    0     129024
Views that show Undo information:
V$UNDOSTAT       - Contains statistics for monitoring and tuning undo space.
V$TRANSACTION    - Contains undo segment information for active transactions.
DBA_UNDO_EXTENTS - Shows the commit time for each extent in the undo tablespace.
DEMO ON FLASHBACK QUERY

For using Flashback Query, first we select some segment info.
SQL> conn U_SCOTT/U_SCOTT
SQL> select * from tab;

TNAME                          TABTYPE  CLUSTERID
------------------------------ ------- ----------
BONUS                          TABLE
DEPT                           TABLE
DUMMY                          TABLE
EMP                            TABLE
EMP1                           TABLE
EMP2                           TABLE
EMP_ROLL                       TABLE
SALGRADE                       TABLE

8 rows selected.

SQL> select count(*) from emp1;

  COUNT(*)
----------
        14

Creating table KEEP_SCN for storing the SCN number, which is used with DBMS_FLASHBACK.
SQL> CREATE TABLE KEEP_SCN (SCN NUMBER);
Table created.

Generating the SCN number using DBMS_FLASHBACK and storing it in the KEEP_SCN table.
SQL> DECLARE
       I NUMBER;
     BEGIN
       I := DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER;
       INSERT INTO KEEP_SCN VALUES (I);
       COMMIT;
     END;
     /
PL/SQL procedure successfully completed.

Selecting the SCN number from the KEEP_SCN table.
SQL> select * from keep_scn;

       SCN
----------
   3111460

Selecting the rows from the table ordered by deptno (note the SALESMAN salaries).
SQL> select * from emp order by deptno;

EMPNO ENAME      JOB        MGR  HIREDATE    SAL  COMM DEPTNO
----- ---------- --------- ----- --------- ----- ----- ------
 7782 CLARK      MANAGER    7839 09-JUN-81  2450           10
 7839 KING       PRESIDENT       17-NOV-81  5000           10
 7934 MILLER     CLERK      7782 23-JAN-82  1300           10
 7369 SMITH      CLERK      7902 17-DEC-80   800           20
 7876 ADAMS      CLERK      7788 12-JAN-83  1100           20
 7902 FORD       ANALYST    7566 03-DEC-81  3000           20
 7788 SCOTT      ANALYST    7566 09-DEC-82  3000           20
 7566 JONES      MANAGER    7839 02-APR-81  2975           20
 7499 ALLEN      SALESMAN   7698 20-FEB-81  3250   300     30
 7698 BLAKE      MANAGER    7839 01-MAY-81  2850           30
 7654 MARTIN     SALESMAN   7698 28-SEP-81  3250  1400     30
 7900 JAMES      CLERK      7698 03-DEC-81   950           30
 7844 TURNER     SALESMAN   7698 08-SEP-81  3250     0     30
 7521 WARD       SALESMAN   7698 22-FEB-81  3250   500     30

14 rows selected.
Delete some of the rows from the EMP1 table and commit.
SQL> delete from emp1 where deptno=20;
5 rows deleted.
SQL> commit;
Commit complete.
SQL> select * from emp1 order by deptno;
EMPNO ENAME      JOB        MGR  HIREDATE    SAL  COMM DEPTNO
----- ---------- --------- ----- --------- ----- ----- ------
 7900 JAMES      CLERK      7698 03-DEC-81   950           30
 7934 MILLER     CLERK      7782 23-JAN-82  1300           10
 7698 BLAKE      MANAGER    7839 01-MAY-81  2850           30
 7782 CLARK      MANAGER    7839 09-JUN-81  2450           10
 7839 KING       PRESIDENT       17-NOV-81  5000           10
 7499 ALLEN      SALESMAN   7698 20-FEB-81  3250   300     30
 7844 TURNER     SALESMAN   7698 08-SEP-81  3250     0     30
 7521 WARD       SALESMAN   7698 22-FEB-81  3250   500     30
 7654 MARTIN     SALESMAN   7698 28-SEP-81  3250  1400     30

9 rows selected.
Select the rows info for the EMP1 table.
SQL> select * from emp1 where deptno=20 order by deptno;
no rows selected

After disabling the Flashback mode, we try to insert the old rows back into the table using a cursor.
SQL> declare
       cursor c1 is select * from emp1 where deptno=20;
       crec emp%rowtype;
     begin
       open c1;
       dbms_flashback.disable;
       loop
         fetch c1 into crec;
         exit when c1%notfound;
         insert into emp1 values (crec.empno, crec.ename, crec.job, crec.mgr,
                                  crec.hiredate, crec.sal, crec.comm, crec.deptno);
       end loop;
       close c1;
     end;
     /
PL/SQL procedure successfully completed.

Now we try to select the records from the table.
SQL> select * from emp1 order by deptno;

EMPNO ENAME      JOB        MGR  HIREDATE    SAL  COMM DEPTNO
----- ---------- --------- ----- --------- ----- ----- ------
 7782 CLARK      MANAGER    7839 09-JUN-81  2450           10
 7839 KING       PRESIDENT       17-NOV-81  5000           10
 7934 MILLER     CLERK      7782 23-JAN-82  1300           10
 7369 SMITH      CLERK      7902 17-DEC-80   800           20
 7876 ADAMS      CLERK      7788 12-JAN-83  1100           20
 7566 JONES      MANAGER    7839 02-APR-81  2975           20
 7788 SCOTT      ANALYST    7566 09-DEC-82  3000           20
 7902 FORD       ANALYST    7566 03-DEC-81  3000           20
 7499 ALLEN      SALESMAN   7698 20-FEB-81  3250   300     30
 7521 WARD       SALESMAN   7698 22-FEB-81  3250   500     30
 7900 JAMES      CLERK      7698 03-DEC-81   950           30
 7844 TURNER     SALESMAN   7698 08-SEP-81  3250     0     30
 7654 MARTIN     SALESMAN   7698 28-SEP-81  3250  1400     30
 7698 BLAKE      MANAGER    7839 01-MAY-81  2850           30

14 rows selected.
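The essence of the flashback demo above — save an SCN, change the data, then read the table as of the saved SCN and copy the old rows back — can be sketched with a tiny versioned store. This is purely illustrative (a toy model, not Oracle code):

```python
class VersionedTable:
    """Toy multi-version store: every committed change is stamped with
    an increasing SCN, and queries can read as of any past SCN."""

    def __init__(self):
        self.scn = 0
        self.versions = {0: {}}          # scn -> {key: row}

    def _commit(self, rows):
        self.scn += 1
        self.versions[self.scn] = rows

    def insert(self, key, row):
        rows = dict(self.versions[self.scn])
        rows[key] = row
        self._commit(rows)

    def delete(self, key):
        rows = dict(self.versions[self.scn])
        rows.pop(key, None)
        self._commit(rows)

    def as_of(self, scn):                # the "flashback query"
        return self.versions[scn]

t = VersionedTable()
t.insert(7369, "SMITH")
t.insert(7499, "ALLEN")
keep_scn = t.scn                  # like storing the SCN in KEEP_SCN
t.delete(7369)                    # like the deptno-20 delete in the demo
print(sorted(t.as_of(keep_scn)))  # old data still visible: [7369, 7499]
print(sorted(t.as_of(t.scn)))     # current data: [7499]
```

Restoring the deleted rows then amounts to reading `as_of(keep_scn)` and re-inserting what is missing — exactly what the cursor loop in the demo does.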
DEMO ON TEMPORARY SEGMENTS (for sorting temporary data)

When processing queries, Oracle often requires temporary workspace for intermediate stages of SQL statement parsing and execution. Oracle automatically allocates this disk space, called a temporary segment. Typically, Oracle requires a temporary segment as a work area for sorting. The following commands may require the use of a temporary segment:
CREATE INDEX
SELECT ... ORDER BY
SELECT DISTINCT ...
SELECT ... GROUP BY
SELECT ... UNION
SELECT ... INTERSECT
SELECT ... MINUS

SORT_AREA_SIZE specifies the maximum amount of memory, in bytes, to use for a sort. After the sort is complete and all that remains is to return the rows, the memory is released down to the size specified by SORT_AREA_RETAINED_SIZE. After the last row is returned, all memory is freed. Increasing SORT_AREA_SIZE improves the efficiency of large sorts. Multiple allocations never exist; there is only one memory area of SORT_AREA_SIZE for each user process at any time.

To observe the TEMPORARY SEGMENTS, first we gather some information.

Selecting info about Tablespaces.
SQL> select tablespace_name, status, contents, logging from dba_tablespaces;

Ts-Name         Status     Contents   LOGGING
--------------- ---------- ---------- ---------
SYSTEM          ONLINE     PERMANENT  LOGGING
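The SORT_AREA_SIZE behaviour can be sketched as a simple decision rule: if the data fits in the sort area, the sort stays in memory; otherwise it spills into a temporary segment and proceeds in multiple sort runs. A rough illustrative model (not Oracle's actual algorithm — the run count is a simplification of classic external sorting):

```python
def sort_plan(data_bytes, sort_area_size):
    """Decide whether a sort fits in SORT_AREA_SIZE or must spill to
    a temporary segment, and roughly how many sort runs that implies.
    (Simplified model of external-sort behaviour.)"""
    if data_bytes <= sort_area_size:
        return ("in-memory", 1)
    runs = -(-data_bytes // sort_area_size)   # ceiling division
    return ("temp segment", runs)

# A small sort fits in memory:
print(sort_plan(1000, 65536))       # ('in-memory', 1)
# EMP_TEST later in this demo: ~6.4 MB of data, sort_area_size = 65536
print(sort_plan(6451200, 65536))    # ('temp segment', 99)
```

This is why the index creation later in this demo needs temporary tablespace space roughly proportional to the size of the data being sorted.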
Now selecting the USERS info.
SQL> select username, default_tablespace, temporary_tablespace from dba_users;

User       Def-Tspace      Temp-Tspace
---------- --------------- ---------------
SYS        SYSTEM          TEMP
SYSTEM     SYSTEM          TEMP
OUTLN      SYSTEM          TEMP
DBSNMP     SYSTEM          TEMP
SCOTT      SYSTEM          TEMP
U_SCOTT    TS_UDEMO_USERS  TS_UDEMO_TEMP
U_STEEVE   TS_UDEMO_USERS  TS_UDEMO_TEMP

7 rows selected.

SQL> select tablespace_name, file_name, bytes from dba_data_files;

Ts-Name         File-Name                                   Bytes
--------------- -------------------------------------- ----------
SYSTEM          /disk1/oradata/UDEMO/system01.dbf        94371840
TS_UDEMO_USERS  /disk1/oradata/UDEMO/UDEMO_users01.dbf   13631488
TS_UDEMO_INDEX  /disk1/oradata/UDEMO/UDEMO_index01.dbf    2097152
TS_UDEMO_TEMP   /disk1/oradata/UDEMO/UDEMO_temp01.dbf     2097152
UNDOTBS         /disk1/oradata/UDEMO/undotbs01.dbf       20971520
TS_UDEMO_UNDO   /disk1/oradata/UDEMO/UDEMO_undo01.dbf     4194304

6 rows selected.

First we create a HUGE TABLE in the SCOTT schema.
SQL> create table emp_test
     storage (initial 100k next 100k maxextents 300)
     as select * from emp;

SQL> set autocommit on
SQL> begin
       for i in 1..14 loop
         insert into emp_test select * from emp_test;
         commit;
       end loop;
     end;
     /

SQL> insert into emp_test select * from emp_test where rownum < 12657;
Inserting More Rows. Wait...
10656 rows created.
Commit complete.

Selecting the TOTAL number of ROWS and the SIZE of the EMP_TEST table.
SQL> select count(*) from emp_test;

  COUNT(*)
----------
    125344

SQL> select segment_name, bytes from user_segments
     where segment_name = 'EMP_TEST';

Segment-Name      Bytes
------------ ----------
EMP_TEST        6451200

Try to see the value of the SORT_AREA_SIZE parameter.
SQL> show parameter sort_area_size

NAME                                 TYPE        VALUE
------------------------------------ ----------- -----
sort_area_size                       integer     65536

Now we try to create an INDEX on the EMP_TEST table in the TS_MY_INDEX Tablespace.
SQL> conn user1/user1
SQL> create index I_EMP_TEST on EMP_TEST(empno,ename,deptno)
     tablespace TS_MY_INDEX;
Creating Index. Wait......

create index I_EMP_TEST on EMP_TEST(empno,ename,deptno) tablespace TS_UDEMO_INDEX
*
ERROR at line 1:
ORA-01652: unable to extend temp segment by 5 in tablespace TS_UDEMO_TEMP

NOTE: The index creation fails because it uses the TEMPORARY Tablespace (TS_MY_TEMP) for SORTING, and there is not enough SPACE there to grow the TEMPORARY SEGMENT. We have to provide sufficient SPACE for the temporary segment by increasing the size of the Temporary Tablespace (TS_MY_TEMP).

First we select the datafiles and their sizes.
SQL> select tablespace_name, file_name, bytes from dba_data_files;

Ts-Name         File-Name                                   Bytes
--------------- -------------------------------------- ----------
SYSTEM          /disk1/oradata/UDEMO/system01.dbf        94371840
TS_UDEMO_USERS  /disk1/oradata/UDEMO/UDEMO_users01.dbf   13631488
TS_UDEMO_INDEX  /disk1/oradata/UDEMO/UDEMO_index01.dbf    2097152
TS_UDEMO_TEMP    /disk1/oradata/UDEMO/UDEMO_temp01.dbf      2097152
UNDOTBS          /disk1/oradata/UDEMO/undotbs01.dbf        20971520
TS_UDEMO_UNDO    /disk1/oradata/UDEMO/UDEMO_undo01.dbf      4194304

6 rows selected.

Now we try to increase the SIZE of the TEMPORARY tablespace by adding a new datafile.

SQL> alter tablespace TS_UDEMO_TEMP
     add datafile '/disk1/oradata/UDEMO/UDEMO_temp02.dbf' size 7m reuse;

Tablespace altered.

Now once more we try to create the INDEX on EMP_TEST in the TS_UDEMO_INDEX tablespace, and we are interested in seeing what is happening at the BACK END while the index is being created (we watch it from ANOTHER session).

From the FIRST session we create the index:

SQL> create index I_EMP_TEST on EMP_TEST(empno,ename,deptno)
     tablespace TS_UDEMO_INDEX;

Index is under creation. Wait...
From ANOTHER SESSION we query the TEMPORARY segments (continuously) and SPOOL the output to a file called temp.log.

SQL> spool temp.log

SQL> select segment_name, segment_type, tablespace_name, extents, bytes
     from   dba_segments
     where  segment_type = 'TEMPORARY' or segment_name = 'I_EMP_TEST';

Segment-Name  Segment_type  Tablespace-Name    Exts      Bytes
------------  ------------  ---------------  ------  ---------
5.2           TEMPORARY     TS_UDEMO_TEMP       466    4777984

Now we open the SPOOL file and try to analyze what is happening.

$ less temp.log

Segment-Name  Segment_type  Tablespace-Name    Exts      Bytes
------------  ------------  ---------------  ------  ---------
5.2           TEMPORARY     TS_UDEMO_TEMP       466    4777984

We have NOTICED: "After SORTING in the TEMPORARY tablespace, Oracle tries to CREATE the index in the INDEX tablespace. While it is writing to the index tablespace, the segment still carries the name TEMPORARY because the index is still under creation."

There is not sufficient SPACE in the INDEX tablespace to extend the segment, so now we have to increase the SIZE of the INDEX tablespace. First we select the datafiles and their sizes.
SQL> select tablespace_name, file_name, bytes from dba_data_files;

Ts-Name          File-Name                                    Bytes
---------------  --------------------------------------  ----------
SYSTEM           /disk1/oradata/UDEMO/system01.dbf         94371840
TS_UDEMO_USERS   /disk1/oradata/UDEMO/UDEMO_users01.dbf    13631488
TS_UDEMO_INDEX   /disk1/oradata/UDEMO/UDEMO_index01.dbf     2097152
TS_UDEMO_TEMP    /disk1/oradata/UDEMO/UDEMO_temp01.dbf      2097152
TS_UDEMO_TEMP    /disk1/oradata/UDEMO/UDEMO_temp02.dbf      7340032
UNDOTBS          /disk1/oradata/UDEMO/undotbs01.dbf        20971520
TS_UDEMO_UNDO    /disk1/oradata/UDEMO/UDEMO_undo01.dbf      4194304

7 rows selected.

Increase the SIZE of the TS_UDEMO_INDEX tablespace.

SQL> alter database datafile '/disk1/oradata/UDEMO/UDEMO_index01.dbf' resize 12m;

Database altered.

Now once more we try to create the INDEX on EMP_TEST in the TS_UDEMO_INDEX tablespace, and again we are interested in seeing what is happening at the BACK END while the index is being created (we watch it from ANOTHER session).

From the FIRST session we create the index:

SQL> create index I_EMP_TEST on EMP_TEST(empno,ename,deptno)
     tablespace TS_UDEMO_INDEX;

From ANOTHER SESSION we query the TEMPORARY segments (continuously) and SPOOL to a file called temp.log.

SQL> spool temp.log
SQL> select segment_name, segment_type, tablespace_name, extents, bytes
     from   dba_segments
     where  segment_type = 'TEMPORARY' or segment_name = 'I_EMP_TEST';

Segment-Name  Segment_type  Tablespace-Name    Exts      Bytes
------------  ------------  ---------------  ------  ---------
5.2           TEMPORARY     TS_UDEMO_TEMP       466    4777984

Now we open the SPOOL file and try to analyze what is happening.

$ less temp.log

Segment-Name  Segment_type  Tablespace-Name    Exts      Bytes
------------  ------------  ---------------  ------  ---------
5.2           TEMPORARY     TS_UDEMO_TEMP       466    4777984

Now we check whether the INDEX has been created or not.
SQL> select table_name, owner, index_name, tablespace_name
     from   dba_indexes
     where  index_name = 'I_EMP_TEST';

TABLE_NAME  Owner    IndexName   Ts-Name
----------  -------  ----------  --------------
EMP_TEST    U_SCOTT  I_EMP_TEST  TS_UDEMO_INDEX
DEMO ON USERS, ROLES, PROFILES

Selecting info about tablespaces.

SQL> select tablespace_name, block_size, extent_management,
            allocation_type, contents, status
     from   dba_tablespaces;

Ts-Name           BLOCK_SIZE  EXTENT_MAN  Alloc-T  Contents   Status
----------------  ----------  ----------  -------  ---------  ------
SYSTEM                  2048  DICTIONARY  USER     PERMANENT  ONLINE
UNDOTBS                 2048  LOCAL       SYSTEM   UNDO       ONLINE
TEMP                    2048  LOCAL       UNIFORM  TEMPORARY  ONLINE
TS_RAHULDB_USERS        2048  LOCAL       SYSTEM   PERMANENT  ONLINE
TS_RAHULDB_TEMP         2048  LOCAL       UNIFORM  TEMPORARY  ONLINE
TS_RAHULDB_UNDO         2048  LOCAL       SYSTEM   UNDO       ONLINE
TS_RAHULDB_INDEX        2048  LOCAL       SYSTEM   PERMANENT  ONLINE

7 rows selected.
Selecting the allocated SIZE for tablespaces.

SQL> select   tablespace_name, sum(bytes)
     from     dba_data_files
     group by tablespace_name;

Ts-Name           SUM(BYTES)
----------------  ----------
Selecting user information.

SQL> select username, default_tablespace, temporary_tablespace,
            account_status, profile
     from   dba_users;

User     Def-Tspace  Temp-Tspace  Acct-Status  PROFILE
-------  ----------  -----------  -----------  -------
SYS      SYSTEM                                DEFAULT
SYSTEM   SYSTEM                                DEFAULT
OUTLN    SYSTEM                                DEFAULT
DBSNMP   SYSTEM                                DEFAULT
Now creating a new user with the name U_SCOTT.

SQL> create user U_SCOTT identified by U_SCOTT
     default tablespace TS_RAHULDB_USERS
     temporary tablespace TS_RAHULDB_TEMP
     quota 2m on TS_RAHULDB_USERS
     quota unlimited on TS_RAHULDB_TEMP;

User created.
Now check whether the user has been created or not.

SQL> select username, default_tablespace, temporary_tablespace,
            account_status, profile
     from   dba_users;

User     Def-Tspace        Temp-Tspace      Acct-Status  PROFILE
-------  ----------------  ---------------  -----------  -------
SYS      SYSTEM                                          DEFAULT
SYSTEM   SYSTEM                                          DEFAULT
OUTLN    SYSTEM                                          DEFAULT
DBSNMP   SYSTEM                                          DEFAULT
U_SCOTT  TS_RAHULDB_USERS  TS_RAHULDB_TEMP  OPEN         DEFAULT
Now checking the quotas assigned to user U_SCOTT.

SQL> select tablespace_name, bytes, blocks, max_blocks, max_bytes
     from   dba_ts_quotas
     where  username='U_SCOTT';

Ts-Name           Bytes  Blocks  MAX_BLOCKS  MAX_BYTES
----------------  -----  ------  ----------  ---------
TS_RAHULDB_TEMP       0       0          -1         -1
TS_RAHULDB_USERS      0       0        1024    2097152

Now checking the profile information.

SQL> select   profile, resource_name, resource_type, limit
     from     dba_profiles
     where    profile='DEFAULT'
     order by 1,3;

PROFILE  RESOURCE_NAME              RESOURCE_TYPE  LIMIT
-------  -------------------------  -------------  ---------
DEFAULT  COMPOSITE_LIMIT            KERNEL         UNLIMITED
DEFAULT  SESSIONS_PER_USER          KERNEL         UNLIMITED
DEFAULT  CPU_PER_SESSION            KERNEL         UNLIMITED
DEFAULT  CPU_PER_CALL               KERNEL         UNLIMITED
DEFAULT  LOGICAL_READS_PER_SESSION  KERNEL         UNLIMITED
DEFAULT  LOGICAL_READS_PER_CALL     KERNEL         UNLIMITED
DEFAULT  IDLE_TIME                  KERNEL         UNLIMITED
DEFAULT  CONNECT_TIME               KERNEL         UNLIMITED
DEFAULT  PRIVATE_SGA                KERNEL         UNLIMITED
DEFAULT  FAILED_LOGIN_ATTEMPTS      PASSWORD       UNLIMITED
DEFAULT  PASSWORD_LIFE_TIME         PASSWORD       UNLIMITED
DEFAULT  PASSWORD_REUSE_TIME        PASSWORD       UNLIMITED
DEFAULT  PASSWORD_REUSE_MAX         PASSWORD       UNLIMITED
DEFAULT  PASSWORD_VERIFY_FUNCTION   PASSWORD       NULL
DEFAULT  PASSWORD_LOCK_TIME         PASSWORD       UNLIMITED
DEFAULT  PASSWORD_GRACE_TIME        PASSWORD       UNLIMITED
16 rows selected.

Create a profile PROF_CLERK and assign it to a user.

SQL> create profile prof_clerk limit
       sessions_per_user      3
       failed_login_attempts  3
       idle_time              5
       connect_time           480;

Profile created.

Try to assign the CREATE SESSION privilege and the PROF_CLERK profile to the U_SCOTT user.

SQL> alter user U_SCOTT profile prof_clerk;

User altered.

SQL> grant create session to U_SCOTT;

Grant succeeded.

Now check whether profile PROF_CLERK is assigned to user U_SCOTT.

SQL> select username, profile from dba_users where username='U_SCOTT';
USERNAME                       PROFILE
------------------------------ -----------------------------
U_SCOTT                        PROF_CLERK

Now creating another profile, PROF_READS.

SQL> create profile prof_reads limit
       logical_reads_per_call     10
       logical_reads_per_session  200;

Profile created.

Now assigning the profile PROF_READS to a new user U_STEEVE and granting the CREATE SESSION privilege.

SQL> create user U_STEEVE identified by U_STEEVE profile prof_reads;

User created.

SQL> grant create session to U_STEEVE;

Grant succeeded.

Now check whether profile PROF_READS is assigned to U_STEEVE.

SQL> select username, default_tablespace, temporary_tablespace,
            account_status, profile
     from   dba_users;

User      Def-Tspace        Temp-Tspace      Acct-Status  PROFILE
--------  ----------------  ---------------  -----------  ----------
SYSTEM    SYSTEM                                          DEFAULT
OUTLN     SYSTEM                                          DEFAULT
DBSNMP    SYSTEM                                          DEFAULT
U_SCOTT   TS_RAHULDB_USERS  TS_RAHULDB_TEMP  OPEN         PROF_CLERK
U_STEEVE                                                  PROF_READS

6 rows selected.
SQL> insert into sqlplus_product_profile
     (product, userid, attribute, char_value)
     values ('SQL*Plus', 'U_TEST', 'HOST', 'DISABLED');

1 row created.

Now try to query the PRODUCT PROFILES (connected as user SYSTEM).

SQL> select product, userid, attribute, char_value
     from   sqlplus_product_profile;

no rows selected

NOTE: Now we test the PRODUCT PROFILE by issuing SQL*Plus commands from another session, logging in as user U_TEST.

SQL> spool abc;
SP2-0544: invalid command: spool

It is not allowing us to spool to a file.

SQL> host;
SP2-0544: invalid command: host

It is not allowing us to use the HOST command either.
Check what privileges have been granted to the U_SCOTT user.

SQL> conn U_SCOTT/U_SCOTT

SQL> select * from session_privs;

PRIVILEGE
--------------
CREATE SESSION

Try to create a table connected to the U_SCOTT schema.

SQL> create table tab1 as select * from dict;
*
ERROR at line 1:
ORA-01031: insufficient privileges

Now granting the CREATE TABLE privilege to user U_SCOTT.

SQL> grant create table to U_SCOTT;

Grant succeeded.

Now checking what privileges user U_SCOTT has.

SQL> select grantee, privilege from dba_sys_privs
     where grantee='U_SCOTT';

GRANTEE      PRIVILEGE
------------  --------------
U_SCOTT       CREATE TABLE
U_SCOTT       CREATE SESSION

Try again to create a table connected to the U_SCOTT schema.

SQL> create table tab1 as select * from dict;

Table created.

Now we grant the CREATE VIEW privilege to another user, U_TEST.

SQL> grant create view to U_TEST;

Grant succeeded.

To see an OBJECT PRIVILEGE demo, we try to SELECT from user U_SCOTT's schema object while connected as user U_TEST.

SQL> conn U_TEST/U_TEST

SQL> select count(*) from U_SCOTT.tab1;
*
ERROR at line 1:
ORA-00942: table or view does not exist
Connecting as user U_SCOTT, we grant SELECT on the TAB1 table to user U_TEST.

SQL> conn U_SCOTT/U_SCOTT

SQL> grant select on tab1 to U_TEST;

Grant succeeded.

Now we try to SELECT from user U_SCOTT's schema object while connected as user U_TEST.

SQL> conn U_TEST/U_TEST
SQL> select count(*) from U_SCOTT.tab1;

  COUNT(*)
----------
       416

Now creating a ROLE ACCT_ROLE and granting system privileges to it.

SQL> create role acct_role identified by acct_role;

Role created.

SQL> grant create any table, create any view, create any synonym
     to acct_role;

Grant succeeded.

Now granting the role ACCT_ROLE to user U_TEST.

SQL> grant acct_role to U_TEST;

Grant succeeded.

Now checking the granted ROLES and their PRIVILEGES for user U_TEST.

SQL> select a.grantee, a.granted_role, b.privilege
     from   dba_role_privs a, dba_sys_privs b
     where  a.granted_role = b.grantee
     and    a.grantee = 'U_TEST';

GRANTEE  GRANTED_ROLE  PRIVILEGE
-------  ------------  ------------------------
U_TEST   CONNECT       CREATE VIEW
U_TEST   CONNECT       CREATE TABLE
U_TEST   CONNECT       ALTER SESSION
U_TEST   CONNECT       CREATE CLUSTER
U_TEST   CONNECT       CREATE SESSION
U_TEST   CONNECT       CREATE SYNONYM
U_TEST   CONNECT       CREATE SEQUENCE
U_TEST   CONNECT       CREATE DATABASE LINK
U_TEST   ACCT_ROLE     CREATE ANY TABLE
U_TEST   ACCT_ROLE     CREATE ANY VIEW
U_TEST   ACCT_ROLE     CREATE ANY SYNONYM
11 rows selected.

Now checking the SYSTEM PRIVILEGES granted directly to user U_TEST.

SQL> select * from dba_sys_privs where grantee='U_TEST';

GRANTEE  PRIVILEGE    ADM
-------  -----------  ---
U_TEST   CREATE VIEW  NO
Introduction to Auditing
Auditing is the monitoring and recording of the actions of selected database users. Auditing is normally used to:
Investigate suspicious activity. For example, if an unauthorized user is deleting data from tables, the security administrator might decide to audit all connections to the database and all successful and unsuccessful deletions of rows from all tables in the database. Monitor and gather data about specific database activities. For example, the database administrator can gather statistics about which tables are being updated, how many logical I/Os are performed, or how many concurrent users connect at peak times.
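The investigation scenario above maps onto two AUDIT statements. A minimal sketch (the scott.emp table name is only an illustration):

```sql
-- Statement auditing: record every connect and disconnect
AUDIT SESSION;

-- Schema object auditing: record successful and unsuccessful
-- deletions from one specific table (table name hypothetical)
AUDIT DELETE ON scott.emp;
```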
Features of Auditing
This section outlines the features of the Oracle auditing mechanism.
Types of Auditing
Statement auditing: The selective auditing of SQL statements with respect to only the type of statement, not the specific schema objects on which it operates. Statement auditing options are typically broad, auditing the use of several types of related actions for each option. For example, AUDIT TABLE tracks several DDL statements regardless of the table on which they are issued. You can set statement auditing to audit selected users or every user in the database.

Privilege auditing: The selective auditing of the use of powerful system privileges to perform corresponding actions, such as AUDIT CREATE TABLE. Privilege auditing is more focused than statement auditing because it audits only the use of the target privilege. You can set privilege auditing to audit a selected user or every user in the database.

Schema object auditing: The selective auditing of specific statements on a particular schema object, such as AUDIT SELECT ON employees. Schema object auditing is very focused, auditing only a specific statement on a specific schema object. Schema object auditing always applies to all users of the database.

Fine-grained auditing: Allows the monitoring of data access based on content.
Focus of Auditing
Oracle allows audit options to be focused or broad. You can audit:
Successful statement executions, unsuccessful statement executions, or both
Statement executions once in each user session or once every time the statement is executed
Activities of all users or of a specific user
Audit records include information such as the operation that was audited, the user performing the operation, and the date and time of the operation. Audit records can be stored in either a data dictionary table, called the database audit trail, or an operating system audit trail. The database audit trail is a single table named SYS.AUD$ in the SYS schema of each Oracle database's data dictionary. Several predefined views are provided to help you use the information in this table. The audit trail records can contain different types of information, depending on the events audited and the auditing options set. The following information is always included in each audit trail record, if the information is meaningful to the particular audit action:
1) The user name
2) The session identifier
3) The terminal identifier
4) The name of the schema object accessed
5) The operation performed or attempted
6) The completion code of the operation
7) The date and time stamp
8) The system privileges used
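As a sketch, these fields can be read through the predefined DBA_AUDIT_TRAIL view over SYS.AUD$ (column names as documented for Oracle9i; the contents of your audit trail will differ):

```sql
SELECT username,      -- the user name
       sessionid,     -- the session identifier
       terminal,      -- the terminal identifier
       obj_name,      -- the schema object accessed
       action_name,   -- the operation performed or attempted
       returncode,    -- the completion code (0 = success)
       timestamp      -- the date and time stamp
FROM   dba_audit_trail
ORDER BY timestamp;
```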
The operating system audit trail is encoded and not directly readable, but it can be decoded using data dictionary tables and error messages.
Action code describes the operation performed or attempted. The AUDIT_ACTIONS data dictionary table contains a list of these codes and their descriptions. Privileges used describes any system privileges used to perform the operation. The SYSTEM_PRIVILEGE_MAP table lists all of these codes and their descriptions. Completion code describes the result of the attempted operation. Successful operations return a value of zero, and unsuccessful operations return the Oracle error code describing why the operation was unsuccessful.
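A sketch of how these numeric codes can be decoded, assuming the DBA_AUDIT_TRAIL view described above:

```sql
-- Translate each audit record's action code into its description
SELECT aud.username,
       act.name    AS action_description,  -- from AUDIT_ACTIONS
       aud.returncode                      -- 0 = success, else the ORA- error code
FROM   dba_audit_trail aud,
       audit_actions   act
WHERE  aud.action = act.action;
```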
Mechanisms of Auditing

This section explains the mechanisms used by the Oracle auditing features.
When Are Audit Records Generated?
The recording of audit information can be enabled or disabled. This functionality allows any authorized database user to set audit options at any time but reserves control of recording audit information for the security administrator. When auditing is enabled in the database, an audit record is generated during the execute phase of statement execution. SQL statements inside PL/SQL program units are individually audited, as necessary, when the program unit is executed. The generation and insertion of an audit trail record is independent of a user's transaction. Therefore, even if a user's transaction is rolled back, the audit trail record remains committed.
Note: Operations by the SYS user and by users connected through SYSDBA or SYSOPER can be fully audited with the AUDIT_SYS_OPERATIONS initialization parameter. Successful SQL statements from SYS are audited indiscriminately.
The audit records for sessions established by the user SYS or connections with administrative privileges are sent to an operating system location. Sending them to a location separate from the usual database audit trail in the SYS schema provides for greater auditing security.
See Also:
Oracle9i Database Administrator's Guide for instructions on enabling and disabling auditing
Chapter 14, "SQL, PL/SQL, and Java" for information about the different phases of SQL statement processing and shared SQL
Regardless of whether database auditing is enabled, Oracle always records some database-related actions into the operating system audit trail:
At instance startup, an audit record is generated that details the operating system user starting the instance, the user's terminal identifier, the date and time stamp, and whether database auditing was enabled or disabled. This information is recorded into the operating system audit trail because the database audit trail is not available until startup has successfully completed. Recording the state of database auditing at startup also prevents an administrator from restarting a database with database auditing disabled in order to perform unaudited actions.

At instance shutdown, an audit record is generated that details the operating system user shutting down the instance, the user's terminal identifier, and the date and time stamp.

During connections with administrator privileges, an audit record is generated that details the operating system user connecting to Oracle with administrator privileges. This provides accountability for users connected with administrator privileges.
On operating systems that do not make an audit trail accessible to Oracle, these audit trail records are placed in an Oracle audit trail file in the same directory as background process trace files.
See Also: Your operating system specific Oracle documentation for more information about the operating system audit trail
Statement and privilege audit options in effect at the time a database user connects to the database remain in effect for the duration of the session. A session does not see the effects of statement or privilege audit options being set or changed. The modified statement or privilege audit options take effect only when the current session is ended and a new session is created. In contrast, changes to schema object audit options become effective for current sessions immediately.
Audit in a Distributed Database
Auditing is site autonomous. An instance audits only the statements issued by directly connected users. A local Oracle node cannot audit actions that take place in a remote database. Because remote connections are established through the user account of a database link, the remote Oracle node audits the statements issued through the database link's connection.
See Also: Oracle9i Database Administrator's Guide
Oracle allows audit trail records to be directed to an operating system audit trail if the operating system makes such an audit trail available to Oracle. On other operating systems, these audit records are written to a file outside the database, with a format similar to other Oracle trace files.
See Also: Your operating system specific Oracle documentation, to see if this feature has been implemented on your operating system
Oracle allows certain actions that are always audited to continue, even when the operating system audit trail (or the operating system file containing audit records) is unable to record the audit record. The usual cause of this is that the operating system audit trail or the file system is full and unable to accept new records. System administrators configuring operating system auditing should ensure that the audit trail or the file system does not fill completely. Most operating systems provide administrators with sufficient information and warning to ensure this does not occur. Note, however, that configuring auditing to use the database audit trail removes this vulnerability, because the Oracle server prevents audited events from occurring if the audit trail is unable to accept the database audit record for the statement.
Statement Auditing
Statement auditing is the selective auditing of related groups of statements that fall into two categories:
DDL statements, regarding a particular type of database structure or schema object, but not a specifically named structure or schema object (for example, AUDIT TABLE audits all CREATE and DROP TABLE statements)
DML statements, regarding a particular type of database structure or schema object, but not a specifically named structure or schema object (for example, AUDIT SELECT TABLE audits all SELECT ... FROM TABLE/VIEW statements, regardless of the table or view)
Statement auditing can be broad or focused, auditing the activities of all database users or the activities of only a select list of database users.
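For example, the same statement option can be set broadly or for a single user (the user name here is illustrative):

```sql
-- Broad: audit CREATE/DROP/TRUNCATE TABLE statements for all users
AUDIT TABLE;

-- Focused: audit all SELECT ... FROM statements, but only for user SCOTT
AUDIT SELECT TABLE BY scott;
```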
Privilege Auditing
Privilege auditing is the selective auditing of the statements allowed using a system privilege. For example, auditing of the SELECT ANY TABLE system privilege audits users' statements that are executed using the SELECT ANY TABLE system privilege. You can audit the use of any system privilege. In all cases of privilege auditing, owner privileges and schema object privileges are checked before system privileges. If the owner and schema object privileges suffice to permit the action, the action is not audited. If similar statement and privilege audit options are both set, only a single audit record is generated. For example, if the statement clause TABLE and the system privilege CREATE TABLE are both audited, only a single audit record is generated each time a table is created. Privilege auditing is more focused than statement auditing because each option audits only specific types of statements, not a related list of statements. For example, the statement auditing clause TABLE audits CREATE TABLE, ALTER TABLE, and DROP TABLE statements, while the privilege auditing option CREATE TABLE audits only CREATE TABLE statements. This is because only the CREATE TABLE statement requires the CREATE TABLE privilege. Like statement auditing, privilege auditing can audit the activities of all database users or the activities of a select list of database users.
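A short sketch of enabling and removing privilege auditing (the user name is illustrative):

```sql
-- Audit any statement that relies on the SELECT ANY TABLE system privilege
AUDIT SELECT ANY TABLE;

-- Audit use of the CREATE TABLE privilege by one user only
AUDIT CREATE TABLE BY scott;

-- Turn that option off again
NOAUDIT CREATE TABLE BY scott;
```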
Schema Object Auditing

Schema object auditing is the selective auditing of specific statements on a particular schema object. Statements that reference clusters, database links, indexes, or synonyms are not audited directly. However, you can audit access to these schema objects indirectly by auditing the operations that affect the base table. Schema object audit options are always set for all users of the database. These options cannot be set for a specific list of users. You can set default schema object audit options for all auditable schema objects.
See Also: Oracle9i SQL Reference for information about auditable schema objects
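A sketch of object-level and default audit options (the hr.employees table name is hypothetical):

```sql
-- Audit these statements on one specific table, for all users
AUDIT SELECT, INSERT, UPDATE, DELETE ON hr.employees;

-- Default audit option, applied to schema objects created afterwards
AUDIT SELECT ON DEFAULT;
```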
Schema Object Audit Options for Views and Procedures
Views and procedures (including stored functions, packages, and triggers) reference underlying schema objects in their definition. Therefore, auditing with respect to views and procedures has several unique characteristics. Multiple audit records can be generated as the result of using a view or a procedure: The use of the view or procedure is subject to enabled audit options, and the SQL statements issued as a result of using the view or procedure are subject to the enabled audit options of the base schema objects (including default audit options). Consider the following series of SQL statements:
AUDIT SELECT ON employees;

CREATE VIEW employees_departments AS
  SELECT employee_id, last_name, department_id
  FROM   employees, departments
  WHERE  employees.department_id = departments.department_id;

AUDIT SELECT ON employees_departments;

SELECT * FROM employees_departments;
As a result of the query on employees_departments, two audit records are generated: one for the query on the employees_departments view and one for the query on the base table employees (indirectly through the employees_departments view). The query on the base table departments does not generate an audit record because the SELECT audit option for this table is not enabled. All audit records pertain to the user that queried the employees_departments view. The audit options for a view or procedure are determined when the view or procedure is first used and placed in the shared pool. These audit options remain set until the view or procedure is flushed from, and subsequently replaced in, the shared pool. Auditing a schema object invalidates that schema object in the cache and causes it to be reloaded. Any changes to the audit options of base schema objects are not observed by views and procedures in the shared pool.
Continuing with the previous example, if auditing of SELECT statements is turned off for the employees table, use of the employees_departments view no longer generates an audit record for the employees table.
Fine-Grained Auditing
Fine-grained auditing allows the monitoring of data access based on content. A built-in audit mechanism in the database prevents users from by-passing the audit. Oracle triggers can potentially monitor DML actions such as INSERT,UPDATE, and DELETE. However, monitoring on SELECT is costly and might not work for certain cases. In addition, users might want to define their own alert action in addition to just inserting an audit record into the audit trail. This feature provides an extensible interface to audit SELECT statements on tables and views. The DBMS_FGA package administers these value-based audit policies. Using DBMS_FGA, the security administrator creates an audit policy on the target table. If any of the rows returned from a query block matches the audit condition (these rows are referred to as interested rows), then an audit event entry, including username, SQL text, bind variable, policy name, session ID, time stamp, and other attributes, is inserted into the audit trail. As part of the extensibility framework, administrators can also optionally define an appropriate event handler, an audit event handler, to process the event; for example, the audit event handler could send an alert page to the administrator.
See Also: Oracle9i Application Developer's Guide - Fundamentals
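A minimal DBMS_FGA sketch, assuming a hypothetical HR.EMPLOYEES table with a SALARY column; queries whose returned rows satisfy the audit condition generate fine-grained audit entries:

```sql
BEGIN
  DBMS_FGA.ADD_POLICY(
    object_schema   => 'HR',             -- hypothetical schema
    object_name     => 'EMPLOYEES',      -- hypothetical table
    policy_name     => 'SAL_ACCESS',     -- our own policy name
    audit_condition => 'salary > 10000', -- rows that trigger auditing
    audit_column    => 'salary');        -- only when this column is selected
END;
/
```

The resulting events can then be inspected through the fine-grained audit trail view DBA_FGA_AUDIT_TRAIL.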
Auditing Successful and Unsuccessful Statement Executions
For statement, privilege, and schema object auditing, Oracle allows the selective auditing of successful executions of statements, unsuccessful attempts to execute statements, or both. Therefore, you can monitor actions even if the audited statements do not complete successfully. You can audit an unsuccessful statement execution only if a valid SQL statement is issued but fails because of lack of proper authorization or because it references a nonexistent schema object. Statements that failed to execute because they simply were not valid cannot be audited. For example, an enabled privilege auditing option set to audit unsuccessful statement executions audits statements that use the target system privilege but have failed for other reasons (such as when CREATE TABLE is set but a CREATE TABLE statement fails due to lack of quota for the specified tablespace). Using either form of the AUDIT statement, you can include:
The WHENEVER SUCCESSFUL clause, to audit only successful executions of the audited statement
The WHENEVER NOT SUCCESSFUL clause, to audit only unsuccessful executions of the audited statement
Neither of the previous clauses, to audit both successful and unsuccessful executions of the audited statement
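The three forms look like this (the statement option and user name are illustrative):

```sql
AUDIT INSERT TABLE BY scott WHENEVER SUCCESSFUL;
AUDIT INSERT TABLE BY scott WHENEVER NOT SUCCESSFUL;
AUDIT INSERT TABLE BY scott;   -- both successful and unsuccessful
```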
BY SESSION and BY ACCESS Clauses of Audit Statement
Most auditing options can be set to indicate how audit records should be generated if the audited statement is issued multiple times in a single user session. This section describes the distinction between the BY SESSION and BY ACCESS clauses of the AUDIT statement.
See Also: Oracle9i SQL Reference
BY SESSION
For any type of audit (schema object, statement, or privilege), BY SESSION inserts only one audit record in the audit trail, for each user and schema object, during the session that includes an audited action. A session is the time between when a user connects to and disconnects from an Oracle database.
BY SESSION Example 1
The SELECT TABLE statement auditing option is set BY SESSION.

JWARD connects to the database, issues five SELECT statements against the table named departments, and then disconnects from the database.

SWILLIAMS connects to the database, issues three SELECT statements against the table employees, and then disconnects from the database.

In this case, the audit trail contains two audit records for the eight SELECT statements -- one for each session that issued a SELECT statement.
BY SESSION Example 2
The SELECT TABLE statement auditing option is set BY SESSION.

JWARD connects to the database, issues five SELECT statements against the table named departments and three SELECT statements against the table employees, and then disconnects from the database.

In this case, the audit trail contains two records -- one for each schema object against which the user issued a SELECT statement in a session.
Note: If you use the BY SESSION clause when directing audit records to the operating system audit trail, Oracle generates and stores an audit record each time an access is made. Therefore, in this auditing configuration, BY SESSION is equivalent to BY ACCESS.
BY ACCESS
Setting audit BY ACCESS inserts one audit record into the audit trail for each execution of an auditable operation within a cursor. Events that cause cursors to be reused include the following:
An application, such as Oracle Forms, holding a cursor open for reuse
Subsequent execution of a cursor using new bind variables
Statements executed within PL/SQL loops where the PL/SQL engine optimizes the statements to reuse a single cursor
Note that auditing is not affected by whether a cursor is shared. Each user creates his or her own audit trail records on first execution of the cursor. For example, assume that:

The SELECT TABLE statement auditing option is set BY ACCESS.

JWARD connects to the database, issues five SELECT statements against the table named departments, and then disconnects from the database.

SWILLIAMS connects to the database, issues three SELECT statements against the table departments, and then disconnects from the database.

In this case, the audit trail contains eight records for the eight SELECT statements.
The AUDIT statement lets you specify either BY SESSION or BY ACCESS. However, several audit options can be set only BY ACCESS, including:
All statement audit options that audit DDL statements
All privilege audit options that audit DDL statements
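A sketch of the two clauses side by side (the user name is illustrative):

```sql
-- At most one audit record per schema object, per session
AUDIT SELECT TABLE BY scott BY SESSION;

-- One audit record for every auditable execution
AUDIT DELETE TABLE BY scott BY ACCESS;
```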
Statement and privilege audit options can audit statements issued by any user or statements issued by a specific list of users. By focusing on specific users, you can minimize the number of audit records generated.
Audit By User Example
To audit statements by the users SCOTT and BLAKE that query or update a table or view, issue the following statement:
AUDIT SELECT TABLE, UPDATE TABLE BY scott, blake;
SCOPE OF AUDITING

The AUDIT facility allows you to specify the scope of the AUDIT action as follows:

BY USER
Allows you to specify a specific user to audit. (Default: ALL USERS)

WHENEVER SUCCESSFUL / WHENEVER NOT SUCCESSFUL
Allows you to specify whether you want auditing to occur at all times, or only when the specific action was successful or unsuccessful. (Default: BOTH)

BY SESSION / BY ACCESS
Allows you to specify how often AUDIT records are to be generated.
LIMITATIONS OF AUDITING

Oracle's AUDITING facility works only at the statement level. It can capture that a particular user executed a SELECT statement against a specific table, but it cannot tell you which rows were retrieved.

IMPLEMENTING AUDITING

AUDITING can be activated by following these steps:

1. Enable AUDITING at the database level with the INIT.ORA parameter audit_trail=DB
2. Enable the desired level of AUDITING through the AUDIT SQL statement

The audit_trail parameter must be enabled in init.ora for auditing to work. The valid values for this parameter are DB, OS and NONE.

DB:   Enables auditing to the (internal) SYS.AUD$ data dictionary table.
OS:   Enables auditing to the operating system audit trail. When set to OS, another parameter, audit_file_dest, has to be set in the INIT.ORA, giving the directory into which the audit trail files are written.
NONE: Disables all auditing.

AUDIT TRAIL VIEWS

DBA_AUDIT_OBJECT
DBA_AUDIT_SESSION
DBA_AUDIT_STATEMENT
DBA_AUDIT_TRAIL
DBA_OBJ_AUDIT_OPTS
DBA_PRIV_AUDIT_OPTS
DBA_STMT_AUDIT_OPTS
DBA_AUDIT_EXISTS
AUDIT_ACTIONS

NOTE: To use AUDITING, you first have to uncomment or add the parameter in your PARAMETER file ( init<SID>.ora ) and RESTART your database.
Parameter : audit_trail=TRUE ( DB )
STATEMENT LEVEL AUDITING
Now enabling audit at the STATEMENT-LEVEL for all users.
Ex:- (1)
SQL> AUDIT create session WHENEVER NOT SUCCESSFUL ;
Audit succeeded. Now checking the info. about AUDITING options set, from the data dictionary view (DBA_STMT_AUDIT_OPTS).
SQL> SELECT user_name, audit_option, success, failure FROM dba_stmt_audit_opts;
Exp-User     Audit-Opt            SUCCESS    FAILURE
------------ -------------------- ---------- ----------
             CREATE SESSION       NOT SET    BY ACCESS

Now trying to log in to the database with a wrong password, to simulate an unsuccessful login.

$ sqlplus U_SCOTT/tigre

SQL*Plus: Release 9.0.1.0.0 - Production on Thu Jan 2 15:34:11 2003
(c) Copyright 2001 Oracle Corporation. All rights reserved.

ERROR:
ORA-01017: invalid username/password; logon denied
Enter user-name:
ERROR:
ORA-01017: invalid username/password; logon denied
Enter user-name:
ERROR:
ORA-01017: invalid username/password; logon denied
SP2-0157: unable to CONNECT to ORACLE after 3 attempts, exiting SQL*Plus

This failed login attempt generates an audit trail record in the data dictionary.
Now selecting info. about failed login attempts from the view 'DBA_AUDIT_SESSION'
SQL> SELECT os_username, username, terminal, action_name,
     TO_CHAR(timestamp,'dd/mm/yyyy:hh24:mi:ss') "Time-Stamp"
     FROM dba_audit_session;
Os-User    User       Terminal   Action-Name      Time-Stamp
---------- ---------- ---------- ---------------- -------------------
rahul      U_SCOTT    pts/8      LOGON            30/04/2005:18:02:55

Now enabling STATEMENT-LEVEL AUDIT for the CREATE TABLE action by user U_SCOTT.
Ex:- (2)
SQL> audit create table by U_SCOTT;
Audit succeeded. Now checking the info. about AUDITING options set, from the data dictionary view (DBA_STMT_AUDIT_OPTS).
Now trying to create a table from user U_SCOTT.

SQL> create table test (test number);

Table created.

This attempt to create a table by user U_SCOTT generates an audit trail record in the data dictionary, whether it succeeds or not. Now selecting info. about the created table from the view 'DBA_AUDIT_TRAIL'.
SQL> SELECT os_username, username, owner, action_name, obj_name,
     TO_CHAR(timestamp,'dd/mm/yyyy:hh24:mi:ss') "Time-Stamp"
     FROM dba_audit_trail;
Os-User  User     Owner    Action-Name   O-Name  Time-Stamp
-------- -------- -------- ------------- ------- -------------------
rahul    U_SCOTT           LOGON                 02/05/2005:15:49:56
rahul    U_SCOTT  U_SCOTT  CREATE TABLE  TEST    02/05/2005:15:54:13
A) Enabling OBJECT-LEVEL AUDITING on a single user's object.
Now enabling OBJECT-LEVEL AUDITING on U_SCOTT's dept table.
Ex:- (1)
SQL> audit select on U_SCOTT.dept whenever successful;
Audit succeeded. Now checking the info. about AUDITING options set, from the data dictionary view (DBA_OBJ_AUDIT_OPTS).
SQL> select owner, object_name, object_type, sel, upd, del from dba_obj_audit_opts where owner='U_SCOTT';
Obj-Name      Obj-Type      SEL  UPD  DEL
------------- ------------- ---- ---- ----
BONUS         TABLE         -/-  -/-  -/-
DEPT          TABLE         S/-  -/-  -/-
DUMMY         TABLE         -/-  -/-  -/-
EMP           TABLE         -/-  -/-  -/-
SALGRADE      TABLE         -/-  -/-  -/-
TEST          TABLE         -/-  -/-  -/-
6 rows selected.

Now trying to select from U_SCOTT's dept table.

SQL> select * from dept;

    DEPTNO DNAME          LOC
---------- -------------- -------------
        10 ACCOUNTING     NEW YORK
        20 RESEARCH       DALLAS
        30 SALES          CHICAGO
        40 OPERATIONS     BOSTON
This select from U_SCOTT's dept table generates an audit trail record in the data dictionary. Now selecting info. about the select on U_SCOTT's dept from the view 'DBA_AUDIT_TRAIL'.
SQL> select os_username, username, owner, action_name, obj_name, to_char(timestamp,'dd/mm/yyyy:hh24:mi:ss') Time-Stamp from dba_audit_trail;
Os-User  User     Owner    Action-Name   O-Name  Time-Stamp
-------- -------- -------- ------------- ------- -------------------
rahul    U_SCOTT           LOGON                 02/05/2005:16:06:15
rahul    U_SCOTT  U_SCOTT  CREATE TABLE  TEST    02/05/2005:16:29:34
rahul    U_SCOTT  U_SCOTT  SESSION REC   DEPT    02/05/2005:16:32:45
Now enabling OBJECT-LEVEL AUDITING for ALL USERS, BY ACCESS, which writes a record to the audit trail every time the table is accessed.
Ex:- (2)
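The AUDIT statement that produced the result below is not shown in this transcript; judging from the '-/A' entry that appears against EMP's UPD column, it was of this general form (a sketch, not the exact command):

```sql
-- Object-level audit BY ACCESS: one audit record per access,
-- here for failed UPDATEs against U_SCOTT's emp table.
AUDIT UPDATE ON U_SCOTT.emp BY ACCESS WHENEVER NOT SUCCESSFUL;
```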
Audit succeeded.
Now checking the info. about AUDITING options set, from the data dictionary view (DBA_OBJ_AUDIT_OPTS).
SQL> select owner, object_name, object_type, sel, upd, del from dba_obj_audit_opts where owner='U_SCOTT';
Obj-Name      Obj-Type      SEL  UPD  DEL
------------- ------------- ---- ---- ----
BONUS         TABLE         -/-  -/-  -/-
DEPT          TABLE         S/-  -/-  -/-
DUMMY         TABLE         -/-  -/-  -/-
EMP           TABLE         -/-  -/A  -/-
SALGRADE      TABLE         -/-  -/-  -/-
TEST          TABLE         -/-  -/-  -/-
6 rows selected. Now trying to update U_SCOTT's emp table with an unsuccessful attempt.
SQL> update emp set sal = 5000 where empno = 'U_SCOTT';
       *
ERROR at line 2:
ORA-01722: invalid number

There were three unsuccessful attempts to update salary figures in U_SCOTT's emp table, which generate THREE audit trail records in the data dictionary. Now selecting info. about the unsuccessful attempts from the view 'DBA_AUDIT_TRAIL'.
SQL> SELECT os_username, username, owner, action_name, obj_name,
     TO_CHAR(timestamp,'dd/mm/yyyy:hh24:mi:ss') "Time-Stamp"
     FROM dba_audit_trail;
Os-User  User     Owner    Action-Name   O-Name  Time-Stamp
-------- -------- -------- ------------- ------- -------------------
rahul    U_SCOTT           LOGON                 02/05/2005:16:06:15
rahul    U_SCOTT  U_SCOTT  CREATE TABLE  TEST    02/05/2005:16:29:34
rahul    U_SCOTT  U_SCOTT  SESSION REC   DEPT    02/05/2005:16:32:45
rahul    U_SCOTT  U_SCOTT  UPDATE        EMP     02/05/2005:16:38:30
rahul    U_SCOTT  U_SCOTT  UPDATE        EMP     02/05/2005:16:38:33
rahul    U_SCOTT  U_SCOTT  UPDATE        EMP     02/05/2005:16:38:38
PRIVILEGE LEVEL AUDITING
Now enabling audit at the PRIVILEGE-LEVEL for user U_SCOTT whenever successful.
Ex:- (1)

SQL> audit create view by U_SCOTT whenever successful;

Audit succeeded.

Now checking the info. about AUDITING options set, from the data dictionary view (DBA_PRIV_AUDIT_OPTS).
SQL> select user_name, privilege, success, failure from dba_priv_audit_opts;
Exp-User     PRIVILEGE        SUCCESS    FAILURE
------------ ---------------- ---------- ----------
             CREATE SESSION   NOT SET    BY ACCESS
U_SCOTT      CREATE VIEW      BY ACCESS  NOT SET

Now trying to create a view as U_SCOTT.
SQL> create view v_test as select * from test;
View created.
This successful attempt to create a view generates an audit trail record in the data dictionary. Now selecting info. about the views created by U_SCOTT from the view 'DBA_AUDIT_TRAIL'.
SQL> SELECT os_username, username, owner, action_name, obj_name,
     TO_CHAR(timestamp,'dd/mm/yyyy:hh24:mi:ss') "Time-Stamp"
     FROM dba_audit_trail;
Os-User  User     Owner    Action-Name   O-Name  Time-Stamp
-------- -------- -------- ------------- ------- -------------------
rahul    U_SCOTT           LOGON                 02/05/2005:16:06:15
rahul    U_SCOTT  U_SCOTT  CREATE TABLE  TEST    02/05/2005:16:29:34
rahul    U_SCOTT  U_SCOTT  SESSION REC   DEPT    02/05/2005:16:32:45
rahul    U_SCOTT  U_SCOTT  UPDATE        EMP     02/05/2005:16:38:30
rahul    U_SCOTT  U_SCOTT  UPDATE        EMP     02/05/2005:16:38:33
rahul    U_SCOTT  U_SCOTT  UPDATE        EMP     02/05/2005:16:38:38
rahul    U_SCOTT  U_SCOTT  CREATE VIEW   V_TEST  02/05/2005:16:42:01

7 rows selected.
DISABLING AUDITING
Now selecting all the AUDITING options that are ENABLED.
SQL> SELECT user_name, audit_option, success, failure FROM dba_stmt_audit_opts;
Exp-User     Audit-Opt        SUCCESS    FAILURE
------------ ---------------- ---------- ----------
             CREATE SESSION   NOT SET    BY ACCESS
U_SCOTT      TABLE            BY ACCESS  BY ACCESS
U_SCOTT      CREATE VIEW      BY ACCESS  NOT SET
Now disabling one of the statement-level audit options.

SQL> noaudit table by U_SCOTT;

Noaudit succeeded.

Now checking whether the option is deleted.
SQL> SELECT user_name, audit_option, success, failure FROM dba_stmt_audit_opts;
Statement-level auditing disabled. Likewise, disable the PRIVILEGE-LEVEL audit options for all entries.
Now checking whether the (STATEMENT AUDIT) options are deleted or not.
SQL> SELECT user_name, audit_option, success, failure FROM dba_stmt_audit_opts;
no rows selected

Now checking whether the (PRIVILEGE AUDIT) options are deleted or not.
SQL> SELECT user_name, privilege, success, failure FROM dba_priv_audit_opts;
no rows selected

You can disable AUDITING completely or for an individual STATEMENT. If you want to disable AUDITING completely, just comment out (or remove) the parameter in the ( init<sid>.ora ) file.
parameter : # audit_trail = true
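Disabling an individual option mirrors the AUDIT syntax: NOAUDIT takes the same statement, privilege, or object clause. A sketch, following the demo's user and object names:

```sql
-- Statement level: stop auditing TABLE statements by one user.
NOAUDIT TABLE BY U_SCOTT;

-- Privilege level: stop auditing the CREATE VIEW privilege.
NOAUDIT CREATE VIEW BY U_SCOTT;

-- Object level: stop auditing SELECTs on a specific table.
NOAUDIT SELECT ON U_SCOTT.dept;
```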
$ cat listener.ora | less

lis_rahul =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS_LIST =
        (ADDRESS = (PROTOCOL = TCP)(HOST = dba15)(PORT = 10653))
      )
    )
  )

SID_LIST_lis_rahul =
  (SID_LIST =
    (SID_DESC =
      (SID_NAME = rahuldb)
      (ORACLE_HOME = /oraeng/app/oracle/product/9.2.0)
    )
  )

#########################################################################

After editing the LISTENER.ORA file, you have to APPEND it to $ORACLE_HOME/network/admin/listener.ora ( the default destination ) as follows:

$ cat listener.ora >> $ORACLE_HOME/network/admin/listener.ora

or you can change the default path by setting an environment variable, like:

$ export TNS_ADMIN=$HOME

After configuring the LISTENER file, start the listener process.

$ lsnrctl start lis_rahul
Listening on: (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=dba15)(PORT=10653)))
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=dba15)(PORT=10653)))
STATUS of the LISTENER
------------------------
Alias                     lis_rahul
Version                   TNSLSNR for Linux: Version 9.2.0.1.0 - Production
Start Date                02-MAY-2005 17:06:48
Uptime                    0 days 0 hr. 0 min. 0 sec
Trace Level               off
Security                  OFF
SNMP                      OFF
Listener Parameter File   /tmp/rahul/O_NET/listener.ora
Listener Log File         /oraeng/app/oracle/product/9.2.0/network/log/lis_rahul.log
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=dba15)(PORT=10653)))
Services Summary...
Service "rahuldb" has 1 instance(s).
  Instance "rahuldb", status UNKNOWN, has 1 handler(s) for this service...
The command completed successfully

Later, find out whether the listener is started or not.

$ ps x    (to find out the process status) [Option]

Option may be any one of the following:
Switch        Description
-O            is preloaded "-o"
-c            different scheduler info for -l option
-f            does full listing
-j            jobs format
-l            long format
-o            user-defined format
-y            do not show flags; show rss in place of addr
O             is preloaded "o" (overloaded)
X             old Linux i386 register format
j             job control format
l             display long format
o             specify user-defined format
s             display signal format
u             display user-oriented format
v             display virtual memory format
--format      user-defined format

OUTPUT MODIFIERS
Switch        Description
-H            show process hierarchy (forest)
-m            show all threads
-n            set namelist file
-w            wide output
C             use raw CPU time for %CPU instead of decaying average
N             specify namelist file
O             sorting order (overloaded)
S             include some dead child process data (as a sum with the parent)
c             true command name
e             show environment after the command
f             ASCII-art process hierarchy (forest)
h             do not print header lines (repeat header lines in BSD personality)
m             all threads
n             numeric output for WCHAN and USER
w             wide output
--cols        set screen width
--columns     set screen width
--cumulative  include some dead child process data (as a sum with the parent)
--forest      ASCII art process tree
--html        HTML escaped output
--headers     repeat header lines
--no-headers  print no header line at all
--lines       set screen height
--nul         unjustified output with NULs
--null        unjustified output with NULs
--rows        set screen height
--sort        specify sorting order
--width       set screen width
--zero        unjustified output with NULs
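As a quick illustration of the user-defined format switch (-o) listed above, the following asks ps for just the PID and command name of the current shell; the trailing '=' after each field name suppresses the header line:

```shell
# Print only the PID and command name of the current shell,
# using -o (user-defined format) with blank headers.
ps -o pid= -o comm= -p $$
```

The same switch is what the demo relies on implicitly: `ps x` shows the default columns, while `-o` lets you pick exactly the columns you need.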
  PID TTY    STAT TIME COMMAND
22570 pts/8  S    0:00 -bash
23234 pts/8  S    0:00 -bash
23235 pts/8  S    0:00 sh /.WILSHIRE/USERDEMO/user_demo
15205 pts/8  S    0:00 sh /.WILSHIRE/USERDEMO/FILES/onet_demo
15277 ?      S    0:00 ora_pmon_rahuldb
15279 ?      S    0:00 ora_dbw0_rahuldb
15281 ?      S    0:00 ora_lgwr_rahuldb
15283 ?      S    0:00 ora_ckpt_rahuldb
15285 ?      S    0:00 ora_smon_rahuldb
15287 ?      S    0:00 ora_reco_rahuldb
24751 pts/8  S    0:00 /oraeng/app/oracle/product/9.2.0/bin/tnslsnr lis_rahul
25488 pts/8  R    0:00 ps x

Once the listener process is started, you have to start up your database.

$ sqlplus "/as sysdba"

SQL*Plus: Release 9.0.1.0.0 - Production on Thu Jan 2 15:47:28 2003
(c) Copyright 2001 Oracle Corporation. All rights reserved.

Connected to:
Oracle9i Enterprise Edition Release 9.0.1.0.0 - Production
With the Partitioning option
JServer Release 9.0.1.0.0 - Production

SQL> startup

Disconnected from Oracle9i Enterprise Edition Release 9.0.1.0.0 - Production
With the Partitioning option
JServer Release 9.0.1.0.0 - Production

At the CLIENT we have to configure the TNSNAMES.ORA file, which is also copied into the client's HOME directory.

$ cat tnsnames.ora

tns_rahul =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = dba15)(PORT = 10653))
    )
    (CONNECT_DATA =
      (SID = rahuldb)
    )
  )

#########################################################################

After configuring, append this TNSNAMES file to the default destination:

$ cat tnsnames.ora >> $ORACLE_HOME/network/admin/tnsnames.ora

or set the environment variable to change the default location, like:

$ export TNS_ADMIN=$HOME

At the SERVER we started the LSNRCTL service; from the CLIENT we check whether the network connection is correctly established.
$ tnsping tns_rahul
TNS Ping Utility for Linux: Version 9.0.1.0.0 - Production on 02-JAN-2003 15:48:25
Copyright (c) 1997 Oracle Corporation. All rights reserved.
Used parameter files:
/oraeng/app/oracle/product/9.0.1/network/admin/sqlnet.ora
/users/my/O_NET/tnsnames.ora

Used TNSNAMES adapter to resolve the alias
Attempting to contact (DESCRIPTION = (ADDRESS_LIST = (ADDRESS = (PROTOCOL = TCP)(HOST = dba15)(PORT = 10653))) (CONNECT_DATA = (SID = rahuldb)))
OK (10 msec)

From the CLIENT, try to connect to the SERVER database using the 'tns_rahul' alias.

$ sqlplus system/manager@tns_rahul

We select the database name to which we have connected.

SQL> select * from v$database;

(The output wraps across many columns; the clearly recoverable values are:)

DBID       NAME     CREATED    LOG_MODE      OPEN_MODE  DATABASE_ROLE
---------- -------- ---------- ------------- ---------- -------------
1477391597 RAHULDB  19-APR-05  NOARCHIVELOG  READ WRITE PRIMARY

Now we try to see whether the background processes have started.

$ ps x
3617 pts/11 S 0:00 -bash
3676 pts/11 S 0:00 -bash
3677 pts/11 S 0:00 sh /.exam/TEST/t_userid
5573 pts/11 S 0:00 sh /.exam/TEST/FILES/onet_demo
5813 pts/11 S 0:00 /oraeng/app/oracle/product/9.0.1/bin/tnslsnr lis_my
5889 ?      S 0:00 ora_pmon_my
5891 ?      S 0:00 ora_dbw0_my
5893 ?      S 0:00 ora_lgwr_my
5895 ?      S 0:00 ora_ckpt_my
5897 ?      S 0:00 ora_smon_my
5899 ?      S 0:00 ora_reco_my
5901 ?      S 0:00 ora_cjq0_my
5916 ?      S 0:00 ora_j000_my
5921 ?      S 0:00 ora_j001_my
5925 ?      S 0:00 ora_j002_my
5987 pts/11 S 0:00 sqlplus
5989 pts/11 S 0:00 oraclemy (LOCAL=NO)
6010 pts/11 R 0:00 ps x

[ See here: one server process is connected as LOCAL=NO. ]
14) FILESIZE :- maximum size of each dump file
15) FLASHBACK_SCN :- SCN used to set session snapshot back to
16) FLASHBACK_TIME :- time used to get the SCN closest to the specified time
17) QUERY :- select clause used to export a subset of a table
18) RESUMABLE :- suspend when a space related error is encountered
19) RESUMABLE_NAME :- text string used to identify resumable statement
20) RESUMABLE_TIMEOUT :- wait time for RESUMABLE
21) TTS_FULL_CHECK :- perform full or partial dependency check for TTS
22) VOLSIZE :- number of bytes to write to each tape volume
25) TEMPLATE :- template name which invokes iAS mode export

Export terminated successfully without warnings.
Now taking BACKUP at Table-Level for the U_SCOTT schema.

$ exp U_SCOTT/U_SCOTT file=U_SCOTT_emp.dmp log=U_SCOTT_emp.log tables=emp

(c) Copyright 2001 Oracle Corporation. All rights reserved.
Connected to: Oracle9i Enterprise Edition Release 9.0.1.0.0 - Production With the Partitioning option JServer Release 9.0.1.0.0 - Production Export done in US7ASCII character set and AL16UTF16 NCHAR character set
About to export specified tables via Conventional Path ...
. . exporting table                            EMP         14 rows exported
Export terminated successfully without warnings.

Now dropping table EMP from U_SCOTT and checking whether the table exists or not.

SQL> DROP TABLE emp;

Table dropped.

SQL> SELECT * FROM tab;

TNAME                          TABTYPE  CLUSTERID
------------------------------ -------- ----------
BONUS                          TABLE
DEPT                           TABLE
DUMMY                          TABLE
SALGRADE                       TABLE
Now restoring table EMP from the BACKUP.

$ imp U_SCOTT/U_SCOTT file=U_SCOTT_emp.dmp log=U_SCOTT_emp_i.log tables=emp

Import: Release 9.0.1.0.0 - Production on Thu Jan 2 15:52:55 2003
(c) Copyright 2001 Oracle Corporation. All rights reserved.
Connected to: Oracle9i Enterprise Edition Release 9.0.1.0.0 - Production With the Partitioning option JServer Release 9.0.1.0.0 - Production
Export file created by EXPORT:V09.00.01 via conventional path
import done in US7ASCII character set and AL16UTF16 NCHAR character set
. importing U_SCOTT's objects into U_SCOTT
. . importing table                          "EMP"         14 rows imported
Import terminated successfully without warnings.

EMPNO ENAME  JOB       MGR  HIREDATE   SAL  COMM DEPTNO
----- ------ --------- ---- --------- ---- ----- ------
 7369 SMITH  CLERK     7902 17-DEC-80  800           20
 7499 ALLEN  SALESMAN  7698 20-FEB-81 1600   300    30
 7521 WARD   SALESMAN  7698 22-FEB-81 1250   500    30
 7566 JONES  MANAGER   7839 02-APR-81 2975           20
 7654 MARTIN SALESMAN  7698 28-SEP-81 1250  1400    30
 7698 BLAKE  MANAGER   7839 01-MAY-81 2850           30
 7782 CLARK  MANAGER   7839 09-JUN-81 2450           10
 7788 SCOTT  ANALYST   7566 09-DEC-82 3000           20
 7839 KING   PRESIDENT      17-NOV-81 5000           10
 7844 TURNER SALESMAN  7698 08-SEP-81 1500     0    30
 7876 ADAMS  CLERK     7788 12-JAN-83 1100           20
 7900 JAMES  CLERK     7698 03-DEC-81  950           30
 7902 FORD   ANALYST   7566 03-DEC-81 3000           20
 7934 MILLER CLERK     7782 23-JAN-82 1300           10
SCHEMA LEVEL EXPORT
Now exporting at Owner (Schema) Level.

$ exp U_SCOTT/U_SCOTT file=U_SCOTT.dmp log=U_SCOTT.log buffer=100000

. . exporting table                          BONUS          0 rows exported
. . exporting table                           DEPT          4 rows exported
. . exporting table                          DUMMY          1 rows exported
. . exporting table                            EMP         14 rows exported
. . exporting table                       SALGRADE          5 rows exported
. exporting synonyms
. exporting views
. exporting stored procedures
. exporting operators
. exporting referential integrity constraints
. exporting triggers
. exporting indextypes
. exporting bitmap, functional and extensible indexes
. exporting posttables actions
. exporting materialized views
. exporting snapshot logs
. exporting job queues
. exporting refresh groups and children
. exporting dimensions
. exporting post-schema procedural objects and actions
. exporting statistics
Export terminated successfully without warnings.
NOTE: We can also perform the same operation as a DBA using the SYSTEM user.

$ exp system/manager file=U_SCOTT.dmp log=U_SCOTT.log buffer=100000 owner=U_SCOTT

Check whether the user U_STEEVE has any objects ( from user SYSTEM ).

SQL> SELECT segment_name FROM dba_segments WHERE owner='U_STEEVE';

no rows selected

Now importing the contents of U_SCOTT into user U_STEEVE.

$ imp U_STEEVE/U_STEEVE file=U_SCOTT.dmp log=U_SCOTT_i.log full=y

Import: Release 9.0.1.0.0 - Production on Thu Jan 2 15:55:19 2003
(c) Copyright 2001 Oracle Corporation. All rights reserved.
Connected to: Oracle9i Enterprise Edition Release 9.0.1.0.0 - Production With the Partitioning option JServer Release 9.0.1.0.0 - Production Export file created by EXPORT:V09.00.01 via conventional path
Warning: the objects were exported by U_SCOTT, not by you
import done in US7ASCII character set and AL16UTF16 NCHAR character set
. importing U_SCOTT's objects into U_STEEVE
. . importing table                        "BONUS"          0 rows imported
. . importing table                         "DEPT"          4 rows imported
. . importing table                        "DUMMY"          1 rows imported
. . importing table                          "EMP"         14 rows imported
. . importing table                     "SALGRADE"          5 rows imported
Import terminated successfully without warnings.

We try to select the tables' info from user U_STEEVE.

SQL> SELECT * FROM tab;

TNAME                          TABTYPE  CLUSTERID
------------------------------ -------- ----------
BONUS                          TABLE
DEPT                           TABLE
DUMMY                          TABLE
EMP                            TABLE
SALGRADE                       TABLE
SQL> BEGIN
       FOR i IN 40..10000 LOOP
         INSERT INTO dept VALUES (i, 'WILSHIRE', 'ASHOKNAGAR');
         COMMIT;
       END LOOP;
     END;
     /

Table altered.
1024 rows created.
Commit complete.
2048 rows created.
Commit complete.
4096 rows created.
Commit complete.

  COUNT(*)
----------
      8192

Now checking the number of extents for table DEPT.

SQL> SELECT segment_name, extents FROM user_segments WHERE segment_name='DEPT';
Seg-Name   EXTENTS
---------- ----------
DEPT       7

Now exporting the full DB to multiple files (by using the FILESIZE & COMPRESS options).

$ exp system/manager file = exp1.dmp, exp2.dmp, exp3.dmp, \
  exp4.dmp, exp5.dmp, exp6.dmp log = exp.log \
  full = y filesize = 4m compress = n

. . exporting table                       SALGRADE          5 rows exported
. about to export U_JUNK's tables via Conventional Path ...
. exporting synonyms
. exporting views
. exporting referential integrity constraints
. exporting stored procedures
. exporting operators
. exporting indextypes
. exporting bitmap, functional and extensible indexes
. exporting posttables actions
. exporting triggers
. exporting materialized views
. exporting snapshot logs
. exporting job queues
. exporting refresh groups and children
. exporting dimensions
. exporting post-schema procedural objects and actions
. exporting user history table
. exporting default and system auditing options
. exporting statistics
We try to see all the DUMP files and their SIZES at the O/S level.

$ ls -ltr /users/my/EXP_DIR/ | grep dmp | less
-rw-rw-rw-   1 rahul  dba    16384 May  3 16:26 U_SCOTT_emp.dmp
-rw-rw-rw-   1 rahul  dba    16384 May  3 16:30 U_SCOTT.dmp
-rw-rw-rw-   1 rahul  dba   385024 May  3 16:36 exp1.dmp
Now importing only one table, DEPT, out of all the multiple DUMP files into another user, 'U_JUNK'.

$ imp system/manager file=exp1.dmp, exp2.dmp, \
  exp3.dmp, exp4.dmp, exp5.dmp, \
  exp6.dmp log=dept_only.log fromuser=U_SCOTT touser=U_JUNK \
  tables=dept

(c) Copyright 2001 Oracle Corporation. All rights reserved.
Connected to: Oracle9i Enterprise Edition Release 9.0.1.0.0 - Production
With the Partitioning option
JServer Release 9.0.1.0.0 - Production

Export file created by EXPORT:V09.00.01 via conventional path
import done in US7ASCII character set and AL16UTF16 NCHAR character set
IMP-00046: using FILESIZE value from export file of 4194304
. importing U_SCOTT's objects into U_JUNK
. . importing table                         "DEPT"       8192 rows imported
Import terminated successfully with warnings.

Now checking the number of extents of the IMPORTED table with the COMPRESS=N option.

Seg-Name   EXTENTS
---------- ----------
DEPT       7

Now exporting partial data from a table using the QUERY option.

$ exp U_SCOTT/U_SCOTT file=expquery.dmp log=expquery.log tables=emp query=\'where deptno=10\'

Export: Release 9.0.1.0.0 - Production on Thu Jan 2 15:58:57 2003
(c) Copyright 2001 Oracle Corporation. All rights reserved.
Connected to: Oracle9i Enterprise Edition Release 9.0.1.0.0 - Production With the Partitioning option JServer Release 9.0.1.0.0 - Production Export done in US7ASCII character set and AL16UTF16 NCHAR character set About to export specified tables via Conventional Path ...
. . exporting table                            EMP          3 rows exported
Export terminated successfully without warnings.

NOTE: IMPORT for the above QUERY-option dump can be done with any of the IMPORT options.

Now exporting using a parameter file with the PARFILE option.

$ exp system/manager parfile=par.file

Contents of the par.file :
file=expfull.dmp
log=expfull.log
buffer=2000000
owner=u_scott,u_steeve,u_junk
feedback=100

. exporting triggers
. exporting indextypes
. exporting bitmap, functional and extensible indexes
. exporting posttables actions
. exporting materialized views
. exporting snapshot logs
. exporting job queues
. exporting refresh groups and children
. exporting dimensions
. exporting referential integrity constraints
. exporting triggers
. exporting indextypes
. exporting bitmap, functional and extensible indexes
. exporting posttables actions
. exporting materialized views
. exporting snapshot logs
. exporting job queues
. exporting refresh groups and children
. exporting dimensions
. exporting post-schema procedural objects and actions
. exporting statistics
Export terminated successfully without warnings.

Now taking a FULL database export (with the CONSISTENT option, so that Oracle guarantees us a time-image, i.e. read-consistency).

$ exp system/manager file=full.dmp log=full.log full=y consistent=y

. about to export U_JUNK's tables via Conventional Path ...
. . exporting table                           DEPT       8192 rows exported
. exporting synonyms
. exporting views
. exporting referential integrity constraints
. exporting stored procedures
. exporting operators
. exporting indextypes
. exporting bitmap, functional and extensible indexes
. exporting posttables actions
. exporting triggers
. exporting materialized views
. exporting snapshot logs
. exporting job queues
. exporting refresh groups and children
. exporting dimensions
. exporting post-schema procedural objects and actions
. exporting user history table
. exporting default and system auditing options
. exporting statistics
Export terminated successfully without warnings.
DEMO ON EXPORT & IMPORT ( day 2 ) ( with INCTYPE option )

There are 3 types of Incremental Exports:
1. Complete
2. Cumulative
3. Incremental

Here we are setting a standard for the file-naming convention:
X --> Complete
I --> Incremental
C --> Cumulative
and at the same time we are going to give some numbering to the set of files that we are going to export to.

We can observe the details about the backups from the database. Oracle provides 3 views to get backup info, which are:
1. DBA_EXP_VERSION
2. DBA_EXP_FILES
3. DBA_EXP_OBJECTS

The DBA_EXP_VERSION view shows the latest version of the Export. Whenever we take a COMPLETE export, the version resets back to 1, and each later INCREMENTAL or CUMULATIVE export increments it by 1.

Now we try to select the Export Version.

SQL> SELECT * FROM dba_exp_version;

EXP_VERSION
-----------
          0

Selecting the Exported File Info.

SQL> SELECT * FROM dba_exp_files ORDER BY 1;

no rows selected
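The X/I/C naming convention above suggests a weekly rotation; the commands below are the ones this demo runs, collected into one sketch of such a schedule (the day-of-week assignment is illustrative):

```
# Sunday: complete export (resets the export version to 1)
exp system/manager file=X1_sun.dmp log=X1_sun_exp.log inctype=complete

# Monday/Tuesday: incremental exports (tables changed since any export)
exp system/manager file=I2_mon.dmp log=I2_mon_exp.log inctype=incremental
exp system/manager file=I3_tue.dmp log=I3_tue_exp.log inctype=incremental

# Wednesday: cumulative export (tables changed since the last
# complete or cumulative export)
exp system/manager file=C4_wed.dmp log=C4_wed_exp.log inctype=cumulative
```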
Selecting MORE details about INCTYPE exports.

SQL> SELECT owner, object_name, object_type, cumulative, incremental,
     a.export_version, b.exp_type
     FROM dba_exp_objects a, dba_exp_files b
     WHERE a.export_version = b.exp_version
     AND owner='U_SCOTT'
     ORDER BY 6;

no rows selected

Now creating the U_SCOTT demo tables.

SQL> conn U_SCOTT/U_SCOTT
SQL> !demobld U_SCOTT U_SCOTT

Done creating the SCOTT demo tables. Now performing a Complete Export ( inctype = COMPLETE ).

$ exp system/manager file=X1_sun.dmp log=X1_sun_exp.log inctype=complete

. . exporting table                       SALGRADE          5 rows exported
. about to export U_STEEVE's tables via Conventional Path ...
. about to export U_JUNK's tables via Conventional Path ...
. exporting synonyms
. exporting views
. exporting referential integrity constraints
. exporting stored procedures
. exporting operators
. exporting indextypes
. exporting bitmap, functional and extensible indexes
. exporting posttables actions
. exporting triggers
. exporting materialized views
. exporting snapshot logs
. exporting job queues
. exporting refresh groups and children
. exporting dimensions
. exporting post-schema procedural objects and actions
. exporting user history table
. exporting default and system auditing options
. exporting statistics
Export terminated successfully without warnings.

Now observing the BACKUP DETAILS in the database.

SQL> SELECT * FROM dba_exp_version;

EXP_VERSION
-----------
          1

Selecting info about the exported files.

SQL> SELECT * FROM dba_exp_files ORDER BY 1;

EX_VER Exp-Typ     F-Name                         U-Name  Exp-Dt
------ ----------- ------------------------------ ------- ---------
     1 COMPLETE    /tmp/rahul/EXP_DIR/X1_sun.dmp  SYSTEM  03-MAY-05
Selecting detailed info about INCTYPE exports.

SQL> SELECT owner, object_name, object_type, cumulative, incremental,
     a.export_version, b.exp_type
     FROM dba_exp_objects a, dba_exp_files b
     WHERE a.export_version = b.exp_version
     AND owner='U_SCOTT'
     ORDER BY 6;

Owner      Obj-Name   Obj-Type   Cum-time   Inc-time   Exp-V  Exp-Typ
---------- ---------- ---------- ---------- ---------- ------ ---------
U_SCOTT    BONUS      TABLE      03-MAY-05  03-MAY-05       1 COMPLETE
U_SCOTT    DEPT       TABLE      03-MAY-05  03-MAY-05       1 COMPLETE
U_SCOTT    DUMMY      TABLE      03-MAY-05  03-MAY-05       1 COMPLETE
U_SCOTT    EMP        TABLE      03-MAY-05  03-MAY-05       1 COMPLETE
U_SCOTT    SALGRADE   TABLE      03-MAY-05  03-MAY-05       1 COMPLETE
Now we insert a few rows into a table and take an INCTYPE backup ( inctype = INCREMENTAL ).

Connected to: Oracle9i Enterprise Edition Release 9.0.1.0.0 - Production
With the Partitioning option
JServer Release 9.0.1.0.0 - Production

SQL>
9 rows created.

Commit complete.
TABLE_NAME                     B
------------------------------ -
BONUS                          Y
DEPT                           Y
DUMMY                          Y
EMP                            N
SALGRADE                       Y
SQL> Disconnected from Oracle9i Enterprise Edition Release 9.0.1.0.0 - Production
With the Partitioning option
JServer Release 9.0.1.0.0 - Production

Now performing an Incremental Export ( inctype = INCREMENTAL ).

$ exp system/manager file=I2_mon.dmp log=I2_mon_exp.log inctype=incremental

. about to export U_JUNK's tables via Conventional Path ...
. exporting synonyms
. exporting views
. exporting referential integrity constraints
. exporting stored procedures
. exporting operators
. exporting indextypes
. exporting bitmap, functional and extensible indexes
. exporting posttables actions
. exporting triggers
. exporting materialized views
. exporting snapshot logs
. exporting job queues
. exporting refresh groups and children
. exporting dimensions
. exporting post-schema procedural objects and actions
. exporting user history table
. exporting default and system auditing options
. exporting information about dropped objects
. exporting statistics
Export terminated successfully without warnings.

Now observing the BACKUP DETAILS in the database.

SQL> SELECT * FROM dba_exp_version;

EXP_VERSION
-----------
          2

Selecting info about the exported files.

SQL> SELECT * FROM dba_exp_files ORDER BY 1;

EX_VER Exp-Typ     F-Name                         U-Name  Exp-Dt
------ ----------- ------------------------------ ------- ---------
     1 COMPLETE    /tmp/rahul/EXP_DIR/X1_sun.dmp  SYSTEM  03-MAY-05
     2 INCREMENTAL /tmp/rahul/EXP_DIR/I2_mon.dmp  SYSTEM  03-MAY-05

Selecting detailed info about INCTYPE exports.

SQL> SELECT owner, object_name, object_type, cumulative, incremental,
     a.export_version, b.exp_type
     FROM dba_exp_objects a, dba_exp_files b
     WHERE a.export_version = b.exp_version
     AND owner='U_SCOTT'
     ORDER BY 6;
Owner      Obj-Name   Obj-Type   Cum-time   Inc-time   Exp-V  Exp-Typ
---------- ---------- ---------- ---------- ---------- ------ -----------
U_SCOTT    BONUS      TABLE      03-MAY-05  03-MAY-05       1 COMPLETE
U_SCOTT    DEPT       TABLE      03-MAY-05  03-MAY-05       1 COMPLETE
U_SCOTT    DUMMY      TABLE      03-MAY-05  03-MAY-05       1 COMPLETE
U_SCOTT    SALGRADE   TABLE      03-MAY-05  03-MAY-05       1 COMPLETE
U_SCOTT    EMP        TABLE      03-MAY-05  03-MAY-05       2 INCREMENTAL
Now inserting a few rows into a table and taking another INCTYPE backup ( inctype = INCREMENTAL ).

Connected to: Oracle9i Enterprise Edition Release 9.0.1.0.0 - Production
With the Partitioning option
JServer Release 9.0.1.0.0 - Production

SQL>
1 row created.

Commit complete.

SQL>
TABLE_NAME                     B
------------------------------ -
BONUS                          Y
DEPT                           N
DUMMY                          Y
EMP                            Y
SALGRADE                       Y
With the Partitioning option
JServer Release 9.0.1.0.0 - Production

Now performing an Incremental Export ( inctype = INCREMENTAL ).

$ exp system/manager file=I3_tue.dmp log=I3_tue_exp.log inctype=incremental

. about to export U_STEEVE's tables via Conventional Path ...
. about to export U_JUNK's tables via Conventional Path ...
. exporting synonyms
. exporting views
. exporting referential integrity constraints
. exporting stored procedures
. exporting operators
. exporting indextypes
. exporting bitmap, functional and extensible indexes
. exporting posttables actions
. exporting triggers
. exporting materialized views
. exporting snapshot logs
. exporting job queues
. exporting refresh groups and children
. exporting dimensions
. exporting post-schema procedural objects and actions
. exporting user history table
. exporting default and system auditing options
. exporting information about dropped objects
. exporting statistics
Export terminated successfully without warnings.
Now observing the BACKUP DETAILS in the database.

SQL> SELECT * FROM dba_exp_version;

EXP_VERSION
-----------
          3

Selecting info about exported files.

SQL> SELECT * FROM dba_exp_files ORDER BY 1;

EX_VER  Exp-Typ      F-Name                          U-Name  Exp-Dt
------  -----------  ------------------------------  ------  ---------
     1  COMPLETE     /tmp/rahul/EXP_DIR/X1_sun.dmp   SYSTEM  03-MAY-05
     2  INCREMENTAL  /tmp/rahul/EXP_DIR/I2_mon.dmp   SYSTEM  03-MAY-05
     3  INCREMENTAL  /tmp/rahul/EXP_DIR/I3_tue.dmp   SYSTEM  03-MAY-05

Selecting detailed info about INCTYPE exports.

SQL> SELECT owner, object_name, object_type, cumulative, incremental,
            a.export_version, b.exp_type
     FROM   dba_exp_objects a, dba_exp_files b
     WHERE  a.export_version = b.exp_version
     AND    owner = 'U_SCOTT'
     ORDER BY 6;

Owner    Obj-Name  Obj-Type  Cum-time   Exp-V  Exp-Typ
-------  --------  --------  ---------  -----  --------
U_SCOTT  BONUS     TABLE     03-MAY-05      1  COMPLETE
U_SCOTT  DUMMY     TABLE     03-MAY-05      1  COMPLETE
Now inserting a few rows into a table so that it becomes eligible for backup
( inctype = CUMULATIVE ).

Connected to:
Oracle9i Enterprise Edition Release 9.0.1.0.0 - Production
With the Partitioning option
JServer Release 9.0.1.0.0 - Production

SQL>
2 rows created.

Commit complete.

SQL>
TABLE_NAME                      B
------------------------------  -
BONUS                           Y
DEPT                            Y
DUMMY                           Y
EMP                             Y
SALGRADE                        N
SQL> Disconnected from Oracle9i Enterprise Edition Release 9.0.1.0.0 - Production
With the Partitioning option
JServer Release 9.0.1.0.0 - Production

Now performing a cumulative export ( inctype = CUMULATIVE ).

$ exp system/manager file=C4_wed.dmp log=C4_wed_exp.log inctype=cumulative
. about to export U_STEEVE's tables via Conventional Path ...
. about to export U_JUNK's tables via Conventional Path ...
. exporting synonyms
. exporting views
. exporting referential integrity constraints
. exporting stored procedures
. exporting operators
. exporting indextypes
. exporting bitmap, functional and extensible indexes
. exporting posttables actions
. exporting triggers
. exporting materialized views
. exporting snapshot logs
. exporting job queues
. exporting refresh groups and children
. exporting dimensions
. exporting post-schema procedural objects and actions
. exporting user history table
. exporting default and system auditing options
. exporting information about dropped objects
. exporting statistics
Export terminated successfully without warnings.

Now observing the BACKUP DETAILS in the database.

SQL> SELECT * FROM dba_exp_version;

EXP_VERSION
-----------
          4
Selecting info about exported files.

SQL> SELECT * FROM dba_exp_files ORDER BY 4;

EX_VER  Exp-Typ      F-Name                          U-Name  Exp-Dt
------  -----------  ------------------------------  ------  ---------
     1  COMPLETE     /tmp/rahul/EXP_DIR/X1_sun.dmp   SYSTEM  03-MAY-05
     2  INCREMENTAL  /tmp/rahul/EXP_DIR/I2_mon.dmp   SYSTEM  03-MAY-05
     3  INCREMENTAL  /tmp/rahul/EXP_DIR/I3_tue.dmp   SYSTEM  03-MAY-05
     4  CUMULATIVE   /tmp/rahul/EXP_DIR/C4_wed.dmp   SYSTEM  03-MAY-05

Selecting detailed info about INCTYPE exports.

SQL> SELECT owner, object_name, object_type, cumulative, incremental,
            a.export_version, b.exp_type
     FROM   dba_exp_objects a, dba_exp_files b
     WHERE  a.export_version = b.exp_version
     AND    owner = 'U_SCOTT'
     ORDER BY 6;

Owner    Obj-Name  Obj-Type  Cum-time   Inc-time   Exp-V  Exp-Typ
-------  --------  --------  ---------  ---------  -----  ----------
U_SCOTT  BONUS     TABLE     03-MAY-05  03-MAY-05      1  COMPLETE
U_SCOTT  DUMMY     TABLE     03-MAY-05  03-MAY-05      1  COMPLETE
U_SCOTT  DEPT      TABLE     03-MAY-05  03-MAY-05      4  CUMULATIVE
U_SCOTT  EMP       TABLE     03-MAY-05  03-MAY-05      4  CUMULATIVE
U_SCOTT  SALGRADE  TABLE     03-MAY-05  03-MAY-05      4  CUMULATIVE
Now inserting a few rows into a table so that it becomes eligible for backup
( inctype = INCREMENTAL ).

Connected to:
Oracle9i Enterprise Edition Release 9.0.1.0.0 - Production
With the Partitioning option
JServer Release 9.0.1.0.0 - Production

SQL>
23 rows created.

Commit complete.

SQL>
TABLE_NAME                      B
------------------------------  -
BONUS                           Y
DEPT                            Y
DUMMY                           Y
EMP                             N
SALGRADE                        Y
SQL> Disconnected from Oracle9i Enterprise Edition Release 9.0.1.0.0 - Production
With the Partitioning option
JServer Release 9.0.1.0.0 - Production

Now performing an incremental export ( inctype = INCREMENTAL ).

$ exp system/manager file=I5_thu.dmp log=I5_thu_exp.log inctype=incremental

. about to export U_STEEVE's tables via Conventional Path ...
. about to export U_JUNK's tables via Conventional Path ...
. exporting synonyms
. exporting views
. exporting referential integrity constraints
. exporting stored procedures
. exporting operators
. exporting indextypes
. exporting bitmap, functional and extensible indexes
. exporting posttables actions
. exporting triggers
. exporting materialized views
. exporting snapshot logs
. exporting job queues
. exporting refresh groups and children
. exporting dimensions
. exporting post-schema procedural objects and actions
. exporting user history table
. exporting default and system auditing options
. exporting information about dropped objects
. exporting statistics
Export terminated successfully without warnings.

Now observing the BACKUP DETAILS in the database.

incr_exp_info.sql: incr_exp_info.sql: No such file or directory

Now simulating a database corrupted for some reason. This happened on Friday
morning. In this scenario, we should come out with a brand new database with:
1. All appropriate tablespaces properly created, with sufficient sizes.
2. Rollback segments created that can support our import process.
3. Finally, the database started.
At this point, we start our recovery by applying
1. X1.dmp ( complete dump file )
2. C4.dmp ( cumulative dump file )
3. I5.dmp ( incremental dump file )

As we can observe, we are not using I2.dmp and I3.dmp, since C4.dmp already
covers all the contents of I2 and I3.

We are not going to drop the whole database; rather, we simply drop U_SCOTT's
objects and attempt to get them back.

DROPPING USER U_SCOTT's objects.

Table dropped.
Table dropped.
Table dropped.
Table dropped.

USER U_SCOTT's objects dropped.

Now simulating a brand new database by creating a brand new U_SCOTT schema.

Now beginning to IMPORT using the incremental backup set
X1_sun.dmp, C4_wed.dmp, I5_thu.dmp.

$ imp system/manager file=X1_sun.dmp log=X1_sun_imp.log inctype=restore \
  commit=y

 "); end;',NLSENV=>'NLS_LANGUAGE=''AMERICAN'' NLS_TERRITORY=''AMERICA'' NLS_C"
 "URRENCY=''$'' NLS_ISO_CURRENCY=''AMERICA'' NLS_NUMERIC_CHARACTERS=''.,'' NL"
 "S_DATE_FORMAT=''dd-Mon-yyyy'' NLS_DATE_LANGUAGE=''AMERICAN'' NLS_SORT=''BIN"
 "ARY''',ENV=>'0102000200000000'); END;"
IMP-00003: ORACLE error 1 encountered
ORA-00001: unique constraint (SYS.I_JOB_JOB) violated
ORA-06512: at "SYS.DBMS_IJOB", line 210
ORA-06512: at line 1
IMP-00017: following statement failed with ORACLE error 1:
 "BEGIN SYS.DBMS_IJOB.SUBMIT(JOB=>263,LUSER=>'REPADMIN',PUSER=>'REPADMIN',C"
 "USER=>'REPADMIN',NEXT_DATE=>TO_DATE('2003-01-02:16:11:54','YYYY-MM-DD:HH24:"
 "MI:SS'),INTERVAL=>'/*1:Hrs*/ sysdate + 1/24',BROKEN=>FALSE,WHAT=>'declare r"
 "c binary_integer; begin rc := sys.dbms_defer_sys.purge( delay_seconds=>0); "
 "end;',NLSENV=>'NLS_LANGUAGE=''AMERICAN'' NLS_TERRITORY=''AMERICA'' NLS_CURR"
 "ENCY=''$'' NLS_ISO_CURRENCY=''AMERICA'' NLS_NUMERIC_CHARACTERS=''.,'' NLS_D"
 "ATE_FORMAT=''dd-Mon-yyyy'' NLS_DATE_LANGUAGE=''AMERICAN'' NLS_SORT=''BINARY"
 "''',ENV=>'0102000200000000'); END;"
IMP-00003: ORACLE error 1 encountered
ORA-00001: unique constraint (SYS.I_JOB_JOB) violated
ORA-06512: at "SYS.DBMS_IJOB", line 210
ORA-06512: at line 1
Import terminated successfully with warnings.

Now checking the objects and their records created in user U_SCOTT.

SQL*Plus: Release 9.0.1.0.0 - Production on Thu Jan 2 16:14:04 2003
(c) Copyright 2001 Oracle Corporation.  All rights reserved.

Connected to:
Oracle9i Enterprise Edition Release 9.0.1.0.0 - Production
With the Partitioning option
JServer Release 9.0.1.0.0 - Production

SQL> SET FEED ON
SQL> SELECT COUNT(1) FROM bonus;

  COUNT(1)
----------
         0

1 row selected.

SQL> !pause
SQL> SELECT COUNT(1) FROM dept;

  COUNT(1)
----------
         4

1 row selected.

SQL> !pause
SQL> SELECT COUNT(1) FROM dummy;

  COUNT(1)
----------
         1

1 row selected.

SQL> !pause
SQL> SELECT COUNT(1) FROM emp;

  COUNT(1)
----------
        14

1 row selected.

SQL> !pause
SQL> SELECT COUNT(1) FROM salgrade;

  COUNT(1)
----------
         5

1 row selected.

SQL> exit
Disconnected from Oracle9i Enterprise Edition Release 9.0.1.0.0 - Production
With the Partitioning option
JServer Release 9.0.1.0.0 - Production

Now importing using file C4_wed.dmp, because the contents of I2_mon.dmp and
I3_tue.dmp are anyway contained in C4_wed.dmp.

$ imp system/manager file=C4_wed.dmp log=C4_wed_imp.log inctype=restore \
  commit=y ignore=y

 "CURRENCY=''$'' NLS_ISO_CURRENCY=''AMERICA'' NLS_NUMERIC_CHARACTERS=''.,'' NL"
 "S_DATE_FORMAT=''dd-Mon-yyyy'' NLS_DATE_LANGUAGE=''AMERICAN'' NLS_SORT=''BIN"
 "ARY''',ENV=>'0102000200000000'); END;"
IMP-00003: ORACLE error 1 encountered
ORA-00001: unique constraint (SYS.I_JOB_JOB) violated
ORA-06512: at "SYS.DBMS_IJOB", line 210
ORA-06512: at line 1
IMP-00017: following statement failed with ORACLE error 1:
 "BEGIN SYS.DBMS_IJOB.SUBMIT(JOB=>263,LUSER=>'REPADMIN',PUSER=>'REPADMIN',C"
 "USER=>'REPADMIN',NEXT_DATE=>TO_DATE('2003-01-02:16:11:54','YYYY-MM-DD:HH24:"
 "MI:SS'),INTERVAL=>'/*1:Hrs*/ sysdate + 1/24',BROKEN=>FALSE,WHAT=>'declare r"
 "c binary_integer; begin rc := sys.dbms_defer_sys.purge( delay_seconds=>0); "
 "end;',NLSENV=>'NLS_LANGUAGE=''AMERICAN'' NLS_TERRITORY=''AMERICA'' NLS_CURR"
 "ENCY=''$'' NLS_ISO_CURRENCY=''AMERICA'' NLS_NUMERIC_CHARACTERS=''.,'' NLS_D"
 "ATE_FORMAT=''dd-Mon-yyyy'' NLS_DATE_LANGUAGE=''AMERICAN'' NLS_SORT=''BINARY"
 "''',ENV=>'0102000200000000'); END;"
IMP-00003: ORACLE error 1 encountered
ORA-00001: unique constraint (SYS.I_JOB_JOB) violated
ORA-06512: at "SYS.DBMS_IJOB", line 210
ORA-06512: at line 1
Import terminated successfully with warnings.

Now checking for objects and their records in user U_SCOTT.

SQL*Plus: Release 9.0.1.0.0 - Production on Thu Jan 2 16:15:43 2003
(c) Copyright 2001 Oracle Corporation.  All rights reserved.
Connected to:
Oracle9i Enterprise Edition Release 9.0.1.0.0 - Production
With the Partitioning option
JServer Release 9.0.1.0.0 - Production

SQL> SET FEED ON
SQL> SELECT COUNT(1) FROM bonus;

  COUNT(1)
----------
         0

1 row selected.

SQL> !pause
SQL> SELECT COUNT(1) FROM dept;

  COUNT(1)
----------
         5

1 row selected.

SQL> !pause
SQL> SELECT COUNT(1) FROM dummy;

  COUNT(1)
----------
         1

1 row selected.
SQL> !pause
SQL> SELECT COUNT(1) FROM emp;

  COUNT(1)
----------
        23

1 row selected.

SQL> !pause
SQL> SELECT COUNT(1) FROM salgrade;
  COUNT(1)
----------
         7

1 row selected.

SQL> exit
Disconnected from Oracle9i Enterprise Edition Release 9.0.1.0.0 - Production
With the Partitioning option
JServer Release 9.0.1.0.0 - Production

Now importing using file I5_thu.dmp, which got created on Thursday.

$ imp system/manager file=I5_thu.dmp log=I5_thu_imp.log inctype=restore \
  commit=y ignore=y

 "); end;',NLSENV=>'NLS_LANGUAGE=''AMERICAN'' NLS_TERRITORY=''AMERICA'' NLS_C"
 "URRENCY=''$'' NLS_ISO_CURRENCY=''AMERICA'' NLS_NUMERIC_CHARACTERS=''.,'' NL"
 "S_DATE_FORMAT=''dd-Mon-yyyy'' NLS_DATE_LANGUAGE=''AMERICAN'' NLS_SORT=''BIN"
 "ARY''',ENV=>'0102000200000000'); END;"
IMP-00003: ORACLE error 1 encountered
ORA-00001: unique constraint (SYS.I_JOB_JOB) violated
ORA-06512: at "SYS.DBMS_IJOB", line 210
ORA-06512: at line 1
IMP-00017: following statement failed with ORACLE error 1:
 "BEGIN SYS.DBMS_IJOB.SUBMIT(JOB=>263,LUSER=>'REPADMIN',PUSER=>'REPADMIN',C"
 "USER=>'REPADMIN',NEXT_DATE=>TO_DATE('2003-01-02:16:11:54','YYYY-MM-DD:HH24:"
 "MI:SS'),INTERVAL=>'/*1:Hrs*/ sysdate + 1/24',BROKEN=>FALSE,WHAT=>'declare r"
 "c binary_integer; begin rc := sys.dbms_defer_sys.purge( delay_seconds=>0); "
 "end;',NLSENV=>'NLS_LANGUAGE=''AMERICAN'' NLS_TERRITORY=''AMERICA'' NLS_CURR"
 "ENCY=''$'' NLS_ISO_CURRENCY=''AMERICA'' NLS_NUMERIC_CHARACTERS=''.,'' NLS_D"
 "ATE_FORMAT=''dd-Mon-yyyy'' NLS_DATE_LANGUAGE=''AMERICAN'' NLS_SORT=''BINARY"
 "''',ENV=>'0102000200000000'); END;"
IMP-00003: ORACLE error 1 encountered
ORA-00001: unique constraint (SYS.I_JOB_JOB) violated
ORA-06512: at "SYS.DBMS_IJOB", line 210
ORA-06512: at line 1
Import terminated successfully with warnings.

Now checking for objects and their records in user U_SCOTT.

SQL*Plus: Release 9.0.1.0.0 - Production on Thu Jan 2 16:17:11 2003
(c) Copyright 2001 Oracle Corporation.  All rights reserved.
Connected to:
Oracle9i Enterprise Edition Release 9.0.1.0.0 - Production
With the Partitioning option
JServer Release 9.0.1.0.0 - Production

SQL> SET FEED ON
SQL> SELECT COUNT(1) FROM bonus;

  COUNT(1)
----------
         0

1 row selected.

SQL> !pause
SQL> SELECT COUNT(1) FROM dept;

  COUNT(1)
----------
         5

1 row selected.

SQL> !pause
SQL> SELECT COUNT(1) FROM dummy;

  COUNT(1)
----------
         1

1 row selected.
SQL> !pause
SQL> SELECT COUNT(1) FROM emp;

  COUNT(1)
----------
        46

1 row selected.

SQL> !pause
SQL> SELECT COUNT(1) FROM salgrade;

  COUNT(1)
----------
         7

1 row selected.

SQL> exit
Disconnected from Oracle9i Enterprise Edition Release 9.0.1.0.0 - Production
With the Partitioning option
JServer Release 9.0.1.0.0 - Production
SYS> GRANT CONNECT, RESOURCE TO pict;
SYS> CONN pict/pict
Userd> @$ORACLE_HOME/sqlplus/demo/demobld.sql
Userd> INSERT INTO emp SELECT * FROM emp;    (up to 50,000 rows)
Userd> COMMIT;
SYS> SELECT segment_name, extent_id, file_id, block_id, blocks, bytes
     FROM   dba_extents
     WHERE  tablespace_name = 'DMTS'
     ORDER BY extent_id, block_id;
SYS> SAVE dba_ext
(note: number of extents in tablespace DMTS = 12)

$ exp pict/pict file=reorg.dmp tables=emp

SYS> CONN pict/pict
Userd> DROP TABLE emp PURGE;

$ imp pict/pict file=reorg.dmp tables=emp

SYS> @dba_ext
(note: the number of extents = 1)
Ratio for Library Cache
-----------------------
.996136284

The ratio should be less than 1. If it is more than 1, then you need to
increase your shared pool by increasing the value of SHARED_POOL_SIZE in
init.ora.

In Oracle, a cursor, trigger, procedure, or package can be held in memory
using a special shared pool package, DBMS_SHARED_POOL. To create this
package, run the script:

SQL> @$ORACLE_HOME/rdbms/admin/dbmspool.sql

Package created.
Grant succeeded.
View created.
Package body created.

To see some of the objects we may try to pin in memory:

SQL> SELECT SUBSTR(name,1,35), pins, sharable_mem, kept
     FROM v$db_object_cache
     WHERE name LIKE 'DBMS_%';
SUBSTR(NAME,1,35)                   PINS  SHARABLE_MEM  KEP
---------------------------------  -----  ------------  ---
DBMS_APPLICATION_INFO                  0             0  NO
DBMS_OUTPUT                            0             0  NO
DBMS_UTILITY                           0         24924  NO
DBMS_STANDARD                          0         23633  NO
DBMS_APPLICATION_INFO                  0          7696  NO
DBMS_OUTPUT                            0          7674  NO
DBMS_SHARED_POOL                       0          9092  NO
DBMS_SHARED_POOL                       0         11164  NO
DBMS_APPLICATION_INFO                  0         12465  NO
DBMS_APPLICATION_INFO                  0          3041  NO
DBMS_OUTPUT                            0         13171  NO
DBMS_OUTPUT                            0          7783  NO

12 rows selected.
Execute the shared pool package to pin one of the Packages selected (DBMS_SHARED_POOL).
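The pinning call itself is not shown in the transcript. A minimal sketch, assuming we pin the DBMS_SHARED_POOL package from the listing above (the KEEP procedure is the documented entry point of this package):

```sql
-- Pin a package in the shared pool so it is not aged out by the LRU
-- algorithm. The second argument is the object type flag:
-- 'P' = package/procedure/function, 'C' = cursor, 'T' = trigger.
EXECUTE DBMS_SHARED_POOL.KEEP('SYS.DBMS_SHARED_POOL', 'P');
```

After this call, the KEPT column of v$db_object_cache shows YES for the pinned object, as the next listing demonstrates.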
To determine whether the object was successfully pinned:

SQL> SELECT SUBSTR(name,1,35), pins, sharable_mem, kept
     FROM v$db_object_cache
     WHERE name LIKE 'DBMS_%';

SUBSTR(NAME,1,35)                   PINS  SHARABLE_MEM  KEP
---------------------------------  -----  ------------  ---
DBMS_APPLICATION_INFO                  0             0  NO
DBMS_OUTPUT                            0             0  NO
DBMS_SQL                               0             0  NO
DBMS_UTILITY                           0         29700  NO
DBMS_UTILITY                           0         29592  NO
DBMS_STANDARD                          0         23633  NO
DBMS_APPLICATION_INFO                  0          7696  NO
DBMS_OUTPUT                            0          7674  NO
DBMS_SESSION                           0             0  NO
DBMS_SHARED_POOL                       0         10384  YES
DBMS_SHARED_POOL                       0         11164  YES
DBMS_DDL                               0             0  NO
DBMS_APPLICATION_INFO                  0         12465  NO
DBMS_APPLICATION_INFO                  0          3041  NO
DBMS_OUTPUT                            0         13171  NO
DBMS_OUTPUT                            0          7783  NO
DBMS_JOB                               0             0  NO

17 rows selected.

If the object was pinned, the KEPT column has a YES value; otherwise it says NO.

DICTIONARY CACHE
The following kinds of objects are held in the SYS and SYSTEM schemas (the
SYSTEM tablespace):

X$ tables
V$ views
DBA_ views
USER_ views

There is a way to measure data dictionary cache performance:

SQL> SELECT (1 - (SUM(gets) / (SUM(getmisses) + SUM(gets)))) * 100 Ratio
     FROM v$rowcache;
TYPE         SUBSTR(PARAMETER,1,2      FIXED   GETS  GETMISSES  FLUSHES
-----------  --------------------  ---------  -----  ---------  -------
PARENT       dc_free_extents               0      2          1        0
PARENT       dc_used_extents               0      0          0        0
PARENT       dc_segments                   0     51         35        0
PARENT       dc_tablespaces                0     18          3        0
PARENT       dc_tablespace_quotas          0      0          0        0
PARENT       dc_files                      0      1          1        0
PARENT       dc_users                      0     39         11        0
PARENT       dc_rollback_segments          1    318         21       20
PARENT       dc_objects                   55    530        124        5
PARENT       dc_global_oids                0      4          2        0
PARENT       dc_constraints                0      0          0        0
PARENT       dc_object_ids                55    515         69        3
PARENT       dc_sequences                  0      2          2        2
PARENT       dc_usernames                  0     89          3        0
PARENT       dc_database_links             0      0          0        0
PARENT       dc_histogram_defs             0     24         24        0
PARENT       dc_table_scns                 0      0          0        0
PARENT       dc_outlines                   0      0          0        0
PARENT       dc_profiles                   0      2          1        0
PARENT       dc_encrypted_objects          0      0          0        0
PARENT       dc_encryption_profil          0      0          0        0
PARENT       dc_qmc_cache_entries          0      0          0        0
SUBORDINATE  dc_users                      0      0          0        0
SUBORDINATE  dc_histogram_data             0      0          0        0
SUBORDINATE  dc_histogram_data_va          0      0          0        0
SUBORDINATE  dc_partition_scns             0      0          0        0
SUBORDINATE  dc_user_grants                0     24          9        0
SUBORDINATE  dc_app_role                   0      0          0        0

28 rows selected.
The ratio should be less than 15%. If it is more, then you have to increase
the value of SHARED_POOL_SIZE.

NOTE: As you have observed, for both the library cache and the dictionary
cache we increase only SHARED_POOL_SIZE. The Oracle engine will look after
how much to allocate to the library cache and how much to the dictionary
cache.
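Since Oracle9i makes the shared pool part of the dynamic SGA (as noted at the start of this document), the increase can be done online without a restart. A hedged sketch; the 72M figure is only an illustration, and the new size must fit within SGA_MAX_SIZE:

```sql
-- Grow the shared pool while the instance is running.
-- The value chosen here is illustrative, not a recommendation.
ALTER SYSTEM SET shared_pool_size = 72M;
```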
TUNING THE DATABASE BUFFER CACHE

The data queried by users is copied into the database buffer cache by the
server process. The number of blocks read at a time depends on
DB_FILE_MULTIBLOCK_READ_COUNT. See what the parameter value is for your
database:

SQL> show parameter db_file_multiblock_read_count

NAME                                 TYPE        VALUE
------------------------------------ ----------- -----
db_file_multiblock_read_count        integer     8

This value shows the number of blocks read at one time, and one such read is
called a PHYSICAL READ. As a DBA you need to reduce such physical reads.

Calculate the hit ratio for the database buffer cache:

SQL> select 1 - (p.value / (d.value + c.value)) "Cache Hit Ratio"
     from  v$sysstat p, v$sysstat c, v$sysstat d
     where p.name = 'physical reads'
     and   d.name = 'db block gets'
     and   c.name = 'consistent gets';

NAME                                          VALUE
----------------------------------------  ---------
db block gets                                  3737
consistent gets                               19492
physical reads                                 4469

Cache Hit Ratio
---------------
     .808057381

The ratio should be greater than 85%. If not, increase the size of the
database buffer cache parameter.
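In Oracle9i the standard buffer cache is sized by DB_CACHE_SIZE, which is also part of the dynamic SGA, so a low hit ratio can be addressed online. A sketch with an illustrative value (64M is an assumption, not a recommendation):

```sql
-- Grow the default buffer cache while the instance is running.
-- SCOPE=BOTH also records the change in the spfile (assumes an spfile
-- is in use); it must still fit within SGA_MAX_SIZE.
ALTER SYSTEM SET db_cache_size = 64M SCOPE=BOTH;
```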
DEMO ON APPLICATION TUNING

Observing with tkprof

Settings to be made before working with the tkprof utility:

SQL> connect scott/tiger
SQL> alter session set sql_trace=true;

Session altered.

SQL> alter session set timed_statistics=true;

Session altered.

SQL> select * from emp where deptno=10;

EMPNO  ENAME   JOB        MGR   HIREDATE   SAL   COMM  DEPTNO
-----  ------  ---------  ----  ---------  ----  ----  ------
 7782  CLARK   MANAGER    7839  09-JUN-81  2450            10
 7839  KING    PRESIDENT        17-NOV-81  5000            10
 7934  MILLER  CLERK      7782  23-JAN-82  1300            10

Working with tkprof

The SQL trace requested will be dumped into the destination given by
user_dump_dest in the parameter file. The file name format will be
ora_<server process id>.trc. This utility is run at the OS level.

$ tkprof ora_1521.trc junk.log explain=U_SCOTT/U_SCOTT

Viewing the file:

$ cat /disk3/oradata/rahuldb/udump | less

TKPROF: Release 9.2.0.1.0 - Production on Fri May 6 16:23:29 2005
Trace file: /disk3/oradata/rahuldb/udump/rahuldb_ora_6681.trc
Sort options: default
********************************************************************************
count    = number of times OCI procedure was executed
cpu      = cpu time in seconds executing
elapsed  = elapsed time in seconds executing
disk     = number of physical reads of buffers from disk
query    = number of buffers gotten for consistent read
current  = number of buffers gotten in current mode (usually for update)
rows     = number of rows processed by the fetch or execute call
********************************************************************************

alter session set sql_trace=true

call     count    cpu  elapsed   disk  query  current   rows
-------  -----  -----  -------  -----  -----  -------  -----
Parse        0   0.00     0.00      0      0        0      0

Observing EXPLAIN PLAN

Before going to explain plan, we need to create a permanent plan table in the
schema where we request the Oracle engine to give the access path. To create
the plan table:

SQL> connect scott/tiger
SQL> @$ORACLE_HOME/rdbms/admin/utlxplan.sql

Table created.

TNAME                           TABTYPE  CLUSTERID
------------------------------  -------  ---------
BONUS                           TABLE
DEPT                            TABLE
DUMMY                           TABLE
EMP                             TABLE
PLAN_TABLE                      TABLE
SALGRADE                        TABLE

6 rows selected.

The disadvantage with tkprof is that the access path can be acquired only
after the SQL statement is executed. Using explain plan we can ask Oracle to
suggest the access path before even executing the SQL statement.

SQL> explain plan set statement_id='YOGI' for
     select * from emp where deptno=10;

Explained.

Selecting from the plan table:

SQL> select level||operation||options||object_name Q_Plan
     from  plan_table
     where statement_id='YOGI';

Q_PLAN
-----------------------------
CHOOSE SELECT STATEMENT
 TABLE ACCESS FULL EMP
Passing SQL hints to use the different types of optimizer modes:

SQL> explain plan set statement_id='YOGI' for
     select /*+ RULE */ * from emp where deptno=10;

Explained.

By default the Oracle engine uses the CHOOSE optimizer mode for parsing the
SQL statement, but in the above statement we pass the rule-based optimizer as
a hint.

Selecting from the plan table:

SQL> select optimizer||operation||options||object_name Q_Plan
     from  plan_table
     where statement_id='YOGI';

Q_PLAN
----------------------------
HINT: RULE SELECT STATEMENT
 TABLE ACCESS FULL EMP

The table does not have any index on it, so the above SQL statements go with
a full table scan to retrieve the data. Create an index on the table and
observe the difference:

SQL> create index ind_emp on emp(deptno);

Index created.

Now we select from the same table and check whether it is using the index or
not.

SQL> delete from plan_table;
2 rows deleted.

Commit complete.

SQL> explain plan set statement_id='YOGI' for
     select * from emp where deptno=10;

Explained.

Selecting from the plan table:

SQL> select optimizer||operation||options||object_name Q_Plan
     from  plan_table
     where statement_id='YOGI';

Q_PLAN
----------------------------------
CHOOSE SELECT STATEMENT
 TABLE ACCESS BY INDEX ROWID EMP
  INDEX RANGE SCAN IND_EMP
DEMO ON SHARED SERVER (MTS)

The multithreaded server allows many user sessions to share a group of
server processes, thereby reducing the overhead resources necessary to
support a large simultaneous user base. This structure also allows you to
reduce the overall idle time among these server sessions.

For configuring MTS we have to set some parameters in the init<SID>.ora
file:

shared_servers     = 2
max_shared_servers = 10
dispatchers        = "(protocol=tcp)(dispatchers=1)"
dispatchers        = "ipc,1"
max_dispatchers    = 10
local_listener     = "(address=(protocol=tcp)(host=192.168.0.12)(port=9990))"

Before starting the database, we first have to configure and start the
listener.

Starting the listener:

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=dba15)(PORT=10653)))
STATUS of the LISTENER
------------------------
Alias                     lis_rahul
Version                   TNSLSNR for Linux: Version 9.2.0.1.0 - Production
Start Date                06-MAY-2005 17:06:54
Uptime                    0 days 0 hr. 0 min. 0 sec
Trace Level               off
Security                  OFF
SNMP                      OFF
Listener Parameter File   /tmp/rahul/O_NET/listener.ora
Listener Log File         /oraeng/app/oracle/product/9.2.0/network/log/lis_rahul.log
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=dba15)(PORT=10653)))
Services Summary...
Service "rahuldb" has 1 instance(s).
  Instance "rahuldb", status UNKNOWN, has 1 handler(s) for this service...

Now start the database.

ORACLE instance shut down.
ORACLE instance started.

Total System Global Area   17589096 bytes
Fixed Size                   279400 bytes
Variable Size              12582912 bytes
Database Buffers            4194304 bytes
Redo Buffers                 532480 bytes
Database mounted.
Database opened.
Check the processes at the OS level:

$ ps -x | grep rahuldb
 3781 ?  S  0:00 ora_pmon_rahuldb
 3783 ?  S  0:00 ora_dbw0_rahuldb
 3787 ?  S  0:00 ora_lgwr_rahuldb
 3789 ?  S  0:00 ora_ckpt_rahuldb
 3791 ?  S  0:00 ora_smon_rahuldb
 3793 ?  S  0:00 ora_reco_rahuldb
 3800 ?  S  0:00 ora_s000_rahuldb
 3804 ?  S  0:00 ora_s001_rahuldb
 3806 ?  S  0:00 ora_d000_rahuldb
Now we select some info about the dispatchers started.

SQL> select name, status, accept, bytes, owned, idle, busy
     from v$dispatcher;

NAME  STATUS  ACC  Bytes  OWNED   IDLE  BUSY
----  ------  ---  -----  -----  -----  ----
D000  WAIT                       21081     0
D001  WAIT                       21078     0
Now this database is working as a multithreaded server (MTS). Any client
connecting to this database will use a shared server process only.

Now configure a TNSNAMES.ORA at the client for using the MTS connection.
Configure the TNSNAMES.ORA and append it to the default location
$ORACLE_HOME/network/admin/tnsnames.ora:

tns_rahul =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = dba15)(PORT = 10653))
    )
    (CONNECT_DATA =
      (SERVICE_NAME = rahuldb.dba15)
    )
  )
Configured and appended. Now, using that alias (tns_rahul), connect to this
MTS-configured database from the client.

$ sqlplus system/manager@tns_rahul
SQL> Connected.

Now see the background processes at the OS level.

$ ps x
  PID TTY    STAT  TIME  COMMAND
  964 pts/9  S     0:00  -bash
 1366 pts/9  S     0:00  -bash
 1367 pts/9  S     0:00  sh /.WILSHIRE/USERDEMO/user_demo
23468 pts/9  S     0:00  sh /.WILSHIRE/USERDEMO/FILES/mts_demo
31666 pts/9  S     0:00  /oraeng/app/oracle/product/9.2.0/bin/tnslsnr lis_rahul
 3781 ?      S     0:00  ora_pmon_rahuldb
 3783 ?      S     0:00  ora_dbw0_rahuldb
 3787 ?      S     0:00  ora_lgwr_rahuldb
 3789 ?      S     0:00  ora_ckpt_rahuldb
 3791 ?      S     0:00  ora_smon_rahuldb
 3793 ?      S     0:00  ora_reco_rahuldb
 3800 ?      S     0:00  ora_s000_rahuldb
 3804 ?      S     0:00  ora_s001_rahuldb
 3806 ?      S     0:00  ora_d000_rahuldb
 3808 ?      S     0:00  ora_d001_rahuldb
19968 pts/9  S     0:00  sqlplus -s
23525 pts/9  R     0:00  ps x

Now see the dispatchers which are being used.

SQL> select name, status, accept, bytes, owned, idle, busy
     from v$dispatcher;
NAME  STATUS  ACC  Bytes  OWNED   IDLE  BUSY
----  ------  ---  -----  -----  -----  ----
D000  WAIT                       56313     2
D001  WAIT                       56311     0

Now see the shared servers and their usage.

SQL> select name, status, messages, bytes, idle, busy
     from v$shared_server;

NAME  STATUS        MESSAGES  Bytes   IDLE  BUSY
----  ------------  --------  -----  -----  ----
S000  EXEC                35   4921  64013    43
S001  WAIT(COMMON)        10   1052  64039     0

From the client connection we do some transactions, and from the server side
we query the dispatchers and shared servers.

SQL> insert into emp select * from emp;
SQL> insert into emp select * from emp;

From another session we query the dispatchers and shared servers, and spool
the output to a file ( sel_disp_ser.log ).

SQL> select name, status, accept, bytes, owned, idle, busy
     from v$dispatcher;
SQL> select name, status, messages, bytes, idle, busy
     from v$shared_server;

NAME  STATUS  OWNED   IDLE  BUSY
----  ------  -----  -----  ----
D000  WAIT        1  81942     4
D001  WAIT        0  81942     0

NAME  STATUS        MESSAGES  Bytes   IDLE  BUSY
----  ------------  --------  -----  -----  ----
S000  EXEC               117  14829  77242  4754
S001  WAIT(COMMON)        24   1763  81936    19

Open the spool file ( dis_rahul.log ) and see the status of both the
dispatchers and the shared servers.

$ cat dis_rahul.log | less

NAME  STATUS  ACC  Bytes  OWNED   IDLE  BUSY
----  ------  ---  -----  -----  -----  ----
D000  WAIT                       76719     3
D001  WAIT                       76718     0
NAME  STATUS        MESSAGES  Bytes   IDLE  BUSY
----  ------------  --------  -----  -----  ----
S000  WAIT(COMMON)        92  12145  76708    57
S001  EXEC                19   1639  76727     5

Now see the services of this shared server instance.

$ lsnrctl services lis_rahul

Copyright (c) 1991, 2002, Oracle Corporation.  All rights reserved.

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=dba15)(PORT=10653)))
Services Summary...
Instance "rahuldb", status READY, has 1 handler(s) for this service...
  Handler(s):
    "D001" established:0 refused:0 current:0 max:1002 state:ready
      DISPATCHER <machine: dba15, pid: 3808>
      (ADDRESS=(PROTOCOL=ipc)(KEY=#3808.1))
Service "rahuldb" has 1 instance(s).
  Instance "rahuldb", status UNKNOWN, has 1 handler(s) for this service...
    Handler(s):
      "DEDICATED" established:0 refused:0
         LOCAL SERVER
Service "rahuldb.dba15" has 1 instance(s).
  Instance "rahuldb", status READY, has 2 handler(s) for this service...
    Handler(s):
      "DEDICATED" established:0 refused:0 state:ready
         LOCAL SERVER
      "D000" established:2 refused:0 current:0 max:1002 state:ready
         DISPATCHER <machine: dba15, pid: 3806>
         (ADDRESS=(PROTOCOL=tcp)(HOST=dba15)(PORT=47636))

NOTE: If a client needs a dedicated connection, then configure TNSNAMES as:

tns_dedi_rahul =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.0.15)(PORT = 10653))
    )
    (CONNECT_DATA =
      (SERVICE_NAME = rahuldb.dba15)
      (SERVER = DEDICATED)
    )
  )
DEMO ON PARTITIONED TABLES AND INDEXES

Databases have grown to hundreds of gigabytes:
1. Tables greater than 10 GB are common in large data warehouse systems.
2. Tables need one or more associated indexes, which can also be gigabytes
   in size.
3. Tables can grow larger than a medium-sized database.

Why use partitioning?
1. Continued data availability with partial failures
2. Data management operations within finite maintenance windows
3. Scalable performance with substantial growth in data volumes
4. Simplified data disk placement

Types of indexes
1. Nonpartitioned indexes
2. Partitioned indexes
   Global indexes
     o Global prefixed indexes
   Local indexes
     o Local prefixed indexes
     o Local nonprefixed indexes
DATA DICTIONARY VIEWS FOR PARTITIONED TABLES AND INDEXES

DBA_PART_TABLES
DBA_TAB_PARTITIONS
DBA_TABLES
DBA_PART_INDEXES
DBA_IND_PARTITIONS
DBA_INDEXES
DBA_OBJECTS
DBA_SEGMENTS

Now creating tablespaces (TS1, TS2, TS3, TS4, TS5) to store the different
partitions.

SQL> create tablespace TS_RAHULDB_TS1 datafile '/disk3/oradata/rahuldb/rahuldb_ts01.dbf' size 2m;
SQL> create tablespace TS_RAHULDB_TS2 datafile '/disk3/oradata/rahuldb/rahuldb_ts02.dbf' size 2m;
SQL> create tablespace TS_RAHULDB_TS3 datafile '/disk3/oradata/rahuldb/rahuldb_ts03.dbf' size 2m;
SQL> create tablespace TS_RAHULDB_TS4 datafile '/disk3/oradata/rahuldb/rahuldb_ts04.dbf' size 2m;
SQL> create tablespace TS_RAHULDB_TS5 datafile '/disk3/oradata/rahuldb/rahuldb_ts05.dbf' size 2m;

Tablespace created.
Tablespace created.
Tablespace created.
Tablespace created.
Tablespace created.
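The CREATE TABLE statement for the range-partitioned REMP table is missing from the transcript. Judging from the listings that follow (partitions RP1..RP4 with high values 11, 21, 31, 41 on deptno, and a column list matching the HEMP table later in this demo), a likely form is the following sketch; the column list and bounds are inferred, not taken from the original:

```sql
-- Hypothetical reconstruction of the REMP range-partitioned table.
-- Partition bounds and tablespaces inferred from the listings below.
CREATE TABLE remp
( empno  NUMBER(4),
  ename  VARCHAR2(10),
  deptno NUMBER(2))
PARTITION BY RANGE (deptno)
(PARTITION rp1 VALUES LESS THAN (11) TABLESPACE TS_RAHULDB_TS1,
 PARTITION rp2 VALUES LESS THAN (21) TABLESPACE TS_RAHULDB_TS2,
 PARTITION rp3 VALUES LESS THAN (31) TABLESPACE TS_RAHULDB_TS3,
 PARTITION rp4 VALUES LESS THAN (41) TABLESPACE TS_RAHULDB_TS4);
```

Rows route to a partition by deptno: for example deptno 10 falls below 11 and lands in rp1, which matches the partition query output below.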
1 row created.  (repeated once for each sample row inserted)
SQL> SELECT a.table_name, a.partition_name, b.partitioning_type,
            a.tablespace_name, a.high_value
     FROM   user_tab_partitions a, user_part_tables b
     WHERE  a.table_name = b.table_name
     ORDER BY a.table_name, a.partition_name;
T-Nam  P-Name   P-Type  TS-Name          H-Value
-----  -------  ------  ---------------  -------
REMP   RP1      RANGE   TS_RAHULDB_TS1   11
REMP   RP2      RANGE   TS_RAHULDB_TS2   21
REMP   RP3      RANGE   TS_RAHULDB_TS3   31
REMP   RP4      RANGE   TS_RAHULDB_TS4   41
Please enter the TABLE NAME for (tname) and the PARTITION NAME for (Pname).

Enter value for tname: REMP
Enter value for pname: RP1
old 1: select * from &tname partition(&Pname)
new 1: select * from REMP partition(RP1)

EMPNO  ENAME    DEPTNO
-----  -------  ------
 7782  CLARK        10
 7839  KING         10
 7934  MILLER       10

SQL> ALTER TABLE remp MOVE PARTITION rp2 TABLESPACE TS_RAHULDB_TS3;

Table altered.

P-Name  P-Type  TS-Name          H-Value
------  ------  ---------------  -------
RP1     RANGE   TS_RAHULDB_TS1   11
RP2     RANGE   TS_RAHULDB_TS3   21
RP3     RANGE   TS_RAHULDB_TS3   31
RP4     RANGE   TS_RAHULDB_TS4   41
T-Nam  P-Name   P-Type  TS-Name          H-Value
-----  -------  ------  ---------------  --------
REMP   RP1      RANGE   TS_RAHULDB_TS1   11
REMP   RP2      RANGE   TS_RAHULDB_TS3   21
REMP   RP3      RANGE   TS_RAHULDB_TS3   31
REMP   RP4      RANGE   TS_RAHULDB_TS4   41
REMP   RP5      RANGE   TS_RAHULDB_TS5   MAXVALUE

Table altered.
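The statement that produced the new RP5 partition is not shown in the transcript. A likely form, given the MAXVALUE high bound shown above (hypothetical reconstruction, not from the original):

```sql
-- Hypothetical: add a catch-all partition for rows above the last bound.
ALTER TABLE remp ADD PARTITION rp5
  VALUES LESS THAN (MAXVALUE) TABLESPACE TS_RAHULDB_TS5;
```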
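The next listing shows RP2 and RP3 combined into a single RP2_RP3 partition, but the merge command itself is missing from the transcript. A likely form (the partition and tablespace names are taken from the listing; the statement is a reconstruction):

```sql
-- Hypothetical: merge two adjacent range partitions into one.
ALTER TABLE remp MERGE PARTITIONS rp2, rp3
  INTO PARTITION rp2_rp3 TABLESPACE TS_RAHULDB_TS2;
```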
SQL> SELECT a.table_name, a.partition_name, b.partitioning_type,
            a.tablespace_name, a.high_value
     FROM   user_tab_partitions a, user_part_tables b
     WHERE  a.table_name = b.table_name
     ORDER BY a.table_name, a.partition_name;
T-Name  P-Name   TS-Name          H-Value
------  -------  ---------------  --------
REMP    RP1      TS_RAHULDB_TS1   11
REMP    RP2_RP3  TS_RAHULDB_TS2   31
REMP    RP4      TS_RAHULDB_TS4   41
REMP    RP5      TS_RAHULDB_TS5   MAXVALUE

Now splitting an existing partition into two.
SQL> alter table remp SPLIT PARTITION rp2_rp3 AT (21)
     INTO (partition rp2 tablespace TS_RAHULDB_TS2,
           partition rp3 tablespace TS_RAHULDB_TS3);
T-Nam  P-Name  P-Type  TS-Name          H-Value
-----  ------  ------  ---------------  --------
REMP   RP1     RANGE   TS_RAHULDB_TS1   11
REMP   RP2     RANGE   TS_RAHULDB_TS2   21
REMP   RP3     RANGE   TS_RAHULDB_TS3   31
REMP   RP4     RANGE   TS_RAHULDB_TS4   41
REMP   RP5     RANGE   TS_RAHULDB_TS5   MAXVALUE
P-Name  P-Type  TS-Name
------  ------  ---------------
RP1     RANGE   TS_RAHULDB_TS1
RP2     RANGE   TS_RAHULDB_TS2
RP4     RANGE   TS_RAHULDB_TS4
RP5     RANGE   TS_RAHULDB_TS5
RP7     RANGE   TS_RAHULDB_TS3
T-Name  P-Name  TS-Name          H-Value
------  ------  ---------------  --------
REMP    RP1     TS_RAHULDB_TS1   11
REMP    RP2     TS_RAHULDB_TS2   21
REMP    RP5     TS_RAHULDB_TS5   MAXVALUE
REMP    RP7     TS_RAHULDB_TS3   31
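The two listings above reflect commands that are not shown in the transcript; judging from the partition names and high values, RP3 was presumably renamed to RP7 and RP4 was then dropped:

```sql
-- Assumed commands (not in the transcript):
ALTER TABLE remp RENAME PARTITION rp3 TO rp7;
ALTER TABLE remp DROP PARTITION rp4;
```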
SQL> CREATE TABLE HEMP (
       EMPNO  NUMBER(4),
       ENAME  VARCHAR2(10),
       DEPTNO NUMBER(2))
     PARTITION BY HASH (deptno)
     (PARTITION HP1 TABLESPACE TS_RAHULDB_TS1,
      PARTITION HP2 TABLESPACE TS_RAHULDB_TS2,
      PARTITION HP3 TABLESPACE TS_RAHULDB_TS3,
      PARTITION HP4 TABLESPACE TS_RAHULDB_TS4);

Table created.
T-Name  P-Name  P-Type  TS-Name
------  ------  ------  ---------------
HEMP    HP1     HASH    TS_RAHULDB_TS1
HEMP    HP2     HASH    TS_RAHULDB_TS2
HEMP    HP3     HASH    TS_RAHULDB_TS3
HEMP    HP4     HASH    TS_RAHULDB_TS4
Please enter the TABLE NAME for tname and PARTITION NAME for Pname.

Enter value for tname: HEMP
Enter value for pname: HP2
old 1: select * from &tname partition(&Pname)
new 1: select * from HEMP partition(HP2)
NOTE: The following actions are not possible with HASH partitions: SPLITTING, MERGING and DROPPING. All the other actions work normally.

3. Now creating a LIST-PARTITIONED TABLE (SALES_BY_REGION)
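Although hash partitions cannot be split, merged or dropped, their number can still be changed; a sketch of the standard commands (not part of the original demo, and the tablespace name is an assumption):

```sql
-- Add one more hash partition; rows are redistributed automatically.
ALTER TABLE hemp ADD PARTITION hp5 TABLESPACE ts_rahuldb_ts5;

-- Remove one hash partition, redistributing its rows into the others.
ALTER TABLE hemp COALESCE PARTITION;
```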
SQL> create table SALES_BY_REGION
     (ITEM# number, QTY number, STORE_NAME varchar(30),
      STATE_CODE varchar(4), SALE_DATE date)
     partition by list (state_code)
     (partition region_east  values ('WB','AM','MZ','NGL')      tablespace TS_RAHULDB_TS1,
      partition region_west  values ('GT','MH','MP','RT')       tablespace TS_RAHULDB_TS2,
      partition region_nort  values ('JM','ND','UP','HP','PJB') tablespace TS_RAHULDB_TS3,
      partition region_south values ('KRL','TN','AP','KRT')     tablespace TS_RAHULDB_TS4);

Table created.

Now inserting records into sales_by_region.
SQL> INSERT INTO SALES_BY_REGION VALUES (10,1000,'BAZAR','AP',sysdate);
SQL> INSERT INTO SALES_BY_REGION VALUES (10,1000,'BAZAR','UP',sysdate);
SQL> INSERT INTO SALES_BY_REGION VALUES (10,1000,'BAZAR','MP',sysdate);
SQL> INSERT INTO SALES_BY_REGION VALUES (10,1000,'BAZAR','AP',sysdate);
SQL> INSERT INTO SALES_BY_REGION VALUES (10,1000,'BAZAR','WB',sysdate);
SQL> INSERT INTO SALES_BY_REGION VALUES (10,1000,'BAZAR','KRL',sysdate);
1 row created.
T-Name           P-Name        H-Value
---------------  ------------  -----------------------------
SALES_BY_REGION  REGION_EAST   'WB', 'AM', 'MZ', 'NGL'
SALES_BY_REGION  REGION_NORT   'JM', 'ND', 'UP', 'HP', 'PJB'
SALES_BY_REGION  REGION_SOUTH  'KRL', 'TN', 'AP', 'KRT'
SALES_BY_REGION  REGION_WEST   'GT', 'MH', 'MP', 'RT'

Now creating a LOCAL partitioned index on remp.

SQL> create index i_remp on remp(deptno) LOCAL;

Index created.

SQL> select a.index_name, a.partition_name, b.partitioning_type,
            a.tablespace_name, a.high_value, b.locality
     from   user_ind_partitions a, user_part_indexes b
     where  a.index_name = b.index_name
     order by a.index_name, a.partition_name;

I-Name  P-Name  P-Type  Locality  H-Value
------  ------  ------  --------  --------
I_REMP  RP1     RANGE   LOCAL     11
I_REMP  RP2     RANGE   LOCAL     21
I_REMP  RP5     RANGE   LOCAL     MAXVALUE
I_REMP  RP7     RANGE   LOCAL     31
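The listing below shows a GLOBAL partitioned index whose CREATE statement is missing from the transcript. It was presumably of this form (the indexed column is a pure assumption; only the partition names and the bounds 21 and MAXVALUE come from the listing):

```sql
-- Hypothetical reconstruction; the column deptno is an assumption.
CREATE INDEX r_igemp ON remp(deptno) GLOBAL
  PARTITION BY RANGE (deptno)
    (PARTITION irpg1 VALUES LESS THAN (21),
     PARTITION irpg2 VALUES LESS THAN (MAXVALUE));
```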
I-Name   P-Name  P-Type  Locality  H-Value
-------  ------  ------  --------  --------
R_IGEMP  IRPG1   RANGE   GLOBAL    21
R_IGEMP  IRPG2   RANGE   GLOBAL    MAXVALUE
DEMO ON DB LINKS AND SNAPSHOTS

First we have to configure the Net8 connection between client and server. Here we take TWO databases, configuring one as CLIENT and the other as SERVER.

Configuration   Database Name
-------------   -------------
SERVER          UDEMO
CLIENT          PAY
First we try to configure the listener.ora file at the SERVER database (UDEMO). In our user's home directory we have a pre-configured listener file; we view its contents:

$ less listener.ora

lis_udemo =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS_LIST =
        (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.0.15)(PORT = 9990))
      )
    )
  )

SID_LIST_lis_udemo =
  (SID_LIST =
    (SID_DESC =
      (SID_NAME = UDEMO)
      (ORACLE_HOME = /oraeng/app/oracle/product/9.0.1)
    )
  )

#####################################################################

Now start the listener (the start command itself is not shown in the transcript; presumably):

$ lsnrctl start lis_udemo
LSNRCTL for Linux: Version 9.0.1.0.0 - Production on 20-FEB-2002 18:05:19

Copyright (c) 1991, 2001, Oracle Corporation. All rights reserved.
Starting /oraeng/app/oracle/product/9.0.1/bin/tnslsnr: please wait...

TNSLSNR for Linux: Version 9.0.1.0.0 - Production
System parameter file is /oraeng/app/oracle/product/9.0.1/network/admin/listener.ora
Log messages written to /oraeng/app/oracle/product/9.0.1/network/log/lis_udemo.log
Listening on: (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=192.168.0.15)(PORT=9990)))

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=192.168.0.15)(PORT=9990)))
STATUS of the LISTENER
------------------------
Alias                     lis_udemo
Version                   TNSLSNR for Linux: Version 9.0.1.0.0 - Production
Start Date                20-FEB-2002 18:05:20
Uptime                    0 days 0 hr. 0 min. 0 sec
Trace Level               off
Security                  OFF
SNMP                      OFF
Listener Parameter File   /oraeng/app/oracle/product/9.0.1/network/admin/listener.ora
Listener Log File         /oraeng/app/oracle/product/9.0.1/network/log/lis_udemo.log
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=192.168.0.15)(PORT=9990)))
Services Summary...
Service "UDEMO" has 1 instance(s).
  Instance "UDEMO", status UNKNOWN, has 1 handler(s) for this service...
The command completed successfully

Now START the remote database [ UDEMO ].
SQL> startup
ORACLE instance shut down.
ORACLE instance started.

Total System Global Area   21783420 bytes
Fixed Size                   279420 bytes
Variable Size              12582912 bytes
Database Buffers            8388608 bytes
Redo Buffers                 532480 bytes
Database mounted.
Database opened.
Till now our SERVER side configuration is over. Now try to connect to the CLIENT database (CATRMAN) and configure the ALIAS pointing to this database.

Now we are at the CLIENT database (CATRMAN). At our CLIENT side, in our user's HOME directory, we have a file called TNSNAMES.ORA with all these settings:

My ALIAS - tns_udemo
sid      - UDEMO
port     - 9990 (on which my listener is listening)
host     - oradba15 (where my target database is)

We try to see the contents of the tnsnames.ora file.

$ less tnsnames.ora

tns_udemo =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.0.15)(PORT = 9990))
    )
    (CONNECT_DATA =
      (SID = UDEMO)
    )
  )

#####################################################################
Now try to APPEND these lines to the tnsnames.ora at /oraeng/app/oracle/product/9.0.1/network/admin $ cat tnsnames.ora >>/oraeng/app/oracle/product/9.0.1/network/admin/tnsnames.ora [ or ]
Appended. Now check whether the service is started at the server by pinging it with the tnsping command.

$ tnsping <alias_name in tnsnames.ora file>
$ tnsping tns_udemo
[ It's pinging fine. ]

Using that alias, now try to CONNECT to the SERVER database.
$ sqlplus scott/tiger@tns_udemo
Connected. Try to select the Database Name for which we had connected.
SQL> select * from v$database;
DBID        NAME    CREATED    LOG_MODE    CHECKPOINT_CHANGE#  OPEN_MODE
----------  ------  ---------  ----------  ------------------  ----------
3806677314  UDEMO   27-JAN-03  ARCHIVELOG  9049391             READ WRITE

[ Only the key columns of the heavily wrapped v$database output are shown. ]

From the SERVER we try to see the OS processes.

$ ps x
  PID TTY      STAT   TIME COMMAND
  844 pts/0    S      0:00 -bash
  882 pts/0    S      0:00 sh userid
  936 pts/0    S      0:00 sh LINKS/link_demo
  983 pts/0    S      0:00 /oraeng/app/oracle/product/9.0.1/bin/tnslsnr lis_udemo
 1000 ?        S      0:00 ora_pmon_UDEMO
 1002 ?        S      0:00 ora_dbw0_UDEMO
 1004 ?        S      0:00 ora_lgwr_UDEMO
 1006 ?        S      0:00 ora_ckpt_UDEMO
 1008 ?        S      0:00 ora_smon_UDEMO
 1010 ?        S      0:00 ora_reco_UDEMO
 1012 ?        S      0:00 ora_arc0_UDEMO
 1068 pts/0    S      0:00 sqlplus -s
 1070 pts/0    S      0:00 oracleUDEMO (LOCAL=NO)
 1092 pts/0           0:00 ps x
[ See here that the user's connected process shows LOCAL=NO ]

NOTE: When we use the ALIAS we are connecting to the TARGET database. Now we can perform at the REMOTE database all the OPERATIONS for which the USER (SCOTT) has PERMISSIONS (DDL & DML).

When you use MATERIALIZED VIEWS between TWO databases, those transactions are called DISTRIBUTED transactions. For distributed transactions we have to set some init.ora parameters in both databases:

SERVER                 CLIENT
------                 ------
global_names=true      global_names=true
db_domain=PAY          job_queue_processes=2
Now we try to create a DATABASE LINK at the CLIENT database (CATRMAN) from the CLIENT user; for that, first we have to start the database.
SQL> startup SQL> conn client/client SQL> create database link UDEMO.ORADBA15 connect to scott identified by tiger using 'tns_udemo';
Wait... Database link created. We try to select from the emp table of the remote database user using the database link.
SQL> select * from emp@udemo.oradba15;
EMPNO  ENAME       JOB        MGR   HIREDATE    SAL   COMM  DEPTNO
-----  ----------  ---------  ----  ---------  -----  ----  ------
 7369  SMITH       CLERK      7902  17-DEC-80    800           20
 7499  ALLEN       SALESMAN   7698  20-FEB-81   1600   300     30
 7521  WARD        SALESMAN   7698  22-FEB-81   1250   500     30
 7566  JONES       MANAGER    7839  02-APR-81   2975           20
 7654  MARTIN      SALESMAN   7698  28-SEP-81   1250  1400     30
 7698  BLAKE       MANAGER    7839  01-MAY-81   2850           30
 7782  CLARK       MANAGER    7839  09-JUN-81   2450           10
 7788  SCOTT       ANALYST    7566  09-DEC-82   3000           20
 7839  KING        PRESIDENT        17-NOV-81   5000           10
 7844  TURNER      SALESMAN   7698  08-SEP-81   1500     0     30
 7876  ADAMS       CLERK      7788  12-JAN-83   1100           20
 7900  JAMES       CLERK      7698  03-DEC-81    950           30
 7902  FORD        ANALYST    7566  03-DEC-81   3000           20
 7934  MILLER      CLERK      7782  23-JAN-82   1300           10

14 rows selected.

For creating a materialized view at the CLIENT with the REFRESH FAST option, we first need at the REMOTE database:
1. A materialized view log on the master table (EMP).
2. A primary key on the master table (EMP).
We try to create a materialized view log on the remote database EMP table, and check whether the table contains a primary key.

SQL> desc emp;

 Name          Null?     Type
 ------------- --------- ------------
 EMPNO         NOT NULL  NUMBER(4)
 ENAME                   VARCHAR2(10)
 JOB                     VARCHAR2(9)
 MGR                     NUMBER(4)
 HIREDATE                DATE
 SAL                     NUMBER(7,2)
 COMM                    NUMBER(7,2)
 DEPTNO                  NUMBER(2)

SQL> create materialized view log on emp;

Materialized view log created.

At the CLIENT database we try to create a MATERIALIZED VIEW (snapshot) on the REMOTE database master table emp using this database link (udemo.oradba15).
SQL> create materialized view mv_emp refresh fast with primary key start with sysdate next sysdate+1/(24*60*60) as select * from scott.emp@udemo.oradba15;
Wait... [ The NEXT clause sysdate+1/(24*60*60) schedules a refresh every second. ]
Materialized view created. First we select the info from MASTER emp table on REMOTE Database .
SQL> conn scott/tiger SQL> select * from emp;
[ The full 14-row emp listing, identical to the one shown earlier. ]

14 rows selected.
Now selecting from the materialized view at the CLIENT (presumably with select * from mv_emp;).

[ The same 14 rows as the master emp table appear in the materialized view. ]

14 rows selected.
Now we do some transactions and commit on the master table (EMP); the change should be reflected at the materialized view.
SQL> delete from emp where deptno=30; SQL> commit; SQL> select * from emp;
6 rows deleted.

Commit complete.

EMPNO  ENAME       JOB        MGR   HIREDATE    SAL   COMM  DEPTNO
-----  ----------  ---------  ----  ---------  -----  ----  ------
 7369  SMITH       CLERK      7902  17-DEC-80    800           20
 7566  JONES       MANAGER    7839  02-APR-81   2975           20
 7782  CLARK       MANAGER    7839  09-JUN-81   2450           10
 7788  SCOTT       ANALYST    7566  09-DEC-82   3000           20
 7839  KING        PRESIDENT        17-NOV-81   5000           10
 7876  ADAMS       CLERK      7788  12-JAN-83   1100           20
 7902  FORD        ANALYST    7566  03-DEC-81   3000           20
 7934  MILLER      CLERK      7782  23-JAN-82   1300           10

8 rows selected.
[ At the CLIENT, query the materialized view continuously until the change occurs. ]

EMPNO  ENAME       JOB        MGR   HIREDATE    SAL   COMM  DEPTNO
-----  ----------  ---------  ----  ---------  -----  ----  ------
 7369  SMITH       CLERK      7902  17-DEC-80    800           20
 7566  JONES       MANAGER    7839  02-APR-81   2975           20
 7782  CLARK       MANAGER    7839  09-JUN-81   2450           10
 7788  SCOTT       ANALYST    7566  09-DEC-82   3000           20
 7839  KING        PRESIDENT        17-NOV-81   5000           10
 7876  ADAMS       CLERK      7788  12-JAN-83   1100           20
 7902  FORD        ANALYST    7566  03-DEC-81   3000           20
 7934  MILLER      CLERK      7782  23-JAN-82   1300           10

8 rows selected.

[ Repeated queries return the same 8 rows: the fast refresh has propagated the delete of the DEPTNO=30 rows to the materialized view. ]
DEMO ON ORACLE NEW INDEXES

1. BITMAP INDEXES
2. DESCENDING INDEXES
3. FUNCTION-BASED INDEXES
4. REVERSE-KEY INDEXES
5. INDEX-ORGANIZED TABLES
DEMO ON BITMAP INDEXES BITMAP indexes can substantially improve performance of queries with the following characteristics:
- The WHERE clause contains multiple predicates on low- or medium-cardinality columns.
- The individual predicates on these low- or medium-cardinality columns select a large number of rows.
- BITMAP indexes have been created on some or all of these low- or medium-cardinality columns.
- The tables being queried contain many rows.
You can use multiple BITMAP indexes to evaluate the conditions on a single table. BITMAP indexes are thus highly advantageous for complex ad hoc queries that contain lengthy WHERE clauses. BITMAP indexes can also provide optimal performance for aggregate queries and for optimizing joins in star schemas.

Now selecting from table DEMO_EMP, which has no index.

SQL> set autotrace on explain
SQL> SELECT * FROM DEMO_EMP WHERE JOB='ANALYST' AND MGR=7682 AND SAL=3000;

EMPNO  ENAME   JOB      MGR   HIREDATE   SAL   COMM  DEPTNO
-----  ------  -------  ----  ---------  ----  ----  ------
  560  MILLER  ANALYST  7682  23-JAN-82  3000          10
  574  MILLER  ANALYST  7682  23-JAN-82  3000          10
Execution Plan
----------------------------------------------------------
0      SELECT STATEMENT Optimizer=CHOOSE
1    0   TABLE ACCESS (FULL) OF 'DEMO_EMP'

Now creating a BITMAP index on table DEMO_EMP.

SQL> CREATE BITMAP INDEX IND_BIT_EMP ON DEMO_EMP(JOB,MGR,SAL);

Wait...

Index created.
BITMAP INDEX CREATED

Now selecting from table DEMO_EMP with the BITMAP index in place.

SQL> analyze table demo_emp compute statistics;

Table analyzed.

SQL> set autotrace on explain
SQL> SELECT * FROM DEMO_EMP WHERE JOB='ANALYST' AND MGR=7682 AND SAL=3000;

EMPNO  ENAME   JOB      MGR   HIREDATE   SAL   COMM  DEPTNO
-----  ------  -------  ----  ---------  ----  ----  ------
  560  MILLER  ANALYST  7682  23-JAN-82  3000          10
  574  MILLER  ANALYST  7682  23-JAN-82  3000          10
Execution Plan
----------------------------------------------------------
0      SELECT STATEMENT Optimizer=CHOOSE (Cost=2 Card=5 Bytes=170)
1    0   TABLE ACCESS (BY INDEX ROWID) OF 'DEMO_EMP' (Cost=2 Card=5 Bytes=170)
2    1     BITMAP CONVERSION (TO ROWIDS)
3    2       BITMAP INDEX (SINGLE VALUE) OF 'IND_BIT_EMP'

DEMO ON DESCENDING INDEXES

Concatenated indexes can specify a different ordering for each column indexed. This feature is very useful in reducing the sorting work for queries whose ORDER BY clause needs different ordering sequences for different columns.

Now selecting from table DEMO_EMP, which has no such index.

SQL> set autotrace on explain
SQL> SELECT * FROM DEMO_EMP WHERE DEPTNO=10 AND SAL=3000 ORDER BY SAL DESC, ENAME ASC;

EMPNO  ENAME   JOB      MGR   HIREDATE   SAL   COMM  DEPTNO
-----  ------  -------  ----  ---------  ----  ----  ------
  560  MILLER  ANALYST  7682  23-JAN-82  3000          10
  574  MILLER  ANALYST  7682  23-JAN-82  3000          10
Execution Plan
----------------------------------------------------------
0      SELECT STATEMENT Optimizer=CHOOSE (Cost=14 Card=65 Bytes=2210)
1    0   SORT (ORDER BY) (Cost=14 Card=65 Bytes=2210)
2    1     TABLE ACCESS (FULL) OF 'DEMO_EMP' (Cost=11 Card=65 Bytes=2210)
Now creating a DESCENDING index on table DEMO_EMP.

SQL> CREATE INDEX IND_DESC_EMP ON DEMO_EMP (SAL DESC, ENAME ASC);

Wait...

Index created.

DESCENDING INDEX CREATED

Now selecting from table DEMO_EMP with the DESCENDING index.

SQL> set autotrace on explain
SQL> SELECT * FROM DEMO_EMP WHERE DEPTNO=10 AND SAL=3000 ORDER BY SAL DESC, ENAME ASC;
EMPNO ENAME JOB MGR HIREDATE SAL COMM DEPTNO -------- ---------- --------- -------- --------- ------- -------- -------560 MILLER ANALYST 7682 23-JAN-82 3000 10 574 MILLER ANALYST 7682 23-JAN-82 3000 10 Execution Plan --------------------------------------------------------0 SELECT STATEMENT Optimizer=CHOOSE (Cost=3 Card=1 Bytes=34) 1 0 TABLE ACCESS (BY INDEX ROWID) OF 'DEMO_EMP' (Cost=3 Card=1 Bytes=34) 2 1 INDEX (RANGE SCAN) OF 'IND_DESC_EMP' (NON-UNIQUE) (Cost=2 Card=1)
DEMO ON FUNCTION-BASED INDEXES

A FUNCTION-BASED index helps when the select list uses an expression that can be materialized in the index, or when the WHERE clause contains an expression such as (SAL+COMM).
Enabling FUNCTION-BASED indexes: set the following parameters in the init.ora:

query_rewrite_enabled=true
query_rewrite_integrity=trusted

OR use the following commands to enable FUNCTION-BASED indexes:
SQL> ALTER SESSION SET QUERY_REWRITE_ENABLED=TRUE; Session altered. SQL> ALTER SESSION SET QUERY_REWRITE_INTEGRITY=TRUSTED; Session altered.
Now selecting from table DEMO_EMP, which has no index on (SAL+COMM).

SQL> set autotrace on
SQL> SELECT * FROM DEMO_EMP WHERE DEPTNO=20 AND SAL+COMM > 3000;

EMPNO  ENAME   JOB    MGR   HIREDATE   SAL   COMM  DEPTNO
-----  ------  -----  ----  ---------  ----  ----  ------
    5  MARTIN  CLERK  7698  28-SEP-81  2500  1400    20
   19  MARTIN  CLERK  7698  28-SEP-81  2500  1400    20

Execution Plan
----------------------------------------------------------
0      SELECT STATEMENT Optimizer=CHOOSE (Cost=11 Card=42 Bytes=1428)
1    0   TABLE ACCESS (FULL) OF 'DEMO_EMP' (Cost=11 Card=42 Bytes=1428)

Now creating a FUNCTION-BASED index.

SQL> CREATE INDEX IND_FUNC_EMP ON DEMO_EMP (SAL+COMM);

Wait...

Index created.
FUNCTION-BASED INDEX CREATED

Now selecting from table DEMO_EMP with the FUNCTION-BASED index.
SQL> ALTER SESSION SET query_rewrite_enabled=true; Session altered. SQL> ALTER SESSION SET query_rewrite_integrity=trusted; Session altered. SQL> set autotrace on
SQL> SELECT * FROM DEMO_EMP WHERE DEPTNO=20 AND SAL+COMM > 3000;
EMPNO ENAME JOB MGR HIREDATE SAL COMM DEPTNO -------- ---------- --------- -------- --------- ------- -------- -------5 MARTIN CLERK 7698 28-SEP-81 2500 1400 20 19 MARTIN CLERK 7698 28-SEP-81 2500 1400 20 Execution Plan --------------------------------------------------------0 SELECT STATEMENT Optimizer=CHOOSE (Cost=10 Card=42 Bytes=1428) 1 0 TABLE ACCESS (BY INDEX ROWID) OF 'DEMO_EMP' (Cost=10 Card=42 Bytes=1428) 2 1 INDEX (RANGE SCAN) OF 'IND_FUNC_EMP' (NON-UNIQUE) (Cost=2 Card=23)
DEMO ON REVERSE-KEY INDEXES

A REVERSE-KEY index reverses the bytes of each indexed column (except the ROWID) while keeping the column order. By reversing the keys of the index, insertions become distributed across all leaf blocks of the index. It should be used where users insert ascending values and delete lower values from a table, thereby preventing 'skewed' indexes.

Limitations of REVERSE-KEY indexes:
1) A REVERSE-KEY index cannot be used if the query requires a range scan.
2) A BITMAPPED index cannot be reversed.
3) An INDEX-ORGANIZED table cannot be reversed.

Now selecting from table DEMO_EMP, which has no index on EMPNO.
SQL> set autotrace on explain
SQL> SELECT * FROM DEMO_EMP WHERE EMPNO=2550;

EMPNO  ENAME  JOB       MGR   HIREDATE   SAL   COMM  DEPTNO
-----  -----  --------  ----  ---------  ----  ----  ------
 2550  ALLEN  SALESMAN  7698  20-FEB-81  1600   300    30

Execution Plan
----------------------------------------------------------
0      SELECT STATEMENT Optimizer=CHOOSE (Cost=11 Card=1 Bytes=34)
1    0   TABLE ACCESS (FULL) OF 'DEMO_EMP' (Cost=11 Card=1 Bytes=34)

Now creating a REVERSE-KEY index on table DEMO_EMP.

SQL> CREATE INDEX IND_REV_EMP ON DEMO_EMP (EMPNO) REVERSE;

Wait...
Index created.
REVERSE-KEY INDEX CREATED

Now selecting from table DEMO_EMP with the REVERSE-KEY index.

SQL> set autotrace on explain
SQL> SELECT * FROM DEMO_EMP WHERE EMPNO=2550;

EMPNO  ENAME  JOB       MGR   HIREDATE   SAL   COMM  DEPTNO
-----  -----  --------  ----  ---------  ----  ----  ------
 2550  ALLEN  SALESMAN  7698  20-FEB-81  1600   300    30

Execution Plan
----------------------------------------------------------
0      SELECT STATEMENT Optimizer=CHOOSE (Cost=2 Card=1 Bytes=34)
1    0   TABLE ACCESS (BY INDEX ROWID) OF 'DEMO_EMP' (Cost=2 Card=1 Bytes=34)
2    1     INDEX (RANGE SCAN) OF 'IND_REV_EMP' (NON-UNIQUE) (Cost=1 Card=1)
DEMO ON INDEX-ORGANIZED TABLES INDEX-ORGANIZED TABLES store data in B*-tree structures that are similar to the indexes on regular tables. However, these tables minimize overall storage, because there is no need for a separate index structure that holds columns already defined in the table.
Used for content-based information retrieval applications, such as text, image, and sound storage. If index blocks contain long rows, full index scans may be slow; you can store non-key columns in an overflow area.

Restrictions:
- Must have a primary key
- Cannot use unique constraints
- Cannot contain LONG columns
- Distribution and replication are not supported
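The CREATE statement for the demo's index-organized table is missing from the transcript; a minimal sketch of the syntax (the table and column names here are assumptions, not the demo's actual object):

```sql
-- Hypothetical IOT: the data rows live inside the primary-key B*-tree itself.
CREATE TABLE iot_demo (
  empno  NUMBER(4) PRIMARY KEY,   -- an IOT must have a primary key
  ename  VARCHAR2(10),
  deptno NUMBER(2))
ORGANIZATION INDEX;
```

IOTs can be identified afterwards through the IOT_TYPE column of USER_TABLES.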
Wait..

Table created.

INDEX-ORGANIZED TABLE CREATED

Database views to check info about IOTs:
DEMO ON ONLINE REDOLOG MANAGEMENT

LOG SWITCHES: A log switch occurs when Oracle switches from one redo log group to another, i.e. when LGWR has filled one log file group. A log switch can also be forced by a DBA when the current redo log needs to be archived.

First we try to select the info about the existing REDOLOGS.

SQL> select group#, member from v$logfile;

GROUP#  MEMBER
------  ----------------------------------------
     2  /disk3/oradata/rahuldb/redolog2a.log
     2  /disk3/oradata/rahuldb/redolog2b.log
     1  /disk3/oradata/rahuldb/redolog1a.log
     1  /disk3/oradata/rahuldb/redolog1b.log
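The forced log switch mentioned above, and the ALTER that produces group 3 in the next listing, are not shown in the transcript; presumably:

```sql
-- Force a log switch (the DBA command mentioned in the text).
ALTER SYSTEM SWITCH LOGFILE;

-- Assumed command: add redo log group 3 with a single member
-- (the SIZE is an assumption, matching the 300K groups elsewhere).
ALTER DATABASE ADD LOGFILE GROUP 3
  ('/disk3/oradata/rahuldb/DEMO_redoa.log') SIZE 300K;
```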
Database altered.

Selecting the info about redologs.

SQL> select group#, member from v$logfile;

GROUP#  MEMBER
------  ----------------------------------------
     2  /disk3/oradata/rahuldb/redolog2a.log
     2  /disk3/oradata/rahuldb/redolog2b.log
     1  /disk3/oradata/rahuldb/redolog1a.log
     1  /disk3/oradata/rahuldb/redolog1b.log
     3  /disk3/oradata/rahuldb/DEMO_redoa.log
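The next listing shows a second member added to group 3; the command is again missing from the transcript and was presumably:

```sql
-- Assumed command: add a second member to redo log group 3.
ALTER DATABASE ADD LOGFILE MEMBER
  '/disk3/oradata/rahuldb/DEMO_redob.log' TO GROUP 3;
```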
Database altered.

Selecting the info using the command:

SQL> select group#, member from v$logfile;

GROUP#  MEMBER
------  ----------------------------------------
     2  /disk3/oradata/rahuldb/redolog2a.log
     2  /disk3/oradata/rahuldb/redolog2b.log
     1  /disk3/oradata/rahuldb/redolog1a.log
     1  /disk3/oradata/rahuldb/redolog1b.log
     3  /disk3/oradata/rahuldb/DEMO_redoa.log
     3  /disk3/oradata/rahuldb/DEMO_redob.log
6 rows selected.

[ Sorry! I mistakenly added the new member in disk2; it has to be created in disk3. ]
TO RENAME A LOGFILE:

A. Bring the database to mount state.

SQL> alter database close;

Database altered.

B. Copy or move the desired log files to the new destination or to the new name.

$ cp /disk3/oradata/rahuldb/DEMO_redob.log //oradata//DEMO_redob.log

Copied.

C. Use the alter database command to rename logically.

SQL> alter database rename file '/disk3/oradata/rahuldb/DEMO_redob.log' to
'//oradata//DEMO_redob.log'; Database altered. D. Finally Open the database and try to select the redologs info.
SQL> alter database open; SQL> select group#, member from v$logfile;
GROUP#  MEMBER
------  ----------------------------------------
     2  /disk3/oradata/rahuldb/redolog2a.log
     2  /disk3/oradata/rahuldb/redolog2b.log
     1  /disk3/oradata/rahuldb/redolog1a.log
     1  /disk3/oradata/rahuldb/redolog1b.log
     3  /disk3/oradata/rahuldb/DEMO_redoa.log
     3  //oradata//DEMO_redob.log
6 rows selected.

To drop a member of an existing group:

SQL> alter database drop logfile member '/disk3/oradata/rahuldb/DEMO_redo3b.log';

Database altered.

Selecting the info about online redologs.

SQL> select group#, member from v$logfile;

GROUP#  MEMBER
------  ----------------------------------------
     2  /disk3/oradata/rahuldb/redolog2a.log
     2  /disk3/oradata/rahuldb/redolog2b.log
     1  /disk3/oradata/rahuldb/redolog1a.log
     1  /disk3/oradata/rahuldb/redolog1b.log
     3  /disk3/oradata/rahuldb/DEMO_redoa.log
5 rows selected.

Now we try to drop Group 3 and select the info.
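The drop of group 3 announced here is not shown in the transcript; the standard command is:

```sql
ALTER DATABASE DROP LOGFILE GROUP 3;
```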
Database altered.

GROUP#  MEMBER
------  ----------------------------------------
     2  /disk3/oradata/rahuldb/redolog2a.log
     2  /disk3/oradata/rahuldb/redolog2b.log
     1  /disk3/oradata/rahuldb/redolog1a.log
     1  /disk3/oradata/rahuldb/redolog1b.log

DEMO ON MANAGING CONTROLFILE

The controlfile size depends on the values set for the MAXDATAFILES, MAXLOGFILES, MAXLOGMEMBERS, MAXLOGHISTORY and MAXINSTANCES clauses of the CREATE DATABASE statement. To check the number of files specified in the controlfile we have to take a trace.
SQL> alter database backup controlfile to trace;
[ It is going to dump a file in your specified path for user_dump_dest parameter ] Database altered. To see the user_dump_dest parameter value
SQL> show parameter user_dump_
NAME             TYPE    VALUE
---------------  ------  ------------------------------
user_dump_dest   string  /disk3/oradata/rahuldb/udump

Go to the destination and open the latest file.

$ ls -ltr
total 8
-rw-rw----  1 rahul  dba  4454 May  9 16:41 rahuldb_ora_31109.trc
Whenever your database reaches the maximum values of parameters like MAXDATAFILES, MAXLOGFILES, MAXLOGMEMBERS, MAXINSTANCES, etc., you have to take a trace of your controlfile, change the values of those parameters, and recreate your controlfile with the new settings. After modifying the file, we give it a readable name like: trace.sql
STARTUP NOMOUNT
CREATE CONTROLFILE REUSE DATABASE "RAHULDB" RESETLOGS ARCHIVELOG
-- SET STANDBY TO MAXIMIZE PERFORMANCE
    MAXLOGFILES 16
    MAXLOGMEMBERS 2
    MAXDATAFILES 30
    MAXINSTANCES 1
    MAXLOGHISTORY 899
LOGFILE
  GROUP 1 (
    '/disk3/oradata/rahuldb/redolog1a.log',
    '/disk3/oradata/rahuldb/redolog1b.log'
  ) SIZE 300K,
  GROUP 2 (
    '/disk3/oradata/rahuldb/redolog2a.log',
    '/disk3/oradata/rahuldb/redolog2b.log'
  ) SIZE 300K
-- STANDBY LOGFILE
DATAFILE

Now execute the file to create the new controlfile.

SQL> shutdown abort;
SQL> @trace.sql

[When you execute trace.sql it first NOMOUNTs the database, creates the controlfile, and then opens the database.]

To create an ADDITIONAL copy of the controlfile, first check the existing controlfiles and shut down the database to make another copy.

SQL> show parameter control_files

NAME            TYPE    VALUE
--------------  ------  -------------------------------------
control_files   string  /disk3/oradata/rahuldb/ora_control1,
                        /disk2/oradata/rahuldb/ora_control2

SQL> shutdown normal;   [ or shutdown immediate; ]

Database closed.
Database dismounted.
ORACLE instance shut down.

Now copy your existing controlfile to the destination where you want the new controlfile, under some other name, using an OS command.

Example:
$ cp /disk3/oradata/rahuldb/control1.ctl //oradata//control2.ctl
Page 243
Next, add the new path to the control_files parameter in your init<SID>.ora; it should look like this:

control_files = (/disk3/oradata/rahuldb/control1.ctl, //oradata//control3.ctl)

Now start your database and check the controlfiles.

SQL> startup;
SQL> show parameter control_files
NAME            TYPE    VALUE
--------------- ------- ------------------------------
control_files   string  /disk3/oradata/rahuldb/ora_control1,
                        /disk2/oradata/rahuldb/ora_control2,
                        //oradata//control3.ctl
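The point of multiplexing is that every listed copy must be byte-identical. A minimal shell sketch of the copy-and-verify step, with scratch paths standing in for the real /disk locations:

```shell
#!/bin/sh
# Sketch: multiplex a "controlfile" to a second location and verify the
# copies are byte-identical, as Oracle expects all control_files to be.
# Paths and contents are scratch stand-ins, not real Oracle files.
mkdir -p /tmp/cf_demo/disk1 /tmp/cf_demo/disk2
printf 'fake controlfile contents\n' > /tmp/cf_demo/disk1/control1.ctl
# Copy while the database is down, exactly as in the walkthrough above
cp /tmp/cf_demo/disk1/control1.ctl /tmp/cf_demo/disk2/control3.ctl
# cmp exits 0 only when the two files are identical
cmp -s /tmp/cf_demo/disk1/control1.ctl /tmp/cf_demo/disk2/control3.ctl \
  && echo "copies match"
```

If cmp reports a difference after a copy, the copy was made while the file was changing and should be redone.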
DEMO ON ARCHIVED REDOLOG FILES

Check your database mode:

SQL> select name, log_mode from v$database;
NAME                 LOG_MODE
-------------------- ------------
UDEMO                NOARCHIVELOG

To convert the database to ARCHIVELOG mode, enter the following parameters in the init<SID>.ora file:

LOG_ARCHIVE_START=TRUE
LOG_ARCHIVE_DEST=/disk3/oradata/rahuldb/ARCH/arch
LOG_ARCHIVE_FORMAT="%s.log"

Note: the lowercase "arch" above is used as the PREFIX for the archive log files that will be generated in the specified destination. Create the ARCH directory you specified:

$ mkdir /disk3/oradata/rahuldb/ARCH

Now mount the database and enable archiving:
SQL> startup mount; SQL> alter database archivelog; SQL> alter database open;
ORACLE instance started.

Total System Global Area
Fixed Size
Variable Size
Database Buffers
Redo Buffers
Database mounted.
Database altered.
Database altered.

Now we check the database mode:
SQL> archive log list;
[Please check that the database log mode now shows Archive Mode.]

Database log mode              Archive Mode
Automatic archival             Enabled
Archive destination            /disk3/oradata/rahuldb/arch
Oldest online log sequence     17
Next log sequence to archive   18
Current log sequence           18
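The file names that appear in the archive destination are the LOG_ARCHIVE_DEST prefix plus the LOG_ARCHIVE_FORMAT string, with %s replaced by the log sequence number. A small shell sketch of that naming, using made-up sequence numbers that mirror the listing above:

```shell
#!/bin/sh
# Sketch: how LOG_ARCHIVE_DEST=.../ARCH/arch plus LOG_ARCHIVE_FORMAT="%s.log"
# yields archive file names per log sequence. Scratch directory, fake files.
DEST=/tmp/arch_demo/ARCH
PREFIX=arch
mkdir -p "$DEST"
for seq in 17 18; do
    # %s in the format string is replaced by the log sequence number
    touch "$DEST/${PREFIX}${seq}.log"
done
ls "$DEST"
```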
DEMO ON PHYSICAL COLD BACKUP

For a COLD BACKUP the database must be down. We first query the file locations from the running database, then shut it down and back up those files. So we start the database; for a cold physical backup we need the locations of the DATAFILES, TEMPFILES, CONTROLFILES and LOGFILES of the database:

SQL> select name from v$datafile;
SQL> select name from v$tempfile;
SQL> select name from v$controlfile;
SQL> select member from v$logfile;

DATAFILE NAMES
--------------
/disk3/oradata/FTDEMO/system01.dbf
/disk3/oradata/FTDEMO/undo.dbf
/disk3/oradata/FTDEMO/rcvcat01.dbf
/disk3/oradata/FTDEMO/bkup01.dbf
/disk3/oradata/FTDEMO/bkup02.dbf
/disk3/oradata/FTDEMO/bkup03.dbf

TEMPORARY FILES
---------------
/disk3/oradata/FTDEMO/temp01.dbf

CONTROL FILES
---------------
/disk3/oradata/FTDEMO/control1.ctl

REDOLOG FILES
-------------
/disk3/oradata/FTDEMO/redo1b.log
/disk3/oradata/FTDEMO/redo2b.log

We have to create a DIRECTORY for the backup [ cbkup ]:

$ cd /disk3/oradata/FTDEMO/
$ mkdir cbkup

Directory created. Now we will write a script to take the backup. The following script concatenates the OS command 'cp' with the file names retrieved from the database views and the backup destination.

$ cat cold.sql | less
set feed off
set echo off
set head off
set pause off
spool /tmp/rahul/cold.sh
select 'cp '||name||' /disk3/oradata/FTDEMO/cbkup' from v$datafile;
select 'cp '||name||' /disk3/oradata/FTDEMO/cbkup' from v$tempfile;
select 'cp '||name||' /disk3/oradata/FTDEMO/cbkup' from v$controlfile;
select 'cp '||member||' /disk3/oradata/FTDEMO/cbkup' from v$logfile;
spool off
shutdown immediate;
exit

Now connect to the database and execute the script, which SPOOLs the output to a FILE and finally SHUTs DOWN the DATABASE.

$ sqlplus /as sysdba
SQL> @cold.sql

cp /disk3/oradata/FTDEMO/system01.dbf /disk3/oradata/FTDEMO/cbkup
cp /disk3/oradata/FTDEMO/undo.dbf /disk3/oradata/FTDEMO/cbkup
cp /disk3/oradata/FTDEMO/rcvcat01.dbf /disk3/oradata/FTDEMO/cbkup
cp /disk3/oradata/FTDEMO/bkup01.dbf /disk3/oradata/FTDEMO/cbkup
cp /disk3/oradata/FTDEMO/bkup02.dbf /disk3/oradata/FTDEMO/cbkup
cp /disk3/oradata/FTDEMO/bkup03.dbf /disk3/oradata/FTDEMO/cbkup
cp /disk3/oradata/FTDEMO/temp01.dbf /disk3/oradata/FTDEMO/cbkup
cp /disk3/oradata/FTDEMO/control1.ctl /disk3/oradata/FTDEMO/cbkup
cp /disk3/oradata/FTDEMO/redo1b.log /disk3/oradata/FTDEMO/cbkup
cp /disk3/oradata/FTDEMO/redo2b.log /disk3/oradata/FTDEMO/cbkup
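The spool trick above is just string concatenation: each row of a file-list view becomes one `cp` line in a generated script. The same idea in pure shell, with scratch files standing in for datafiles and the file list replacing the v$ views:

```shell
#!/bin/sh
# Sketch of what the spooled cold.sql produces: one "cp <file> <backup dir>"
# line per database file. Scratch files stand in for the real datafiles.
SRC=/tmp/cold_demo
BKUP=$SRC/cbkup
mkdir -p "$BKUP"
for f in system01.dbf undo.dbf control1.ctl; do
    printf 'data\n' > "$SRC/$f"
done
# Build the backup script the same way the spool does: cp || name || dest
: > "$SRC/cold.sh"
for f in "$SRC"/*.dbf "$SRC"/*.ctl; do
    echo "cp $f $BKUP" >> "$SRC/cold.sh"
done
sh "$SRC/cold.sh"   # in the real demo this runs only after shutdown
ls "$BKUP"
```

Generating the script first and running it after shutdown is what makes this a consistent cold backup: nothing is copied while the instance can still write.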
Database closed.
Database dismounted.
ORACLE instance shut down.

Check whether your database is down:

$ ps x
  PID TTY      STAT   TIME COMMAND
 5824 pts/18   S      0:00 -bash
 5933 pts/18   S      0:00 -bash
21772 pts/18   S      0:00 grep bash
Then execute the spooled shell file ( cold.sh ) for performing backup. $ sh cold.sh cp /disk3/oradata/FTDEMO/system01.dbf /disk3/oradata/FTDEMO/cbkup cp /disk3/oradata/FTDEMO/undo.dbf /disk3/oradata/FTDEMO/cbkup cp /disk3/oradata/FTDEMO/rcvcat01.dbf /disk3/oradata/FTDEMO/cbkup cp /disk3/oradata/FTDEMO/bkup01.dbf /disk3/oradata/FTDEMO/cbkup cp /disk3/oradata/FTDEMO/bkup02.dbf /disk3/oradata/FTDEMO/cbkup cp /disk3/oradata/FTDEMO/bkup03.dbf /disk3/oradata/FTDEMO/cbkup cp /disk3/oradata/FTDEMO/temp01.dbf /disk3/oradata/FTDEMO/cbkup cp /disk3/oradata/FTDEMO/control1.ctl /disk3/oradata/FTDEMO/cbkup cp /disk3/oradata/FTDEMO/redo1b.log /disk3/oradata/FTDEMO/cbkup
cp /disk3/oradata/FTDEMO/redo2b.log /disk3/oradata/FTDEMO/cbkup

Let us check whether the files were backed up:

$ cd /disk3/oradata/FTDEMO/cbkup
$ ls -l
-rwxrwxrwx 1 rahul dba 10487808 May 10 17:13 bkup01.dbf
-rwxrwxrwx 1 rahul dba 10487808 May 10 17:13 bkup02.dbf
-rwxrwxrwx 1 rahul dba 10487808 May 10 17:13 bkup03.dbf
-rwxrwxrwx 1 rahul dba   808960 May 10 17:13 control1.ctl
-rwxrwxrwx 1 rahul dba 20973568 May 10 17:13 rcvcat01.dbf
-rwxrwxrwx 1 rahul dba   307712 May 10 17:13 redo1b.log
-rwxrwxrwx 1 rahul dba   307712 May 10 17:13 redo2b.log
-rwxrwxrwx 1 rahul dba 83888128 May 10 17:13 system01.dbf
-rwxrwxrwx 1 rahul dba 10487808 May 10 17:13 temp01.dbf
-rwxrwxrwx 1 rahul dba 41945088 May 10 17:13 undo.dbf
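An `ls -l` only shows that files exist; a stricter check compares each backup copy byte-for-byte against its source. A minimal sketch of that verification loop, with scratch paths standing in for the real FTDEMO locations:

```shell
#!/bin/sh
# Sketch: confirm a cold backup is complete by comparing every source file
# against its copy in the backup directory. Names/paths are stand-ins.
SRC=/tmp/verify_demo
BKUP=$SRC/cbkup
mkdir -p "$BKUP"
for f in redo1b.log temp01.dbf; do
    printf '%s\n' "$f" > "$SRC/$f"
    cp "$SRC/$f" "$BKUP/$f"
done
status=ok
for f in "$SRC"/*.log "$SRC"/*.dbf; do
    # cmp -s is silent and exits nonzero on any byte difference
    cmp -s "$f" "$BKUP/$(basename "$f")" || status=mismatch
done
echo "$status"
```

Because the database is down during a cold backup, every pair should compare equal; any mismatch means a copy failed or was truncated.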
ORACLE instance shut down.
ORACLE instance started.

Total System Global Area  17589096 bytes
Fixed Size                  279400 bytes
We have just taken a cold backup. Before going further, we will query more info about users, tablespaces, and datafiles.
SQL> select username, default_tablespace, temporary_tablespace from dba_users;
File-Name                            Ts-Name   Bytes
------------------------------------ --------- ----------
/disk3/oradata/FTDEMO/system01.dbf   SYSTEM    83886080
/disk3/oradata/FTDEMO/undo.dbf       UNDOTBS   41943040
/disk3/oradata/FTDEMO/rcvcat01.dbf   RCVCAT    20971520
/disk3/oradata/FTDEMO/bkup01.dbf     BKUP      10485760
/disk3/oradata/FTDEMO/bkup02.dbf     BKUP      10485760
/disk3/oradata/FTDEMO/bkup03.dbf     BKUP      10485760
Now we try to do some DML operations to generate some archive log files.
SQL> select * from v$log;
SQL> conn U_SCOTT/U_SCOTT
SQL> create table test (a number);
SQL> insert into test select rownum from dict;
1114 rows created.
Commit complete.
Commit complete.

  COUNT(*)
----------
     38990

Selecting the log info:
SQL> select * from v$log;
GROUP# THREAD# SEQUENCE#  BYTES  MEMBERS ARC STATUS    FIRST_CHANGE# FIRST_TIM
------ ------- ---------- ------ ------- --- --------- ------------- ---------
     1       1        719 307200       1 NO  CURRENT           61472 10-MAY-05
     2       1        718 307200
6 rows selected.

File-Name
------------------------------------
/disk3/oradata/FTDEMO/bkup01.dbf
/disk3/oradata/FTDEMO/bkup02.dbf
/disk3/oradata/FTDEMO/bkup03.dbf

Now remove the datafile:

SQL> !rm /disk3/oradata/FTDEMO/bkup01.dbf

Datafile removed. Let us try to insert data into table test:
SQL> insert into test select * from test where rownum<5000;

Inserting rows. Wait....
4999 rows created.

We then see an ERROR message that the file cannot be read or written because it is lost. Now we have to RESTORE the lost datafile from the backup we took and perform the recovery by applying the ARCHIVE LOG files to bring it up to the last COMMITTED record. To RESTORE & RECOVER the lost datafile we follow these steps:

First we take the datafile offline.
SQL>alter database datafile '/disk3/oradata/FTDEMO/bkup01.dbf' offline;
Database altered.

Then restore only the LOST datafile from the BACKUP.
SQL> !cp /disk3/oradata/FTDEMO/cbkup/bkup01.dbf /disk3/oradata/FTDEMO/
File restored.

Now perform RECOVERY on the file by applying the ARCHIVE LOG files.
SQL> recover datafile '/disk3/oradata/FTDEMO/bkup01.dbf';
ORA-00279: change 61195 generated at 05/10/2005 17:13:14 needed for thread 1 ORA-00289: suggestion : /disk3/oradata/FTDEMO/ARCH/717.arc ORA-00280: change 61195 for thread 1 is in sequence #717
ORA-00279: change 61430 generated at 05/10/2005 17:19:09 needed for thread 1
ORA-00289: suggestion : /disk3/oradata/FTDEMO/ARCH/718.arc
ORA-00280: change 61430 for thread 1 is in sequence #718
ORA-00278: log file '/disk3/oradata/FTDEMO/ARCH/717.arc' no longer needed for this recovery

Log applied.
Media recovery complete.

At last, bring the datafile ONLINE and select the data from the table (test):
SQL>alter database datafile '/disk3/oradata/FTDEMO/bkup01.dbf' online; SQL>select count(*) from test;
Database altered.

  COUNT(*)
----------
     43989

Let us see a TABLESPACE RECOVERY scenario. Selecting datafile and tablespace info:
SQL> select file_name, tablespace_name, bytes from dba_data_files;
/disk3/oradata/FTDEMO/bkup01.dbf     BKUP      10485760
/disk3/oradata/FTDEMO/bkup02.dbf     BKUP      10485760
/disk3/oradata/FTDEMO/bkup03.dbf     BKUP      10485760
/disk3/oradata/FTDEMO/rcvcat01.dbf   RCVCAT    20971520
/disk3/oradata/FTDEMO/system01.dbf   SYSTEM    83886080
/disk3/oradata/FTDEMO/undo.dbf       UNDOTBS   41943040

6 rows selected.
Now we select the current log sequence number and do some DML operations to generate some archive log files.

SQL> select * from v$log;
SQL> conn U_SCOTT/U_SCOTT
SQL> create table test (a number);
SQL> insert into test select rownum from dict;

GROUP# THREAD# SEQUENCE#  BYTES  MEMBERS ARC STATUS    FIRST_CHANGE# FIRST_TIM
------ ------- ---------- ------ ------- --- --------- ------------- ---------
     1       1        719 307200       1 YES ACTIVE            61472 10-MAY-05
     2       1        720 307200       1 NO  CURRENT           61593 10-MAY-05
Table Dropped
Table created.
1114 rows created.
Commit complete.
Commit complete.

  COUNT(*)
----------
     38990

Selecting the log info:

SQL> select * from v$log;

GROUP# THREAD# SEQUENCE#  BYTES  MEMBERS ARC STATUS    FIRST_CHANGE# FIRST_TIM
------ ------- ---------- ------ ------- --- --------- ------------- ---------
     1       1        721 307200       1 YES ACTIVE            61804 10-MAY-05
     2       1        722 307200       1 NO  CURRENT           61849 10-MAY-05
Now we query info about the TEST table.

SQL> select segment_name, tablespace_name from user_segments;
SQL> select file_name, tablespace_name from dba_data_files where tablespace_name = 'BKUP';

Segment-Name Ts-Name
------------ ---------------
EMP          BKUP
Now remove all the datafiles belonging to tablespace BKUP:

SQL> !rm /disk3/oradata/FTDEMO/bk*.dbf

Datafiles removed. Let us try to insert data into table test:

SQL> insert into test select * from test where rownum<5000;

Inserting rows. Wait....

We again see an ERROR message that the file cannot be read or written because it is lost. We have to RESTORE the lost datafiles from the backup and perform the recovery by applying the ARCHIVE LOG files up to the last COMMITTED record. Let us look for the datafiles at the operating-system level:

$ ls -l *.dbf
total 156040
drwxrwxrwx 2 wstpack dba     4096 May 10 17:30 ARCH
drwxrwxrwx 2 wstpack dba    16384 May 10 17:14 bdump
drwxrwxrwx 2 wstpack dba     4096 Feb 13  2003 BKUP
-rwxrwxrwx 1 rahul   dba   808960 May 10 17:05 c2.ctl
drwxrwxrwx 2 rahul   dba     4096 May 10 17:13 cbkup
drwxrwxrwx 2 wstpack dba     4096 Feb  7  2003 cdump
-rwxrwxrwx 1 rahul   dba   808960 May 10 17:34 control1.ctl
-rwxrwxrwx 1 rahul   dba      439 May 10 17:05 FTDEMO.files
-rwxrwxrwx 1 rahul   dba 20973568 May 10 17:30 rcvcat01.dbf
-rwxrwxrwx 1 rahul   dba   307712 May 10 17:30 redo1b.log
-rwxrwxrwx 1 rahul   dba   307712 May 10 17:33 redo2b.log
-rwxrwxrwx 1 rahul   dba 83888128 May 10 17:30 system01.dbf
-rwxrwxrwx 1 rahul   dba 10487808 May 10 17:05 temp01.dbf
drwxrwxrwx 2 wstpack dba    12288 May 10 17:27 udump
-rwxrwxrwx 1 rahul   dba 41945088 May 10 17:30 undo.dbf
Now, to RESTORE all the lost datafiles and then apply the archive log files, we go through these steps:

First we take the tablespace BKUP OFFLINE.

SQL> alter tablespace BKUP offline immediate;

Tablespace altered.
Now restore all the datafiles related to the tablespace from the backup.

SQL> !cp /disk3/oradata/FTDEMO/cbkup/bk*.dbf /disk3/oradata/FTDEMO/

Files restored. Now perform RECOVERY for that TABLESPACE by applying the archive log files.

SQL> recover tablespace BKUP;
At last, bring the tablespace BKUP ONLINE and select the data from the table (test):

SQL> alter tablespace bkup online;
SQL> select count(*) from test;

Tablespace altered.

  COUNT(*)
----------
     38990

Let us see a FULL DATABASE RECOVERY scenario. Selecting datafile and tablespace info:

SQL> select file_name, tablespace_name, bytes from dba_data_files;

File-Name                            Ts-Name   Bytes
------------------------------------ --------- ----------
/disk3/oradata/FTDEMO/bkup01.dbf     BKUP      10485760
/disk3/oradata/FTDEMO/bkup02.dbf     BKUP      10485760
/disk3/oradata/FTDEMO/bkup03.dbf     BKUP      10485760
/disk3/oradata/FTDEMO/rcvcat01.dbf   RCVCAT    20971520
/disk3/oradata/FTDEMO/system01.dbf   SYSTEM    83886080
/disk3/oradata/FTDEMO/undo.dbf       UNDOTBS   41943040

6 rows selected.
Now we try to do some DML operations to generate some archive log files.

SQL> select * from v$log;
SQL> conn U_SCOTT/U_SCOTT
SQL> create table test (a number);
SQL> insert into test select rownum from dict;

GROUP# THREAD# SEQUENCE#  BYTES  MEMBERS ARC STATUS    FIRST_CHANGE# FIRST_TIM
------ ------- ---------- ------ ------- --- --------- ------------- ---------
     1       1        721 307200       1 YES ACTIVE            61804 10-MAY-05
     2       1        722 307200       1 NO  CURRENT           61849 10-MAY-05
Selecting the log info:

SQL> select * from v$log;

GROUP# THREAD# SEQUENCE#  BYTES  MEMBERS ARC STATUS    FIRST_CHANGE# FIRST_TIM
------ ------- ---------- ------ ------- --- --------- ------------- ---------
     1       1        725 307200       1 NO  CURRENT           62251 10-MAY-05
     2       1        724 307200       1 YES ACTIVE            62207 10-MAY-05
Now we query info about the TEST table.

SQL> select segment_name, tablespace_name from user_segments;
SQL> select file_name, tablespace_name from dba_data_files where tablespace_name = 'BKUP';

Segment-Name Ts-Name
------------ ---------------
EMP          BKUP
DEPT         BKUP
BONUS        BKUP
SALGRADE     BKUP
DUMMY        BKUP
TEST         BKUP

6 rows selected.

File-Name
------------------------------------
/disk3/oradata/FTDEMO/bkup01.dbf
/disk3/oradata/FTDEMO/bkup02.dbf
/disk3/oradata/FTDEMO/bkup03.dbf

Now we remove all three types of files of this database (controlfiles, redo log files, datafiles):

$ rm /disk3/oradata/FTDEMO/*.ctl
$ rm /disk3/oradata/FTDEMO/*.log
$ rm /disk3/oradata/FTDEMO/*.dbf

All files removed. Let us try to insert data into the table test:

SQL> insert into test select * from test where rownum<5000;

Inserting rows. Waiting...

Once more we get the error message that the files are missing. Let us look at the OS level:

$ cd /disk3/oradata/FTDEMO
$ ls -l *.dbf *.ctl *.log
total 48
drwxrwxrwx 2 wstpack dba  4096 May 10 17:41 ARCH
drwxrwxrwx 2 wstpack dba 16384 May 11 12:45 bdump
drwxrwxrwx 2 wstpack  2003  BKUP
drwxrwxrwx 2 rahul   12:44  cbkup
drwxrwxrwx 2 wstpack  2003  cdump
-rwxrwxrwx 1 rahul   12:41  FTDEMO.files
drwxrwxrwx 2 wstpack 12:50  udump
Now we will try to perform a FULL DATABASE RECOVERY. First we have to shut down the current database.

SQL> connect /as sysdba
SQL> shutdown abort

ORACLE instance shut down.

We go to the BACKUP directory and RESTORE all the files to their original locations from the BACKUP.

$ cd /disk3/oradata/FTDEMO/cbkup
$ cp *.* ..

Restore is going on... All files restored. To RECOVER the database, first bring it to the mount state.

$ sqlplus "/as sysdba"
SQL> startup mount

Connected to an idle instance.
ORACLE instance started.

Total System Global Area  17589096 bytes
Fixed Size                  279400 bytes
Now RECOVER the database:

SQL> alter database recover automatic using backup controlfile until cancel;

[It's applying the archive log files. Wait...]

Now cancel the RECOVERY operation and OPEN the database with RESETLOGS:

SQL> recover cancel;
SQL> alter database open resetlogs;
Now let us connect as the U_SCOTT user and check the table TEST.

SQL> conn U_SCOTT/U_SCOTT
SQL> select count(*) from test;

  COUNT(*)
----------
     35648

NOTE: Once the database is OPENED with RESETLOGS, your old backups are no longer usable. Take a FRESH BACKUP, either COLD or HOT, using the backup script, for any future problems.
DEMO ON PHYSICAL HOT BACKUP

For a HOT BACKUP the database does not need to be shut down; we can perform this operation with the database open. First we start the database. For a physical HOT backup we again need the locations of the DATAFILES, TEMPFILES, CONTROLFILES and LOGFILES of the database.
DATAFILE NAMES
--------------
/disk3/oradata/FTDEMO/system01.dbf
/disk3/oradata/FTDEMO/undo.dbf
/disk3/oradata/FTDEMO/rcvcat01.dbf
/disk3/oradata/FTDEMO/bkup01.dbf
/disk3/oradata/FTDEMO/bkup02.dbf
/disk3/oradata/FTDEMO/bkup03.dbf

TEMPORARY FILES
---------------
/disk3/oradata/FTDEMO/temp01.dbf

CONTROL FILES
---------------
/disk3/oradata/FTDEMO/control1.ctl

REDOLOG FILES
-------------
/disk3/oradata/FTDEMO/redo1b.log
/disk3/oradata/FTDEMO/redo2b.log

We have to create a DIRECTORY for the backup [ hbkup ]:

$ cd /disk3/oradata/FTDEMO/
$ mkdir hbkup

Directory created.

1. Now we will write a script to take the hot backup.
The following script puts each tablespace into a special mode (BEGIN BACKUP), copies all datafiles of that tablespace to the backup destination, and finally puts the tablespace into END BACKUP mode. It also creates a backup controlfile in the given destination. At the end it switches the logfile so that at least one archive log is generated after the backup. Let us look at the script before executing it from the SQL prompt.

$ cat hot.sql
set head off
set echo off
set feed off
spool hot_backup.sql
select 'spool /tmp/rahul/hot.log' from dual;
select 'alter tablespace '||tablespace_name||' begin backup;'||chr(10)||
       '!cp '||file_name||' /disk3/oradata/FTDEMO/hbkup'||chr(10)||
       'alter tablespace '||tablespace_name||' end backup;'
from dba_data_files;
select 'alter database backup controlfile to '||chr(10)||
       '''/disk3/oradata/FTDEMO/hbkup/control.bkp'''||' reuse;' from dual;
select 'alter system switch logfile;' from dual;
select 'exit' from dual;
spool off
exit

First we execute hot.sql at the SQL prompt, which generates the hot_backup.sql file:

SQL> @hot.sql

spool /tmp/rahul/hot.log
alter tablespace SYSTEM begin backup;
!cp /disk3/oradata/FTDEMO/system01.dbf /disk3/oradata/FTDEMO/hbkup
alter tablespace SYSTEM end backup;
alter tablespace UNDOTBS begin backup;
!cp /disk3/oradata/FTDEMO/undo.dbf /disk3/oradata/FTDEMO/hbkup
alter tablespace UNDOTBS end backup;
alter tablespace RCVCAT begin backup;
!cp /disk3/oradata/FTDEMO/rcvcat01.dbf /disk3/oradata/FTDEMO/hbkup
alter tablespace RCVCAT end backup;
alter tablespace BKUP begin backup;
!cp /disk3/oradata/FTDEMO/bkup01.dbf /disk3/oradata/FTDEMO/hbkup
alter tablespace BKUP end backup;
alter tablespace BKUP begin backup;
!cp /disk3/oradata/FTDEMO/bkup02.dbf /disk3/oradata/FTDEMO/hbkup
alter tablespace BKUP end backup;
alter tablespace BKUP begin backup;
!cp /disk3/oradata/FTDEMO/bkup03.dbf /disk3/oradata/FTDEMO/hbkup
alter tablespace BKUP end backup;
alter database backup controlfile to
'/disk3/oradata/FTDEMO/hbkup/control.bkp' reuse;
alter system switch logfile; exit Now we will try to see the contents in hot_backup.sql file. $ cat hot_backup.sql spool /tmp/rahul/hot.log alter tablespace SYSTEM begin backup; !cp /disk3/oradata/FTDEMO/system01.dbf /disk3/oradata/FTDEMO/hbkup alter tablespace SYSTEM end backup; alter tablespace UNDOTBS begin backup; !cp /disk3/oradata/FTDEMO/undo.dbf /disk3/oradata/FTDEMO/hbkup alter tablespace UNDOTBS end backup; alter tablespace RCVCAT begin backup; !cp /disk3/oradata/FTDEMO/rcvcat01.dbf /disk3/oradata/FTDEMO/hbkup alter tablespace RCVCAT end backup; alter tablespace BKUP begin backup; !cp /disk3/oradata/FTDEMO/bkup01.dbf /disk3/oradata/FTDEMO/hbkup alter tablespace BKUP end backup; alter tablespace BKUP begin backup; !cp /disk3/oradata/FTDEMO/bkup02.dbf /disk3/oradata/FTDEMO/hbkup alter tablespace BKUP end backup; alter tablespace BKUP begin backup; !cp /disk3/oradata/FTDEMO/bkup03.dbf /disk3/oradata/FTDEMO/hbkup alter tablespace BKUP end backup;
alter database backup controlfile to
'/disk3/oradata/FTDEMO/hbkup/control.bkp' reuse;
alter system switch logfile;
exit

Now execute hot_backup.sql at the SQL prompt to perform the actual backup:

SQL> @hot_backup.sql

Tablespace altered.
Database altered.
System altered.

Let us check whether the backup completed successfully:

$ cd /disk3/oradata/FTDEMO/hbkup
$ ls -l
total 175100
-rwxrwxrwx 1 rahul dba 10487808 May 12 14:58 bkup01.dbf
-rwxrwxrwx 1 rahul dba 10487808 May 12 14:58 bkup02.dbf
-rwxrwxrwx 1 rahul dba 10487808 May 12 14:58 bkup03.dbf
-rw-rw---- 1 rahul dba   808960 May 12 14:58 control.bkp
-rwxrwxrwx 1 rahul dba 20973568 May 12 14:58 rcvcat01.dbf
-rwxrwxrwx 1 rahul dba 83888128 May 12 14:58 system01.dbf
-rwxrwxrwx 1 rahul dba 41945088 May 12 14:58 undo.dbf
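The generated hot_backup.sql is just a pattern: wrap each datafile copy in BEGIN BACKUP / END BACKUP for its tablespace. The same generation logic in a small shell sketch, where a here-document of tablespace/datafile pairs stands in for the rows of dba_data_files:

```shell
#!/bin/sh
# Sketch of the hot_backup.sql generation: one begin-backup / copy /
# end-backup triple per datafile. The pairs below are stand-ins for
# rows returned by dba_data_files.
OUT=/tmp/hot_demo
mkdir -p "$OUT"
: > "$OUT/hot_backup.sql"
while read ts file; do
    {
        echo "alter tablespace $ts begin backup;"
        echo "!cp $file /disk3/oradata/FTDEMO/hbkup"
        echo "alter tablespace $ts end backup;"
    } >> "$OUT/hot_backup.sql"
done <<'EOF'
SYSTEM /disk3/oradata/FTDEMO/system01.dbf
BKUP /disk3/oradata/FTDEMO/bkup01.dbf
EOF
cat "$OUT/hot_backup.sql"
```

Keeping each tablespace in backup mode only for the duration of its own copy (rather than one global BEGIN BACKUP) limits the extra redo generated while backup mode is active.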
We have just taken a hot backup. Before going further we will query more info about users, tablespaces, and datafiles.

SQL> select username, default_tablespace, temporary_tablespace from dba_users;

User       Def-Tspace      Temp-Tspace
---------- --------------- ---------------
SYS        SYSTEM          TEMP
SYSTEM     SYSTEM          TEMP
OUTLN      SYSTEM          TEMP
DBSNMP     SYSTEM          TEMP
U_SCOTT    BKUP            TEMP
RMAN       RCVCAT          TEMP
6 rows selected.

Selecting datafile and tablespace info:

SQL> select file_name, tablespace_name, bytes from dba_data_files;

File-Name                            Ts-Name   Bytes
------------------------------------ --------- ----------
/disk3/oradata/FTDEMO/system01.dbf   SYSTEM    83886080
/disk3/oradata/FTDEMO/undo.dbf       UNDOTBS   41943040
/disk3/oradata/FTDEMO/rcvcat01.dbf   RCVCAT    20971520
/disk3/oradata/FTDEMO/bkup01.dbf     BKUP      10485760
/disk3/oradata/FTDEMO/bkup02.dbf     BKUP      10485760
/disk3/oradata/FTDEMO/bkup03.dbf     BKUP      10485760

6 rows selected.
Now we try to do some DML operations to generate some archive log files.

SQL> select * from v$log;
SQL> conn U_SCOTT/U_SCOTT
SQL> create table test (a number);
SQL> insert into test select rownum from dict;

GROUP# THREAD# SEQUENCE#  BYTES  MEMBERS ARC STATUS    FIRST_CHANGE# FIRST_TIM
------ ------- ---------- ------ ------- --- --------- ------------- ---------
     1       1        717 307200       1 YES ACTIVE            60849 28-APR-03
     2       1        718 307200       1 NO  CURRENT           61571 12-MAY-05

Table created.
1114 rows created.
Commit complete.
Commit complete.

  COUNT(*)
----------
     38990

Selecting the log info:

SQL> select * from v$log;

GROUP# THREAD# SEQUENCE#  BYTES  MEMBERS ARC STATUS    FIRST_CHANGE# FIRST_TIM
------ ------- ---------- ------ ------- --- --------- ------------- ---------
     1       1        719 307200       1 YES ACTIVE            61399 12-MAY-05
     2       1        720 307200       1 NO  CURRENT           61441 12-MAY-05

Now we query more detailed info about the TEST table.
SQL> select segment_name, tablespace_name from user_segments;
SQL> select file_name, tablespace_name from dba_data_files where tablespace_name = 'BKUP';

Segment-Name Ts-Name
------------ ---------------
EMP          BKUP
DEPT         BKUP
BONUS        BKUP
SALGRADE     BKUP
DUMMY        BKUP
TEST         BKUP
6 rows selected.

File-Name
------------------------------------
/disk3/oradata/FTDEMO/bkup01.dbf
/disk3/oradata/FTDEMO/bkup02.dbf
/disk3/oradata/FTDEMO/bkup03.dbf

Now remove the datafile:

SQL> !rm /disk3/oradata/FTDEMO/bkup01.dbf

Datafile removed. Let us try to insert data into table test:

SQL> insert into test select * from test where rownum<5000;

Inserting rows. Wait....
4999 rows created.
We then see an ERROR message that the file cannot be read or written because it is lost. Now we have to RESTORE the lost datafile from the backup we took and perform the recovery by applying the ARCHIVE LOG files to bring it up to the last COMMITTED record.
1.1 To RESTORE & RECOVER the lost datafile we follow these steps:

1.1.1. First we take the datafile offline.

SQL> alter database datafile '/disk3/oradata/FTDEMO/bkup01.dbf' offline;

Database altered.

1.1.2. Then restore only the LOST datafile from the BACKUP.

SQL> !cp /disk3/oradata/FTDEMO/hbkup/bkup01.dbf /disk3/oradata/FTDEMO/

File restored.

1.1.3. Now perform RECOVERY on the file by applying the ARCHIVE LOG files.

SQL> recover datafile '/disk3/oradata/FTDEMO/bkup01.dbf';

ORA-00279: change 61199 generated at 05/12/2005 15:29:27 needed for thread 1
ORA-00289: suggestion : /disk3/oradata/FTDEMO/ARCH/717.arc
ORA-00280: change 61199 for thread 1 is in sequence #717
ORA-00279: change 61227 generated at 05/12/2005 15:29:28 needed for thread 1
ORA-00289: suggestion : /disk3/oradata/FTDEMO/ARCH/718.arc
ORA-00280: change 61227 for thread 1 is in sequence #718
ORA-00278: log file '/disk3/oradata/FTDEMO/ARCH/717.arc' no longer needed for this recovery

Log applied.
Media recovery complete.

1.1.4. At last, bring the datafile ONLINE and select the data from the table (test):

SQL> alter database datafile '/disk3/oradata/FTDEMO/bkup01.dbf' online;
SQL> select count(*) from test;

Database altered.

  COUNT(*)
----------
     43989

2. Let us see a TABLESPACE RECOVERY scenario. Selecting datafile and tablespace info:

SQL> select file_name, tablespace_name, bytes from dba_data_files;

File-Name                            Ts-Name   Bytes
------------------------------------ --------- ----------
/disk3/oradata/FTDEMO/bkup01.dbf     BKUP      10485760
/disk3/oradata/FTDEMO/bkup02.dbf     BKUP      10485760
/disk3/oradata/FTDEMO/bkup03.dbf     BKUP      10485760
/disk3/oradata/FTDEMO/rcvcat01.dbf   RCVCAT    20971520
/disk3/oradata/FTDEMO/system01.dbf   SYSTEM    83886080
/disk3/oradata/FTDEMO/undo.dbf       UNDOTBS   41943040

6 rows selected.
Now we select the current log sequence number and do some DML operations to generate some archive log files.

SQL> select * from v$log;
SQL> conn U_SCOTT/U_SCOTT
SQL> create table test (a number);
SQL> insert into test select rownum from dict;

GROUP# THREAD# SEQUENCE#  BYTES  MEMBERS ARC STATUS    FIRST_CHANGE# FIRST_TIM
------ ------- ---------- ------ ------- --- --------- ------------- ---------
     1       1        719 307200       1 YES ACTIVE            61399 12-MAY-05
     2       1        720 307200       1 NO  CURRENT           61441 12-MAY-05

Table dropped.
Table created.
1114 rows created.
Commit complete.
Commit complete.

  COUNT(*)
----------
     38990
Selecting the log info:

SQL> select * from v$log;

GROUP# THREAD# SEQUENCE#  BYTES  MEMBERS ARC STATUS    FIRST_CHANGE# FIRST_TIM
------ ------- ---------- ------ ------- --- --------- ------------- ---------
     1       1        723 307200       1 NO  CURRENT           61775 12-MAY-05
     2       1        722 307200       1 YES ACTIVE            61734 12-MAY-05

Now we query info about the TEST table.

SQL> select segment_name, tablespace_name from user_segments;
SQL> select file_name, tablespace_name from dba_data_files where tablespace_name = 'BKUP';

Segment-Name Ts-Name
------------ ---------------
EMP          BKUP
DEPT         BKUP
BONUS        BKUP
SALGRADE     BKUP
DUMMY        BKUP
TEST         BKUP
6 rows selected.

File-Name                            Ts-Name
------------------------------------ ---------
/disk3/oradata/FTDEMO/bkup01.dbf     BKUP
/disk3/oradata/FTDEMO/bkup02.dbf     BKUP
/disk3/oradata/FTDEMO/bkup03.dbf     BKUP
Now remove all the datafiles belonging to tablespace BKUP:

SQL> !rm /disk3/oradata/FTDEMO/bk*.dbf

Datafiles removed. Let us try to insert data into table test:

SQL> insert into test select * from test where rownum<5000;

Inserting rows. Wait....

We again see an ERROR message that the file cannot be read or written because it is lost. We have to RESTORE the lost datafiles from the backup and perform the recovery by applying the ARCHIVE LOG files up to the last COMMITTED record. Let us look for the datafiles at the operating-system level:

$ ls -l *.dbf
total 156048
drwxrwxrwx 2 wstpack dba     4096 May 10 17:41 ARCH
drwxrwxrwx 2 wstpack dba    20480 May 12 15:22 bdump
drwxrwxrwx 2 wstpack dba     4096 Feb 13  2003 BKUP
-rwxrwxrwx 1 rahul   dba   808960 May 12 15:22 c2.ctl
drwxrwxrwx 2 wstpack dba     4096 Feb  7  2003 cdump
-rwxrwxrwx 1 rahul   dba   808960 May 12 15:48 control1.ctl
-rwxrwxrwx 1 rahul   dba      439 May 12 15:22 FTDEMO.files
drwxrwxrwx 2 rahul   dba     4096 May 12 15:29 hbkup
-rwxrwxrwx 1 rahul   dba 20973568 May 12 15:44 rcvcat01.dbf
-rwxrwxrwx 1 rahul   dba   307712 May 12 15:47 redo1b.log
-rwxrwxrwx 1 rahul   dba   307712 May 12 15:44 redo2b.log
-rwxrwxrwx 1 rahul   dba 83888128 May 12 15:44 system01.dbf
-rwxrwxrwx 1 rahul   dba 10487808 May 12 15:22 temp01.dbf
drwxrwxrwx 2 wstpack dba    16384 May 12 15:42 udump
-rwxrwxrwx 1 rahul   dba 41945088 May 12 15:46 undo.dbf
Now, to RESTORE all the lost datafiles and then apply the archive log files, we go through these steps:

2.1. First we take the tablespace BKUP OFFLINE.

SQL> alter tablespace BKUP offline immediate;

Tablespace altered.

2.2. Now restore all the datafiles related to the tablespace from the backup.

SQL> !cp /disk3/oradata/FTDEMO/hbkup/bk*.dbf /disk3/oradata/FTDEMO/

Files restored.

2.3. Now perform RECOVERY for that TABLESPACE by applying the archive log files.

SQL> recover tablespace BKUP;

At last, bring the tablespace BKUP ONLINE and select the data from the table (test):

SQL> alter tablespace bkup online;
SQL> select count(*) from test;
Tablespace altered.

  COUNT(*)
----------
     38990

3. Let us see a full database recovery scenario. Selecting datafile and tablespace info:

SQL> select file_name, tablespace_name, bytes from dba_data_files;

File-Name                            Ts-Name   Bytes
------------------------------------ --------- ----------
/disk3/oradata/FTDEMO/bkup01.dbf     BKUP      10485760
/disk3/oradata/FTDEMO/bkup02.dbf     BKUP      10485760
/disk3/oradata/FTDEMO/bkup03.dbf     BKUP      10485760
/disk3/oradata/FTDEMO/rcvcat01.dbf   RCVCAT    20971520
/disk3/oradata/FTDEMO/system01.dbf   SYSTEM    83886080
/disk3/oradata/FTDEMO/undo.dbf       UNDOTBS   41943040

6 rows selected.

Now we try to do some DML operations to generate some archive log files.

SQL> select * from v$log;
SQL> conn U_SCOTT/U_SCOTT
SQL> create table test (a number);
SQL> insert into test select rownum from dict;

Now we query info about the TEST table.
SQL> select segment_name, tablespace_name from user_segments;
SQL> select file_name, tablespace_name from dba_data_files where tablespace_name = 'BKUP';

GROUP# THREAD# SEQUENCE#  BYTES  MEMBERS ARC STATUS    FIRST_CHANGE# FIRST_TIM
------ ------- ---------- ------ ------- --- --------- ------------- ---------
     1       1        723 307200       1 NO  CURRENT           61775 12-MAY-05
     2       1        722 307200       1 YES ACTIVE            61734 12-MAY-05

Table dropped.
Table created.
1114 rows created.
Commit complete.
Commit complete.

  COUNT(*)
----------
     38990

Selecting the log info:

SQL> select * from v$log;

GROUP# THREAD# SEQUENCE#  BYTES  MEMBERS ARC STATUS    FIRST_CHANGE# FIRST_TIM
------ ------- ---------- ------ ------- --- --------- ------------- ---------
     1       1        725 307200       1 NO  CURRENT           62118 12-MAY-05
     2       1        724 307200       1 YES ACTIVE            62076 12-MAY-05

Now we query info about the TEST table.

SQL> select segment_name, tablespace_name from user_segments;
SQL> select file_name, tablespace_name from dba_data_files where tablespace_name = 'BKUP';

Segment-Name Ts-Name
------------ ---------------
EMP          BKUP
DEPT         BKUP
BONUS        BKUP
SALGRADE     BKUP
DUMMY        BKUP
TEST         BKUP
6 rows selected.
Now we remove all three types of files of this database (controlfiles, redo log files, datafiles):

$ rm /disk3/oradata/FTDEMO/*.ctl
$ rm /disk3/oradata/FTDEMO/*.log
$ rm /disk3/oradata/FTDEMO/*.dbf

All files removed. Let us try to insert data into the table test:

SQL> insert into test select * from test where rownum<5000;

Inserting rows. Wait...
Once more we get the error message that the files are missing. Let us look at the OS level:

$ cd /disk3/oradata/FTDEMO
$ ls -l *.dbf *.ctl *.log
ls: *.dbf: No such file or directory
ls: *.ctl: No such file or directory
ls: *.log: No such file or directory
total 56
drwxrwxrwx 2 wstpack dba  4096 May 10 17:41 ARCH
drwxrwxrwx 2 wstpack dba 20480 May 12 15:22 bdump
drwxrwxrwx 2 wstpack dba  4096 Feb 13  2003 BKUP
drwxrwxrwx 2 wstpack dba  4096 Feb  7  2003 cdump
-rwxrwxrwx 1 rahul   dba   439 May 12 15:22 FTDEMO.files
drwxrwxrwx 2 rahul   dba  4096 May 12 15:29 hbkup
drwxrwxrwx 2 wstpack dba 16384 May 12 15:50 udump
4. Now we will try to perform the FULL DATABASE RECOVERY. For that, first shut down the current database.

SQL> connect /as sysdba
SQL> shutdown abort

ORACLE instance shut down.

We go to the BACKUP directory and RESTORE all the files to their original locations from the BACKUP.

$ cd /disk3/oradata/FTDEMO/hbkup
$ cp *.* ..

All files restored.
Page 283
Restore is going on from the backup. Remember that the backup controlfile has to be restored with its original name (control1.ctl).

$ mv /disk3/oradata/FTDEMO/hbkup/control.bkp /disk3/oradata/FTDEMO/hbkup/control1.ctl
All files restored.
5. To perform the RECOVERY of the DATABASE, first bring the database to the mount state.

$ sqlplus "/as sysdba"

Connected to an idle instance.

SQL> startup mount

ORACLE instance started.

Total System Global Area   17589096 bytes
Fixed Size                   279400 bytes
Variable Size              12582912 bytes
Database Buffers            4194304 bytes
Redo Buffers                 532480 bytes
Database mounted.
And RECOVER the database with the following command.

SQL> alter database recover automatic using backup controlfile until cancel;
[It is applying the archive log files. Wait...]
alter database recover automatic using backup controlfile until cancel
Media recovery complete.
Page 284
Now cancel the RECOVERY operation and OPEN the database with RESETLOGS.

SQL> recover cancel;
SQL> alter database open resetlogs;
Database altered.

Now let us connect as the U_SCOTT user and check the table TEST.

SQL> conn U_SCOTT/U_SCOTT
SQL> select count(*) from test;

  COUNT(*)
----------
     27850

NOTE: Once the database is OPENED with RESETLOGS, there is no use for your OLD BACKUPs. We have to take a FRESH BACKUP, either COLD or HOT, for future recoveries using the backup script.
Page 285
DEMO ON RECOVERY MANAGER - I (RMAN)

These are the changes to be made at the TARGET database (Client):
1. Configure a listener and start it.
2. Pass a parameter called
REMOTE_LOGIN_PASSWORDFILE=EXCLUSIVE in init.ora
(To create a password for your sysdba)
3. At $ORACLE_HOME/dbs execute the following utility:
$ orapwd file=orapw<SID> password=oracle entries=3
4. Create a directory for taking RMAN backups.
5. Start the database.

Changes to be made at the CATALOG DATABASE side (SERVER):
1. Nearly 20 MB of free space should be available in the SYSTEM TABLESPACE.
2. Create a new tablespace (RCVCAT) of at least 15 MB, which will maintain the catalog information.
3. Make sure the undo tablespace or rollback segments' tablespace is of at least 5 MB.
4. If undo management is manual, create a big rollback segment, i.e., initial 100k next 100k maxextents unlimited.
5. Make sure the temporary tablespace is of at least 5 MB.
6. Create a user; make the tablespace created in step 2 his default tablespace and the tablespace from step 5 his temporary tablespace.
7. Grant the RECOVERY_CATALOG_OWNER role to that user.
8. Configure a tns-alias.
9. At the OS prompt type
$ rman catalog user/passwd (you will get an RMAN prompt)
RMAN> create catalog tablespace RCVCAT; (it will create your catalog)
RMAN> exit
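The catalog-side steps above can be sketched as SQL statements. This is only a sketch: the tablespace name RCVCAT and the user rman/rman follow this demo's conventions, while the datafile path and the TEMP tablespace name are assumed for illustration.

SQL> create tablespace rcvcat
  2  datafile '/disk2/oradata/CATDB/rcvcat01.dbf' size 15m;  -- datafile path assumed
SQL> create user rman identified by rman
  2  default tablespace rcvcat
  3  temporary tablespace temp  -- temp tablespace name assumed
  4  quota unlimited on rcvcat;
SQL> grant recovery_catalog_owner to rman;

With this user in place, step 9 (rman catalog rman/rman, then create catalog) can proceed.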
Page 286
If you want to register any target database for performing backups in your RMAN catalog, then

$ rman target sys/oracle@tns-alias catalog rman/rman
RMAN> register database;

Recovery Manager: Release 9.2.0.1.0 - Production
Copyright (c) 1995, 2002, Oracle Corporation. All rights reserved.

connected to target database: PRODRMAN (DBID=1571350840)
connected to recovery catalog database

RMAN> register database;
2>
database registered in recovery catalog
starting full resync of recovery catalog
full resync complete

Recovery Manager complete.

Try to check the database registered information.

RMAN> list incarnation of database;

Recovery Manager: Release 9.2.0.1.0 - Production
Copyright (c) 1995, 2002, Oracle Corporation. All rights reserved.

connected to target database: PRODRMAN (DBID=1571350840)
connected to recovery catalog database

RMAN> list incarnation of database;
2>
List of Database Incarnations
Page 287
DB Key  Inc Key  DB Name   DB ID       CUR  Reset SCN  Reset Time
------- -------  --------  ----------  ---  ---------  ----------
1       2        PRODRMAN  1571350840  YES  1          03-FEB-03

Recovery Manager complete.

Before going with backups, let us examine the configurations to be made on the catalog database side at the RMAN prompt.

RMAN> show all;
(It will show you all the default configuration settings for RMAN)

Recovery Manager: Release 9.2.0.1.0 - Production
Copyright (c) 1995, 2002, Oracle Corporation. All rights reserved.

connected to target database: PRODRMAN (DBID=1571350840)
connected to recovery catalog database

RMAN> show all;
2>
RMAN configuration parameters are:
CONFIGURE RETENTION POLICY TO REDUNDANCY 1; # default
CONFIGURE BACKUP OPTIMIZATION OFF; # default
CONFIGURE DEFAULT DEVICE TYPE TO DISK; # default
CONFIGURE CONTROLFILE AUTOBACKUP OFF; # default
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '%F'; # default
CONFIGURE DEVICE TYPE DISK PARALLELISM 1; # default
CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE MAXSETSIZE TO UNLIMITED; # default
Page 288
CONFIGURE SNAPSHOT CONTROLFILE NAME TO '/oraeng/app/oracle/product/9.2.0/dbs/snapcf_PRODRMAN.f'; # default
RMAN configuration has no stored or default parameters

If backup optimization is enabled, RMAN will not back up a file that has already been backed up.

RMAN> configure backup optimization on;

Recovery Manager: Release 9.2.0.1.0 - Production
Copyright (c) 1995, 2002, Oracle Corporation. All rights reserved.

connected to target database: PRODRMAN (DBID=1571350840)
connected to recovery catalog database

RMAN> configure backup optimization on;
2>
new RMAN configuration parameters:
CONFIGURE BACKUP OPTIMIZATION ON;
new RMAN configuration parameters are successfully stored
starting full resync of recovery catalog
full resync complete

Recovery Manager complete.

Whenever we take any type of RMAN backup, if you want your controlfile to be backed up to a specified destination, then set this setting from RMAN.
RMAN> configure controlfile autobackup format for device type disk to '/disk2/oradata/PRODRMAN/rman/rmanbk.%s.%F.bkp';
Page 289
RMAN> configure controlfile autobackup format for device type disk to '/disk2/oradata/PRODRMAN/rman/rmanbk.%s.%F.bkp';
2>
new RMAN configuration parameters:
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '/disk2/oradata/PRODRMAN/rman/rmanbk.%s.%F.bkp';
new RMAN configuration parameters are successfully stored
starting full resync of recovery catalog
full resync complete

Recovery Manager complete.

When taking backups, if you want automatic channels to be started and the backups written to a specified default destination, configure:
RMAN> configure channel 1 device type disk format '/disk2/oradata/PRODRMAN/rman/%U%s.bkp'
RMAN> configure channel 1 device type disk clear;
Page 290
3>
old RMAN configuration parameters are successfully deleted
new RMAN configuration parameters:
CONFIGURE CHANNEL 1 DEVICE TYPE DISK FORMAT '/disk2/oradata/PRODRMAN/rman/%U%s.bkp';
new RMAN configuration parameters are successfully stored
starting full resync of recovery catalog
full resync complete

Recovery Manager complete.

Once more we try to see the default configured settings for RMAN.
RMAN> show all;
Recovery Manager: Release 9.2.0.1.0 - Production Copyright (c) 1995, 2002, Oracle Corporation. All rights reserved. connected to target database: PRODRMAN (DBID=1571350840) connected to recovery catalog database RMAN> show all; 2> RMAN configuration parameters are: CONFIGURE RETENTION POLICY TO REDUNDANCY 1; # default CONFIGURE BACKUP OPTIMIZATION ON; CONFIGURE DEFAULT DEVICE TYPE TO DISK; # default CONFIGURE CONTROLFILE AUTOBACKUP OFF; # default CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '/disk2/oradata/ PRODRMAN/rman/rmanbk.%s.%F.bkp'; CONFIGURE DEVICE TYPE DISK PARALLELISM 1; # default
Page 291
CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE CHANNEL 1 DEVICE TYPE DISK FORMAT '/disk2/oradata/PRODRMAN/rman/%U%s.bkp';
CONFIGURE MAXSETSIZE TO UNLIMITED; # default
CONFIGURE SNAPSHOT CONTROLFILE NAME TO '/oraeng/app/oracle/product/9.2.0/dbs/snapcf_PRODRMAN.f'; # default
RMAN configuration has no stored or default parameters

Now we see how to take a backup using RMAN, as our database is already registered with the catalog database.

Taking the backup of a single datafile: up to Oracle8i we would use
RMAN> run {
  allocate channel c1 type disk;
  backup format '/disk2/oradata/PRODRMAN/rman/%d_%s.bkp' (datafile 3);
  release channel c1;
}
RMAN> backup datafile 3;
Page 292
2>
Starting backup at 12-MAY-05
allocated channel: ORA_DISK_1
channel ORA_DISK_1: sid=12 devtype=DISK
channel ORA_DISK_1: starting full datafile backupset
channel ORA_DISK_1: specifying datafile(s) in backupset
input datafile fno=00003 name=/disk2/oradata/PRODRMAN/userdata01.dbf
channel ORA_DISK_1: starting piece 1 at 12-MAY-05
channel ORA_DISK_1: finished piece 1 at 12-MAY-05
piece handle=/disk2/oradata/PRODRMAN/rman/01gk8clf_1_11.bkp comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01
Finished backup at 12-MAY-05

Recovery Manager complete.

We can see this backed-up information using RMAN.
RMAN> list backupset;
List of Backup Sets
===================
Page 293
BS Key  Type  LV  Size  Device Type  Elapsed Time  Completion Time
------- ----  --  ----  -----------  ------------  ---------------
674     Full      64K   DISK         00:00:00      12-MAY-05
        BP Key: 675   Status: AVAILABLE   Tag: TAG20050512T163911
        Piece Name: /disk2/oradata/PRODRMAN/rman/01gk8clf_1_11.bkp

List of Datafiles in backup set 674
File LV Type Ckp SCN    Ckp Time  Name
---- -- ---- ---------- --------- ----
3       Full 56989      12-MAY-05 /disk2/oradata/PRODRMAN/userdata01.dbf

Recovery Manager complete.

Now we try to remove the datafile which we have backed up.
$ rm /disk2/oradata/PRODRMAN/userdata01.dbf
For this scenario we remove one of the datafiles and check at the OS level.

$ rm /disk2/oradata/PRODRMAN/userdata01.dbf
$ ls -l /disk2/oradata/PRODRMAN/
total 140892
-rwxrwxrwx 1 rahul   dba    189 May 12 16:17 afiedt.buf
drwxrwxrwx 2 wstpack dba   4096 May 12 16:18 ARCH
drwxrwxrwx 2 wstpack dba   8192 May 12 16:17 bdump
drwxrwxrwx 2 wstpack dba   4096 Feb 13  2003 BKUP
drwxrwxrwx 2 wstpack dba   4096 Feb 15  2003 cdump
-rwxrwxrwx 1 rahul   dba 808960 May 12 16:41 control1.ctl
Page 294
-rwxrwxrwx 1 rahul   dba      827 May 12 16:17 initPRODRMAN.ora
-rwxrwxrwx 1 rahul   dba     1536 May 12 16:17 orapwPRODRMAN
-rwxrwxrwx 1 rahul   dba      478 May 12 16:17 PRODRMAN.files
-rwxrwxrwx 1 rahul   dba   307712 May 12 16:18 redo1b.log
-rwxrwxrwx 1 rahul   dba   307712 May 12 16:38 redo2b.log
drwxrwxrwx 2 rahul   dba     4096 May 12 16:39 rman
-rwxrwxrwx 1 rahul   dba 83888128 May 12 16:19 system01.dbf
-rwxrwxrwx 1 rahul   dba 10487808 May 12 16:17 temp01.dbf
drwxrwxrwx 2 wstpack dba     4096 May 12 16:17 udump
-rwxrwxrwx 1 rahul   dba 41945088 May 12 16:19 undo.dbf
-rwxrwxrwx 1 rahul   dba  3147776 May 12 16:17 userdata02.dbf
-rwxrwxrwx 1 rahul   dba  3147776 May 12 16:19 userdata03.dbf
We observed that the file is lost. Any users who are using this datafile are unable to get the data because the datafile no longer exists. As we already have the backup of datafile 3 with us, we try to restore it and perform recovery.
RMAN> run {
  allocate channel c1 type disk;
  sql "alter database datafile 3 offline";
  restore datafile 3;
  recover datafile 3;
  sql "alter database datafile 3 online";
  release channel c1;
}
Recovery Manager: Release 9.2.0.1.0 - Production
Copyright (c) 1995, 2002, Oracle Corporation. All rights reserved.

connected to target database: PRODRMAN (DBID=1571350840)
connected to recovery catalog database
RMAN> sql "alter database datafile 3 offline";
2> restore datafile 3;
3> recover datafile 3;
4> sql "alter database datafile 3 online";
5>
sql statement: alter database datafile 3 offline

Starting restore at 12-MAY-05
allocated channel: ORA_DISK_1
channel ORA_DISK_1: sid=9 devtype=DISK
channel ORA_DISK_1: starting datafile backupset restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
restoring datafile 00003 to /disk2/oradata/PRODRMAN/userdata01.dbf
channel ORA_DISK_1: restored backup piece 1
piece handle=/disk2/oradata/PRODRMAN/rman/01gk8clf_1_11.bkp tag=TAG20050512T163911 params=NULL
channel ORA_DISK_1: restore complete
Finished restore at 12-MAY-05

Starting recover at 12-MAY-05
using channel ORA_DISK_1
starting media recovery
media recovery complete
Page 296
Finished recover at 12-MAY-05
sql statement: alter database datafile 3 online

Recovery Manager complete.

NOTE: The same procedure can be performed in Oracle9i without the run command; just type the commands directly at the RMAN prompt.
RMAN> alter database datafile 3 offline;
RMAN> restore datafile 3;
RMAN> recover datafile 3;
RMAN> alter database datafile 3 online;
Now we try to take TABLESPACE-level BACKUPS.

RMAN> run {
  allocate channel c1 type disk;
  backup format '/disk2/oradata/PRODRMAN/rman/%d_%s.bkp' (tablespace user_data);
  release channel c1;
}

[In Oracle9i just issue this command...]
RMAN> backup tablespace user_data;
RMAN> resync catalog;
Recovery Manager complete.

[In Oracle8i issue this command...]
RMAN> run 2> { 3> allocate channel c1 type disk; 4> backup 5> format '/disk2/oradata/PRODRMAN/rman/%d_%s.bkp' 6> (tablespace user_data); 7> release channel c1; 8> }
9> allocated channel: c1 channel c1: sid=12 devtype=DISK Starting backup at 12-MAY-05 channel c1: starting full datafile backupset channel c1: specifying datafile(s) in backupset input datafile fno=00003 name=/disk2/oradata/PRODRMAN/userdata01.dbf input datafile fno=00004 name=/disk2/oradata/PRODRMAN/userdata02.dbf input datafile fno=00005 name=/disk2/oradata/PRODRMAN/userdata03.dbf channel c1: starting piece 1 at 12-MAY-05 channel c1: finished piece 1 at 12-MAY-05 piece handle=/disk2/oradata/PRODRMAN/rman/PRODRMAN_2.bkp comment=NONE
Page 298
channel c1: backup set complete, elapsed time: 00:00:01
Finished backup at 12-MAY-05
released channel: c1

Recovery Manager complete.

We can see this backed-up information using RMAN.

[In Oracle9i just issue this command...]
RMAN> list backupset;
List of Backup Sets
===================

BS Key  Type  LV  Size  Device Type  Elapsed Time  Completion Time
------- ----  --  ----  -----------  ------------  ---------------
674     Full      64K   DISK         00:00:00      12-MAY-05
        BP Key: 675   Status: AVAILABLE   Tag: TAG20050512T163911
        Piece Name: /disk2/oradata/PRODRMAN/rman/01gk8clf_1_11.bkp

List of Datafiles in backup set 674
Page 299
File LV Type Ckp SCN    Ckp Time  Name
---- -- ---- ---------- --------- ----
3       Full 56989      12-MAY-05 /disk2/oradata/PRODRMAN/userdata01.dbf

BS Key  Type  LV  Size  Device Type  Elapsed Time  Completion Time
------- ----  --  ----  -----------  ------------  ---------------
680     Full      580K  DISK         00:00:00      12-MAY-05
        BP Key: 681   Status: AVAILABLE   Tag: TAG20050512T164626
        Piece Name: /disk2/oradata/PRODRMAN/rman/PRODRMAN_2.bkp

List of Datafiles in backup set 680
File LV Type Ckp SCN    Ckp Time  Name
---- -- ---- ---------- --------- ----
3       Full 57145      12-MAY-05 /disk2/oradata/PRODRMAN/userdata01.dbf
4       Full 57145      12-MAY-05 /disk2/oradata/PRODRMAN/userdata02.dbf
5       Full 57145      12-MAY-05 /disk2/oradata/PRODRMAN/userdata03.dbf

Recovery Manager complete.

Try to select the datafiles and their tablespace info from the TARGET database.
SQL> select file_name, tablespace_name from dba_data_files;
/disk2/oradata/PRODRMAN/userdata01.dbf  USER_DATA
/disk2/oradata/PRODRMAN/userdata02.dbf  USER_DATA
/disk2/oradata/PRODRMAN/userdata03.dbf  USER_DATA

Now we try to remove all the datafiles related to the tablespace USER_DATA at the OS level.

$ rm /disk2/oradata/PRODRMAN/userdata*.dbf
Files removed.

Just try to confirm that the files no longer exist at the OS level.

$ ls -l /disk2/oradata/PRODRMAN/
total 134732
-rwxrwxrwx 1 rahul   dba    189 May 12 16:17 afiedt.buf
drwxrwxrwx 2 wstpack dba   4096 May 12 16:18 ARCH
drwxrwxrwx 2 wstpack dba   8192 May 12 16:17 bdump
drwxrwxrwx 2 wstpack dba   4096 Feb 13  2003 BKUP
drwxrwxrwx 2 wstpack dba   4096 Feb 15  2003 cdump
-rwxrwxrwx 1 rahul   dba 808960 May 12 16:51 control1.ctl
-rwxrwxrwx 1 rahul   dba    827 May 12 16:17 initPRODRMAN.ora
-rwxrwxrwx 1 rahul   dba   1536 May 12 16:17 orapwPRODRMAN
-rwxrwxrwx 1 rahul   dba    478 May 12 16:17 PRODRMAN.files
-rwxrwxrwx 1 rahul   dba 307712 May 12 16:18 redo1b.log
-rwxrwxrwx 1 rahul   dba 307712 May 12 16:50 redo2b.log
Page 301
drwxrwxrwx 2 rahul   dba     4096 May 12 16:46 rman
-rwxrwxrwx 1 rahul   dba 83888128 May 12 16:49 system01.dbf
-rwxrwxrwx 1 rahul   dba 10487808 May 12 16:17 temp01.dbf
drwxrwxrwx 2 wstpack dba     4096 May 12 16:43 udump
-rwxrwxrwx 1 rahul   dba 41945088 May 12 16:49 undo.dbf
If we lose all the datafiles related to a tablespace, then instead of restoring and recovering each DATAFILE individually, we can go directly with a TABLESPACE-level recovery.

[In Oracle8i issue this command...]
RMAN> run {
  allocate channel c1 type disk;
  sql "alter tablespace user_data offline immediate";
  restore tablespace user_data;
  recover tablespace user_data;
  sql "alter tablespace user_data online";
  release channel c1;
}
Recovery Manager: Release 9.2.0.1.0 - Production
Copyright (c) 1995, 2002, Oracle Corporation. All rights reserved.

connected to target database: PRODRMAN (DBID=1571350840)
connected to recovery catalog database
RMAN> sql "alter tablespace user_data offline immediate"; 2> restore tablespace user_data; 3> recover tablespace user_data; 4> sql "alter tablespace user_data online"; 5> release channel c1;
Now go and check whether the files have been restored or not.

$ ls -l /disk2/oradata/PRODRMAN/
total 134732
-rwxrwxrwx 1 rahul   dba      189 May 12 16:17 afiedt.buf
drwxrwxrwx 2 wstpack dba     4096 May 12 16:18 ARCH
drwxrwxrwx 2 wstpack dba     8192 May 12 16:17 bdump
drwxrwxrwx 2 wstpack dba     4096 Feb 13  2003 BKUP
drwxrwxrwx 2 wstpack dba     4096 Feb 15  2003 cdump
-rwxrwxrwx 1 rahul   dba   808960 May 12 16:53 control1.ctl
-rwxrwxrwx 1 rahul   dba      827 May 12 16:17 initPRODRMAN.ora
-rwxrwxrwx 1 rahul   dba     1536 May 12 16:17 orapwPRODRMAN
-rwxrwxrwx 1 rahul   dba      478 May 12 16:17 PRODRMAN.files
-rwxrwxrwx 1 rahul   dba   307712 May 12 16:18 redo1b.log
-rwxrwxrwx 1 rahul   dba   307712 May 12 16:50 redo2b.log
drwxrwxrwx 2 rahul   dba     4096 May 12 16:46 rman
-rwxrwxrwx 1 rahul   dba 83888128 May 12 16:49 system01.dbf
-rwxrwxrwx 1 rahul   dba 10487808 May 12 16:17 temp01.dbf
Page 303
1. Now we try the FULL DATABASE BACKUP & RECOVERY. First we take the full database backup.
RMAN> backup database;
RMAN> resync catalog;
2>
starting full resync of recovery catalog
full resync complete

Recovery Manager complete.

[In Oracle8i issue this command...]
Page 304
RMAN> run 2> { 3> allocate channel c1 type disk; 4> backup 5> format '/disk2/oradata/PRODRMAN/rman/%d_%s.bkp' 6> (database); 7> release channel c1; 8> }
allocated channel: c1
channel c1: sid=9 devtype=DISK
Starting backup at 12-MAY-05

Recovery Manager complete.

Before doing some transactions, first select the log sequence number at the TARGET database.
SQL> select * from v$log;
GROUP# THREAD# SEQUENCE# BYTES  MEMBERS ARC STATUS   FIRST_CHANGE# FIRST_TIM
------ ------- --------- ------ ------- --- -------- ------------- ---------
     1       1       663 307200       1 YES INACTIVE         56541 12-MAY-05
     2       1       664 307200       1 NO  CURRENT          56563 12-MAY-05

After taking the FULL database backup, connect to the TARGET database and generate some archive logs by doing some transactions.
SQL> conn scott/tiger
SQL> select count(*) from emp1;
SQL> delete from emp1 where rownum<5001;
SQL> commit;
SQL> select count(*) from emp1;
Page 305
  COUNT(*)
----------
      7168

5000 rows deleted.
Commit complete.
System altered.

  COUNT(*)
----------
      2168

Now select the log sequence number at the TARGET database.

SQL> select * from v$log;

GROUP# THREAD# SEQUENCE# BYTES  MEMBERS ARC STATUS  FIRST_CHANGE# FIRST_TIM
------ ------- --------- ------ ------- --- ------- ------------- ---------
     1       1       669 307200       1 YES ACTIVE          57547 12-MAY-05
     2       1       670 307200       1 NO  CURRENT         57561 12-MAY-05

Before going to RESTORE & RECOVERY, first see how many ARCHIVES there are in your RMAN CATALOG.
SQL> conn rman/rman
SQL> select sequence# from rc_log_history order by sequence#;
Page 306
NOTE: If the current log sequence number at the TARGET DATABASE differs from the sequence number in our CATALOG, then we have to resync the catalog.
RMAN> resync catalog;
Recovery Manager: Release 9.2.0.1.0 - Production Copyright (c) 1995, 2002, Oracle Corporation. All rights reserved. connected to target database: PRODRMAN (DBID=1571350840) connected to recovery catalog database RMAN> resync catalog; 2> starting full resync of recovery catalog full resync complete Recovery Manager complete. Once more select the ARCHIVE LOG Info from RMAN user.
SQL> conn rman/rman
SQL> select sequence# from rc_log_history order by sequence#;
 SEQUENCE#
----------
       661
       662
       663
       664
       665
       666
       667
       668
       669

9 rows selected.
Page 307
Now we REMOVE all the FILES of the TARGET database and try to do RESTORE & RECOVERY.

$ cd /disk2/oradata/PRODRMAN
$ rm *.ctl *.log *.dbf
All files removed.

Now try to check the files at the OS level.

$ ls -l /disk2/oradata/PRODRMAN/
total 44
-rwxrwxrwx 1 rahul   dba  189 May 12 16:17 afiedt.buf
drwxrwxrwx 2 wstpack dba 4096 May 12 16:58 ARCH
drwxrwxrwx 2 wstpack dba 8192 May 12 16:17 bdump
drwxrwxrwx 2 wstpack dba 4096 Feb 13  2003 BKUP
drwxrwxrwx 2 wstpack dba 4096 Feb 15  2003 cdump
-rwxrwxrwx 1 rahul   dba  827 May 12 16:17 initPRODRMAN.ora
-rwxrwxrwx 1 rahul   dba 1536 May 12 16:17 orapwPRODRMAN
-rwxrwxrwx 1 rahul   dba  478 May 12 16:17 PRODRMAN.files
drwxrwxrwx 2 rahul   dba 4096 May 12 16:46 rman
drwxrwxrwx 2 wstpack dba 4096 May 12 16:43 udump
For making the RMAN connection with the target database, the TARGET DATABASE should be at least in NOMOUNT state; if it is not, bring it to NOMOUNT first.
$ sqlplus "/as sysdba" SQL> startup nomount;
ORACLE instance started.

Total System Global Area   46731324 bytes
Fixed Size                   450620 bytes
Variable Size              37748736 bytes
Database Buffers            8388608 bytes
Redo Buffers                 143360 bytes
First RESTORE the controlfile from the backup to its default location using RMAN.
RMAN> restore controlfile;
NOTE: For performing the recovery, enter (the LAST log sequence number from your RMAN CATALOG) + 1. The last log sequence number here is 669, so enter 669 + 1 = 670.

Enter the LAST LOG SEQ.NO + 1 : 670
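The prompt-driven recovery above corresponds to the following RMAN sequence. This is a sketch rather than the exact transcript; the sequence number 670 is simply the last catalog sequence (669) + 1, as the note explains.

RMAN> run {
  set until sequence 670 thread 1;
  restore database;
  recover database;
  alter database open resetlogs;
}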
After performing the RECOVERY, now OPEN the TARGET database.
RMAN> alter database open resetlogs;
Now try to select the Info from the TARGET database user.
SQL> conn scott/tiger SQL> select count(*) from emp1;
NOTE: After opening the TARGET database with resetlogs you have to RESET your RMAN CATALOG. RMAN> reset database;
Page 309
Whatever BACKUPS you have taken till now are OBSOLETE. You can see this info from the RMAN prompt.
RMAN> report obsolete;
(It will ask you Y/N to delete all the backup sets.)

NOTE: If you want to delete a particular backup set, then DELETE BACKUPSET xxxx; (where xxxx is the KEY value of the backup set).
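To actually remove what report obsolete lists, the usual follow-up commands are sketched below; the backup set key 674 is only an illustrative value taken from the earlier list backupset output.

RMAN> report obsolete;
RMAN> delete obsolete;
RMAN> delete backupset 674;

Note that delete obsolete prompts for confirmation before deleting.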
Page 310
DEMO ON RECOVERY MANAGER - II

If you want to register any target database for performing backups in your RMAN catalog, then
$> rman target sys/oracle@tns-alias catalog rman/rman RMAN> register database;
Recovery Manager: Release 9.2.0.1.0 - Production
Copyright (c) 1995, 2002, Oracle Corporation. All rights reserved.

connected to target database: PRODRMAN (DBID=1571350840)
connected to recovery catalog database

RMAN> register database;
2>
database registered in recovery catalog
starting full resync of recovery catalog
full resync complete

Recovery Manager complete.

Try to check the database registered information.
RMAN> list incarnation of database;
Recovery Manager: Release 9.2.0.1.0 - Production
Copyright (c) 1995, 2002, Oracle Corporation. All rights reserved.

connected to target database: PRODRMAN (DBID=1571350840)
connected to recovery catalog database

RMAN> list incarnation of database;
2>
Page 311
List of Database Incarnations

DB Key  Inc Key  DB Name   DB ID       CUR  Reset SCN  Reset Time
------- -------  --------  ----------  ---  ---------  ----------
1       2        PRODRMAN  1571350840  YES  1          03-FEB-03

Recovery Manager complete.

Before going with backups, let us examine the configurations to be made on the catalog database side at the RMAN prompt.

RMAN> show all;
(It will show you all the default configuration settings for RMAN)

Recovery Manager: Release 9.2.0.1.0 - Production
Copyright (c) 1995, 2002, Oracle Corporation. All rights reserved.

connected to target database: PRODRMAN (DBID=1571350840)
connected to recovery catalog database

RMAN> show all;
2>
RMAN configuration parameters are:
CONFIGURE RETENTION POLICY TO REDUNDANCY 1; # default
CONFIGURE BACKUP OPTIMIZATION OFF; # default
CONFIGURE DEFAULT DEVICE TYPE TO DISK; # default
CONFIGURE CONTROLFILE AUTOBACKUP OFF; # default
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '%F'; # default
CONFIGURE DEVICE TYPE DISK PARALLELISM 1; # default
CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
Page 312
CONFIGURE MAXSETSIZE TO UNLIMITED; # default
CONFIGURE SNAPSHOT CONTROLFILE NAME TO '/oraeng/app/oracle/product/9.2.0/dbs/snapcf_PRODRMAN.f'; # default
RMAN configuration has no stored or default parameters

Recovery Manager complete.

If backup optimization is enabled, RMAN will not back up a file that has already been backed up.
RMAN> configure backup optimization on;
Recovery Manager: Release 9.2.0.1.0 - Production
Copyright (c) 1995, 2002, Oracle Corporation. All rights reserved.

connected to target database: PRODRMAN (DBID=1571350840)
connected to recovery catalog database

RMAN> configure backup optimization on;
2>
new RMAN configuration parameters:
CONFIGURE BACKUP OPTIMIZATION ON;
new RMAN configuration parameters are successfully stored
starting full resync of recovery catalog
full resync complete

Recovery Manager complete.

Whenever we take any type of RMAN backup, if you want your controlfile to be backed up to a specified destination, then set this setting from RMAN.
Page 313
RMAN> configure controlfile autobackup format for device type disk to '/disk2/oradata/PRODRMAN/rman/rmanbk.%s.%F.bkp';
Recovery Manager: Release 9.2.0.1.0 - Production
Copyright (c) 1995, 2002, Oracle Corporation. All rights reserved.

connected to target database: PRODRMAN (DBID=1571350840)
connected to recovery catalog database

RMAN> configure controlfile autobackup format for device type disk to '/disk2/oradata/PRODRMAN/rman/rmanbk.%s.%F.bkp';
2>
new RMAN configuration parameters:
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '/disk2/oradata/PRODRMAN/rman/rmanbk.%s.%F.bkp';
new RMAN configuration parameters are successfully stored
starting full resync of recovery catalog
full resync complete

Recovery Manager complete.

When taking backups, if you want automatic channels to be started and the backups written to a specified default destination, configure:
RMAN> configure channel 1 device type disk format '/disk2/oradata/PRODRMAN/rman/%U%s.bkp'
Recovery Manager: Release 9.2.0.1.0 - Production
Copyright (c) 1995, 2002, Oracle Corporation. All rights reserved.
Page 314
connected to target database: PRODRMAN (DBID=1571350840)
connected to recovery catalog database

RMAN> configure channel 1 device type disk clear;
2> configure channel 1 device type disk format '/disk2/oradata/PRODRMAN/rman/%U%s.bkp';
3>
old RMAN configuration parameters are successfully deleted
new RMAN configuration parameters:
CONFIGURE CHANNEL 1 DEVICE TYPE DISK FORMAT '/disk2/oradata/PRODRMAN/rman/%U%s.bkp';
new RMAN configuration parameters are successfully stored
starting full resync of recovery catalog
full resync complete

Recovery Manager complete.

Once more we try to see the default configured settings for RMAN.
RMAN> show all;
Recovery Manager: Release 9.2.0.1.0 - Production
Copyright (c) 1995, 2002, Oracle Corporation. All rights reserved.

connected to target database: PRODRMAN (DBID=1571350840)
connected to recovery catalog database

RMAN> show all;
2>
RMAN configuration parameters are:
CONFIGURE RETENTION POLICY TO REDUNDANCY 1; # default
Page 315
CONFIGURE BACKUP OPTIMIZATION ON;
CONFIGURE DEFAULT DEVICE TYPE TO DISK; # default
CONFIGURE CONTROLFILE AUTOBACKUP OFF; # default
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '/disk2/oradata/PRODRMAN/rman/rmanbk.%s.%F.bkp';
CONFIGURE DEVICE TYPE DISK PARALLELISM 1; # default
CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE CHANNEL 1 DEVICE TYPE DISK FORMAT '/disk2/oradata/PRODRMAN/rman/%U%s.bkp';
CONFIGURE MAXSETSIZE TO UNLIMITED; # default
CONFIGURE SNAPSHOT CONTROLFILE NAME TO '/oraeng/app/oracle/product/9.2.0/dbs/snapcf_PRODRMAN.f'; # default
RMAN configuration has no stored or default parameters

Recovery Manager complete.

Now we see how to take a backup using RMAN, as our database is already registered with the catalog database. Here we try the INCREMENTAL type of BACKUP. First we take the full database incremental backup.
RMAN> backup incremental level 0 database;
Page 316
RMAN> resync catalog;
2>
starting full resync of recovery catalog
full resync complete

Recovery Manager complete.

(For 8i)
RMAN> run 2> { 3> allocate channel c1 type disk; 4> backup 5> format '/disk2/oradata/PRODRMAN/rman/%d_%s.bkp' 6> (database); 7> release channel c1; 8> }
allocated channel: c1 channel c1: sid=12 devtype=DISK Starting backup at 13-MAY-05 channel c1: starting full datafile backupset channel c1: specifying datafile(s) in backupset including current controlfile in backupset input datafile fno=00001 name=/disk2/oradata/PRODRMAN/system01.dbf input datafile fno=00002 name=/disk2/oradata/PRODRMAN/undo.dbf
Page 317
input datafile fno=00003 name=/disk2/oradata/PRODRMAN/userdata01.dbf
input datafile fno=00004 name=/disk2/oradata/PRODRMAN/userdata02.dbf
input datafile fno=00005 name=/disk2/oradata/PRODRMAN/userdata03.dbf
channel c1: starting piece 1 at 13-MAY-05
channel c1: finished piece 1 at 13-MAY-05
piece handle=/disk2/oradata/PRODRMAN/rman/PRODRMAN_1.bkp comment=NONE
channel c1: backup set complete, elapsed time: 00:00:07
Finished backup at 13-MAY-05
released channel: c1

Recovery Manager complete.

Before doing some transactions, first select the log sequence number at the TARGET database.

SQL> select * from v$log;

GROUP# THREAD# SEQUENCE# BYTES  MEMBERS ARC STATUS  FIRST_CHANGE# FIRST_TIM
------ ------- --------- ------ ------- --- ------- ------------- ---------
     1       1       719 307200       1 YES ACTIVE          61191 13-MAY-05
     2       1       720 307200       1 NO  CURRENT         61224 13-MAY-05
After taking the FULL database backup, connect to the TARGET database and generate some archive logs by doing some transactions.
SQL> conn scott/tiger
SQL> select count(*) from emp1;
SQL> delete from emp1 where rownum<2001;
SQL> commit;
SQL> select count(*) from emp1;
COUNT(*)
----------
      7168

2000 rows deleted.

Commit complete.

System altered.

COUNT(*)
----------
      5168

Now select the log sequence number at the TARGET database.
SQL> select * from v$log;
GROUP# THREAD# SEQUENCE#  BYTES  MEMBERS ARC STATUS   FIRST_CHANGE# FIRST_TIM
------ ------- --------- ------- ------- --- -------- ------------- ---------
     1       1       667  307200       1 NO  CURRENT          57123 13-MAY-05
     2       1       666  307200       1 YES ACTIVE           57114 13-MAY-05

Now we take an incremental backup (LEVEL 2) of these committed transactions.
RMAN> backup database incremental level 2;
Recovery Manager: Release 9.2.0.1.0 - Production
Copyright (c) 1995, 2002, Oracle Corporation. All rights reserved.
connected to target database: PRODRMAN (DBID=1571350840)
connected to recovery catalog database

RMAN> resync catalog;

starting full resync of recovery catalog
full resync complete
Recovery Manager complete.

Recovery Manager: Release 9.2.0.1.0 - Production
Copyright (c) 1995, 2002, Oracle Corporation. All rights reserved.

connected to target database: PRODRMAN (DBID=1571350840)
connected to recovery catalog database
(for 8i)
RMAN> run
2> {
3> allocate channel c1 type disk;
4> backup
5> format '/disk2/oradata/PRODRMAN/rman/%d_%s.bkp'
6> (database);
7> release channel c1;
8> }
Recovery Manager: Release 9.2.0.1.0 - Production
Copyright (c) 1995, 2002, Oracle Corporation. All rights reserved.

connected to target database: PRODRMAN (DBID=1571350840)
connected to recovery catalog database

RMAN> backup incremental level 2 database;
Starting backup at 13-MAY-05
allocated channel: ORA_DISK_1
channel ORA_DISK_1: sid=9 devtype=DISK
no parent backup or copy of datafile 1 found
no parent backup or copy of datafile 2 found
no parent backup or copy of datafile 3 found
no parent backup or copy of datafile 4 found
no parent backup or copy of datafile 5 found
channel ORA_DISK_1: starting incremental level 2 datafile backupset
channel ORA_DISK_1: specifying datafile(s) in backupset
including current controlfile in backupset
input datafile fno=00001 name=/disk2/oradata/PRODRMAN/system01.dbf
input datafile fno=00002 name=/disk2/oradata/PRODRMAN/undo.dbf
input datafile fno=00003 name=/disk2/oradata/PRODRMAN/userdata01.dbf
input datafile fno=00004 name=/disk2/oradata/PRODRMAN/userdata02.dbf
input datafile fno=00005 name=/disk2/oradata/PRODRMAN/userdata03.dbf
channel ORA_DISK_1: starting piece 1 at 13-MAY-05
channel ORA_DISK_1: finished piece 1 at 13-MAY-05
piece handle=/disk2/oradata/PRODRMAN/rman/02gkah03_1_12.bkp comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:08
Finished backup at 13-MAY-05
Recovery Manager complete.

After taking the INCREMENTAL LEVEL database backup, connect to the TARGET database and generate some archive logs by doing some transactions.
SQL> conn scott/tiger
SQL> select count(*) from emp1;
SQL> delete from emp1 where rownum<2001;
SQL> commit;
SQL> select count(*) from emp1;
COUNT(*)
----------
      5168

2000 rows deleted.

Commit complete.

System altered.

COUNT(*)
----------
      3168

Now we take another incremental backup (LEVEL 2) for these new transactions.
(Incremental backup types: 0 => complete, 1 => cumulative, 2 => changed blocks only.)
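A note on the level keywords: by default a level 1 backup is differential (it copies blocks changed since the most recent level 1 or level 0 backup), while the CUMULATIVE keyword makes it copy everything changed since the last level 0. A sketch of both forms, assuming a configured disk channel as in this demo:

```sql
RMAN> backup incremental level 1 database;            # differential: changes since last level 0 or 1
RMAN> backup incremental level 1 cumulative database; # cumulative: changes since last level 0
```

A cumulative backup is larger but makes a restore shorter, because only one level 1 backup set has to be applied on top of the level 0.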
RMAN> backup database incremental level 2;
Recovery Manager: Release 9.2.0.1.0 - Production
Copyright (c) 1995, 2002, Oracle Corporation. All rights reserved.

connected to target database: PRODRMAN (DBID=1571350840)
connected to recovery catalog database

RMAN> backup incremental level 2 database;

Starting backup at 13-MAY-05
allocated channel: ORA_DISK_1
channel ORA_DISK_1: sid=12 devtype=DISK
channel ORA_DISK_1: starting incremental level 2 datafile backupset
channel ORA_DISK_1: specifying datafile(s) in backupset
including current controlfile in backupset
input datafile fno=00001 name=/disk2/oradata/PRODRMAN/system01.dbf
input datafile fno=00002 name=/disk2/oradata/PRODRMAN/undo.dbf
input datafile fno=00003 name=/disk2/oradata/PRODRMAN/userdata01.dbf
input datafile fno=00004 name=/disk2/oradata/PRODRMAN/userdata02.dbf
input datafile fno=00005 name=/disk2/oradata/PRODRMAN/userdata03.dbf
channel ORA_DISK_1: starting piece 1 at 13-MAY-05
channel ORA_DISK_1: finished piece 1 at 13-MAY-05
piece handle=/disk2/oradata/PRODRMAN/rman/03gkah79_1_13.bkp comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:03
Finished backup at 13-MAY-05
Recovery Manager complete.

Now check the log sequence number at the TARGET database.

SQL> select * from v$log;

GROUP# THREAD# SEQUENCE#  BYTES  MEMBERS ARC STATUS   FIRST_CHANGE# FIRST_TIM
------ ------- --------- ------- ------- --- -------- ------------- ---------
     1       1       669  307200       1 YES INACTIVE         57274 13-MAY-05
     2       1       670  307200       1 NO  CURRENT          57280 13-MAY-05
We check the files in the BACKUP directory.

$ ls -l /disk2/oradata/PRODRMAN/rman
total 207160
-rw-rw----   1 rahul   dba   105304064 May 13 12:05 02gkah03_1_12.bkp
-rw-rw----   1 rahul   dba     1288192 May 13 12:09 03gkah79_1_13.bkp
-rw-rw----   1 rahul   dba   105312256 May 13 11:59 PRODRMAN_1.bkp
After taking the SECOND INCREMENTAL LEVEL database backup, connect to the TARGET database and generate some archive logs by doing some transactions.
SQL> conn scott/tiger
SQL> select count(*) from emp1;
SQL> create table emp2 as select * from emp1;
SQL> commit;
SQL> select count(*) from emp2;
COUNT(*)
----------
      3168

Table created.

Commit complete.

System altered.

COUNT(*)
----------
      3168

Now we take an incremental LEVEL 1 (i.e., CUMULATIVE) backup of these committed transactions.
RMAN> backup database incremental level 1;
Copyright (c) 1995, 2002, Oracle Corporation. All rights reserved.

connected to target database: PRODRMAN (DBID=1571350840)
connected to recovery catalog database
RMAN> backup incremental level 1 database;

Starting backup at 13-MAY-05
allocated channel: ORA_DISK_1
channel ORA_DISK_1: sid=9 devtype=DISK
channel ORA_DISK_1: starting incremental level 1 datafile backupset
channel ORA_DISK_1: specifying datafile(s) in backupset
including current controlfile in backupset
input datafile fno=00001 name=/disk2/oradata/PRODRMAN/system01.dbf
input datafile fno=00002 name=/disk2/oradata/PRODRMAN/undo.dbf
input datafile fno=00003 name=/disk2/oradata/PRODRMAN/userdata01.dbf
input datafile fno=00004 name=/disk2/oradata/PRODRMAN/userdata02.dbf
input datafile fno=00005 name=/disk2/oradata/PRODRMAN/userdata03.dbf
channel ORA_DISK_1: starting piece 1 at 13-MAY-05
channel ORA_DISK_1: finished piece 1 at 13-MAY-05
piece handle=/disk2/oradata/PRODRMAN/rman/04gkahfk_1_14.bkp comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01
Finished backup at 13-MAY-05
Recovery Manager complete.
Recovery Manager: Release 9.2.0.1.0 - Production
Copyright (c) 1995, 2002, Oracle Corporation. All rights reserved.

connected to target database: PRODRMAN (DBID=1571350840)
connected to recovery catalog database

RMAN> list backupset;

List of Backup Sets
===================
BS Key  Type LV Size  Device Type Elapsed Time Completion Time
------- ---- -- ----- ----------- ------------ ---------------
675     Full    100M  DISK        00:00:06     13-MAY-05
        BP Key: 676   Status: AVAILABLE   Tag: TAG20050513T115927
        Piece Name: /disk2/oradata/PRODRMAN/rman/PRODRMAN_1.bkp
  Controlfile Included: Ckp SCN: 56960   Ckp time: 13-MAY-05
  List of Datafiles in backup set 675
  File LV Type Ckp SCN  Ckp Time  Name
  ---- -- ---- -------- --------- ----
  1       Full 56961    13-MAY-05 /disk2/oradata/PRODRMAN/system01.dbf
  2    0  Incr 57159    13-MAY-05 /disk2/oradata/PRODRMAN/undo.dbf
  3    0  Incr 57159    13-MAY-05 /disk2/oradata/PRODRMAN/userdata01.dbf
  4    0  Incr 57159    13-MAY-05 /disk2/oradata/PRODRMAN/userdata02.dbf
  5    0  Incr 57159    13-MAY-05 /disk2/oradata/PRODRMAN/userdata03.dbf

BS Key  Type LV Size  Device Type Elapsed Time Completion Time
------- ---- -- ----- ----------- ------------ ---------------
707     Incr 2  1M    DISK        00:00:02     13-MAY-05
        BP Key: 708   Status: AVAILABLE   Tag: TAG20050513T120912

After taking the two INCREMENTAL LEVEL & CUMULATIVE database backups, connect to the TARGET database and generate some archive logs by doing some transactions, so as to take another INCREMENTAL LEVEL backup.
SQL> delete from emp2 where rownum<2001;
SQL> commit;
SQL> select count(*) from emp1;
SQL> select count(*) from emp2;
2000 rows deleted.

Commit complete.

System altered.

COUNT(*)
----------
      3168

COUNT(*)
----------
Now we take another incremental backup (LEVEL 2) for these new transactions.
RMAN> backup database incremental level 2;
Recovery Manager: Release 9.2.0.1.0 - Production
Copyright (c) 1995, 2002, Oracle Corporation. All rights reserved.

connected to target database: PRODRMAN (DBID=1571350840)
connected to recovery catalog database

RMAN> backup incremental level 2 database;

Starting backup at 13-MAY-05
allocated channel: ORA_DISK_1
channel ORA_DISK_1: sid=12 devtype=DISK
channel ORA_DISK_1: starting incremental level 2 datafile backupset
channel ORA_DISK_1: specifying datafile(s) in backupset
including current controlfile in backupset
input datafile fno=00001 name=/disk2/oradata/PRODRMAN/system01.dbf
input datafile fno=00002 name=/disk2/oradata/PRODRMAN/undo.dbf
input datafile fno=00003 name=/disk2/oradata/PRODRMAN/userdata01.dbf
input datafile fno=00004 name=/disk2/oradata/PRODRMAN/userdata02.dbf
input datafile fno=00005 name=/disk2/oradata/PRODRMAN/userdata03.dbf
channel ORA_DISK_1: starting piece 1 at 13-MAY-05
channel ORA_DISK_1: finished piece 1 at 13-MAY-05
piece handle=/disk2/oradata/PRODRMAN/rman/05gkahoe_1_15.bkp comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01
Finished backup at 13-MAY-05
Recovery Manager complete.

Now check the log sequence number at the TARGET database.
SQL> select * from v$log;
GROUP# THREAD# SEQUENCE#  BYTES  MEMBERS ARC STATUS   FIRST_CHANGE# FIRST_TIM
------ ------- --------- ------- ------- --- -------- ------------- ---------
     1       1       675  307200       1 NO  CURRENT          57565 13-MAY-05
     2       1       674  307200       1 YES INACTIVE         57560 13-MAY-05

Before going on to RESTORE & RECOVERY, first check how much ARCHIVE LOG information is in your RMAN CATALOG.
SQL> conn rman/rman
SQL> select sequence# from rc_log_history order by sequence#;
       669
       670
       671
       672
       673
       674

674 rows selected.

NOTE: If the current log sequence number at the TARGET DATABASE and the sequence number in our CATALOG differ, then we have to resync our catalog.

RMAN> resync catalog;

Recovery Manager: Release 9.2.0.1.0 - Production
Copyright (c) 1995, 2002, Oracle Corporation. All rights reserved.

connected to target database: PRODRMAN (DBID=1571350840)
connected to recovery catalog database

RMAN> resync catalog;

starting full resync of recovery catalog
full resync complete
Recovery Manager complete.

Once more, select the ARCHIVE LOG info as the RMAN user.
SQL> conn rman/rman
SQL> select sequence# from rc_log_history order by sequence#;
       663
       664
       665
       666
       667
       668
       669
       670
       671
       672
       673
       674

674 rows selected.

Now we REMOVE all the FILES of the TARGET database and try to do RESTORE & RECOVERY.
$ cd /disk2/oradata/PRODRMAN $ rm *.ctl *.log *.dbf
total 44
-rwxrwxrwx   1 rahul       189 May 13 11:46 afiedt.buf
drwxrwxrwx   2 wstpack    4096 May 13 12:16 ARCH
drwxrwxrwx   2 wstpack    8192 May 13 11:46 bdump
drwxrwxrwx   2 wstpack    4096 Feb 13  2003 BKUP
drwxrwxrwx   2 wstpack    4096 Feb 15  2003 cdump
-rwxrwxrwx   1 rahul       827 May 13 11:46 initPRODRMAN.ora
-rwxrwxrwx   1 rahul    11:46 orapwPRODRMAN
-rwxrwxrwx   1 rahul    11:46 PRODRMAN.files
drwxrwxrwx   2 rahul    12:18 rman
drwxrwxrwx   2 wstpack  11:46 udump
To make an RMAN connection to the target database, the TARGET DATABASE must be at least in NOMOUNT state; if it is not, bring it to NOMOUNT first.
$ sqlplus "/as sysdba"
SQL> startup nomount;
First RESTORE the controlfile from the backup to its default location using RMAN.
RMAN> restore controlfile;
NOTE: To perform the recovery, enter (the LAST log seq. no from your RMAN CATALOG) + 1. The last log seq. no is 674, so enter 674 + 1 = 675.

Enter the LAST LOG SEQ.NO + 1 : 675
After performing the RECOVERY, OPEN the TARGET database.
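The transcript does not show the restore and recovery commands themselves (the demo's wrapper script prompts only for the sequence number). A minimal RMAN sequence for this scenario would look roughly like the following sketch; the UNTIL SEQUENCE value 675 is the one computed above, and the exact commands the script issues are an assumption:

```sql
RMAN> alter database mount;          # mount using the restored controlfile
RMAN> run
2> {
3> set until sequence 675 thread 1;  # recover up to, but not including, log seq 675
4> restore database;
5> recover database;
6> }
RMAN> alter database open resetlogs; # RESETLOGS is required after incomplete recovery
```

Because this is incomplete (point-in-time) recovery, the database must be opened with RESETLOGS, which is why the catalog has to be reset afterwards as shown below.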
Now try to select the Info from the TARGET database user.
SQL> conn scott/tiger
SQL> select count(*) from emp1;
SQL> select count(*) from emp2;
COUNT(*)
----------
      3168

NOTE: After opening the TARGET database with RESETLOGS, you have to RESET your RMAN CATALOG.
RMAN> reset database;
Whatever BACKUPS you have taken till now are OBSOLETE. You can see this info from the RMAN prompt.
RMAN> report obsolete;
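REPORT OBSOLETE only lists the obsolete backups; to actually remove them you run a DELETE command. A sketch (the backup set key 707 is just an example taken from the LIST BACKUPSET output earlier in this demo):

```sql
RMAN> delete obsolete;        # deletes everything REPORT OBSOLETE listed (asks Y/N)
RMAN> delete backupset 707;   # or delete one particular backup set by its BS key
```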
(It will ask you Y/N to delete all the backup sets.)
NOTE: If you want to delete a particular backup set, then DELETE BACKUPSET xxxx; (where xxxx is the KEY value of the BS).

We check whether the files in the BACKUP directory still exist.

$ ls -l /disk2/oradata/PRODRMAN/rman
total 209932
-rw-rw----   1 rahul   dba   105304064 May 13 12:05 02gkah03_1_12.bkp
-rw-rw----   1 rahul   dba     1288192 May 13 12:09 03gkah79_1_13.bkp
-rw-rw----   1 rahul   dba     1542144 May 13 12:13 04gkahfk_1_14.bkp
-rw-rw----   1 rahul   dba     1286144 May 13 12:18 05gkahoe_1_15.bkp
-rw-rw----   1 rahul   dba   105312256 May 13 11:59 PRODRMAN_1.bkp
DEMO ON LOGMINER
LogMiner is a powerful PL/SQL tool, available from Oracle8i onward, that can be used to extract valuable information from both the online and archived redo logs of a database. Because the redo logs record and track changes made to the database, the information contained in them can be valuable to the DBA.
LogMiner can be used to:
- Perform logical recovery by identifying and then undoing specific changes made by database transactions.
- Track specific sets of changes based on transaction, user, table, time, and so on. You can determine who modified a database object and what the object data was before and after the modification. This provides data security and control.
- Pinpoint when an incorrect modification was introduced into the database. This allows you to perform logical recovery at the application level instead of the database level.
- Provide supplemental information for tuning and capacity planning.
- Retrieve critical information for debugging complex applications.
Add the parameter utl_file_dir='/users/rahul' in init<SID>.ora and start the database in ARCHIVELOG mode.
Startup the database
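As a sketch, those two prerequisites amount to an init<SID>.ora entry plus a one-time switch into ARCHIVELOG mode (the directory value is the one this demo uses; dbms_logmnr_d.build needs utl_file_dir in order to write its dictionary file):

```sql
-- init<SID>.ora entry (an instance restart is needed for it to take effect):
--   utl_file_dir='/users/rahul'
SQL> startup mount
SQL> alter database archivelog;
SQL> alter database open;
```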
SQL> startup
ORACLE instance shut down.
ORACLE instance started.

Total System Global Area   25759704 bytes
Fixed Size                   450520 bytes
Variable Size              20971520 bytes
Database Buffers            4194304 bytes
Redo Buffers                 143360 bytes
Database mounted.
Database opened.
DEPTNO
----------
        10
        20
        30
        40
GROUP# THREAD# SEQUENCE#  BYTES  MEMBERS ARC STATUS   FIRST_CHANGE# FIRST_TIM
------ ------- --------- ------- ------- --- -------- ------------- ---------
     1       1        26  307200       2 NO  CURRENT        1133963 13-MAY-05
     2       1        25  307200       2 YES INACTIVE       1113760 13-MAY-05

Inserting multiple rows into table U_SCOTT.emp
SQL> insert into emp select * from emp;
Fri May 13 12:47:10 IST 2005

7168 rows created.

Commit complete.

Now we update a row in the DEPT table and commit it.
SQL> update dept set dname='XXXXXX' where deptno=10;
Please note the time.....
Meanwhile some other transaction also takes place: someone inserts more rows into Scott's emp table and commits the records.
SQL> insert into emp select * from emp;
SQL> commit;
Inserting Rows. Wait.....

14336 rows created.

Commit complete.

Checking the archive log info
SQL> select * from v$log;
GROUP# THREAD# SEQUENCE#  BYTES  MEMBERS ARC STATUS   FIRST_CHANGE# FIRST_TIM
------ ------- --------- ------- ------- --- -------- ------------- ---------
     1       1        30  307200       2 NO  CURRENT        1134274 13-MAY-05
     2       1        29  307200       2 YES INACTIVE       1134246 13-MAY-05

View the contents of the log from the history.
SQL> select sequence#, to_char(first_time,'yyyy/mm/dd:hh24:mi:ss')
     from v$loghist
     where to_date(first_time,'dd-mon-yy') = to_date(sysdate,'dd-mon-yy')
     and sequence# > (select max(sequence#)-30 from v$loghist);
        27
        28
        29
        30
9 rows selected.

Enter the FIRST Sequence number : 25
Enter the LAST Sequence number : 30

Now build the LogMiner dictionary using the packages.
SQL> exec dbms_logmnr_d.build('dict.ora','/tmp/rahul');
Add the first logfile to the log set to be analyzed.

SQL> exec dbms_logmnr.add_logfile('/disk2/oradata/rahuldb/arch/T0001S0000000025.ARC', dbms_logmnr.new);

PL/SQL procedure successfully completed.

Add further logfiles to the log set to be analyzed.
begin
dbms_logmnr.add_logfile('/disk2/oradata/rahuldb/arch/T0001S0000000026.ARC', dbms_logmnr.addfile);
dbms_logmnr.add_logfile('/disk2/oradata/rahuldb/arch/T0001S0000000027.ARC', dbms_logmnr.addfile);
dbms_logmnr.add_logfile('/disk2/oradata/rahuldb/arch/T0001S0000000028.ARC', dbms_logmnr.addfile);
SEQUENCE# TO_CHAR(FIRST_TIME,
---------- -------------------
        22 2005/05/13:12:34:46
        23 2005/05/13:12:37:32
        24 2005/05/13:12:37:33
        25 2005/05/13:12:37:38
        26 2005/05/13:12:45:11
        27 2005/05/13:12:48:08
        28 2005/05/13:12:50:48
        29 2005/05/13:12:50:51
        30 2005/05/13:12:50:55
        31 2005/05/13:13:05:41
        32 2005/05/13:13:38:42
        33 2005/05/13:13:38:57
        34 2005/05/13:13:39:00
        35 2005/05/13:13:39:06
        36 2005/05/13:13:39:17
        37 2005/05/13:13:39:28
        38 2005/05/13:13:39:39
        39 2005/05/13:13:39:42
        40 2005/05/13:13:39:45

19 rows selected.
end;
/

PL/SQL procedure successfully completed.

Start spool on. Query the V$LOGMNR_CONTENTS view to get undo and redo information.
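Before V$LOGMNR_CONTENTS returns anything, the mining session has to be started against the dictionary file built earlier with dbms_logmnr_d.build; the transcript omits this call. A typical invocation (the dictionary path is assumed from the build step above):

```sql
SQL> exec dbms_logmnr.start_logmnr(dictfilename => '/tmp/rahul/dict.ora');
```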
SQL> select to_char(timestamp,'hh24:mi:ss') "time", username, sql_redo, sql_undo
     from v$logmnr_contents
     where seg_name='DEPT';
no rows selected

[Now use the SQL_UNDO column data to change the COLUMN VALUE back.]
Before updating the rows, we first select the table.
SQL> select * from dept;
        10
        20
        30
        40
Now we update the modified row with the UNDO value. Then we select the rows once more to confirm.
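The update itself is not shown in the transcript; the SQL_UNDO text for the earlier change is essentially its reverse, something like the following (the literal 'ACCOUNTING' is inferred from the demo's closing remark, so treat this as an assumed reconstruction):

```sql
SQL> update dept set dname = 'ACCOUNTING' where deptno = 10;
SQL> commit;
```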
SQL> select * from dept;
        40 OPERATIONS     BOSTON
[See — the column is modified back to ACCOUNTING.]
Stop the log mining activity.

SQL> exec dbms_logmnr.end_logmnr;
DEMO ON TRANSPORTABLE TABLESPACE
A demo on TRANSPORTABLE TABLESPACES using Oracle 8i's Transport_Tablespace option in Export.
Name: Madhu  Date: 5/10/2001
Desc: Performing Oracle 8i's new feature, Transportable Tablespace.
The idea behind this project: copy all objects of a tablespace (TS) from one DB to another DB. Prior to this new feature, this could only be done by performing EXP on all objects in that TS and taking the DMP file to the target DB. But that process requires listing all the tables in that TS (perhaps by visiting DBA_TABLES, listing all the tables with "where tablespace_name = 'WHATEVER-TS'", spooling these table names to a file, making this file a PARFILE, and performing EXP). This is a tedious task to handle. The new alternative is implementing T-TS.
STEPS TO TRANSPORT A TABLESPACE
1. Take an EXP of the metadata of the TS while the TS is in READ ONLY mode.
2. Copy all the DBF files that belong to that TS to the target DB. At this time the target DB should be UP, but it will not recognize these new DBF files since they do not belong to that DB.
3. Copy the DMP file we created from the source DB to the target DB.
4. Import the DMP file in the target DB using the new Transportable Tablespace option (8i). After performing this import, the new tablespace will be recognized by the target DB as a 'PLUGGED_IN' TS.
5. After this we can alter the TS to READ WRITE mode.
We are interested in transporting the TTS_DEMO TS to the target DB sidd2.
Selecting the tablespace info from the source database [PRODRMAN].
Page 342
SQL> select a.file_name, a.tablespace_name, b.status, b.plugged_in
     from dba_data_files a, dba_tablespaces b
     where a.tablespace_name = b.tablespace_name;

DBF-Name                                    Status   Plugged-In
------------------------------------------- -------- ----------
/disk2/oradata/PRODRMAN/system01.dbf        ONLINE   NO
/disk2/oradata/PRODRMAN/undo.dbf            ONLINE   NO
/disk2/oradata/PRODRMAN/userdata01.dbf      ONLINE   NO
/disk2/oradata/PRODRMAN/userdata02.dbf      ONLINE   NO
/disk2/oradata/PRODRMAN/userdata03.dbf      ONLINE   NO
/disk2/oradata/PRODRMAN/tts_demo01.dbf      ONLINE   NO

6 rows selected.

Now make the TTS_DEMO TS READ ONLY and EXPORT the metadata of the TS.

SQL> alter tablespace tts_demo read only;

Tablespace altered.

Now perform the export of the tablespace using the EXPORT utility.

$ exp parfile=tts_par.file

The contents of tts_par.file are:
file=exp_tts_demo.dmp
log=exp_tts_demo.log
transport_tablespace=y
tablespaces=tts_demo
PLEASE TYPE THE USERNAME AS "/ as sysdba" AT THE USER PROMPT.
-------------------------------------------------------------
Export: Release 9.2.0.1.0 - Production on Sat May 14 11:06:38 2005
Copyright (c) 1982, 2002, Oracle Corporation. All rights reserved.

Username: / as sysdba
Connected to: Oracle9i Enterprise Edition Release 9.2.0.1.0 - Production
With the Partitioning, OLAP and Oracle Data Mining options
JServer Release 9.2.0.1.0 - Production

Export done in US7ASCII character set and AL16UTF16 NCHAR character set
Note: table data (rows) will not be exported
About to export transportable tablespace metadata...
For tablespace TTS_DEMO ...
. exporting cluster definitions
. exporting table definitions
. . exporting table                            EMP
. . exporting table                           DEPT
. . exporting table                          BONUS
. . exporting table                       SALGRADE
. . exporting table                          DUMMY
. exporting referential integrity constraints
. exporting triggers
. end transportable tablespace metadata export
Export done successfully with warning.

Now copying all the DBF files to the target DB.

$ cp /disk2/oradata/PRODRMAN/tts_demo01.dbf /disk2/oradata/RIZ/
Copying the Files....
Copied.

At this point we are done with the source DB, i.e. PRODRMAN. Now we connect to the TARGET database [sidd2] and perform the IMPORT.
Observing the existing TS in the target DB before performing T-TS: selecting the tablespace info.

SQL> select a.file_name, a.tablespace_name, b.status, b.plugged_in
     from dba_data_files a, dba_tablespaces b
     where a.tablespace_name = b.tablespace_name;

DBF-Name                        TS-Name   Status   Plugged-In
------------------------------- --------- -------- ----------
/disk3/oradata/RIZ/system.dbf   SYSTEM    ONLINE   NO
/disk3/oradata/RIZ/undotbs.dbf  UNDOTBS   ONLINE   NO
/disk3/oradata/RIZ/users01.dbf  USERS     ONLINE   NO
/disk3/oradata/RIZ/dg1.dbf      DG        ONLINE   NO
/disk3/oradata/RIZ/rman.dbf     RMAN      ONLINE   NO

5 rows selected.

Now we try to get the new TS (TTS_DEMO) into this target DB [sidd2] by following these steps:

$ imp parfile=tts_imp.par
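The contents of tts_imp.par are not shown in the transcript. Based on the export parameter file above and the datafile location in the later listings, it would look roughly like this — every line here is an assumed reconstruction, not taken from the demo:

```text
file=exp_tts_demo.dmp
log=imp_tts_demo.log
transport_tablespace=y
datafiles=/disk3/oradata/RIZ/tts_demo01.dbf
tts_owners=user_tts
```

DATAFILES tells import where the copied DBF file now lives, and TTS_OWNERS names the schema owning the objects, which is why the import below fails until USER_TTS exists.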
PLEASE TYPE THE USERNAME AS "/ as sysdba" AT THE USER PROMPT.
-------------------------------------------------------------
Import: Release 9.2.0.1.0 - Production on Sat May 14 11:11:47 2005
Copyright (c) 1982, 2002, Oracle Corporation. All rights reserved.

Username: / as sysdba

See — it gave an error [ORA-29342] that the required USER does not exist. First create the USER, then import again.
Create the user [USER_TTS]:

SQL> create user USER_TTS identified by USER_TTS;

User created.

Now trying again to get the new TS (TTS_DEMO) into this target DB [RIZ].

$ imp parfile=tts_imp.par

PLEASE TYPE THE USERNAME AS "/ as sysdba" AT THE USER PROMPT.
-------------------------------------------------------------
Import: Release 9.2.0.1.0 - Production on Sat May 14 11:27:18 2005
Copyright (c) 1982, 2002, Oracle Corporation. All rights reserved.

Username: / as sysdba
Connected to: Oracle9i Enterprise Edition Release 9.2.0.1.0 - Production
With the Partitioning, OLAP and Oracle Data Mining options
JServer Release 9.2.0.1.0 - Production
Export file created by EXPORT:V09.02.00 via conventional path
About to import transportable tablespace(s) metadata...
import done in US7ASCII character set and AL16UTF16 NCHAR character set
. importing SYS's objects into SYS
. importing USER_TTS's objects into USER_TTS
. . importing table                          "EMP"
. . importing table                         "DEPT"
. . importing table                        "BONUS"
. . importing table                     "SALGRADE"
. . importing table                        "DUMMY"
Import done successfully with warning.

As we have finished performing the import, let's observe the details from DBA_TABLESPACES and DBA_DATA_FILES again.

SQL> select a.file_name, a.tablespace_name, b.status, b.plugged_in
     from dba_data_files a, dba_tablespaces b
     where a.tablespace_name = b.tablespace_name;

DBF-Name                           TS-Name   Status    Plugged-In
---------------------------------- --------- --------- ----------
/disk3/oradata/RIZ/system.dbf      SYSTEM    ONLINE    NO
/disk3/oradata/RIZ/undotbs.dbf     UNDOTBS   ONLINE    NO
/disk3/oradata/RIZ/users01.dbf     USERS     ONLINE    NO
/disk3/oradata/RIZ/dg1.dbf         DG        ONLINE    NO
/disk3/oradata/RIZ/rman.dbf        RMAN      ONLINE    NO
/disk3/oradata/RIZ/tts_demo01.dbf  TTS_DEMO  READ ONLY YES

6 rows selected.

As we can see, the new TS is already 'Plugged-In', but it is still in READ ONLY mode; we can alter it to READ WRITE mode.

SQL> alter tablespace TTS_DEMO read write;

Tablespace altered.

As we have finished altering the tablespace, let's observe the details from DBA_TABLESPACES and DBA_DATA_FILES again.

SQL> select a.file_name, a.tablespace_name, b.status, b.plugged_in
     from dba_data_files a, dba_tablespaces b
     where a.tablespace_name = b.tablespace_name;

DBF-Name                           TS-Name   Status   Plugged-In
---------------------------------- --------- -------- ----------
/disk3/oradata/RIZ/system.dbf      SYSTEM    ONLINE   NO
/disk3/oradata/RIZ/undotbs.dbf     UNDOTBS   ONLINE   NO
/disk3/oradata/RIZ/users01.dbf     USERS     ONLINE   NO
/disk3/oradata/RIZ/dg1.dbf         DG        ONLINE   NO
/disk3/oradata/RIZ/rman.dbf        RMAN      ONLINE   NO
/disk3/oradata/RIZ/tts_demo01.dbf  TTS_DEMO  ONLINE   YES

6 rows selected.
DEMO ON SQL*LOADER
SQL*Loader's sole purpose is to read data from a flat file and place that data into an Oracle database. In spite of having such a singular purpose, SQL*Loader is one of Oracle's most versatile utilities. Using SQL*Loader, you can do the following:
- Load data from a delimited text file.
- Load data from a fixed-width text file.
- Combine multiple input records into one logical record.
- Store data from one logical record into one table or into multiple tables.
- Filter the data in the input file, loading only selected records.
- Collect bad records, that is, records which failed because of a datatype mismatch.
- Increase the loading performance by using options such as Direct Path Loading and Parallel Loading.
To invoke SQL*Loader the command is:
$ sqlldr <options>
Options:
userid     : Username and Password
control    : Control file name
log        : Log file name
bad        : Bad file name
data       : Data file name
discard    : Discard file name
discardmax : Number of discards to allow
skip       : Number of logical records to skip
load       : Number of logical records to load
errors     : Number of errors to allow
parfile    : Parameter file
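These command-line options can also be collected in a parameter file and passed with parfile=, which keeps long invocations readable. A sketch — the file name and option values below are illustrative, not taken from the demo:

```text
userid=u_scott/u_scott
control=case1.ctl
log=case1.log
errors=50
```

With the above saved as loader.par, the invocation becomes: $ sqlldr parfile=loader.par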
Control File
The control file starts with the LOAD DATA statement. It contains a number of commands and clauses describing the data that SQL*Loader is reading. The control file describes the format of the data in the input file and which tables and columns to populate with that data. The default extension is .ctl.
INFILE specifies the external filename where the data is found. If the data is included in the control file itself, supply an asterisk (*) for the filename in the INFILE clause; the last clause in the control file must then be the BEGINDATA clause. SQL*Loader will begin reading data from the line immediately following BEGINDATA. The default extension is .dat.
INTO TABLE names the table into which the data is to be loaded. By default SQL*Loader requires the table to be empty before it inserts any records.
INSERT specifies that you are loading an empty table. SQL*Loader will abort the load if the table contains data to start with.
APPEND specifies that you are adding data to a table. SQL*Loader will proceed with the load even if preexisting data is in the table.
REPLACE specifies that you want to replace the data in the table. Before loading, SQL*Loader will delete any existing records.
TRUNCATE specifies the same as REPLACE, but SQL*Loader uses a TRUNCATE statement instead of a DELETE statement.
CASE - 1
Loading the data from a comma-delimited file into the table DEPT.
The input file is:
$ vi case1.dat
12,Research,saratoga
12,Purchase,Chicago
11,Art,Salem
22,Sales,Phila
12,Production,Texas
30,StockHolding,California
40,Finance,NewYork
36,Accounts,Mexico
15,Payroll,Maryland
17,CustomerService,Boston
:wq

Creating the control file for the external file...
$ vi case1.ctl
load data
infile 'case1.dat'
badfile 'case1.bad'
discardfile 'case1.dis'
append
into table dept
when deptno != '30'
fields terminated by ','
(deptno,dname,loc)
:wq

Now loading the data from the file 'case1.dat' into the table 'DEPT' using the control file case1.ctl for the user 'U_SCOTT':
$ sqlldr userid=u_scott/u_scott control=case1.ctl log=case1.log

SQL*Loader: Release 9.2.0.1.0 - Production on Fri May 13 15:29:06 2005
Copyright (c) 1982, 2002, Oracle Corporation. All rights reserved.

Commit point reached - logical record count 10
Checking in the 'DEPT' table whether the records were APPENDED, by connecting as the user 'U_SCOTT':
$ sqlplus u_scott/u_scott

Selecting the records from DEPT:
SQL> select * from dept;

DEPTNO DNAME           LOC
------ --------------- -------------
    10 ACCOUNTING      NEW YORK
    20 RESEARCH        DALLAS
    30 SALES           CHICAGO
    40 OPERATIONS      BOSTON
    12 Research        saratoga
    12 Purchase        Chicago
    11 Art             Salem
    22 Sales           Phila
    12 Production      Texas
    40 Finance         NewYork
    36 Accounts        Mexico
    15 Payroll         Maryland
12 rows selected.

CASE - 2
Loading the data from a position-based flat file into the table EMP.
The input file is:
$ vi case2.dat
12345678901234567890123456789012345678901234567890
7782 CLARK      MANAGER   7839  2572.50          10
7839 KING       PRESIDENT       5500.00          10
7934 MILLER     CLERK     7782   920.00          10
Note: The first line in this file is only there to help identify the positions of the columns.
Ex: the 'EMPNO' column occupies positions 1 - 4,
    the 'ENAME' column occupies positions 6 - 15,
    the 'JOB' column occupies positions 17 - 25.

Creating the control file for the flat file...
$ vi case2.ctl
LOAD DATA
INFILE 'ulcase2.dat'
APPEND
INTO TABLE EMP
( EMPNO  POSITION(01:04) INTEGER EXTERNAL,
  ENAME  POSITION(06:15) CHAR,
  JOB    POSITION(17:25) CHAR,
  MGR    POSITION(27:30) INTEGER EXTERNAL,
  SAL    POSITION(32:39) DECIMAL EXTERNAL,
  COMM   POSITION(41:48) DECIMAL EXTERNAL,
  DEPTNO POSITION(50:51) INTEGER EXTERNAL)

Now loading the data from the file 'case2.dat' into the table 'EMP' using the control file case2.ctl for the user 'U_SCOTT':
$ sqlldr userid=u_scott/u_scott control=case2.ctl
SQL*Loader: Release 9.2.0.1.0 - Production on Fri May 13 15:36:49 2005
Copyright (c) 1982, 2002, Oracle Corporation. All rights reserved.

Commit point reached - logical record count 7
Now selecting the records from the table EMP:
SQL> select * from emp;

EMPNO ENAME      JOB        MGR  HIREDATE    SAL   COMM  DEPTNO
----- ---------- --------- ----- --------- ----- ------ -------
 7369 SMITH      CLERK      7902 17-DEC-80   800             20
 7499 ALLEN      SALESMAN   7698 20-FEB-81  1600    300      30
 7521 WARD       SALESMAN   7698 22-FEB-81  1250    500      30
 7566 JONES      MANAGER    7839 02-APR-81  2975             20
 7654 MARTIN     SALESMAN   7698 28-SEP-81  1250   1400      30
 7698 BLAKE      MANAGER    7839 01-MAY-81  2850             30
 7782 CLARK      MANAGER    7839 09-JUN-81  2450             10
 7788 SCOTT      ANALYST    7566 09-DEC-82  3000             20
 7839 KING       PRESIDENT       17-NOV-81  5000             10
 7844 TURNER     SALESMAN   7698 08-SEP-81  1500      0      30
 7876 ADAMS      CLERK      7788 12-JAN-83  1100             20
 7900 JAMES      CLERK      7698 03-DEC-81   950             30
 7902 FORD       ANALYST    7566 03-DEC-81  3000             20
 7934 MILLER     CLERK      7782 23-JAN-82  1300             10
 7782 CLARK      MANAGER    7839           2572.5            10
 7839 KING       PRESIDENT                  5500             10
 7934 MILLER     CLERK                       920             10
 7566 JONES      MANAGER                 3123.75             20
 7499 ALLEN      SALESMAN                   1600    300      30
 7654 MARTIN     SALESMAN                 1312.5   1400      30
 7658 CHAN       ANALYST                    3450             20

21 rows selected.

CASE - 3
Using SQL*Loader to insert one record of a flat file into multiple tables.
Ex: One employee may be involved in one or more projects. The records in the flat file contain the employee information and the projects involved. Generate a control file to load the employee information into the 'EMP' table and the related project information into the 'PROJ' table.
SQL> desc proj
PROJ Table
Name                                      Null?    Type
----------------------------------------- -------- ------
EMPNO                                              NUMBER
PROJNO                                             NUMBER

The flat file 'case3.dat':
$ vi case3.dat
1234 BAKER      10 9999 101
1234 JOKER      10 9999 777
2664 YOUNG      20 2893 425
5321 OTOOLE     10 9999 321
2134 FARMER     20 4555 236
2414 LITTLE     20 5634 236
6542 LEE        10 4532 102
2849 EDDS       xx 4555 294
4532 PERKINS    10 9999 40
1244 HUNT       11 3452 665
123  DOOLITTLE  12 9940
1453 MACDONALD  25 5532
Creating a control file to load the data of the above flat file 'case3.dat' into tables 'EMP' and 'PROJ'.
$ vi case3.ctl
LOAD DATA
INFILE 'case3.dat'
BADFILE 'case3.bad'
DISCARDFILE 'case3.dis'
REPLACE
INTO TABLE EMP
(EMPNO  POSITION(1:4),
 ENAME  POSITION(6:15),
 DEPTNO POSITION(17:18),
 MGR    POSITION(20:23))
INTO TABLE PROJ
WHEN PROJNO != ' '
(EMPNO  POSITION(1:4),
 PROJNO POSITION(25:27))
Now loading (REPLACE) the data from the file 'case3.dat' into tables 'EMP' and 'PROJ' using the control file case3.ctl for the user 'U_SCOTT'
$ sqlldr userid=u_scott/u_scott control=case3.ctl log=case3.log

SQL*Loader: Release 9.2.0.1.0 - Production on Fri May 13 15:44:23 2005
Copyright (c) 1982, 2002, Oracle Corporation. All rights reserved.
Commit point reached - logical record count 12

Selecting the records from the EMP and PROJ tables.
Replaced records in EMP
SQL> select * from emp;

EMPNO ENAME      JOB        MGR HIREDATE      SAL   COMM DEPTNO
----- ---------- --------- ---- --------- ------- ------ ------
 1234 BAKER                9999                              10
 2664 YOUNG                2893                              20
 5321 OTOOLE               9999                              10
 2134 FARMER               4555                              20
 2414 LITTLE               5634                              20
 6542 LEE                  4532                              10
 4532 PERKINS              9999                              10
 1244 HUNT                 3452                              11
  123 DOOLITTLE            9940                              12
 1453 MACDONALD            5532                              25

10 rows selected.

Replaced records in PROJ
SQL> select * from proj;

     EMPNO     PROJNO
---------- ----------
      1234        101
      2664        425
      5321        321
      2134        236
      2414        236
      6542        102
      4532         40
      1244        665

8 rows selected.
CASE - 4
Using SQL*LOADER to load data into the tables DEPT and DEPT1 for selected columns only. DEPT1 has a structure similar to DEPT but, as the DESC output shows, without the DNAME column.
SQL> desc dept
 Name                                      Null?    Type
 ----------------------------------------- -------- ------------
 DEPTNO                                             NUMBER(2)
 DNAME                                              VARCHAR2(14)
 LOC                                                VARCHAR2(13)

SQL> desc dept1
 Name                                      Null?    Type
 ----------------------------------------- -------- ------------
 DEPTNO                                             NUMBER(3)
 LOC                                                VARCHAR2(20)

The infile is 'case4.dat'
$ vi case4.dat
12,Research,saratoga
11,Art,Salem
12,Production,Texas
30,StockHolding,California
40,Finance,NewYork
15,Payroll,Maryland
17,CustomerService,Boston

Creating the control file 'case4.ctl' to load the data into DEPT and DEPT1
$ vi case4.ctl
load data
infile 'case4.dat'
append
into table dept
(deptno integer external terminated by ',',
 dname  char terminated by ',',
 loc    char terminated by whitespace)
into table dept1
(deptno position(1) integer external terminated by ',',
 dname  FILLER terminated by ',',
 loc    char terminated by whitespace)

Note - The FILLER clause should be used whenever you want to skip a column.

Now loading the data from the file 'case4.dat' into tables 'DEPT' and 'DEPT1' using the control file case4.ctl for the user 'U_SCOTT'
$ sqlldr userid=u_scott/u_scott control=case4.ctl log=case4.log

SQL*Loader: Release 9.2.0.1.0 - Production on Fri May 13 15:49:52 2005
Copyright (c) 1982, 2002, Oracle Corporation. All rights reserved.
Commit point reached - logical record count 7

Selecting the records from DEPT
SQL> select * from dept;

    DEPTNO DNAME          LOC
---------- -------------- -------------
        10 ACCOUNTING     NEW YORK
        20 RESEARCH       DALLAS
        30 SALES          CHICAGO
        40 OPERATIONS     BOSTON
        12 Research       saratoga
        12 Purchase       Chicago
        11 Art            Salem
        22 Sales          Phila
        12 Production     Texas
        40 Finance        NewYork
        36 Accounts       Mexico
        15 Payroll        Maryland
        12 Research       saratoga
        11 Art            Salem
        12 Production     Texas
        30 StockHolding   California
        40 Finance        NewYork
        15 Payroll        Maryland

18 rows selected.

Selecting the records from DEPT1
SQL> select * from dept1;

    DEPTNO LOC
---------- --------------------
        12 saratoga
        11 Salem
        12 Texas
        30 California
        40 NewYork
        15 Maryland

6 rows selected.
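A hypothetical cross-check one might run after this case (it is not part of the original demo): since both tables were loaded from the same records, every (DEPTNO, LOC) pair in DEPT1 should also exist in DEPT, so the following query should return no rows.

```sql
-- Sketch: verify the selected-column load in DEPT1 is consistent with
-- the full rows appended to DEPT by the same run.
SELECT deptno, loc FROM dept1
MINUS
SELECT deptno, loc FROM dept;
```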
DEMO ON ROW MIGRATION AND ROW CHAINING

Migration and Chaining of Rows

Row Chaining : When the data for a row in a table is too large to fit into a single data block, the Oracle server stores the data for the row in a chain of data blocks; the row is chained even when inserted into an empty block, because no single block can hold it. This process is called Row Chaining.

Row Migration : When an UPDATE statement increases the amount of data in a row so that the row no longer fits in its data block, the Oracle server tries to find another block with enough free space to hold the entire row. If such a block is available, the Oracle server moves the entire row to the new block and keeps the original row piece as a pointer to the new block; the rowid of the migrated row does not change. Indexes are therefore not updated and still point to the original row location. This process is called Row Migration.

Migration and chaining have a negative effect on performance. INSERT and UPDATE statements that cause migration and chaining perform poorly, because they perform additional processing. Queries that use an index to select migrated or chained rows must perform additional I/Os.

Causes of Row Migration and Row Chaining
Row Migration is caused when PCTFREE is set too low and there is not enough room in the block for updates. To avoid migration, all tables that are updated should have their PCTFREE set so that there is enough space within the block for updates.
Row Chaining is caused when a row in a table is too large to fit into an Oracle data block. To avoid row chaining, calculate the average row length and create the table or cluster in a tablespace with a correspondingly large block size.

Detecting Row Migration and Row Chaining
You can detect the existence of migrated and chained rows in a table or cluster by using the ANALYZE command with the COMPUTE
STATISTICS option. This command counts the number of migrated and chained rows and places this information in the CHAIN_CNT column of DBA_TABLES. The NUM_ROWS column provides the number of rows stored in the analyzed table or cluster. Compute the ratio of chained and migrated rows to the number of rows to decide whether you need to eliminate migrated rows. You can also detect migrated and chained rows by checking the 'table fetch continued row' statistic in the V$SYSSTAT view.

DEMO ON ROW MIGRATION

Create a table with a low PCTFREE value
SQL> create table migemp pctfree 5 as select * from emp;
Table created.

Inserting more rows into the migemp table by selecting from itself
SQL> insert into migemp select * from migemp;
1000 rows created.

SQL> commit;
Commit complete.

Modify one of the columns to a larger size
SQL> alter table migemp modify ename varchar2(2000);
Table altered.
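As an aside, the 'table fetch continued row' statistic mentioned above can be sampled with a query like the following (a sketch, not part of the original demo; run it before and after a workload and compare the values):

```sql
-- Sketch: each fetch that has to follow a chained or migrated row piece
-- to another block increments this system-wide counter.
SELECT name, value
FROM   v$sysstat
WHERE  name = 'table fetch continued row';
```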
Analyze the table to find the no. of rows, average row length, empty blocks, blocks used, and the chain count recorded in the data dictionary view dba_tables
SQL> analyze table migemp compute statistics;
Table analyzed.
Checking the details from the data dictionary view dba_tables, especially the chain_cnt column
SQL> select table_name, avg_row_len, blocks, empty_blocks, chain_cnt
     from dba_tables where table_name='MIGEMP';

TABLE_NAME                     AVG_ROW_LEN    BLOCKS EMPTY_BLOCKS  CHAIN_CNT
------------------------------ ----------- --------- ------------ ----------
MIGEMP                                  41       234           55          0

Update the migemp table (ename column) with a value larger than the present value
SQL> update migemp set ename='This is a test for checking Row Migration which is caused when PCTFREE is set to a low value' where empno=7566;
720 rows updated.

Analyze the table again to refresh the statistics
SQL> analyze table migemp compute statistics;
Table analyzed.

Checking the details again, especially the chain_cnt column
SQL> select table_name, avg_row_len, blocks, empty_blocks, chain_cnt
     from dba_tables where table_name='MIGEMP';

TABLE_NAME                     AVG_ROW_LEN    BLOCKS EMPTY_BLOCKS  CHAIN_CNT
------------------------------ ----------- --------- ------------ ----------
MIGEMP                                  47       279           10        480

To get information about each migrated or chained row we need to create the output table CHAINED_ROWS, which is created by running the script UTLCHAIN.SQL

Utlchain.sql:
rem
rem $Header: utlchain.sql 07-may-96.19:40:01 sbasu Exp $
rem
Rem Copyright (c) 1990, 1995, 1996, 1998 by Oracle Corporation
Rem    NAME
Rem      UTLCHAIN.SQL
Rem    FUNCTION
Rem      Creates the default table for storing the output of the
Rem      analyze list chained rows command
Rem    NOTES
Rem    MODIFIED
Rem     syeung     06/17/98 - add subpartition_name
Rem     mmonajje   05/21/96 - Replace timestamp col name with analyze_timestamp
Rem     sbasu      05/07/96 - Remove echo setting
Rem     ssamu      08/14/95 - merge PTI with Objects
Rem     ssamu      07/24/95 - add field for partition name
Rem     glumpkin   10/19/92 - Renamed from CHAINROW.SQL
Rem     ggatlin    03/09/92 - add set echo on
Rem     rlim       04/29/91 - change char to varchar2
Rem     Klein      01/10/91 - add owner name for chained rows
Rem     Klein      12/04/90 - Creation
Rem

create table CHAINED_ROWS (
  owner_name         varchar2(30),
  table_name         varchar2(30),
  cluster_name       varchar2(30),
  partition_name     varchar2(30),
  subpartition_name  varchar2(30),
  head_rowid         rowid,
  analyze_timestamp  date
);

SQL> @$ORACLE_HOME/rdbms/admin/utlchain.sql
Table dropped.
Table created.

SQL> desc chained_rows
 Name                                      Null?    Type
 ----------------------------------------- -------- ------------
 OWNER_NAME                                         VARCHAR2(30)
 TABLE_NAME                                         VARCHAR2(30)
 CLUSTER_NAME                                       VARCHAR2(30)
 PARTITION_NAME                                     VARCHAR2(30)
 SUBPARTITION_NAME                                  VARCHAR2(30)
 HEAD_ROWID                                         ROWID
 ANALYZE_TIMESTAMP                                  DATE
You can identify migrated and chained rows in a table or cluster by using the ANALYZE command with the LIST CHAINED ROWS option. This command collects information about each migrated or chained row and places it into a specified output table.
SQL> analyze table migemp list chained rows into chained_rows;
Table analyzed.

The rowid information of the chained or migrated rows is collected into the CHAINED_ROWS table. You can query the no. of chained rows collected
SQL> select count(head_rowid) from chained_rows where table_name='MIGEMP';

COUNT(HEAD_ROWID)
-----------------
              480

Create a new table holding the rows that are migrated or chained, using the information in the CHAINED_ROWS table
SQL> create table migbak as select * from migemp
     where rowid in (select head_rowid from chained_rows
                     where table_name='MIGEMP');
Table created.

Delete the chained or migrated rows from the original table, now that they are preserved in migbak
SQL> delete migemp where rowid in (select head_rowid from chained_rows
     where table_name='MIGEMP');
480 rows deleted.

Insert all the rows back into the migemp table from the migbak table
SQL> insert into migemp select * from migbak;
480 rows created.

Analyze the table again to refresh the statistics
SQL> analyze table migemp compute statistics;
Table analyzed.

Checking the details from the data dictionary view dba_tables, especially the chain_cnt column
SQL> select table_name, avg_row_len, blocks, empty_blocks, chain_cnt
     from dba_tables where table_name='MIGEMP';

TABLE_NAME                     AVG_ROW_LEN    BLOCKS EMPTY_BLOCKS  CHAIN_CNT
------------------------------ ----------- --------- ------------ ----------
MIGEMP                                  47       279           10          0

Drop the table that was used for swapping the chained or migrated rows out of the original table
SQL> drop table migbak;
Table dropped.

To prevent row migration in future transactions, increase PCTFREE to a higher value
SQL> alter table migemp pctfree 20;
Table altered.
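A hypothetical follow-up check, not in the original demo: the new PCTFREE setting can be confirmed from the data dictionary.

```sql
-- Sketch: PCT_FREE in USER_TABLES reflects the ALTER TABLE ... PCTFREE above.
SELECT table_name, pct_free
FROM   user_tables
WHERE  table_name = 'MIGEMP';
```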