Oracle 11g DBA Handouts Book-3
Figure 1-1. High-Level Overview of Oracle Data Guard
To implement the availability option using Oracle 7/8/8i/9i, we need to ensure that there is a second server with:
1. Similar hardware (possibly with fewer CPUs or less memory).
2. The same operating system, including patches.
3. The same Oracle version, including patches.
In the case of Oracle 8i, we can configure all the clients with an alias containing a failover option targeting the standby server. Thus, the moment the production server goes down, we can immediately cancel managed recovery mode (MRM) on the standby server and open the database as a regular database (no longer treating it as a hot standby).
Since this database was running in managed recovery mode, recovery has already been performed with the archived log files generated by the production server. With the 8i option we no longer need to spend time on recovery. Crystal Reports, statistics analysis, and other time-consuming processing can be done on the second (hot standby) server. Without spending time on the primary server for backup processes, we can back up the hot standby by shutting its database down (cold backup).
Oracle 11g – Dataguard Page 10 of 242
WK: 5 - Day: 3
75.2. Concepts
1. Analyzing high availability and disaster protection.
2. Two loosely connected sites (over Ethernet), primary and standby, combine into a single, easily managed disaster recovery solution.
3. Uniform management solution.
4. Support for both GUI and CLI interfaces.
5. Automated creation and configuration of physical standby databases.
6. Oracle Data Guard's log transport ships the data in the form of archived log files to the standby database.
7. Failover and switchover automation.
8. Monitoring, alert, and control mechanisms.
75.8.4. Automatic deletion of applied archived redo log files in logical standby DB’s
Archived logs are automatically deleted once they have been applied on the logical standby database, reducing storage consumption on the logical standby and improving Data Guard manageability. Physical standby databases have had this functionality since Oracle Database 10g Release 1, when a flash recovery area is used.
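This behavior can be adjusted on the logical standby through the DBMS_LOGSTDBY package; the following is an illustrative sketch (verify the LOG_AUTO_DELETE parameter name and values against the PL/SQL packages reference for your release):
SQL> -- Show the current setting of automatic log deletion
SQL> SELECT VALUE FROM DBA_LOGSTDBY_PARAMETERS WHERE NAME = 'LOG_AUTO_DELETE';
SQL> -- Retain applied logs (for example, until they are backed up)
SQL> EXECUTE DBMS_LOGSTDBY.APPLY_SET('LOG_AUTO_DELETE', 'FALSE');
SQL> -- Restore the default automatic deletion
SQL> EXECUTE DBMS_LOGSTDBY.APPLY_SET('LOG_AUTO_DELETE', 'TRUE');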
Step 3 Verify that the DB_UNIQUE_NAME database initialization parameter has been set to a unique name on the
primary and standby database.
For example, if the DB_UNIQUE_NAME parameter has not been defined on either database, the following SQL
statements might be used to assign a unique name to each database.
Execute this SQL statement on the primary database:
SQL> ALTER SYSTEM SET DB_UNIQUE_NAME='CHICAGO' SCOPE=SPFILE;
Execute this SQL statement on the standby database:
SQL> ALTER SYSTEM SET DB_UNIQUE_NAME='BOSTON' SCOPE=SPFILE;
Step 4 Verify that the LOG_ARCHIVE_CONFIG database initialization parameter has been defined on the primary and
standby database and that its value includes a DG_CONFIG list that includes the DB_UNIQUE_NAME of the primary and
standby database.
For example, if the LOG_ARCHIVE_CONFIG parameter has not been defined on either database, the following SQL
statement could be executed on each database to configure the LOG_ARCHIVE_CONFIG parameter:
SQL> ALTER SYSTEM SET
2> LOG_ARCHIVE_CONFIG='DG_CONFIG=(CHICAGO,BOSTON)';
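After setting both parameters, the values can be checked on each database before proceeding:
SQL> SHOW PARAMETER DB_UNIQUE_NAME
SQL> SHOW PARAMETER LOG_ARCHIVE_CONFIG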
Step 5 Skip this step unless you are raising your protection mode.
Shut down the primary database and restart it in mounted mode if the protection mode is being changed to Maximum
Protection or being changed from Maximum Performance to Maximum Availability.
If the primary database is an Oracle Real Application Clusters (RAC) database, shut down all of the instances and then start and mount a single instance. For example:
SQL> SHUTDOWN IMMEDIATE;
SQL> STARTUP MOUNT;
Step 6 Set the data protection mode.
Execute the following SQL statement on the primary database
SQL> ALTER DATABASE SET STANDBY DATABASE TO MAXIMIZE {AVAILABILITY | PERFORMANCE |
PROTECTION};
Step 7 Open the primary database.
If the database was restarted in Step 5, open the database:
SQL> ALTER DATABASE OPEN;
Step 8 Confirm that the primary database is operating in the new protection mode.
Perform the following query on the primary database to confirm that it is operating in the new protection mode:
SQL> SELECT PROTECTION_MODE FROM V$DATABASE;
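Because the protection level actually in effect can temporarily differ from the configured mode (for example, while the standby is unreachable), it can be useful to query both columns:
SQL> SELECT PROTECTION_MODE, PROTECTION_LEVEL FROM V$DATABASE;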
This section describes how to start up and shut down a physical standby database.
78.5. Performance
The Redo Apply technology used by a physical standby database is the most efficient mechanism for keeping a standby
database updated with changes being made at a primary database because it applies changes using low-level recovery
mechanisms which bypass all SQL level code layers.
Converting to a snapshot standby, testing, and then converting back to a physical standby is a cycle that can be repeated as often as desired. The same process can be used to easily create and regularly update a snapshot standby for reporting purposes where read/write access to data is required.
Table: Primary Database Changes That Require Manual Intervention at a Physical Standby

Change: Drop or delete a tablespace or datafile.
Action required: Delete the datafile from the primary and physical standby database after the redo data containing the DROP or DELETE command is applied to the physical standby.

Change: Use transportable tablespaces.
Action required: Move the tablespace between the primary and the physical standby database.

Change: Add or drop a redo log file group.
Action required: Evaluate the configuration of the redo log and standby redo log on the physical standby database and adjust as necessary.

Change: Perform a DML or DDL operation using the NOLOGGING or UNRECOVERABLE clause.
Action required: Copy the datafile containing the unlogged changes to the physical standby database.

Change: Reset the TDE master encryption key.
Action required: Replace the database encryption wallet on the physical standby database with a fresh copy of the database encryption wallet from the primary database.

Change: Change initialization parameters.
Action required: Evaluate whether a corresponding change must be made to the initialization parameters on the physical standby database.
The standby database automatically adds the datafile because the raw devices exist. The standby alert log shows the
following:
Fri Apr 8 09:49:31 2005
Media Recovery Log
/u01/MILLER/flash_recovery_area/MTS_STBY/archivelog/2005_04_08/o1_mf_1_7_15ffgt0z_.arc
Recovery created file /dev/raw/raw100
Successfully added datafile 6 to media recovery
Datafile #6: '/dev/raw/raw100'
Media Recovery Waiting for thread 1 sequence 8 (in transit)
However, if the raw device was created on the primary system but not on the standby, then Redo Apply will stop due to file-creation errors. For example, issue the following statement on the primary database:
SQL> CREATE TABLESPACE MTS3
  2> DATAFILE '/dev/raw/raw101' SIZE 1M;
Tablespace created.
The standby system does not have the /dev/raw/raw101 raw device created. The standby alert log shows the following
messages when recovering the archive:
Fri Apr 8 10:00:22 2005
Media Recovery Log
/u01/MILLER/flash_recovery_area/MTS_STBY/archivelog/2005_04_08/o1_mf_1_8_15ffjrov_.arc
File #7 added to control file as 'UNNAMED00007'.
Originally created as:
'/dev/raw/raw101'
Recovery was unable to create the file as:
'/dev/raw/raw101'
MRP0: Background Media Recovery terminated with error 1274
Fri Apr 8 10:00:22 2005
Errors in file /u01/MILLER/MTS/dump/mts_mrp0_21851.trc:
ORA-01274: cannot add datafile '/dev/raw/raw101' - file could not be created
ORA-01119: error in creating database file '/dev/raw/raw101'
ORA-27041: unable to open file
Linux Error: 13: Permission denied
Additional information: 1
Some recovered datafiles maybe left media fuzzy
Media recovery may continue but open resetlogs may fail
Fri Apr 8 10:00:22 2005
Errors in file /u01/MILLER/MTS/dump/mts_mrp0_21851.trc:
ORA-01274: cannot add datafile '/dev/raw/raw101' - file could not be created
ORA-01119: error in creating database file '/dev/raw/raw101'
NAME
--------------------------------------------------------------------------------
/u01/MILLER/MTS/system01.dbf
/u01/MILLER/MTS/undotbs01.dbf
/u01/MILLER/MTS/sysaux01.dbf
/u01/MILLER/MTS/users01.dbf
/u01/MILLER/MTS/mts.dbf
/dev/raw/raw100
/u01/app/oracle/product/10.1.0/dbs/UNNAMED00007
3. In the standby alert log you should see information similar to the following:
Fri Apr 8 10:09:30 2005
alter database create datafile
'/dev/raw/raw101' as '/dev/raw/raw101'
4. On the standby database, set STANDBY_FILE_MANAGEMENT to AUTO and restart Redo Apply:
SQL> ALTER SYSTEM SET STANDBY_FILE_MANAGEMENT=AUTO;
SQL> RECOVER MANAGED STANDBY DATABASE DISCONNECT;
At this point Redo Apply uses the new raw device datafile and recovery continues.
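To confirm that the managed recovery process (MRP) is running again and applying redo, query V$MANAGED_STANDBY on the standby; for example:
SQL> SELECT PROCESS, STATUS, THREAD#, SEQUENCE#
  2> FROM V$MANAGED_STANDBY
  3> WHERE PROCESS LIKE 'MRP%';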
To verify that deleted datafiles are no longer part of the database, query the V$DATAFILE view.
Delete the corresponding datafile on the standby system after the redo data that contains the previous changes is applied
to the standby database. For example:
% rm /disk1/oracle/oradata/payroll/s2tbs_4.dbf
On the primary database, after ensuring the standby database applied the redo information for the dropped tablespace,
you can remove the datafile for the tablespace. For example:
% rm /disk1/oracle/oradata/payroll/tbs_4.dbf
o Exit from the SQL prompt and issue an operating system command, such as the following UNIX mv command, to
rename the datafile on the primary system:
% mv /disk1/oracle/oradata/payroll/tbs_4.dbf
/disk1/oracle/oradata/payroll/tbs_x.dbf
o Rename the datafile in the primary database and bring the tablespace back online:
SQL> ALTER TABLESPACE tbs_4 RENAME DATAFILE
2> '/disk1/oracle/oradata/payroll/tbs_4.dbf'
3> TO '/disk1/oracle/oradata/payroll/tbs_x.dbf';
SQL> ALTER TABLESPACE tbs_4 ONLINE;
o Connect to the standby database, query the V$ARCHIVED_LOG view to verify all of the archived redo log files
are applied, and then stop Redo Apply:
SQL> SELECT SEQUENCE#,APPLIED FROM V$ARCHIVED_LOG ORDER BY SEQUENCE#;
SEQUENCE# APP
--------- ---
8 YES
9 YES
10 YES
11 YES
4 rows selected.
o Rename the datafile at the standby site using an operating system command, such as the UNIX mv command:
% mv /disk1/oracle/oradata/payroll/tbs_4.dbf
/disk1/oracle/oradata/payroll/tbs_x.dbf
o Rename the datafile in the standby control file. Note that the STANDBY_FILE_MANAGEMENT initialization
parameter must be set to MANUAL.
SQL> ALTER DATABASE RENAME FILE '/disk1/oracle/oradata/payroll/tbs_4.dbf'
2> TO '/disk1/oracle/oradata/payroll/tbs_x.dbf';
If you do not rename the corresponding datafile at the standby system, and then try to refresh the standby database
control file, the standby database will attempt to use the renamed datafile, but it will not find it. Consequently, you will see
error messages similar to the following in the alert log:
ORA-00283: recovery session canceled due to errors
ORA-01157: cannot identify/lock datafile 4 - see DBWR trace file
ORA-01110: datafile 4: '/Disk1/oracle/oradata/payroll/tbs_x.dbf'
The password file on a physical standby database must be refreshed whenever the password file on the primary changes, for example after granting or revoking administrative privileges or changing the password of a user with administrative privileges. Failure to refresh the password file on the physical standby database may cause authentication of redo transport sessions, or of connections as SYSDBA or SYSOPER to the physical standby database, to fail.
When a physical standby database receives a new branch of redo data, Redo Apply automatically takes the new branch
of redo data. For physical standby databases, no manual intervention is required if the standby database did not apply
redo data past the new resetlogs SCN (past the start of the new branch of redo data). The following table describes how
to resynchronize the standby database with the primary database branch.
If the standby database has not applied redo data past the new resetlogs SCN (past the start of the new branch of redo data):
Then: Redo Apply automatically takes the new branch of redo.
Action: No manual intervention is necessary. The MRP automatically resynchronizes the standby database with the new branch of redo data.

If the standby database has applied redo data past the new resetlogs SCN (past the start of the new branch of redo data) and Flashback Database is enabled on the standby database:
Then: The standby database is recovered in the future of the new branch of redo data.
Action: Follow the procedure in "flashback_dg_specific_point.doc" to flash back a physical standby database. Restart Redo Apply to continue application of redo data onto the new resetlogs branch. The MRP automatically resynchronizes the standby database with the new branch.

If the standby database has applied redo data past the new resetlogs SCN (past the start of the new branch of redo data) and Flashback Database is not enabled on the standby database:
Then: The primary database has diverged from the standby on the indicated primary database branch.
Action: Re-create the physical standby database following the procedures in

If the standby database is missing intervening archived redo log files from the new branch of redo data:
Then: The MRP cannot continue until the missing log files are retrieved.
Action: Locate and register missing archived redo log files from each branch.

If the standby database is missing archived redo log files from the end of the previous branch of redo data:
Then: The MRP cannot continue until the missing log files are retrieved.
Action: Locate and register missing archived redo log files from the previous branch.
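To see which redo branch (incarnation) each database is on, the following query can be run on both the primary and the standby and the results compared:
SQL> SELECT INCARNATION#, RESETLOGS_CHANGE#, STATUS
  2> FROM V$DATABASE_INCARNATION
  3> ORDER BY INCARNATION#;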
Add or drop a redo log file group: monitor in the alert log, the V$LOG view, and the STATUS column of V$LOGFILE on the primary site, and in the alert log on the standby site.
For logical standby databases, Data Guard uses SQL Apply technology, which first transforms the received redo data into SQL statements and then executes the generated SQL statements on the logical standby database, as shown in the figure below.
Automatic Updating of a Logical Standby Database
o Before performing a switchover from an Oracle RAC primary database to a physical standby database, shut
down all but one primary database instance. Any primary database instances shut down at this time can be
started after the switchover completes.
o Before performing a switchover or a failover to an Oracle RAC physical standby database, shut down all but
one standby database instance. Any standby database instances shut down at this time can be restarted
after the role transition completes.
o Figure 8-1 shows a two-site Data Guard configuration before the roles of the databases are switched. The primary
database is in San Francisco, and the standby database is in Boston.
This illustration shows a Data Guard configuration consisting of a primary database and a standby database.
An application is performing read/write transactions on the primary database in San Francisco, from which online redo
logs are being archived locally and over Oracle Net services to the standby database in Boston. On the Boston standby
location, the archived redo logs are being applied to the standby database, which is performing read-only transactions.
Figure 8-2 shows the Data Guard environment after the original primary database was switched over to a standby
database, but before the original standby database has become the new primary database. At this stage, the Data Guard
configuration temporarily has two standby databases.
Figure Standby Databases Before Switchover to the New Primary Database
This illustration shows a Data Guard configuration during a switchover operation. The San Francisco database (originally
the primary database) has changed to the standby role, but the Boston database has not yet changed to the primary role.
At this point in time, both the San Francisco and Boston databases are operating in the standby role.
Applications that were previously sending read/write transactions to the San Francisco database are preparing to send
read/write transactions to the Boston database. On the Boston standby database, the standby database online redo logs
and local archived redo logs are still being generated. However, no redo logs are being sent or received over the Oracle
Net network. Both of the standby databases are capable of operating in read-only mode.
Figure shows the Data Guard environment after a switchover took place. The original standby database became the new
primary database. The primary database is now in Boston, and the standby database is now in San Francisco.
Figure Data Guard Environment After Switchover
This illustration shows a Data Guard configuration after a switchover operation has occurred. The San Francisco database
(originally the primary database) is now operating as the standby database and the Boston database is now operating as
the primary database.
Preparing for a Switchover
Ensure the prerequisites listed in Section 8.1.1 are satisfied. In addition, the following prerequisites must be met for a
switchover:
For switchovers involving a physical standby database, verify that the primary database is open and that redo apply is
active on the standby database.
For switchovers involving a logical standby database, verify both the primary and standby database instances are open
and that SQL Apply is active.
This illustration shows a two-site Data Guard configuration after a system or software failure occurred. In this figure, the
primary site (in San Francisco) is crossed out to indicate that the site is no longer operational. The Boston site that was
originally a standby site is now operating as the new primary site. Applications that were previously sending read/write
transactions to the San Francisco site when it was the primary site are now sending all read/write transactions to the new
primary site in Boston. The Boston site is writing to online redo logs and local archived redo logs.
1 row selected
The TO STANDBY value in the SWITCHOVER_STATUS column indicates that it is possible to switch the primary
database to the standby role. If the TO STANDBY value is not displayed, then verify the Data Guard configuration is
functioning correctly (for example, verify all LOG_ARCHIVE_DEST_n parameter values are specified correctly).
If, after performing these steps, the SWITCHOVER_STATUS column still displays SESSIONS ACTIVE, you can successfully perform a switchover by appending the WITH SESSION SHUTDOWN clause to the ALTER DATABASE COMMIT TO SWITCHOVER TO PHYSICAL STANDBY statement described in Step 2.
Step 2. Initiate the switchover on the primary database.
To change the current primary database to a physical standby database role, use the following SQL statement on the
primary database:
SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO PHYSICAL STANDBY;
THREAD LAST
---------- ----------
1 100
Copy any available archived redo log files from the primary database that contain sequence numbers higher than the highest sequence number available on the target standby database to the target standby database, and register them. This must be done for each thread.
For example:
SQL> ALTER DATABASE REGISTER PHYSICAL LOGFILE 'filespec1';
After all available archived redo log files have been registered, query the V$ARCHIVE_GAP view as described in Step 1
to verify no additional gaps were introduced in Step 3.
Note:
If, while performing Steps 1 through 3, you are not able to resolve gaps in the archived redo log files (for example,
because you do not have access to the system that hosted the failed primary database), some data loss will occur during
the failover.
Step 4: Stop Managed Recovery.
Issue the following statement to stop managed recovery:
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
By default, apply services wait for the full archived redo log file to arrive on the standby database before applying it to the
standby database. However, if you use a standby redo log, you can enable real-time apply, which allows Data Guard to
recover redo data from the current standby redo log file as it is being filled.
Apply services use the following methods to maintain physical and logical standby databases:
Redo apply (physical standby databases only)
Uses media recovery to keep the primary and physical standby databases synchronized.
SQL Apply (logical standby databases only)
Reconstitutes SQL statements from the redo received from the primary database and executes the SQL
statements against the logical standby database.
Logical standby databases can be opened in read/write mode, but the target tables being maintained by the logical
standby database are opened in read-only mode for reporting purposes (providing the database guard was set
appropriately). SQL Apply enables you to use the logical standby database for reporting activities, even while SQL
statements are being applied.
The sections in this chapter describe Redo Apply, SQL Apply, real-time apply, and delayed apply in more detail.
82.2.1. Specifying a Time Delay for the Application of Archived Redo Log Files
In some cases, you may want to create a time lag between the time when redo data is received from the primary site and
when it is applied to the standby database. You can specify a time interval (in minutes) to protect against the application of
corrupted or erroneous data to the standby database. When you set a DELAY interval, it does not delay the transport of
the redo data to the standby database. Instead, the time lag you specify begins when the redo data is completely archived
at the standby destination.
Note: If you define a delay for a destination that has real-time apply enabled, the delay is ignored.
Specifying the NODELAY keyword (on the physical standby RECOVER MANAGED STANDBY DATABASE statement, or on the logical standby START LOGICAL STANDBY APPLY statement) results in apply services immediately beginning to apply archived redo log files to the standby database, before the time interval expires.
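As an illustration (the destination number and service name are hypothetical), a four-hour lag could be configured on the primary and then overridden on the standby with the NODELAY keyword:
SQL> -- On the primary: delay apply of redo sent to this destination by 240 minutes
SQL> ALTER SYSTEM SET LOG_ARCHIVE_DEST_2='SERVICE=boston DELAY=240';
SQL> -- On the standby: ignore the configured delay and apply immediately
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE NODELAY;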
Using Flashback Database as an Alternative to Setting a Time Delay
As an alternative to setting an apply delay, you can use Flashback Database to recover from the application of corrupted
or erroneous data to the standby database. Flashback Database can quickly and easily flash back a standby database to
an arbitrary point in time.
If you start a foreground session, control is not returned to the command prompt until recovery is canceled by
another session.
To start Redo Apply in the background, include the DISCONNECT keyword on the SQL statement. For example:
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT;
This statement starts a detached server process and immediately returns control to the user. While the managed
recovery process is performing recovery in the background, the foreground process that issued the RECOVER
statement can continue performing other tasks. This does not disconnect the current SQL session.
To start real-time apply, include the USING CURRENT LOGFILE clause on the SQL statement. For example:
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE;
On a logical standby database, SQL Apply is stopped with the ALTER DATABASE STOP LOGICAL STANDBY APPLY statement. When you issue this statement, SQL Apply waits until it has committed all complete transactions that were in the process of being applied. Thus, this command may not stop the SQL Apply processes immediately.
Value Description
ENABLE Redo transport services can transmit redo data to this destination. This is the default.
DEFER Redo transport services will not transmit redo data to this destination.
ALTERNATE This destination will become enabled if communication to its associated destination fails.
A redo transport destination is configured by setting the LOG_ARCHIVE_DEST_n parameter to a character string
that includes one or more attributes. This section briefly describes the most commonly used attributes.
The SERVICE attribute, which is a mandatory attribute for a redo transport destination, must be the first attribute
specified in the attribute list. The SERVICE attribute is used to specify the Oracle Net service name used to
connect to a redo transport destination.
The SYNC attribute is used to specify that the synchronous redo transport mode be used to send redo data to a
redo transport destination.
The ASYNC attribute is used to specify that the asynchronous redo transport mode be used to send redo data to
a redo transport destination. The asynchronous redo transport mode will be used if neither the SYNC nor the
ASYNC attribute is specified.
The NET_TIMEOUT attribute is used to specify how long the LGWR process will block waiting for an
acknowledgement that redo data has been successfully received by a destination that uses the synchronous redo
transport mode. If an acknowledgement is not received within NET_TIMEOUT seconds, the redo transport
connection is terminated and an error is logged.
Oracle recommends that the NET_TIMEOUT attribute be specified whenever the synchronous redo transport
mode is used, so that the maximum duration of a redo source database stall caused by a redo transport fault can
be precisely controlled.
The AFFIRM attribute is used to specify that redo received from a redo source database is not acknowledged until
it has been written to the standby redo log. The NOAFFIRM attribute is used to specify that received redo is
acknowledged without waiting for received redo to be written to the standby redo log.
The DB_UNIQUE_NAME attribute is used to specify the DB_UNIQUE_NAME of a redo transport destination. The
DB_UNIQUE_NAME attribute must be specified if the LOG_ARCHIVE_CONFIG database initialization parameter
has been defined and its value includes a DG_CONFIG list.
If the DB_UNIQUE_NAME attribute is specified, its value must match one of the DB_UNIQUE_NAME values in
the DG_CONFIG list. It must also match the value of the DB_UNIQUE_NAME database initialization parameter at
the redo transport destination. If either match fails, an error is logged and redo transport will not be possible to that
destination.
The VALID_FOR attribute is used to specify when redo transport services transmits redo data to a redo transport
destination. Oracle recommends that the VALID_FOR attribute be specified for each redo transport destination at
every site in a Data Guard configuration so that redo transport services will continue to send redo data to all
standby databases after a role transition, regardless of which standby database assumes the primary role.
The REOPEN attribute is used to specify the minimum number of seconds between automatic reconnect attempts
to a redo transport destination that is inactive because of a previous error.
The COMPRESSION attribute is used to specify that redo data is transmitted to a redo transport destination in
compressed form when resolving redo data gaps. Redo transport compression can significantly improve redo gap
resolution time when network links with low bandwidth and high latency are used for redo transport.
The following example uses all of the LOG_ARCHIVE_DEST_n attributes described in this section. Two redo transport
destinations are defined and enabled. The first destination uses the asynchronous redo transport mode. The second
destination uses the synchronous redo transport mode with a 30-second timeout. A DB_UNIQUE_NAME has been
specified for both destinations, as has the use of compression when resolving redo gaps. If a redo transport fault occurs at
either destination, redo transport will attempt to reconnect to that destination, but not more frequently than once every 60
seconds.
DB_UNIQUE_NAME=BOSTON
LOG_ARCHIVE_CONFIG='DG_CONFIG=(BOSTON,CHICAGO,DENVER)'
LOG_ARCHIVE_DEST_2='SERVICE=CHICAGO
ASYNC
NOAFFIRM
VALID_FOR=(ONLINE_LOGFILE,PRIMARY_ROLE)
REOPEN=60
COMPRESSION=ENABLE
DB_UNIQUE_NAME=CHICAGO'
LOG_ARCHIVE_DEST_STATE_2='ENABLE'
LOG_ARCHIVE_DEST_3='SERVICE=DENVER
SYNC
AFFIRM
NET_TIMEOUT=30
VALID_FOR=(ONLINE_LOGFILE,PRIMARY_ROLE)
REOPEN=60
COMPRESSION=ENABLE
DB_UNIQUE_NAME=DENVER'
LOG_ARCHIVE_DEST_STATE_3='ENABLE'
Viewing Attributes With V$ARCHIVE_DEST
The V$ARCHIVE_DEST view can be queried to see the current settings and status for each redo transport destination.
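For example, a quick health check of the enabled destinations might look like this:
SQL> SELECT DEST_ID, STATUS, ERROR, DESTINATION
  2> FROM V$ARCHIVE_DEST
  3> WHERE STATUS <> 'INACTIVE';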
LOG_ARCHIVE_DEST_2 = 'LOCATION=USE_DB_RECOVERY_FILE_DEST
VALID_FOR=(STANDBY_LOGFILE,STANDBY_ROLE)'
LOG_ARCHIVE_DEST_STATE_2=ENABLE
Oracle recommends the use of a flash recovery area, because it simplifies the management of archived redo log files.
Standby Redo Log Archival to a Local File System Location
Take the following steps to set up standby redo log archival to a local file system location:
Set the LOCATION attribute of a LOG_ARCHIVE_DEST_n parameter to a valid pathname.
Set the VALID_FOR attribute of the same LOG_ARCHIVE_DEST_n parameter to a value that allows standby redo
log archival.
The following are some sample parameter values that might be used to configure a physical standby database to archive
its standby redo log to a local file system location:
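By analogy with the flash recovery area example shown earlier, such values might look like the following (the directory path is illustrative):
LOG_ARCHIVE_DEST_2='LOCATION=/disk1/oracle/oradata/payroll/arch
VALID_FOR=(STANDBY_LOGFILE,STANDBY_ROLE)'
LOG_ARCHIVE_DEST_STATE_2=ENABLE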
THREAD# SEQUENCE#
--------- ---------
1 12
1 13
1 14
Step 4: Trace the progression of redo transmitted to a redo transport destination.
Set the LOG_ARCHIVE_TRACE database initialization parameter at a redo source database and at each redo transport
destination to trace redo transport progress.
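For example (trace level 8 tracks archived redo log destination activity; the levels are additive bit flags, so confirm the level you need in the parameter reference):
SQL> ALTER SYSTEM SET LOG_ARCHIVE_TRACE=8;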
SUP SUP
--- ---
YES YES
If supplemental logging is not enabled, execute the following:
SQL> ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY,UNIQUE INDEX) COLUMNS;
SQL> ALTER SYSTEM SWITCH LOGFILE;
If log parallelism is not enabled, execute the following:
Note: If we get an "ORA-01161: database name in file header does not match given name" error when we try to run CREATE CONTROLFILE, try this instead: don't copy the control files to the new directories (or delete the control files from the new directories), edit ctrl.sql to take out the REUSE option in the CREATE CONTROLFILE command, and then rerun ctrl.sql.
SQL> SELECT * FROM GLOBAL_NAME;
Shows the original global name, such as PPRD.WORLD.
SQL> UPDATE GLOBAL_NAME SET GLOBAL_NAME = 'TEST.WORLD';
Changes the global name so that doing a "create database link" to access a remote database doesn't give us a "loopback" error. (Only do this if needed, since we don't know whether this "world" change would adversely affect anything else in Oracle.)
$ . oraenv
Set the Oracle SID to the name of the original instance (PPRD here).
SQL> CONNECT / AS SYSDBA
SQL> STARTUP
Restarts that original instance.
* $ lsnrctl status
Shows the pathname of the Listener Parameter File (listener.ora).
* $ vi /u00/oracle/product/v901/network/admin/listener.ora
Edit the listener.ora file to copy the PPRD lines and change the copy to match TEST (don't change any of the spacing!),
giving:
(SID_DESC=
(SID_NAME=TEST)
(ORACLE_HOME=/u00/oracle/product/v723)
)
* $ lsnrctl stop
* $ lsnrctl start
The TEST instance has now been added to the SQL*Net listener. On the client network (such as Novell), we will need to edit our tnsnames.ora file in the orawin\network\admin directory to copy the PPRD instance's lines and change the copy to match TEST, similar to:
unix_test =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS =
(PROTOCOL = TCP)
(Host = myhost.domain.com)
(Port = 1521)
)
)
(CONNECT_DATA = (SID = TEST)
)
)
* $ su – jobsub
* $ vi start_jobsub.shl
These lines log into jobsub (if we are allowed to use "su"; otherwise, just login to jobsub, etc.) and edit the jobsub startup
script to include the TEST instance (copy PPRD's lines and change to match TEST):
ORACLE_SID=TEST; export ORACLE_SID; . oraenv
echo "=== Starting jobsubmission for $ORACLE_SID....... "
nohup sh $BANNER_LINKS/gurjobs.shl > gurjobsTEST.out 2>&1 &
$ su – jobsub
$ kill -9 -1
These lines kill the current jobsub processes for all instances.
$ su – jobsub
$ start_jobsub.shl
Starts jobsub for all instances, including for TEST. (Or, instead of killing jobsub and restarting it, we could have just
entered ". oraenv" to set the TEST instance, and the "nohup" line to start jobsub for it.)
If we want to change the internal database ID of the cloned copy so that we can use utilities such as RMAN on that copy,
which require unique database IDs, we can use the Oracle 9i "nid" (new ID) utility to generate a new database ID, as
shown below. Note that we haven't tried this yet, but we can give it a try if we need to use RMAN on a copy-datafile
clone.
$ . oraenv
Set the Oracle SID to the name of the new instance (TEST here).
SQL> CONNECT / AS SYSDBA
SQL> SHUTDOWN IMMEDIATE
SQL> STARTUP MOUNT
SQL> HOST
$ nid TARGET=SYS/<syspassword>
Answer the prompt with Y
$ exit
SQL> SHUTDOWN IMMEDIATE
SQL> STARTUP MOUNT
SQL> ALTER DATABASE OPEN RESETLOGS;
SQL> SHUTDOWN IMMEDIATE
SQL> STARTUP
SQL> EXIT
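Once the instance is back up, we can confirm that nid assigned a new DBID (a quick sanity check from any SYSDBA session):

```sql
-- The DBID column should now show the newly generated database ID
SELECT dbid, name FROM v$database;
```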
In addition, the broker makes all Oracle Net Services configuration changes necessary to support redo transport services
and log apply services.
Command Description
CONVERT Converts a database between a physical standby database and a snapshot standby database
REINSTATE Changes a database marked for reinstatement into a viable standby database
SWITCHOVER Switches roles between the primary database and a standby database
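As a sketch of how these broker commands are issued from DGMGRL (the database name 'stby', the net alias 'prim' and the credentials are assumptions for illustration, not from this document):

```text
DGMGRL> CONNECT sys/<password>@prim
DGMGRL> CONVERT DATABASE 'stby' TO SNAPSHOT STANDBY;
DGMGRL> CONVERT DATABASE 'stby' TO PHYSICAL STANDBY;
DGMGRL> SWITCHOVER TO 'stby';
DGMGRL> REINSTATE DATABASE 'stby';
```

REINSTATE is typically issued after a failover, once the failed old primary has been restarted in MOUNT mode.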
This two-way communication channel is used to pass requests between databases and to monitor the health of all of the
databases in the broker configuration.
A number of enhancements in RMAN help to simplify backup and recovery operations across all primary and
physical standby databases, when using a catalog. Also, you can use the RMAN DUPLICATE command to
create a physical standby database over the network without a need for pre-existing database backups.
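A minimal sketch of such a network duplication (the service names 'prim' and 'stby' are assumptions; the standby instance must already be started NOMOUNT with a suitable parameter file and password file):

```text
RMAN> CONNECT TARGET sys/<password>@prim
RMAN> CONNECT AUXILIARY sys/<password>@stby
RMAN> DUPLICATE TARGET DATABASE FOR STANDBY FROM ACTIVE DATABASE;
```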
New Features Specific to SQL Apply and Logical Standby Databases
The following list summarizes the new features for SQL Apply and logical standby databases in Oracle Database 11g
Release 1 (11.1):
Support for additional object datatypes and PL/SQL package support
XML stored as CLOB (Support for additional PL/SQL Package)
DBMS_RLS (row level security or Virtual Private Database)
DBMS_FGA
Redo data gap detection and resolution works just as it does on a physical standby database.
If the primary database moves to a new database branch (for example, because of a Flashback Database
or an OPEN RESETLOGS), the snapshot standby database will continue accepting redo from the new
database branch.
A snapshot standby database cannot be the target of a switchover or failover. A snapshot standby
database must first be converted back into a physical standby database before performing a role
transition to it.
After a switchover or failover between the primary database and one of the physical or logical standby
databases in a configuration, the snapshot standby database can receive redo data from the new
primary database after the role transition.
A snapshot standby database cannot be the only standby database in a Maximum Protection Data
Guard configuration.
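Without the broker, the conversions described above are plain ALTER DATABASE commands (a sketch; the standby should be mounted with redo apply stopped before converting):

```sql
-- On the physical standby: open it for read/write testing
ALTER DATABASE CONVERT TO SNAPSHOT STANDBY;

-- Later: discard all test updates and resume as a physical standby
ALTER DATABASE CONVERT TO PHYSICAL STANDBY;
```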
The FAL_SERVER parameter is configured and its value contains an Oracle Net service name for the primary database.
If automatic repair is not possible, an ORA-1578 error is returned.
-- Start Redefinition
EXEC Dbms_Redefinition.Start_Redef_Table( -
'SCOTT', -
'EMPLOYEES', -
'EMPLOYEES2', -
'EMPNO EMPNO, FIRST_NAME FIRST_NAME, SALARY*1.10 SAL');
-- Optionally synchronize new table with interim data before index creation
EXEC dbms_redefinition.sync_interim_table( -
'SCOTT', 'EMPLOYEES', 'EMPLOYEES2');
-- Complete redefinition
EXEC Dbms_Redefinition.Finish_Redef_Table( -
'SCOTT', 'EMPLOYEES', 'EMPLOYEES2');
-- Remove the original table, which now has the name of the interim table
DROP TABLE employees2;
If the column mappings are omitted it is assumed that all column names in the new table match those of the old table.
Functions can be performed on the data during the redefinition if they are specified in the column mapping. Any indexes,
keys and triggers created against the new table must have unique names. All FKs should be created disabled as the
redefinition completion will enable them.
The redefinition process can be aborted using:
EXEC Dbms_Redefinition.Abort_Redef_Table('SCOTT', 'EMPLOYEES', 'EMPLOYEES2');
This process allows the following operations to be performed with no impact on DML operations:
Converting a non-partitioned table to a partitioned table and vice versa.
Switching a heap organized table to an index organized table and vice versa.
Dropping non-primary key columns.
Adding new columns.
Adding or removing parallel support.
Modifying storage parameters.
Online table redefinition has a number of restrictions including:
There must be enough space to hold two copies of the table.
Primary key columns cannot be modified.
Tables must have primary keys.
Redefinition must be done within the same schema.
New columns added cannot be made NOT NULL until after the redefinition operation.
Tables cannot contain LONGs, BFILEs or User Defined Types.
Clustered tables cannot be redefined.
Tables in the SYS or SYSTEM schema cannot be redefined.
Tables with materialized view logs or materialized views defined on them cannot be redefined.
Horizontal subsetting of data cannot be performed during the redefinition.
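Before starting a redefinition, the table can be checked against these restrictions with CAN_REDEF_TABLE, which raises an error if the table does not qualify (a sketch using the SCOTT.EMPLOYEES example from above):

```sql
-- Raises an error (for example, if the table has no primary key)
-- when the table cannot be redefined online
EXEC DBMS_REDEFINITION.CAN_REDEF_TABLE('SCOTT', 'EMPLOYEES', DBMS_REDEFINITION.CONS_USE_PK);
```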
93.5. Monitoring
After a job is issued, we can monitor its status from the view DBA_SCHEDULER_JOB_LOG, where the column STATUS
shows the current status of the job. If it shows FAILED, we can drill down further to find out the cause from the view
DBA_SCHEDULER_JOB_RUN_DETAILS.
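The drill-down described above amounts to two simple queries (a sketch):

```sql
-- Overall job history
SELECT log_date, job_name, status
FROM dba_scheduler_job_log
ORDER BY log_date DESC;

-- Details for failed runs
SELECT job_name, status, error#, additional_info
FROM dba_scheduler_job_run_details
WHERE status = 'FAILED';
```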
93.6. Administration
So far, we've discussed how to create several types of objects: programs, schedules, job classes, and jobs. What if we
want to modify some of them to adjust to changing needs? Well, we can do that via APIs provided in the
DBMS_SCHEDULER package.
From the Database tab of the Enterprise Manager 10g home page, click on the Administration link. This will bring up the
Administration Screen, shown in Figure 1. All the Scheduler related tasks are found under the heading "Scheduler" in the
bottom right-hand corner, shown inside a red ellipse in the figure.
All the tasks related to scheduler, such as creating, deleting, and maintaining jobs, can be easily accomplished through
the hyper-linked tasks in this page. Let's see a few of these tasks. We created all these tasks earlier, so clicking on the
Jobs tab will show a screen similar to Figure 2.
Clicking on the job COLLECT_STATS allows us to modify its attributes. The screen shown in Figure 3 shows up when we
click on "Job Name."
As we can see, we can change parameters of the job as well as the schedule and options by clicking on the appropriate
tabs. After all changes are made, we would press the button "Apply" to make the changes permanent. Before doing so, we
may want to click the button marked "Show SQL", which shows the exact SQL statement that will be issued, if for no other
reason than to see what APIs are called, thereby enabling us to understand the workings behind the scene. We can also
store the SQL in a script and execute it later, or store it as a template for the future.
96.2. DBMS_REPAIR
The DBMS_REPAIR utility provides a mechanism to detect block corruption, mark the affected blocks, and rebuild the
impacted freelists so that Oracle can continue to use the object while skipping the corrupt blocks.
This package allows us to detect and repair corruption. The process requires two administration tables to hold a list of
corrupt blocks and index keys pointing to those blocks. These are created as follows:
BEGIN
DBMS_REPAIR.admin_tables (
table_name => 'REPAIR_TABLE',
table_type => DBMS_REPAIR.repair_table,
action => DBMS_REPAIR.create_action,
tablespace => 'USERS');
DBMS_REPAIR.admin_tables (
table_name => 'ORPHAN_KEY_TABLE',
table_type => DBMS_REPAIR.orphan_table,
action => DBMS_REPAIR.create_action,
tablespace => 'USERS');
END;
With the administration tables built we are able to check the table of interest using the CHECK_OBJECT procedure:
SET SERVEROUTPUT ON
DECLARE
v_num_corrupt INT;
BEGIN
v_num_corrupt := 0;
DBMS_REPAIR.check_object (
schema_name => 'SCOTT',
object_name => 'DEPT',
repair_table_name => 'REPAIR_TABLE',
corrupt_count => v_num_corrupt);
DBMS_OUTPUT.put_line('number corrupt: ' || TO_CHAR (v_num_corrupt));
END;
Assuming the number of corrupt blocks is greater than 0 the CORRUPTION_DESCRIPTION and the
REPAIR_DESCRIPTION columns of the REPAIR_TABLE can be used to get more information about the corruption. At
this point the corrupt blocks have been detected, but are not marked as corrupt. The FIX_CORRUPT_BLOCKS procedure
can be used to mark the blocks as corrupt, allowing them to be skipped by DML once the table is in the correct mode:
SET SERVEROUTPUT ON
DECLARE
v_num_fix INT;
BEGIN
v_num_fix := 0;
DBMS_REPAIR.fix_corrupt_blocks (
schema_name => 'SCOTT',
object_name => 'DEPT',
object_type => Dbms_Repair.table_object,
repair_table_name => 'REPAIR_TABLE',
fix_count => v_num_fix);
DBMS_OUTPUT.put_line('num fix: ' || TO_CHAR(v_num_fix));
END;
Once the corrupt table blocks have been located and marked all indexes must be checked to see if any of their key entries
point to a corrupt block. This is done using the DUMP_ORPHAN_KEYS procedure:
SET SERVEROUTPUT ON
DECLARE
v_num_orphans INT;
BEGIN
v_num_orphans := 0;
DBMS_REPAIR.dump_orphan_keys (
schema_name => 'SCOTT',
object_name => 'PK_DEPT',
object_type => DBMS_REPAIR.index_object,
repair_table_name => 'REPAIR_TABLE',
orphan_table_name => 'ORPHAN_KEY_TABLE',
key_count => v_num_orphans);
DBMS_OUTPUT.put_line('orphan key count: ' || TO_CHAR(v_num_orphans));
END;
If the orphan key count is greater than 0 the index should be rebuilt. The process of marking the table block as corrupt
automatically removes it from the freelists. This can prevent freelist access to all blocks following the corrupt block. To
correct this, the freelists must be rebuilt using the REBUILD_FREELISTS procedure:
BEGIN
DBMS_REPAIR.rebuild_freelists (
schema_name => 'SCOTT',
object_name => 'DEPT',
object_type => DBMS_REPAIR.table_object);
END;
The final step in the process is to make sure all DML statements ignore the data blocks marked as corrupt. This is done
using the SKIP_CORRUPT_BLOCKS procedure:
BEGIN
DBMS_REPAIR.skip_corrupt_blocks (
schema_name => 'SCOTT',
object_name => 'DEPT',
object_type => DBMS_REPAIR.table_object,
flags => DBMS_REPAIR.skip_flag);
END;
The SKIP_CORRUPT column in the DBA_TABLES view indicates if this action has been successful. At this point the table
can be used again but we will have to take steps to correct any data loss associated with the missing blocks.
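Whether DML will now skip the marked blocks can be verified in the dictionary (a sketch for the SCOTT.DEPT example):

```sql
-- SKIP_CORRUPT shows ENABLED once SKIP_CORRUPT_BLOCKS has been run
SELECT skip_corrupt
FROM dba_tables
WHERE owner = 'SCOTT' AND table_name = 'DEPT';
```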
Example:
Assume that you have a stored procedure called proc_one which takes an argument arg1, and you want this procedure to
be executed every hour.
Solution:
Declare
job_no number;
Begin
DBMS_JOB.SUBMIT(job_no,'proc_one(''arg1''); ',sysdate,'sysdate+1/24');
-- Do not forget the semicolon
DBMS_OUTPUT.PUT_LINE(job_no); -- display job number
COMMIT; -- the job is not scheduled until the transaction is committed
end;
You can delete a job from the queue by calling the procedure DBMS_JOB.REMOVE(job_no). This can be called
interactively from the SQL prompt for example:
SQL>Execute DBMS_JOB.REMOVE(1) -- Will remove job number 1.
Related data dictionary View is USER_JOBS
SQL> select job,log_user,last_sec,this_sec,next_sec,what from user_jobs;
/usr 7000 MB
/oraeng 2000 MB
/var 1000 MB
/tmp 1000 MB
/disk1 2000 MB
/disk2 2000 MB
/disk3 2000 MB
In the above example, /disk1, /disk2 and /disk3 are external disk subsystems. The reason for this layout is that if the
internal disk gets corrupted, we can simply re-install Linux after replacing the drive and everything can function normally.
Also make sure the external drives are running with either RAID-1 or RAID-5, so that disk problems won't stop the
show.
:wq
$ . .bash_profile
5. Now login as oracle10g and start the installation process
$ startx
$ cd /mnt/cdrom
$ sh runInstaller
If we get any errors while the upgrade script executes, re-execute the script after fixing the error.
For example, to upgrade an oracle 9.2.0 DB to oracle 10g we must run u0902000.sql.
SQL> SPOOL upgrade.log
SQL> @$ORACLE_HOME/rdbms/admin/u0902000.sql
SQL> SPOOL OFF
Check the spool file & verify that the packages and procedures are compiled successfully. Correct any problems
we find in this file and rerun the appropriate upgrade script if necessary.
Run the post-upgrade status script utlu102s.sql, specifying the TEXT option, to see if all the components are upgraded
successfully.
SQL> @$ORACLE_HOME/rdbms/admin/utlu102s.sql TEXT
Shutdown the DB & Startup
SQL> SHUTDOWN IMMEDIATE
SQL> STARTUP
If we encounter a message listing obsolete init parameters when we start the DB, then remove the
obsolete parameters from the parameter file.
Run utlrp.sql to re-compile any remaining stored PL/SQL and Java code.
SQL> @$ORACLE_HOME/rdbms/admin/utlrp.sql
Verify that all packages are valid
SQL> SELECT COUNT(*) FROM dba_objects WHERE status = 'INVALID';
Our DB is now upgraded to the new oracle DB 10g release.
Here, while importing we need to take care that we have configured the DB_nK_CACHE_SIZE init.ora
parameters for the non-default block size tablespaces. Otherwise, the tablespace creation fails if the older version
database has tablespaces of 2K, 4K, 16K or 32K block size, as we know that the default block size from Oracle 10g is 8K.
Check the Log file after the upgrade process is over and if we come across any errors, correct the error and
import that specified object only.
Alternate Method:
Use this method if we don’t wish to go with a full DB export: perform an export of all objects of the application as SYSTEM
using the OWNER option of the export utility.
$ exp system/<password> file=app.dmp log=app.log owner=app1
This has to be performed for all the schemas in case the DB is supporting more than one application. Once we are done
with the export on all the schemas of the older version DB, create a new Oracle 10g DB with minimum requirements. Create
the tablespaces and users that are present in the older version of the DB manually. Now use the above created dump files to
import back into the appropriate schemas.
Note: Here we need to take care of the dependencies between the schemas, if any, and resolve them as per the business
requirements.
Method 4: Upgrade using COPY / CREATE TABLE AS Commands
We can copy data from one Oracle DB to another using database links. For example, we can create new tables and fill
the tables with data by using the INSERT INTO statement and the CREATE TABLE AS statement. Alternatively, we can
also use the COPY command.
Copying data and Export/Import offer the same advantages for upgrading. Using either method we can defragment data
files and restructure the DB by creating new tablespaces or modifying existing tables or tablespaces. In addition, we can
copy only specified DB objects or users.
Copying data, however, unlike Export/Import, enables the selection of specific rows of tables to be placed into the new DB.
Copying data is a good method for copying only part of a database table. In contrast, using Export/Import, we can copy
only entire tables.
For more information on the COPY and CREATE TABLE AS commands, refer to the Oracle Database SQL Reference.
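Both approaches can be sketched as follows (the database link names old_db and new_db and the row filter are assumptions for illustration):

```sql
-- CREATE TABLE AS over a database link: selected rows only
CREATE TABLE emp_copy AS
SELECT * FROM emp@old_db WHERE deptno = 10;

-- SQL*Plus COPY command equivalent ("-" continues the command)
COPY FROM scott/tiger@old_db TO scott/tiger@new_db -
CREATE emp_copy USING SELECT * FROM emp WHERE deptno = 10
```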
100.1. Instance
V$BGPROCESS This view describes the background processes.
V$BH This is a Parallel Server view. This view gives the status and number of pings
for every buffer in the SGA.
V$BUFFER_POOL This view displays information about all buffer pools available for the instance.
The "sets" pertain to the number of LRU latch sets.
V$BUFFER_POOL_STATISTICS This view displays information about all buffer pools available for the instance.
The "sets" pertain to the number of LRU latch sets.
V$INSTANCE This view displays the state of the current instance. This version of
V$INSTANCE is not compatible with earlier versions of V$INSTANCE.
V$SGA This view contains summary information on the System Global Area.
V$SGASTAT This view contains detailed information on the System Global Area.
V$DATAFILE_HEADER This view displays datafile information from the datafile headers.
V$DBFILE This view lists all datafiles making up the database. This view is retained for historical
compatibility. Use of V$DATAFILE is recommended instead.
V$PROXY_DATAFILE This view contains descriptions of datafile and controlfile backups which are taken with
a new feature called Proxy Copy. Each row represents a backup of one database file.
V$PWFILE_USERS This view lists users who have been granted SYSDBA and
SYSOPER privileges as derived from the password file.
V$RESOURCE_LIMIT This view displays information about global resource use for some
of the system resources. Use this view to monitor the consumption
of resources so that you can take corrective action, if necessary.
V$ROLLNAME This view lists the names of all online rollback segments. It can
only be accessed when the database is open.
V$RSRC_CONSUMER_GROUP This view displays data related to the currently active resource
consumer groups.
V$RSRC_CONSUMER_GROUP_CPU_MTH This view shows all available resource allocation methods for
resource consumer groups.
V$RSRC_PLAN This view displays the names of all currently active resource plans.
V$RSRC_PLAN_CPU_MTH This view shows all available CPU resource allocation methods for
resource plans.
V$DISPATCHER_RATE This view provides rate statistics for the dispatcher processes.
V$MTS This view contains information for tuning the multi-threaded server.
V$REQDIST This view lists statistics for the histogram of MTS dispatcher request times, divided into
12 buckets, or ranges of time. The time ranges grow exponentially as a function of the
bucket number.
V$OFFLINE_RANGE This view displays datafile offline information from the controlfile. Note that the last
offline range of each datafile is kept in the DATAFILE record.
V$RECOVERY_LOG This view lists information about archived logs that are needed to complete media
recovery. This information is derived from the log history view, V$LOG_HISTORY.
V$INSTANCE_RECOVERY This view is used to monitor the mechanisms that implement the user-specifiable
limit on recovery reads.
V$THREAD This view contains thread information from the control file.
V$BACKUP_DEVICE This view displays information about supported backup devices. If a device type does
not support named devices, then one row with the device type and a null device
name is returned for that device type. If a device type supports named devices then
one row is returned for each available device of that type. The special device type
DISK is not returned by this view because it is always available.
V$BACKUP_PIECE This view displays information about backup pieces from the controlfile. Each backup
set consists of one or more backup pieces.
V$BACKUP_REDOLOG This view displays information about archived logs in backup sets from the controlfile.
Note that online redo logs cannot be backed up directly; they must be archived first to disk
and then backed up. An archive log backup set can contain one or more archived
logs.
V$BACKUP_SET This view displays backup set information from the controlfile. A backup set record is
inserted after the backup set is successfully completed.
V$BACKUP_SYNC_IO This view displays performance information about ongoing and recently completed
synchronous backups and restores.
V$COPY_CORRUPTION This view displays information about datafile copy corruptions from the controlfile.
V$DATAFILE_COPY This view displays datafile copy information from the controlfile.
V$DB_PIPES This view displays the pipes that are currently in this database.
V$DELETED_OBJECT This view displays information about deleted archived logs, datafile copies and
backup pieces from the controlfile. The only purpose of this view is to optimize the
recovery catalog resync operation. When an archived log, datafile copy, or backup
piece is deleted, the corresponding record is marked deleted.
V$BACKUP_CORRUPTION This view displays information about corruptions in datafile backups from the
controlfile. Note that corruptions are not tolerated in the controlfile and archived
log backups.
V$RECOVER_FILE This view displays the status of files needing media recovery.
V$RECOVERY_FILE_STATUS V$RECOVERY_FILE_STATUS contains one row for each datafile for each
RECOVER command. This view contains useful information only for the Oracle
process doing the recovery. When Recovery Manager directs a server process
to perform recovery, only Recovery Manager is able to view the relevant
information in this view. V$RECOVERY_FILE_STATUS will be empty to all
other Oracle users.
V$FIXED_VIEW_DEFINITION This view contains the definitions of all the fixed views (views beginning with
V$). Use this table with caution. Oracle tries to keep the behavior of fixed views
the same from release to release, but the definitions of the fixed views can
change without notice. Use these definitions to optimize your queries by using
indexed columns of the dynamic performance tables.
V$LOADTSTAT This view lists SQL*Loader statistics compiled during the execution of a direct load. These statistics apply to
the current table. Any SELECT against this table results in "no rows returned" since you
cannot load data and do a query at the same time.
100.18. Logminer
V$LOGMNR_CONTENTS This view shows the changes extracted from the redo log files by an active LogMiner session.
V$LOG_HISTORY This view contains log history information from the control file.
V$NLS_VALID_VALUES This view lists all valid values for NLS parameters.
V$RESERVED_WORDS This view gives a list of all the keywords that are used by the PL/SQL compiler.
This view helps developers to determine whether a word is already being used as
a keyword in the language.
V$SESSION_CONNECT_INFO This view displays information about network connections for the current
session.
V$SESSION_CURSOR_CACHE This view displays information on cursor usage for the current session. The
V$SESSION_CURSOR_CACHE view is not a measure of the effectiveness of
the SESSION_CACHED_CURSORS initialization parameter.
V$SESSION_EVENT This view lists information on waits for an event by a session. Note that the
TIME_WAITED and AVERAGE_WAIT columns will contain a value of zero on
those platforms that do not support a fast timing mechanism. If you are
running on one of these platforms and you want this column to reflect true wait
times, you must set TIMED_STATISTICS to TRUE in the parameter file.
Please remember that doing this will have a small negative effect on system
performance.
V$SESSION_LONGOPS This view displays the status of certain long-running operations. It provides
progression reports on operations using the columns SOFAR and
TOTALWORK. For example, the operational status for the following
components can be monitored:
hash cluster creations
backup operations
recovery operations
V$SESSION_OBJECT_CACHE This view displays object cache statistics for the current user session on the
local server (instance).
V$SESSION_WAIT This view lists the resources or events for which active sessions are waiting.
V$SESSTAT This view lists user session statistics. To find the name of the statistic
associated with each statistic number (STATISTIC#), query V$STATNAME.
V$SESS_IO This view lists I/O statistics for each user session.
100.24. Tablespace
V$TABLESPACE This view displays tablespace information from the controlfile.
V$CACHE This is a Parallel Server view. This view contains information from the block
header of each block in the SGA of the current instance as related to
particular database objects.
V$DB_OBJECT_CACHE This view displays database objects that are cached in the library cache.
Objects include tables, indexes, clusters, synonym definitions, PL/SQL
procedures and packages, and triggers.
V$DLM_LOCKS This is a Parallel Server view. V$DLM_LOCKS lists information of all locks
currently known to lock manager that are being blocked or blocking others.
V$ENQUEUE_LOCK This view displays all locks owned by enqueue state objects. The columns
in this view are identical to the columns in V$LOCK.
V$FILE_PING The view V$FILE_PING displays the number of blocks pinged per datafile.
This information in turn can be used to determine access patterns to
existing datafiles and deciding new mappings from datafile blocks to PCM
locks.
V$LATCH This view lists statistics for non-parent latches and summary statistics for
parent latches. That is, the statistics for a parent latch include counts from
each of its children.
V$LATCHHOLDER This view contains information about the current latch holders.
V$LATCHNAME This view contains information about decoded latch names for the latches
shown in V$LATCH. The rows of V$LATCHNAME have a one-to-one
correspondence to the rows of V$LATCH.
V$LATCH_CHILDREN This view contains statistics about child latches. This view includes all
columns of V$LATCH plus the CHILD# column. Note that child latches
have the same parent if their LATCH# columns match each other.
V$LATCH_MISSES This view contains statistics about missed attempts to acquire a latch.
V$LATCH_PARENT This view contains statistics about the parent latch. The columns of
V$LATCH_PARENT are identical to those in V$LATCH.
V$LIBRARYCACHE This view contains statistics about library cache performance and activity.
V$LOCK This view lists the locks currently held by the Oracle server and outstanding
requests for a lock or latch.
V$LOCK_ACTIVITY This is a Parallel Server view. V$LOCK_ACTIVITY displays the DLM lock
operation activity of the current instance. Each row corresponds to a type of
lock operation.
V$LOCK_ELEMENT This is a Parallel Server view. There is one entry in V$LOCK_ELEMENT for
each PCM lock that is used by the buffer cache. The name of the PCM lock
that corresponds to a lock element is {'BL', indx, class}.
V$LOCKED_OBJECT This view lists all locks acquired by every transaction on the system.
V$LOCKS_WITH_COLLISIONS This is a Parallel Server view. Use this view to find the locks that protect
multiple buffers, each of which has been either force-written or force-read
at least 10 times. It is very likely that those buffers are experiencing false
pings due to being mapped to the same lock.
V$OPEN_CURSOR This view lists cursors that each user session currently has opened and
parsed.
V$PING This is a Parallel Server view. The V$PING view is identical to the
V$CACHE view but only displays blocks that have been pinged at least
once. This view contains information from the block header of each block in
the SGA of the current instance as related to particular database objects.
V$PQ_SLAVE This view lists statistics for each of the active parallel execution servers on
an instance. This view will be replaced/obsoleted in a future release by a
new view called V$PX_PROCESS.
V$PROCESS This view contains information about the currently active processes. While
the LATCHWAIT column indicates what latch a process is waiting for, the
LATCHSPIN column indicates what latch a process is spinning on. On
multi-processor machines, Oracle processes will spin on a latch before
waiting on it.
V$ROWCACHE This view displays statistics for data dictionary activity. Each row contains
statistics for one data dictionary cache.
V$ROWCACHE_PARENT This view displays information for parent objects in the data dictionary.
There is one row per lock owner, and one waiter for each object. This row
shows the mode held or requested. For objects with no owners or waiters,
a single row is displayed.
V$ROWCACHE_SUBORDINATE This view displays information for subordinate objects in the data
dictionary.
V$SHARED_POOL_RESERVED This fixed view lists statistics that help you tune the reserved pool and
space within the shared pool. The following columns of
V$SHARED_POOL_RESERVED are valid only if the initialization
parameter shared_pool_reserved_size is set to a valid value.
V$SORT_SEGMENT This view contains information about every sort segment in a given
instance. The view is only updated when the tablespace is of the
TEMPORARY type.
V$SQL This view lists statistics on shared SQL area without the GROUP BY clause
and contains one row for each child of the original SQL text entered.
V$SQL_BIND_DATA This view displays the actual bind data sent by the client for each distinct
bind variable in each cursor owned by the session querying this view if the
data is available in the server.
V$SQL_BIND_METADATA This view displays bind metadata provided by the client for each distinct
bind variable in each cursor owned by the session querying this view.
V$SQL_CURSOR This view displays debugging information for each cursor associated with
the session querying this view.
V$SQL_SHARED_MEMORY This view displays information about the cursor shared memory snapshot.
Each SQL statement stored in the shared pool has one or more child
objects associated with it. Each child object has a number of parts, one of
which is the context heap, which holds, among other things, the query plan.
V$SQLAREA This view lists statistics on shared SQL area and contains one row per SQL
string. It provides statistics on SQL statements that are in memory, parsed,
and ready for execution.
V$SQLTEXT This view contains the text of SQL statements belonging to shared SQL
cursors in the SGA.
V$SQLTEXT_WITH_NEWLINES This view is identical to the V$SQLTEXT view except that, to improve
legibility, V$SQLTEXT_WITH_NEWLINES does not replace newlines and
tabs in the SQL statement with spaces.
V$STATNAME This view displays decoded statistic names for the statistics shown in the
V$SESSTAT and V$SYSSTAT tables.
V$SUBCACHE This view displays information about the subordinate caches currently
loaded into library cache memory. The view walks through the library
cache, printing out a row for each loaded subordinate cache per library
cache object.
V$SYSSTAT This view lists system statistics, system wide. To find the name of the statistic
associated with each statistic number (STATISTIC#), query V$STATNAME.
V$SYSTEM_EVENT This view contains information on total waits for an event. Note that the
TIME_WAITED and AVERAGE_WAIT columns will contain a value of zero
on those platforms that do not support a fast timing mechanism. If you are
running on one of these platforms and you want this column to reflect true
wait times, you must set TIMED_STATISTICS to TRUE in the parameter
file. Please remember that doing this will have a small negative effect on
system performance.
V$WAITSTAT This view lists block contention statistics. This table is only updated when
timed statistics are enabled.
Other Views
V$AQ This view describes statistics for the queues in the database.
V$CLASS_PING V$CLASS_PING displays the number of blocks pinged per block class.
Use this view to compare contentions for blocks in different classes.
V$COMPATIBILITY This view displays features in use by the database instance that may
prevent downgrading to a previous release. This is the dynamic (SGA)
version of this information, and may not reflect features that other
instances have used, and may include temporary incompatibilities (like
UNDO segments) that will not exist after the database is shut down
cleanly.
V$COMPATSEG This view lists the permanent features in use by the database that will
prevent moving back to an earlier release.
V$CONTEXT This view lists set attributes in the current session.
V$DLM_CONVERT_LOCAL V$DLM_CONVERT_LOCAL displays the elapsed time for the local lock
conversion operation.
V$DLM_CONVERT_REMOTE V$DLM_CONVERT_REMOTE displays the elapsed time for the remote
lock conversion operation.
V$FALSE_PING V$FALSE_PING is a Parallel Server view. This view displays buffers that
may be getting false pings: that is, buffers pinged more than 10 times
that are protected by the same lock as another buffer that was also pinged
more than 10 times. Buffers identified as getting false pings can be remapped in
"GC_FILES_TO_LOCKS" to reduce lock collisions.
V$HS_AGENT This view identifies the set of HS agents currently running on a given host,
using one row per agent process.
V$HS_SESSION This view identifies the set of HS sessions currently open for the Oracle
Server.
V$LICENSE This view contains information about license limits.
V$MLS_PARAMETERS This is a Trusted Oracle Server view that lists Trusted Oracle Server-
specific initialization parameters. For more information, see your Trusted
Oracle documentation.
V$OPTION This view lists options that are installed with the Oracle Server.
V$PARALLEL_DEGREE_LIMIT_MTH This view displays all available parallel degree limit resource allocation
methods.
V$PQ_SYSSTAT This view lists system statistics for parallel queries. This view will be
obsoleted in a future release by a new view called
V$PX_PROCESS_SYSSTAT.
V$PQ_TQSTAT This view contains statistics on parallel execution operations. The
statistics are compiled after the query completes and only remain for the
duration of the session. It displays the number of rows processed through
each parallel execution server at each stage of the execution tree. This
view can help determine skew problems in a query's execution.
V$PX_PROCESS This view contains information about the sessions running parallel
execution.
V$PX_PROCESS_SYSSTAT This view contains status information and statistics for the parallel
execution servers of an instance.
V$PX_SESSION This view contains information about the sessions running parallel
execution.
V$PX_SESSTAT This view joins the session information of V$PX_SESSION with the
session statistics of V$SESSTAT for sessions running parallel execution.
V$TEMPORARY_LOBS This view displays information about temporary LOBs.
V$TEMP_EXTENT_MAP This view displays the status of each unit for all temporary tablespaces.
V$TEMP_EXTENT_POOL This view displays the state of temporary space cached and used for a
given instance. Note that loading of the temporary space cache is lazy,
so an instance's cache can be dormant. Use GV$TEMP_EXTENT_POOL for
information about all instances.
V$TEMP_PING The view V$TEMP_PING displays the number of blocks pinged per
datafile. This information in turn can be used to determine access patterns
to existing datafiles and to decide new mappings from datafile blocks to
PCM locks.
V$TEMP_SPACE_HEADER This view displays aggregate information per file per temporary
tablespace regarding how much space is currently being used and how
much is free as per the space header.
V$TEMPSTAT This view contains information about file read/write statistics.
V$TIMER This view lists the elapsed time in hundredths of seconds. Time is
measured since the beginning of the epoch, which is operating system
specific, and wraps around to 0 again whenever the value overflows four
bytes (roughly 497 days).
V$TYPE_SIZE This view lists the sizes of various database components for use in
estimating data block capacity.
V$VERSION Version numbers of core library components in the Oracle server. There is
one row for each component.
6. Audit Views
DBA_OBJ_AUDIT_OPTS DBA_AUDIT_TRAIL DBA_AUDIT_OBJECT
DBA_STMT_AUDIT_OPTS DBA_AUDIT_SESSION DBA_AUDIT_EXIST
DBA_PRIV_AUDIT_OPTS DBA_AUDIT_STATEMENT DBA_FGA_AUDIT_TRAIL
DBA_AUDIT_POLICIES DBA_COMMON_AUDIT_TRAIL DBA_AUDIT_POLICY_COLUMNS
7. Partition Views
DBA_PART_COL_STATISTICS DBA_SUBPART_HISTOGRAMS DBA_PART_KEY_COLUMNS
DBA_TAB_SUBPARTITIONS DBA_PART_HISTOGRAM DBA_SUBPART_KEY_COLUMNS
DBA_PART_TABLES DBA_TAB_PARTITIONS DBA_IND_SUBPARTITIONS
DBA_SUBPART_COL_STATISTICS DBA_PART_LOBS DBA_SUBPARTITION_TEMPLATES
8. Index Views
DBA_INDEXES DBA_JOIN_IND_COLUMNS DBA_IND_EXPRESSIONS
DBA_IND_STATISTICS DBA_PART_INDEXES DBA_IND_PARTITIONS
DBA_RECYCLEBIN
Data Pump (and the new impdp and expdp utilities) offers a number of improvements over the old IMPORT and
EXPORT, including resumable/restartable jobs, automatic two-level parallelism, a network mode that uses
database links/listener service names instead of pipes, fine-grained object selection (so we can select individual tables,
views, packages, indexes and so on for import or export, not just tables or schemas as with IMPORT and EXPORT), and a
fully callable API that allows Data Pump functionality to be embedded in third-party ETL packages.
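A minimal command-line sketch of the two utilities (it assumes a directory object named DP_DIR has been created with CREATE DIRECTORY; the schema, file, and link names are hypothetical):

```shell
# Export the SCOTT schema to a dump file in the DP_DIR directory object
expdp system/password SCHEMAS=scott DIRECTORY=dp_dir DUMPFILE=scott.dmp LOGFILE=scott_exp.log

# Import it into another schema, renaming on the way in
impdp system/password DIRECTORY=dp_dir DUMPFILE=scott.dmp REMAP_SCHEMA=scott:scott_copy

# Network mode: pull the schema straight over a database link, no dump file
impdp system/password NETWORK_LINK=source_db SCHEMAS=scott
```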
With Real Application Clusters, we de-couple the Oracle Instance (the processes and memory structures running on a
server to allow access to the data) from the Oracle database (the physical structures residing on storage which actually
hold the data, commonly known as datafiles). A clustered database is a single database that can be accessed by multiple
instances. Each instance runs on a separate server in the cluster. When additional resources are required, additional
nodes and instances can be easily added to the cluster with no downtime. Once the new instance is started, applications
using services can immediately take advantage of it with no changes to the application or application server.
Real Application Clusters is an extension of the Oracle Database and therefore benefits from the manageability, reliability
and security features built into Oracle Database 10g.
103.4.2. Reliability
Oracle DB is known for its reliability. Real Application Clusters takes this a step further by removing the database server
as a single point of failure. If an instance fails, the remaining instances in the cluster are open and active.
103.4.3. Recoverability
Oracle Database includes many features that make it easy to recover from all types of failures. If an instance fails in a
RAC database, it is recognized by another instance in the cluster and recovery automatically takes place. Fast Application
Notification, Fast Connection Failover and Transparent Application Failover make it easy for applications to mask
component failures from the user.
103.4.6. Scalability
Oracle Real Application Clusters provides unique technology for scaling applications. Traditionally, when the database
server ran out of capacity, it was replaced with a new, larger server. The larger the server, the more expensive it is.
For databases using RAC, there are alternatives for increasing the capacity. Applications that have traditionally run on
large SMP servers can be migrated to run on clusters of small servers. Alternatively, we can maintain the investment in
the current hardware and add a new server to the cluster (or create a cluster) to increase the capacity. Adding servers
to a cluster with Oracle Clusterware and RAC does not require an outage and as soon as the new instance is started, the
application can take advantage of the extra capacity. All servers in the cluster must run the same operating system and
same version of Oracle but they do not have to be exactly the same capacity. Customers today run clusters that fit their
needs, whether they are clusters of 2-CPU commodity servers or clusters where each server has 32 or 64 CPUs. Oracle
Real Application Clusters architecture automatically accommodates rapidly changing business requirements and the
resulting workload changes. Application users, or mid-tier application server clients, connect to the database by way of a
service name. Oracle automatically balances the user load among the multiple nodes in the cluster. The Real Application
Clusters database instances on the different nodes subscribe to all or some subset of database services. This gives DBAs
the flexibility to choose whether specific application clients that connect to a particular database service can connect to
some or all of the database nodes. Administrators can painlessly add processing capacity as application requirements
grow. The Cache Fusion architecture of RAC immediately utilizes the CPU and memory resources of the new node.
DBAs do not need to manually re-partition data.
Another way of distributing workload in an Oracle database is through the Oracle Database's parallel execution feature.
Parallel execution (i.e. parallel query or parallel DML) divides the work of executing a SQL statement across multiple
processes. In an Oracle Real Application Clusters environment, these processes can be balanced across multiple
instances. Oracle’s cost-based optimizer incorporates parallel execution considerations as a fundamental component in
arriving at optimal execution plans. In a Real Application Clusters environment, intelligent decisions are made with regard
to intra-node and inter-node parallelism. For example, if a particular query requires six query processes to complete the
work and six CPUs are idle on the local node (the node that the user connected to), then the query is processed using
only local resources. This demonstrates efficient intra-node parallelism and eliminates the query coordination overhead
across multiple nodes. However, if there are only two CPUs available on the local node, then those two CPUs and four
CPUs of another node are used to process the query. In this manner, both inter-node and intra-node parallelism are used
to provide speed up for query operations.
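A hedged sketch of requesting a degree of parallelism for one statement (table and alias are hypothetical; the optimizer may still choose a lower DOP):

```sql
-- Ask for six parallel execution servers for this scan; in a RAC
-- database those servers may be spread across instances
SELECT /*+ PARALLEL(s, 6) */ COUNT(*)
FROM   sales s;
```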
104. Glossary
Automatic Database Diagnostic Monitor (ADDM)
This lets the Oracle Database diagnose its own performance and determine how identified problems could be resolved. It
runs automatically after each AWR statistics capture, making the performance diagnostic data readily available.
Automatic Storage Management (ASM)
A vertical integration of both the file system and the volume manager built specifically for Oracle database files. It extends
the stripe-and-mirror-everything (SAME) concept to optimize performance, while removing the need for manual I/O tuning.
Automatic Storage Management Disk
Storage is added and removed from Automatic Storage Management disk groups in units of Automatic Storage
Management disks.
Automatic Storage Management File
Oracle database file stored in an Automatic Storage Management disk group. When a file is created, certain file attributes
are permanently set. Among these are its protection policy (parity, mirroring, or none) and its striping policy. Automatic
Storage Management files are not visible from the operating system or its utilities, but they are visible to database
instances, RMAN, and other Oracle-supplied tools.
Automatic Storage Management Instance
An Oracle instance that mounts Automatic Storage Management disk groups and performs management functions
necessary to make Automatic Storage Management files available to database instances. Automatic Storage
Management instances do not mount databases.
Automatic Storage Management Template
Collections of attributes used by Automatic Storage Management during file creation.
Templates simplify file creation by mapping complex file attribute specifications into a single name. A default template
exists for each Oracle file type. Users can modify the attributes of the default templates or create new templates.
Automatic Undo Management Mode
A mode of the database in which undo data is stored in a dedicated undo tablespace. Unlike manual undo management
mode, the only undo management that we must perform is the creation of the undo tablespace. All other undo
management is performed automatically.
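A minimal sketch of enabling the mode (the tablespace name and file path are hypothetical; UNDO_MANAGEMENT is a static parameter, hence SCOPE=SPFILE):

```sql
CREATE UNDO TABLESPACE undotbs1
  DATAFILE '/u01/oradata/orcl/undotbs1_01.dbf' SIZE 500M;

ALTER SYSTEM SET undo_management = AUTO SCOPE = SPFILE;  -- takes effect at next restart
ALTER SYSTEM SET undo_tablespace = undotbs1;
```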
Automatic Workload Repository (AWR)
A built-in repository in every Oracle Database. At regular intervals, the Oracle Database makes a snapshot of all its vital
statistics and workload information and stores them here.
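Beyond the automatic snapshots, the repository can be driven manually through the DBMS_WORKLOAD_REPOSITORY package; a sketch (interval and retention are expressed in minutes):

```sql
-- Take a snapshot now, in addition to the scheduled ones
EXEC DBMS_WORKLOAD_REPOSITORY.CREATE_SNAPSHOT;

-- Snapshot every 30 minutes, keep 7 days of history
EXEC DBMS_WORKLOAD_REPOSITORY.MODIFY_SNAPSHOT_SETTINGS(interval => 30, retention => 10080);
```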
Background Process
Background processes consolidate functions that would otherwise be handled by multiple Oracle programs running for
each user process. The background processes asynchronously perform I/O and monitor other Oracle processes to
provide increased parallelism for better performance and reliability.
Buffer Cache
The portion of the SGA that holds copies of Oracle data blocks. All user processes concurrently connected to the instance
share access to the buffer cache. The buffers in the cache are organized in two lists: the dirty list and the least recently
used (LRU) list. The dirty list holds dirty buffers, which contain data that has been modified but has not yet been written to
disk. The least recently used (LRU) list holds free buffers (unmodified and available), pinned buffers (currently being
accessed), and dirty buffers that have not yet been moved to the dirty list.
Byte Semantics
The length of a string is measured in bytes.
Cache Recovery
The part of instance recovery where Oracle applies all committed and uncommitted changes in the redo log files to the
affected data blocks. Also known as the rolling forward phase of instance recovery.
Character Semantics
The length of a string is measured in characters.
Checkpoint
A data structure that defines an SCN in the redo thread of a database. Checkpoints are recorded in the control file and
each datafile header, and are a crucial element of recovery.
Client
In client/server architecture, the front-end database application, which interacts with a user through the keyboard, display,
and pointing device such as a mouse. The client portion has no data access responsibilities. It concentrates on
requesting, processing, and presenting data managed by the server portion.
Client/Server architecture
Software architecture based on a separation of processing between two CPUs, one acting as the client in the transaction,
requesting and receiving services, and the other as the server that provides services in a transaction.
Cluster
Optional structure for storing table data. Clusters are groups of one or more tables physically stored together because
they share common columns and are often used together. Because related rows are physically stored together, disk
access time improves.
Concurrency
Simultaneous access of the same data by many users. A multi-user database management system must provide
adequate concurrency controls, so that data cannot be updated or changed improperly, compromising data integrity.
Connection
Communication pathway between a user process and an Oracle instance.
Database
Collection of data that is treated as a unit. The purpose of a database is to store and retrieve related information.
Database Buffer
One of several types of memory structures that stores information within the system global area. Database buffers store
the most recently used blocks of data.
Database Buffer Cache
Memory structure in the system global area that stores the most recently used blocks of data.
Database Link
A named schema object that describes a path from one database to another. Database links are implicitly used when a
reference is made to a global object name in a distributed database.
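A sketch (the link name, credentials, and TNS alias are hypothetical):

```sql
CREATE DATABASE LINK sales_link
  CONNECT TO scott IDENTIFIED BY tiger
  USING 'SALESDB';                      -- TNS alias of the remote database

-- The link is referenced with the @link notation
SELECT COUNT(*) FROM emp@sales_link;
```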
Data Block
Smallest logical unit of data storage in an Oracle database. Also called logical blocks, Oracle blocks, or pages. One data
block corresponds to a specific number of bytes of physical database space on disk.
Data Integrity
Business rules that dictate the standards for acceptable data. These rules are applied to a database by using integrity
constraints and triggers to prevent the entry of invalid information into tables.
Data Segment
Each nonclustered table has a data segment. All of the table’s data is stored in the extents of its data segment. For a
partitioned table, each partition has a data segment. Each cluster has a data segment. The data of every table in the
cluster is stored in the cluster’s data segment.
Dedicated Server
A database server configuration in which a server process handles requests for a single user process.
Define Variables
Variables defined (location, size, and datatype) to receive each fetched value.
Disk Group
One or more Automatic Storage Management disks managed as a logical unit. Automatic Storage Management disks can
be added or dropped from a disk group while preserving the contents of the files in the group, and with only a minimal
amount of automatically initiated I/O required to redistribute the data evenly. All I/O to a disk group is automatically spread
across all the disks in the group.
Distributed Processing
Software architecture that uses more than one computer to divide the processing for a set of related jobs. Distributed
processing reduces the processing load on a single computer.
DDL
Data definition language. Includes statements like CREATE/ALTER TABLE/INDEX, which define or change data
structure.
DML
Data manipulation language. Includes statements like INSERT, UPDATE, and DELETE, which change data in tables.
DOP
The degree of parallelism of an operation.
Enterprise Manager
An Oracle system management tool that provides an integrated solution for centrally managing our heterogeneous
environment. It combines a graphical console, Oracle Management Servers, Oracle Intelligent Agents, common services,
and administrative tools for managing Oracle products.
Extent
Second level of logical database storage. An extent is a specific number of contiguous data blocks allocated for storing a
specific type of information.
Failure Group
Administratively assigned sets of disks that share a common resource whose failure must be tolerated. Failure groups are
used to determine which Automatic Storage Management disks to use for storing redundant copies of data.
Indextype
An object that registers a new indexing scheme by specifying the set of supported operators and routines that manage a
domain index.
Index Segment
Each index has an index segment that stores all of its data. For a partitioned index, each partition has an index segment.
Integrity Constraint
Declarative method of defining a rule for a column of a table. Integrity constraints enforce the business rules associated
with a database and prevent the entry of invalid information into tables.
Logical Structures
Logical structures of an Oracle database include tablespaces, schema objects, data blocks, extents, and segments.
Because the physical and logical structures are separate, the physical storage of data can be managed without affecting
the access to logical storage structures.
LogMiner
A utility that lets administrators use SQL to read, analyze, and interpret log files. It can view any redo log file, online or
archived. The Oracle Enterprise Manager application Oracle LogMiner Viewer adds a GUI-based interface.
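A condensed sketch of a LogMiner session using the online catalog as the dictionary (the archived log name and segment name are hypothetical):

```sql
EXEC DBMS_LOGMNR.ADD_LOGFILE('/u01/arch/arch_100.arc', DBMS_LOGMNR.NEW);
EXEC DBMS_LOGMNR.START_LOGMNR(options => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);

SELECT username, sql_redo, sql_undo
FROM   v$logmnr_contents
WHERE  seg_name = 'EMP';

EXEC DBMS_LOGMNR.END_LOGMNR;
```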
Mean Time To Recover (MTTR)
The desired time required to perform instance or media recovery on the database. For example, we may set 10 minutes
as the goal for media recovery from a disk failure. A variety of factors influence MTTR for media recovery, including the
speed of detection, the type of method used to perform media recovery, and the size of the database.
Mounted Database
An instance that is started and has the control file associated with the database open. We can mount a database without
opening it; typically, we put the database in this state for maintenance or for restore and recovery operations.
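The states described above can be walked through explicitly in SQL*Plus:

```sql
STARTUP MOUNT;        -- instance started, controlfile open, datafiles still closed
-- maintenance work or RMAN restore/recovery happens here
ALTER DATABASE OPEN;  -- datafiles and online redo logs opened for normal use
```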
Object Type
An object type consists of two parts: a spec and a body. The type body always depends on its type spec.
Operator
In memory management, the term operator refers to a data flow operator, such as a sort, hash join, or bitmap merge.
Oracle XA
The Oracle XA library is an external interface that allows global transactions to be coordinated by a transaction manager
other than the Oracle database server.
Partition
A smaller and more manageable piece of a table or index.
Priority Inversion
Priority inversion occurs when a high-priority job runs with fewer resources than a low-priority job. Thus the
expected priority is "inverted."
Query Block
A self-contained DML against a table. A query block can be a top-level DML or a subquery.
Real Application Clusters (RAC)
Option that allows multiple concurrent instances to share a single physical database.
Recovery Manager (RMAN)
A utility that backs up, restores, and recovers Oracle databases. We can use it with or without the central information
repository called a recovery catalog. If we do not use a recovery catalog, RMAN uses the database's control file to store
information necessary for backup and recovery operations. We can use RMAN in conjunction with a media manager to
back up files to tertiary storage.
Redo Thread
The redo generated by an instance. If the database runs in a single instance configuration, then the database has only
one thread of redo. If we run in an Oracle Real Application Clusters configuration, then we have multiple redo threads, one
for each instance.
Schema
Collection of database objects, including logical structures such as tables, views, sequences, stored procedures,
synonyms, indexes, clusters, and database links. A schema has the name of the user who controls it.
Segment
Third level of logical database storage. A segment is a set of extents, each of which has been allocated for a specific data
structure, and all of which are stored in the same tablespace.
Sequence
A sequence generates a serial list of unique numbers for numeric columns of a database’s tables.
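A minimal sketch (the sequence, table, and column names are illustrative):

```sql
CREATE SEQUENCE emp_seq START WITH 1 INCREMENT BY 1 CACHE 20;

INSERT INTO emp (empno, ename) VALUES (emp_seq.NEXTVAL, 'SMITH');
SELECT emp_seq.CURRVAL FROM dual;  -- last number issued to this session
```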
Server
In a client/server architecture, the computer that runs Oracle software and handles the functions required for concurrent,
shared data access. The server receives and processes the SQL and PL/SQL statements that originate from client
applications.
Shared Server
A database server configuration that allows many user processes to share a small number of server processes,
minimizing the number of server processes and maximizing the use of available system resources.
Standby Database
A copy of a production database that we can use for disaster protection. We can update the standby database with
archived redo logs from the production database in order to keep it current. If a disaster destroys the production database,
we can activate the standby database and make it the new production database.
Subtype
In the hierarchy of user-defined datatypes, a subtype is always dependent on its supertype.
Synonym
An alias for a table, view, materialized view, sequence, procedure, function, package, type, Java class schema object,
user-defined object type, or another synonym.
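A sketch of both flavours (object names are illustrative):

```sql
-- A public synonym makes SCOTT.EMP reachable as plain EMP for every user
CREATE PUBLIC SYNONYM emp FOR scott.emp;

-- A private synonym is owned by, and visible to, a single schema
CREATE SYNONYM e FOR scott.emp;
```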
System Change Number (SCN)
A stamp that defines a committed version of a database at a point in time. Oracle assigns every committed transaction a
unique SCN.
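The current SCN can be inspected directly (the V$DATABASE.CURRENT_SCN column assumed here is available from Oracle 10g onwards):

```sql
SELECT current_scn FROM v$database;

-- Equivalently, through a supplied package
SELECT dbms_flashback.get_system_change_number FROM dual;
```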
System Global Area (SGA)
A group of shared memory structures that contain data and control information for one Oracle database instance. If
multiple users are concurrently connected to the same instance, then the data in the instance’s SGA is shared among the
users. Consequently, the SGA is sometimes referred to as the shared global area.
Unicode
A way of representing all the characters in all the languages in the world. Characters are defined as a sequence of
codepoints: a base codepoint followed by any number of surrogates. There are 64K base codepoints.
Unicode column
A column of type NCHAR, NVARCHAR2, or NCLOB guaranteed to hold Unicode.
User process
User processes execute the application or Oracle tool code.
UTC
Coordinated Universal Time, previously called Greenwich Mean Time, or GMT.
View
A view is a custom-tailored presentation of the data in one or more tables. A view can also be thought of as a "stored
query." Views do not actually contain or store data; they derive their data from the tables on which they are based. Like
tables, views can be queried, updated, inserted into, and deleted from, with some restrictions. All operations performed on
a view affect its base tables.
Track 1Z0-010
Price Rs. 5,440/-
No. of Questions
Pass Mark
Exam Time
Discount Offered 40%
Track 1Z0-020
Price Rs. 5,440/-
No. of Questions
Pass Mark
Exam Time
Discount Offered 40%
1. Professional Level - OCP (2 exams and one Oracle University hands-on course within the Oracle 9i DBA learning
path). In order to get the OCP certificate you have to clear the OCA papers first. You need to clear the following exams:
2. Upgrade Level - If you do not want to take the hands-on course from Oracle you can clear the Oracle 8i track first
(5 exams) and then take the upgrade exam (1 exam).
New Features for Oracle 7.3 and Oracle 8 OCPs
Upgrade from Oracle 8i to 9i DBA
Track 1Z0-030 1Z0-035
Price Rs. 5,440/- Rs. 5,440/-
No. of Questions 53 84
Pass Mark 37 58
Exam Time 90 Min 2 Hours
Discount Offered 40% 40%
Track 1Z0-042
Price Rs. 5,440/-
No. of Questions 77
Pass Mark 51
Exam Time 2 Hours
Reg. Req. Introduction to Oracle 9i SQL & PL/SQL
Course Fee Rs. 10,500/- (SQL * PLUS)
Discount Offered 40%
2. Professional Level - OCP (1 exam and one Oracle University hands-on course within the Oracle 10g DBA learning
path). In order to get the OCP certificate you have to clear the OCA papers first. You need to clear the following exam:
Track 1Z0-043
Price Rs. 5,440/-
No. of Questions 70
Pass Mark 46
Exam Time 90 Min
Reg. Req. Introduction to Oracle 9i SQL & PL/SQL
Course Fee Rs. 10,500/- (SQL * PLUS)
Discount Offered 40%
3. Upgrade Level - If you do not want to take the hands-on course from Oracle you can clear the Oracle 8i, 9i track
first (5 exams) and then take the upgrade exam (1 exam).
Track 1Z0-040
Price Rs. 5,440/-
No. of Questions 61
Pass Mark 37
Exam Time 90 Min
Discount Offered 40%
105.9. Website Links for Oracle OCP Dumps and Oracle FAQs
www.certsbraindumps.com
www.best-braindumps.com
www.certificationking.com
www.testking.com
www.braindumps.com
www.dbaclick.com
www.selftestsoftware.com
www.actualtests.com
www.orafaq.com
www.dbasupport.com
106. FAQs
1. Which of the following files is read to start the instance?
a. Controlfile b. Initialization Parameter file
c. Data files d. None
Answer: B
Explanation: Oracle reads the init.ora parameter file to start the instance.
2. Which of the following files is read when the database is mounted?
a. Controlfile b. Initialization Parameter file
c. Data files d. All of the above
Answer: A
Explanation: The controlfile is read while mounting the database.
3. Which of the following actions will occur if we issue the STARTUP command at the SQL prompt?
a. Instance is started b. Database is mounted
c. Database is opened d. All of the above
Answer: D
Explanation: The STARTUP command performs all of the above actions.
4. What do dirty buffers comprise?
a. Buffers modified but written to disk b. Buffers not yet modified
c. Buffers only accessed for data d. Buffers modified but not yet written to disk
Answer: D
Explanation: Dirty buffers are modified buffers in the database buffer cache (SGA) that have not yet been written to disk.
5. Which init.ora parameter is used to size database buffer cache?
a. db_cache_buffers b. data_block_buffers
c. block_buffers d. None answer
Answer: D
Explanation: To change the size of the database buffer cache we have to specify db_cache_size or
db_block_buffers; none of the listed parameters is correct.
6. What does the library cache consist of?
a. Parsed versions of executed SQL statements b. Compiled versions of PL/SQL program units
c. Both a and b d. Metadata
Answer: C
Explanation: The library cache holds both parsed SQL statements and compiled PL/SQL program units.
7. How can we size shared pool?
a. shared_pool_size b. db_shared_pool
c. set db_shared_pool d. alter database
Answer: A
Explanation: We have to specify shared_pool_size=<value> in the initialization parameter file.
8. What cannot be contents of the Program Global Area?
a. User's program variables b. User's session information
c. User's own SQL statements d. User-defined cursors
Answer: C
Explanation: The PGA contains program variables, session information, and cursors, but not the SQL statement text itself.
9. What happens during the process of checkpoint?
a. It is the event of recording redo log buffer entries onto the redo log files b. It is the event of recording the number of rollbacks and commits
c. It is the event of recording modified blocks in the database buffer cache onto the data files d. None of the above
Answer: C
Explanation: When a checkpoint occurs it invokes DBWR to write dirty blocks from the database buffer cache
to the datafiles.
10. Which of the following is not a function of SMON?
a. Crash recovery b. Clean-up of temporary segments
c. Coalescing free space d. Taking care of the background processes of the system
Answer: D
Explanation: SMON performs crash recovery, cleans up temporary segments, and coalesces free space, but
it does not take care of the system's background processes.
11. Which of the following files is read to start the instance?
a. Controlfile b. Initialization Parameter file
c. Data files d. None
Answer: B
Explanation: Oracle reads the init.ora parameter file to start the instance.
12. Which of the following files is read when the database is mounted?
a. Controlfile b. Initialization Parameter file
c. Data files d. All of the above
Answer: A
Explanation: The controlfile is read while mounting the database.
13. Which of the following actions will occur if we issue the STARTUP command at the SQL prompt?
a. Instance is started b. Database is mounted
c. Database is opened d. All of the above
Answer: D
Explanation: The STARTUP command performs all of the above actions.
14. Which of the following cannot be a part of the System Global Area?
a. Database buffer cache b. Large pool
c. Program global area d. Java pool
Answer: C
Explanation: The PGA (Program Global Area) is not part of the SGA; it is a separate memory structure.
15. What do dirty buffers comprise?
a. Buffers modified but written to disk b. Buffers not yet modified
c. Buffers only accessed for data d. Buffers modified but not yet written to disk
Answer: D
Explanation: Dirty buffers are modified buffers in the database buffer cache (SGA) that have not yet been written to disk.
16. Which init.ora parameter is used to size database buffer cache?
a. db_cache_buffers b. data_block_buffers
c. block_buffers d. None answer
Answer: D
Explanation: To change the size of the database buffer cache we have to specify db_cache_size or
db_block_buffers; none of the listed parameters is correct.
17. What does the library cache consist of?
a. Parsed versions of executed SQL statements b. Compiled versions of PL/SQL program units
c. Both a and b d. Metadata
Answer: C
Explanation: The library cache holds both parsed SQL statements and compiled PL/SQL program units.
18. How can we size shared pool?
a. shared_pool_size b. db_shared_pool
c. set db_shared_pool d. alter database
Answer: A
Explanation: We have to specify shared_pool_size=<value> in the initialization parameter file.
19. What cannot be contents of the Program Global Area?
a. User's program variables b. User's session information
c. User's own SQL statements d. User-defined cursors
Answer: C
Explanation: The PGA contains program variables, session information, and cursors, but not the SQL statement text itself.
20. What happens during the process of checkpoint?
a. It is the event of recording redo log buffer entries onto the redo log files b. It is the event of recording the number of rollbacks and commits
c. It is the event of recording modified blocks in the database buffer cache onto the data files d. None of the above
Answer: C
Explanation: When a checkpoint occurs it invokes DBWR to write dirty blocks from the database buffer cache
to the datafiles.
21. Which of the following is not a function of SMON?
a. Crash Recovery b. Clean up of temporary segments
c. Coalescing free space d. Taking care of the background processes of the system
Answer: D
Explanation: SMON performs crash recovery, cleanup of temporary segments and coalescing of free space, but it does not take care of background processes.
22. The total number of Base tables that get created into sys account?
a. 1000 b. 82
c. 1762 d. 100
Answer: C
Explanation: 1762 base tables are created in the sys account.
23. What is the status of your database when we run the create database script (e.g. cr8demo.sql)?
a. Nomount b. mount
c. open d. shutdown
Answer: A
Explanation: The database must be in NOMOUNT status, because mounting a database requires a control file, which does not exist yet.
24. Which users are created automatically the moment the database is created?
a. sys,system b. sys,system,scott
c. scott d. no users get created by default
Answer: A
Explanation: The sys and system users are created when we create a database.
25. Which tablespace accommodates the base tables?
a. Undotbs b. temp
c. system d. user_data
Answer: C
Explanation: Oracle creates the base tables in the SYSTEM tablespace.
26. What is the default tablespace for the sys user?
a. user_data b. system
c. temp d. undotbs
Answer: B
Explanation: Default tablespace for SYS user is SYSTEM.
27. Data Dictionary Views are Static?
a. True b. False
Answer: A
Explanation: The data dictionary views are the DBA_*, ALL_* and USER_* views, and they are static.
28. Is the database creation successful with this command?
SQL> create database;
a. True b. False
Answer: A
Explanation: Yes; Oracle will use OMF (Oracle Managed Files) to create the control files and datafiles.
29. Which of the following is not true when the command 'SHUTDOWN NORMAL' is issued?
a. Database and redo buffers are written to disk
b. The next startup does not require any instance recovery
c. New connections can be made
d. Background processes are terminated
Answer: C
Explanation: When we issue SHUTDOWN NORMAL, Oracle waits for the connected users to disconnect but does not allow any new user to log in to the database.
30. Which of the following is not the content of parameter file?
a. Names and locations of controlfiles b. Information on UNDO segments
c. Names and locations of datafiles d. Allocations for memory structures of SGA
Answer: C
Explanation: The parameter file does not maintain the names and locations of the datafiles; those are maintained by the control file. The remaining options are contents of the parameter file.
31. Can we create a tablespace with multiple datafiles at a single stroke?
a. Yes b. No
Answer: A
Explanation: We can create a tablespace in a single stroke with the command SQL> create tablespace <tablespacename> datafile '<path of datafile 1>' size 2m, '<path of datafile 2>' size 3m; in this way we can specify multiple datafiles for one tablespace.
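As a concrete sketch of the command above (the tablespace name and paths are hypothetical):

```sql
-- Create one tablespace backed by two datafiles in a single statement
CREATE TABLESPACE app_data
  DATAFILE '/u01/oradata/demo/app_data01.dbf' SIZE 2M,
           '/u02/oradata/demo/app_data02.dbf' SIZE 3M;
```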
32. Can a datafile be associated with two different tablespaces?
a. Yes b. No
Answer: B
Explanation: One datafile can be associated with only one tablespace, never with more than one.
33. Suppose your database has a MAXDATAFILES limit of 80 and we want to add files above this limit. Which file do we need to modify?
a. Controlfile b. Init.ora
c. Alertfile d. None
Answer: A
Explanation: In the control file we have to change MAXDATAFILES=<number> and recreate the control file; only then will the datafile limit for the database change.
34. Select the view that tells us about all the tablespaces in your database
a. v$database b. dba_tables
c. v$tablespace d. dba_table_space
Answer: C
Explanation: The V$TABLESPACE view gives the tablespace details of a database.
35. Can we bring the SYSTEM tablespace offline when the database is up?
a. Yes b. No
Answer: B
Explanation: We cannot take the SYSTEM tablespace offline because it contains the base tables.
36. What is the default initial extent size when the tablespace is dictionary managed?
a. 64k b. 20K
c. 10 blocks d. 5 blocks
Answer: D
Explanation: When we create a dictionary-managed tablespace, the default initial extent is 5 * <block_size>.
37. Which parameter should be added in init.ora file for creating tablespace with multiple blocksizes.
a. db_nk_cache_size=n b. block_size=n
c. multiple_blocks=n d. multiple_cache_size=n
Answer: A
Explanation: We have to add db_Nk_cache_size=<value>, where N is 2, 4, 8, 16 or 32. By specifying a value for a particular block size, Oracle allocates buffers in the database buffer cache for that block size; whenever we perform a transaction on a tablespace of that block size, it uses those buffers.
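A minimal sketch of the two steps (the cache size, tablespace name and datafile path are illustrative):

```sql
-- 1. Allocate a buffer cache for 16K blocks
ALTER SYSTEM SET db_16k_cache_size = 32M;

-- 2. Only now can a tablespace with a non-default 16K block size be created
CREATE TABLESPACE big_blocks
  DATAFILE '/u01/oradata/demo/big01.dbf' SIZE 10M
  BLOCKSIZE 16K;
```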
38. What is the value of the storage clause pctincrease when the tablespace extent management is local (uniform)?
a. 50% b. 0%
c. 100% d. 10%
Answer: B
Explanation: PCTINCREASE for locally managed tablespace is 0%.
39. What is the command that combines all the smaller contiguous free extents in the tablespace into one larger extent?
a. Merge b. sum
c. coalesce d. add extents
Answer: C
Explanation: COALESCE is used to combine all the smaller contiguous free extents in the tablespace into one larger extent. MERGE and SUM are SQL constructs related to tables, and ADD EXTENTS is not valid.
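For a dictionary-managed tablespace the command looks like this (the tablespace name is hypothetical):

```sql
-- Combine adjacent free extents into larger ones
ALTER TABLESPACE app_data COALESCE;
```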
40. If the SYSTEM datafile is to be renamed, the database must be in which mode?
a. Nomount b. mount
c. open d. close
Answer: B
Explanation: To rename a datafile belonging to the SYSTEM tablespace, the database should be in MOUNT state, because SYSTEM contains all the base tables and an open database continuously updates them even when we are not performing transactions.
41. After creating a tablespace what is the default value for segment space management in 9i?
a. Auto b. dictionary
c. local d. manual
Answer: D
Explanation: It is MANUAL in 9i; in Oracle 10g it is AUTO.
42. A tablespace was created with extent management as local. After that the tablespace extent management
was changed from local to dictionary. What would be the next extent size?
a. 64k b. 1m
c. 10k d. null
Answer: B
Explanation: It is 1m after the change.
43. If we create a tablespace with extent management dictionary and block size 8k, with default storage initial 10k, what value will dba_tablespaces show for initial_extent after creating this tablespace?
a. 10k b. 64k
c. 40k d. 1m
Answer: C
Explanation: If extent management is dictionary, the database requires an initial extent size of at least (block_size * 5); here that is 8k * 5 = 40k.
44. Can we create a table with our own parameters like (initial 300k next 300k minextents) on a tablespace whose extent management is local?
a. Yes b. No
Answer: A
Explanation: Yes, we can create it.
45. A locally managed tablespace is made offline. What is the status of the bytes column in dba_data_files?
a. It shows the orginal bytes b. It shows the null value
c. It shows the used value d. None
Answer: B
Explanation: It will show a null value.
46. Can we resize a datafile while the related tablespace is in offline mode?
a. No b. Yes
Answer: A
Explanation: No, we cannot do it.
47. The DBA changed a datafile's autoextend value to on. What is the default value of the increment_by column located in dba_data_files?
a. 100m b. om
c. 10m d. 1m
Answer: D
Explanation: When we change a datafile to AUTOEXTEND ON, the increment_by column in DBA_DATA_FILES defaults to 1m, meaning that each time the datafile fills up completely, Oracle increases its size by 1m.
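A sketch of enabling autoextend on one datafile (the path is hypothetical; the NEXT and MAXSIZE clauses are optional):

```sql
ALTER DATABASE DATAFILE '/u01/oradata/demo/app_data01.dbf'
  AUTOEXTEND ON NEXT 1M MAXSIZE 500M;
```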
48. Can we drop an object when the tablespace is in read-only mode?
a. No b. Yes
Answer: B
Explanation: Yes, we can; DROP is a data dictionary operation and does not write to the read-only tablespace itself.
49. We are trying to create a table with our own storage parameters in a locally managed tablespace. Guess what happens?
a. Unable to create
b. It gets created with the given storage parameters
c. It will create the table with the default storage parameters at the tablespace level
d. None
Answer: C
Explanation: It will create the table with the default storage parameters defined at the tablespace level.
50. Extent deallocation for a segment is done when _______________
a. dropped, truncate b. delete, truncate
c. delete, alter d. none
Answer: A
Explanation: When we drop or truncate an object, Oracle deallocates the extents for that segment.
51. What type of data is available in rollback segments
a. previous image b. post updated image
a. timed_statistics=true b. resource_limits=true
c. resource_limit=true d. none
Answer: C
Explanation: Resource_limit=true we have set init.ora file so profile will effect on user.
74. Which privilege is necessary for a normal user to change his password?
a. create any table b. create session
c. alter user d. alter any user
Answer: B
Explanation: A user requires only the create session privilege to change his own password, because he owns his whole schema.
75. How to manually lock user account?
a. user <username> account lock
b. alter user <username> account lock
c. alter user <username> identified by <new password>
d. none
Answer: B
Explanation: Alter user <username> account lock;
76. From which view user can see his privileges?
a. user_role_privs b. dba_sys_privs
c. session_privs d. role_role_privs
Answer: C
Explanation: SESSION_PRIVS shows the privileges of the current user.
77. One user has quota on two tablespaces. Can he create his tables in other than his default tablespace?
a. No b. Yes
Answer: B
Explanation: Yes, if a user has quota on several tablespaces he can create his own objects in any of those tablespaces.
78. The system user granted DBA to a normal user. Can this user now revoke DBA from system?
a. Yes b. No
Answer: A
Explanation: Yes, a user who holds the DBA role can revoke it from other grantees, including system, because the role carries the privileges needed to grant and revoke it.
79. In which directory do we create the password file for sys?
a. $HOME b. $ORACLE_HOME/rdbms/admin
c. $ORACLE_HOME/dbs d. $ORACLE_HOME/sqlplus/admin
Answer: C
Explanation: We have to create the password file in the $ORACLE_HOME/dbs directory; only then will Oracle read that password file.
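The password file itself is created with the orapwd utility; a sketch, assuming a hypothetical SID of DEMO and a made-up password:

```
$ orapwd file=$ORACLE_HOME/dbs/orapwDEMO password=secret entries=5
```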
80. What is the default location of listener.ora file?
a. $ORACLE_HOME/rdbms/admin b. $ORACLE_HOME/dbs
c. $ORACLE_HOME/network/admin/samples d. $ORACLE_HOME/network/tools/samples
Answer: C
Explanation: By default the listener.ora file is available in $ORACLE_HOME/network/admin/samples.
81. The listener service is stopped after giving a connection to a client. What is the status of the client?
a. the connection will be lost
b. the connection will continue after giving an error message
c. the client session hangs
d. the connection will continue without any message
Answer: D
Explanation: The connection continues without any message, because the listener has already connected that user to the database; it is needed only to establish new connections.
82. What is the command to Start the Listener for a particular parameters set?
a. lsnrctl reload <listenername> b. lsnrctl start <listenername>
c. tnsping <tnsname> start d. lsnrctl startall;
Answer: B
Explanation: LSNRCTL START <listenername> starts the listener. lsnrctl reload is used when we add more services to listener.ora. 'lsnrctl start all' does not work because 'all' is not a listener name (every listener has its own name), and the tnsping utility is not for the listener.
83. Can we start the listener service for a database which is not yet started/opened?
a. Yes
b. No [ minimum the database should be in mount state ]
c. No [ first the database should be opened ]
d. None of the above
Answer: A
Explanation: The listener is independent of the database; that is why we can start or stop the listener even without the database.
84. In which file do we set this parameter FAILOVER = ON to use failover server option of Oracle Networking?
a. init.ora file b. tnsnames.ora
c. listener.ora d. both listener.ora & tnsnames.ora
e. controlfile
Answer: B
Explanation: In the TNSNAMES.ORA file we have to specify FAILOVER=ON to use the failover server option of Oracle networking.
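A tnsnames.ora entry using failover might look like this (the host names and service name are hypothetical):

```
DEMO =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (FAILOVER = ON)
      (ADDRESS = (PROTOCOL = TCP)(HOST = prod-host)(PORT = 1521))
      (ADDRESS = (PROTOCOL = TCP)(HOST = standby-host)(PORT = 1521))
    )
    (CONNECT_DATA = (SERVICE_NAME = demo))
  )
```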
85. Can we start multiple database services within one listener service?
a. No b. Yes
c. Yes & its Only in Oracle9i
Answer: B
Explanation: Yes, we can register any number of services with one listener.
86. Can I have multiple listeners for a single database?
a. Yes b. No
c. Yes & its only in Oracle9i
Answer: A
Explanation: Yes, we can configure any number of listeners for one database.
87. Which view do we query to find out the users who are connected using Oracle networking?
a. DBA_USERS b. DBA_NET_INFO
c. V$SESSION d. DBA_CLIENT_INFO
Answer: C
Explanation: We can query V$SESSION to find all the information about the users logged in to the database: where each user logged in from, at what time he logged in, and so on.
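For example, a quick query against V$SESSION (the column choice is illustrative):

```sql
-- Show who is connected, from which machine, and since when
SELECT username, machine, program, logon_time
  FROM v$session
 WHERE type = 'USER';
```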
88. After some modifications to listener file, How can I refresh the already running listener service without
stopping it?
a. lsnrctl start <listenername> b. lsnrctl reload <listenername>
c. lsnrctl status <listenername> d. lsnrctl restart <listenername>
Answer: B
Explanation: We can use the reload option to refresh an already running listener.
89. Which operations we can perform using network connections?
a. DML b. DDL
c. A&B d. Only DML's
Answer: C
Explanation: We can perform both DML and DDL operations over a network connection, because Oracle networking logs us directly into the user schema.
90. Which background process is needed to create materialized view?
a. ckpt & cjq0 b. lgwr & reco
c. dbw0 & reco d. reco & cjq0
Answer: D
Explanation: The cjq0 process is required to refresh materialized views, and reco is required for maintaining distributed transactions between databases.
91. Which parameter do we use to start the reco process?
a. job_queue_process b. reco_processes
c. distributed_transactions d. global_names
Answer: C
Explanation: The DISTRIBUTED_TRANSACTIONS parameter is responsible for distributed transactions. From 9i onwards reco is a mandatory background process, so Oracle deprecated this parameter.
92. Is it mandatory to put the parameter global_names=true for creating database links?
a. Yes b. No
Answer: B
Explanation: It is not mandatory to set global_names=true for creating a database link; this parameter must be set only when we are creating global database links.
93. For creating database links is it necessary to put some value for distributed_transactions parameter?
a. No [ not required at client ]
b. Yes [ needed only at client ]
c. Yes [ needed at both client and server ]
d. Yes [ needed only at server ]
Answer: B
Explanation: It is required only at the client side.
94. Which background process will refresh the materialized view on a given refresh interval?
a. cjq0 b. reco
c. arc0 d. ckpt
Answer: A
Explanation: The CJQ0 background process refreshes the materialized view at every refresh interval.
95. Can we do any DML operations on materialized view?
a. Yes [ only it is not possible with the refresh fast option ]
b. Yes [ only with the refresh fast option ]
c. No
d. Yes
Answer: C
Explanation: No; a materialized view is read-only, so we cannot perform any DML operations on materialized views.
96. How many refresh options do we have for creating materialized view?
a. 2 b. 3
c. 1 d. 4
Answer: B
Explanation: We have only three refresh options for creating a materialized view: COMPLETE, FAST and FORCE.
97. Can we manually refresh any materialized view?
a. Yes b. No
Answer: A
Explanation: Yes, we can refresh a materialized view manually using the DBMS_MVIEW package.
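A sketch of a manual refresh from SQL*Plus (the materialized view name is hypothetical; 'C' requests a complete refresh, 'F' a fast refresh):

```sql
EXEC DBMS_MVIEW.REFRESH('SCOTT.EMP_MV', 'C');
```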
98. What is the segment type for a materialized view?
a. View b. table
c. materialized view d. synonym
Answer: B
Explanation: When we create a materialized view, Oracle creates a local table in the database, so the segment type of a materialized view is TABLE.
99. What is the syntax to drop materialized view?
a. SQL> DROP VIEW <view_name>;
b. SQL> DROP TABLE <mview_name> cascade;
c. SQL> DROP MATERIALIZED VIEW <mview_name>;
d. SQL> DROP <mview_name> cascade;
Answer: C
Explanation: SQL> DROP MATERIALIZED VIEW <mview_name>; the other options are not valid for materialized views.
100. What is the status of the tablespace when it is getting exported (transportable tablespace)?
a. read write b. read only
c. offline d. pending offline
Answer: B
Explanation: Since the data must not be manipulated while it is being exported, the tablespace must be in read-only mode.
101.What should be the status of the database when we are taking a full database backup?
a. Mount b. shutdown
c. open d. nomount
Answer: C
Explanation: In the open state both the objects' definitions and their data are available, and in the other states they are not; that is why the database must be in open mode.
102.What does the compress parameter mean?
a. compress all the extents of a table into one single big extent
b. compress the data of a tablespace
c. compress the datafiles
d. change the data to binary mode
Answer: A
Explanation: By default the compress parameter value is 'Y'. If you don't want to compress all the extents into a single extent, specify compress=N while taking the export.
103.Which one of these levels is NOT supported by the export utility?
a. table level b. schema level
c. database level d. block level
Answer: D
Explanation: exp is a logical backup utility; it does not support block-level backup.
104.What does the volsize parameter in exports mean?
a. volume of database exported
b. number of bytes to write to each tape volume
c. the max volume of data we can export at a single stroke
d. the min volume of data we can export at a single stroke
Answer: B
Explanation: The volsize parameter gives the number of bytes that can be written to each tape volume.
105.Suppose we had taken a full database export and now want to import it into a non-Oracle database. Can we do it?
a. Yes
b. No
c. Possible, but we need to use the sqlloader utility
d. Yes, if that database is also of the same size
Answer: B
Explanation: No, it is not possible, because in the dump file the object definitions and the data are in Oracle's own format.
106.What does the feedback parameter in exports state?
a. it states whether the export is successful or unsuccessful
b. it gives us feedback after every 'n' rows are exported
c. no such parameter is available
d. it states whether constraints are exported or skipped
Answer: B
Explanation: If we give a value for the feedback parameter, the export reports progress after that many rows have been exported; the default value is 0.
107.Can we export a table's structure but not its records?
a. Yes it is possible b. No it is not possible
Answer: A
Explanation: We can export only a table's structure by giving the parameter rows=N.
108.What is the purpose of the destroy parameter while importing?
a. destroys the previous export and creates a fresh export
b. destroys all the constraints on a table
c. destroys all the indexes associated with a table
d. destroys the datafile and recreates the datafile
Answer: D
Explanation: If you give destroy=y then the import destroys the existing datafiles and recreates them.
109.Is it possible to export a single partition of a table?
a. Yes b. No
Answer: A
Explanation: Yes, by giving the parameter tables=(<table_name>:<partition_name>)
110.Which Oracle utility is faster for downloading data into a dumpfile?
a. Exp b. expdp
c. sqlldr d. all of the above
Answer: B
Explanation: With EXPDP we can start multiple processes to download the data by specifying PARALLEL=<value>.
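A sketch of a parallel Data Pump export (the credentials, directory object and file names are hypothetical; %U generates one file per worker):

```
$ expdp system/manager schemas=scott directory=DPUMP_DIR dumpfile=scott_%U.dmp parallel=2
```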
111.Which role is required for performing a FULL Data Pump export?
a. EXP_FULL_DATABASE b. DBA
c. A or B d. Both A and B
e. None
Answer: C
Explanation: Either the DBA or the EXP_FULL_DATABASE role is required to take a full export of the database.
112.In which schema is the master table created when performing expdp?
a. SYS b. SYSTEM
c. Schema through which the utility is invoked d. No table will be created
e. None
Answer: C
Explanation: Data Pump creates the master table in the schema of the user through which the expdp utility is invoked, and deletes it from that schema after the job completes.
113.Which parameter of expdp should I use when a particular tablespace needs to be backed up?
a. TABLESPACES b. TABLESPACE
c. TRANSPORT_TABLESPACE d. Both A and C
Answer: A
Explanation: TABLESPACE is not a valid parameter, and transport_tablespace is used when we are transporting a tablespace. For exporting one tablespace we have to use TABLESPACES=<tablespacename>.
114.Which parameter should I use to speed up expdp?
a. PROCESSES b. PARALLEL
c. THREADS d. None
Answer: B
Explanation: To speed up expdp we should use PARALLEL. PROCESSES and THREADS are not valid expdp parameters.
115.Can I interrupt an expdp process?
a. Yes b. No
Answer: A
Explanation: Yes, we can. This is the main advantage of expdp over the traditional exp/imp.
116.Which two PL/SQL packages are used by Oracle Data Pump?
a. UTL_DATADUMP b. DBMS_METADATA
c. DBMS_DATADUMP d. UTL_FILE
e. DBMS_SQL
Answer: B, C
Explanation: Oracle provides dbms_metadata and dbms_datapump for the Oracle Data Pump utility. With the help of these packages we can start, stop and view the Data Pump jobs.
117.Which command-line parameter of the expdp and impdp clients connects us to an existing job?
a. CONNECT_CLIENT b. CONTINUE_CLIENT
c. APPEND d. ATTACH
Answer: D
Explanation: Continue_client resumes logging of a running job. Attach is used to attach to an existing job, and the remaining options are not valid expdp parameters.
118. Which parameter is not valid for the impdp client?
a. REMAP_TABLE b. REMAP_SCHEMA
c. REMAP_TABLESPACE d. REMAP_DATAFILE
e. None
Answer: A
Explanation: REMAP_TABLE is not a valid impdp parameter; all the others are valid. To see all the valid impdp parameters, run $ impdp help=y.
119. Users are experiencing delays in query response time in a database application. Which area should we look
at first to resolve the problem?
a. Memory b. SGA
c. SQL statements d. I/O
Answer: C
Explanation: Inefficient SQL coding may cause bottlenecks in application performance.
120. Which of the following is a measurable tuning goal that can be used to evaluate system performance?
a. Number of concurrent users b. Database size
c. Making the system run faster d. Database hit percentages
Answer: D
Explanation: If the overall database hit percentage is more than 85%, we can say the performance of the DB is good.
121.Performance has degraded on your system, and we discover paging and swapping are occurring. What is a possible cause of this problem?
a. The SGA is too small b. PGA is too large
c. SGA is too large d. None
Answer: B
Explanation: If the PGA size is too large, users can still communicate with the server but performance degrades when not enough space is available in RAM; to support the user transactions the operating system starts swapping and paging.
122.Which Oracle facility is used to format the output of a SQL trace?
a. Analyze b. tkprof
c. explain plan d. server manager
Answer: B
Explanation: tkprof is the utility that converts an Oracle trace file into a readable ASCII file.
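A typical invocation might be (the file names are hypothetical; sort=exeela orders statements by elapsed execution time):

```
$ tkprof demo_ora_1234.trc demo_report.txt sys=no sort=exeela
```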
123.Which role is needed for a user to run autotrace facility?
a. Plustrace b. plusrole
c. plusgrant d. none
Answer: A
Explanation: To run autotrace the user should have the plustrace role.
124.Which script creates plan_table?
a. utlexcpt.sql b. utlmontr.sql
c. utlsidsx.sql d. utlxplan.sql
Answer: D
Explanation: To create a plan table we have to execute utlxplan.sql from the $ORACLE_HOME/rdbms/admin directory.
125.Which tkprof option is used to include recursive SQL statements?
a. SYS b. record
c. sort d. insert
Answer: A
Explanation: If we specify SYS=Y then tkprof dumps the recursive SQL statements as well.
126.We query the v$librarycache view. Which column contains the executions of an item stored in the library cache?
a. Reloads b. Pins
c. Invalidations d. Gets
Answer: B
Explanation: In the v$librarycache view, the PINS column contains the executions of an item.
127.Which change would we make if we wanted to decrease the number of disk sorts?
a. Increase sort_area_retained_size b. Decrease sort_Area_size
c. Increase sort_area_size d. Decrease sort_Area_retained_size
Answer: C
Explanation: We have to increase the value of sort_area_size to decrease the number of disk sorts.
128.What happens in an M.T.S.-configured environment?
a. A server process is allocated for a client process
b. A server process is allocated for many client processes
c. A server process is deallocated for a client process
d. none
Answer: B
Explanation: One server process is allocated for many client processes; it responds to requests on a first-come, first-served basis.
129.Which part of SGA becomes mandatory if we configure M.T.S?
a. Java pool b. Shared pool
c. Database buffer cache d. Large pool
Answer: D
Explanation: While configuring MTS it is mandatory to set large_pool_size=<value>, because Oracle creates the user global area in the large pool; otherwise it uses the shared pool for the user global area.
130.Where does Oracle store user session and cursor state information in a MTS environment?
a. User global area (UGA) b. Program global area (PGA)
c. System global area (SGA) d. None
Answer: A
Explanation: The user global area (UGA) stores user session and cursor state information in MTS. The SGA is for the database, and the PGA is used with a dedicated server.
131.Which of the following options of the dispatchers parameter of init.ora would we consider while starting the dispatchers of MTS? (choose two)
a. Dispatchers=3 b. Protocol=tcp
c. Port=1768 d. All
Answer: A, B
Explanation: We can start dispatchers for different protocols, so it is mandatory to specify both the protocol and the number of dispatchers.
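An init.ora fragment for MTS dispatchers might look like this (the values are illustrative; in older releases the parameter was named MTS_DISPATCHERS):

```
DISPATCHERS = "(PROTOCOL=TCP)(DISPATCHERS=3)"
SHARED_SERVERS = 5
```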
132.MTS is configured in our database and we want to use a dedicated connection. Which of the following clauses will we use in the tnsnames.ora file?
a. server=shared b. srvr=dedicated
c. client=dedicated d. All
Answer: B
Explanation: We have to specify srvr=dedicated to request a dedicated server connection.
133.Which init.ora parameter informs the database on which listener process listens for database connection requests?
a. LOCAL_LISTENER b. SHARED_SERVERS
c. MAX_SHARED_SERVERS d. Not possible
Answer: A
Explanation: LOCAL_LISTENER informs the database which listener process listens for its requests. SHARED_SERVERS starts the shared server processes, and MAX_SHARED_SERVERS sets the maximum number of shared server processes we can start for this database.
134.Which of the following is true for MTS?
a. start the listener and then start the database
b. start the database and then start the listener
c. start the database and listener simultaneously
d. none
Answer: A
Explanation: We have to start the listener first and then the database, because we specified the local_listener parameter in init.ora; in it we specified the address of the same listener that is defined in listener.ora.
135. Which of the following decides how many dispatchers your MTS server shall have?
a. MTS_DISPATCHERS b. SHARED_SERVERS
c. LOCAL_LISTENER d. None
Answer: A
Explanation: MTS_DISPATCHERS decides how many dispatchers the MTS server shall have; the right number depends on how busy your dispatchers are.
136. Which of the following view provides information about dispatcher processes like name, network address …
a. v$queue b. v$sga
c. v$dispatcher_rate d. v$dispatcher
Answer: D
Explanation: V$DISPATCHER provides information about the dispatchers we have started, their network addresses, etc. As for the others: v$sga gives SGA information, v$queue gives message queue information, and v$dispatcher_rate shows how frequently a particular dispatcher processes requests.
137.Which of the following provide statistics for the dispatcher processes?
a. v$queue b. v$sga
c. v$dispatcher_rate d. v$dispatcher
Answer: C
Explanation: V$DISPATCHER_RATE provides the dispatcher statistics: how frequently a particular dispatcher processes requests.
138.Which of the following contains information about shared server message queues?
a. v$queue b. v$sga
c. v$dispatcher_rate d. v$dispatcher
Answer: A
Explanation: V$QUEUE contains the information about the message queues.
139.Which of the following view provides information about connections to the database through dispatchers?
a. v$queue b. v$circuit
c. v$dispatcher_rate d. v$dispatcher
Answer: B
Explanation: V$CIRCUIT provides the information about all the connections made to the database through dispatchers.
140.How many types of partitions can we create?
a. 3 b. 4
c. 2 d. unlimited
Answer: B
Explanation: There are 4 partition types: range partitioning, hash partitioning, list partitioning and composite partitioning.
141.We are trying to create a partition based on range. Can we create the subpartition also on range?
a. Yes b. No
Answer: B
Explanation: We can create a subpartition on a range-partitioned table using hash, not using range.
142.Which of the following statements is true?
a. SQL> update <table> set <column>=<value> partition (partition name);
b. SQL> update <table> set <column>=<value> where deptno in (select <column> from <table> partition (partition name));
c. SQL> update <table> set partition (partition name) set <column>=<value>;
d. none of the above
Answer: B
Explanation: We cannot directly address partition values in an update; both a and c are wrong syntax. If we want to modify one partition's values we have to use a subquery.
143.Can we create all partitions on the same tablespace?
a. Yes b. No
Answer: A
Explanation: Specifying different tablespace for each partition is optional. We can have all the partitions in a
single tablespace.
144.A user is trying to create a table with four (4) partitions. He specified the first three partitions in different tablespaces but did not specify any tablespace for the last partition. Which tablespace will it take?
a. Third partition tablespace b. User default tablespace
c. System tablespace d. One of the three partitioned tablespaces
Answer: B
Explanation: If we don't specify a tablespace for a partition, it takes the user's default tablespace.
145.How many types of indexes can we create on a partitioned table?
a. 3 b. 5
c. 7 d. 4
Answer: D
Explanation: We can create a global index, a local index, a local prefixed index and a local non-prefixed index; these four types of indexes can be created on a partitioned table.
146.Can we create a partitioned index on a non-partitioned table?
a. Yes b. No
Answer: A
Explanation: We can create a partitioned index on a non-partitioned table, or a non-partitioned index on a partitioned table.
147.Can we create a subpartition after creating the partition?
a. Yes b. No
Answer: B
Explanation: Once we create a table we can't change it into a partitioned table; in the same way, once we create a partitioned table we can't go for subpartitions afterwards.
148.Can we create a local index by giving the range for your partitions?
a. Yes b. No
Answer: B
Explanation: While creating a local index we have to follow the same number of partitions as the table; we can't specify a range.
149.Can we create a partitioned index with subpartitions?
a. Yes b. No
Answer: A
Explanation: Yes, we can create a partitioned index with subpartitions.
150.Identify the new partitioning method available for global indexes.
a. Range Partitioned b. Range-hash partitioned
c. Hash Partitioned d. List-hash Partitioned
e. None
Answer: C
Explanation: From 9i onwards we can create a hash-partitioned global index on any table.
151.What is the main reason to create a reverse-key index on a column?
a. Column is populated using a sequence.
b. Column contains many different values.
c. Column is mainly used for value range scans.
d. Column implements an inverted list attribute.
Answer: A
Explanation: If a column in your table is populated from a sequence, a reverse-key index gives the best performance, because it stores the reversed column values and thus spreads consecutive inserts across the index.
152.What are the two main benefits of index-organized tables? (Choose two)
a. More concurrency. b. Faster full table scan.
c. Fast primary key based access. d. Less contention in the segment header.
Oracle 11g – FAQs Page 225 of 242
WK: 6 - Day: 5.2
a. FILE b. LOGFILE
c. BADFILE d. DIRECT
Answer: C, D
Explanation: FILE and LOGFILE are valid options for the dbv utility.
186.The Oracle database is experiencing peak transaction volume. In order to reduce I/O bottlenecks created by
large amounts of redo write activity, which of the following steps can be taken?
a. Increase the size of the buffer cache. b. Increase the size of the rollback segments.
c. Increase the size of the log buffer. d. Increase the size of the shared pool.
Answer: C
Explanation: Increase the size of the redo log buffer so that you can minimise the I/O needed to write the transaction
information to the current redo log file.
187.The alert log can contain specific information about which database backup activity?
a. Placing datafiles in begin and end backup mode. b. Placing tablespaces in begin and end backup mode.
c. Changing the database backup mode from open to close. d. Performing an operating system backup of the database files.
Answer: B
Explanation: When we place a tablespace in begin backup or end backup mode, Oracle writes that information to
the alert file. We cannot put a single datafile in begin backup mode directly, and the alert file does not record
operations performed at the OS level.
188.Currently, there is only one copy of a control file for a production Oracle database. How could the DBA
reduce the likelihood that a disk failure would eliminate the only copy of the control file in the Oracle
database?
a. Add another control filename to the init.ora file b. Issue alter database backup to trace and restart the instance
c. Copy the control file and issue the alter database statement d. Shutdown the database, copy the control file to a second location, and add the second name and location to init.ora
Answer: D
Explanation: In order to multiplex the controlfile, shut down the database, copy the controlfile to the new
location, specify the new location in init.ora, and open the DB.
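The multiplexing steps in the explanation above can be sketched in the shell. This is only an illustration: the paths and the dummy controlfile are invented for the demo, and against a real database you would copy the actual controlfile only after a clean shutdown.

```shell
# Simulated controlfile multiplexing (dummy file, hypothetical paths).
mkdir -p /tmp/demo/disk1 /tmp/demo/disk2
echo "controlfile contents" > /tmp/demo/disk1/control01.ctl

# Step 1: database already shut down. Step 2: copy to a second disk.
cp /tmp/demo/disk1/control01.ctl /tmp/demo/disk2/control02.ctl

# Step 3: list both copies in init.ora so the instance maintains both.
cat > /tmp/demo/initORCL.ora <<'EOF'
control_files = (/tmp/demo/disk1/control01.ctl, /tmp/demo/disk2/control02.ctl)
EOF
grep control_files /tmp/demo/initORCL.ora
```

Step 4 would then be to start the instance, which reads both locations from the CONTROL_FILES parameter.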
189.The DBA has just created a database in noarchivelog mode. Which of the following two reasons may cause
him or her to leave the database in that mode for production operation? (Choose two)
a. Medium transaction volume on the database system between backups b. Business requirement for point-in-time database recoveries
c. Low transaction volume on the database system between backups d. Limited available disk space on the machine hosting Oracle
Answer: C, D
Explanation: If you are entering data which is reproducible, or you don't have enough disk space to
generate archive logs, then NALM (noarchivelog mode) is preferable.
190.In archive log multiplexing environments on Oracle8i databases, which of the following parameters is used
for defining the name of the archive log in the additional destinations?
a. LOG_ARCHIVE_DEST_N b. LOG_ARCHIVE_MIN_SUCCEED_DEST
c. LOG_ARCHIVE_DEST d. LOG_ARCHIVE_DUPLEX_DEST
Answer: A
Explanation: In Oracle8 we can specify only two destinations, in Oracle8i we can specify five destinations, whereas
in Oracle9i you can specify ten destinations.
191.To take the backup through Rman utility
a. Database should be open mode b. Database should be archive log mode
c. Database should be mounted d. Database should be shutdown mode
e. Instance only should be started up
Answer: C
Explanation: To take a backup through RMAN, the DB must at least be mounted; it doesn't matter whether the DB is
running in ALM or NALM.
192.When ever we connect target database through Rman utility
a. It needs dedicated server b. It starts two dedicated servers
c. It supports shared severs d. A And B
e. C and A
Answer: D
Explanation: The RMAN utility needs a dedicated server process, and whenever we connect to the target database it
starts two dedicated server processes.
193.The DBA is evaluating the use of RMAN in a backup and recovery strategy for Oracle databases. Which of the
following are reasons not to use RMAN in conjunction with the overall method for backing up and recovering
those databases?
a. When automation of the backup b. When the database consists of large but
processing is required mostly static datafiles
c. When use of a recovery catalog is d. When your backup strategy must encompass
not feasible files other than Oracle database files
Answer: D
Explanation: RMAN backs up datafiles, controlfiles and archive logs only.
194.The DBA is developing scripts in RMAN. In order to store but not process commands in a script in RMAN for
database backup, which of the following command choices are appropriate?
a. execute script { ... } b. create script { ... }
c. run { ... } d. allocate channel { ... }
Answer: B
Explanation: create script {..} will create the script, which you can then refer to whenever you want to execute it.
195.Which of the following best describes multiplexing in backup sets?
a. one archive log in one backup set with file blocks stored contiguously b. multiple controlfiles in one backup set with file blocks for each stored noncontiguously
c. multiple datafiles in one backup set with file blocks for each stored noncontiguously d. one datafile in multiple backup sets with file blocks stored contiguously
Answer: C
Explanation: Multiple datafiles in one backup set describes multiplexing.
196.The DBA is planning backup capacity using RMAN. Which of the following choices best describes
streaming?
a. The ability RMAN has to take multiple backups of multiple databases at the same time b. The process by which RMAN communicates with the underlying OS
c. The method used for writing backup sets to tape d. The performance gain added by parallel processing in RMAN
Answer: C
Explanation: While doing a backup, RMAN uses a specific format that minimises the backup size of the
database. RMAN can write backup sets to disk or tape.
197.The DBA has executed a level 0 backup and a level 1 backup. If the DBA then executes another level 1
backup, what information in the database will be backed up?
a. All changed blocks since the level 2 backup b. All changed blocks since the level 1 backup only
c. No changes will be saved d. All blocks in the database
Answer: B
Explanation: A level 1 backup copies the blocks that have changed since the previous backup at the same level (1)
or at a lower level (0).
198.When is the best time to execute the command "resync database"?
a. After creating a recovery catalog and starting Recovery Manager. b. After closing a recovery catalog and taking the database offline.
c. After closing a recovery catalog and taking your database online. d. After closing a recovery catalog and closing Recovery Manager.
e. After creating a recovery catalog and closing Recovery Manager.
Answer: A
Explanation: After connecting to the recovery catalog, issue the resync catalog command so that any changes made in
the database are stored in the recovery catalog for future backup and recovery.
199.Which statement regarding memory usage by the Recovery Manager is true?
a. Memory is allocated from the database shared pool. b. Memory could never be allocated from the database large pool.
c. Memory could be allocated from the PGA of the backup process. d. Memory allocated to a Recovery Manager buffer is a function of the DB_FILE_MULTIBLOCK_READ_COUNT value.
Answer: B
Explanation: Memory can be allocated from the large pool; for the RMAN utility we cannot use a shared server
process, we must use a dedicated server process. The large pool is only useful when you are using MTS.
Oracle 11g – using Unix Commands Page 233 of 242
WK: 2 - Day: 6.14
dbaoc_err.AAAT03ZZ
dbstart_log.AAAT03ZZ
calerterr.AAAT01ZZ
dbaoc_log.AAAT01ZZ
ls job?.sql List files which start with job followed by any single
character followed by .sql
Example: jobd.sql jobr.sql
ls alert*.???[0-1,9] alert_AAAT01ZZ.1019
alert_AAAD00ZZ.1020
alert_AAAI09ZZ.1021
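The wildcard rules above can be tried in a scratch directory (the file names below echo the examples; the /tmp path is just for illustration):

```shell
# ? matches exactly one character, * matches any run of characters.
mkdir -p /tmp/globdemo && cd /tmp/globdemo
touch jobd.sql jobr.sql jobs_all.sql alert_AAAT01ZZ.1019

ls job?.sql       # matches jobd.sql and jobr.sql, but not jobs_all.sql
ls alert*.???9    # matches alert_AAAT01ZZ.1019
```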
touch - touch filename Create a 0 byte file or to change the timestamp of file
to current time (wild cards as above can be used with
the file names)
mkdir mkdir directoryname Create Directory
mkdir -p directorypath Create directory down many levels in single pass
mkdir -p /home/biju/work/yday/tday
rmdir rmdir directoryname Remove directory
rm rm filename Remove file
rm -rf directoryname Remove directory with files. Important - There is no
way to undelete a file or directory in UNIX. So be
careful in deleting files and directories. It is always
good to have rm -i filename for deletes
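A minimal sketch of the mkdir/touch/rm entries above, using a throwaway /tmp directory:

```shell
# mkdir -p builds several directory levels in one pass.
mkdir -p /tmp/rmdemo/work/yday/tday
# touch creates a 0-byte file (or updates an existing file's timestamp).
touch /tmp/rmdemo/work/yday/tday/a.log
# rm -rf removes a directory tree and its files - remember there is
# no undelete in UNIX.
rm -rf /tmp/rmdemo/work/yday
ls /tmp/rmdemo/work    # the subtree is gone
```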
cp cp filename newfilename Copy a file
cp -r * newloc To copy all files and subdirectories to a new location,
use -r, the recursive flag.
mv mv filename newfilename Rename (Move) a file. Rename filename to
newfilename.
mv filename directoryname Move filename under directoryname with the same file
name.
mv filename Move filename to directoryname as newfilename.
directoryname/newfilename
mv * destination If you use a wildcard in the filename, the destination
must be a directory; mv moves all the matching files
into it.
cp -i file1 file2 Use the -i flag with rm, mv and cp to confirm before
mv -i file1 file2 destroying a file.
rm -i file*
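The cp and mv behaviour above can be seen side by side; note cp leaves the source in place while mv does not (file names below are invented):

```shell
mkdir -p /tmp/cpdemo && cd /tmp/cpdemo
echo "hello" > file1
cp file1 file2    # copy: file1 still exists afterwards
mv file2 file3    # rename: file2 is gone, file3 holds the contents
mkdir dest
mv file3 dest     # move under dest/ keeping the same file name
```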
file file filename To see what kind of file, whether editable. Executable
files are binary and you should not open them.
file d* dbshut: ascii text
dbsnmp: PA-RISC1.1 shared executable dynamically
linked -not stripped
dbstart: ascii text
dbv: PA-RISC1.1 shared executable dynamically
linked -not stripped
demobld: commands text
demodrop: commands text
vi vi filename Edit a text file. vi is a very powerful and "difficult to
understand" editor, but once you start using it, you'll
love it! More vi tricks later!!
cat cat filename See contents of a text file. cat (catenate) will list the
whole file contents. Cat is mostly used to catenate two
or more files to one file using the redirection operator.
cat file1 file2 file3 > files Catenate the contents of file1, file2 and file3 to a single
file called file. If you do not use the redirection, the
result will be shown on the standard output, i.e.,
screen.
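A quick sketch of catenation with cat and the > redirection operator, using throwaway files:

```shell
mkdir -p /tmp/catdemo && cd /tmp/catdemo
echo "line1" > f1
echo "line2" > f2
echo "line3" > f3
cat f1 f2 f3 > files   # "files" now holds all three lines in order
cat files              # without redirection, output goes to the screen
```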
more more filename Show the contents of the file, one page at a time. In
page page filename more/page, use space to see next page and ENTER to
see next line. If you wish to edit the file (using vi),
press v; to quit press q.
tail tail -n filename To see the specified number of lines from the end of
the file.
head head -n filename To see the specified number of lines from the top of
the file.
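head and tail can be tried on a small generated file; the portable spellings are head -n N / tail -n N, which do the same as the bare -N forms shown above:

```shell
seq 1 10 > /tmp/ht.txt    # a ten-line file: 1..10
head -n 2 /tmp/ht.txt     # first two lines: 1 and 2
tail -n 2 /tmp/ht.txt     # last two lines: 9 and 10
```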
pg pg filename To show the contents of the file, page by page. In pg,
you go up and down the pages with + and - and
numbers.
1 First Page of the file
$ Last Page of the file
+5 Skip 5 pages
-6 Go back 6 pages
ENTER Next page
- Previous Page
q Quit
/string Search for string
env env To see the value of all environment variables.
To set an environment variable: In ksh or sh "export VARIABLENAME=value"; note
there are no spaces around the =.
In csh "setenv VARIABLENAME value"
echo $VARIABLENAME See the value of an environment variable
echo echo string To print the string to standard output
echo "Oracle SID is $ORACLE_SID" Will display "Oracle SID is ORCL" if the value of
ORACLE_SID is ORCL.
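Setting and reading an environment variable in sh/ksh, as described above (ORCL is just an example SID):

```shell
export ORACLE_SID=ORCL              # no spaces around the =
echo "Oracle SID is $ORACLE_SID"    # prints: Oracle SID is ORCL
env | grep ORACLE_SID               # confirm the variable is exported
```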
lp lp filename To print a file to system default printer.
chmod chmod permission filename Change the permissions on a file - as explained under
ls -l, the permissions are read, write, execute for owner,
group and others.
You can change permissions by using numbers or the
characters r, w, x. Basically, you arrive at the numbers
using the binary format.
Examples:
rwx = 111 = 7
rw_ = 110 = 6
r__ = 100 = 4
r_x = 101 = 5
chmod +rwx filename Give all permissions to everyone on filename
chmod 777 filename
chmod u+rwx,g+rx,o-rwx filename Read, write, execute for owner, read and execute for
chmod 750 filename group and no permission for others
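A sketch of numeric chmod: 750 works out to rwx for the owner (7), r-x for the group (5) and nothing for others (0), exactly as the binary breakdown above says:

```shell
touch /tmp/perm.sh
chmod 750 /tmp/perm.sh    # owner rwx, group r-x, others ---
ls -l /tmp/perm.sh        # permission string shows -rwxr-x---
```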
chown chown newuser filename Change owner of a file
chgrp chgrp newgroup filename Change group of a file
chown newuser:newgroup filename Change owner and group of file
compress compress filename Compress a file - compressed files have extension .Z.
To compress file you need to have enough space to
hold the temporary file.
uncompress uncompress filename Uncompress a file
df df [options] [moutpoint] Freespace available on the system (Disk Free); without
arguments will list all the mount points.
df -k /ora0 Freespace available on /ora0 in Kilobytes. On HP-UX,
you can use "bdf /ora0".
df -k . If you're not sure of the mount point name, go to the
directory where you want to see the freespace and
issue this command, where "." indicates current
directory.
du du [-s] [directoryname] Disk used; gives the operating system blocks used by
each subdirectory. To convert to KB, for 512-byte OS
blocks, divide the number by 2.
du –s Gives the summary, no listing for subdirectories
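du versus du -s on a small tree: the -s flag collapses the per-subdirectory listing into one summary line:

```shell
mkdir -p /tmp/dudemo/sub
echo "some data" > /tmp/dudemo/sub/f.txt
du /tmp/dudemo      # one line per directory (sub and the top level)
du -s /tmp/dudemo   # a single summary line (block counts vary by system)
```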
find Find files. find is a very useful command; it searches recursively
through the directory tree looking for files that match a
logical expression. It has many options and is very
powerful.
find /ora0/admin -name "*log" Simple use of find - to list all files whose name end in
-print log under /ora0/admin and its subdirectories
find . -name "*log" -print -exec rm To delete files whose name ends in log. If you do not
{} \; use the "-print" flag, the file names will not be listed on
the screen.
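The two find examples above, run against a throwaway tree (names invented for the demo):

```shell
mkdir -p /tmp/finddemo/admin
touch /tmp/finddemo/admin/alert.log /tmp/finddemo/admin/trace.trc
# List files whose name ends in "log" under the tree.
find /tmp/finddemo -name "*log" -print
# Delete them with -exec rm; the {} placeholder is each matched file.
find /tmp/finddemo -name "*log" -exec rm {} \;
```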
grep Global regular expression print To search for an expression in a file or group of files.
grep has two flavors: egrep (extended - expands wild
card characters in the expression) and fgrep (fixed-
string - does not expand wild card characters). This is
a very useful command, especially to use in scripts.
grep oracle /etc/passwd To display the lines containing "oracle" from
/etc/passwd file.
grep -i -l EMP_TAB *.sql To display only the file names (-l option) which
contains the string EMP_TAB, ignore case for the
string (-i option), in all files with SQL extension.
grep -v '^#' /etc/oratab Display only the lines in /etc/oratab where the lines do
not (-v option; negation) start with # character (^ is a
special character indicating beginning of line, similarly
$ is end of line).
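The grep flags above can be exercised against a small sample file shaped like /etc/oratab (contents invented for the demo):

```shell
cat > /tmp/oratab.sample <<'EOF'
# This is a comment
ORCL:/u01/app/oracle:Y
TEST:/u01/app/oracle:N
EOF
grep -v '^#' /tmp/oratab.sample    # -v: drop the comment lines
grep -i orcl /tmp/oratab.sample    # -i: matches ORCL ignoring case
```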
ftp ftp [hostname] File Transfer Protocol - to copy file from one computer
to another
ftp AAAd01hp Invoke ftp, connect to server AAAd01hp.
Connected to AAAd01hp.com. Program prompts for user name, enter the login name
220 AAAd01hp.com FTP server to AAAd01hp.
(Version 1.1.214.2 Mon May 11
12:21:14 GMT 1998) ready.
Name (AAAd01hp:oracle): BIJU
331 Password required for BIJU. Enter password - will not be echoed.
Password:
230 User BIJU logged in.
Remote system type is UNIX.
Using binary mode to transfer files.
ftp> ascii Specifying to use ASCII mode to transfer files. This is
200 Type set to A. used to transfer text files.
ftp> binary Specifying to use binary mode to transfer files. This is
200 Type set to I. used for programs and your export dump files.
ftp> ls To see the files in the remote computer.
200 PORT command successful.
150 Opening ASCII mode data
connection for /usr/bin/ls.
total 8
-rw-rw-rw- 1 b2t dba 43 Sep 22 16:01 afiedt.buf
drwxrwxrwx 2 b2t dba 96 Jul 9 08:47 app
drwxrwxrwx 2 b2t dba 96 Jul 9 08:49 bin
-rw-rw-rw- 1 b2t dba 187 Jul 30 14:44 check.sql
226 Transfer complete.
ftp> get check.sql Transfer the file check.sql from the remote computer to
200 PORT command successful. the local computer. The file will be copied to the
150 Opening BINARY mode data present directory with the same name. You can
connection for check.sql (187 optionally specify a new name and directory location.
bytes).
226 Transfer complete.
187 bytes received in 0.02 seconds
(7.79 Kbytes/s)
ftp> !ls ! runs commands on the local machine.
AAAP02SN a4m08.txt tom3.txt
a4m01.txt
ftp> put a4m01.txt /tmp/test.txt Transfer a file from the local machine to the remote
machine, under the /tmp directory with the name test.txt.
mail mail "[email protected]" < message.log Mail a file to internet/intranet address. mail the
contents of message.log file to [email protected]
mail -s "Messages from Me" Mail the contents of message.log to xyz and abc with a
"[email protected]" "[email protected]" < subject.
message.log
who who [options] To see who is logged in to the computer.
who –T Shows the IP address of each connection
who –r Shows when the computer was last rebooted, run-
level.
ps ps Process status - to list the process id, parent process,
status etc. ps without any arguments will list the current
session's processes.
ps –f Full listing of my processes, with time, terminal id,
parent id, etc.
ps –ef As above for all the processes on the server.
kill kill [-flag] processid To kill a process - process id is obtained from the ps
command or using the v$process table in oracle.
kill 12345 Kill the process with id 12345
kill -9 12345 To force termination of process id 12345
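A sketch of killing a background process: here the pid comes from the shell's $! instead of ps, which is handy in scripts:

```shell
sleep 60 &            # start a background job; its pid lands in $!
BGPID=$!
kill "$BGPID"         # polite termination; kill -9 would force it
wait "$BGPID" 2>/dev/null || true   # reap the terminated job
```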
script script logfilename To record all your commands and output to a file.
Mostly useful if you want to log what you did, and send
it to customer support for them to debug. Start logging
to the log filename. The logging is stopped when you do
"exit".
hostname hostname Displays the name of the computer.
uname uname –a To see the name of the computer along with operating
system version and license info.
date date Displays the current date and time.
date "+%m/%d/%Y" Displays the date in MM/DD/YYYY format
cal cal Displays the calendar of the current month
cal 01 1991 Displays the January 1991 calendar
telnet telnet [hostname] To open a connection to another computer in the
network. Provide the alias name or IP address of the
computer.
& command & Add & to the end of the command to run in background
nohup command & No hangup - do not terminate the background job even
if the shell terminates.
fg fg To bring a background job to the foreground
bg bg To take a job to the background. Before issuing this
command, press ^Z, to suspend the process and then