Oracle Database 11g SQL and PLSQL New Features
Learning objective
After completing this topic, you should be able to recognize ways for using SQL*Plus
commands to display the structure of a table and perform some editing and file
management tasks.
1. Understanding SQL*Plus
Disclaimer
Although certain aspects of the Oracle 11g Database are case and spacing insensitive, a
common coding convention has been used throughout all aspects of this course.
This convention uses lowercase characters for schema, role, user, and constraint names,
and for permissions, synonyms, and table names (with the exception of the DUAL table).
Lowercase characters are also used for column names and user-defined procedure,
function, and variable names shown in code.
Uppercase characters are used for Oracle keywords and functions, for view, table,
schema, and column names shown in text, for column aliases that are not shown in
quotes, for packages, and for data dictionary views.
The spacing convention requires one space after a comma and one space before and
after operators that are not Oracle-specific, such as +, -, /, and <. There should be no
space between an Oracle-specific keyword or operator and an opening bracket, between
a closing bracket and a comma, between the last part of a statement and the closing
semicolon, or before a statement.
String literals in single quotes are an exception to all of the convention rules provided
here. Please use this convention for all interactive parts of this course.
End of Disclaimer
SQL is a command language for communication with the Oracle Server from any tool or
application. Oracle SQL contains many extensions.
When you enter a SQL statement, it is stored in a part of memory called the SQL buffer
and remains there until you enter a new SQL statement. SQL*Plus is an Oracle tool that
recognizes and submits SQL statements to Oracle Database 11g for execution. It
contains its own command language.
SQL
can be used by a range of users, including those with little or no programming experience
is a nonprocedural language
reduces the amount of time required for creating and maintaining systems
is an English-like language
SQL and SQL*Plus differ in their purpose and definition, and in several other ways.
whether a continuation character is used
SQL does not have a continuation character, whereas SQL*Plus uses a dash as a continuation character if the command is longer than one line.
whether commands can be abbreviated
SQL commands cannot be abbreviated, whereas SQL*Plus commands can.
how commands are executed
SQL uses a termination character to execute commands immediately, but SQL*Plus does
not require termination characters to do this.
how data is formatted
SQL uses functions to perform some formatting, whereas SQL*Plus uses commands to
format data.
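For example, a format that SQL achieves inside a statement with a function such as TO_CHAR can be achieved in SQL*Plus with the COLUMN command. This is a minimal sketch, assuming the HR sample schema's EMPLOYEES table:

-- SQL: format the value with a function inside the statement
SELECT last_name, TO_CHAR(salary, '$99,999.00') AS salary
FROM employees;

-- SQL*Plus: format the column with a command outside the statement
COLUMN salary FORMAT $99,999.00
SELECT last_name, salary
FROM employees;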
SQL*Plus is an environment in which you can
execute SQL statements to retrieve, modify, add, and remove data from the database
format, perform calculations on, store, and print query results in the form of reports
create script files to store SQL statements for repeated use in the future
SQL*Plus commands can be divided into seven main categories:
environment
format
file manipulation
execution
edit
interaction
miscellaneous
environment
Environment commands affect the general behavior of SQL statements for the session.
format
Format commands format query results.
file manipulation
File manipulation commands save, load, and run script files.
execution
Execution commands send SQL statements from the SQL buffer to the Oracle server.
edit
Edit commands modify SQL statements in the buffer.
interaction
Interaction commands create and pass variables to SQL statements, print variable values,
and print messages to the screen.
miscellaneous
Other SQL*Plus commands connect to the database, manipulate the SQL*Plus
environment, and display column definitions.
The way in which you invoke SQL*Plus depends on the type of operating system or
Windows environment that you are running.
To log in from a Windows environment, you start SQL*Plus from the command prompt by entering the sqlplus command followed by your username.
Note
To ensure the integrity of your password, you do not enter it at the operating
system prompt. Instead, you enter only your username and you then enter your
password at the password prompt.
You can optionally change the look of the SQL*Plus environment by using the "SQL Plus"
Properties dialog box.
In the SQL Plus window, you right-click the title bar and in the context menu that appears,
you select Properties. You can then use the Colors tab of the "SQL Plus" Properties
dialog box to set the Screen Text and the Screen Background.
In SQL*Plus, you can display the structure of a table using the DESCRIBE command. The
result of the command is a display of column names and data types, as well as an
indication whether a column must contain data.
In this syntax, tablename is the name of any existing table, view, or synonym that is
accessible to the user.
DESC[RIBE] tablename
To describe the DEPARTMENTS table, for example, you use this command. It displays
information about the structure of the table.
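This is a sketch of the command and typical output, assuming the DEPARTMENTS table of the HR sample schema:

DESCRIBE departments

Name                  Null?    Type
--------------------- -------- --------------
DEPARTMENT_ID         NOT NULL NUMBER(4)
DEPARTMENT_NAME       NOT NULL VARCHAR2(30)
MANAGER_ID                     NUMBER(6)
LOCATION_ID                    NUMBER(4)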
In this example
Null? specifies whether a column must contain data; NOT NULL indicates that a column must contain data.
Type displays the data type for a column.
The data types that appear in this example include
NUMBER(p,s)
VARCHAR2(s)
DATE
CHAR(s)
NUMBER(p,s)
The data type NUMBER(p,s) is a numeric value with precision p (the maximum total number of digits) and scale s (the number of digits to the right of the decimal point).
VARCHAR2(s)
The data type VARCHAR2(s) is a variable-length character value of maximum size s.
DATE
The data type DATE is a date and time value between January 1, 4712 B.C. and December 31, 9999 A.D.
CHAR(s)
The data type CHAR(s) is a fixed-length character value of size s.
Question
Which statements accurately describe SQL*Plus?
Options:
1.
It is an ANSI-standard language
2.
It is an Oracle-proprietary environment
3.
Its commands do not allow manipulation of values in the database
4.
Its commands cannot be abbreviated
Answer
SQL*Plus is an Oracle-proprietary environment and its commands do not allow
manipulation of database values.
Option 1 is incorrect. SQL, not SQL*Plus, is an ANSI-standard language for
communicating with the Oracle server to access data.
Option 2 is correct. SQL*Plus is an Oracle-proprietary environment for executing
SQL statements. SQL*Plus has the ability to recognize SQL statements and send
them to the Oracle server.
Option 3 is correct. Although SQL statements can manipulate data and table
definitions in a database, SQL*Plus commands cannot.
Option 4 is incorrect. SQL*Plus keywords can be abbreviated, but SQL commands
cannot. For example, both DESCRIBE and DESC are acceptable commands for
displaying the structure of a table.
Question
Which statements accurately describe SQL?
Options:
1.
2.
3.
4.
Answer
2. SQL*Plus commands
SQL*Plus commands are entered one line at a time and are not stored in the SQL buffer.
Keep in mind that if you press Enter before completing a SQL statement, SQL*Plus prompts you with a line number. You terminate input to the SQL buffer either by entering one of the terminator characters (a semicolon or a slash) or by pressing Enter twice, whereupon the SQL prompt reappears.
This table contains selected SQL*Plus editing commands.
You can enter only one SQL*Plus command for each SQL prompt. SQL*Plus commands
are not stored in the buffer. To continue a SQL*Plus command on the next line, you end
the first line with a hyphen (-). This table contains more SQL*Plus editing commands.
You use the L[IST] command to display the contents of the SQL buffer. The asterisk (*) beside line 2 in the buffer indicates that line 2 is the current line. Any edits that you make apply to the current line.
LIST
1 SELECT last_name
2* FROM employees
You change the current line by entering the number (n) of the line that you want to edit; the new current line is then displayed.
1
1* SELECT last_name
You use the A[PPEND] command to add text to the current line; the newly edited line is then displayed.
A , job_id
1* SELECT last_name, job_id
You then verify the new contents of the buffer by using the LIST command.
LIST
1 SELECT last_name, job_id
2* FROM employees
Note
Many SQL*Plus commands, including LIST and APPEND, can be abbreviated to
their first letter. LIST can be abbreviated to L and APPEND can be abbreviated to
A.
When using the CHANGE command, you
- use L[IST] to display the contents of the buffer.
LIST
1* SELECT * from employees
- use the C[HANGE] command to alter the contents of the current line in the SQL buffer. For example, you can replace the employees table with the departments table; the new current line is then displayed.
c/employees/departments
1* SELECT * from departments
- use the L[IST] command to verify the new contents of the buffer.
LIST
1* SELECT * from departments
You use SQL statements to communicate with the Oracle server and SQL*Plus
commands to control the environment, format query results, and manage files.
Some of the SQL*Plus file commands are
@ filename
ED[IT]
ED[IT] [filename[.ext]]
EXIT
SAV[E] filename[.ext] [REP[LACE] | APP[END]]
The SAV[E] filename[.ext] [REP[LACE] | APP[END]] command saves the current contents of the SQL buffer to a file. You use APPEND to add to an existing file and REPLACE to overwrite an existing file. The default extension is .sql.
GET filename [.ext]
The GET filename [.ext] command writes the contents of a previously saved file to the
SQL buffer. The default extension for the file name is .sql.
STA[RT] filename [.ext]
The STA[RT] filename [.ext] command runs a previously saved command file.
@ filename
The @ filename command runs a previously saved command file the same as START.
ED[IT]
The ED[IT] command invokes the editor and saves the buffer contents to a file named
afiedt.buf.
ED[IT] [filename[.ext]]
The ED[IT] [filename[.ext]] command invokes the editor to edit the contents of a
saved file.
SPO[OL] [filename[.ext] | OFF | OUT]
The SPO[OL] [filename[.ext] | OFF | OUT] command stores query results in a file. OFF closes the spool file. OUT closes the spool file and sends the file results to the printer.
EXIT
The EXIT command quits SQL*Plus.
You use the SAVE command to store the current contents of the buffer in a file. In this
way, you can store frequently used scripts for use in the future.
LIST
1 SELECT last_name, manager_id, department_id
2* FROM employees
SAVE my_query
Created file my_query
You use the START command to run a script in SQL*Plus. Alternatively, you can also use
the symbol "@" to run a script, for example @my_query.
LIST
1 SELECT last_name, manager_id, department_id
2* FROM employees
SAVE my_query
Created file my_query
START my_query
LAST_NAME                 MANAGER_ID DEPARTMENT_ID
------------------------- ---------- -------------
King                                            90
Kochhar                          100            90
...

107 rows selected.
You use the EDIT command to edit an existing script. This will open an editor with the
script file in it.
EDIT my_query
When you have made the changes, you quit the editor to return to the SQL*Plus
command line.
SELECT last_name, manager_id, department_id
FROM employees
/
Note
The / character is a delimiter that signifies the end of the statement. When
encountered in a file, SQL*Plus runs the statement prior to this delimiter. The
delimiter must be the first character of a new line immediately following the
statement.
Most PL/SQL programs perform input and output through SQL statements, to store data in database tables or query those tables. All other PL/SQL I/O is performed through APIs that interact with other programs.
For example, the DBMS_OUTPUT package has procedures such as PUT_LINE. To see the
result outside of PL/SQL requires another program, such as SQL*Plus, to read and
display the data passed to DBMS_OUTPUT.
SQL*Plus does not display DBMS_OUTPUT data unless you first issue this SQL*Plus
command.
SET SERVEROUTPUT ON
Note
SIZE sets the number of bytes of the output that can be buffered within the Oracle
Database server. The default is UNLIMITED. n cannot be less than 2000 or
greater than 1,000,000.
The DBMS_OUTPUT line length limit is increased from 255 bytes to 32,767 bytes.
Resources are not preallocated when SERVEROUTPUT is set. And because there is no
performance penalty, you use UNLIMITED unless you want to conserve physical memory.
SET SERVEROUT[PUT] {ON | OFF} [SIZE {n | UNL[IMITED]}]
[FOR[MAT] {WRA[PPED] | WOR[D_WRAPPED] | TRU[NCATED]}]
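As a minimal sketch, you might enable server output and then run an anonymous block that calls DBMS_OUTPUT.PUT_LINE:

SET SERVEROUTPUT ON SIZE UNLIMITED

BEGIN
  DBMS_OUTPUT.PUT_LINE('Server output is now visible in SQL*Plus.');
END;
/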
The SPOOL command stores query results in a file, or optionally sends the file to a printer.
The SPOOL command has been enhanced. You can now append to, or replace an
existing file, where previously you could only use SPOOL to create and replace a file.
REPLACE is the default.
To spool the output generated by commands in a script without displaying the output on the screen, you use SET TERMOUT OFF. SET TERMOUT OFF does not affect output from commands that run interactively.
SPO[OL] [file_name[.ext] [CRE[ATE] | REP[LACE] |
APP[END]] | OFF | OUT]
You must use quotes around file names containing white space. To create a valid HTML
file using SPOOL APPEND commands, you must use PROMPT or a similar command to
create the HTML page header and footer.
The SPOOL APPEND command does not parse HTML tags. You can set SQLPLUSCOMPAT[IBILITY] to 9.2 or earlier to disable the CREATE, APPEND, and SAVE parameters.
SPO[OL] [file_name[.ext] [CRE[ATE] | REP[LACE] |
APP[END]] | OFF | OUT]
The options that can be used with the SQL*Plus SPOOL command are
file_name[.ext]
CRE[ATE]
REP[LACE]
APP[END]
OFF
OUT
file_name[.ext]
The file_name[.ext] option spools output to the specified file name.
CRE[ATE]
The CRE[ATE] option creates a new file with the name specified.
REP[LACE]
The REP[LACE] option replaces the contents of an existing file. If the file does not exist,
REPLACE creates the file.
APP[END]
The APP[END] option adds the contents of the buffer to the end of the file that you specify.
OFF
The OFF option stops the spooling.
OUT
The OUT option stops spooling and sends the file to your computer's standard or default
printer.
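As a minimal sketch, assuming a results.txt file in the current directory and the HR sample schema, a session might append query output to the existing spool file and then close it:

SPOOL results.txt APPEND

SELECT department_id, department_name
FROM departments;

SPOOL OFF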
When you use the AUTOTRACE command, the EXPLAIN option shows the query execution path by performing an EXPLAIN PLAN.
You use the STATISTICS option to display SQL statement statistics. The formatting of
your AUTOTRACE report may vary depending on the version of the server to which you
are connected and the configuration of the server.
The DBMS_XPLAN package provides an easy way to display the output of the EXPLAIN
PLAN command in several, predefined formats.
SET AUTOT[RACE] {ON | OFF | TRACE[ONLY]} [EXP[LAIN]]
[STAT[ISTICS]]
The AUTOTRACE command displays a report after the successful execution of SQL DML
statements, such as SELECT, INSERT, UPDATE, or DELETE.
The report can now include execution statistics and the query execution path.
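As a minimal sketch, assuming the HR sample schema, you might display only the execution plan and statistics for a query (suppressing the query results) and then turn tracing off:

SET AUTOTRACE TRACEONLY EXPLAIN STATISTICS

SELECT last_name, department_id
FROM employees
WHERE department_id = 90;

SET AUTOTRACE OFF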
Question
Which SQL*Plus command can be used to delete a specified range of lines of
code from the SQL buffer?
Options:
1.
CL BUFF
2.
DEL
3.
DEL m n
4.
DEL n
Answer
The SQL*Plus command that can be used to delete a specified range of lines of
code from the SQL buffer is DEL m n.
Option 1 is incorrect. The CL BUFF command or CLEAR BUFFER is used to
delete all lines from the SQL buffer.
Option 2 is incorrect. The DEL command is used to delete the current line from the
SQL buffer.
Option 3 is correct. The DEL m n command is used to delete a specified range of lines (lines m to n, inclusive) from the SQL buffer.
Option 4 is incorrect. The DEL n command is used to delete line n from the SQL
buffer.
Question
Which SQL*Plus command is used to run a previously saved script?
Options:
1.
EDIT filename [.ext]
2.
GET filename [.ext]
3.
SAVE filename [.ext]
4.
START filename [.ext]
Answer
The SQL*Plus command that is used to run a previously saved script is START
filename [.ext].
Option 1 is incorrect. The EDIT filename [.ext] command is used to invoke
the editor so the contents of a saved file can be edited.
Option 2 is incorrect. The GET filename [.ext] command is used to write the
contents of a previously saved file to the SQL buffer.
Option 3 is incorrect. The SAVE filename [.ext] command is used to save the
contents of the SQL buffer to a file.
Option 4 is correct. The START filename [.ext] command is used to run a
script in SQL*Plus. The @ symbol can also be used to run a script.
CUST_LAST_NAME
--------------
Kinski
Garcia
Olin
Dench
Altman
de Funes
Chapman
Summary
SQL*Plus is an execution environment that you can use to send SQL commands to the
database server, and to edit and save SQL commands. You can execute commands from
the SQL prompt or from a script file.
SQL*Plus commands are entered one line at a time and are not stored in the SQL buffer.
SQL*Plus editing commands are APPEND, CHANGE, CLEAR, DELETE, INPUT, LIST and
RUN. SQL*Plus file commands control the environment, format query results, and manage
files. SQL*Plus file commands are SAVE, GET, START, EDIT, SPOOL, and EXIT. You use
the SQL*Plus SERVEROUTPUT command to display DBMS_OUTPUT data stored by
PL/SQL programs. The SPOOL command stores query results in a file, or sends the file to
a printer or allows you to append to an existing file. The AUTOTRACE command displays a
report after the successful execution of SQL DML statements, such as SELECT, INSERT,
UPDATE, or DELETE.
You can create and save a SQL script which you can then execute from SQL*Plus.
After completing this topic, you should be able to recognize the steps for using SQL
Developer and SQL Worksheet to connect to a database, browse and export database
objects, and use SQL*Plus to enter and execute SQL and PL/SQL statements.
You can connect to any target Oracle database schema using the standard Oracle
database authentication. When connected, you can perform operations on objects in the
database.
Oracle SQL Developer does not require an installer. To install SQL Developer, you need
an unzip tool.
To install SQL Developer, you create a folder as local drive:\SQL Developer. Then you
download the SQL Developer kit from the Oracle SQL Developer Home page at
www.oracle.com. Finally, you unzip the downloaded SQL Developer kit into the folder
created at the start.
To start SQL Developer, you go to local drive:\SQL Developer and double-click
sqldeveloper.exe.
SQL Developer has two main navigation tabs:
Connections
Reports
Connections
By using the Connections tab, you can browse database objects and users to which you
have access.
Reports
By using the Reports tab, you can run predefined reports or create and add your own
reports.
SQL Developer uses the left side for navigation to find and select objects, and the right
side to display information about selected objects.
You can customize many aspects of the appearance and behavior of SQL Developer by
setting preferences.
The menus at the top of the SQL Developer user interface contain standard entries, plus
entries for features specific to the tool:
View
Navigate
Run
Debug
Source
Migration
Tools
View
The View menu contains options that affect what is displayed in the SQL Developer
interface.
Navigate
The Navigate menu contains options for navigating to panes and for execution of
subprograms.
Run
The Run menu contains the Run File and Execution Profile options that are relevant
when a function or procedure is selected.
Debug
The Debug menu contains options that are relevant when a function or procedure is
selected.
Source
The Source menu contains options for use when editing functions and procedures.
Migration
The Migration menu enables you to migrate from another database, such as Microsoft
SQL Server and Microsoft Access, to an Oracle database.
Tools
The Tools menu invokes SQL Developer tools such as SQL*Plus, Preferences, and SQL
Worksheet.
A connection is a SQL Developer object that specifies the necessary information for
connecting to a specific database as a specific user of that database. To use SQL
Developer, you must have at least one database connection, which may be existing,
created, or read.
You can create and test connections for multiple databases and for multiple schemas.
By default, the tnsnames.ora file is located in the $ORACLE_HOME/network/admin
directory. But it can also be in the directory specified by the TNS_ADMIN environment
variable or registry value.
When you start SQL Developer and display the New/Select Database Connection
window, SQL Developer automatically reads any connections defined in the
tnsnames.ora file on your system.
Note
On Windows systems, if the tnsnames.ora file exists, but its connections are not
being used by SQL Developer, you define TNS_ADMIN as a system environment
variable.
You can export connections to an XML file so that you can reuse them later.
You can create additional connections as different users to the same database or to
connect to the different databases.
To create a database connection, you start SQL Developer. Then, on the Connections
tabbed page, you right-click Connections and select New Connection.
You enter the connection name, username, password, host name, port, and system
identifier (SID) or Service name for the database that you want to connect to.
On the Oracle tabbed page, you enter the host name, port, and SID or Service name.
If you select the Save Password check box, the password is saved to an XML file.
Therefore, the next time you access the SQL Developer connection, you will not be
prompted for the password.
The other tabbed pages enable you to set up connections to non-Oracle databases.
You click Test to make sure that the connection has been set correctly.
And you click Connect.
The new database connection appears in the navigation pane.
You can now use the database navigator to browse through many objects in a database
schema including Tables, Views, Indexes, Packages, Procedures, Triggers, and Types.
You can see the definition of the objects broken into tabs of information that is pulled out
of the data dictionary.
If you select a table in the Connections navigator, the details about columns, constraints,
grants, statistics, triggers, and more, are displayed on an easy-to-read tabbed page.
For example, to see the definition of the CUSTOMERS table, you expand the Connections
node in the Connections navigator. Then you expand Tables and click CUSTOMERS.
Using the Data tab, you can enter new rows, update data, and commit these changes to
the database.
You can export DDL and data using the Export utility. For a selected database
connection, you can export some or all objects of one or more types of database objects
to a file containing SQL data definition language (DDL) statements to create these
objects.
To specify options for the export operation, you select Tools - Export DDL (and Data).
You select the objects to export on the Export tabbed page of the Export page.
You specify the objects or types of objects to export on the Filter Objects tabbed page.
You type the name of the file that you want the data saved to, in the File field, or click
Browse to select a directory for the file.
Then you click Apply to proceed with the export.
The export procedure starts.
After the export is completed, you can examine the contents of the exported file.
In this example, the data for views is exported to the Exported_Data.sql file.
You can export table data by using the submenu when you right-click the Tables object in
the navigator.
The export utility offers you wide flexibility in the different formats that you can export to.
You can import data from an Excel spreadsheet using the Import Data submenu.
You can also export data using the Export Data submenu.
You can export data using the Export DDL submenu.
You can also use the Migration tool to export and import data from other data sources.
Question
Which statements accurately describe SQL Developer?
Options:
1.
It allows you to browse and manage database objects
2.
It does not require an installer
3.
4.
It is an extension to SQL
Answer
SQL Developer allows you to browse and manage database objects and does not
require an installer.
Option 1 is correct. Oracle SQL Developer is a free graphical tool designed to
improve productivity and simplify the development of everyday database tasks.
These tasks include browsing and managing database objects.
Option 2 is correct. Oracle SQL Developer does not require an installer. To install
SQL Developer, you need an unzip tool.
Option 3 is incorrect. SQL Developer is a graphical tool that can be used to
connect to an Oracle database and complete everyday database tasks.
Option 4 is incorrect. Although SQL Developer allows you to execute SQL
statements and scripts, it is not an extension to SQL.
Question
In Oracle Database 11g, where can the tnsnames.ora file be located?
Options:
1.
The $ORACLE_HOME/bin directory
2.
The $ORACLE_HOME/network/admin directory
3.
The $ORACLE_HOME/tns_admin directory
4.
The directory specified by the TNS_ADMIN environment variable or registry value
Answer
In Oracle Database 11g, the tnsnames.ora file can be located in the
$ORACLE_HOME/network/admin directory or in the directory specified by the
TNS_ADMIN environment variable.
Option 1 is incorrect. The tnsnames.ora file is not located in the
$ORACLE_HOME/bin directory. The bin directory contains a number of required
files and tools, such as SQL*Plus.
Option 2 is correct. By default, the tnsnames.ora file is located in the
$ORACLE_HOME/network/admin directory.
Option 3 is incorrect. The directory $ORACLE_HOME/tns_admin is not created by
default when Oracle Database 11g is installed, and the tnsnames.ora file is not
located within it.
Option 4 is correct. The tnsnames.ora file can be stored in the directory
specified by the TNS_ADMIN environment variable or registry value.
Within SQL Worksheet, you can specify actions that can be processed by the database connection associated with the worksheet, such as creating a table and inserting data. The SQL Worksheet toolbar provides these icons, which are described below:
Execute Statement
Run Script
Commit
Rollback
Cancel
SQL History
Execute Explain Plan
Autotrace
Clear
Execute Statement
The Execute Statement icon enables you to execute the statement at the cursor in the
Enter SQL Statement box. You can use bind variables in the SQL statements but not
substitution variables.
Run Script
The Run Script icon enables you to execute all statements in the Enter SQL Statement
box using Script Runner. You can use substitution variables in the SQL statements but not
bind variables.
Commit
The Commit icon enables you to write any changes to the database and end the
transaction.
Rollback
The Rollback icon enables you to discard any changes to the database, without writing
them to the database, and end the transaction.
Cancel
The Cancel icon enables you to stop the execution of any statements currently being
executed.
SQL History
The SQL History icon enables you to display a dialog box with information about the SQL
statements that you have executed.
Execute Explain Plan
The Execute Explain Plan icon enables you to generate the execution plan, which you
can see by clicking the Explain tab.
Autotrace
The Autotrace icon enables you to generate trace information for the statement.
Clear
The Clear icon enables you to erase the statement or statements in the Enter SQL
Statement box.
In SQL Worksheet, you can use the Enter SQL Statement box to enter a single SQL
statement or multiple SQL statements. For a single statement, the semicolon at the end is
optional.
When you enter the statement, the SQL keywords are automatically highlighted. To
execute a SQL statement, you ensure that your cursor is within the statement and click
the Execute Statement icon. Alternatively, you can press the F9 key.
In the example, because there are multiple SQL statements, the first statement is
terminated with a semicolon. The cursor is in the first statement and so when the
statement is executed, results corresponding to the first statement are displayed in the
Results box.
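A minimal sketch of such an entry, assuming the HR sample schema, might look like this, with the cursor placed in the first statement before clicking Execute Statement:

SELECT employee_id, last_name
FROM employees
WHERE department_id = 90;

SELECT department_id, department_name
FROM departments;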
To execute multiple SQL statements and see the results, you click the Run Script icon.
Alternatively, you can press the F5 key.
The results corresponding to all the statements are displayed in the Script Output box.
You can save your SQL statements from the SQL Worksheet into a text file. To save the contents of the Enter SQL Statement text box, you click the Save icon or select File - Save.
In the Windows Save dialog box, you enter a file name and the location where you want
the file saved. Then you click Save.
After you save the contents to a file, the Enter SQL Statement text box displays a tabbed
page of your file contents. You can have multiple files open at once. Each file displays as
a tabbed page.
You can select a default path to look for scripts and to save scripts.
You select Tools - Preferences and on the Preferences page you expand Database and
then select Worksheet Parameters. You then enter a path in the "Select default path to
look for scripts" field.
To run a saved SQL script, you use the @ command in the Enter SQL Statement window, followed by the location and name of the file that you want to run.
And then you click the Run Script icon.
The results from running the file are displayed on the Script Output tabbed page.
You can also save the script output by clicking the Save icon on the Script Output tabbed
page. The Windows File Save dialog box appears and you can specify a name and
location for your file.
Note
You can also right-click in the Enter SQL Statement area and select Open File
from the shortcut menu.
Question
Identify the true statements regarding SQL Worksheet.
Options:
1.
SQL Worksheet can be used to enter and execute SQL and SQL*Plus commands
only
2.
Unsupported SQL*Plus commands are passed to the database for processing
3.
When you connect to a database, a SQL Worksheet window for that connection is
automatically opened
4.
You can specify any actions that can be processed by the database connection
associated with the worksheet
Answer
When you connect to a database, a SQL Worksheet window for that connection is
automatically opened. Within SQL Worksheet, you can specify any actions that
can be processed by the database connection associated with the worksheet.
Option 1 is incorrect. SQL Worksheet can be used to enter and execute SQL,
PL/SQL, and some SQL*Plus commands.
Option 2 is incorrect. SQL Worksheet supports a number of SQL*Plus commands,
but any unsupported commands are ignored and are not passed to the database.
Option 3 is correct. When you connect to a database, a SQL Worksheet window
for that connection is automatically opened. This is customizable in the Worksheet
Parameters.
Option 4 is correct. In SQL Worksheet, you can specify any actions that can be
processed by the database connection associated with the worksheet. These
actions include creating a table, inserting data, and saving and running SQL
scripts.
Question
Which statements are true regarding entering and executing SQL statements from
SQL Worksheet?
Options:
1.
A semicolon is required at the end of a single statement
2.
Shortcut keys are available for executing statements and running scripts
3.
SQL keywords are automatically highlighted
4.
You cannot enter multiple statements in the Enter SQL Statement box
Answer
When entering and executing SQL statements from SQL Worksheet, shortcut keys
are available for executing statements and scripts and SQL keywords are
automatically highlighted.
Option 1 is incorrect. For single statements in SQL Worksheet, the semicolon at
the end is optional.
Option 2 is correct. SQL Worksheet provides shortcut keys you can use. For
example, you can execute a SQL statement using the F9 key, and run scripts
using the F5 key.
Option 3 is correct. When you enter a statement in SQL Worksheet, the SQL
keywords are automatically highlighted in the window. This increases the
readability of your code.
Option 4 is incorrect. In SQL Worksheet, you can use the Enter SQL Statement
box to enter a single SQL statement or multiple SQL statements.
The Code Editor toolbar includes icons such as Find, Run, and Compile. When you create a new procedure, SQL Developer opens the Code Editor with a skeleton similar to this one for PROCEDURE1:

CREATE OR REPLACE PROCEDURE procedure1
( param1 IN VARCHAR2
) AS
BEGIN
  NULL;
END PROCEDURE1;
Note
To display the line numbers in the Code Editor, you select Tools - Preferences
followed by Code Editor - Line Gutter and select the Show Line Numbers
option.
To compile your code, you click the Compile icon in the Code Editor window.
If your code compiles successfully, you see the message "Compiled" on the Messages Log window. If there are errors or warnings, you see "Compiled (with errors)" on the
Messages tabbed page.
The details for the errors are on the Compiler tabbed page. If you have any errors, you fix
the errors and then recompile.
To execute your PL/SQL code, you click the Run icon.
The Run PL/SQL dialog box appears. Within it, the call to your named PL/SQL block is
wrapped in an anonymous block of code.
DECLARE
  PARAM1 VARCHAR2(200);
BEGIN
  PARAM1 := NULL;

  PROCEDURE1(
    PARAM1 => PARAM1
  );
END;
You enter any variable values, which are passed as parameter values to your stored block of code, and then click OK.
The results from running your code are displayed on the Running-Log tabbed page.
To use the PL/SQL debugger in SQL Developer, you must compile the code in debug
mode.
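As a minimal sketch, you could compile the PROCEDURE1 example above in debug mode by running this statement from SQL Worksheet or SQL*Plus (in SQL Developer itself, a Compile for Debug action, where available, achieves the same result):

ALTER PROCEDURE procedure1 COMPILE DEBUG;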
Summary
Oracle SQL Developer is a free graphical tool designed to improve productivity and
simplify the development of database tasks. To use SQL Developer you need to set up a
connection to at least one database.
You can use SQL Worksheet to enter and execute SQL, PL/SQL, and selected SQL*Plus
statements.
You can also create, execute, and debug procedures, functions, packages, and triggers
with SQL Developer using PL/SQL. You use the Code Editor to run, compile, and debug
PL/SQL code.
After completing this topic, you should be able to identify the steps for using SQL
Developer to create reports and migrate to an Oracle Database 11g database.
SQL Developer provides direct links to several search engines, documentation sets, and discussion forums, including:
Ask Tom
Metalink
Docs
10.2 docs
9.2 docs
search.oracle.com
OTN Forums
In this example, the OTN Forums are searched using the search term SQL Developer.
The search results display in your browser window.
You can customize many aspects of the SQL Developer interface and environment by
modifying the SQL Developer preferences according to your preferences and needs.
To modify SQL Developer preferences, you select Tools - Preferences.
The Preferences dialog box displays.
Most preferences are self-explanatory. Some preferences involve performance or system resource trade-offs (for example, enabling a feature that adds execution time), and other preferences involve only personal aesthetic taste. The preferences are grouped into categories.
In this example the Code Editor preferences are displayed.
Note
You can toggle your line numbers on and off using the Show Line Numbers
check box.
SQL Developer provides many reports about the database and its objects. These reports
are grouped into categories:
Object reports
Charts
Jobs reports
PL/SQL reports
Security reports
Streams reports
Table reports
XML reports
To display a report, you select the Reports tab, and then select the report type.
You can also create your own user-defined reports. For example, to display a chart, you
expand the Charts node and then select the particular chart you want to display. The
example shows an Object Distribution chart report.
You can generate reports about your PL/SQL code. You can find out information about
arguments, search source code for object name or text strings, and find out the length of
your PL/SQL routines.
To run the Search Source Code report, you click the Search Source Code node, specify
a database connection in the Select Connection dialog box, and click OK. This report
enables you to find either text strings or object names in your PL/SQL code.
In the Enter Bind Values dialog box, you enter a value (for example, customer) in the Value text field and click Apply.
The search result shows all occurrences of the text string "customer" in the PL/SQL code.
The results show the owner, the PL/SQL object name, code type, line number, and the
text on that line number.
User-defined reports are any reports that are created by SQL Developer users.
To create a user-defined report, you right-click the User Defined Reports node, and
select Add Report.
This displays the Create Report dialog box. You specify the report name and an optional
description to indicate that the report contains sales orders organized by sales
representatives.
You enter the complete SQL statement for retrieving the information to be displayed in the
user-defined report in the SQL box. You can also include an optional tool tip to be
displayed when the cursor stays briefly over the report name in the Reports navigator
display.
To retrieve the information you click Apply.
SELECT order_mode, order_total, sales_rep_id
FROM orders
WHERE sales_rep_id IS NOT NULL
GROUP BY order_mode, order_total, sales_rep_id
ORDER BY sales_rep_id
Your new report OrdersByRep appears under the node User Defined Reports and its
contents are displayed in the right-hand pane.
You can organize user-defined reports in folders, and you can create a hierarchy of
folders and subfolders.
To create a folder for user-defined reports, you right-click the User Defined Reports
node or any folder name under that node and select Add Folder.
This displays the Create Folder dialog box that enables you to name your new folder.
Note
Information about user-defined reports, including any folders for these reports, is
stored in a file named UserReports.xml under the directory for user-specific
information.
Question
Which statements accurately describe report creation in SQL Developer?
Options:
1.
Folders for user-defined reports must be created before the reports themselves
2.
SQL Developer reports are grouped into categories, such as object reports and
charts
3.
You can customize existing report templates, but you cannot create user-defined
reports
4.
You can generate reports about your PL/SQL code
Answer
In SQL Developer reports are grouped into categories, such as object reports and
charts. And you can generate reports about PL/SQL code.
Option 1 is incorrect. You can organize user-defined reports into folders, and you
can create a hierarchy of folders and subfolders. However, these folders do not
have to be created before the reports are created.
Option 2 is correct. SQL Developer provides many reports about the database and
its objects. Reports are grouped into categories, such as object reports, charts,
PL/SQL reports, and XML reports.
Option 3 is incorrect. Although SQL Developer provides many reports about the
database and its objects, you can also create your own user-defined reports.
Option 4 is correct. You can generate reports about your PL/SQL code. You can
find out information about arguments, search source code for object name or text
strings, and find out the length of your PL/SQL routines.
2. Using SQL*Plus
SQL Worksheet supports most SQL*Plus statements. SQL*Plus statements must be
interpreted by SQL Worksheet before being passed to the database. Any SQL*Plus
statements that are not supported by the SQL Worksheet are ignored and not passed to
the database.
To display the SQL*Plus command-line interface, you first close all SQL worksheets and
then select Tools - SQL*Plus.
You must use the Oracle SID in your SQL Developer connection in order for the
SQL*Plus menu item to be enabled.
This opens the SQL*Plus command-line window on top of SQL Developer.
To use this feature, the system on which you are using SQL Developer must have an
Oracle home directory or folder, with a SQL*Plus executable under that location. If the
location of the SQL*Plus executable is not already stored in your SQL Developer
preferences, you are asked to specify its location.
To do this you select Tools - Preferences and then select Database. You enter the path
to the sqlplus.exe file in the SQL*Plus executable text box and click OK.
SQL Developer does not support all SQL*Plus statements.
For example, the SQL*Plus statements append, archive, and attribute are not
supported by SQL Developer.
Supplement
Selecting the link title opens the resource in a new browser window.
Launch window
View the full list of SQL*Plus statements that are and are not supported by SQL
Developer here.
Question
What must be considered when invoking SQL*Plus from SQL Developer?
Options:
1.
2.
3.
You must close all SQL worksheets to enable the SQL*Plus menu option
4.
You must use the Oracle SID in your SQL Developer connection
Answer
When invoking SQL*Plus from SQL Developer, you must close all SQL worksheets
to enable the SQL*Plus menu option. And you must use the Oracle SID in your
SQL Developer connection.
Option 1 is incorrect. To launch SQL*Plus from SQL Developer, the system on
which you are using SQL Developer must have an Oracle home directory or
folder, with a SQL*Plus executable under that location.
Option 2 is incorrect. If the location of the SQL*Plus executable, sqlplus.exe, is
not already stored in your SQL Developer preferences, you are asked to specify
its location the first time you invoke SQL*Plus.
Option 3 is correct. To enable the SQL*Plus menu option, and to invoke SQL*Plus
from SQL Developer, you must close all open SQL worksheets.
Option 4 is correct. You must use the Oracle SID in your SQL Developer
connection in order for the SQL*Plus menu item to be enabled.
The SQL Developer migration capability
enables you to migrate an entire third-party database, including triggers and stored procedures
enables you to see and compare the captured model and the converted model, and to customize each if you want, so that you can control how much automation there is in the migration process
Question
Identify the valid statements regarding the use of SQL Developer for migration.
Options:
1.
A representation of the structure of the source database is stored in a migration repository
2.
At migration time, you migrate the data and then generate the Oracle schema
objects
3.
4.
SQL Developer simplifies the process of migrating a third-party database to an Oracle database
Answer
When using SQL Developer for migration a representation of the structure of the
source database is stored in a migration repository and the process of migrating a
third-party database is simplified.
Option 1 is correct. SQL Developer captures information from the source database
and displays it in the captured model, which is a representation of the structure of
the source database. This representation is stored in a migration repository.
Option 2 is incorrect. When you are ready to migrate, you generate the Oracle
schema objects, and then migrate the data. SQL Developer contains logic to
extract data from the data dictionary of the source database, create the captured
model, and convert it.
Option 3 is incorrect. You can migrate from a database, such as Microsoft SQL Server and Microsoft Access, to an Oracle database. SQL Developer enables you to migrate an entire third-party database, including triggers and stored procedures.
Option 4 is correct. SQL Developer enables you to simplify the process of migrating a third-party database to an Oracle database. It also reduces the effort and risks involved in a migration project.
To create a new database connection named mydbconnection, you enter these connection details:
Username: oe
Password: oe
Hostname: localhost.easynomadtravel.com
Port: 1521
SID: orcl
Step 3: Next you need to test the new connection. If the Status is Success, you can
connect to the database using this new connection.
First you click the Test button in the New/Select Database Connection window.
If the status is Success, you click the Connect button.
The new connection is created.
Step 4: To browse the CUSTOMERS table and display its data, you expand the
mydbconnection node by clicking the + sign next to it. Then you expand the Tables
node by clicking the + sign next to it. And you click CUSTOMERS to display the structure
of the CUSTOMERS table.
The Columns tab displays the columns in the table.
You click the Data tab to display the customers' data.
Step 5: You use the SQL Worksheet to select the information for all line item orders with
an ordered quantity greater than 200.
To display the SQL worksheet, you select Tools - SQL Worksheet to display the Select
Connection window.
Then you select the new mydbconnection from the Connection drop-down list if not
already selected and click OK.
The mydbconnection Enter SQL statement window displays. You enter this statement in
the Enter SQL Statement box.
SELECT order_id, line_item_id, product_id, unit_price, quantity
FROM order_items
WHERE quantity > 200
You click the Execute Statement icon or press F9 to display the results of the SQL
statement in the Results window.
The results are displayed in the form of a table with order_id, line_item_id,
product_id, unit_price, and quantity columns. Three items with a quantity
exceeding 200 have been found.
Step 6: You set your script pathing preference.
First you select Tools - Preferences.
Then you expand the Database node and select Worksheet Parameters.
You enter C:\Oracle Data\Scripts in the "Select default path to look for scripts" field
and click OK.
Step 7: You need to save the SQL statement to a script file.
You click the Save icon on the toolbar.
In the Save dialog box you name the file ItemsGT200.sql, save it in your C:\Oracle
Data\Scripts folder, and click Save.
The SQL statement has been saved.
Step 8: You open and run the file ItemsGT200.sql from your
C:\Oracle Data\Scripts folder.
You click the Open icon.
You select ItemsGT200.sql and click Open.
You click the Execute Statement icon or press F9 to display the results.
The results of your query display.
Step 9: You need to create a report named CUSTBYACCTMGR and save it to a folder
named CUSTOMERREPORTS.
On the Reports tabbed page, you right-click User Defined Reports and select Add
Folder.
In the Create Folder dialog box, you enter the name CUSTOMERREPORTS, add a
description, and click Apply.
The subfolder CUSTOMERREPORTS is created in the folder User Defined Reports.
You right-click the CUSTOMERREPORTS node and select Add Report.
The Create Report dialog box displays. You name the report CUSTBYACCTMGR, enter this
query in the SQL text field, and click Apply.
SELECT COUNT(*), account_mgr_id
FROM customers
GROUP BY account_mgr_id
The report CUSTBYACCTMGR is created in the folder CUSTOMERREPORTS.
You click the report to view its contents.
Finally, you exit SQL Developer.
Summary
SQL Developer enables you to use search engines such as Ask Tom, Google and OTN
Forums. You can customize the SQL Developer interface according to your preferences.
SQL Developer provides categories of reports about the database and its objects and
enables you to create user-defined reports which you can organize in folders you've
added.
SQL Worksheet supports most SQL*Plus statements and you can open the SQL*Plus
command-line window from within SQL Developer.
You can migrate from a database such as Microsoft SQL Server and Microsoft Access to
an Oracle database. SQL Developer enables you to simplify the migration process.
SQL Developer enables you to create and connect to a new database connection,
browse database objects such as tables, use the SQL Worksheet to execute and save
SQL scripts, open and run .sql files from folders and create your own reports.
Abstract
This article describes the Data Change Notification and lock enhancements included with Oracle
Database 11g, and discusses their scope and uses in database management.
Data Change Notification
Oracle Database 11g extends Data Change Notification with two key enhancements:
result-set-change notifications, which are published when DML or DDL changes alter the result set associated with a registered query
new static data dictionary views, which allow you to see which queries are registered for result-set-change notifications
In this example, the application has cached the result set of a query on OE.ORDERS.
The Data Change Notification process consists of several steps.
1. A registration for the query on OE.ORDERS is created using the CQN PL/SQL interface. In addition, a stored PL/SQL procedure to process notifications is created and supplied as the server-side notification handler (a sketch of such a registration appears after these steps).
2. The database populates the registration information in the data dictionary.
3. A user modifies one of the registered objects with DML statements and commits the transaction. For
example, a user updates a row in the OE.ORDERS table on the back-end database. The data for
OE.ORDERS cached in the middle tier is now stale.
4. Oracle Database adds a message that describes the change to an internal queue.
5. A JOBQ background process is notified of a new change notification message.
6. The JOBQ process executes the stored procedure specified by the client application. In this example,
JOBQ passes the data to a server-side PL/SQL procedure. The implementation of the PL/SQL callback
procedure determines how the notification is handled.
7. Inside the server-side PL/SQL procedure, you can implement logic to notify the middle tier client
application of the changes to the registered objects. For example, it notifies the application of the ROWID
of the changed row in OE.ORDERS.
8. The middle-tier application queries the back-end database to retrieve the changed data.
9. The client application updates the cache with the new data.
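For illustration, the registration described in step 1 might be created with the DBMS_CQ_NOTIFICATION
package along the lines of the sketch below. This is a minimal, hedged example only: the handler name
chnf_callback, the query column list, and the registration options shown are assumptions for
illustration, and the server-side handler procedure must already exist.
DECLARE
  v_reg_info  CQ_NOTIFICATION$_REG_INFO;
  v_regid     NUMBER;
  v_cursor    SYS_REFCURSOR;
BEGIN
  -- QOS_QUERY requests result-set-change (query result) notifications;
  -- 'chnf_callback' is an assumed server-side notification handler
  v_reg_info := CQ_NOTIFICATION$_REG_INFO('chnf_callback',
                  DBMS_CQ_NOTIFICATION.QOS_QUERY, 0, 0, 0);
  v_regid := DBMS_CQ_NOTIFICATION.NEW_REG_START(v_reg_info);
  -- any query opened between NEW_REG_START and REG_END is registered
  OPEN v_cursor FOR SELECT order_id, order_total FROM oe.orders;
  CLOSE v_cursor;
  DBMS_CQ_NOTIFICATION.REG_END;
END;
/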
When a registered query runs, Continuous Query Notification (CQN) generates either object-change
notifications or result-set-change notifications.
The following two examples can be used to illustrate the difference between object-change notifications
and result-set-change notifications.
Example 1
SELECT order_id, order_total
FROM orders
WHERE sales_rep_id = 158;
In this example, CQN generates an object-change notification for this query for any DML or DDL change
to the ORDERS table, even if the changed row or rows did not satisfy the query predicate (for example, if
sales_rep_id = 160).
CQN generates a result-set-change notification only if the query result set itself changed and both of
these conditions are true:
the changed row or rows satisfy the query predicate (sales_rep_id = 158) either before or
after the change
the change affected at least one of the columns in the SELECT list (order_id or order_total)
as the result of either an UPDATE or an INSERT
Example 2
SELECT customer_id, cust_first_name, cust_last_name
FROM customers
WHERE credit_limit = 1400;
In this example, CQN generates an object-change notification for this query for any DML or DDL change
to the CUSTOMERS table, even if the changed row or rows did not satisfy the query predicate (for
example, if credit_limit = 1200).
CQN generates a result-set-change notification for this query only if the query result set itself changed,
which means that both of these conditions are true:
the changed row or rows satisfy the query predicate (credit_limit = 1400) either before
or after the change
the change affected at least one of the columns in the SELECT list (customer_id,
cust_first_name, or cust_last_name) as the result of either an UPDATE or an INSERT statement
There are several dictionary views that you can query to see the status of CQN. For example, to view
top-level information about all registrations, you can use the DBA_CHANGE_NOTIFICATION_REGS and
USER_CHANGE_NOTIFICATION_REGS dictionary views.
Two dictionary views are added in Oracle Database 11g to support result-set-change notifications. These
are
DBA_CQ_NOTIFICATION_QUERIES
USER_CQ_NOTIFICATION_QUERIES
These views contain the query ID, query text, and registration ID values.
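For example, you could list your own registered queries with a query such as the one below. This is a
simple sketch; SELECT * is used so that no assumptions are made about the exact column names of the view.
-- lists the registration ID, query ID, and query text for your registrations
SELECT *
FROM user_cq_notification_queries;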
Utilizing lock enhancements
In this example, the ORDERS table is locked in session 1. In session 2, a user tries to put a lock on the
same ORDERS table, but specifies to wait 60 seconds. This means that if the table is already locked by
another user, session 2 will wait 60 seconds for the lock. If the lock in the other session is not released
after 60 seconds, an appropriate error message is returned to the session 2 user.
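As a sketch of the scenario just described (the table name and time-out value are illustrative), the two
sessions might issue statements such as these.
-- Session 1: acquire the lock
LOCK TABLE orders IN EXCLUSIVE MODE;

-- Session 2: wait up to 60 seconds for the lock;
-- if it is still unavailable after that, an error is returned
LOCK TABLE orders IN EXCLUSIVE MODE WAIT 60;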
Summary
Data Change Notification and lock enhancements are two of the new features that have been added in
Oracle Database 11g.
Continuous Query Notification (CQN) enables an application to register queries with the database for
either object change notification or result-set-change notification. Result-set-change notifications result
from changes to the result set associated with the queries. You can use two new dictionary views,
DBA_CQ_NOTIFICATION_QUERIES and USER_CQ_NOTIFICATION_QUERIES, to see which queries are
registered for result-set-change notifications.
You can use the new WAIT syntax with the LOCK TABLE statement to specify the maximum number of
seconds the statement should wait to obtain a DML lock on the table. If the table is still locked by another
user when the wait time expires, you receive an appropriate error message. You can use the new
DDL_LOCK_TIMEOUT parameter to specify a DDL lock timeout. You can set DDL_LOCK_TIMEOUT at the
system level, or you can set it at the session level, using an ALTER SESSION statement.
After completing this topic, you should be able to use SQL and PL/SQL language
functionality enhancements to connect to a database and create a report, examine
dependency at the element level, and modify an exception handler.
Exercise overview
In this exercise, you're required to identify the correct code that uses various Oracle
Database 11g language functionality enhancements to retrieve table information, examine
dependencies, and handle errors.
This involves the following tasks:
examining dependencies
handling exceptions
Suppose you're a database administrator for a large computer retailer. You've just
upgraded to Oracle Database 11g, and want to use some of the new language
functionality enhancements to retrieve information from a table, examine dependencies,
and handle errors.
Step 1 of 2
First, you want to return the number of occurrences of the string "RAM" from the
PRODUCT_DESCRIPTION column and the associated product name.
Which statement should you use?
Options:
1.
2.
3.
Result
To return the number of occurrences of the string as specified, you use this
statement:
SELECT REGEXP_COUNT (product_description, 'ram', 1, 'i') Count,
product_name
FROM product_information
WHERE REGEXP_COUNT (product_description, 'ram', 1, 'i') > 0;
Option 1 is incorrect. Although this statement will return the number of
occurrences of the string "RAM" in the PRODUCT_DESCRIPTION column, it does
not return the associated information from the PRODUCT_NAME column.
Option 2 is correct. The result of this statement will have two columns, COUNT and
PRODUCT_NAME. These will show the number of occurrences of the string "RAM"
from the PRODUCT_DESCRIPTION column and the product names associated
with the product descriptions that contained the string.
Option 3 is incorrect. In order for this statement to return the required results, the
REGEXP_COUNT function should be searching for a value greater than zero
instead of a value equal to zero.
Step 2 of 2
Next, you want to return the position of the occurrences of the second
subexpression in the string "(SD)(RAM)" in the PRODUCT_DESCRIPTION column
of the PRODUCT_INFORMATION table. You also want to return the associated
information from the PRODUCT_NAME column.
Which query should you use?
Options:
1.
2.
3.
Result
To return the position of the occurrences of the second subexpression as
specified, you should use this query:
SELECT REGEXP_INSTR (product_name, '(SD)(RAM)', 1, 1, 0, 'i', 2) POSITION,
product_name
FROM product_information
WHERE REGEXP_INSTR (product_name, '(SD)(RAM)', 1, 1, 0, 'i', 2) > 0;
Option 1 is incorrect. Although this query will complete without error, it will not
return the required results because it does not contain the specified string within
the REGEXP_INSTR function.
Option 2 is incorrect. Although this query will return the position of the occurrences
of the second subexpression in the string "(SD)(RAM)" in the
PRODUCT_DESCRIPTION column, it does not return the associated information
from the PRODUCT_NAME column.
Option 3 is correct. This query will return the required results in two columns. The
POSITION column returns the position of the second subexpression in the
specified string, and the second column PRODUCT_NAME returns the associated
product names.
Step 1 of 3
First, you want to create a view based on the CUSTOMERS table to store the
customer id, first name, last name, e-mail address, and credit limit data for
customers with credit limits over 2500.
Which statement should you use?
Options:
1.
2.
3.
Result
To create a view as specified, you should use this statement:
CREATE view best_customers AS SELECT customer_id,
cust_first_name, cust_last_name, cust_email, credit_limit
FROM customers WHERE credit_limit > 2500;
Option 1 is incorrect. This statement would create a view of the customer id, first
name, last name, e-mail address and credit limit of customers who have credit
limits below 2500 and not above 2500.
Option 2 is incorrect. This statement would create a view of the customer id, first
name, last name, and credit limit of customers with credit limits over 2500.
However, it does not include the required CUST_EMAIL in the view.
Option 3 is correct. This statement will create the required view, based on the
customer id, first name, last name, e-mail address and credit limit of customers in
the CUSTOMERS table that have a credit limit of greater than 2500.
Next, you want to create a function named "GET_BEST_CUSTOMERS" that returns the
number of customers with credit ratings greater than or equal to 2500.
You have already written part of the code.
CREATE OR REPLACE FUNCTION GET_BEST_CUSTOMERS
(p_credit_limit NUMBER DEFAULT 2500)
RETURN NUMBER
IS
v_highest_amount NUMBER := 0;
BEGIN
<required code>
END GET_BEST_CUSTOMERS;
Step 2 of 3
Which code segment should be used instead of the line <required code> inside
the executable section to return the required results?
Options:
1.
2.
3.
Result
To create a function named "GET_BEST_CUSTOMERS" as specified, you should
replace the missing code with this code segment:
SELECT COUNT(*) INTO v_highest_amount FROM best_customers
WHERE credit_limit >= p_credit_limit;
RETURN v_highest_amount;
Option 1 is incorrect. To achieve the desired results, the value returned from
selecting COUNT(*) from the BEST_CUSTOMERS view should be stored in the
V_HIGHEST_AMOUNT variable. The statement should also return
V_HIGHEST_AMOUNT.
Option 2 is incorrect. To achieve the desired results, the value returned from
selecting COUNT(*) from the BEST_CUSTOMERS view should be stored in the
V_HIGHEST_AMOUNT variable.
Option 3 is correct. This statement will return the number of customers who have
a credit rating greater than or equal to 2500.
Step 3 of 3
Finally, you want to view the status of the BEST_CUSTOMERS view and
GET_BEST_CUSTOMERS function you have created by querying the
USER_OBJECTS data dictionary view.
Which statement should you use to return the object name, type, and status?
Options:
1.
2.
3.
4.
Result
To return the object name, object type, and status, you should query the
USER_OBJECTS data dictionary view using this statement:
SELECT object_name, object_type, status FROM user_objects
WHERE object_name LIKE '%BEST_CUST%';
Option 1 is incorrect. Although the view name BEST_CUSTOMERS and the function
name GET_BEST_CUSTOMERS contain the string "BEST_CUST", this query will not
return the correct results because the LIKE operator should be used in place of
the equals sign operator in the WHERE clause.
Option 2 is incorrect. The columns from the USER_OBJECTS table that should be
returned are OBJECT_NAME, OBJECT_TYPE, and STATUS. The underscore
symbol is a required part of the column name.
Option 3 is incorrect. Although this query will return the object name and status of
the BEST_CUSTOMERS view and the GET_BEST_CUSTOMERS function, it will not
return the required object type.
Option 4 is correct. This query will return the object name, object type, and status
of the BEST_CUSTOMERS view and GET_BEST_CUSTOMERS function. In this case,
both objects would have a status of VALID.
Step 1 of 2
You want to add an exception handler to your GET_BEST_CUSTOMERS function to
catch all exceptions.
Which statement correctly alters a session to enable compiler warnings?
Options:
1.
2.
3.
4.
Result
This statement alters a session to enable compiler warnings:
ALTER SESSION SET PLSQL_WARNINGS = 'enable:all';
Option 1 is incorrect. It is the PLSQL_WARNINGS parameter and not
PLSQL_WARNING that is used to enable or disable the reporting of warning
messages by the PL/SQL compiler.
Option 2 is correct. This statement enables the reporting of warning messages by
the PL/SQL compiler. The ENABLE value is used to enable a specific warning or a
set of warnings.
Option 3 is incorrect. The SET keyword is required before the PLSQL_WARNINGS
parameter for the statement to function correctly.
Option 4 is incorrect. This statement sets the PLSQL_WARNINGS parameter at the
system level instead of at the session level.
Next, you want to add an exception handler to the GET_BEST_CUSTOMERS function to
catch all exceptions.
You have already written part of the code.
CREATE OR REPLACE FUNCTION GET_BEST_CUSTOMERS
(p_credit_limit NUMBER DEFAULT 2500)
RETURN NUMBER
IS
v_highest_amount NUMBER := 0;
BEGIN
SELECT COUNT(*)
INTO v_highest_amount
FROM best_customers
WHERE credit_limit >= p_credit_limit;
RETURN v_highest_amount;
EXCEPTION
WHEN OTHERS THEN
RETURN 0;
END GET_BEST_CUSTOMERS;
Step 2 of 2
What code should replace the "WHEN OTHERS THEN" line in the current
EXCEPTION section to enable the GET_BEST_CUSTOMERS function to catch all
exceptions?
Options:
1.
2.
3.
4.
Result
To enable the GET_BEST_CUSTOMERS function to catch all exceptions, you use
this code:
WHEN OTHERS THEN RAISE_APPLICATION_ERROR (-20001, 'Error
occurred in get_best_customers.');
Option 1 is incorrect. The THEN keyword is required following the WHEN OTHERS
clause and before the call to the RAISE_APPLICATION_ERROR function.
Option 2 is incorrect. The correct function name is RAISE_APPLICATION_ERROR
and not RAISE APPLICATION ERROR.
Option 3 is incorrect. When using the RAISE_APPLICATION_ERROR function, the
specified error message must be enclosed in single quotes.
Option 4 is correct. Using the RAISE_APPLICATION_ERROR function in
conjunction with the WHEN OTHERS THEN clause allows you to specify an error
number and message to return when errors are encountered in your code.
You have successfully used some of the new language functionality enhancements
provided in Oracle Database 11g to retrieve table information, examine dependencies,
and handle errors.
After completing this topic, you should be able to recognize the steps for using native
dynamic SQL and the DBMS_SQL package to specify dynamic SQL statements, using
CLOB data types and also abstract data types.
In this example, you can pass any PL/SQL block as a parameter to the procedure.
CREATE OR REPLACE PROCEDURE gen_pl
(p_pgm CLOB)
IS
dynamic_pl CLOB := p_pgm;
BEGIN
EXECUTE IMMEDIATE dynamic_pl;
-- next line is for learning purposes only
DBMS_OUTPUT.PUT_LINE ('Just executed the following code: ' ||
dynamic_pl);
END gen_pl;
EXECUTE gen_pl('begin dbms_output.put_line(''put any code here''); end;')
put any code here
Just executed the following code: begin dbms_output.put_line('put any code here'); end;
PL/SQL procedure successfully completed.
This is an example of passing a call to the DBMS_OUTPUT procedure. You can pass in any
string to represent a PL/SQL block.
Question
Which statements accurately describe dynamic SQL?
Options:
1.
2.
PL/SQL programs can use dynamic SQL with the DBMS_SQL package only
3.
The full text of a dynamic SQL statement might be unknown until run time
4.
Answer
A dynamic SQL statement is a string literal, string variable, or string expression.
And the full text of a dynamic SQL statement might be unknown until run time.
Option 1 is correct. A dynamic SQL statement is a string literal, string variable, or
string expression. Using dynamic SQL, you can make your PL/SQL programs
more general and flexible because you do not need to know the full text of the
dynamic SQL statement until run time.
Option 2 is incorrect. A PL/SQL program can use dynamic SQL with either native
dynamic SQL or the DBMS_SQL package.
Option 3 is correct. The full text of a dynamic SQL statement might be unknown
until runtime. Your program may build the SQL statement based on different
scenarios within an application.
Option 4 is incorrect. Because a full dynamic SQL statement may not be known
until run time, its syntax is checked at run time instead of at compile time.
Question
When should native dynamic SQL be used in place of the DBMS_SQL package?
Options:
1.
2.
When you do not know how many columns a SELECT statement will return
3.
4.
When you want to use the %FOUND cursor attributes after issuing a dynamic SQL
INSERT statement
Answer
You should use native dynamic SQL when the dynamic SQL statement retrieves
rows into records. And you should use it when you want to use the %FOUND cursor
attributes after issuing a dynamic SQL INSERT statement.
Option 1 is correct. Native dynamic SQL should be used when a SQL statement
retrieves rows into records. It is recommended that you always use native
dynamic SQL, except when you cannot.
Option 2 is incorrect. The DBMS_SQL package, instead of native dynamic SQL,
should be used when you do not know how many columns a SELECT statement
will return, or what their data types are.
Option 3 is incorrect. The DBMS_SQL package, instead of native dynamic SQL,
should be used when you do not know the SELECT list at compile time.
Option 4 is correct. You should use native dynamic SQL when you want to use the
%FOUND, %NOTFOUND, %ISOPEN, or %ROWCOUNT SQL cursor attributes after
issuing a dynamic SQL statement that is an INSERT, UPDATE, DELETE, or single-row SELECT statement.
2. Interoperability
You can switch between the DBMS_SQL package and native dynamic SQL by using the
two new functions that are added into the DBMS_SQL package as of Oracle Database
11g.
The DBMS_SQL.TO_REFCURSOR function converts a SQL cursor number to a weakly
typed variable of the PL/SQL data type REF CURSOR. You can use the REF CURSOR
variable in native dynamic SQL statements.
The DBMS_SQL.TO_CURSOR_NUMBER function converts a REF CURSOR variable, either
strongly or weakly typed, to a SQL cursor number. You can pass the SQL cursor number
to DBMS_SQL subprograms.
The two new functions have specific code associated with them:
DBMS_SQL.TO_REFCURSOR
DBMS_SQL.TO_REFCURSOR
(cursor_number IN INTEGER)
RETURN SYS_REFCURSOR;
DBMS_SQL.TO_CURSOR_NUMBER
DBMS_SQL.TO_CURSOR_NUMBER
(rc IN OUT SYS_REFCURSOR)
RETURN INTEGER;
It is best practice to avoid having any client programs use SQL directly. You should
implement the SQL via PL/SQL routines.
If you are doing a query with an unbounded result, you need a REF CURSOR.
If the WHERE clause is not known until run time, you can use DBMS_SQL for the initial processing
and then switch to native dynamic SQL to perform a BULK COLLECT fetch.
In this example, a DBMS_SQL cursor is created, opened, parsed, and executed.
CREATE OR REPLACE PROCEDURE do_query (rep_id NUMBER)
IS
TYPE num_list IS TABLE OF NUMBER INDEX BY BINARY_INTEGER;
TYPE cur_type IS REF CURSOR;
src_cur cur_type;
c_hndl NUMBER;
cust_nos num_list;
crdt_nos num_list;
ret INTEGER;
sql_stmt CLOB;
BEGIN
c_hndl := DBMS_SQL.OPEN_CURSOR;
sql_stmt := 'SELECT customer_id, credit_limit FROM customers
WHERE account_mgr_id = :b1';
DBMS_SQL.PARSE(c_hndl, sql_stmt, DBMS_SQL.NATIVE);
DBMS_SQL.BIND_VARIABLE(c_hndl, 'b1', rep_id);
ret := DBMS_SQL.EXECUTE(c_hndl);
-- continued on next page
The cursor is then transformed into a PL/SQL REF CURSOR that is consumed by native
dynamic SQL.
The cursor is switched to native dynamic SQL and the fetch is performed with native
dynamic SQL.
-- continued from previous page
-- switch from dbms_sql to native dynamic SQL
src_cur := DBMS_SQL.TO_REFCURSOR(c_hndl);
-- fetch with native dynamic SQL
FETCH src_cur BULK COLLECT INTO cust_nos, crdt_nos;
IF cust_nos.COUNT > 0 THEN
DBMS_OUTPUT.PUT_LINE ('Customer Credit Limit');
DBMS_OUTPUT.PUT_LINE ('-------- ------------');
FOR i IN 1 .. cust_nos.COUNT LOOP
DBMS_OUTPUT.PUT_LINE(cust_nos(i) || ' ' ||
crdt_nos(i));
END LOOP;
END IF;
CLOSE src_cur;
END do_query;
/
This example shows the execution of the DO_QUERY procedure.
EXECUTE do_query(145)
Customer Credit Limit
-------- ------------
308      1200
309      1200
310      5000
360      3600
344      2400
380      3700
...
934      600
PL/SQL procedure successfully completed.
When using the DBMS_SQL.TO_REFCURSOR function, the cursor passed in by the cursor
number parameter must be opened, parsed, and executed.
After the cursor number is transformed into a REF CURSOR, you cannot use
DBMS_SQL.IS_OPEN to check whether the cursor number is still open.
Toggling between a REF CURSOR and a DBMS_SQL cursor number after starting to fetch
is not allowed.
You can use the DBMS_SQL.TO_CURSOR_NUMBER function to transform a REF CURSOR
into a DBMS_SQL cursor number.
In this example, a REF CURSOR is opened and transformed into a DBMS_SQL cursor.
When using the DBMS_SQL.TO_CURSOR_NUMBER function, the REF CURSOR passed in
must be opened first.
CREATE OR REPLACE PROCEDURE do_query2 (sql_stmt VARCHAR2, rep_id
NUMBER)
IS
TYPE cur_type IS REF CURSOR;
src_cur cur_type;
c_hndl NUMBER;
desctab DBMS_SQL.DESC_TAB;
colcnt NUMBER; custid NUMBER; crdvar NUMBER;
BEGIN
OPEN src_cur FOR sql_stmt USING rep_id;
-- switch from native dynamic SQL to DBMS_SQL:
c_hndl := DBMS_SQL.TO_CURSOR_NUMBER(src_cur);
DBMS_SQL.DESCRIBE_COLUMNS(c_hndl, colcnt, desctab);
-- define columns
FOR i in 1 .. colcnt LOOP
IF desctab(i).col_type=1 THEN
DBMS_SQL.DEFINE_COLUMN(c_hndl, i, custid);
ELSIF desctab(i).col_type = 2 THEN
DBMS_SQL.DEFINE_COLUMN(c_hndl, i, crdvar);
END IF;
END LOOP;
-- continued on next page
After the REF CURSOR is transformed into a DBMS_SQL cursor number, the REF CURSOR
is no longer accessible by any native dynamic SQL operations.
You cannot toggle between a REF CURSOR and a DBMS_SQL cursor number after
fetching is started.
-- continued from previous page
-- fetch rows
WHILE DBMS_SQL.FETCH_ROWS(c_hndl) > 0 LOOP
FOR i IN 1 .. colcnt LOOP
IF desctab(i).col_type=1 THEN
DBMS_SQL.COLUMN_VALUE(c_hndl, i, custid);
ELSIF desctab(i).col_type = 2 THEN
DBMS_SQL.COLUMN_VALUE(c_hndl, i, crdvar);
END IF;
END LOOP;
-- could do more processing...
END LOOP;
DBMS_SQL.CLOSE_CURSOR(c_hndl);
END do_query2;
/
In this example, the DO_QUERY2 procedure is created. It is then executed with a SQL
statement passed to it as the first parameter.
This SQL statement is opened as a REF CURSOR. The procedure converts the REF
CURSOR into a DBMS_SQL cursor. The rest of the processing for this procedure uses
DBMS_SQL.
EXECUTE do_query2('SELECT customer_id, credit_limit FROM customers
WHERE account_mgr_id = :b1', 148)
PL/SQL procedure successfully completed.
Opaque types are abstract data types. With data implemented as simply a series of
bytes, the internal representation is not exposed.
Often opaque types are provided by Oracle's supplied packages rather than being
implemented by you.
Opaque types are similar in some basic ways to object types, with similar concepts of
static methods, instances, and instance methods. Typically, only the methods supplied
with an opaque type allow you to manipulate the state and internal byte representation.
For example, XMLType, provided with Oracle Database 11g, facilitates handling XML data
natively in the database.
Note
PL/SQL data types, such as INDEX-BY tables, Booleans, and records are not
supported as bind and define variable data types in DBMS_SQL. However,
DBMS_SQL supports all SQL data types.
In this example, DBMS_SQL is used with the varray column in the CUSTOMERS table. The
PHONE_LIST_TYP object is defined in the Order Entry sample schema as
Phone_list_typ VARRAY(5) OF VARCHAR2(25).
CREATE OR REPLACE PROCEDURE update_phone_nos
(p_new_nos phone_list_typ, p_cust_id customers.customer_id%TYPE)
IS
some_phone_nos phone_list_typ;
c_hndl NUMBER;
r NUMBER;
sql_stmt CLOB :=
'UPDATE customers SET phone_numbers = :b1
WHERE customer_id = :b2
RETURNING phone_numbers INTO :b3';
BEGIN
c_hndl := DBMS_SQL.OPEN_CURSOR;
DBMS_SQL.PARSE(c_hndl, sql_stmt, dbms_sql.native);
DBMS_SQL.BIND_VARIABLE (c_hndl, 'b1', p_new_nos);
DBMS_SQL.BIND_VARIABLE (c_hndl, 'b2', p_cust_id);
DBMS_SQL.BIND_VARIABLE (c_hndl, 'b3', some_phone_nos);
r := DBMS_SQL.EXECUTE (c_hndl);
DBMS_SQL.VARIABLE_VALUE(c_hndl, 'b3', some_phone_nos);
DBMS_SQL.CLOSE_CURSOR(c_hndl);
-- continued on next page
number = 12345678 updated.
number = 22222222 updated.
number = 33333333 updated.
number = 44444444 updated.
Question
Which data types does the DBMS_SQL package support?
Options:
1.
2.
3.
4.
Answer
The DBMS_SQL package supports abstract data types and all SQL data types.
Option 1 is correct. The DBMS_SQL package supports abstract types, including
opaque types.
Option 2 is correct. DBMS_SQL supports abstract data types, varrays, nested
tables, REFs, opaque types, and all SQL data types.
Option 3 is incorrect. Opaque types are abstract data types, which are supported
by DBMS_SQL.
Option 4 is incorrect. PL/SQL data types, such as INDEX-BY tables, Booleans,
and records are not supported as bind and define variable data types in
DBMS_SQL.
Summary
Native dynamic SQL and the DBMS_SQL package are two ways to implement a dynamic
SQL statement programmatically.
You can switch between the DBMS_SQL package and native dynamic SQL by using the
two new functions that are added into the DBMS_SQL package.
DBMS_SQL supports abstract data types. You can use varrays, nested tables, REFs, and
opaque types with the DBMS_SQL package. PL/SQL data types, such as INDEX-BY
tables, Booleans, and records are not supported as bind and define variable data types in
DBMS_SQL.
After completing this topic, you should be able to identify the steps for using language
enhancements to improve sequence usability, control loop iterations, employ named and
mixed notation in calls to PL/SQL, and place a table in read-only mode.
1. Sequence enhancement
Prior to Oracle Database 11g, you were forced to write a SQL statement in order to use a
sequence object value in a PL/SQL subroutine.
Typically, you would write a SELECT statement to reference the pseudocolumns of
NEXTVAL and CURRVAL to obtain a sequence number. This method created a usability
problem.
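For example, the pre-11g approach typically looked something like the sketch below; the sequence and
variable names are illustrative.
DECLARE
  v_new_orderno NUMBER;
BEGIN
  -- before Oracle Database 11g: a SELECT from DUAL was needed
  SELECT orderno_seq.NEXTVAL
  INTO v_new_orderno
  FROM dual;
END;
/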
In Oracle Database 11g, the limitation of forcing you to write a SQL statement to retrieve
a sequence value is lifted. With the sequence enhancement feature, you can reference
sequence_name.NEXTVAL and sequence_name.CURRVAL directly in a PL/SQL expression
wherever a NUMBER expression is allowed.
Question
Within an anonymous PL/SQL block, you have declared a variable
v_new_orderno, with the data type NUMBER.
Which line of code would correctly set the value of the v_new_orderno variable
to the next value of the orderno_seq sequence in the executable section of your
PL/SQL block?
Options:
1.
v_new_orderno = orderno_seq.CURRVAL;
2.
v_new_orderno := orderno_seq.CURRVAL;
3.
v_new_orderno = orderno_seq.NEXTVAL;
4.
v_new_orderno := orderno_seq.NEXTVAL;
Answer
This line of code would correctly set the value of the v_new_orderno variable as
specified:
v_new_orderno := orderno_seq.NEXTVAL;
Option 1 is incorrect. To set the value of the variable to the next value of the
orderno_seq sequence, a colon is required before the equals sign. And the
pseudocolumn NEXTVAL and not CURRVAL should be used.
Option 2 is incorrect. To set the value of the variable to the next value of the
orderno_seq sequence, the pseudocolumn NEXTVAL not CURRVAL should
be used.
Option 3 is incorrect. To set the value of the variable to the next value of the
orderno_seq sequence, a colon is required before the equals sign.
Option 4 is correct. With the release of Oracle Database 11g, you can use the
CURRVAL and NEXTVAL pseudocolumns, qualified by a sequence name, directly
in a PL/SQL expression.
DECLARE
  v_total SIMPLE_INTEGER := 0;
BEGIN
  FOR i IN 1..10 LOOP
    v_total := v_total + i;
    dbms_output.put_line ('Total is: ' || v_total);
    CONTINUE WHEN i > 5;
    v_total := v_total + i;
    dbms_output.put_line ('End of Loop Total is: ' || v_total);
  END LOOP;
END;
/
The end result of the TOTAL variable is 70.
Total is: 1
End of Loop Total is: 2
Total is: 4
End of Loop Total is: 6
Total is: 9
End of Loop Total is: 12
Total is: 16
End of Loop Total is: 20
Total is: 25
End of Loop Total is: 30
Total is: 36
Total is: 43
Total is: 51
Total is: 60
Total is: 70
After the innermost loop is terminated by the CONTINUE statement, control transfers to
the next iteration of the outermost loop, labeled BeforeTopLoop in this example.
CREATE OR REPLACE PROCEDURE two_loop
IS
  v_total NUMBER := 0;
BEGIN
  <<BeforeTopLoop>>
  FOR i IN 1..10 LOOP
    v_total := v_total + 1;
    dbms_output.put_line ('Total is: ' || v_total);
    FOR j IN 1..10 LOOP
      CONTINUE BeforeTopLoop WHEN i + j > 5;
      v_total := v_total + 1;
    END LOOP;
  END LOOP;
END two_loop;
Procedure created.
When this pair of loops completes, the value of the TOTAL variable is 20.
You can also use the CONTINUE statement inside an inner block of code that does not
contain a loop as long as the block is nested inside an appropriate outer loop.
--RESULTS:
EXECUTE two_loop
Total is: 1
Total is: 6
Total is: 10
Total is: 13
Total is: 15
Total is: 16
Total is: 17
Total is: 18
Total is: 19
Total is: 20
PL/SQL procedure successfully completed.
The CONTINUE statement gives you greater programming functionality. However, there
are some limitations for its use.
You should be careful not to use the CONTINUE statement outside of a loop or to pass
through a procedure, function, or method boundary. Doing so generates a compiler error.
PL/SQL allows arguments in a subroutine call to be specified using positional, named, or
mixed notation.
Before Oracle Database 11g, only the positional notation was supported in calls from
SQL. Starting in Oracle Database 11g, named and mixed notation can be used for
specifying arguments in calls to PL/SQL subroutines from SQL statements.
The benefits of named and mixed notation from SQL are that
for long parameter lists, with most having default values, you can omit values from the optional
parameters
you can avoid duplicating the default value of the optional parameter at each call site
In this example, the call to the function f within the SELECT SQL statement uses the
named notation.
Prior to Oracle Database 11g, you could not use the named or mixed notation when
passing parameters to a function from within a SQL statement. Prior to Oracle Database
11g, you received the 'ORA-00907: missing right parenthesis' error.
CREATE OR REPLACE FUNCTION f (
p1 IN NUMBER DEFAULT 1,
p5 IN NUMBER DEFAULT 5)
RETURN NUMBER
IS
v number;
BEGIN
v:= p1 + (p5 * 2);
RETURN v;
END f;
/
Function created.
SELECT f(p5 => 10) FROM DUAL;
F(P5=>10)
----------
        21
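Mixed notation combines positional and named arguments, with the positional arguments listed first. A
brief sketch using the same function:
-- mixed notation: p1 is passed positionally, p5 is passed by name
SELECT f(2, p5 => 10) FROM DUAL;   -- evaluates 2 + (10 * 2) = 22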
Question
What must be considered when using the CONTINUE statement?
Options:
1.
2.
3.
4.
It cannot be used to filter data inside a loop body before main processing begins
Answer
The CONTINUE statement allows you to transfer control within a loop back to a
new iteration. And it cannot appear outside a loop.
Option 1 is correct. The CONTINUE statement gives you greater programming
functionality. It allows you to transfer control within a loop back to a new iteration
or to leave the loop.
Option 2 is incorrect. You cannot use the CONTINUE statement to pass through a
procedure, function or method boundary. This will result in a compiler error.
Option 3 is correct. The CONTINUE statement cannot appear outside of a loop.
Doing so will result in a compile error.
Option 4 is incorrect. The CONTINUE statement offers you a simplified means to
control loop iterations. It is commonly used to filter data inside the body of a loop
before the main processing begins.
3. Read-only tables
You can specify READ ONLY to place a table in read-only mode. When the table is in
read-only mode, you cannot issue any DML statements that affect the table or any
SELECT ... FOR UPDATE statements.
You can issue DDL statements as long as they do not modify any table data. Operations
on indexes associated with the table are allowed when the table is in read-only mode.
You use the ALTER TABLE syntax to put a table into read-only mode. This prevents DML
changes, and any DDL changes that modify table data, during table maintenance.
You specify READ WRITE to return a read-only table to read/write mode.
ALTER TABLE customers READ ONLY;
-- perform table maintenance and then
-- return the table to read/write mode
ALTER TABLE customers READ WRITE;
Note
The DROP command is executed only in the data dictionary, so access to the table
contents is not required. The space used by the table will not be reclaimed, until
the tablespace is made read/write again and the required changes can be made to
the block segment headers, and so on.
You decide to try out the various usability features introduced in Oracle Database 11g,
using the OE schema.
You start SQL Developer by double-clicking the SQL Developer 1.2 icon on your
desktop.
You right-click mydbconnection and select Connect.
You enter your password when prompted and click OK.
You create a new sequence using this code.
CREATE SEQUENCE customer_seq
START WITH 1000;
/
In the Object navigator, you right-click Sequences and select New Sequence from the
shortcut menu.
You then enter the sequence Name and Start Value, and click OK.
This code, which uses the SELECT statement, represents an older way of performing this
task.
CREATE OR REPLACE PROCEDURE add_customer
(p_last_name customers.cust_last_name%TYPE,
p_first_name customers.cust_first_name%TYPE)
IS
BEGIN
INSERT INTO customers(customer_id,cust_last_name,
cust_first_name)
You modify the code so that you can call this sequence directly and eliminate the SELECT
statement.
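A possible rewritten version of the procedure is sketched below; the local variable name is illustrative.
The sequence value is assigned directly in a PL/SQL expression, so the SELECT from DUAL is no longer
needed.
CREATE OR REPLACE PROCEDURE add_customer
  (p_last_name customers.cust_last_name%TYPE,
   p_first_name customers.cust_first_name%TYPE)
IS
  v_new_cust_id customers.customer_id%TYPE;   -- illustrative variable name
BEGIN
  -- Oracle Database 11g: reference the sequence directly in an expression
  v_new_cust_id := customer_seq.NEXTVAL;
  INSERT INTO customers (customer_id, cust_last_name, cust_first_name)
  VALUES (v_new_cust_id, p_last_name, p_first_name);
END add_customer;
/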
END LOOP;
dbms_output.put_line ('Out of the loop,
index value is: ' || v_index);
END continue_loop;
/
To run the CONTINUE_LOOP procedure, you right-click it and select Run from the options
menu.
You enter the parameter value in the Run PL/SQL window, and then click OK.
If you execute the CONTINUE_LOOP procedure and pass it the value of 5, these are the
results.
In the loop before CONTINUE,
index value is: 1
In the loop before CONTINUE,
index value is: 2
In the loop before CONTINUE,
index value is: 3
In the loop after CONTINUE,
index value is: 3
In the loop before CONTINUE,
index value is: 4
In the loop after CONTINUE,
index value is: 4
In the loop before CONTINUE,
index value is: 5
Out of the loop,
index value is: 5
You can modify a table to be read-only by running this code in the Code Editor window.
ALTER TABLE customers READ ONLY;
You run the procedure by expanding the Procedures node in the Object navigator and
right-clicking the ADD_CUSTOMER procedure. You select Run from the shortcut menu.
You then enter the parameter values in the Run PL/SQL window and click OK.
If you try to modify the table, for example by adding an extra customer, you receive an
error message.
To change the table back to being read-write, you use this code.
ALTER TABLE customers READ WRITE;
Question
Identify the statement that would correctly place a table called ORDERS into read-only mode.
Options:
1.
2.
3.
4.
Answer
This statement would correctly place a table called ORDERS into read-only mode:
ALTER TABLE orders READ ONLY;
Option 1 is correct. To place a table named ORDERS into read-only mode, the
statement ALTER TABLE orders READ ONLY; is used. In read-only mode,
DML and DDL statements that affect the table are not permitted.
Option 2 is incorrect. This is not the correct statement for placing a table named
ORDERS into read-only mode. In the ALTER TABLE statement, there should not be
a hyphen between READ and ONLY.
Option 3 is incorrect. This is not the correct statement for placing a table named
ORDERS into read-only mode. In the ALTER TABLE statement, the word MODE
should not be used.
Option 4 is incorrect. This is not the correct statement for placing a table named
ORDERS into read-only mode. In the ALTER TABLE statement, there should not be
a hyphen between READ and ONLY, and the word MODE should not be used.
Summary
In Oracle Database 11g, you can use the NEXTVAL and CURRVAL pseudocolumns in any
PL/SQL context where an expression of NUMBER data type may legally appear.
The CONTINUE statement that is added to PL/SQL enables you to transfer control within
a loop back to a new iteration or to leave the loop. PL/SQL allows arguments in a
subroutine call to be specified using positional, named, or mixed notation.
You use the ALTER TABLE syntax to put a table into read-only mode. You specify READ
WRITE to return a read-only table to read/write mode. You can drop a table that is in read-only mode.
Improving Performance
Learning objective
After completing this topic, you should be able to recognize the steps for improving the
performance of SQL and PL/SQL with a new compiler, a new, faster data type, inlining
for faster performance, caching, and flashback enhancements.
Note
In Oracle Database 10g, Release 1, the configuration of initialization parameters
and the command setup for native compilation were simplified. The only
parameter required was PLSQL_NATIVE_LIBRARY_DIR. The parameters related
to the compiler, linker, and make utility are obsolete: PLSQL_NATIVE_C_COMPILER,
PLSQL_NATIVE_LINKER, PLSQL_NATIVE_MAKE_FILE_NAME, and
PLSQL_NATIVE_MAKE_UTILITY. Native
compilation is turned on and off by a separate initialization parameter,
PLSQL_CODE_TYPE, rather than being one of several options in the
PLSQL_COMPILER_FLAGS parameter, which is now deprecated. The
spnc_commands file, located in your ORACLE_HOME/plsql directory, contains
the information for compiling and linking, rather than a makefile.
The SIMPLE_INTEGER data type is a predefined subtype of the BINARY_INTEGER or
PLS_INTEGER data type that has the same numeric range as BINARY_INTEGER. It
differs significantly from PLS_INTEGER in its overflow semantics.
Incrementing the largest SIMPLE_INTEGER value by one silently produces the smallest
value and decrementing the smallest value by one silently produces the largest value.
These "wrap around" semantics conform to the IEEE standard for 32-bit integer
arithmetic.
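A small sketch of the wrap-around behavior (the values used are the standard 32-bit integer bounds):
DECLARE
  v SIMPLE_INTEGER := 2147483647;   -- largest SIMPLE_INTEGER value
BEGIN
  v := v + 1;                       -- wraps around silently; no numeric overflow error
  DBMS_OUTPUT.PUT_LINE(v);          -- prints -2147483648
END;
/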
The SIMPLE_INTEGER predefined subtype has several key features: it is a predefined subtype of
PLS_INTEGER, it has a NOT NULL constraint, and its values wrap around on overflow rather than
raising an error.
In the test code, the variables and constants are declared using a My_Integer_t subtype, which
resolves to either SIMPLE_INTEGER or PLS_INTEGER depending on a conditional compilation flag:
v00 My_Integer_t := 0;
v01 My_Integer_t := 0;
v02 My_Integer_t := 0;
v03 My_Integer_t := 0;
v04 My_Integer_t := 0;
v05 My_Integer_t := 0;
two CONSTANT My_Integer_t := 2;
lmt CONSTANT My_Integer_t := 100000000;
Note
There is no difference between using the PLS_INTEGER data type and the
BINARY_INTEGER data type. Starting in Oracle Database 10g, they are exactly
the same.
The main processing of this code performs a loop where simple mathematical
computations take place. The loop is timed using DBMS_UTILITY.GET_CPU_TIME.
If you use the SIMPLE_INTEGER in a mixed operation with any other numeric types, or
pass it as a parameter or a bind, or define it where a PLS_INTEGER is expected, a
compiler warning is issued. If you violate a limitation, a compiler error is raised.
BEGIN
t0 := DBMS_UTILITY.GET_CPU_TIME();
WHILE v01 < lmt LOOP
v00 := v00 + Two;
v01 := v01 + Two;
v02 := v02 + Two;
v03 := v03 + Two;
v04 := v04 + Two;
v05 := v05 + Two;
END LOOP;
IF v01 <> lmt OR v01 IS NULL THEN
RAISE Program_Error;
END IF;
t1 := DBMS_UTILITY.GET_CPU_TIME();
DBMS_OUTPUT.PUT_LINE(
RPAD(LOWER($$PLSQL_Code_Type), 15)||
RPAD(LOWER(My_Integer_t_Name), 15)||
Supplement: View the complete DBMS_UTILITY.GET_CPU_TIME loop.
The procedure p is executed under two different conditions: native compilation with the
SIMPLE_INTEGER data type, and native compilation with the PLS_INTEGER data type.
The procedure is natively compiled and the code type is set to use the
SIMPLE_INTEGER.
ALTER PROCEDURE p COMPILE
PLSQL_Code_Type = NATIVE PLSQL_CCFlags = 'simple:true'
REUSE SETTINGS;
Procedure altered.
EXECUTE p()
native         simple_integer      51 centiseconds

-- after recompiling with PLSQL_CCFlags = 'simple:false' and running EXECUTE p() again:
native         pls_integer        884 centiseconds
2. Inlining
Procedure inlining is an optimization process that replaces procedure calls with a copy of
the body of the procedure to be called.
The copied procedure almost always runs faster than the original call because the need
to create and initialize the stack frame for the called procedure is eliminated.
The optimization can be applied over the combined text of the call context and the copied
procedure body. Propagation of constant actual arguments often causes the copied body
to collapse under optimization.
When inlining is achieved, you will notice performance gains of two to ten times.
With Oracle Database 11g, the PL/SQL compiler can automatically find calls that should
be inlined and can do that inlining correctly and quickly. There are some controls to
specify where and when the compiler should do this work, using the
PLSQL_OPTIMIZE_LEVEL database parameter, but usually, a general request is
sufficient.
When implementing inlining, it is recommended that the process should be applied to
smaller programs, and programs that execute frequently. For example, you may want to
inline small helper programs.
To help you identify which programs to inline, you can use the plstimer PL/SQL
performance tool. This tool specifically analyzes program performance in terms of time
spent in procedures and time spent from particular call sites. It is important that you
identify the procedure calls that may benefit from inlining.
You can use inlining by setting the PLSQL_OPTIMIZE_LEVEL parameter to 3. When this
parameter is set to 3, the PL/SQL compiler searches for calls that might profit by inlining
and inlines the most profitable calls.
Profitability is measured by those calls that will help the program speed up the most and
keep the compiled object program as short as possible.
You can set the PLSQL_OPTIMIZE_LEVEL parameter using an ALTER SESSION
command.
ALTER SESSION SET plsql_optimize_level = 3;
Another way to use inlining is to use PRAGMA INLINE in your PL/SQL code. This
identifies whether a specific call should be inlined or not.
Setting this pragma to "YES" will have an effect only if the optimize level is set to 2 or
higher.
When a program is noninlined, the a:=a*b assignment at the end of the loop looks like it
could be moved before the loop.
However, it cannot be because a is passed as an IN OUT parameter to the TOUCH
procedure.
The compiler cannot be certain what the procedure does to its parameters. This results in
the multiplication and assignment being completed ten times instead of only once, even
though multiple executions are not necessary.
Before the transformation, the assignment executes on every iteration of the loop:
a := b;
FOR i IN 1..10 LOOP
...
a := a*b;
END LOOP;
After the inlining transformation, the optimizer can move the assignment out of the loop:
a := b;
a := a*b;
FOR i IN 1..10 LOOP
...
END LOOP;
Supplement: View the complete code of the loop inlining transformation.
To influence the optimizer to use inlining, you can set the PLSQL_OPTIMIZE_LEVEL
parameter to a value of 2 or 3. By setting this parameter, you are making a request that
inlining be used. It is up to the compiler to analyze the code and determine whether
inlining is appropriate.
Setting it to 2 means no automatic inlining is attempted. When the optimize level is set to
3, the PL/SQL compiler searches for calls that might profit by inlining and inlines the most
profitable calls.
Within a PL/SQL subroutine, you can use PRAGMA INLINE to suggest that a specific call
be inlined. When using PRAGMA INLINE, the first argument is the simple name of a
subroutine, a function name, a procedure name, or a method name. The second
argument is either the constant string "NO" or "YES". The pragma can go before any
statement or declaration. If you put it in the wrong place, you receive a syntax error
message from the compiler.
Setting the PRAGMA INLINE to "YES" strongly encourages the compiler to inline the
call. The compiler keeps track of the resources used during inlining and makes the
decision to stop inlining when the cost becomes too high.
CREATE OR REPLACE PROCEDURE small_pgm
IS
  a PLS_INTEGER;
  FUNCTION add_it(a PLS_INTEGER, b PLS_INTEGER)
    RETURN PLS_INTEGER
  IS
  BEGIN
    RETURN a + b;
  END;
BEGIN
  PRAGMA INLINE (add_it, 'YES');
  a := add_it(3, 4) + 6;
END small_pgm;
Setting the PRAGMA INLINE to "NO" always works, regardless of any other pragmas that
might also apply to the same statement. The pragma also applies at all optimization
levels, and it applies no matter how badly the compiler would like to inline a particular call.
To identify that a specific call should not be inlined, you use this code.
PRAGMA INLINE (function_name, 'NO');
Pragmas apply only to calls in the next statement following the pragma. Programs that
make use of smaller helper subroutines are good candidates for inlining.
Only local subroutines can be inlined. You cannot inline an external subroutine and cursor
functions should not be inlined.
Inlining can increase the size of a unit. However, be careful about suggesting to inline
functions that are deterministic.
The compiler inlines code automatically, provided that you are using native compilation
and have set the PLSQL_OPTIMIZE_LEVEL to 3.
If you have set PLSQL_Warnings = 'enable:all', using the SQL*Plus SHOW
ERRORS command displays the name of the code that is inlined.
The PLW-06004 compiler message tells you that a PRAGMA INLINE (<subprogram>, 'YES')
referring to the named procedure was found. The compiler will, if possible, inline this call.
The PLW-06005 compiler message tells you the name of the code that is inlined.
Alternatively, you can query the USER/ALL/DBA_ERRORS dictionary view.
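As a sketch, assuming the small_pgm procedure from the earlier example exists, you might recompile it
with warnings enabled and then check for inlining-related warnings.
ALTER SESSION SET PLSQL_WARNINGS = 'enable:all';

ALTER PROCEDURE small_pgm COMPILE
  PLSQL_OPTIMIZE_LEVEL = 3
  REUSE SETTINGS;

SHOW ERRORS PROCEDURE small_pgm

-- or query the dictionary for compiler warnings such as PLW-06004 and PLW-06005
SELECT line, position, text
FROM user_errors
WHERE name = 'SMALL_PGM'
AND attribute = 'WARNING';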
Deterministic functions compute the same outputs for the same inputs every time they are
invoked and have no side effects. In Oracle Database 11g, the PL/SQL compiler can
figure out whether a function is deterministic. It may not find all of the ones that truly are,
but it will find many of them. It will never mistake a nondeterministic function for a
deterministic function.
Question
Which methods can you use to implement inlining in your PL/SQL code?
Options:
1.
2.
3.
4.
Answer
To use inlining, you set the PLSQL_OPTIMIZE_LEVEL parameter to 3 or use
PRAGMA INLINE in your PL/SQL code.
Option 1 is correct. One of the methods for using inlining is to set the
PLSQL_OPTIMIZE_LEVEL parameter to 3. When this parameter is set to 3, the
PL/SQL compiler searches for calls that might profit by inlining and inlines the
most profitable calls.
Option 2 is incorrect. The parameter used for inlining is
PLSQL_OPTIMIZE_LEVEL and not PLSQL_OPTIMIZER_LEVEL.
Option 3 is correct. One of the methods for using inlining is to use PRAGMA
INLINE in your PL/SQL code. This identifies whether a specific call should be
inlined or not. Setting this parameter to YES will have an effect only if the optimize
level is set to two or higher.
Option 4 is incorrect. Using the plstimer PL/SQL performance tool isn't a
method of using inlining. It is a tool for identifying the procedure calls that might
benefit from inlining.
3. Caching
You can improve the performance of your queries by caching the results of a query in
memory and then using the cached results in future executions of the query or query
fragments.
The cached results reside in the result cache memory portion of the SGA. This feature is
designed to speed up query execution on systems with large memories.
SQL result caching is useful when your queries need to analyze a large number of rows
to return a small number of rows or a single row. Two new optimizer hints are available to
turn on and turn off SQL result caching. These are
/*+ result_cache */
/*+ no_result_cache */
These hints let you override settings of the RESULT_CACHE_MODE initialization
parameter.
You can execute DBMS_RESULT_CACHE.MEMORY_REPORT to produce a memory usage
report of the result cache.
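For example, from SQL*Plus you might run the report as follows (a minimal sketch; the report is written
through DBMS_OUTPUT):
SET SERVEROUTPUT ON
EXECUTE DBMS_RESULT_CACHE.MEMORY_REPORT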
Suppose you need to find the greatest average value of credit limit grouped by state over
the whole population.
The query results in a huge number of rows analyzed to yield a few or one row. In your
query, the data changes fairly slowly, say every hour, but the query is repeated fairly
often, say every second.
In this case, you use the new optimizer hint /*+ result_cache */ in your query.
SELECT /*+ result_cache */
AVG(cust_credit_limit), cust_state_province
FROM sh.customers
GROUP BY cust_state_province;
Starting in Oracle Database 11g, you can use the PL/SQL cross-session function result
caching mechanism. This caching mechanism provides you with a language-supported
and system-managed means for storing the results of PL/SQL functions in a shared
global area (SGA), which is available to every session that runs your application.
The caching mechanism is both efficient and easy to use, and it relieves you of the
burden of designing and developing your own caches and cache-management policies.
To enable result caching for a function, use the RESULT_CACHE clause in your PL/SQL
function.
If a result-cached function is called, the system checks the cache.
If the cache contains the result from a previous call to the function with the same
parameter values, the system returns the cached result to the caller and does not
reexecute the function body.
If the cache does not contain the result, the system executes the function body and adds
the result, for these parameter values, to the cache before returning control to the caller.
The cache can accumulate many results: one result for every unique combination of
parameter values with which each result-cached function has been called. If the system
needs more memory, it ages out (deletes) one or more cached results.
You can specify the database objects that are used to compute a cached result, so that if
any of them are updated, the cached result becomes invalid and must be recomputed.
The best candidates for result caching are functions that are called frequently but depend
on information that changes infrequently or never.
Suppose you need a PL/SQL function that derives a complex metric. The data that your
function calculates changes slowly, but the function is frequently called.
In this case, you use the new result_cache clause in your function definition.
CREATE OR REPLACE FUNCTION productName
(prod_id NUMBER, lang_id VARCHAR2)
RETURN NVARCHAR2
RESULT_CACHE RELIES_ON (product_descriptions)
IS
result VARCHAR2(50);
BEGIN
SELECT translated_name INTO result
FROM product_descriptions
WHERE product_id = prod_id AND language_id = lang_id;
RETURN result;
END;
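A brief usage sketch follows; the product ID and language ID values shown are illustrative.
-- the first call for these arguments executes the function body and caches the result
SELECT productName(3150, 'US') FROM DUAL;

-- a repeated call with the same arguments can be satisfied from the result cache
SELECT productName(3150, 'US') FROM DUAL;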
When writing code for the PL/SQL result cache option, you need to
include the RESULT_CACHE clause in the function declaration
optionally include the RELIES_ON clause to specify any tables or views on which the function
results depend
In the example, the productName function has result caching enabled through the
RESULT_CACHE option in the function declaration.
In this example, the RELIES_ON clause is used to identify the PRODUCT_DESCRIPTIONS
table on which the function results depend.
CREATE OR REPLACE FUNCTION productName
(prod_id NUMBER, lang_id VARCHAR2)
RETURN NVARCHAR2
RESULT_CACHE RELIES_ON (product_descriptions)
IS
result VARCHAR2(50);
BEGIN
SELECT translated_name INTO result
FROM product_descriptions
Question
Which statements accurately describe SQL result caching?
Options:
1.
It is enabled for a function using the CACHE_RESULT clause in the PL/SQL function
2.
It is useful when your queries need to analyze a large number of rows to return a
small number of rows or a single row
3.
It includes two new optimizer hints for turning SQL result caching on and off
4.
Answer
SQL result caching is useful when your queries need to analyze a large number of
rows to return a small number of rows or a single row. In addition, two new
optimizer hints are available to turn on and off SQL result caching.
Option 1 is incorrect. To enable result caching for a function, you use the
RESULT_CACHE clause in your PL/SQL code.
Option 2 is correct. SQL result caching is useful when your queries need to
analyze a large number of rows to return a small number of rows or a single row.
You can improve the performance of your queries by caching the results in
memory and then using the cached results in future executions of the query or
query fragments.
Option 3 is correct. Two new optimizer hints are available to turn SQL result
caching on and off: result_cache and no_result_cache. These hints let you
override settings of the RESULT_CACHE_MODE initialization parameter.
Option 4 is incorrect. Result caching is efficient and easy to use, and it relieves
you of the burden of designing and developing your own caches and cache-management policies.
The Flashback Data Archive feature enables you to store and track historical changes to table data.
When you create a Flashback Data Archive, you must provide its name, and you must also provide the
name of the first tablespace of the Flashback Data Archive.
You can optionally identify the maximum amount of space that the Flashback Data
Archive can use in the first tablespace. The default is unlimited.
Unless your space quota on the first tablespace is also unlimited, you must specify this
value; otherwise, you receive the ORA-55621 error.
You must provide the retention time, which is the number of days that Flashback Data Archive data
for the table is guaranteed to be stored.
In the following example, a default Flashback Data Archive named fla1 is created. It uses up to 10 GB
of the tbs1 tablespace, and its data will be retained for five years.
CONNECT sys/oracle@orcl AS sysdba
-- create the Flashback Data Archive
CREATE FLASHBACK ARCHIVE DEFAULT fla1
TABLESPACE tbs1 QUOTA 10G RETENTION 5 YEAR;
Next you specify the default Flashback Data Archive. By default, the system has no
Flashback Data Archive.
You can set it by specifying the name of an existing Flashback Data Archive in the SET
DEFAULT clause of the ALTER FLASHBACK ARCHIVE statement.
Alternatively, you can include DEFAULT in the CREATE FLASHBACK ARCHIVE statement
when you create a Flashback Data Archive.
-- Specify the default Flashback Data Archive
ALTER FLASHBACK ARCHIVE fla1 SET DEFAULT;
Next the Flashback Data Archive is enabled. By default, flashback archiving is disabled.
At any time, you can enable flashback archiving for a table.
-- Enable Flashback Data Archive
ALTER TABLE oe1.inventories FLASHBACK ARCHIVE;
ALTER TABLE oe1.warehouses FLASHBACK ARCHIVE;
Note
If Automatic Undo Management is disabled, you receive the ORA-55614 error
when you try to modify the table.
To enable flashback archiving for a table, include the FLASHBACK ARCHIVE clause in
either the CREATE TABLE or ALTER TABLE statement.
In the FLASHBACK ARCHIVE clause, you can specify the Flashback Data Archive where
the historical data for the table will be stored. The default is the default Flashback Data
Archive for the system. If a table already has flashback archiving enabled, and you try to
enable it again with a different Flashback Data Archive, an error occurs.
To disable flashback archiving for a table, specify NO FLASHBACK ARCHIVE in the
ALTER TABLE statement.
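For example (the table name used here is illustrative), you might enable archiving into a specific
Flashback Data Archive and later disable it:
-- enable flashback archiving for a table, using a named Flashback Data Archive
ALTER TABLE oe1.orders FLASHBACK ARCHIVE fla1;

-- disable flashback archiving for the table
ALTER TABLE oe1.orders NO FLASHBACK ARCHIVE;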
For example, you can use Flashback Data Archive to access historical data, and retrieve
the inventories of product 3108.
The initial values from the first statement are displayed.
SELECT product_id, warehouse_id, quantity_on_hand
FROM oe1.inventories
WHERE product_id = 3108;
PRODUCT_ID WAREHOUSE_ID QUANTITY_ON_HAND
---------- ------------ ----------------
      3108            8              122
      3108            9              110
      3108            2              194
      3108            4              170
      3108            6              146
Next you change the data to update the QUANTITY_ON_HAND values for product 3108 to
300.
UPDATE oe1.inventories
SET quantity_on_hand = 300
WHERE product_id = 3108;
The QUANTITY_ON_HAND values for product 3108 are updated to 300.
SELECT product_id, warehouse_id, quantity_on_hand
FROM oe1.inventories
WHERE product_id = 3108;
PRODUCT_ID WAREHOUSE_ID QUANTITY_ON_HAND
---------- ------------ ----------------
      3108            8              300
      3108            9              300
      3108            2              300
      3108            4              300
      3108            6              300
After the update occurs, you can still view the flashback data with this query.
These are the values retrieved for the inventories of product 3108 as of June 26, 2007.
This date is before the update statement was executed.
SELECT product_id, warehouse_id, quantity_on_hand
FROM oe1.inventories AS OF TIMESTAMP TO_TIMESTAMP
('2007-06-26 00:00:00', 'YYYY-MM-DD HH24:MI:SS')
WHERE product_id = 3108;
PRODUCT_ID WAREHOUSE_ID QUANTITY_ON_HAND
---------- ------------ ----------------
      3108            8              122
      3108            9              110
      3108            2              194
      3108            4              170
      3108            6              146
You can view information about your flashback archives from the dictionary views. The
dictionary views are
*_FLASHBACK_ARCHIVE
*_FLASHBACK_ARCHIVE_TS
*_FLASHBACK_ARCHIVE_TABLES
*_FLASHBACK_ARCHIVE
The *_FLASHBACK_ARCHIVE view displays information about Flashback Data Archives.
*_FLASHBACK_ARCHIVE_TS
The *_FLASHBACK_ARCHIVE_TS view displays tablespaces of Flashback Data Archives.
*_FLASHBACK_ARCHIVE_TABLES
The *_FLASHBACK_ARCHIVE_TABLES view displays information about tables that are
enabled for flashback archiving.
This example provides information about tables that are enabled for flashback archiving
using the *_FLASHBACK_ARCHIVE_TABLES view.
DESCRIBE dba_flashback_archive_tables
Name                                Null?    Type
----------------------------------- -------- ---------------
TABLE_NAME                          NOT NULL VARCHAR2(30)
OWNER_NAME                          NOT NULL VARCHAR2(30)
FLASHBACK_ARCHIVE_NAME              NOT NULL VARCHAR2(255)
ARCHIVE_TABLE_NAME                           VARCHAR2(53)

FLASHBACK_ARCHIVE_NAME ARCHIVE_TABLE_NAME
---------------------- ------------------
FLA1                   SYS_FBA_HIST_70355
FLA1                   SYS_FBA_HIST_70336
Question
What data dictionary view enables you to view information about tables that are
enabled for flashback archiving?
Options:
1.
ALL/USER/DBA_FLASHBACK_ARCHIVE
2.
ALL/USER/DBA_FLASHBACK_ARCHIVE_TABLES
3.
ALL/USER/DBA_FLASHBACK_ARCHIVE_TS
4.
ALL/USER/DBA_FLASHBACK_ARCHIVES
Answer
The ALL/USER/DBA_FLASHBACK_ARCHIVE_TABLES data dictionary view
enables you to view information about tables that are enabled for flashback
archiving.
Option 1 is incorrect. The ALL/USER/DBA_FLASHBACK_ARCHIVE data dictionary
view displays information about Flashback Data Archives.
Option 2 is correct. The ALL/USER/DBA_FLASHBACK_ARCHIVE_TABLES data
dictionary view displays information about tables that are enabled for flashback
archiving. This view contains TABLE_NAME, OWNER_NAME,
FLASHBACK_ARCHIVE_NAME, and ARCHIVE_TABLE_NAME columns.
Option 3 is incorrect. The ALL/USER/DBA_FLASHBACK_ARCHIVE_TS data
dictionary view displays the tablespaces of Flashback Data Archives.
Option 4 is incorrect. This is not a valid data dictionary view. You would query
ALL/USER/DBA_FLASHBACK_ARCHIVE for information about Flashback Data
Archives.
Summary
In Oracle Database 11g, PL/SQL source is compiled directly to a dynamic link library (DLL
). The PL/SQL native compilation works out of the box, without requiring a C compiler on
a production box. This makes real native compilation faster than C native compilation.
Inlining is the process of replacing a call to a subroutine with a copy of the body of the
subroutine that is called. The copied procedure generally runs faster than the original and
can provide performance gains of up to ten times.
You can also use SQL query result caching and PL/SQL function result caching to
improve performance.
Flashback Data Archives provide the ability to store and track all transactional changes to
a record. This feature enables you to save development resources because you no
longer need to build this intelligence into your application.
After completing this topic, you should be able to examine and then improve performance
in Oracle Database 11g.
Exercise overview
In this exercise, you're required to identify the correct code that uses various Oracle
Database 11g performance enhancements to examine the performance of a data type,
and examine SQL and PL/SQL result caching and inlining.
This involves the following tasks:
examining performance
Step 1 of 3
What statement queries the V$PARAMETER view and returns the name and value
for the PLSQL_CODE_TYPE parameter?
Options:
1.
SELECT name
FROM v$parameter
WHERE name like 'plsql%';
2.
3.
4.
Result
The statement, SELECT name, value FROM v$parameter WHERE name
like 'plsql%'; queries the V$PARAMETER view and returns the name and
value for the PLSQL_CODE_TYPE parameter.
Option 1 is incorrect. This statement will return the name of all parameters from
the V$PARAMETER view that begin with the string "plsql". However, it will not
return the values associated with these parameters.
Option 2 is correct. This statement will return two columns. The first column will
contain the name of any parameter that begins with the string "plsql". The
second column will contain the values associated with the parameters returned in
the first column.
Option 3 is incorrect. To successfully return a name from the V$PARAMETER view
beginning with the string "plsql", the string in the WHERE clause should be
enclosed in single quotes.
Option 4 is incorrect. This statement will result in a "table or view does not exist
error" because it queried the VPARAMETER view instead of the V$PARAMETER
view.
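For reference, the statement identified in the result reads as follows in full.
SELECT name, value
FROM v$parameter
WHERE name LIKE 'plsql%';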
Next you want to store the current CPU time, in hundredths of a second, in the t_end
variable.
You have already written part of the code.
CREATE OR REPLACE PROCEDURE test_simple_integer IS
  sim_counter SIMPLE_INTEGER := 0;
  t_start     SIMPLE_INTEGER := 0;
  t_end       SIMPLE_INTEGER := 0;
  t_max       SIMPLE_INTEGER := 10000000;
BEGIN
  t_start := DBMS_UTILITY.GET_CPU_TIME();
  WHILE sim_counter < t_max LOOP
    sim_counter := sim_counter + 1;
  END LOOP;
  <required code>
  DBMS_OUTPUT.PUT_LINE((t_end - t_start) ||
    ' centiseconds with Simple counter');
END test_simple_integer;
/
Step 2 of 3
What line of code should be used instead of the line <required code> inside
the executable section to return the required results?
Options:
1.
t_end = DBMS_UTILITY.GET_CPU_TIME();
2.
t_end := DBMS_UTILITY.GET_CPU_TIME();
3.
t_end := DBMS_UTILITY_GET_CPU_TIME();
4.
t_end := GET_CPU_TIME();
Result
To store the current CPU time, in hundredths of a second, in the t_end variable, you
should replace the line <required code> inside the executable section with the
t_end := DBMS_UTILITY.GET_CPU_TIME(); code segment.
Option 1 is incorrect. To assign the value returned by
DBMS_UTILITY.GET_CPU_TIME() to the t_end variable, the assignment
operator ":=" is required.
Option 2 is correct. The DBMS_UTILITY package contains a number of utility-related
subprograms. The GET_CPU_TIME function returns the current CPU time in hundredths
of a second.
Option 3 is incorrect. This line of code would result in an error as the package is
named DBMS_UTILITY and the function is named GET_CPU_TIME. The package
name and function name must be separated by a period.
Option 4 is incorrect. To call the GET_CPU_TIME function, you must include the
name of the package to which the function belongs. In this case, the package is
called DBMS_UTILITY.
Step 3 of 3
What statement alters only a session to use native compilation?
Options:
1.
2.
3.
4.
Result
The statement ALTER SESSION SET PLSQL_CODE_TYPE = 'NATIVE'; alters
only a session to use native compilation.
Option 1 is correct. This statement will alter a session so that native compilation is
used. With PL/SQL native compilation, PL/SQL statements in a PL/SQL program
unit are compiled into native code and stored in the SYSTEM tablespace.
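As a sketch of the approach this result describes, you could switch the session to native
compilation, recompile the procedure created earlier, and then confirm the setting in the
USER_PLSQL_OBJECT_SETTINGS dictionary view.
ALTER SESSION SET PLSQL_CODE_TYPE = 'NATIVE';
ALTER PROCEDURE test_simple_integer COMPILE;
-- confirm how the unit is now compiled
SELECT name, plsql_code_type
FROM user_plsql_object_settings
WHERE name = 'TEST_SIMPLE_INTEGER';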
Step 1 of 3
What should replace <required code> to achieve the desired results?
Options:
1.
/*+ result_cache */
2.
3.
/*+ result_cache_on */
4.
Result
To enable SQL result caching in a query that retrieves data from the
INVENTORIES and PRODUCT_INFORMATION tables, you should replace the line
<required code> inside the executable section with the /*+ result_cache
*/ code segment.
Option 1 is correct. You use the new optimizer hint /*+ result_cache */ to
turn on SQL result caching in a query.
Option 2 is incorrect. The correct optimizer hint to enable result caching is /*+
result_cache */ and not /*+ result cache */.
Option 3 is incorrect. To enable result caching, you use the optimizer hint /*+
result_cache */. Result_cache_on is not a valid optimizer hint.
Option 4 is incorrect. To enable result caching, you use the optimizer hint /*+
result_cache */, and not /*+ result cache on */.
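A sketch of the kind of query this step describes, assuming the OE sample schema column
names; the /*+ result_cache */ hint asks the server to cache the result set so that
repeated executions can be answered from the result cache.
SELECT /*+ result_cache */
       p.product_name, SUM(i.quantity_on_hand) AS total_on_hand
FROM inventories i JOIN product_information p
     ON p.product_id = i.product_id
GROUP BY p.product_name;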
You notice that the GET_WAREHOUSE_NAMES function is called frequently and the content
of the data returned does not frequently change. You decide that this code would benefit
from enabling PL/SQL result caching.
You have already written part of the code.
CREATE OR REPLACE TYPE list_typ IS TABLE OF VARCHAR2(35);
/
CREATE OR REPLACE FUNCTION get_warehouse_names
RETURN list_typ
<required code>
IS
v_count BINARY_INTEGER;
v_wh_names list_typ;
BEGIN
SELECT count(*)
INTO v_count
FROM warehouses;
FOR i in 1..v_count LOOP
SELECT warehouse_name
INTO v_wh_names(i)
FROM warehouses;
END LOOP;
RETURN v_wh_names;
END get_warehouse_names;
/
Step 2 of 3
What statement should replace the line <required code> to turn on PL/SQL
result caching?
Options:
1.
RESULT_CACHE RELIES_ON
2.
3.
4.
Result
To turn on PL/SQL result caching in a query that retrieves data from the
GET_WAREHOUSE_NAMES function, you should replace the line <required
code> inside the executable section with the RESULT_CACHE RELIES_ON
(warehouses) code segment.
Option 1 is incorrect. When including the RELIES ON clause of RESULT_CACHE,
you need to specify the table or view on which the results of the function depend.
Option 2 is incorrect. To enable PL/SQL result caching for a function, you include
the RESULT_CACHE option in the function definition. The underscore between
RESULT and CACHE is required.
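A minimal, self-contained sketch of a result-cached function that depends on the
WAREHOUSES table; the function name is illustrative, and the RELIES_ON clause names
the table whose changes invalidate cached results.
CREATE OR REPLACE FUNCTION count_warehouses
  RETURN NUMBER
  RESULT_CACHE RELIES_ON (warehouses)
IS
  v_count NUMBER;
BEGIN
  SELECT COUNT(*) INTO v_count FROM warehouses;
  RETURN v_count;
END count_warehouses;
/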
Step 3 of 3
Which statements accurately describe inlining?
Options:
1.
2.
3.
4.
Result
You can influence, but not fully control, inlining by setting PLSQL_OPTIMIZE_LEVEL or
by using PRAGMA INLINE.
Option 1 is incorrect. You can only influence inlining by using the
PLSQL_OPTIMIZE_LEVEL parameter or PRAGMA INLINE. The compiler makes
the final decisions on inlining.
Option 2 is correct. You can influence inlining by setting
PLSQL_OPTIMIZE_LEVEL or by using PRAGMA INLINE. However, this is not a
guarantee that inlining will be used. The compiler makes its decision based on the
algorithms applied to the code.
Option 3 is correct. You can request that inlining should be used with the
PLSQL_OPTIMIZE_LEVEL parameter or PRAGMA INLINE. But, it is up to the
compiler to analyze the code and determine whether inlining is appropriate.
Option 4 is incorrect. By setting the PLSQL_OPTIMIZE_LEVEL parameter to 3 or
by using PRAGMA INLINE in your PL/SQL code, you can make use of inlining.
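A sketch showing both mechanisms mentioned here: raising the session's
PLSQL_OPTIMIZE_LEVEL and requesting inlining of a specific call with PRAGMA INLINE.
The procedure and nested function names are illustrative.
ALTER SESSION SET PLSQL_OPTIMIZE_LEVEL = 3;
CREATE OR REPLACE PROCEDURE calc_totals IS
  v_total NUMBER := 0;
  FUNCTION add_one (p_n NUMBER) RETURN NUMBER IS
  BEGIN
    RETURN p_n + 1;
  END add_one;
BEGIN
  PRAGMA INLINE (add_one, 'YES');  -- request inlining of the next call
  v_total := add_one(v_total);
  DBMS_OUTPUT.PUT_LINE(v_total);
END calc_totals;
/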
You have successfully examined performance of a data type in Oracle Database 11g.
You've improved performance by enabling SQL result caching in a query and by modifying
an existing function to enable PL/SQL result caching. You've also explored inlining.
After completing this topic, you should be able to identify the steps for using the PL/SQL
Debugger tool in SQL Developer.
Run
Debug
Compile
Resume
Step Over
Step Into
Step Out
Pause
Terminate
Find Execution Point
The Find Execution Point command navigates to the next execution point.
Resume
The Resume command continues execution.
Step Over
The Step Over command bypasses the next subprogram and goes to the next statement
after the subprogram, provided that the subprogram does not have any breakpoint
elsewhere.
Step Into
The Step Into command executes a single program statement at a time. If the execution
point is located on a call to a subprogram, the Step Into command steps into that
subprogram and places the execution point on the subprogram's first statement.
Step Out
The Step Out command leaves the current subprogram and goes to the next statement
with a breakpoint.
Step to End of Method
The Step to End of Method command goes to the last statement in the current
subprogram, or to the next breakpoint if there are any in the current procedure.
Pause
The Pause command stops execution but does not exit.
Terminate
The Terminate command stops and exits the execution.
The debugging tabbed pages are
Data
Watches
Data
The Data tabbed page is located under the code text area and displays information about
all variables.
Watches
The Watches tabbed page is located under the code text area and displays information
about watchpoints you have entered.
If you cannot see some of the debugging tabs, you can redisplay such tabs using the
View - Debugger menu option.
Question
What PL/SQL Debugger command halts program execution but does not exit the
debugger?
Options:
1.
Pause
2.
Resume
3.
4.
Terminate
Answer
The Pause PL/SQL Debugger command halts program execution without exiting
the debugger.
Option 1 is correct. The Pause command of the PL/SQL Debugger is used to halt
the execution of the debugger without exiting the debugger.
Option 2 is incorrect. The Resume command of the PL/SQL Debugger is used to
resume execution of the debugger once it has been paused or stopped.
Option 3 is incorrect. The Step to End of Method command of the PL/SQL
Debugger goes to the last statement in the current subprogram or to the next
breakpoint if there are any in the current subprogram.
Option 4 is incorrect. The Terminate command of the PL/SQL Debugger is used to
halt execution and exit the debugger.
Question
What privileges must an application developer have to be able to debug a PL/SQL
subprogram?
Options:
1.
2.
3.
4.
Answer
An application developer must have DEBUG ANY PROCEDURE and DEBUG
CONNECT SESSION privileges to be able to debug a PL/SQL subprogram.
Option 1 is incorrect. The correct privilege required to debug a PL/SQL
subprogram is DEBUG ANY PROCEDURE and not DEBUG ALL PROCEDURES.
The DEBUG CONNECT SESSION privilege is also required.
Option 2 is correct. The DEBUG ANY PROCEDURE privilege is required to debug
a PL/SQL subprogram. This privilege is the equivalent to the DEBUG privilege
granted on all objects in the database.
Option 3 is correct. The DEBUG CONNECT SESSION privilege is required to
debug a PL/SQL subprogram. When the debugger becomes connected to a
session, the session login user and the currently enabled session-level roles are
fixed as the privilege environment for that debugging connection.
Option 4 is incorrect. The correct privilege required to debug a PL/SQL
subprogram is DEBUG CONNECT SESSION and not DEBUG SESSION
CONNECT. The DEBUG ANY PROCEDURE privilege is also required.
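These privileges are granted with statements like the following; the dev_user account
name is illustrative.
GRANT DEBUG CONNECT SESSION TO dev_user;
GRANT DEBUG ANY PROCEDURE TO dev_user;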
Supplement
View the code used to create a new emp_list2 procedure.
The emp_list2 procedure calls the get_location function, which returns the name of
the city in which an employee works.
For debugging purposes, you can set breakpoints in the procedure emp_list2. Here
emp_list2 is displayed in edit mode, and three breakpoints have been added.
To compile the emp_list2 procedure for debugging, you right-click the code, and then
select Compile for Debug from the shortcut menu.
Once completed, the Messages tabbed page displays the message that the procedure
was compiled.
You now compile the get_location function for debug mode. You do this by displaying
the function in edit mode.
To compile the function for debugging, you right-click the code, and then select Compile
for Debug from the shortcut menu.
The Messages tabbed page displays the message that the function was compiled.
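Compiling for debug from the shortcut menu corresponds to recompiling the units with
debug information; a sketch of the equivalent SQL for the two program units used here.
ALTER PROCEDURE emp_list2 COMPILE DEBUG;
ALTER FUNCTION get_location COMPILE DEBUG;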
Next you want to debug the emp_list2 procedure. You click the Debug icon on the
procedure's toolbar.
An anonymous PL/SQL block displays in the Debug PL/SQL dialog box, and you are
prompted to enter the parameters for the procedure. The procedure emp_list2 has one
parameter pMaxRows which specifies the number of records to return. You replace
the second pMaxRows with a number such as 2, and then you click OK.
DECLARE
PMAXROWS NUMBER;
BEGIN
PMAXROWS := NULL;
EMP_LIST2(
PMAXROWS => 2
);
END;
The debugging program stops at the first breakpoint.
i NUMBER := 1;
v_city VARCHAR2(30);
BEGIN
  OPEN emp_cursor;
  FETCH emp_cursor INTO emp_record;
  emp_tab(i) := emp_record;
  WHILE (emp_cursor%FOUND) AND (i <= pMaxRows) LOOP
    i := i + 1;
    FETCH emp_cursor INTO emp_record;
    emp_tab(i) := emp_record;
    v_city := get_location(emp_record.department_name);
    dbms_output.put_line('Employee ' ||
      emp_record.last_name || ' works in ' || v_city );
  END LOOP;
  CLOSE emp_cursor;
  FOR j IN REVERSE 1..i LOOP
The Debugging - Log tabbed page opens.
Finished processing prepared classes.
Source breakpoint occurred at line 16 of EMP_LIST2.pls.
The Step Into command executes a single program statement at a time. If the execution
point is located on a call to a subprogram, the Step Into command steps into that
subprogram and places the execution point on the subprogram's first statement. If the
execution point is located on the last statement of a subprogram, choosing Step Into
causes the debugger to return from the subprogram, placing the execution point on the
line of code that follows the call to the subprogram you are returning from.
The term single stepping refers to using Step Into to run successively through the
statements in your program code.
There is more than one way to step into a subprogram. You can
click the Step Into icon on the toolbar of the Debugging - Log tabbed page
When you press F7 again, program control moves to the first breakpoint in the code. The
arrow next to the breakpoint indicates that this is the line of code that will be executed
next.
Various tabs display below the code window.
Note
The Step Into and Step Over commands offer the simplest way of moving through
your program code. Although the two commands are very similar, they each offer
a different way to control code execution.
Selecting the Step Into command executes a single program statement at a time. If the
execution point is located on a call to a subprogram, the Step Into command steps into
that subprogram and places the execution at the first statement in the subprogram.
For example, pressing F7 executes the line of code at the first breakpoint. In this case,
program control is transferred to the section where the cursor is defined.
You can view your data while you are debugging your code. You can use the Data tabbed
page to display and modify the variables. You can also set watches to monitor a subset of
the variables displayed in the Data tabbed page.
To display or hide the Data, Smart Data, and Watches tabbed pages, you select View -
Debugger, and then you select the tabs you want to display or hide.
For example, the Data tabbed page displays the values of the variables when i = 1.
When the value of i changes to 2, the values of the variables change accordingly.
You can modify the variables while debugging the code. To modify the value of a variable
in the Data tabbed page, you right-click the variable name, and then select Modify Value
from the shortcut menu.
The Modify Value dialog box displays the current value of the variable. You can enter a
new value in the second text box, and then click OK.
The Step Over debugging command, like Step Into, enables you to execute program
statements one at a time. However, if you issue the Step Over command when the
execution point is located on a subprogram call, the debugger runs that subprogram
without stopping instead of stepping into it and then positions the execution point on
the statement that follows the subprogram call.
If the execution point is located on the last statement of a subprogram, choosing Step
Over causes the debugger to return from the subprogram, placing the execution point on
the line of code that follows the call to the subprogram you are returning from.
click the Step Over icon on the toolbar of the Debugging - Log tabbed page
In the emp_list2 procedure, stepping over will execute the open cursor line without
transferring program control to the cursor definition, as was the case with the Step Into
option example.
The Step Out option leaves the current subprogram and goes to the next statement after
the subprogram call.
With program control at the first break point, you click the Debug icon to display the
Debug PL/SQL dialog box.
You change the pMaxRows parameter to 4, for example.
If you now press Shift+F7, the program control leaves the emp_list2 procedure.
Control now goes to the next statement in the anonymous block.
You continue to press Shift+F7, which takes you to the next anonymous block that prints
the contents of the SQL buffer.
The SQL buffer contains the locations of the first four employees.
When stepping through your application code in the debugger, you may want to run to a
particular location without having to single step or set a breakpoint. To run to a specific
program location, in a subprogram editor, position your text cursor on the line of code
where you want the debugger to stop.
To run to the cursor location you can
right-click the line of code and choose Run to Cursor in the procedure editor
choose the Debug - Run to Cursor option from the main menu
your program executes without stopping, until the execution reaches the location marked by the
text cursor in the source editor
the Run to Cursor command causes your program to run until it encounters a breakpoint, or
until your program finishes if your program never actually executes the line of code where
the text cursor is
The Step to End of Method debugging command moves control to the last statement in
the current subprogram, or to the next breakpoint if there are any in the current
subprogram.
You display the Debugging window again and click the Step to End of Method icon on
the debugger toolbar.
If there is a second breakpoint, the Step to End of Method debugging tool will transfer
control to that breakpoint.
Selecting Step to End of Method again will go through the iterations of the WHILE loop
first, and then it will transfer the program control to the next executable statement in the
anonymous block.
Question
When using the PL/SQL Debugger Step Over command, what occurs when the
execution point is located on a subprogram call?
Options:
1.
The debugger runs that subprogram without stopping and then positions the
execution point on the statement that follows the subprogram call
2.
The debugger runs until it encounters a breakpoint or until the program finishes
3.
The debugger steps into the subprogram and places the execution point at the first
statement in the subprogram
4.
The debugger will transfer the program control to the next executable statement in a
block
Answer
When using the PL/SQL Debugger Step Over command when the execution point
is located on a subprogram call, the debugger runs that subprogram without
stopping and then positions the execution point on the statement that follows the
subprogram call.
Option 1 is correct. When using the Step Over command, if the execution point is
located on a subprogram call, the debugger will run that subprogram without
stopping instead of stepping into it and will then position the execution point on
the statement that follows the subprogram call.
Option 2 is incorrect. This occurs when using the Run to Cursor command, not the
Step Over command. When using the Run to Cursor command, if a program
never executes the line of code where the text cursor is, the debugger will run the
program until it encounters a breakpoint or until the program finishes.
Option 3 is incorrect. This does not occur when you use the Step Over command,
but the Step Into command, which executes a single program statement at a time.
When the execution point is located on a call to a subprogram, this command
steps into that subprogram and places the execution point on the subprogram's
first statement.
Option 4 is incorrect. When the execution point is located on the last statement of
a subprogram, the Step Over command causes the debugger to return from the
subprogram and place the execution point on the line of code that follows the call
to the subprogram you are returning from.
Summary
The PL/SQL Debugger is a powerful debugging tool that enables you to step through your
code line by line and analyze the contents of variables, arguments, loops, and branch
statements. By setting breakpoints, you can manually control when the program should
run and when it should pause. This allows you to quickly move over the sections that you
know work correctly and concentrate on the sections that are causing problems. The
debugging tools at your disposal are: Find Execution Point, Resume, Step Over, Step
Into, Step Out, Step to End of Method, Pause, and Terminate.
You create the emp_list2 procedure to gather employee information, which it stores in a
record. The emp_list2 procedure calls the function get_location, which
returns the name of the city in which an employee works. You use the debugger tools to
step through the code, modify parameters, and view program output.
14    v_city VARCHAR2(30);
15  BEGIN
16    OPEN emp_cursor;
17    FETCH emp_cursor INTO emp_record;
18    emp_tab(i) := emp_record;
19    WHILE (emp_cursor%FOUND) AND (i <= pMaxRows) LOOP
20      i := i + 1;
21      FETCH emp_cursor INTO emp_record;
22      emp_tab(i) := emp_record;
23      v_city := get_location(emp_record.department_name);
24      dbms_output.put_line('Employee ' ||
25        emp_record.last_name || ' works in ' || v_city );
26    END LOOP;
27    CLOSE emp_cursor;
28    FOR j IN REVERSE 1..i LOOP
29      DBMS_OUTPUT.PUT_LINE (emp_tab(j).last_name);
30    END LOOP;
31  END emp_list2;
Debugging a Procedure
Learning objective
After completing this topic, you should be able to debug a procedure using the PL/SQL
Debugger.
Exercise overview
In this exercise, you're required to correctly identify the results of using various debugging
tools.
This involves the following tasks:
Supplement
View the code for creating a new emp_list2 procedure.
Step 1 of 2
You now want to debug the procedure by stepping into the code.
When stepping into code, what does the debugger do if the execution point is
located on the last statement of a subprogram?
Options:
1.
It bypasses the next subprogram and goes to the next statement after that
subprogram
2.
It returns from the subprogram and places the execution point on the line of code
that follows the original call
3.
It steps into the subprogram and places the execution point on the first statement
Result
When stepping into code, the debugger returns from the subprogram and places
the execution point on the line of code that follows the original call if the execution
point is located on the last statement of a subprogram.
Option 1 is incorrect. The Step Over command, not the Step Into command, bypasses the
next subprogram and goes to the next statement after the subprogram, provided
that the subprogram does not have a breakpoint elsewhere.
Option 2 is correct. When using the Step Into command, if the execution point is
located on the last statement of a subprogram, the debugger returns from the
subprogram and places the execution point on the line of code that follows the call
to the subprogram you are returning from.
Option 3 is incorrect. The Step Into command executes one program statement at
a time. If the execution point is located on a call to a subprogram, the Step Into
command steps into that subprogram and places the execution point on the first
statement.
Step 2 of 2
You now want to use the Step Over command to analyze the code.
What is the result of the debugger stepping over the provided code in line 16?
Options:
1.
2.
It executes the open cursor line and transfers program control to the cursor definition
3.
It executes the open cursor line without transferring program control to the cursor
definition
4.
It goes to the last statement in the current subprogram or to the next breakpoint
Result
The result of stepping over the provided code in the debugger at line 16 is that the
debugger will execute the open cursor line without transferring program control to
the cursor definition.
Option 1 is incorrect. Using the Step Out command would cause the debugger to
leave the current subprogram and go to the next statement after the subprogram call.
Option 2 is incorrect. Stepping into the code would cause the debugger to execute
the open cursor line and would transfer program control to the cursor definition.
Option 3 is correct. Stepping over will execute the open cursor line without
transferring program control to the cursor definition. This differs from stepping into
the code because program control is not transferred to the cursor definition.
Option 4 is incorrect. Using Step to End of Method would cause the debugger to
go to the last statement in the current subprogram or to the next breakpoint.
You continue debugging the emp_list2 procedure using some of the other debugging
methods at your disposal.
Step 1 of 2
You've positioned the cursor on the line of code where you want the debugger to
stop. What must you do next to run to a particular location without having to single
step or set a breakpoint?
Options:
1.
2.
3.
4.
Result
When debugging, to run to a particular location without having to single step or set
a breakpoint you must position the cursor on the line of code where you want the
debugger to stop and then run to the cursor.
Option 1 is correct. To run to a specific program location, in a subprogram editor,
you position your text cursor on the line of code where you want the debugger to
stop. You can then run to the cursor location by choosing Run To Cursor from the
shortcut menu or Debug menu, or by pressing F4.
Option 2 is incorrect. To run to a particular location without having to single step or
set a breakpoint, you use the Run to Cursor method. Stepping into code executes
a single program statement at a time.
Option 3 is incorrect. Using Run to Cursor, not Step Out, enables you to run to a
particular location without having to single step or set a breakpoint. Stepping out
leaves the current subprogram and goes to the next statement after the subprogram call.
Option 4 is incorrect. Stepping to the end of the method does not allow you to run
to a particular location without having to single step or set a breakpoint. Instead,
the Step to End of Method command goes to the last statement in the current
subprogram or to the next breakpoint if there are any in the current subprogram.
Step 2 of 2
What command causes the debugger to go to the last statement in the current
subprogram or to the next breakpoint if there are any in the current subprogram?
Options:
1.
Run to Cursor
2.
Step Into
3.
Step Over
4.
Result
The Step to End of Method command causes the debugger to go to the last
statement in the current subprogram or to the next breakpoint in the current
subprogram.
Option 1 is incorrect. Run to Cursor allows you to run to a particular location in
your code without having to single step or set a breakpoint.
Option 2 is incorrect. Step Into executes a single program statement at a time.
Option 3 is incorrect. Step Over bypasses the next subprogram call and goes to the
next statement after that call.
Option 4 is correct. Step to End of Method goes to the last statement in the current
subprogram. However, if any breakpoints exist before the end of the subprogram,
the debugger will stop at the next one.
Summary
A procedure has been debugged using the Step Into, Step Over, Run to Cursor, and Step
to End of Method commands.
After completing this topic, you should be able to recognize the steps for making effective
use of collections in PL/SQL and deciding which is the best collection to use in a given
scenario.
1. Understanding collections
A collection is a group of elements, all of the same type. Each element has a unique
subscript that determines its position in the collection.
Collections work like the arrays found in most third-generation programming languages.
They can store instances of an object type and, conversely, can be attributes of an object
type.
Collections can also be passed as parameters. You can use them to move columns of
data into and out of database tables or between client-side applications and stored
subprograms. Object types are used not only to create object relational tables, but also to
define collections.
You can use any of the three categories of collections:
nested tables
varrays
associative arrays
Nested tables can have any number of elements. Varrays are an ordered collection of
elements. And associative arrays, known as "index-by tables" in earlier Oracle releases,
are sets of key-value pairs, where each key is unique and is used to locate a
corresponding value in the array. The key can be an integer or a string.
PL/SQL offers two collection types:
nested tables
varrays
nested tables
A nested table holds a set of values. That is, it is a table within a table. Nested tables are
unbounded, meaning the size of the table can increase dynamically.
Nested tables are available in both PL/SQL and the database. Within PL/SQL, nested
tables are like one-dimensional arrays whose size can increase dynamically. Within the
database, nested tables are column types that hold sets of values.
varrays
Variable-size arrays, or varrays, are also collections of homogeneous elements that hold a
fixed number of elements, although you can change the number of elements at run time.
They use sequential numbers as subscripts.
You can define equivalent SQL types, allowing varrays to be stored in database tables.
They can be stored and retrieved through SQL, but with less flexibility than nested tables.
The Oracle database stores the rows of a nested table in no particular order. When you
retrieve a nested table from the database into a PL/SQL variable, the rows are given
consecutive subscripts starting at 1. This gives you array-like access to individual rows.
Nested tables are initially dense but they can become sparse through deletions. And,
therefore, they have nonconsecutive subscripts.
You can use varrays to reference the individual elements for array operations, or
manipulate the collection as a whole.
Varrays are always bounded and never sparse. You can specify the maximum size of the
varray in its type definition. Its index has a fixed lower bound of 1 and an extensible upper
bound.
A varray can contain a varying number of elements from zero (when empty) to the
maximum specified in its type definition. To reference an element, you can use the
standard subscripting syntax.
If you already have code or business logic that uses some other language, you can
usually translate that language's array and set types directly to PL/SQL collection types:
hash tables and other kinds of unordered lookup tables in other languages become associative
arrays in PL/SQL
If you are writing original code or designing business logic from the start, you should
consider the strengths of each collection type and decide which is appropriate.
You use nested tables when
you need to delete or update some elements, but not all the elements at once
you would usually create a separate lookup table, with multiple entries for each row of the main
table, and access it through join queries
The table compares the listing characteristics of PL/SQL collection types with those of DB
collection types.
Supplement
Note
Collections can be nested.
This is the syntax for defining a varray type in PL/SQL.
TYPE type_name IS VARRAY (max_elements) OF
element_datatype [NOT NULL];
To create a table based on a nested table, you first define an object type.
You create the typ_item type, which holds the information for a single line item.
CREATE TYPE typ_item AS OBJECT     -- create object
  (prodid NUMBER(5),
   price  NUMBER(7,2) )
/
CREATE TYPE typ_item_nst           -- define nested table type
  AS TABLE OF typ_item
/
Note
You must create the typ_item_nst nested table type based on the previously
declared type because it is illegal to declare multiple data types in this nested
table declaration.
Next you declare a column of that collection type. You create the typ_item_nst type,
which is created as a table of the typ_item type.
CREATE TABLE pOrder (              -- create database table
  ordid     NUMBER(5),
  supplier  NUMBER(5),
  requester NUMBER(4),
  ordered   DATE,
  items     typ_item_nst)
  NESTED TABLE items STORE AS item_stor_tab;
Finally you create the pOrder table. And you use the nested table type in a column
declaration, which will include an arbitrary number of items based on the typ_item_nst
type. Thus, each row of pOrder may contain a table of items.
The NESTED TABLE STORE AS clause is required to indicate the name of the storage
table in which the rows of all the values of the nested table reside. The storage table is
created in the same schema and the same tablespace as the parent table.
Note
The USER_COLL_TYPES dictionary view holds information about collections.
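For example, a quick query against that view confirms the collection types created above;
the column list shown here is a subset of the view's columns.
SELECT type_name, coll_type, elem_type_name, upper_bound
FROM user_coll_types;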
The rows for all nested tables of a particular column are stored within the same segment.
This segment is called the storage table.
A storage table is a system-generated segment in the database that holds instances of
nested tables within a column. You specify the name for the storage table by using the
NESTED TABLE STORE AS clause in the CREATE TABLE statement. The storage table
inherits storage options from the outermost table.
To distinguish between nested table rows belonging to different parent table rows, a
system-generated nested table identifier, which is unique for each outer row that encloses
a nested table, is created.
Operations on storage tables are performed implicitly by the system. You should not
access or manipulate the storage table, except implicitly through its containing objects.
Privileges of the column of the parent table are transferred to the nested table.
To create a table based on a varray, you first create the typ_project type, which holds
the information for a project.
Then you create the typ_ProjectList type, which is created as a varray of the
typ_Project type. The varray contains a maximum of 50 elements.
CREATE TYPE typ_Project AS OBJECT(  -- create object
  project_no NUMBER(2),
  title      VARCHAR2(35),
  cost       NUMBER(7,2))
/
CREATE TYPE typ_ProjectList AS VARRAY (50) OF typ_Project
                                    -- define VARRAY type
/
Next you create the DEPARTMENT table and use the varray type in a column declaration.
Each element of the varray stores a project object.
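The DEPARTMENT table itself is not shown in this excerpt; a minimal sketch consistent with
the later INSERT and SELECT examples, with assumed names for the non-collection columns,
might look like this.
CREATE TABLE department (
  dept_id  NUMBER(2),
  name     VARCHAR2(15),
  mgr_id   NUMBER(4),
  projects typ_ProjectList);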
Question
Which statements accurately describe either varrays or nested tables?
Options:
1.
2.
Nested tables are best used when a data set is not very large and it's important to
preserve the order of elements in the collection column
3.
Using nested tables to store large amounts of persistent data allows the Oracle
server to use a separate table to hold collection data that can grow over time
4.
Varray data is stored inline, so retrieving and storing varrays involves fewer disk
accesses
Answer
Using nested tables to store large amounts of persistent data allows the Oracle
server to use a separate table to hold collection data that can grow over time.
Varray data is stored inline, so retrieving and storing varrays involves fewer disk
accesses.
Option 1 is incorrect. Although varrays do not allow piecewise updates, nested
tables do.
Option 2 is incorrect. If your data set is not very large and it is important to
preserve the order of elements in a collection column, you should use varrays and
not nested tables.
Option 3 is correct. To store large amounts of persistent data in a column
collection, you should use nested tables. This way, the Oracle server can use a
separate table to hold the collection data, which can grow over time.
Option 4 is correct. Because varray data is stored inline, in the same tablespace as the
table, retrieving and storing varrays involves fewer disk accesses. Varrays are
therefore more efficient than nested tables.
Question
Which statements accurately describe storage tables?
Options:
1.
2.
They are named using the STORAGE TABLE AS clause of the CREATE TABLE
statement
3.
They are system-generated segments in the database that hold instances of nested
tables within a column
4.
Answer
Storage tables are system-generated segments in the database that hold
instances of nested tables within a column. And they inherit the storage options
from the outermost table.
Option 1 is incorrect. Operations on storage tables are performed implicitly by the
system. You should not access or manipulate the storage table, except implicitly
through its containing objects.
Option 2 is incorrect. You specify the name for the storage table by using the
NESTED TABLE STORE AS clause in the CREATE TABLE statement.
Option 3 is correct. The rows for all nested tables of a particular column are stored
within the same segment, which is called the storage table.
Option 4 is correct. A storage table inherits storage options from the outermost
table. Privileges of the column of the parent table are transferred to the nested
table.
There are several points about collections that you must know when working with them.
You can declare collections as the formal parameters of functions and procedures so that
you can pass collections to stored subprograms and from one subprogram to another.
A function's RETURN clause can be a collection type.
Collections follow the usual scoping and instantiation rules.
In a block or subprogram, collections are instantiated when you enter the block or
subprogram and cease to exist when you exit. In a package, collections are instantiated
when you first reference the package and cease to exist when you end the database
session.
In this example, a nested table type is used as the formal parameter of a packaged
procedure: it is the data type of an IN parameter for the ALLOCATE_PROJ procedure. It is
also used as the return data type of the TOP_PROJECT function.
CREATE OR REPLACE PACKAGE manage_dept_proj AS
TYPE typ_proj_details IS TABLE OF typ_Project;
...
PROCEDURE allocate_proj
(propose_proj IN typ_proj_details);
FUNCTION top_project (n NUMBER)
RETURN typ_proj_details;
...
Until you initialize it, a collection is automatically null: the collection itself is null, not
its elements. To initialize a collection, you can
use a constructor
use a fetch
assign another collection variable directly
You can copy the entire contents of one collection to another as long as both are built
from the same data type.
In this example, you pass three elements to the typ_ProjectList() constructor,
which returns a varray containing those elements.
DECLARE
-- this example uses a constructor
v_accounting_project typ_ProjectList;
BEGIN
v_accounting_project :=
typ_ProjectList
(typ_Project (1, 'Dsgn New Expense Rpt', 3250),
typ_Project (2, 'Outsource Payroll', 12350),
typ_Project (3, 'Audit Accounts Payable',1425));
INSERT INTO department
VALUES(10, 'Accounting', 123, v_accounting_project);
...
END;
/
In this example of the initialization of a collection, an entire collection from the database is
fetched into the local PL/SQL collection variable.
DECLARE
-- this example uses a fetch from the database
v_accounting_project typ_ProjectList;
BEGIN
SELECT projects
INTO v_accounting_project
FROM department
WHERE dept_id = 10;
...
END;
/
In this example, the entire contents of one collection variable are assigned to another
collection variable.
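The body of that example is not reproduced here; this is a minimal sketch, with illustrative
variable names.
DECLARE
  -- both variables are built from the same collection type
  v_accounting_project typ_ProjectList;
  v_backup_project     typ_ProjectList;
BEGIN
  SELECT projects
  INTO v_accounting_project
  FROM department
  WHERE dept_id = 10;
  -- assigning one collection variable to another copies all of its elements
  v_backup_project := v_accounting_project;
END;
/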
Every element reference includes a collection name and a subscript enclosed in
parentheses. The subscript determines which element is processed. To reference an
element, you can specify its subscript by using this syntax.
collection_name(subscript)
In the syntax, subscript is an expression that yields a positive integer. For nested tables,
the integer must lie in the range 1 to 2147483647. For varrays, the integer must lie in the
range from 1 to the maximum_size that you provide.
The first example shows you how to reference a specific collection element. The second
example shows you how to reference a field in a collection.
v_accounting_project(1)
v_accounting_project(1).cost
You can use collection methods from procedural statements, but not from SQL
statements. The methods are
EXISTS
COUNT
LIMIT
FIRST and LAST
PRIOR and NEXT
EXTEND
TRIM
DELETE
EXISTS
The EXISTS(n) method returns TRUE if the nth element in a collection exists. Otherwise,
EXISTS(n) returns FALSE.
COUNT
The COUNT method returns the number of elements that a collection contains.
LIMIT
For nested tables, which have no maximum size, the LIMIT method returns NULL. And for
varrays, LIMIT returns the maximum number of elements that a varray can contain.
FIRST and LAST
The FIRST and LAST methods return the first and last (smallest and largest) index
numbers in a collection, respectively.
PRIOR and NEXT
The PRIOR(n) method returns the index number that precedes index n in a collection.
And the NEXT(n) method returns the index number that follows index n.
EXTEND
The EXTEND method appends one null element, EXTEND(n) appends n elements, and
EXTEND(n, i) appends n copies of the ith element.
TRIM
The TRIM method removes one element from the end of a collection, while TRIM(n)
removes n elements from the end.
DELETE
The DELETE method removes all elements from a nested table or associative array.
DELETE(n) removes the nth element and DELETE(m, n) removes a range. The DELETE
method does not work on varrays.
In this example, the FIRST method finds the smallest index number, and the NEXT
method traverses the collection starting at the first index. The output from this block of
code is Project too expensive: Outsource Payroll.
DECLARE
i INTEGER;
v_accounting_project typ_ProjectList;
BEGIN
v_accounting_project := typ_ProjectList(
typ_Project (1,'Dsgn New Expense Rpt', 3250),
typ_Project (2, 'Outsource Payroll', 12350),
typ_Project (3, 'Audit Accounts Payable',1425));
i := v_accounting_project.FIRST ;
WHILE i IS NOT NULL LOOP
IF v_accounting_project(i).cost > 10000 then
DBMS_OUTPUT.PUT_LINE('Project too expensive: '
|| v_accounting_project(i).title);
END IF;
i := v_accounting_project.NEXT (i);
END LOOP;
END;
/
You can use the PRIOR and NEXT methods to traverse collections indexed by any series
of subscripts. In the example, the NEXT method is used to traverse a varray.
PRIOR(n) returns the index number that precedes index n in a collection. NEXT(n)
returns the index number that succeeds index n. If n has no predecessor, PRIOR(n)
returns NULL. Likewise, if n has no successor, NEXT(n) returns NULL. PRIOR is the
inverse of NEXT.
PRIOR and NEXT do not wrap from one end of a collection to the other. When traversing
elements, PRIOR and NEXT ignore deleted elements.
This code uses the COUNT, EXTEND, LAST, and EXISTS methods on the my_projects
varray.
The COUNT method reports that the projects collection holds three projects for department
10. The EXTEND method creates a fourth empty project. The LAST method reports that
four projects exist. When testing for the existence of a fifth project, the program reports
that it does not exist.
DECLARE
  v_my_projects  typ_ProjectList;
  v_array_count  INTEGER;
  v_last_element INTEGER;
BEGIN
  SELECT projects INTO v_my_projects FROM department
  WHERE dept_id = 10;
  v_array_count := v_my_projects.COUNT;
  dbms_output.put_line('The # of elements is: ' || v_array_count);
  v_my_projects.EXTEND;                   -- make room for new project
  v_last_element := v_my_projects.LAST;
  -- the remaining statements are a sketch reconstructed from the description above
  v_my_projects(v_last_element) :=
    typ_Project(4, 'Information Technology', 789);
  IF v_my_projects.EXISTS(5) THEN
    dbms_output.put_line('Element 5 exists');
  ELSE
    dbms_output.put_line('Element 5 does not exist');
  END IF;
  UPDATE department SET projects = v_my_projects WHERE dept_id = 10;
END;
/
10  Accounting  123
PROJECTLIST(PROJECT(1, 'Dsgn New Expense Rpt', 3250),
            PROJECT(2, 'Outsource Payroll', 12350),
            PROJECT(3, 'Audit Accounts Payable', 1425),
            PROJECT(4, 'Information Technology', 789))
In most cases, if you reference a nonexistent collection element, PL/SQL raises a
predefined exception. The exceptions are
COLLECTION_IS_NULL
NO_DATA_FOUND
SUBSCRIPT_BEYOND_COUNT
SUBSCRIPT_OUTSIDE_LIMIT
VALUE_ERROR
COLLECTION_IS_NULL
The exception COLLECTION_IS_NULL is raised when you try to operate on an
automatically null collection.
NO_DATA_FOUND
The exception NO_DATA_FOUND is raised when a subscript designates an element that
was deleted.
SUBSCRIPT_BEYOND_COUNT
The exception SUBSCRIPT_BEYOND_COUNT is raised when a subscript exceeds the
number of elements in a collection.
SUBSCRIPT_OUTSIDE_LIMIT
The exception SUBSCRIPT_OUTSIDE_LIMIT is raised when a subscript is outside the
legal range.
VALUE_ERROR
The exception VALUE_ERROR is raised when a subscript is null or not convertible to an
integer.
In the first case, the nested table is automatically null.
In the second case, the subscript is null.
In the third case, the subscript is outside the legal range.
In the fourth case, the subscript exceeds the number of elements in the table.
In the fifth case, the subscript designates a deleted element.
DECLARE
TYPE NumList IS TABLE OF NUMBER;
nums NumList; -- automatically null
BEGIN
/* Assume execution continues despite the raised
exceptions.
*/
nums(1) := 1; -- raises COLLECTION_IS_NULL
nums := NumList(1,2); -- initialize table
nums(NULL) := 3; -- raises VALUE_ERROR
nums(0) := 3; -- raises SUBSCRIPT_OUTSIDE_LIMIT
nums(3) := 3; -- raises SUBSCRIPT_BEYOND_COUNT
nums.DELETE(1); -- delete element 1
IF nums(1) = 1 THEN -- raises NO_DATA_FOUND
...
Question
Which collection method removes one element from the end of a collection?
Options:
1.
COUNT
2.
EXTEND
3.
4.
TRIM
Answer
The TRIM collection method removes one element from the end of a collection.
Option 1 is incorrect. The COUNT method returns the number of elements that a
collection contains.
Option 2 is incorrect. The EXTEND method appends one null element. The syntax
EXTEND(n) is used to append n elements. And the syntax EXTEND(n, i) is
used to append n copies of the ith element.
Option 3 is incorrect. The FIRST and LAST methods return the first and last (smallest
and largest) index numbers in a collection, respectively.
Option 4 is correct. The TRIM method removes one element from the end of a
collection. The syntax TRIM(n) removes n elements from the end of a collection.
Summary
Collections are a grouping of elements, all of the same type. The types of collections are
nested tables, varrays, and associative arrays. You can define nested tables in PL/SQL
program units and in the database. Nested tables, varrays, and associative arrays can be
used in a PL/SQL program.
When using collections in PL/SQL programs, you can access collection elements, use
predefined collection methods, and use exceptions that are commonly encountered with
collections.
After completing this topic, you should be able to recognize the steps for using
materialized views and query rewrite enhancements to improve query execution times.
Materialized views are used in data warehouses, by the optimizer, and in distributed
environments. They need to be refreshed, and their use is transparent to users and
applications.
The SH sample schema comes with a materialized view.
The view is defined with this code.
SELECT   t.week_ending_day
       , p.prod_subcategory
       , sum(s.amount_sold) AS dollars
       , s.channel_id
       , s.promo_id
FROM     sales s
       , times t
       , products p
WHERE    s.time_id = t.time_id
AND      s.prod_id = p.prod_id
GROUP BY t.week_ending_day
       , p.prod_subcategory
       , s.channel_id
       , s.promo_id;
The materialized view has these columns.
Name               Type
------------------ ------------
WEEK_ENDING_DAY    DATE
PROD_SUBCATEGORY   VARCHAR2(50)
DOLLARS            NUMBER
CHANNEL_ID         NUMBER
PROMO_ID           NUMBER
Oracle Database 11g introduces both new and enhanced catalog views for materialized
views that enable you to track materialized view freshness and the partition change
tracking (PCT) information for a given materialized view. The views are
USER/ALL/DBA_MVIEWS
USER/ALL/DBA_MVIEW_DETAIL_RELATIONS
USER/ALL/DBA_MVIEW_DETAIL_PARTITION
USER/ALL/DBA_MVIEW_DETAIL_SUBPARTITION
USER/ALL/DBA_MVIEWS
The USER/ALL/DBA_MVIEWS catalog view is extended where new columns are added to
describe the number of PCT tables, and the number of fresh and stale PCT regions.
The USER/ALL/DBA_MVIEWS extension describes all materialized views in the database.
USER/ALL/DBA_MVIEW_DETAIL_RELATIONS
The USER/ALL/DBA_MVIEW_DETAIL_RELATIONS catalog view is extended where new
columns are added to indicate whether the detail table is PCT-enabled, and to show the
numbers of fresh and stale PCT partitions.
The USER/ALL/DBA_MVIEW_DETAIL_RELATIONS extension represents the named
detail relations that are either in the FROM list of a materialized view, or that are indirectly
referenced through views in the FROM list.
USER/ALL/DBA_MVIEW_DETAIL_PARTITION
A new catalog view for PCT partition USER/ALL/DBA_MVIEW_DETAIL_PARTITION is
created to describe the freshness of each PCT partition.
These views describe the named detail relations that are either specified in the FROM list
of the subquery that defines a materialized view accessible to the current user, or that are
indirectly referenced through views in that FROM list.
Inline views in the materialized view definition are not represented in this view or the
related views.
Three new columns are added to this view in Oracle Database 11g. The views are
extended to show whether the detail partition supports PCT with respect to a given MV. If
the detail partition does support PCT, the catalog views display how many fresh and stale
PCT partitions are present in that detail table.
The code shows the new columns that are added to the
DBA/ALL/USER_MVIEW_DETAIL_RELATIONS catalog views:
- DETAILOBJ_PCT indicates whether the detail object supports PCT
- NUM_FRESH_PCT_PARTITIONS specifies the number of fresh PCT partitions
- NUM_STALE_PCT_PARTITIONS specifies the number of stale PCT partitions
DESCRIBE all_mview_detail_relations
Name                           Null?    Type
------------------------------ -------- ---------------
OWNER                          NOT NULL VARCHAR2(30)
MVIEW_NAME                     NOT NULL VARCHAR2(30)
DETAILOBJ_OWNER                NOT NULL VARCHAR2(30)
DETAILOBJ_NAME                 NOT NULL VARCHAR2(30)
DETAILOBJ_TYPE                          VARCHAR2(9)
DETAILOBJ_ALIAS                         VARCHAR2(30)
DETAILOBJ_PCT                           VARCHAR2(1)
NUM_FRESH_PCT_PARTITIONS                NUMBER
NUM_STALE_PCT_PARTITIONS                NUMBER
The sample output (abridged here) lists, for each materialized view, the SALES detail
table's partitions, such as SALES_1995, SALES_1996, SALES_H1_1997, and SALES_Q1_2003
through SALES_Q4_2003, together with their partition positions.
The refresh performance improvements reduce the time required to refresh materialized
views.
Previously, when the materialized view was being refreshed, it was implicitly disabled for
query rewrite even if its data was acceptable to the user. This was especially true when
atomic refresh was in progress and the user saw the data in the materialized view in a
transactional state of the past refresh.
In Oracle Database 11g, when the materialized view is refreshed in the atomic mode, it is
eligible for query rewrite if the rewrite integrity mode is set to STALE_TOLERATED.
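This integrity mode is controlled by a parameter that can be set at the session level; a
sketch of that setting is shown here.
ALTER SESSION SET QUERY_REWRITE_INTEGRITY = STALE_TOLERATED;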
Previously, the fast refresh was extended to support MVs that have UNION ALL
operators. However, the fast refresh of the UNION ALL MV did not apply partition-based
(PCT) refresh. In Oracle Database 11g, PCT refresh is now allowed for UNION ALL MV
fast refresh.
The materialized view with UNION ALL operators is fast refreshable but unlike most of
the fast refreshable MVs, it is not created with an index. Previously, in order to speed up
refresh execution, you needed to manually create such an index, but now it is automatic.
Summaries are aggregate views that are created to improve query execution times. In an
Oracle Database, summaries are implemented with a materialized view.
There are many well-known techniques that you can use to increase query performance.
For example, you can create additional indexes, or you can partition your data.
Many data warehouses are also using a technique called summaries. The basic process
for a summary is to precompute the result of a long-running query and store this result in
a database table called a summary table, an approach comparable to a CREATE TABLE AS
SELECT (CTAS) statement.
Instead of precomputing the same query result many times, the user can directly access
the summary table. Although this approach has the benefit of enhancing query response
time, it also has many drawbacks. The user needs to be aware of the summary table's
existence in order to rewrite the query to use that table instead.
Also, the data contained in a summary table is frozen, and must be manually refreshed
whenever modifications occur on the real tables.
With Oracle Database summary management, the user no longer has to be aware of
summaries that have been defined. The DBA creates materialized views that are
automatically used by the system when rewriting SQL queries.
Using MVs offers another advantage over manually created summary tables, in that the
data can be refreshed automatically.
In a typical use of summary management, the database administrator creates the
materialized view or summary table. When the end user queries tables and views, the
query rewrite mechanism of the Oracle server automatically rewrites the SQL query to
use the summary table.
The use of the materialized view is transparent to the end user or application querying the
data.
The implementation of summary management in Oracle Database includes the use of
these components:
the SQL Access Advisor that recommends materialized views and indexes to be created
the DBMS_ADVISOR.TUNE_MVIEW procedure, which shows you how to make your materialized
view fast refreshable and use general query rewrite
After your data has been transformed, staged, and loaded into the detail tables, you
invoke the summary management process by
using the SQL Access Advisor to determine how you will use materialized views
The parameters for the DBMS_ADVISOR.TUNE_MVIEW procedure are
name is the task name for looking up the results in a catalog view. If not specified, the system will
generate a name and return.
mv_create_stmt is the original materialized view creation statement.
DBMS_ADVISOR.TUNE_MVIEW
( name, 'CREATE MATERIALIZED VIEW my_mv_name
REFRESH FAST AS
SELECT_statement_goes_here');
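For illustration, a call might look like the following sketch; the task name, materialized view
name, and defining query are assumptions chosen only for this example, and the resulting
recommendations can then be looked up by task name in a catalog view such as
USER_TUNE_MVIEW.
-- Sketch only: names and the defining query are illustrative assumptions.
DECLARE
  v_task_name VARCHAR2(30) := 'tune_sales_mv_task';
BEGIN
  DBMS_ADVISOR.TUNE_MVIEW(
    v_task_name,
    'CREATE MATERIALIZED VIEW sales_by_prod_mv
       REFRESH FAST AS
       SELECT prod_id, SUM(amount_sold) AS total_sold
       FROM sh.sales
       GROUP BY prod_id');
END;
/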
Question
How are materialized views used?
Options:
1.
In data warehouses, they are used to compute and store aggregated data
2.
In distributed environments, they are used to download a subset of data from central
servers
3.
4.
Answer
In data warehouses, materialized views are used to compute and store
aggregated data. And they can be used by the optimizer to improve query
performance.
Option 1 is correct. In data warehouses, materialized views are used to compute
and store aggregated data, such as sums and averages. Materialized views in
these environments are typically referred to as summaries because they store
summarized data.
Option 2 is incorrect. In distributed environments, materialized views are used to
replicate data at distributed sites and synchronize updates done at several sites
with conflict resolution methods.
Option 3 is incorrect. In mobile computing environments, materialized views are
used to download a subset of data from central servers to mobile clients. This
involves periodic refreshes from the central servers and propagation of updates by
clients back to the central servers.
Option 4 is correct. The optimizer can use materialized views to improve query
performance by automatically recognizing when a materialized view can and
should be used to satisfy a request.
Question
Which catalog view, available to all users, has been extended to include new
columns showing the number of PCT tables and the number of fresh and stale PCT
regions?
Options:
1.
DBA_MVIEW_DETAIL_PARTITION
2.
DBA_MVIEW_DETAIL_RELATIONS
3.
DBA_MVIEW_DETAIL_SUBPARTITION
4.
DBA_MVIEWS
Answer
The USER/ALL/DBA_MVIEWS catalog view has been extended to include new
columns showing the number of PCT tables and the number of fresh and stale PCT
regions.
Option 1 is incorrect. The new USER/ALL/DBA_MVIEW_DETAIL_PARTITION
catalog view for PCT partition has been created to describe the freshness of each
PCT partition.
Option 2 is incorrect. The USER/ALL/DBA_MVIEW_DETAIL_RELATIONS catalog
view has been extended to include new columns that indicate whether the detail
table is PCT-enabled.
Option 3 is incorrect. The new USER/ALL/DBA_MVIEW_DETAIL_SUBPARTITION
catalog view for PCT subpartition has been created to describe the freshness of
each PCT subpartition.
Option 4 is correct. The USER/ALL/DBA_MVIEWS catalog view has been
extended to include the new columns NUM_PCT_TABLES,
NUM_FRESH_PCT_REGIONS, and NUM_STALE_PCT_REGIONS.
Because materialized views contain already precomputed aggregates and joins, you can
use the process called query rewrite to quickly answer the query using materialized views.
One of the major benefits of creating and maintaining materialized views is the ability to
take advantage of query rewrite. This transforms a SQL statement expressed in terms of
tables or views into a statement accessing one or more materialized views that are
defined in the detail tables.
The transformation is transparent to the end user or application, requiring no intervention
and no reference to the materialized view in the SQL statement. Because query rewrite is
transparent, materialized views can be added or dropped just like indexes without
invalidating the SQL in the application code.
A query undergoes several checks to determine whether it is a candidate for query
rewrite. If the query fails any of the checks, then the query is applied to the detail tables
rather than the materialized view. This can be costly in terms of response time and
processing power.
The optimizer uses two different methods to recognize when to rewrite a query in terms of
a materialized view:
matching
comparing
matching
In the matching method, the optimizer matches the SQL text of the query with the SQL text
of the materialized view definition.
comparing
If the optimizer fails to match the SQL text of the query with the SQL text of the MV
definition, it uses the more general method in which it compares joins, selections, data
columns, grouping columns, and aggregate functions between the query and materialized
views.
Dimensions, constraints, and rewrite integrity levels affect whether or not a given query is
rewritten to use one or more materialized views. Additionally, query rewrite can be
enabled or disabled by REWRITE and NOREWRITE hints, and the
QUERY_REWRITE_ENABLED session parameter.
The DBMS_MVIEW.EXPLAIN_REWRITE procedure advises whether query rewrite is
possible on a query and, if so, which materialized views will be used. It also explains why
a query cannot be rewritten.
A query is rewritten only when a certain number of conditions are met. Query rewrite must
be enabled for the session. And a materialized view must be enabled for query rewrite.
In addition, the rewrite integrity level should allow the use of the materialized view. For
example, if a materialized view is not fresh and query rewrite integrity is set to ENFORCED,
then the materialized view is not used.
And either all or part of the results requested by the query must be obtainable from the
precomputed result stored in the materialized view or views.
To test these conditions, the optimizer may depend on some of the data relationships
declared by the user through constraints and dimensions, such as hierarchies, referential
integrity, and uniqueness of key data.
Query rewrite is available only with the cost-based optimizer. The Oracle database
optimizes the input query with and without rewrite, and selects the least costly alternative.
The optimizer rewrites a query by rewriting one or more query blocks, one at a time.
If the rewrite logic has a choice between multiple materialized views to rewrite a query
block, it selects the one that can result in reading the least amount of data.
After a materialized view has been picked for a rewrite, the optimizer performs the rewrite
and then tests whether the rewritten query can be rewritten further with another
materialized view. This can be the case only when nested materialized views exist.
This process continues until no further rewrites are possible. Query rewrite is attempted
recursively to take advantage of nested materialized views.
Query rewrite operates on queries and subqueries in the following types of SQL
statements:
SELECT
Oracle Database 11g supports query rewrite with inline views when
the text from the inline views in the materialized view exactly matches the text in the request query
the request query contains inline views that are equivalent to the inline views in the materialized
view
Two inline views are considered equivalent when
the join graphs including all the selections in the WHERE clauses are equivalent
Supplement
Selecting the link title opens the resource in a new browser window.
Launch window
View the full code for transforming and rewriting the query.
Because query rewrite occurs transparently, it is not always evident that it has taken
place. The rewritten statement is not stored in the V$SQL view, nor can it be dumped in a
trace file. Of course, if the query runs faster, rewrite should have occurred, but that is not
proof.
There are two ways to confirm that the query rewrite has occurred:
use the EXPLAIN PLAN statement and check whether the OBJECT_NAME column contains the
name of a materialized view
use the DBMS_MVIEW.EXPLAIN_REWRITE procedure, which reports whether rewrite is
possible for the query and which materialized views would be used
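For example, a minimal sketch of the EXPLAIN PLAN check (the query and materialized
view involved are assumptions, not taken from this course):
EXPLAIN PLAN FOR
  SELECT prod_id, SUM(amount_sold)
  FROM sh.sales
  GROUP BY prod_id;

SELECT operation, options, object_name
FROM plan_table;
If rewrite occurred, one of the returned rows typically shows an operation such as
MAT_VIEW REWRITE ACCESS, with the materialized view name in OBJECT_NAME.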
Note
Because the constraint information of the remote tables is not available at the
remote site, query rewrite will not make use of any constraint information.
Whenever a query contains columns that are not found in the MV, a join back is used to
rewrite the query. If the join back table is not found at the local site, query rewrite will not
take place.
This reduces or eliminates network round trips, which are a costly operation.
The materialized view in this example is present at the local site, but it references tables
that are all found at the remote site.
CREATE MATERIALIZED VIEW sum_sales_prod_week_mv
ENABLE QUERY REWRITE AS
SELECT p.prod_id, t.week_ending_day, s.cust_id,
SUM(s.amount_sold) AS sum_amount_sold
FROM sales@remotedbl s, products@remotedbl p, times@remotedbl t
WHERE s.time_id = t.time_id AND s.prod_id = p.prod_id
GROUP BY p.prod_id, t.week_ending_day, s.cust_id;
The query in this example contains tables that are found at a single remote site.
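A query of roughly this shape (a sketch only, reusing the remote tables from the
materialized view definition above) could be rewritten to use sum_sales_prod_week_mv:
SELECT p.prod_id, t.week_ending_day, s.cust_id,
       SUM(s.amount_sold)
FROM sales@remotedbl s, products@remotedbl p, times@remotedbl t
WHERE s.time_id = t.time_id AND s.prod_id = p.prod_id
GROUP BY p.prod_id, t.week_ending_day, s.cust_id;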
Question
Which statements accurately describe the cost-based query rewrite process?
Options:
1.
After the initial rewrite, the optimizer tests if the query can be rewritten further
2.
3.
4.
Answer
The cost-based query rewrite process is only available with the cost-based
optimizer which tests if the query can be rewritten further after the initial rewrite.
Option 1 is correct. After a materialized view has been picked for a rewrite, the
optimizer performs the rewrite and then tests whether the rewritten query can be
rewritten further with another materialized view. This process continues until no
further rewrites are possible.
Option 2 is incorrect. Query rewrite is attempted recursively to take advantage of
nested materialized views.
Option 3 is correct. Query rewrite is available only with the cost-based optimizer.
The Oracle database optimizes the input query with and without rewrite, and
selects the least costly alternative.
Option 4 is incorrect. The optimizer rewrites a query by rewriting one or more
query blocks one at a time and not several at a time.
Summary
Materialized views can be used to summarize, compute, replicate, and distribute data.
Oracle Database 11g introduces both new and enhanced catalog views for materialized
views that enable you to track the materialized view freshness. Summaries improve query
execution times.
When base tables contain large amounts of data, you can save time by using a process
called query rewrite to quickly answer a query using materialized views. The query rewrite
in Oracle Database 11g supports queries containing inline views.
After completing this topic, you should be able to identify the steps for creating and
enabling a compound trigger and control its firing order using new trigger clauses.
initial section
optional section
initial section
The initial section declares the variables and subprograms. The code in this section
executes before any of the code in the optional section.
The code for the initial section is
-- Initial section
-- Declarations
-- Subprograms
optional section
The optional section defines the code for each possible trigger point. Depending on
whether you are defining a compound trigger for a table or for a view, these triggering
points are different and in a specific order.
The code for the optional section is
-- Optional section
BEFORE STATEMENT IS ...;
-- Optional section
BEFORE EACH ROW IS ...;
-- Optional section
AFTER EACH ROW IS ...;
-- Optional section
AFTER STATEMENT IS ...;
With views, an INSTEAD OF EACH ROW clause takes the place of the BEFORE EACH
ROW and AFTER EACH ROW clauses.
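Putting the sections together, a minimal skeleton for a table-based compound trigger
(the table name t and the counter variable are assumptions used only to show the
structure) looks like this:
CREATE OR REPLACE TRIGGER t_compound_trg
  FOR INSERT OR UPDATE OR DELETE ON t
  COMPOUND TRIGGER
  -- Initial section: declarations shared by all timing points
  g_counter PLS_INTEGER := 0;

  BEFORE STATEMENT IS
  BEGIN
    g_counter := 0;
  END BEFORE STATEMENT;

  BEFORE EACH ROW IS
  BEGIN
    g_counter := g_counter + 1;
  END BEFORE EACH ROW;

  AFTER EACH ROW IS
  BEGIN
    NULL;  -- row-level work goes here
  END AFTER EACH ROW;

  AFTER STATEMENT IS
  BEGIN
    DBMS_OUTPUT.PUT_LINE('Rows processed: ' || g_counter);
  END AFTER STATEMENT;
END t_compound_trg;
/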
If, after the compound trigger fires, the triggering statement rolls back owing to a DML exception:
local variables declared in the compound trigger sections are reinitialized, and any values
computed thus far are lost
the side effects from firing the compound trigger are not rolled back
The firing order of compound triggers is not guaranteed. Their firing can be interleaved
with the firing of conventional triggers.
If the target of FOLLOWS does not contain the corresponding section as source code, the
FOLLOWS specification is quietly ignored for that section.
A compound trigger is used to populate an audit table for any rows inserted into the
ORDERS table, or updated when the update changes a value in the ORDER_TOTAL
column. The trigger records the old and new order totals, the time stamp of when the
change was made, and the user ID.
It also bulk-inserts records into the audit table to improve performance.
The example tracks changes on the ORDER_TOTAL column in the ORDERS table to an
audit table. You can assume that a single UPDATE statement updates many rows.
Before creating the trigger, you should have three object definitions:
a sequence
Supplement
Selecting the link title opens the resource in a new browser window.
Launch window
View the complete output of the compound trigger.
Your session is set up with these settings. They enable all compiler warnings, automatic
inlining (optimization level 3), and native compilation for your session.
ALTER SESSION SET PLSQL_Warnings = 'enable:all';
ALTER SESSION SET PLSQL_Optimize_Level = 3;
ALTER SESSION SET PLSQL_Code_Type = native;
In this example, for each row that is either inserted or updated in the ORDERS table, the
AFTER EACH ROW code runs and builds the O_TOTALS collection so that it holds the
values of the order ID and order total being changed in the ORDERS table.
When the number of records in the O_TOTALS collection reaches 7, the FLUSH_ARRAY
subroutine is called.
The FLUSH_ARRAY subroutine performs a bulk insert into the ORDERTOTALS_AUDIT
table. Not more than 7 records are bulk-inserted into the table because this is what the
threshold is set to.
CREATE OR REPLACE TRIGGER maintain_ordertotals_audit_trg
FOR INSERT OR UPDATE OF order_total ON orders
COMPOUND TRIGGER
--Initial section begins
--Declarations
threshold CONSTANT SIMPLE_INTEGER := 7;
TYPE order_totals_t IS TABLE OF ordertotals_audit%rowtype
INDEX BY PLS_INTEGER;
o_totals order_totals_t;
idx SIMPLE_INTEGER := 0;
-- subprogram
PROCEDURE Flush_Array IS
n CONSTANT SIMPLE_INTEGER := o_totals.Count();
BEGIN
FORALL j IN 1..n
INSERT INTO ordertotals_audit VALUES o_totals(j);
o_totals.Delete();
idx := 0;
END Flush_Array;
The AFTER STATEMENT code runs and calls the FLUSH_ARRAY subroutine.
If fewer than 7 records are modified in the ORDERS table, this code ensures that those
records are still recorded in the ORDERTOTALS_AUDIT table.
-- Optional section
BEFORE STATEMENT IS
BEGIN
o_totals.Delete();
idx := 0;
END BEFORE STATEMENT;
AFTER EACH ROW IS
BEGIN
idx := idx + 1;
o_totals(idx).order_ID := :New.order_ID;
o_totals(idx).Change_Date := SYSDATE();
o_totals(idx).user_id := sys_context('userenv',
'session_user');
o_totals(idx).old_total := :OLD.order_total;
o_totals(idx).new_total := :NEW.order_total;
IF idx >= Threshold THEN -- PLW-06005: inlining... done
Flush_Array();
END IF;
END AFTER EACH ROW;
AFTER STATEMENT IS
Supplement
Selecting the link title opens the resource in a new browser window.
Launch window
View the complete output of the AFTER STATEMENT code.
Because the session settings are enabled for inlining and viewing all compiler messages,
when creating this trigger, you are able to view the message that inlining has been
performed.
In SQL*Plus, you first issue the SHOW ERRORS command. The trigger is successfully
compiled, so it generates only an informational warning.
SP2-0814: Trigger created with compilation warnings
SHOW ERRORS
Errors for TRIGGER MAINTAIN_ORDERTOTALS_AUDIT:

LINE/COL ERROR
-------- --------------------------------------------------
32/7     PLW-06005: inlining of call of procedure
39/5

CHANGE_DATE USER_ID    OLD_TOTAL NEW_TOTAL
----------- ---------- --------- ---------
27-JUL-07   OE1            10523  11049.15
27-JUL-07   OE1               78      81.9
27-JUL-07   OE1         144054.8 151257.54
27-JUL-07   OE1            60065  63068.25
27-JUL-07   OE1          21116.9  22172.75
27-JUL-07   OE1            66816     67140
In Oracle Database 11g, the CREATE TRIGGER clause now includes three clauses that
give you more control over triggers:
DISABLE
ENABLE
FOLLOWS
DISABLE
The DISABLE clause enables you to create a trigger in a disabled state so that you can
ensure that your code compiles successfully before you enable the trigger.
ENABLE
The ENABLE clause explicitly creates the trigger in an enabled state, which is the default.
FOLLOWS
The FOLLOWS clause enables you to specify that the trigger you are creating fires after
certain other triggers.
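A brief sketch combining the three clauses (the trigger, table, and column names here are
assumptions chosen only for illustration, not objects used elsewhere in this course):
CREATE OR REPLACE TRIGGER log_salary_change_trg
  AFTER UPDATE OF salary ON employees
  FOR EACH ROW
  FOLLOWS check_salary_trg   -- assumed existing trigger on the same table
  DISABLE                    -- created disabled; enable once it compiles cleanly
BEGIN
  NULL;  -- auditing logic would go here
END log_salary_change_trg;
/

ALTER TRIGGER log_salary_change_trg ENABLE;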
Question
Which statements accurately describe the use of a compound trigger?
Options:
1.
2.
3.
4.
Answer
When using a compound trigger, an exception that occurs in one section of the
compound trigger must be handled in that section. And the firing order of
compound triggers is not guaranteed.
Option 1 is correct. An exception that occurs in one section of a compound trigger
must be handled in that section. You cannot transfer control to another section.
Option 2 is incorrect. A compound trigger must be implemented in PL/SQL. It must
be either a PL/SQL block or a PL/SQL procedure. It cannot be a C or Java
procedure or call such a procedure.
Option 3 is incorrect. A compound trigger body cannot have an initialization block.
Therefore, it cannot have an exception section.
Option 4 is correct. The firing order of compound triggers is not guaranteed. Their
firing can be interleaved with the firing of conventional triggers.
Question
What happens if, after a compound trigger is fired, the triggering statement rolls
back due to a DML exception?
Options:
1.
2.
3.
Side effects from firing the compound trigger are rolled back
4.
Answer
If the triggering statement rolls back due to a DML exception after a compound
trigger is fired, any values that have been computed are lost. And local variables
declared in the compound trigger sections are reinitialized.
Option 1 is correct. If, after the compound trigger is fired, the triggering statement
rolls back due to a DML exception, any values that have been computed are lost.
Option 2 is correct. Once a compound trigger has been fired, if the triggering
statement rolls back due to a DML exception, local variables declared in the
compound trigger sections are reinitialized.
Option 3 is incorrect. In this situation, the side effects from firing the compound
trigger would not be rolled back.
Option 4 is incorrect. This situation would not cause the trigger to fire again.
However, if the triggering statement of a compound trigger is within a FORALL
statement, each execution of the triggering statement would fire the trigger again.
To ensure that a trigger fires after certain other triggers defined on the same object, you
can use the FOLLOWS clause when you create the new trigger.
If two or more triggers are defined with the same timing point, and the order in which they
fire is important, you can control the firing order using the FOLLOWS clause. Without the
FOLLOWS clause, you are not guaranteed a firing order when two or more triggers of the
same type are created on an object.
If trigger execution order is specified by using the FOLLOWS clause, the order of execution
of compound trigger sections is determined by the FOLLOWS clause.
If FOLLOWS is specified only for some triggers but not all triggers, the order of execution
of triggers is guaranteed only for those that are related using the FOLLOWS clause.
The FOLLOWS clause applies to both compound and simple triggers. It enables you to
order the executions of multiple triggers relative to each other.
It can be placed in the definition of a simple trigger with a compound trigger target.
Alternatively, it can be placed in the definition of a compound trigger with a simple trigger
target.
It applies only to the section of the compound trigger with the same timing point as the
simple trigger. If the compound trigger has no such timing point, FOLLOWS is quietly
ignored.
When defining triggers that contain the FOLLOWS clause, the specified triggers must
already exist, they must be defined on the same table as the trigger being created, and
they must have been successfully compiled. You do not need to have them enabled.
Note
If it is practical, you should consider replacing the set of individual triggers for a
particular timing point with a single compound trigger that explicitly codes the
actions in the order you intend.
For example, suppose two AFTER UPDATE ... FOR EACH ROW triggers are defined on the
same table, and the processing done by one trigger depends on values that the other
trigger records.
In this case, you can use the FOLLOWS clause to order the firing sequence.
You specify FOLLOWS to indicate that the trigger being created should fire after the
specified triggers.
In this example, the first statement changes the quantity of an item ordered. This causes
the COMPUTE_TOTAL trigger to fire and the order total is updated.
UPDATE order_items SET quantity = 100
WHERE order_id = 2412 AND line_item_id = 9;
1 row updated.
The value of the updated ORDER_TOTAL is displayed.
SELECT order_id, order_date,
customer_id, order_status, order_total
FROM orders WHERE customer_id = 170;
  ORDER_ID ORDER_DAT CUSTOMER_ID ORDER_STATUS ORDER_TOTAL
---------- --------- ----------- ------------ -----------
      2412 29-MAR-04         170            9       67140
The value for a product ID is changed in the ORDER_ITEMS table. This causes the
CHANGE_PRODUCT trigger to fire. Because the CHANGE_PRODUCT trigger has the
FOLLOWS clause, it fires following the execution of the COMPUTE_TOTAL trigger.
UPDATE order_items SET product_id=3165
WHERE order_id = 2412 AND line_item_id = 8;
Do processing here...
1 row updated.
Question
Identify the correct statements regarding the use of the FOLLOWS clause with
triggers.
Options:
1.
It applies only to the section of the compound trigger with the same timing point as
the simple trigger
2.
3.
It can be placed in the definition of a simple trigger with a compound trigger target
4.
Answer
The FOLLOWS clause applies only to the section of the compound trigger with the
same timing point as the simple trigger. And it can be placed in the definition of a
simple trigger with a compound trigger target.
Option 1 is correct. The FOLLOWS clause applies only to the section of the
compound trigger with the same timing point as the simple trigger. If the
compound trigger has no such timing point, the FOLLOWS clause is quietly
ignored.
Option 2 is incorrect. The FOLLOWS clause applies to both compound and simple
triggers.
Option 3 is correct. The FOLLOWS clause can be placed in the definition of a
simple trigger with a compound trigger target or in the definition of a compound
trigger with a simple trigger target.
Option 4 is incorrect. The FOLLOWS clause can be placed in the definition of a
compound trigger with a simple trigger target. It can also be placed in the
definition of a simple trigger with a compound trigger target.
You create an audit table to store the information; its PRODUCT_ID column references
PRODUCT_INFORMATION(product_id) with the ON DELETE CASCADE option.
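A minimal sketch of such an audit table, with column names inferred from the trigger body
shown in this topic and data types assumed (the authoritative definition is in the
supplement):
CREATE TABLE productprice_audit
  (product_id     NUMBER(6)
                  REFERENCES product_information(product_id)
                  ON DELETE CASCADE,
   change_date    DATE,
   user_id        VARCHAR2(30),
   old_min_price  NUMBER(8,2),
   new_min_price  NUMBER(8,2),
   old_list_price NUMBER(8,2),
   new_list_price NUMBER(8,2));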
Next, you create the compound trigger. You name the trigger
MAINTAIN_PRICES_AUDIT_TRG. You build the trigger so that it checks for updates of
the MIN_PRICE and LIST_PRICE columns in the PRODUCT_INFORMATION table.
CREATE OR REPLACE TRIGGER maintain_prices_audit_trg
FOR UPDATE OF min_price, list_price ON product_information
COMPOUND TRIGGER
--Initial section begins
--Declarations
threshold CONSTANT SIMPLE_INTEGER := 7;
TYPE productprice_t IS TABLE OF productprice_audit%rowtype
INDEX BY PLS_INTEGER;
v_prices productprice_t;
idx SIMPLE_INTEGER := 0;
-- subprogram
PROCEDURE Flush_Array IS
n CONSTANT SIMPLE_INTEGER := v_prices.Count();
BEGIN
Supplement
Selecting the link title opens the resource in a new browser window.
Launch window
View the complete code for the MAINTAIN_PRICES_AUDIT_TRG trigger.
You then execute this query to analyze the data for supplier 102050.
SELECT product_id, supplier_id, min_price, list_price
FROM product_information
WHERE supplier_id = 102050;
Now suppose supplier 102050 is increasing its prices by 5%. You issue the UPDATE
statement to update the list prices and minimum prices for supplier 102050.
UPDATE product_information
SET list_price = list_price * 1.05,
min_price = min_price * 1.05
WHERE supplier_id = 102050;
You can refer to the Results table to verify that the data has changed.
SELECT product_id, supplier_id, min_price, list_price
FROM product_information
WHERE supplier_id = 102050;
Next, you examine the contents of the PRODUCTPRICE_AUDIT table.
SELECT *
FROM productprice_audit;
Next you want to create a disabled trigger, and then execute a statement that would fire
the trigger if it were not disabled. Then you want to enable the trigger and observe the
results. This trigger uses the FOLLOWS clause to ensure that its firing order occurs after
the MAINTAIN_PRICES_AUDIT_TRG you created previously.
To do this, you create a trigger named INFORM_LIST_PRICE in a disabled state. This
trigger fires after the LIST_PRICE is updated and following
MAINTAIN_PRICES_AUDIT_TRG. You configure the trigger to display the message
"Warning new list price is unknown for product: xyz," where xyz is the product number.
CREATE OR REPLACE TRIGGER inform_list_price
AFTER UPDATE OF list_price ON product_information
FOR EACH ROW
FOLLOWS maintain_prices_audit_trg
DISABLE
BEGIN
IF :new.list_price IS NULL THEN
dbms_output.put_line('Warning - new list price is unknown for product: '
|| :old.product_id);
END IF;
END inform_list_price;
/
Next, you execute the following UPDATE statement and observe whether your trigger is
fired.
UPDATE product_information
SET list_price = list_price * 1.05,
min_price = min_price * 1.05
WHERE supplier_id = 102050;
Next, you enable the trigger.
ALTER TRIGGER inform_list_price ENABLE;
You execute this UPDATE statement and observe whether your trigger is fired. You can
view any messages in the Script Output tab.
UPDATE product_information
SET list_price = list_price * 1.05,
min_price = min_price * 1.05
WHERE supplier_id = 102050;
Summary
A compound trigger enables you to create a single trigger on a table that allows you to
specify actions for each of the four triggering timing points. The DISABLE clause of the
CREATE TRIGGER statement enables you to create a trigger in a disabled state.
Creating a trigger in a disabled state is safer, because you can enable it only after you
know it compiles without errors. To ensure that a trigger fires after certain other
triggers defined on the same object, you use the FOLLOWS clause when you create the
new trigger.
You create a disabled trigger and then enable it. You can create a compound trigger and
build it such that it checks for specified updates.
-- Optional section
BEFORE STATEMENT IS
BEGIN
o_totals.Delete();
idx := 0;
END BEFORE STATEMENT;
AFTER EACH ROW IS
BEGIN
idx := idx + 1;
o_totals(idx).order_ID := :New.order_ID;
o_totals(idx).Change_Date := SYSDATE();
o_totals(idx).user_id := sys_context('userenv', 'session_user');
o_totals(idx).old_total := :OLD.order_total;
o_totals(idx).new_total := :NEW.order_total;
IF idx >= threshold THEN -- PLW-06005: inlining... done
Flush_Array();
END IF;
END AFTER EACH ROW;
AFTER STATEMENT IS
BEGIN
-- PLW-06005: inlining... done
Flush_Array();
END AFTER STATEMENT;
END maintain_ordertotals_audit_trg;
BEGIN
v_prices.Delete();
idx := 0;
END BEFORE STATEMENT;
AFTER EACH ROW IS
BEGIN
idx := idx + 1;
v_prices(idx).product_id := :new.product_id;
v_prices(idx).change_date := SYSDATE();
v_prices(idx).user_id := sys_context('userenv',
'session_user');
v_prices(idx).old_min_price := :OLD.min_price;
v_prices(idx).new_min_price := :NEW.min_price;
v_prices(idx).old_list_price := :OLD.list_price;
v_prices(idx).new_list_price := :NEW.list_price;
IF idx >= threshold THEN -- PLW-06005: inlining... done
Flush_Array();
END IF;
END AFTER EACH ROW;
AFTER STATEMENT IS
BEGIN
-- PLW-06005: inlining... done
Flush_Array();
END AFTER STATEMENT;
END maintain_prices_audit_trg;
After completing this topic, you should be able to recognize the steps for implementing
SecureFile LOBs.
1. SecureFile LOBs
With SecureFile LOBs, the LOB data type is completely reengineered with dramatically
improved performance, manageability, and ease of application development.
This new implementation also offers advanced, next-generation functionality such as
intelligent compression and transparent encryption. This feature significantly strengthens
the native content management capabilities of Oracle Database.
SecureFile LOBs are introduced to supplement the original BasicFile LOBs
implementation that is identified by the BASICFILE SQL parameter.
Starting with Oracle Database 11g, you have the option of using the new SecureFile
storage paradigm for LOBs.
You can specify to use the new paradigm by using the SECUREFILE keyword in the
CREATE TABLE statement. If that keyword is left out, and the BASICFILE storage
keyword is used instead, the old storage paradigm for basic file LOBs is used. This is the
default behavior.
You can modify the init.ora file and change the default behavior for the storage of
LOBs by setting the DB_SECUREFILE initialization parameter. The values allowed are
ALWAYS
FORCE
PERMITTED
NEVER
IGNORE
ALWAYS
Setting the parameter to ALWAYS attempts to create all LOB files as SECUREFILES, but
creates any LOBs not in ASSM tablespaces as BASICFILE LOBs.
FORCE
Setting the parameter to FORCE means that all LOBs created in the system are created as
SECUREFILE LOBs.
PERMITTED
The PERMITTED parameter is the default. It allows SECUREFILES to be created when
specified with the SECUREFILE keyword in the CREATE TABLE statement.
NEVER
Setting the parameter to NEVER creates any LOBs that are specified as SECUREFILE
LOBs as BASICFILE LOBs.
IGNORE
Setting the parameter to IGNORE ignores the SECUREFILE keyword and all SECUREFILE
options.
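For example, assuming you have the ALTER SYSTEM privilege, you could change the
instance-wide default with a statement such as:
ALTER SYSTEM SET db_securefile = 'FORCE';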
To create a column to hold a LOB that is a SecureFile, you create a tablespace that uses
Automatic Segment Space Management (ASSM), and then create the table with the
SECUREFILE storage clause.
When you define a column to hold SecureFile data, you must have ASSM enabled for the
tablespace in order to support SecureFiles.
CREATE TABLESPACE sf_tbs1
DATAFILE 'sf_tbs1.dbf' SIZE 1500M REUSE
AUTOEXTEND ON NEXT 200M
MAXSIZE 3000M
SEGMENT SPACE MANAGEMENT AUTO;
In this example, the code creates the CUSTOMER_PROFILES table. The column
PROFILE_INFO will hold the LOB data in the SecureFile format because the storage
clause identifies the format.
CONNECT oe1/oe1@orcl
CREATE TABLE customer_profiles
(id NUMBER,
first_name VARCHAR2 (40),
last_name VARCHAR2 (80),
profile_info BLOB)
LOB(profile_info) STORE AS SECUREFILE
(TABLESPACE sf_tbs1);
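To confirm the storage format of the new LOB column, you could query the USER_LOBS
view, which in Oracle Database 11g includes a SECUREFILE column (a quick check, not
part of the original example):
SELECT table_name, column_name, securefile
FROM user_lobs
WHERE table_name = 'CUSTOMER_PROFILES';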
Question
What is the default value for the DB_SECUREFILE initialization parameter?
Options:
1.
ALWAYS
2.
FORCE
3.
IGNORE
4.
PERMITTED
Answer
The default value for the DB_SECUREFILE initialization parameter is PERMITTED.
Option 1 is incorrect. The ALWAYS value attempts to create all LOB files as
SECUREFILES, but creates any LOBs not in ASSM tablespaces as BASICFILE
LOBs.
Option 2 is incorrect. When DB_SECUREFILE is set to FORCE, all LOBs created in
the system are created as SECUREFILE LOBs.
Option 3 is incorrect. This value ignores the SECUREFILE keyword and all
SECUREFILE options.
Option 4 is correct. PERMITTED is the default value of the DB_SECUREFILE
initialization parameter. This value allows SECUREFILES to be created when
specified with the SECUREFILE keyword in the CREATE TABLE statement.
Note
The LOADLOBFROMBFILE_PROC procedure can be used to read both SecureFile
and BasicFile formats.
This code uses an INSERT statement to initialize the LOB locator. The
LOADLOBFROMBFILE_PROC routine is then called and the LOB column value is loaded.
The write and read performance statistics for LOB storage are captured through output
messages.
CREATE OR REPLACE PROCEDURE write_lob (p_file IN VARCHAR2)
IS
  i    NUMBER;
  v_fn VARCHAR2(15);
  v_ln VARCHAR2(40);
  v_b  BLOB;
BEGIN
  DBMS_OUTPUT.ENABLE;
  DBMS_OUTPUT.PUT_LINE('Begin inserting rows...');
  FOR i IN 1 .. 30 LOOP
    v_fn := SUBSTR(p_file, 1, INSTR(p_file, '.') - 1);
    v_ln := SUBSTR(p_file, INSTR(p_file, '.') + 1,
                   LENGTH(p_file) - INSTR(p_file, '.') - 4);
    INSERT INTO customer_profiles
    VALUES (i, v_fn, v_ln, EMPTY_BLOB())
    RETURNING profile_info INTO v_b;
    loadLOBFromBFILE_proc(v_b, p_file);
    DBMS_OUTPUT.PUT_LINE('Row ' || i || ' inserted.');
  END LOOP;
  COMMIT;
END write_lob;
/
When writing data to the SecureFile LOB, the Microsoft Word files are stored in the
SECUREFILES directory.
To read them into the PROFILE_INFO column in the CUSTOMER_PROFILES table, the
WRITE_LOB procedure is called and the name of each .doc file is passed as a
parameter.
set serveroutput on
set verify on
set term on
set linesize 200
Note
This script is run in SQL*Plus because TIMING is a SQL*Plus option and is not
available in SQL Developer.
The output of the WRITE_LOB procedure is similar to this code.
timing start load_data
execute write_lob('karl.brimmer.doc');
Begin inserting rows...
Row 1 inserted.
...
PL/SQL procedure successfully completed.
execute write_lob('monica.petera.doc');
Begin inserting rows...
Row 1 inserted.
...
PL/SQL procedure successfully completed.
execute write_lob('david.sloan.doc');
Begin inserting rows...
Row 1 inserted.
...
PL/SQL procedure successfully completed.
timing stop
timing for: load_data
Elapsed: 00:00:00.96
To retrieve the records that were inserted, you can call the READ_LOB procedure.
set serveroutput on
set verify on
set term on
set linesize 200
Note
This text appears as garbage because it is a binary file.
COMPRESS HIGH
COMPRESS MEDIUM
NOCOMPRESS
COMPRESS HIGH
The COMPRESS HIGH option provides the best compression, but incurs the most work.
COMPRESS MEDIUM
The COMPRESS MEDIUM option is the default.
NOCOMPRESS
The NOCOMPRESS option disables compression.
You can also use DBMS_LOB.SETOPTIONS to enable or disable compression on
individual LOBs.
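For example, assuming the SecureFile column created earlier in this topic, compression
could be enabled on the whole LOB column with a statement of this form:
ALTER TABLE customer_profiles
  MODIFY LOB (profile_info)
  (COMPRESS HIGH);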
To test how efficient deduplication and compression are on SecureFiles, you decide to
1. check the space being used by the CUSTOMER_PROFILES table
2. enable deduplication and compression for the PROFILE_INFO LOB column in the
CUSTOMER_PROFILES table
3. examine the space being used after deduplication and compression are enabled
4. reclaim the space and examine the results
This procedure checks for LOB space usage.
CREATE OR REPLACE PROCEDURE check_space
IS
l_fs1_bytes NUMBER;
l_fs2_bytes NUMBER;
l_fs3_bytes NUMBER;
l_fs4_bytes NUMBER;
l_fs1_blocks NUMBER;
l_fs2_blocks NUMBER;
l_fs3_blocks NUMBER;
l_fs4_blocks NUMBER;
l_full_bytes NUMBER;
l_full_blocks NUMBER;
l_unformatted_bytes NUMBER;
l_unformatted_blocks NUMBER;
BEGIN
DBMS_SPACE.SPACE_USAGE(
segment_owner => 'OE1',
segment_name => 'CUSTOMER_PROFILES',
segment_type => 'TABLE',
Supplement
Selecting the link title opens the resource in a new browser window.
Launch window
View the complete code for checking for LOB space usage.
Before you enable deduplication and compression, the space usage displays.
This amount will be used as a baseline for comparison.
execute check_space
anonymous block completed
FS1 Blocks = 0 Bytes = 0
FS2 Blocks = 1 Bytes = 8192
FS3 Blocks = 0 Bytes = 0
4. Encryption
The encryption option enables you to turn on or off the LOB encryption, and optionally
select an encryption algorithm.
Encryption is performed at the block level and you can specify the encryption algorithm:
3DES168
AES128
AES192 (default)
AES256
The column encryption key is derived from PASSWORD and all LOBs in the LOB column
will be encrypted. DECRYPT keeps the LOBs in cleartext. And LOBs can be encrypted on
a per-column or per-partition basis.
The current Transparent Data Encryption (TDE) syntax is used for extending encryption
to LOB data types. TDE enables you to encrypt sensitive data in database columns as it
is stored in the operating system files.
Transparent data encryption is a key-based access control system that enforces
authorization by encrypting data with a key that is kept secret.
There can be only one key for each database table that contains encrypted columns,
regardless of the number of encrypted columns in a given table. Each table's column
encryption key is, in turn, encrypted with the database server's master key.
No keys are stored in the database. Instead, they are stored in an Oracle wallet, which is
part of the external security module.
To enable TDE, you need to create a directory to store the TDE wallet. This is required for
the SecureFiles LOB encryption.
mkdir $ORACLE_HOME/wallet
You also need to modify the sqlnet.ora file to identify the location of the TDE wallet,
using this code.
ENCRYPTION_WALLET_LOCATION=
  (SOURCE=(METHOD=FILE)(METHOD_DATA=
    (DIRECTORY=/u01/app/oracle/product/11.1.0/db_1/wallet)))
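Before encrypted columns can be used, the wallet must contain a master key. One way to
create it (a sketch, assuming you have the ALTER SYSTEM privilege and a wallet
password of your own choosing) is:
ALTER SYSTEM SET ENCRYPTION KEY IDENTIFIED BY "wallet_password";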
In this example, the CUSTOMER_PROFILES table is modified so that the PROFILE_INFO
column uses encryption.
ALTER TABLE customer_profiles
MODIFY (profile_info ENCRYPT USING 'AES192');
Table altered.
You can query the USER_ENCRYPTED_COLUMNS dictionary view to see the status of
encrypted columns.
SELECT *
FROM user_encrypted_columns;
TABLE_NAME        COLUMN_NAME   ENCRYPTION_ALG   SALT
----------------- ------------- ---------------- ----
CUSTOMER_PROFILES PROFILE_INFO  AES 192 bits key YES
Question
Identify the accurate statements regarding the use of encryption with LOBs.
Options:
1.
2.
3.
4.
There are multiple keys for each database table that contains encrypted columns
Answer
Encryption with LOBs requires a directory to store the Transparent Data
Encryption wallet. And it enables you to encrypt sensitive data in database
columns.
Option 1 is correct. To enable Transparent Data Encryption (TDE), you need to
create a directory to store the TDE wallet. You also need to modify the
sqlnet.ora file to identify the location of the TDE wallet.
Option 2 is incorrect. No encryption keys are stored in the database. Instead, they
are stored in an Oracle wallet, which is part of the external security module.
Option 3 is correct. TDE enables you to encrypt sensitive data in database
columns as it is stored in the operating system files. TDE is a key-based access
control system that enforces authorization by encrypting data with a key that is
kept secret.
Option 4 is incorrect. There can be only one key for each database table that
contains encrypted columns regardless of the number of encrypted columns in a
given table.
You may have LOB data in tables that were created before Oracle Database 11g. You can
migrate the LOB data from the BasicFile format to the SecureFile format.
In this example, you may have previously created a table with a LOB column stored in the
BasicFile format, which is the default and only choice before Oracle Database 11g.
The BasicFile format migrates to the SecureFile format through a series of steps.
connect system/oracle@orcl
CREATE TABLESPACE bf_tbs1
DATAFILE 'bf_tbs1.dbf' SIZE 800M REUSE
EXTENT MANAGEMENT LOCAL
UNIFORM SIZE 64M
SEGMENT SPACE MANAGEMENT AUTO;
connect oe1/oe1@orcl
CREATE TABLE customer_profiles
(id NUMBER,
first_name VARCHAR2 (40),
last_name VARCHAR2 (80),
profile_info BLOB)
LOB(profile_info) STORE AS BASICFILE
(TABLESPACE bf_tbs1);
Note
For this example, you first need to drop the existing CUSTOMER_PROFILES table with this
code, and then re-create it using the statements shown above:
DROP TABLE customer_profiles;
In this example, data is loaded into the PROFILE_INFO BLOB column in the
CUSTOMER_PROFILES table.
This example builds and populates the BasicFile LOB format column so that it can be
migrated to the SecureFile format.
set serveroutput on
set verify on
set term on
set linesize 200
timing start load_data
execute write_lob('karl.brimmer.doc');
execute write_lob('monica.petera.doc');
execute write_lob('david.sloan.doc');
timing stop
PL/SQL procedure successfully completed.
Note
The elapsed time is much longer than loading the data in the SecureFile format.
These commands read back the 90 records from the CUSTOMER_PROFILES table. For
each record, the size of the LOB value plus the first 200 characters of the LOB are
displayed on screen.
A SQL*Plus timer is started to capture the total elapsed time for the retrieval.
Later, you can use this timing information to compare the performance between the
BasicFile format and the SecureFile format LOBs.
set serveroutput on
set verify on
set term on
set lines 200
timing start read_data
execute read_lob;
timing stop
PL/SQL procedure successfully completed.
timing for: read_data
Elapsed: 00:00:01.15
By querying the DBA_SEGMENTS view, you can see that the LOB segment subtype name
for BasicFile LOB storage is ASSM.
col segment_name format a30
col segment_type format a18

SELECT segment_name, segment_type, segment_subtype
FROM dba_segments
WHERE tablespace_name = 'BF_TBS1'
AND segment_type = 'LOBSEGMENT';

SEGMENT_NAME                   SEGMENT_TYPE       SEGME
------------------------------ ------------------ -----
SYS_LOB0000080068C00004$$      LOBSEGMENT         ASSM
The migration from BasicFile to SecureFile LOB storage format is performed online. This
means that the CUSTOMER_PROFILES table continues to be accessible during the
migration.
This type of operation is called online redefinition. Online redefinition requires an interim
table for data storage.
In this example, the interim table is defined with the SecureFiles LOB storage format. And
the LOB is stored in the sf_tbs1 tablespace.
After the migration is completed, the PROFILE_INFO LOB is stored in the sf_tbs1
tablespace.
CREATE TABLE customer_profiles_interim
(id NUMBER,
first_name VARCHAR2 (40),
last_name VARCHAR2 (80),
profile_info BLOB)
LOB(profile_info) STORE AS SECUREFILE
(TABLESPACE sf_tbs1);
After running this code and completing the redefinition operation, you can drop the interim
table.
connect system/oracle@orcl
DECLARE
error_count PLS_INTEGER := 0;
BEGIN
DBMS_REDEFINITION.START_REDEF_TABLE
('OE1', 'customer_profiles', 'customer_profiles_interim',
'id id, first_name first_name,
last_name last_name, profile_info profile_info',
OPTIONS_FLAG => DBMS_REDEFINITION.CONS_USE_ROWID);
DBMS_REDEFINITION.COPY_TABLE_DEPENDENTS
('OE1', 'customer_profiles', 'customer_profiles_interim',
1, true,true,true,false, error_count);
DBMS_OUTPUT.PUT_LINE('Errors := ' || TO_CHAR(error_count));
DBMS_REDEFINITION.FINISH_REDEF_TABLE
('OE1', 'customer_profiles', 'customer_profiles_interim');
END;
/
connect oe1/oe1@orcl
DROP TABLE customer_profiles_interim;
You can then check the segment type of the migrated LOB. Note that the segment
subtype for SecureFile LOB storage is SECUREFILE. For BasicFile format, it is ASSM.
Question
Which statements accurately describe migrating from BasicFile to the SecureFile
format?
Options:
1.
2.
3.
4.
Answer
An interim table is required for data storage. And the DBMS_REDEFINITION
package is required when migrating from BasicFile to the SecureFile format.
Option 1 is correct. Online redefinition requires an interim table for data storage.
The interim table is defined with the SecureFiles LOB storage format.
Option 2 is incorrect. By querying DBA_SEGMENTS, and not DBA_LOBS, you can
determine the segment name and type for BasicFile LOB storage. The DBA_LOBS
data dictionary view can be used to view the compression and deduplication
settings for the SecureFiles LOB segment.
Option 3 is correct. The DBMS_REDEFINITION package is used to perform online
redefinition, including redefining table columns and column names.
Option 4 is incorrect. The process of migrating from BasicFile to SecureFile format
is known as online, and not offline, redefinition.
DBMS_OUTPUT.ENABLE;
DBMS_OUTPUT.PUT_LINE('Begin inserting rows...');
FOR i IN 1 .. 5 LOOP
v_id:=SUBSTR(p_file, 1, 4);
INSERT INTO product_descriptions
VALUES (v_id, EMPTY_BLOB())
RETURNING detailed_product_info INTO v_b;
loadLOBFromBFILE_proc(v_b,p_file);
DBMS_OUTPUT.PUT_LINE('Row '|| i ||' inserted.');
END LOOP;
COMMIT;
END write_lob;
/
Next you execute the procedures to load the data.
If you are using SQL*Plus, you can set the timing on to observe the time. If you are using
SQL Developer, you issue only these EXECUTE statements. In SQL Developer, some of
the SQL*Plus commands are ignored.
set serveroutput on
set verify on
set term on
set lines 200
Now you connect as system and run the redefinition script. You need to replace orcl
with your sid, and replace OE1 with your ID.
connect system/oracle@orcl
DECLARE
error_count PLS_INTEGER := 0;
BEGIN
DBMS_REDEFINITION.START_REDEF_TABLE
('OE1', 'product_descriptions',
'product_descriptions_interim',
'product_id product_id, detailed_product_info
detailed_product_info',
OPTIONS_FLAG =>
DBMS_REDEFINITION.CONS_USE_ROWID);
DBMS_REDEFINITION.COPY_TABLE_DEPENDENTS
('OE1', 'product_descriptions',
'product_descriptions_interim',
1, true,true,true,false, error_count);
DBMS_OUTPUT.PUT_LINE('Errors := ' ||
TO_CHAR(error_count));
DBMS_REDEFINITION.FINISH_REDEF_TABLE
('OE1', 'product_descriptions',
'product_descriptions_interim');
END;
/
In SQL Developer, using your OE User ID, you remove the interim table using this code.
DROP TABLE product_descriptions_interim;
Next you check the segment type in the data dictionary.
SELECT segment_name, segment_type, segment_subtype
FROM dba_segments
WHERE tablespace_name = 'SF_TBS2'
AND segment_type = 'LOBSEGMENT';
Next you turn on compression and deduplication for your PRODUCT_DESCRIPTIONS
table.
You modify the table to enable deduplication and compression using this code.
ALTER TABLE product_descriptions
MODIFY LOB (detailed_product_info)
(DEDUPLICATE LOB
COMPRESS HIGH);
Finally, you alter the table to reclaim the free space using this code.
ALTER TABLE product_descriptions ENABLE ROW MOVEMENT;
ALTER TABLE product_descriptions SHRINK SPACE COMPACT;
ALTER TABLE product_descriptions SHRINK SPACE;
Summary
The new SecureFile format for LOBs dramatically improves performance, manageability,
and ease of application development. It also offers intelligent compression and
transparent encryption.
To load data into a SecureFile LOB from an external file, you use the
LOADLOBFROMBFILE_PROC procedure, which the WRITE_LOB procedure calls as it inserts
each row. To read the data back, you use the READ_LOB procedure.
The SecureFile format offers features such as deduplication and compression. You can
test how efficient deduplication and compression are on SecureFiles.
You can also turn on or off the LOB encryption, and specify an encryption algorithm.
Encryption is performed at the block level.
You can migrate the older version BasicFile format to the SecureFile format. And the
performance of the SecureFile format LOBs is faster than the BasicFile format LOBs.
With Oracle 11g, you can migrate a BasicFile format LOB to a SecureFile format LOB.
After completing this topic, you should be able to recognize the steps for performing
PIVOT and UNPIVOT operations in various ways on the server side.
When pivoting, an aggregation operator is applied, enabling the query to condense large
data sets into smaller, more readable results. And data that was originally on multiple rows
can be transformed into a single row of output, enabling intra-row calculations without a
SQL JOIN operation.
By performing pivots on the server side, you can remove the processing burden from
client applications and reduce network load, because only the aggregated pivot results
need to traverse the network.
Question
Identify the benefits of using pivoting operations.
Options:
1.
2.
3.
4.
Answer
Benefits of using pivoting operations include enhanced processing speed and
reduced network load.
Option 1 is correct. By performing pivots on the server side the processing burden
is removed from client applications, simplifying client-side development and
potentially enhancing processing speed.
Option 2 is incorrect. Pivoting enables you to transform multiple rows of input into
fewer rows, generally with more columns.
Option 3 is incorrect. When pivoting, an aggregation operator is applied, enabling
the query to condense large data sets into smaller, more readable results.
Option 4 is correct. By performing pivots on the server side, network load is
reduced because only aggregated pivot results need to traverse the network and
not the detail data.
You can use the PIVOT clause to write cross-tabulation queries that rotate rows into
columns, aggregating the data in the process of rotation.
The XML keyword is required when you use either a subquery or the wildcard ANY in the
pivot_in_clause to specify pivot values. You cannot specify XML when you specify
explicit pivot values using expressions in the pivot_in_clause.
If the XML keyword is used, the output will include grouping columns and one column of
XMLType rather than a series of pivoted columns.
table_reference PIVOT [ XML ]
( aggregate_function ( expr ) [[AS] alias ]
[, aggregate_function ( expr ) [[AS] alias ] ]...
pivot_for_clause
pivot_in_clause )
The optional AS alias enables you to specify an alias for each measure.
The aggregate_function operates on the table's data, and the result of the
computation appears in the cross-tab report. It has an implicit GROUP BY based on the
columns in the source data.
The expr argument for the aggregate function is the measure to be pivoted. It must be a
column or expression of the query_table_expression on which the PIVOT clause is
operating.
You use the pivot_for_clause to specify one or more columns whose values are to
be pivoted into columns.
pivot_for_clause =
FOR { column |( column [, column]... ) }
In the pivot_in_clause, you specify the pivot column values from the columns you
specified in the pivot_for_clause.
For expr, you specify a constant value of a pivot column. You can optionally provide an
alias for each pivot column value.
pivot_in_clause =
IN ( { { { expr | ( expr [, expr]... ) } [ [ AS] alias] }...
| subquery | { ANY | ANY [, ANY]...} } )
You can use a subquery to extract the pivot column values by way of a nested subquery.
If you specify ANY, all values of the pivot columns are pivoted into columns.
Subqueries and wildcards are useful if you do not know the specific values in the pivot
columns. However, you will need to do further processing to convert the XML output into a
tabular format.
The values evaluated by the pivot_in_clause become the columns in the pivoted
data.
pivot_in_clause =
IN ( { { { expr | ( expr [, expr]... ) } [ [ AS] alias] }...
| subquery | { ANY | ANY [, ANY]...} } )
Suppose you have recently created a new view, sales_view, in the SH schema.
You want to pivot some of the data in the sales_view view.
SQL> CREATE OR REPLACE VIEW sales_view AS
  2  SELECT
  3    prod_name AS product,
  4    country_name AS country,
  5    channel_id AS channel,
  6    SUBSTR(calendar_quarter_desc, 6, 2) AS quarter,
  7    SUM(amount_sold) AS amount_sold,
  8    SUM(quantity_sold) AS quantity_sold
  9  FROM sh.sales, sh.times, sh.customers,
 10    sh.countries, sh.products
 11  WHERE sales.time_id = times.time_id AND
 12    sales.prod_id = products.prod_id AND
 13    sales.cust_id = customers.cust_id AND
 14    customers.country_id = countries.country_id
 15  GROUP BY prod_name, country_name, channel_id,
 16    SUBSTR(calendar_quarter_desc, 6, 2);
This code displays the definition of sales_view.
SQL> DESCRIBE sales_view
Name                                Null?    Type
----------------------------------- -------- --------------
PRODUCT                             NOT NULL VARCHAR2(50)
COUNTRY                             NOT NULL VARCHAR2(40)
CHANNEL                             NOT NULL NUMBER
QUARTER                                      VARCHAR2(2)
AMOUNT_SOLD                                  NUMBER
QUANTITY_SOLD                                NUMBER
This code displays some of the calendar_quarter_desc column values in the SH
schema.
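A query along these lines (a sketch, assuming sh.times as the source of
calendar_quarter_desc; the exact statement from the course is not reproduced here)
displays those values:
SQL> SELECT DISTINCT calendar_quarter_desc
  2  FROM sh.times
  3  ORDER BY calendar_quarter_desc;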
Note
The two-character quarter value is extracted from the calendar_quarter_desc
column starting at position 6.
This code shows some sample data from the sales_view.
There are currently 9,502 rows selected.
SQL> SELECT product, country, channel, quarter, quantity_sold
     FROM sales_view;

PRODUCT      COUNTRY         CHANNEL QUARTER  QUANTITY_SOLD
------------ ------------ ---------- -------- -------------
Y Box        Italy                 4 01                  21
Y Box        Italy                 4 02                  17
Y Box        Italy                 4 03                  20
. . .
Y Box        Japan                 2 01                  35
Y Box        Japan                 2 02                  39
Y Box        Japan                 2 03                  36
Y Box        Japan                 2 04                  46
Y Box        Japan                 3 01                  65
. . .
Bounce       Italy                 2 01                  34
Bounce       Italy                 2 02                  43
. . .

9502 rows selected.
If you use the QUARTER column to pivot, two key changes are performed by the PIVOT
operator:
the QUARTER column will become multiple columns, each holding one quarter
the row count will drop from 9,502 to just 71, representing the distinct products in the schema
This statement displays the distinct channel_id and channel_desc column values
from the CHANNELS table.
The valid quarter values are 01, 02, 03, and 04.
SQL> SELECT DISTINCT channel_id, channel_desc
2 FROM sh.channels
3 ORDER BY channel_id;
CHANNEL_ID CHANNEL_DESC
---------- --------------------
         2 Partners
         3 Direct Sales
         4 Internet
         5 Catalog
         9 Tele Sales
This code shows how to pivot on the QUARTER column in the sales_view data.
It uses a subquery inline view in the FROM clause. This is required, because when you
issue a SELECT * directly from the sales view, the query outputs rows for each row of
sales_view.
SQL> SELECT *
  2  FROM
  3    (SELECT product, quarter, quantity_sold
  4     FROM sales_view) PIVOT (sum(quantity_sold)
  5     FOR quarter IN ('01', '02', '03', '04'))
  6  ORDER BY product DESC;

PRODUCT                    '01'       '02'       '03'       '04'
------------------- ---------- ---------- ---------- ----------
Y Box                      1455       1766       1716       1992
Xtend Memory               3146       4121       4122
Unix/Windows 1-user        4259       3887       4601       4049
Standard Mouse             3376       1699       2654       2427
Smash up Boxing            1608       2127       1999       2110
. . .

71 rows selected.
The result set shows the product column followed by a column for each value of the
quarter specified in the IN clause.
The numbers in the pivoted output are the sum of quantity_sold for each product at
each quarter.
If you also specify an alias for each measure, the column name is a concatenation of the
pivot column value or alias, an underscore (_), and the measure alias.
For example, you can use this code to specify aliases.
SQL> SELECT *
  2  FROM (SELECT product, quarter, quantity_sold
  3        FROM sales_view) PIVOT (sum(quantity_sold)
  4        FOR quarter IN ('01' AS Q1, '02' AS Q2, '03' AS Q3,
  5                        '04' AS Q4))
  6  ORDER BY product DESC;

PRODUCT                     Q1         Q2         Q3         Q4
----------------- ---------- ---------- ---------- ----------
Y Box                   1455       1766       1716       1992
Xtend Memory            3146       4121       4122
Unix/Windows            4259       3887       4601       4049
Standard Mouse          3376       1699       2654       2427
. . .

71 rows selected.
Prior to Oracle Database 11g, you could accomplish this using the CASE expression
syntax.
The advantage of the new syntax over the syntax used prior to Oracle Database 11g is
that it enables greater query optimization by the Oracle database. The query optimizer
recognizes the PIVOT keyword and, as a result, uses algorithms optimized to process it
efficiently.
SQL> SELECT product,
  2    SUM(CASE when quarter = '01'
  3        THEN quantity_sold ELSE NULL END) Q1,
  4    SUM(CASE when quarter = '02'
  5        THEN quantity_sold ELSE NULL END) Q2,
  6    SUM(CASE when quarter = '03'
  7        THEN quantity_sold ELSE NULL END) Q3,
. . .

. . .
  5  PIVOT
  6    (SUM(order_total) FOR order_mode IN ('direct' AS Store,
  7     'online' AS Internet))
  8  ORDER BY year;
SQL> SELECT * FROM pivot_table ORDER BY year;
YEAR
STORE
INTERNET
---------- ---------- ---------1990
61655.7
1996
5546.6
1997
310
1998
309929.8
100056.6
1999 1274078.8 1271019.5
2000
252108.3
393349.4
6 rows selected.
a pivoting column is required to be a column of the table reference on which the pivot is operating
. . .
71 rows selected.
In this example, the pivot uses multiple pivot columns, CHANNEL and QUARTER, and more
pivot values (pairs of channel and quarter) have been specified.
SQL> SELECT *
  2  FROM
  3    (SELECT product, channel, quarter, quantity_sold
  4     FROM sales_view
  5    ) PIVOT (sum(quantity_sold) FOR (channel, quarter) IN
  6      ((3, '01') AS Direct_Sales_Q1,
  7       (3, '02') AS Direct_Sales_Q2,
  8       (3, '03') AS Direct_Sales_Q3,
  9       (3, '04') AS Direct_Sales_Q4,
 10       (4, '01') AS Internet_Sales_Q1,
 11       (4, '02') AS Internet_Sales_Q2,
 12       (4, '03') AS Internet_Sales_Q3,
 13       (4, '04') AS Internet_Sales_Q4))
 14  ORDER BY product DESC;
Oracle Database 11g enables you to pivot using multiple aggregations.
The query in this example pivots SALES_VIEW on the CHANNEL column. The
amount_sold and quantity_sold measures are pivoted. The query creates column
headings by concatenating the pivot columns with the aliases of the aggregate functions,
plus an underscore.
When you use multiple aggregation, you can omit the alias for only one aggregation. If
you omit an alias, the corresponding result column name is the pivot value, or the alias for
the pivot value.
SQL> SELECT *
  2  FROM
  3    (SELECT product, channel, amount_sold, quantity_sold
  4     FROM sales_view) PIVOT (SUM(amount_sold) AS sums,
  5                             SUM(quantity_sold) as sumq
  6     FOR channel IN (3 AS Dir_Sales, 4 AS Int_Sales))
  7  ORDER BY product DESC;

PRODUCT         DIR_SALES_SUMS DIR_SALES_SUMQ INT_SALES_SUMS INT_SALES_SUMQ
--------------- -------------- -------------- -------------- --------------
Y Box               1081050.96           3552      382767.45           1339
Xtend Memory         217011.38           8562       40553.93           1878
Unix/Windows        1999882.17           9313      376071.62           1872
Standard Mouse       153199.63           6140       28768.04           1195
Smash up Boxing      174592.24           5106       27858.84            904
...

71 rows selected.
You can distinguish between NULL values that are generated from the use of PIVOT and
those that exist in the source data.
This example illustrates NULL that PIVOT generates.
The first code example assumes an existing table named sales2.
SQL> SELECT * FROM sales2;

   PROD_ID QTR AMOUNT_SOLD
---------- --- -----------
       100 Q1           10
       100 Q1           20
       100 Q2
       200 Q1           50
The query in this second code example returns prod_id rows and the resulting pivot
columns Q1, Q1_COUNT_TOTAL, Q2, and Q2_COUNT_TOTAL.
For each unique value of prod_id, Q1_COUNT_TOTAL returns the total number of rows
whose QTR value is Q1, and Q2_COUNT_TOTAL returns the total number of rows whose
QTR value is Q2.
SQL> SELECT * FROM sales2;

   PROD_ID QTR AMOUNT_SOLD
---------- --- -----------
       100 Q1           10
       100 Q1           20
       100 Q2
       200 Q1           50

SQL> SELECT *
  2  FROM
  3  (SELECT prod_id, qtr, amount_sold
  4   FROM sales2) PIVOT (SUM(amount_sold), COUNT(*) AS count_total
  5   FOR qtr IN ('Q1', 'Q2') )
  6  ORDER BY prod_id DESC;

   PROD_ID  Q1 Q1_COUNT_TOTAL  Q2 Q2_COUNT_TOTAL
---------- --- -------------- --- --------------
       100  30              2                  1
       200  50              1                  0
The result set for the second code example shows that there are two sales rows for
prod_id 100 for quarter Q1, and one sales row for prod_id 100 and quarter Q2.
For prod_id 200, there is one sales row for quarter Q1 and no sales row for quarter Q2.
Using Q2_COUNT_TOTAL, you can identify that the NULL for PROD_ID 100 in Q2 results
from a row in the original table whose measure is NULL. The NULL for PROD_ID 200 in
Q2 is due to no row being present in the original table for prod_id 200 in quarter Q2.
SQL> SELECT * FROM sales2;

PROD_ID QTR AMOUNT_SOLD
------- --- -----------
    100 Q1           10
    100 Q1           20
    100 Q2
    200 Q1           50
SQL> SELECT *
  2  FROM
  3  (SELECT prod_id, qtr, amount_sold
  4   FROM sales2) PIVOT (SUM(amount_sold), COUNT(*) AS count_total
  5   FOR qtr IN ('Q1', 'Q2') )
  6  ORDER BY prod_id DESC;

PROD_ID  Q1 Q1_COUNT_TOTAL  Q2 Q2_COUNT_TOTAL
------- --- -------------- --- --------------
    100  30              2                  1
    200  50              1                  0
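As a follow-up, here is a minimal sketch that is not part of the course material. It uses the COUNT columns to tell the two kinds of NULL apart, and it assumes the pivot values are given the aliases q1 and q2 so that the generated columns can be referenced by name. The query returns the prod_id values whose Q2 total is NULL even though a Q2 row exists in sales2, that is, where the NULL comes from a NULL measure rather than from missing data.

SQL> SELECT prod_id
  2  FROM
  3  (SELECT prod_id, qtr, amount_sold
  4   FROM sales2) PIVOT (SUM(amount_sold), COUNT(*) AS count_total
  5   FOR qtr IN ('Q1' AS q1, 'Q2' AS q2))     -- aliases q1 and q2 are an assumption
  6  WHERE q2 IS NULL AND q2_count_total > 0;  -- NULL total, but at least one Q2 row exists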
You use the XML keyword to specify pivot values. You can do this using either of two
methods:
a subquery
the ANY keyword
the ANY keyword
If you use the ANY keyword, the XML string for each row includes only the pivot values
found in the input data for that row.
a subquery
If you use a subquery, the XML string includes all pivot values found by the subquery, even
if there are no aggregate values.
Each output row includes a single column of XMLType containing an XML string for all
value and measure pairs.
The XML string for each row holds aggregated data corresponding to the row's implicit
GROUP BY value. The values of the pivot column are evaluated at execution time.
The ANY keyword acts as a wildcard. If you specify ANY, all values found in the pivot
column will be used for pivoting.
When you use the ANY keyword, the XML string for each output row includes only the
pivot values found in the input data corresponding to that row.
This example shows the use of the ANY wildcard keyword.
SQL> SET LONG 1024;
  2  SELECT *
  3  FROM
  4  (SELECT product, channel, quantity_sold
  5   FROM sales_view
  6  ) PIVOT XML (SUM(quantity_sold) FOR channel IN (ANY) )
  7  ORDER BY product DESC;
The XML output includes all channel values in the sales_view view. The ANY keyword is
available only in PIVOT operations as part of an XML operation. This output includes data
for cases where the channel exists in the data set.
You can use wildcards or subqueries to specify the pivot IN list members when the
values of the pivot column are not known.
PRODUCT
--------------------------------------------------
CHANNEL_XML
------------------------------------------------------------------
. . .
1.44MB External 3.5" Diskette
<PivotSet>
<item><column name = "CHANNEL">3</column><column name =
"SUM(QUANTITY_SOLD)">14189</column></item>
<item><column name = "CHANNEL">2</column><column name =
"SUM(QUANTITY_SOLD)">6455</column></item>
<item><column name = "CHANNEL">4</column><column name =
"SUM(QUANTITY_SOLD)">2464</column></item></PivotSet>
71 rows selected.
This example shows how to specify PIVOT values using a subquery.
SQL> SELECT *
2 FROM
3
(SELECT product, channel, quantity_sold
4
FROM sales_view
5
) PIVOT XML(SUM(quantity_sold)
6
FOR channel IN (SELECT distinct channel_id
7
FROM sh.channels));
This code shows part of the output after running the query. The XML output includes all
channel values and the sales data corresponding to each channel and for each product.
PRODUCT
----------
CHANNEL_XML
----------------------------------------------------------------
. . .
Y Box
<PivotSet>
<item><column name = "CHANNEL">9</column><column name =
"SUM(QUANTITY_SOLD)">1</column></item>
<item><column name = "CHANNEL">2</column><column name =
"SUM(QUANTITY_SOLD)">2037</column></item>
<item><column name = "CHANNEL">5</column><column name =
"SUM(QUANTITY_SOLD)"></column></item>
<item><column name = "CHANNEL">3</column><column name =
"SUM(QUANTITY_SOLD)">3552</column></item>
. . .
Subquery-based pivots give results different from those of the ANY wildcard. In this
example, when you use a subquery, the XMLType column will show value and measure
pairs for all channels for each product even if the input data has no such
product/channel combination.
For example, the XML string in this example shows Channel 5, although it has no value
for the SUM(QUANTITY_SOLD) column. Pivots that use a subquery will, therefore, often
have longer output than queries based on the ANY keyword.
PRODUCT
----------
CHANNEL_XML
----------------------------------------------------------------
. . .
Y Box
<PivotSet>
<item><column name = "CHANNEL">9</column><column name =
"SUM(QUANTITY_SOLD)">1</column></item>
<item><column name = "CHANNEL">2</column><column name =
"SUM(QUANTITY_SOLD)">2037</column></item>
<item><column name = "CHANNEL">5</column><column name =
"SUM(QUANTITY_SOLD)"></column></item>
<item><column name = "CHANNEL">3</column><column name =
"SUM(QUANTITY_SOLD)">3552</column></item>
. . .
Depending on how you process the query results, subquery-style output may be more
convenient to work with than the results derived from ANY.
PRODUCT
----------
CHANNEL_XML
----------------------------------------------------------------
. . .
Y Box
<PivotSet>
<item><column name = "CHANNEL">9</column><column name =
"SUM(QUANTITY_SOLD)">1</column></item>
<item><column name = "CHANNEL">2</column><column name =
"SUM(QUANTITY_SOLD)">2037</column></item>
<item><column name = "CHANNEL">5</column><column name =
"SUM(QUANTITY_SOLD)"></column></item>
<item><column name = "CHANNEL">3</column><column name =
"SUM(QUANTITY_SOLD)">3552</column></item>
. . .
If you are working with pivoted data, an UNPIVOT operation cannot reverse any
aggregations that have been made by PIVOT or any other means.
In this example, the first table contains two rows before the unpivoting operation. In the
second table, the unpivoting operation on the QUARTER column displays five rows.
Unpivoting generally transforms fewer rows of input into more rows.
Data from sources such as spreadsheets and flat files is often in pivoted form. For
instance, sales data will often be stored in a separate column for each time period.
UNPIVOT can normalize such data, transforming multiple columns into a single column.
When the data is normalized with UNPIVOT, it is much more accessible to relational
database processing with SQL. By placing data in a normalized layout, queries can
readily apply SQL aggregate and analytic functions, enabling powerful analysis. Similarly,
it is more efficient to specify the WHERE clause predicates on normalized data.
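As a brief hedged sketch of this point, using the hypothetical pivotedTable (with quarterly columns Q1 through Q4) that appears later in this topic, a single predicate and aggregate can cover all quarters once the data is normalized, instead of repeating logic for each quarterly column:

SQL> SELECT quarter, SUM(quantity_sold) AS total_qty
  2  FROM pivotedTable
  3  UNPIVOT (quantity_sold FOR quarter IN (Q1, Q2, Q3, Q4))
  4  WHERE quarter IN ('Q2', 'Q3')   -- one predicate instead of per-column logic
  5  GROUP BY quarter;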
The UNPIVOT clause rotates columns from a previously pivoted table or a regular table
into rows.
When using the UNPIVOT operator, you need to specify
the names for the columns that will result from the unpivot operation
the columns that will be unpivoted back into values of the column specified in the
unpivot_for_clause
You can use an alias to map the column name to another value.
The UNPIVOT operation turns a set of value columns into one column. All the value
columns must belong to the same data type group, such as numeric or character.
If all the value columns are CHAR, the unpivoted column is CHAR. If any value column is
VARCHAR2, the unpivoted column is VARCHAR2. If all the value columns are NUMBER, the
unpivoted column is NUMBER.
If any value column is BINARY_DOUBLE, the unpivoted column is BINARY_DOUBLE. If no
value column is BINARY_DOUBLE, but any value column is BINARY_FLOAT, the
unpivoted column is BINARY_FLOAT.
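As a minimal illustration of these precedence rules, consider the following sketch. The table name and columns are hypothetical and are not part of the course schema; unpivoting a BINARY_FLOAT column together with a BINARY_DOUBLE column produces a BINARY_DOUBLE result column.

SQL> CREATE TABLE type_demo (id NUMBER, m1 BINARY_FLOAT, m2 BINARY_DOUBLE);

SQL> SELECT id, measure, val
  2  FROM type_demo
  3  UNPIVOT (val FOR measure IN (m1, m2));  -- VAL is BINARY_DOUBLE because M2 is BINARY_DOUBLE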
The UNPIVOT clause rotates columns into rows. The [INCLUDE | EXCLUDE NULLS] clause
gives you the option of including or excluding null-valued rows; the default is
EXCLUDE NULLS.
(Output excerpt: the pivoted source table referenced above has one quarterly column for each of Q1, Q2, Q3, and Q4; after unpivoting, each quarter appears as a separate row.)

QUARTER QUANTITY_SOLD
------- -------------
Q1               1455
Q2               1766
Q3               1716
Q4               1992
Q1               3146
Q2               4121
Q3               4122
Q4               3802
Q1               4259
Prior to Oracle Database 11g, you could simulate the UNPIVOT syntax using these
existing SQL commands.
As with PIVOT, the UNPIVOT syntax enables more efficient query processing. The
UNPIVOT keyword alerts the query optimizer to the desired behavior. As a result, the
optimizer calls highly efficient algorithms.
SQL> SELECT product, 'Q1' AS quarter, Q1 AS quantity_sold
  2  FROM pivotedTable WHERE Q1 IS NOT NULL
  3  UNION ALL
  4  SELECT product, 'Q2' AS quarter, Q2 AS quantity_sold
  5  FROM pivotedTable WHERE Q2 IS NOT NULL
  6  UNION ALL
  7  SELECT product, 'Q3' AS quarter, Q3 AS quantity_sold
  8  FROM pivotedTable WHERE Q3 IS NOT NULL
  9  UNION ALL
 10  SELECT product, 'Q4' AS quarter, Q4 AS quantity_sold
 11  FROM pivotedTable WHERE Q4 IS NOT NULL;
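For comparison, this is a sketch of the equivalent Oracle Database 11g UNPIVOT form of the simulation above, assuming the same pivotedTable. The default EXCLUDE NULLS behavior plays the role of the WHERE ... IS NOT NULL filters in the UNION ALL version; specifying INCLUDE NULLS instead would retain the null-valued rows.

SQL> SELECT product, quarter, quantity_sold
  2  FROM pivotedTable
  3  UNPIVOT (quantity_sold FOR quarter IN (Q1, Q2, Q3, Q4))  -- EXCLUDE NULLS is the default
  4  ORDER BY product, quarter;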
For reference, this output shows the same yearly totals as the pivot_table example earlier in this topic, presented in unpivoted (row) form as ORDER_MODE and YEARLY_TOTAL values.

ORDER_MODE YEARLY_TOTAL
---------- ------------
direct          61655.7
direct           5546.6
direct              310
direct         309929.8
online         100056.6
direct        1274078.8
online        1271019.5
direct         252108.3
online         393349.4

9 rows selected.
Question
What are the functions of the UNPIVOT operator?
Options:
1.
2.
3.
4.
Answer
The UNPIVOT operator is used to rotate data from columns into rows, or to
normalize data.
Option 1 is correct. Data from sources such as spreadsheets and flat files is often
in pivoted form. UNPIVOT can normalize such data, transforming multiple columns
into a single column.
Option 2 is incorrect. An UNPIVOT does not reverse a PIVOT operation. Instead, it
rotates data from columns into rows.
Option 3 is incorrect. If you are working with pivoted data, an UNPIVOT operation
cannot reverse any aggregations that have been made by PIVOT or any other
means.
Option 4 is correct. An UNPIVOT operation does not reverse a PIVOT operation.
Instead, it rotates data found in multiple columns of a single row into multiple rows
of a single column.
Question
Suppose the data type of one value column is BINARY_DOUBLE and the data type
of the remainder of the value columns is BINARY_FLOAT.
When unpivoting these columns, what is the resulting unpivoted column data
type?
Options:
1. BINARY_DOUBLE
2. BINARY_FLOAT
3. CHAR
4. NUMBER
5. VARCHAR2
Answer
The data type of the resulting unpivoted column will be BINARY_DOUBLE.
Option 1 is correct. If the data type of any value column is BINARY_DOUBLE, the
resulting unpivoted column data type will be BINARY_DOUBLE.
Option 2 is incorrect. If no value column has the data type of BINARY_DOUBLE,
and the data type of any value column is BINARY_FLOAT, the resulting unpivoted
column data type will be BINARY_FLOAT.
Option 3 is incorrect. If the data type of all value columns is CHAR, the resulting
unpivoted column data type will be CHAR.
Option 4 is incorrect. If the data type of all value columns is NUMBER, the resulting
unpivoted column data type will be NUMBER.
Option 5 is incorrect. If the data type of any value column is VARCHAR2, the
resulting unpivoted column data type will be VARCHAR2.
(Excerpt from the structure of the multi_col_pivot table)
DIRECT_SALES_Q1      NUMBER
INTERNET_SALES_Q1    NUMBER
This example shows the code used to unpivot the CHANNEL and QUARTER columns using
the multi_col_pivot table.
Here, explicit values have been used for the unpivoted CHANNEL and QUARTER columns.
The query in this example returns 142 rows.
SQL> SELECT *
  2  FROM multi_col_pivot
  3  UNPIVOT (quantity_sold FOR (channel, quarter) IN
  4  ( Direct_Sales_Q1 AS ('Direct', 'Q1'),
  5    Internet_Sales_Q1 AS ('Internet', 'Q1') ) )
  6  ORDER BY product DESC, quarter;
PRODUCT                   CHANNEL  QUARTER QUANTITY_SOLD
------------------------- -------- ------- -------------
Y Box                     Internet Q1                253
Y Box                     Direct   Q1                771
Xtend Memory              Internet Q1                350
Xtend Memory              Direct   Q1               1935
. . .

142 rows selected.
This example demonstrates unpivoting on the CHANNEL and QUARTER columns without
using aliases as explicit values for the unpivoted columns.
In this case, each unpivoted column uses the column name as its value.
SQL> SELECT *
  2  FROM multi_col_pivot
  3  UNPIVOT (quantity_sold FOR (channel, quarter) IN
  4  (Direct_Sales_Q1, Internet_Sales_Q1 ) );
PRODUCT      CHANNEL           QUARTER           QUANTITY_SOLD
------------ ----------------- ----------------- -------------
Y Box        DIRECT_SALES_Q1   DIRECT_SALES_Q1             771
Y Box        INTERNET_SALES_Q1 INTERNET_SALES_Q1           253
Xtend Memory DIRECT_SALES_Q1   DIRECT_SALES_Q1            1935
. . .

142 rows selected.
This example shows the code used to create the multi_agg_pivot table using the
CHANNEL column, and the amount_sold and quantity_sold measures in the SH
schema.
This example uses only the CHANNEL values 3 (Direct Sales) and 4 (Internet), but
you can use other values for the CHANNEL column.
In this case, the query creates column headings by concatenating the pivot value aliases
with the aliases of the aggregate functions, separated by an underscore.
SQL> CREATE TABLE multi_agg_pivot AS
  2  SELECT *
  3  FROM
  4  (SELECT product, channel, quarter, quantity_sold, amount_sold
  5   FROM sales_view) PIVOT
  6  (sum(quantity_sold) sumq, sum(amount_sold) suma
  7   FOR channel IN (3 AS Direct, 4 AS Internet) )
  8  ORDER BY product DESC;

Table created.
SQL> SELECT * FROM multi_agg_pivot;
PRODUCT    QUARTER DIRECT_SUMQ DIRECT_SUMA INTERNET_SUMQ INTERNET_SUMA
---------- ------- ----------- ----------- ------------- -------------
. . .
Bounce     01             1000    21738.97           347       6948.76
Bounce     02             1212    26417.37           453       9173.59
Bounce     03             1746    37781.27           528      10029.99
Bounce     04             1741    38838.63           632      12592.07
. . .

283 rows selected.
When you use multiple aggregations, you can omit the alias for only one aggregation. If
you omit an alias, the corresponding result column is named using only the pivot value, or
the alias for the pivot value.
This code shows the structure of the newly created multi_agg_pivot table.
SQL> DESCRIBE multi_agg_pivot
Name                 Null?    Type
-------------------- -------- ------------
PRODUCT              NOT NULL VARCHAR2(50)
QUARTER                       VARCHAR2(8)
DIRECT_SUMQ                   NUMBER
DIRECT_SUMA                   NUMBER
INTERNET_SUMQ                 NUMBER
INTERNET_SUMA                 NUMBER
This example uses the newly created multi_agg_pivot table. This code unpivots the
measures amount_sold and quantity_sold.
Channels are mapped to the value "3" for Direct_sumq and Direct_suma, and to the
value "4" for Internet_sumq and Internet_suma.
The channel mapping is consistent with the values used in the pivot operation that
created the multi_agg_pivot table. However, any values could have been used for the
channel mappings.
SQL> SELECT *
  2  FROM multi_agg_pivot
  3  UNPIVOT ((total_amount_sold, total_quantity_sold)
  4  FOR channel IN ((Direct_sumq, Direct_suma) AS 3,
  5   (Internet_sumq, Internet_suma) AS 4 ))
  6  ORDER BY product DESC, quarter, channel;
PRODUCT QUARTER CHANNEL TOTAL_AMOUNT_SOLD TOTAL_QUANTITY_SOLD
------- ------- ------- ----------------- -------------------
Bounce  01            3              1000            21738.97
Bounce  01            4               347             6948.76
Bounce  02            3              1212            26417.37
Bounce  02            4               453             9173.59
Bounce  03            3              1746            37781.27
Bounce  03            4               528            10029.99
Bounce  04            3              1741            38838.63
Bounce  04            4               632            12592.07
. . .

566 rows selected.
Summary
The new pivot functionality in Oracle Database 11g enables you to transform multiple
rows of input into fewer rows, generally with more columns. When pivoting, an
aggregation operator is applied, enabling the query to condense large data sets into
smaller, more readable results. Pivoting is therefore a key technique in business
intelligence (BI) queries. You can perform pivots on the server side, which can improve
processing speed and reduce network load.
You use the PIVOT clause to write cross-tabulation queries that rotate rows into columns,
aggregating the data in the process of rotation. You use the pivot_in_clause to
specify pivot values. You use the pivot_for_clause to specify one or more columns
whose values are to be pivoted into columns. When pivoting you can specify an alias for
each measure, and you can optionally provide an alias for each pivot column value.
You can pivot multiple columns. You can also pivot using multiple aggregations. Using the
XML keyword in the PIVOT syntax requires using either the ANY wildcard keyword or a
subquery. If you use the ANY keyword, the XML string for each row includes only the pivot
values found in the input data for that row. If you use a subquery, the XML string includes
all the pivot values found by the subquery, even if there are no aggregate values.
An UNPIVOT operation rotates data found in multiple columns of a single row into multiple
rows of a single column. You use the unpivot_for_clause to specify one or more
names for the columns that will result from the unpivot operation. You use the
unpivot_in_clause to specify the input data columns whose names will become
values in the output columns of the unpivot_for_clause. When unpivoting, you can
use an alias to map the column name to another value.
You can use the UNPIVOT clause to unpivot on multiple columns and on multiple
aggregations.
Implementing Pivoting
Learning objective
After completing this topic, you should be able to create reports with the PIVOT operator.
Exercise overview
In this exercise, you're required to identify the code that correctly creates reports using
pivoting.
This involves the following tasks:
DESCRIBE my_sales_view

Name           Null?    Type
-------------- -------- ------------
PRODUCT        NOT NULL VARCHAR2(50)
COUNTRY        NOT NULL VARCHAR2(40)
CHANNEL        NOT NULL NUMBER
MONTH_YEAR              VARCHAR2(43)
AMOUNT_SOLD             NUMBER
QUANTITY_SOLD           NUMBER

6 rows selected.
Step 1 of 2
You want to view the distinct values in the MONTH_YEAR column of the
MY_SALES_VIEW view so you can use them to pivot. The results should be sorted
by MONTH_YEAR.
Which statement will return the required results?
Options:
1.
2.
3.
4.
Result
This statement will return the required results:
SELECT DISTINCT month_year FROM my_sales_view ORDER BY
month_year;
Option 1 is incorrect. This statement will return the distinct MONTH_YEAR values
from the MY_SALES_VIEW view. However, it will not sort them by MONTH_YEAR as
required.
Option 2 is incorrect. This statement will result in an error because a FROM clause
is required in the SELECT statement.
Option 3 is correct. This statement will return all of the distinct MONTH_YEAR
column values from the MY_SALES_VIEW view.
Option 4 is incorrect. To return the distinct values from a column, the DISTINCT
keyword is required in the SELECT statement.
Step 2 of 2
You want to create a report that pivots on the MONTH_YEAR column of the
MY_SALES_VIEW view. The numbers in the pivoted output should be the sum of
the QUANTITY_SOLD for each product for the first and fourth months in 2007.
Which code completes the statement that will create the required report?
SELECT *
FROM
(SELECT product, month_year, quantity_sold
<MISSING CODE>
ORDER BY product DESC;
Options:
1.
2.
3.
Result
To create the required report, you complete the statement with a PIVOT clause that sums
QUANTITY_SOLD for the MONTH_YEAR values 2007-01 and 2007-04.
Option 1 is incorrect. To create the required report, IN ('2007-01', '2007-04' )) must be included to complete the FOR clause.
Option 2 is incorrect. To create the required report, the PIVOT statement should
be PIVOT (sum(quantity_sold).
Option 3 is correct. This statement will return a list of products for the
MONTH_YEAR values 2007-01 and 2007-04. The numbers in the pivoted output
are the sum of the QUANTITY_SOLD for each product.
You want to create a report that pivots on both the CHANNEL and MONTH_YEAR columns
for the quantity sold.
Step 1 of 2
You want to create a report that pivots on both the CHANNEL and MONTH_YEAR
columns for the quantity sold. You only want to use the CHANNEL column values 3
(Direct Sales) and 4 (Internet), and only the 2007-01 value for the
MONTH_YEAR column.
Which code completes the statement that will create the required report?
SELECT *
FROM
(SELECT product, channel, month_year, quantity_sold
FROM my_sales_view) <MISSING CODE>
AS Direct_Sales_01_2007, (4, '2007-01') AS Internet_Sales_01_2007))
ORDER BY product DESC;
Options:
1.
2.
3.
Result
To create the required report, you complete the statement using the code
PIVOT (sum(quantity_sold) FOR (channel, month_year) IN ((3,
'2007-01')
Option 1 is incorrect. To create the required report, the PIVOT keyword is required
before (sum(quantity_sold).
Option 2 is correct. This statement creates a report that pivots on both the
CHANNEL and MONTH_YEAR columns of the MY_SALES_VIEW view for the quantity
sold. The results will include PRODUCT, DIRECT_SALES_01_2007, and
INTERNET_SALES_01_2007 columns.
Step 2 of 2
You want to create a report that displays the product, channel, amount sold, and
quantity sold from the MY_SALES_VIEW view. The report should pivot on the sum
of the amount sold and display the sum of the quantity sold. Only the CHANNEL
column values 3 (Direct Sales) and 4 (Internet) should be used.
Which code completes the statement that will create the required report?
SELECT *
FROM
(SELECT product, channel, amount_sold, quantity_sold
FROM my_sales_view) PIVOT
<MISSING CODE>
ORDER BY product DESC;
Options:
1.
2.
3.
Result
To create the required report, you complete the statement to pivot on the sum of
the values.
Option 1 is incorrect. To create the required report, the statement should pivot on
the sum of the values in the AMOUNT_SOLD column and not on just the
AMOUNT_SOLD column itself.
Option 2 is incorrect. To create the required report, the IN clause should specify
that the values returned be from CHANNEL 3 (Direct Sales) and 4
(Internet Sales).
Option 3 is correct. This statement will create the required report, displayed in
PRODUCT, DIR_SALES_SUMS, DIR_SALES_SUMQ, and INT_SALES_SUMS
columns.
You have successfully created and modified reports using the PIVOT operator in Oracle
Database 11g.