Database Systems Handbook 3rd Complete Updated by Muhammad Sharif
==============
Dedication
I dedicate all my efforts to my reader who gives me an urge and inspiration
to work more.
Muhammad Sharif
Author
Database Systems Handbook
Acknowledgments
We are grateful to the numerous individuals who contributed to the preparation of this handbook on relational database systems and management; the 3rd edition was completed on 09/09/2022.
First, we wish to thank our reviewers for their detailed feedback.
Levels of Data
1. Real world Data (Entity, attributes)
2. Metadata (Record types, item types, Max, Min Length)
3. Data occurrences (Employee_Name =>'Amir')
Categories of Data
Types of Data
A database is a collection of related data. Data are facts, which can be recorded, stored, retrieved, or
deleted. A typical database represents some aspect of the real world and is used for specific purposes.
This very general definition is usually more restricted. According to [6] a database has the following
properties:
A database represents some aspect of the real world, which is called Universe of Discourse (UoD). It has
some source from which data are derived and some degree of interaction with the real world.
A database is a logically coherent collection of data with some underlying meaning.
A database is designed, built, and populated with data for a specific purpose. There is an audience which
is interested in the contents of the database.
A more precise definition is given in ISO/IEC 2382-1. A database is a set of data organized according to
some data model; it describes properties of certain objects and relationships among them. It can be
used in one or more applications.
Example: metadata for the relation Class_Roster is stored in catalogs such as Attr_Cat(attr_name, rel_name, type, position), which record each attribute's name, its relation, its type, its position in the relation (1, 2, 3, ...), and access rights on objects. The simple definition of metadata is data about data.
1997: XML applied to database processing. Many vendors begin to integrate XML into DBMS products.
The ANSI-SPARC Application systems Architecture levels
2-tier architecture (basic client-server APIs such as ODBC, JDBC, and ORDS are used); the client and the database are connected over a network through these APIs.
3-tier architecture (Used for web applications, it uses a web server to connect with a database server).
Three-tier architecture is a well-established software application architecture that organizes applications into three
logical and physical computing tiers: the presentation tier, or user interface; the application tier, where data is
processed; and the data tier, where the data associated with the application is stored and managed.
The chief benefit of three-tier architecture is that because each tier runs on its own infrastructure, each tier can be
developed simultaneously by a separate development team, and can be updated or scaled as needed without
impacting the other tiers.
Sometimes the web server (e.g., Apache) and the application server (e.g., Oracle WebLogic Server) are divided into two separate layers within the three-tier architecture; some books then call this a four-tier architecture. More details are below.
For the web-accessible database approach we use: client -> application -> web server -> application server -> DBMS -> database.
In Uniform Memory Access (UMA), three types of buses are used: single, multiple, and crossbar. In Non-Uniform Memory Access (NUMA), two types of buses are used: tree and hierarchical.
Advantages of NUMA
Improves the scalability of the system.
Memory bottleneck (shortage of memory) problem is minimized in this architecture.
NUMA machines provide a linear address space, allowing all processors to directly address all memory.
Distributed Databases
Distributed database system (DDBS) = Database Systems + Communication
A set of databases in a distributed system that can appear to applications as a single data source.
A distributed DBMS (DDBMS) can have the actual database and DBMS software distributed over many sites,
connected by a computer network.
Distributed DBMS architectures fall into three broad types:
1. Client-Server
2. Collaborative/Multi-server
3. Middleware or Peer-to-Peer
Client-server: A Client-Server system has one or more client processes and one or more server processes,
and a client process can send a query to any one server process. Clients are responsible for user-interface
issues, and servers manage data and execute transactions. There may be multiple server processes. The two
different client-server architecture models are:
1. Single Server Multiple Client
2. Multiple Server Multiple Client
Client Server architecture layers
1. Presentation layer
2. Logic layer
3. Data layer
Presentation layer
The basic job of this layer is to provide a user interface, usually a graphical user interface (GUI) consisting of menus, buttons, icons, etc. The presentation tier presents information related to work such as browsing, purchasing, and shopping-cart contents. It communicates with the other tiers by sending results to the browser/client tier and the other tiers in the network. It is also called the external layer.
Logic layer
The logical tier is also known as the data access tier or middle tier. It lies between the presentation tier and the data tier. It controls the application's functions by performing processing. The components that make up this layer exist on the server and assist in resource sharing; these components also define business rules such as government legal rules, data rules, and business algorithms that are designed to keep the data structure consistent. This is also known as the conceptual layer.
Data layer
The data layer is the physical database tier where data is stored and manipulated. It is the internal layer of the database management system, where the data is stored.
Collaborative/Multi server: Collaborating Server system. We can have a collection of database servers,
each capable of running transactions against local data, which cooperatively execute transactions spanning
multiple servers. This is an integrated database system formed by a collection of two or more autonomous
database systems. Multi-DBMS can be expressed through six levels of schema:
1. Multi-database View Level − Depicts multiple user views comprising subsets of the integrated distributed
database.
2. Multi-database Conceptual Level − Depicts integrated multi-database that comprises global logical multi-
database structure definitions.
3. Multi-database Internal Level − Depicts the data distribution across different sites and multi-database to
local data mapping.
4. Local database View Level − Depicts a public view of local data.
5. Local database Conceptual Level − Depicts local data organization at each site.
6. Local database Internal Level − Depicts physical data organization at each site.
There are two design alternatives for multi-DBMS −
1. A model with a multi-database conceptual level.
2. Model without multi-database conceptual level.
Middleware or Peer-to-Peer: The middleware architecture is designed to allow a single query to span multiple servers, without requiring all database servers to be capable of managing such multisite execution strategies. It is especially attractive when trying to integrate several legacy systems whose basic capabilities cannot be extended. In the peer-to-peer architecture model for a DDBMS, each peer acts both as a client and as a server for imparting database services. The peers share their resources with other peers and coordinate their activities. Such a system scales up and down flexibly as peers join and leave. All nodes have the same role and functionality. It is harder to manage because all machines are autonomous and loosely coupled.
This architecture generally has four levels of schemas:
1. Global Conceptual Schema − Depicts the global logical view of data.
2. Local Conceptual Schema − Depicts logical data organization at each site.
3. Local Internal Schema − Depicts physical data organization at each site.
4. Local External Schema − Depicts user view of data
Autonomous databases
1. Autonomous Transaction Processing - Serverless
2. Autonomous Transaction Processing - Dedicated
3. Autonomous Data Warehouse - Analytics
Autonomous Serverless is a simple and elastic deployment choice. Oracle autonomously operates all aspects of
the database lifecycle from database placement to backup and updates.
Autonomous Dedicated is a private cloud in public cloud deployment choice. A completely dedicated compute,
storage, network, and database service for only a single tenant.
Heterogeneous Distributed Databases (Dissimilar schema for each site database, it can be any
variety of dbms, relational, network, hierarchical, object oriented)
Types of Heterogeneous Distributed Databases
1. Federated − The heterogeneous database systems are independent and integrated so that they function
as a single database system.
2. Un-federated − The database systems employ a central coordinating module
In a heterogeneous distributed database, different sites have different operating systems, DBMS products, and data
models.
Database Gateways
Traditionally, a database gateway is a software component that links two different DBMS suites, for example Oracle SQL and an Oracle database to DB2 SQL and a DB2 database. Another alternative is to use software called Open Database Connectivity (ODBC).
There are various types of databases used for storing different varieties of data in their respective DBMS data model environments. Each database type has a data model, except NoSQL databases, which may be schemaless. One further type is the Enterprise Database Management System, which is not included in the list below. Details of each are written where appropriate; the sequence of the details is not significant.
1. It has a declarative language, that is logic, serving both as the communication method (query
language) and a host language, providing DDL and DML.
2. It supports the principal features of database systems, that is, efficient access to massive amounts of
data, sharing of data, and concurrent access to data.
Native XML Databases
We were not surprised that a number of start-up companies, as well as some established data management companies, determined that XML data would be best managed by a DBMS designed specifically to deal with semi-structured data, that is, a native XML database.
Conceptual Database
This step is related to the modeling in the Entity-Relationship (E/R) Model to specify sets of data called entities,
relations among them called relationships and cardinality restrictions identified by letters N and M, in this case, the
many-many relationships stand out.
Conventional Database
This step includes Relational Modeling where a mapping from MER to relations using rules of mapping is carried
out. The posterior implementation is done in Structured Query Language (SQL).
Non-Conventional database
This step involves Object-Relational Modeling which is done by the specification in Structured Query Language. In
this case, the modeling is related to the objects and their relationships with the Relational Model.
Traditional database
Temporal database
Typical databases
NewSQL Database
Autonomous database
Cloud database
Spatiotemporal
Enterprise Database Management System
Google Cloud Firestore
Couchbase
Memcached, Coherence (key-value store)
HBase, Big Table, Accumulo (Tabular)
MongoDB, CouchDB, Cloudant, JSON-like (Document-based)
Neo4j (Graph Database)
Redis (Data model: Key value)
Elasticsearch (Data model: search engine)
Microsoft Access (Data model: relational)
Cassandra (Data model: Wide column)
MariaDB (Data model: Relational)
Splunk (Data model: search engine)
Snowflake (Data model: Relational)
Azure SQL Server Database (Relational)
Amazon DynamoDB (Data model: Multi-Model)
Hive (Data model: Relational)
Non-relational (NoSQL) Data model
NoSQL is non-tabular database management where we create an object, document, key-value, and graph to store
the data. NoSQL provides flexible schemas to store a large amount of data.
In a key-value NoSQL database, each value is stored under a key, and queries can access data only by key. The keys are stored in a hash table, which makes data access fast. For example, Riak and Amazon's Dynamo are well-known key-value NoSQL databases.
In a wide-column NoSQL database, large data sets are stored column by column, and you should know the expected query pattern in advance to model the columns. Google's Bigtable and HBase are the most popular column-based databases.
In a graph NoSQL database, data is stored in nodes and edges. A node stores information about an object, and edges record the relationships between nodes. InfoGrid and InfiniteGraph are graph-based NoSQL databases.
In a document store, data is kept in documents such as JSON, where each document contains pairs of fields and values. The values can hold any type of related data, and a document is addressed by its key. It is a natural way to store data and makes the data easy and flexible to manage and access.
BASE Model:
Basically Available – Rather than enforcing immediate consistency, BASE-modelled NoSQL databases will ensure the
availability of data by spreading and replicating it across the nodes of the database cluster.
Soft State – Due to the lack of immediate consistency, data values may change over time. The BASE model breaks
off with the concept of a database that enforces its consistency, delegating that responsibility to developers.
Eventually Consistent – The fact that BASE does not enforce immediate consistency does not mean that it never
achieves it. However, until it does, data reads are still possible (even though they might not reflect the reality).
Just as SQL databases are almost uniformly ACID compliant, NoSQL databases tend to conform to BASE principles.
NewSQL Database
NewSQL is a class of relational database management systems that seek to provide the scalability of NoSQL systems
for online transaction processing (OLTP) workloads while maintaining the ACID guarantees of a traditional database
system.
Examples and properties of Relational Non-Relational Database:
The term NewSQL categorizes databases that combine the relational model with advances in scalability and flexibility for varied types of data. These databases focus on features absent from NoSQL, offering strong consistency guarantees. They typically cover two layers of data: a relational layer and a key-value store.
NoSQL use cases: Big Data, social network applications, and IoT. NewSQL use cases: e-commerce, the telecom industry, and gaming.
END
Datatypes and descriptions:
BINARY_FLOAT: 32-bit floating-point number. This data type requires 4 bytes.
BINARY_DOUBLE: 64-bit floating-point number. This data type requires 8 bytes.
NUMBER(p,s): a number having precision p and scale s. The precision p can range from 1 to 38; the scale s can range from -84 to 127. Both precision and scale are in decimal digits. A NUMBER value requires from 1 to 22 bytes.
Character data types: the character data types represent alphanumeric text. PL/SQL uses the SQL character data types such as CHAR, VARCHAR2, LONG, RAW, LONG RAW, ROWID, and UROWID.
CHAR(n): a fixed-length character type whose length is from 1 to 32,767 bytes.
VARCHAR2(n): varying-length character data from 1 to 32,767 bytes in PL/SQL. In the database, the maximum is 32,767 bytes or characters if max_string_size = extended, and 4,000 bytes or characters (1 char = 1 byte) if max_string_size = standard. VARCHAR2 does not occupy space for NULL values.
A user-defined data type (UDT) is a data type derived from an existing data type. You can use UDTs to extend the built-in types already available and to create your own customized data types.
There are six user-defined types:
1. Distinct type
2. Structured type
3. Reference type
4. Array type
5. Row type
6. Cursor type
Abstract Data Types in Oracle
One of the shortcomings of the Oracle 7 database was the limited number of intrinsic data types.
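As a brief, hedged illustration (the type name address_t and its fields are hypothetical), a structured abstract data type can be declared in Oracle as follows:
CREATE TYPE address_t AS OBJECT (
  street  VARCHAR2(50),
  city    VARCHAR2(30),
  country VARCHAR2(30)
);
/
Columns of relational tables can then be declared of type address_t, extending the intrinsic type set.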
For example, the raw value returned by SYS_GUID() can be reformatted into the standard hyphenated GUID layout:
SELECT REGEXP_REPLACE(
         SYS_GUID(),
         '([0-9A-F]{8})([0-9A-F]{4})([0-9A-F]{4})([0-9A-F]{4})([0-9A-F]{12})',
         '{\1-\2-\3-\4-\5}'
       ) AS formatted_guid
FROM dual;
Format of Rowid
Database Key
A key is a field of a table that uniquely identifies a tuple in that table.
Super key
An attribute or a set of attributes that uniquely identifies a tuple within a relation.
Candidate key
A super key such that no proper subset is a super key within the relation; it contains no unique proper subset (irreducibility). There may be many candidate keys (specified using UNIQUE), one of which is chosen as the primary key, e.g., PRIMARY KEY (sid), UNIQUE (id, grade). A candidate key is unique, but its value can be changed.
Composite key
When no single attribute's uniqueness is guaranteed, attributes are combined to uniquely identify records in a table. You can use a composite key as the primary key, but the composite key will then go to other tables as a foreign key.
Alternate key
A relation can have only one primary key. It may contain many fields or a combination of fields that can be used as
the primary key. One field or combination of fields is used as the primary key. The fields or combinations of fields
that are not used as primary keys are known as candidate keys or alternate keys.
Sort or control key
A field or combination of fields that is used to physically sequence the stored data is called a sort key. It is also known as the control key.
Alternate key
An alternate key is a secondary key. A simple example: a STUDENT entity may contain NAME, ROLL NO., ID, and CLASS; if ID is chosen as the primary key, ROLL NO. remains a candidate key and is therefore an alternate key.
Unique key
A unique key is a set of one or more than one field/column of a table that uniquely identifies a record in a database
table.
You can say that it is a little like a primary key but it can accept only one null value and it cannot have duplicate
values.
The unique key and primary key both provide a guarantee for uniqueness for a column or a set of columns.
There is an automatically defined unique key constraint within a primary key constraint.
There may be many unique key constraints for one table, but only one PRIMARY KEY constraint for one table.
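A minimal sketch (table and column names are hypothetical) showing one primary key alongside several unique keys:
CREATE TABLE employee (
  emp_id INTEGER PRIMARY KEY,   -- only one primary key per table
  email  VARCHAR(100) UNIQUE,   -- unique key; a NULL is still accepted
  nic_no VARCHAR(20) UNIQUE     -- a table may carry many unique constraints
);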
Artificial Key
Keys created using arbitrarily assigned data are known as artificial keys. These keys are created when a primary key is large and complex and has no relationship with many other relations. The data values of artificial keys are usually numbered in serial order.
For example, the primary key, which is composed of Emp_ID, Emp_role, and Proj_ID, is large in employee relations.
So it is better to add a new virtual attribute that identifies each tuple in the relation uniquely. ROWNUM and ROWID are artificial keys. An artificial key should be a number or integer (numeric).
Surrogate key
A surrogate key is an artificial key that aims to uniquely identify each record. This kind of key is used when you don't have any natural primary key. You cannot insert values into the surrogate key yourself; its value comes from the system automatically.
No business logic in key so no changes based on business requirements
Surrogate keys reduce the complexity of the composite key.
Surrogate keys integrate the extract, transform, and load in DBs.
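A hedged sketch of a system-generated surrogate key, assuming an Oracle 12c or later database (the orders table is hypothetical):
CREATE TABLE orders (
  order_id   INTEGER GENERATED ALWAYS AS IDENTITY PRIMARY KEY, -- value supplied by the system, not inserted by users
  order_date DATE DEFAULT SYSDATE,
  amount     NUMBER(10,2)
);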
Compound Key
A compound key has two or more attributes that allow you to uniquely recognize a specific record. Each column may not be unique by itself within the database.
Operators
Sargable queries
Sargable is a word that concatenates the three words: search, argument and able.
According to Wikipedia, SARGable is defined as follows: "In relational databases, a condition (or predicate) in a query is said to be sargable if the DBMS engine can take advantage of an index to speed up the execution of the query. The term is derived from a contraction of Search ARGument ABLE."
SELECT
PurchaseOrderID, ExpectedDeliveryDate
FROM
Purchasing.PurchaseOrders
ORDER BY
CASE
WHEN (ExpectedDeliveryDate IS NOT NULL) THEN 0 ELSE 1
END;
This query is not sargable: although it uses the IX_Purchasing_PurchaseOrders_ExpectedDeliveryDate index for the ExpectedDeliveryDate column, an Index Scan is performed on it instead of an optimized Index Seek.
Sargable and non sargable operators:
For instance, WHERE foo LIKE '%bar%' is said by many to be not sargable, but some RDBMSs are able to use
indexes on such queries.
For me, SARGable means that SQL Server can perform an index seek using your search predicates.
A Search ARgument ABLE predicate is one where SQL SERVER can utilize an index seek operation, if an index exists.
A SARGable predicate is one where SQL server can isolate the single value or range of index key values to process
SARGable predicates include the following operators: =, >, >=, <, <=, IN, BETWEEN, and LIKE (in the case of prefix
matching)
Non-SARGable operators include: NOT, NOT IN, <>, and LIKE (not prefix matching), as well as the use of functions
or calculations against the table, and type conversions where the datatype does not fulfill the index created.
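The contrast can be sketched as follows, assuming a hypothetical Orders table with an index on OrderDate:
-- Sargable: the bare column can be matched against the index (Index Seek)
SELECT PurchaseOrderID FROM Orders WHERE OrderDate >= '2022-01-01';
-- Non-sargable: wrapping the column in a function hides it from the index (Index Scan)
SELECT PurchaseOrderID FROM Orders WHERE YEAR(OrderDate) = 2022;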
Collation is a set of rules that tell the database engine how to compare and sort character data in SQL Server.
Collation can be set at different levels in SQL Server. Below are the three levels:
1. SQL Server Level
2. Database Level
3. Column level
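For example, in SQL Server a collation can be set at the column level or applied in an individual comparison (the Person table here is hypothetical):
CREATE TABLE Person (
  Name VARCHAR(100) COLLATE Latin1_General_CI_AS  -- case-insensitive, accent-sensitive
);
SELECT * FROM Person
WHERE Name = 'ali' COLLATE Latin1_General_CS_AS;  -- force a case-sensitive comparison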
The SQL UNION clause is used to select distinct values from the tables.
The SQL UNION ALL clause is used to select all values, including duplicates, from the tables.
The UNION operator is used to combine the result-set of two or more SELECT statements.
Every SELECT statement within UNION must have the same number of columns
The columns must also have similar data types
The columns in every SELECT statement must also be in the same order
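A minimal sketch over two hypothetical tables with compatible columns:
SELECT city FROM customers
UNION               -- duplicates removed
SELECT city FROM suppliers;

SELECT city FROM customers
UNION ALL           -- duplicates kept
SELECT city FROM suppliers;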
EXCEPT or MINUS These are the records that exist in Dataset1 and not in Dataset2.
Each SELECT statement within the EXCEPT query must have the same number of fields in the result sets with similar
data types.
The difference is one of availability: EXCEPT is supported by databases such as PostgreSQL and SQL Server, while MINUS is the Oracle keyword.
Apart from the keyword, there is no difference between the EXCEPT clause and the MINUS clause.
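Continuing the sketch above, the following returns cities that appear in customers but not in suppliers:
SELECT city FROM customers
EXCEPT              -- MINUS in Oracle
SELECT city FROM suppliers;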
IN operator allows you to specify multiple values in a WHERE clause. The IN operator is a shorthand for multiple OR
conditions.
ANY operator
Returns a Boolean value as a result: TRUE if any of the subquery values meet the condition. ANY means that the condition will be true if the operation is true for any of the values in the range.
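Both can be sketched over hypothetical tables as follows:
-- IN as shorthand for multiple OR conditions
SELECT * FROM customers
WHERE country IN ('Pakistan', 'Turkey', 'Oman');

-- ANY compares a value against each value returned by a subquery
SELECT * FROM products
WHERE price > ANY (SELECT price FROM products WHERE category = 'Books');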
NOT IN can also take literal values, whereas NOT EXISTS needs a query to compare the results.
SELECT CAT_ID FROM CATEGORY_A WHERE CAT_ID NOT IN (SELECT CAT_ID FROM CATEGORY_B)
NOT EXISTS
SELECT A.CAT_ID FROM CATEGORY_A A WHERE NOT EXISTS (SELECT B.CAT_ID FROM CATEGORY_B B WHERE
B.CAT_ID = A.CAT_ID)
NOT EXISTS could be good to use because it can join with the outer query & can lead to usage of the index if the
criteria use an indexed column.
EXISTS and NOT EXISTS are typically used in conjunction with a correlated nested query. The result of EXISTS is a Boolean value: TRUE if the nested query result contains at least one tuple, or FALSE if the nested query result contains no tuples.
Supporting operators in different DBMS environments:
Keyword       Database System
TOP           SQL Server, MS Access
LIMIT         MySQL, PostgreSQL, SQLite
FETCH FIRST   Oracle 12c+, DB2, standard SQL
Note that Oracle has no TOP clause; before 12c the ROWNUM pseudocolumn is used for row limiting, and from 12c onward FETCH FIRST is available.
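The same ten-row limit can be written in each dialect (persons is a hypothetical table):
-- SQL Server / MS Access
SELECT TOP 10 * FROM persons;
-- MySQL / PostgreSQL / SQLite
SELECT * FROM persons LIMIT 10;
-- Oracle 12c+ / DB2 / standard SQL
SELECT * FROM persons FETCH FIRST 10 ROWS ONLY;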
SQL FUNCTIONS
Practical example:
SELECT INITCAP(ps.firstname) ini_fn,
       LOWER(NVL2(ps.address, ps.firstname, 'ALIALIALI')) lower_col,
       COALESCE(ps.address, 'ALIALIALI') nvl_col,
       ps.lastname,
       UPPER(COALESCE(ps.address, ps.lastname, 'ALIALIALI')) coal_add,
       CASE WHEN ps.address IS NULL THEN 'LAHORE' ELSE 'KASUR' END address,
       SUBSTR(ps.address, 2, 5),
       SUBSTR(ps.address, INSTR(ps.address, 'B')) sub_ins,
       ps.city,
       LPAD(ps.city, 2), RPAD(ps.city, 3),   -- LPAD and RPAD pad on opposite sides
       LPAD(ps.city, 4, 'SHARIF'),
       ps.address,
       TRIM(ps.address) trim_name            -- removes surrounding spaces
FROM persons ps
WHERE 1 = 1;                                 -- COALESCE with two arguments behaves like NVL
Subquery Concept
SQL analytical Functions (Oracle analytic functions calculate an aggregate value based on a group of rows and
return multiple rows for each group.)
Name
CUME_DIST
DENSE_RANK
FIRST_VALUE
LAG
LAST_VALUE
LEAD
NTH_VALUE
NTILE
PERCENT_RANK
RANK
ROW_NUMBER
MAX(), FIRST(), LAST()
Note: All aggregate functions described above ignore NULL values except for COUNT(*).
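A hedged sketch, assuming a hypothetical employees table, of ranking rows within groups; note that the analytic function returns a value for every row instead of collapsing each group:
SELECT department_id, last_name, salary,
       RANK()       OVER (PARTITION BY department_id ORDER BY salary DESC) AS salary_rank,
       ROW_NUMBER() OVER (PARTITION BY department_id ORDER BY salary DESC) AS rn
FROM employees;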
A SQL scalar function returns a single value based on the input value. Widely used SQL scalar functions include UPPER, LOWER, INITCAP, LENGTH, ROUND, SUBSTR, and NVL.
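For example, using Oracle's one-row dual table:
SELECT UPPER('ali')       AS upper_name,
       ROUND(123.456, 2)  AS rounded,
       LENGTH('database') AS len
FROM dual;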
END
A database instance is the data stored in the database at a particular moment. It is also called the database state (or occurrence, or snapshot). The content of the database, the instance, is also called an extension.
The term instance is also applied to individual database components,
E.g., record instance, table instance, entity instance
Types of Instances
Initial Database Instance: Refers to the database instance that is initially loaded into the system.
Valid Database Instance: An instance that satisfies the structure and constraints of the database.
The database instance changes every time the database is updated.
A database schema is the overall design or skeleton structure of the database. It represents the logical view of the entire database and can be drawn as a visual diagram of the database objects and their relationships.
A database schema can be represented by using a visual diagram. That diagram shows the database objects and
their relationship with each other.
A database schema is designed by the database designers to help programmers whose software will interact with
the database. The process of database creation is called data modeling.
Types of Schema:
Relational Schema definition
A relational schema refers to the metadata that describes the structure of data within a certain domain. It is the blueprint of a database, outlining the constraints that must be applied to the data to ensure correct data (valid states).
Database Schema definition
A relational schema may also be referred to as a database schema. A database schema is the collection of relation schemas for a whole database; a relational or database schema is thus a collection of metadata. A database schema describes the structure of, and constraints on, the data represented in a particular domain. A relational schema can be described as a blueprint of a database that outlines the way data is organized into tables. This blueprint does not contain any data. In a relational schema, each tuple is divided into fields, and each field takes its values from a domain.
A schema provides a logical grouping of SQL objects. A schema consists of a library, a journal, a journal
receiver, a catalog, and, optionally, a data dictionary.
Other definitions: the overall design of the database; the structure of the database. A schema is also called an intension.
ANSI-SPARC schemas
External Level: view level, user level, external schema, client level.
Conceptual Level: community view, ERD model, conceptual schema, server level. Conceptual (high-level, semantic) data models are entity-based or object-based; this level describes what data is stored and the relationships among the data. It deals with logical data independence (the external/conceptual mapping). This is also called the canonical data model.
Logical schema: sometimes also called the conceptual schema (server level); implementation (representational) data models. It describes how the system should be implemented regardless of the DBMS. It is created by data architects and business analysts. The logical data model defines the structure of the data elements, their rules, and the relationships between them.
Internal Level: physical representation, internal schema, database level, low level. It deals with how data is stored in the database and with physical data independence (the conceptual/internal mapping).
Physical data level: physical storage, physical schema; it sometimes overlaps with the internal schema. It is detailed in administration manuals and describes how the system will be implemented using a specific DBMS.
Data independence
It is the ability to make changes in either the logical or physical structure of the database without requiring reprogramming of application programs.
Data Independence types
Logical data independence=>Immunity of external schemas to changes in the conceptual schema
Physical data independence=>Immunity of the conceptual schema to changes in the internal schema.
Data abstraction is the process of hiding (suppressing) unnecessary details so that the high-level concept can be made more visible. A data model is a relatively simple representation, usually graphical, of more complex real-world data structures.
Specialization: A can be specialized into B, C, or D (special cases of A). The Has-A (has a, has an) approach is used in specialization.
Composition: IS-MADE-OF (like aggregation)
Identification: IS-IDENTIFIED-BY
Ontology is a fundamental part of the Semantic Web. The goal of the World Wide Web Consortium (W3C) is to bring the web to its full potential as a semantic web while reusing previous systems and artifacts. Most legacy systems have been documented with structured analysis and structured design (SASD), especially with simple or Extended ER Diagrams (ERD); such systems need upgrading to become part of the semantic web. ERD-to-OWL-DL ontology transformation rules at the concrete level facilitate an easy and understandable transformation from ERD to OWL. Ontology engineering is an important aspect of the semantic web vision for attaining a meaningful representation of data. Although various techniques exist for the creation of ontologies, most methods involve a number of complex phases, scenario-dependent ontology development, and poor validation of the ontology; a lightweight alternative is to build domain ontology from the Entity Relationship (ER) model.
We now discuss four abstraction concepts that are used in semantic data models, such as the EER model as well as
in KR schemes: (1) classification and instantiation, (2) identification, (3) specialization and generalization, and (4)
aggregation and association.
One ongoing project that is attempting to allow information exchange among computers on the Web is called the
Semantic Web, which attempts to create knowledge representation models that are quite general in order to allow
meaningful information exchange and search among machines.
One commonly used definition of ontology is a specification of a conceptualization. In this definition, a
conceptualization is the set of concepts that are used to represent the part of reality or knowledge that is of interest
to a community of users.
Data Modelling
Data Modelling is the diagrammatic representation showing how the entities are related to each other. It is the initial
step towards database design. We first create the conceptual model, then the logical model and finally move to the
physical model.
The two types of Data Modeling Techniques are
1. Entity Relationship (E-R) Model
2. UML (Unified Modelling Language)
UML Diagrams Notations
UML stands for Unified Modeling Language. ERD stands for Entity Relationship Diagram. UML is a popular and
standardized modeling language that is primarily used for object-oriented software. Entity-Relationship diagrams
are used in structured analysis and conceptual modeling.
Object-oriented data models are typically depicted using Unified Modeling Language (UML) class diagrams. Unified
Modeling Language (UML) is a language based on OO concepts that describes a set of diagrams and symbols that
can be used to graphically model a system. UML class diagrams are used to represent data and their relationships
within the larger UML object-oriented system’s modeling language.
Associations
UML uses Boolean attributes instead of unary relationships but allows relationships of all other arities. Optionally,
each association may be given at most one name. Association names normally start with a capital letter. Binary
associations are depicted as lines between classes. Association lines may include elbows to assist with layout or
when needed (e.g., for ring relationships).
ER Diagram and Class Diagram Synchronization Sample
Supporting the synchronization between ERD and Class Diagram. You can transform the system design from the
data model to the Class model and vice versa, without losing its persistent logic.
Conversions of Terminology of UML and ERD
Types of Attributes-
In ER diagram, attributes associated with an entity set may be of the following types-
1. Simple attributes/atomic attributes/Static attributes
2. Key attribute
3. Unique attributes
4. Stored attributes
5. Prime attributes
6. Derived attributes (DOB, AGE, Oval is a derived attribute)
7. Composite attribute (Address (street, door#, city, town, country))
8. The multivalued attribute (double ellipse (Phone#, Hobby, Degrees))
9. Dynamic Attributes
10. Boolean attributes
The fundamental new idea in the MOST model is the so-called dynamic attributes. Each attribute of an object class
is classified to be either static or dynamic. A static attribute is as usual. A dynamic attribute changes its value with
time automatically.
Attributes of the database tables which are candidate keys of the database tables are called prime attributes.
Symbols of Attributes:
The Entity
The entity is the basic building block of the E-R data model. The term entity is used in three different meanings or for three different terms, namely:
Entity type
Entity instance
Entity set
Characteristics
DFDs show the flow of data between different processes or a specific system.
DFDs are simple and hide complexities.
DFDs are descriptive and links between processes describe the information flow.
0-level DFD (context diagram): it represents the entire system as a single process together with its relationships to a number of external entities. At this level, the designer must keep a balance in describing the system using the level 0 diagram. Balance means giving proper depth to the level 0 diagram processes.
1-level DFD In 1-level DFD, the context diagram is decomposed into multiple bubbles/processes. In this level,
we highlight the main functions of the system and breakdown the high-level process of 0-level DFD into
subprocesses.
2-level DFD A 2-level DFD goes one step deeper into parts of the 1-level DFD. It can be used to plan or record the specific/necessary detail about the system's functioning.
Detailed DFDs are detailed enough that it doesn’t usually make sense to break them down further.
Logical data flow diagrams focus on what happens in a particular information flow: what information is being transmitted, which entities receive that information, what general processes occur, etc. They describe the functionality of the processes that were shown briefly in the level 0 diagram. Generally, detailed DFDs are expressed as the successive refinement of those processes for which the higher levels do not provide enough detail.
Logical DFD
A logical data flow diagram mainly focuses on the system process. It illustrates how data flows in the system. Logical DFDs are used in various organizations for the smooth running of systems. For example, in a banking software system, a logical DFD is used to describe how data moves from one entity to another.
Physical DFD
Physical data flow diagram shows how the data flow is actually implemented in the system. Physical DFD is more
specific and closer to implementation.
N-ary
N-ary (many entities involved in the relationship)
An n-ary relationship exists when n types of entities are involved. One limitation of an n-ary relationship is that, with many entities involved, it is hard to convert it directly into a relational table.
A relationship between more than two entities is called an n-ary relationship.
Examples of relationships R between two entities E and F
Normalize the ERD and remove FD from Entities to enter the final steps
Transformation Rule 1. Each entity in an ER diagram is mapped to a single table in a relational database;
Transformation Rule 2. A key attribute of the entity type is represented by the primary key.
All single-valued attributes become columns of the table.
Transformation Rule 3. Given an entity E with a primary identifier, a multivalued attribute attached to E in an ER diagram is mapped to a table of its own;
Table T also contains columns for all attributes attached to the relationship. Relationship occurrences are
represented by rows of the table, with the related entity instances uniquely identified by their primary
key values as rows.
Case 1: Binary Relationship with 1:1 cardinality with the total participation of an entity
Total participation, i.e. min occur is 1 with double lines in total.
A person has 0 or 1 passport number and the Passport is always owned by 1 person. So it is 1:1 cardinality
with full participation constraint from Passport. First Convert each entity and relationship to tables.
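A hedged DDL sketch of Case 1 (names are illustrative); the NOT NULL, UNIQUE foreign key enforces the 1:1 cardinality and the total participation of Passport:
CREATE TABLE person (
  person_id INTEGER PRIMARY KEY,
  name      VARCHAR(100)
);
CREATE TABLE passport (
  passport_no VARCHAR(20) PRIMARY KEY,
  person_id   INTEGER NOT NULL UNIQUE,  -- every passport is owned by exactly one person
  FOREIGN KEY (person_id) REFERENCES person (person_id)
);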
Case 2: Binary Relationship with 1:1 cardinality and partial participation of both entities
A male marries 0 or 1 female and vice versa as well. So it is a 1:1 cardinality with partial participation
constraint from both. First Convert each entity and relationship to tables. Male table corresponds to Male
Entity with key as M-Id. Similarly, the Female table corresponds to Female Entity with the key as F-Id.
Marry Table represents the relationship between Male and Female (Which Male marries which female).
So it will take attribute M-Id from Male and F-Id from Female.
Case 3: Binary Relationship with n: 1 cardinality
Case 4: Binary Relationship with m: n cardinality
Case 5: Binary Relationship with weak entity
In this scenario, an employee can have many dependents, and one dependent can depend on only one employee. A dependent has no existence without an employee (e.g., as a child you can be a dependent of your father in his company). So it is a weak entity, and its participation will always be total.
Generalization
The reverse process of defining subclasses (a bottom-up approach): bringing together common attributes of entities (the IS-A approach).
Union
Models a class/subclass with more than one superclass of distinct entity types. Attribute inheritance is selective.
Participation constraints determine the minimum number of relationship instances in which each entity can participate: 1 for total participation, 0 for partial participation.
Diagrammatically, use a double line from relationship type to entity type
There are two types of participation constraints:
1. Partial participation
2. Total participation
Total participation means the minimum occurrence is 1 and is drawn with double lines. (In ER notation, a dotted oval denotes a derived attribute.)
When we require all entities to participate in the relationship (total participation), we use double lines to specify.
(Every loan has to have at least one customer)
Mapping of EERD to Relational Modeling
Cardinality expresses how many occurrences of an entity are associated with one occurrence of the related entity, i.e., the specific count.
The cardinality of a relationship is the number of instances of entity B that can be associated with entity A. There is
a minimum cardinality and a maximum cardinality for each relationship, with an unspecified maximum cardinality
being shown as N. Cardinality limits are usually derived from the organization's policies or external constraints.
For Example:
At the University, each Teacher can teach an unspecified maximum number of subjects as long as his/her weekly
hours do not exceed 24 (this is an external constraint set by an industrial award). Teachers may teach 0 subjects if
they are involved in non-teaching projects. Therefore, the cardinality limits for TEACHER are (O, N).
The University's policies state that each Subject is taught by only one teacher, but it is possible to have Subjects that
have not yet been assigned a teacher. Therefore, the cardinality limits for SUBJECT are (0,1). Teacher and subject
have an M:N relationship connectivity. The relationship is binary (two entity types), though it can be broken into ternary form as well. Such situations are modeled using a composite entity (or gerund).
Cardinality Constraint: Quantification of the relationship between two concepts or classes (a constraint on
aggregation)
Remember cardinality is always a relationship to another thing.
Max Cardinality (Cardinality): always 1 or Many. If class A has a relationship to package B with a maximum cardinality of one, at most one occurrence of this class can be in the package. The opposite is a package with a maximum cardinality of N, meaning there can be N occurrences of the class.
Min Cardinality (Optionality): simply means "required"; it is always 0 or 1. A minimum of 0 means participation is optional (0 or more); a minimum of 1 means it is required (1 or more).
The three types of cardinality you can define for a relationship are as follows:
Minimum Cardinality. Governs whether or not selecting items from this relationship is optional or required. If you
set the minimum cardinality to 0, selecting items is optional. If you set the minimum cardinality to greater than 0,
the user must select that number of items from the relationship.
Optional to Mandatory, Optional to Optional, Mandatory to Optional, Mandatory to Mandatory
Maximum Cardinality. Sets the maximum number of items that the user can select from a relationship. If you set the minimum cardinality to greater than 0, you must set the maximum cardinality to a number at least as large. If you do not enter a maximum cardinality, the default is 999. Types of maximum cardinality: 1 to 1, 1 to many, many to many, and many to 1.
Default Cardinality. Specifies what quantity of the default product is automatically added to the initial solution that the user sees. Default cardinality must be equal to or greater than the minimum cardinality and must be less than or equal to the maximum cardinality.
Summary of ER Diagram Symbols
The (min, max) notation replaces cardinality-ratio numerals and the single/double-line notation: associate a pair of integer numbers (min, max) with each participant of an entity type E in a relationship type R, where 0 ≤ min ≤ max and max ≥ 1; max = N means finite but unbounded.
Relationship types can also have attributes
Attributes of 1:1 or 1:N relationship types can be migrated to one of the participating entity types
For a 1:N relationship type, the relationship attribute can be migrated only to the entity type on the N-side of the
relationship
Attributes on M: N relationship types must be specified as relationship attributes
In the case of data modeling, cardinality defines the number of instances in one entity set that can be associated with instances of another entity set via a relationship set. In simple words, it refers to the relationship one table can have with another table: one-to-one, one-to-many, many-to-one, or many-to-many. A third meaning is the number of tuples in a relation.
In the case of SQL, cardinality refers to a number: the number of unique values that appear in the table for a particular column. For example, in a table called Person with the column Gender, the Gender column can have only the values 'Male' and 'Female', so its cardinality is 2.
Cardinality can also mean the number of tuples in a relation (the number of rows).
The multiplicity of an association indicates how many objects of the opposing class an object can be associated with. When this number is variable, a range (minimum..maximum) is given.
Multiplicity = Cardinality + Participation. The dictionary definition of cardinality is the number of elements in a particular set or other grouping.
Multiplicity can be set for attribute operations and associations in a UML class diagram (Equivalent to ERD) and
associations in a use case diagram.
A cardinality is how many elements are in a set. Thus, a multiplicity tells you the minimum and maximum allowed
members of the set. They are not synonymous.
Given the example below:
0-1 ---------- 1-1
Multiplicities:
The first multiplicity, for the left entity: 0-1
The second multiplicity, for the right entity: 1-1
Cardinalities for the first multiplicity:
Lower cardinality: 0
Upper cardinality: 1
Cardinalities for the second multiplicity:
Lower cardinality: 1
Upper cardinality: 1
Multiplicity is the constraint on the collection of the association objects whereas Cardinality is the count of the
objects that are in the collection. The multiplicity is the cardinality constraint.
A multiplicity of an event = Participation of an element + cardinality of an element.
UML uses the term Multiplicity, whereas Data Modelling uses the term Cardinality. They are for all intents and
purposes, the same.
Cardinality (sometimes referred to as Ordinality) is what is used in ER modeling to "describe" a relationship between
two Entities.
Cardinality and Modality
The main difference between cardinality and modality is that cardinality is the metric used to specify the number of occurrences of one object related to the number of occurrences of another object. Modality, on the contrary, signifies whether a certain data object must participate in the relationship or not.
Cardinality refers to the maximum number of times an instance in one entity can be associated with instances in
the related entity. Modality refers to the minimum number of times an instance in one entity can be associated
with an instance in the related entity.
Cardinality can be 1 or Many and the symbol is placed on the outside ends of the relationship line, closest to the
entity, Modality can be 1 or 0 and the symbol is placed on the inside, next to the cardinality symbol. For a
cardinality of 1, a straight line is drawn. For a cardinality of Many a foot with three toes is drawn. For a modality of
1, a straight line is drawn. For a modality of 0, a circle is drawn.
zero or more
1 or more
1 and only 1 (exactly 1)
Multiplicity = Cardinality + Participation
Cardinality: Denotes the maximum number of possible relationship occurrences in which a certain entity can
participate (in simple terms: at most).
Note: connectivity, modality, multiplicity, and cardinality are used as equivalent terms for describing a relationship.
Participation: Denotes if all or only some entity occurrences participate in a relationship (in simple terms: at least).
Generalization is like a bottom-up approach in which two or more entities of lower levels combine to form a
higher level entity if they have some attributes in common.
Generalization is more like a subclass-and-superclass system, but the only difference is the approach: generalization uses the bottom-up approach, combining subclasses to make a superclass. The IS-A (is a, is an) approach is used in generalization.
Generalization is the result of taking the union of two or more (lower level) entity types to produce a higher level
entity type.
Generalization is the same as UNION. Specialization is the same as ISA.
A specialization is a top-down approach, the opposite of generalization. In specialization, one higher-level entity can be broken down into two or more lower-level entities. Specialization is the result of taking a subset of a higher-level entity type to form a lower-level entity type.
Normally, the superclass is defined first, the subclass and its related attributes are defined next, and the relationship set is then added. The HAS-A (has a, has an) approach is used here.
UML to EER specialization or generalization comes in the form of hierarchical entity set:
Mapping Process
1. Create tables for all higher-level entities.
2. Create tables for lower-level entities.
3. Add primary keys of higher-level entities in the table of lower-level entities.
4. In lower-level tables, add all other attributes of lower-level entities.
5. Declare the primary key of the higher-level table and the primary key of the lower-level table.
6. Declare foreign key constraints.
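Applied to a hypothetical PERSON/STUDENT hierarchy, these steps yield DDL along the following lines (a sketch; names are illustrative):
CREATE TABLE person (
  person_id INTEGER PRIMARY KEY,   -- higher-level entity
  name      VARCHAR(100)
);
CREATE TABLE student (
  person_id INTEGER PRIMARY KEY,   -- primary key of the higher-level table reused
  major     VARCHAR(50),           -- attributes specific to the lower-level entity
  FOREIGN KEY (person_id) REFERENCES person (person_id)
);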
This section presents the concept of entity clustering, which abstracts the ER schema to such a degree that the
entire schema can appear on a single sheet of paper or a single computer screen.
END
CHAPTER 4 DISCOVERING BUSINESS RULES AND DATABASE CONSTRAINTS
Overview of Database Constraints
Definition of data integrity: the constraints placed on the set of values allowed for the attributes of a relation are known as relational integrity.
Constraints– These are special restrictions on allowable values.
For example, the Passing marks for a student must always be greater than 50%.
Categories of Constraints
Constraints on databases can generally be divided into three main categories:
1. Constraints that are inherent in the data model. We call these inherent model-based constraints or implicit
constraints.
2. Constraints that can be directly expressed in schemas of the data model, typically by specifying them in the
DDL (data definition language, we call these schema-based constraints or explicit constraints.
3. Constraints that cannot be directly expressed in the schemas of the data model, and hence must be
expressed and enforced by the application programs. We call these application-based or semantic
constraints or business rules.
Entity integrity can be violated if any part of the primary key of the new tuple is NULL. Referential integrity can be violated if the value of any foreign key in the tuple refers to a tuple that does not exist in the referenced relation.
Note: insertion constraints and constraints on NULLs are called explicit. An insert can violate any of the four types of constraints discussed under implicit constraints.
1. Business Rule or default relation constraints or semantic constraints
These rules are applied to data before (first) the data is inserted into the table columns. Examples are the UNIQUE, NOT NULL, and DEFAULT constraints.
1. The primary key value can’t be null.
2. Not null (absence of any value (i.e., unknown or nonapplicable to a tuple)
3. Unique
4. Primary key
5. Foreign key
6. Check
7. Default
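A hedged sketch (the student and department tables are hypothetical) declaring several of these constraints together:
CREATE TABLE student (
  sid     INTEGER PRIMARY KEY,                    -- unique and not null
  email   VARCHAR(100) UNIQUE NOT NULL,           -- unique plus not null
  marks   NUMBER(5,2) DEFAULT 0
          CHECK (marks BETWEEN 0 AND 100),        -- default plus check
  dept_id INTEGER REFERENCES department (dept_id) -- foreign key
);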
2. Null Constraints
Comparisons Involving NULL and Three-Valued Logic:
SQL has various rules for dealing with NULL values. Recall from Section 3.1.2 that NULL is used to represent a missing
value, but that it usually has one of three different interpretations—value unknown (exists but is not known), value
not available (exists but is purposely withheld), or value not applicable (the attribute is undefined for this tuple).
Consider the following examples to illustrate each of the meanings of NULL.
1. Unknown value. A person's date of birth is not known, so it is represented by NULL in the database.
2. Unavailable or withheld value. A person has a home phone but does not want it to be listed, so it is withheld
and represented as NULL in the database.
3. Not applicable attribute. An attribute Last_College_Degree would be NULL for a person who has no college
degrees because it does not apply to that person.
3. Enterprise Constraints
Enterprise constraints – sometimes referred to as semantic constraints – are additional rules specified by users or
database administrators and can be based on multiple tables.
Here are some examples.
A class can have a maximum of 30 students.
A teacher can teach a maximum of four classes per semester.
An employee cannot take part in more than five projects.
The salary of an employee cannot exceed the salary of the employee’s manager.
4. Key Constraints or Uniqueness Constraints:
These are called uniqueness constraints since they ensure that every tuple in the relation is unique.
A relation can have multiple keys or candidate keys(minimal superkey), out of which we choose one of the keys as
primary key, we don’t have any restriction on choosing the primary key out of candidate keys, but it is suggested to
go with the candidate key with less number of attributes.
Null values are not allowed in the primary key, hence Not Null constraint is also a part of key constraint.
5. Domain, Field, and Row Integrity Constraints
A domain of possible values must be associated with every attribute (for example, integer types, character types, date/time types). Declaring an attribute to be of a particular domain acts as a constraint on the values it can take. Domain integrity rules govern these values: a specific field/cell value must lie within the column's domain and represent a specific location within a table.
6. Referential Integrity Constraints
A foreign key value must either match a primary key value in the referenced relation or be NULL.
7. Assertions constraints
An assertion is any condition that the database must always satisfy. Domain constraints and Integrity constraints
are special forms of assertions.
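A sketch in standard SQL; CREATE ASSERTION is defined by the SQL standard but implemented by few products, and the tables here are hypothetical:
CREATE ASSERTION salary_constraint
CHECK (NOT EXISTS
       (SELECT *
        FROM employee e, employee m
        WHERE e.salary > m.salary        -- no employee may earn more
          AND e.manager_id = m.emp_id)); -- than his or her manager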
8. Authorization constraints
We may want to differentiate among the users as far as the type of access they are permitted to various data values
in the database. This differentiation is expressed in terms of Authorization.
The most common being:
Read authorization – which allows reading but not the modification of data;
Insert authorization – which allows the insertion of new data but not the modification of existing data
Update authorization – which allows modification, but not deletion.
The types of constraints we discussed so far may be called state constraints because they define the constraints that
a valid state of the database must satisfy. Another type of constraint, called transition constraints, can be defined
to deal with state changes in the database. An example of a transition constraint is: “the salary of an employee can
only increase.”
What is the use of data constraints?
Constraints are used to:
Avoid bad data being entered into tables.
At the database level, it helps to enforce business logic.
Improves database performance.
Enforces uniqueness and avoid redundant data to the database.
END
SQL version:
1970 – Dr. Edgar F. “Ted” Codd described a relational model for databases.
1974 – Structured Query Language appeared.
1978 – IBM released a product called System/R.
1986 – SQL1: IBM had developed the prototype of a relational database, and its SQL was standardized by ANSI.
1989 – First minor revision; little of the standard changed.
1992 – SQL2 launched with features like triggers, object orientation, etc.
SQL1999 to 2003- SQL3 launched
SQL2006- Support for XML Query Language and OOP Incorporation with language.
SQL2011-improved support for temporal databases
The SQL standards run from SQL-86 in 1986 to the most recent edition, SQL:2016.
SQL-86
The first SQL standard was SQL-86. It was published in 1986 as ANSI standard and in 1987 as International
Organization for Standardization (ISO) standard. The starting point for the ISO standard was IBM’s SQL standard
implementation. This version of the SQL standard is also known as SQL 1.
SQL-89
The next SQL standard was SQL-89, published in 1989. This was a minor revision of the earlier standard, a superset
of SQL-86 that replaced SQL-86. The size of the standard did not change.
SQL-92
The next revision of the standard was SQL-92 – and it was a major revision. The language introduced by SQL-92 is
sometimes referred to as SQL 2. The standard document grew from 120 to 579 pages. However, much of the growth
was due to more precise specifications of existing features.
The most important new features were:
An explicit JOIN syntax and the introduction of outer joins: LEFT JOIN, RIGHT JOIN, FULL JOIN.
The introduction of NATURAL JOIN and CROSS JOIN
SQL:1999
SQL:1999 (also called SQL 3) was the fourth revision of the SQL standard. Starting with this version, the standard
name used a colon instead of a hyphen to be consistent with the names of other ISO standards. This standard was
published in multiple installments between 1999 and 2002.
In 1993, the ANSI and ISO development committees decided to split future SQL development into a multi-part
standard.
SQL Evolution and Parts of the SQL Standard (the first installment of 1995 and SQL:1999 had many parts):
Part 1: SQL/Framework (100 pages) defined the fundamental concepts of SQL.
Part 2: SQL/Foundation (1050 pages) defined the fundamental syntax and operations of SQL: types, schemas, tables,
views, query and update statements, expressions, and so forth. This part is the most important for regular SQL users.
Part 3: SQL/CLI (Call Level Interface) (514 pages) defined an application programming interface for SQL.
Part 4: SQL/PSM (Persistent Stored Modules) (193 pages) defined extensions that make SQL procedural.
Part 5: SQL/Bindings (270 pages) defined methods for embedding SQL statements in application programs written in a standard programming language. The Dynamic SQL and Embedded SQL bindings were taken from SQL-92. There was no active new work at the time, although C++ and Java interfaces were under discussion.
Part 6: SQL/XA. An SQL specialization of the popular XA Interface developed by X/Open (see below).
Part 7: SQL/Temporal. A newly approved SQL subproject to develop enhanced facilities for temporal data
management using SQL.
Part 8: SQL Multimedia (SQL/Mm)
A new ISO/IEC international standardization project for the development of an SQL class library for multimedia
applications was approved in early 1993. This new standardization activity, named SQL Multimedia (SQL/MM), will
specify packages of SQL abstract data type (ADT) definitions using the facilities for ADT specification and invocation
provided in the emerging SQL3 specification.
SQL:2006 further specified how to use SQL with XML. It was not a revision of the complete SQL standard, just of
Part 14, which deals with SQL-XML interoperability.
The most recent full revision of the standard is SQL:2016. In 2019, Part 15 was added, which defines
multidimensional array support in SQL.
Static or Embedded SQL are SQL statements in an application that do not change at runtime and,
therefore, can be hard-coded into the application. This is a central idea of embedded SQL: placing SQL
statements in a program written in a host programming language. The embedded SQL shown in Embedded SQL
Example is known as static SQL.
Traditional SQL interfaces used an embedded SQL approach. SQL statements were placed directly in an
application's source code, along with high-level language statements written in C, COBOL, RPG, and other
programming languages. The source code then was precompiled, which translated the SQL statements
into code that the subsequent compile step could process. This method is referred to as static SQL. One
performance advantage to this approach is that SQL statements were optimized at the time the high-level
program was compiled, rather than at runtime while the user was waiting. Static SQL statements in the
same program are treated normally.
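A minimal embedded SQL sketch (the host language here is assumed to be C, and the emp table, its columns, and the host variables are illustrative). The statement text is fixed when the program is precompiled, which is exactly what makes it static SQL:
EXEC SQL BEGIN DECLARE SECTION;
    int  emp_id;
    char emp_name[51];
EXEC SQL END DECLARE SECTION;

/* statement is fixed at precompile time -- static SQL; :emp_id and :emp_name are host variables */
EXEC SQL SELECT ename INTO :emp_name
         FROM emp
         WHERE empno = :emp_id;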
Common table expressions (CTEs) let you temporarily name the result set of a subquery. You then refer to it like a
normal table elsewhere in your query. This can make your SQL easier to write and understand later. CTEs are
introduced by a WITH clause placed before the SELECT statement.
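A minimal sketch (the employees table and its columns are assumed):
WITH dept_counts AS (
  SELECT dept_id, COUNT(*) AS emp_count
  FROM employees
  GROUP BY dept_id
)
SELECT d.dept_id, d.emp_count
FROM dept_counts d
WHERE d.emp_count > 10;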
Recursive common table expression (CTE)
A recursive CTE (RCTE) is a CTE that references itself. The CTE executes repeatedly, returning subsets of data,
until it returns the complete result set.
A recursive CTE is useful for querying hierarchical data, such as an organization chart where one employee reports
to a manager, or a multi-level bill of materials where a product consists of many components and each component
itself also consists of many other components. A sketch follows.
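A minimal recursive CTE sketch for an organization chart (the employees table with emp_id, emp_name, and manager_id columns is assumed; some systems require the keyword WITH RECURSIVE):
WITH org_chart (emp_id, emp_name, manager_id, lvl) AS (
  -- anchor member: the top of the hierarchy
  SELECT emp_id, emp_name, manager_id, 1
  FROM employees
  WHERE manager_id IS NULL
  UNION ALL
  -- recursive member: direct reports of the previous level
  SELECT e.emp_id, e.emp_name, e.manager_id, o.lvl + 1
  FROM employees e
  JOIN org_chart o ON e.manager_id = o.emp_id
)
SELECT * FROM org_chart;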
For a final example, suppose we have a cycle in the data. By adding one more row to the table, there is
now a flight from Cairo to Paris and one from Paris to Cairo. Without accounting for possible cyclic data
like this, it is quite easy to generate a query that will go into an infinite loop processing the data.
The following statements illustrate this. The recursive query is a reconstructed sketch (the original listing was
garbled); it follows the FLIGHTS example above, and the cycle guard on the itinerary column is what prevents the
infinite loop. Some systems require WITH RECURSIVE:
INSERT INTO FLIGHTS VALUES ('Cairo', 'Paris', 'Euro Air', '1134', 440);
WITH destinations (departure, arrival, connections, cost, itinerary) AS
  (SELECT f.departure, f.arrival, 1, f.price,
          CAST(f.departure || f.arrival AS VARCHAR(2000))
     FROM flights f
    WHERE f.departure = 'New York'
   UNION ALL
   SELECT r.departure, b.arrival, r.connections + 1, r.cost + b.price,
          CAST(r.itinerary || b.arrival AS VARCHAR(2000))
     FROM destinations r, flights b
    WHERE r.arrival = b.departure
      -- cycle guard: skip any city that already appears in the itinerary
      AND POSITION(b.arrival IN r.itinerary) = 0)
SELECT departure, arrival, connections, cost, itinerary
  FROM destinations;
Oracle's hierarchical queries guard against cycles with the NOCYCLE keyword instead. The example below has been
repaired (the original omitted the SELECT list and had unbalanced START WITH clauses):
SELECT s.subject_id, ts.topic_id, tsq.question_id
  FROM training.training_subject s,
       training.subject_wise_topic ts,
       training.training_topic t,
       training.training_subject_question tsq,
       training.training_question_setup tqs
 WHERE s.subject_id = ts.subject_id
   AND t.training_topic_id = ts.topic_id
   AND ts.subject_id = tsq.subject_id
   AND ts.topic_id = tsq.training_topic_id
   AND tsq.question_id = tqs.question_id
   AND s.active = 'Y'
   AND s.category_id = 'TCG004'
   AND ts.active = 'Y'
 START WITH ts.topic_id IN (SELECT tsq2.training_topic_id
                              FROM training.training_subject_question tsq2
                             WHERE tsq2.active = 'Y')
CONNECT BY NOCYCLE PRIOR tsq.training_topic_id = ts.topic_id;
Query-By-Example (QBE)
Query-By-Example (QBE) is the first interactive database query language to exploit such modes of HCI. In QBE, a
query is constructed on an interactive terminal involving two-dimensional ‘drawings’ of one or more relations,
visualized in tabular form, which are filled in selected columns with ‘examples’ of data items to be retrieved (thus
the phrase query-by-example).
It is different from SQL, and from most other database query languages, in having a graphical user interface that
allows users to write queries by creating example tables on the screen.
QBE, like SQL, was developed at IBM and QBE is an IBM trademark, but a number of other companies sell QBE-like
interfaces, including Paradox.
A convenient shorthand notation is that if we want to print all fields in some relation, we can place P. under the
name of the relation. This notation is like the SELECT * convention in SQL. It is equivalent to placing a P. in every
field:
Example of QBE:
III. Physical design. The physical design step involves the selection of indexes (access methods), partitioning, and
clustering of data. The logical design methodology in step II simplifies the approach to designing large relational
databases by reducing the number of data dependencies that need to be analyzed. This is accomplished by inserting
conceptual data modeling and integration steps (II(a) and II(b) of pictures into the traditional relational design
approach.
IV. Database implementation, monitoring, and modification.
Once the design is completed, the database can be created through the implementation of the formal schema
using the data definition language (DDL) of a DBMS.
The relation in the image above has degree = 4, cardinality = 5, and 20 data values (cells).
Characteristics of a relation
1. Distinct relation/table name
2. Relations are unordered
3. Each cell (field) contains exactly one atomic (single) value
4. No repeating groups
5. Distinct attribute names
6. Values of an attribute come from the same domain
7. The order of attributes has no significance
8. Formally, however, the attributes in R(A1, ..., An) and the values in t = <v1, v2, ..., vn> are ordered
9. Each tuple is distinct
10. The order of tuples has no significance
11. Tuples may be stored and retrieved in an arbitrary order
12. Tables manage attributes; they store information in the form of attributes only
13. Tables contain rows; each row is one record only
14. All rows in a table have the same columns; columns are also called fields
15. Each field has a data type and a name
16. A relation must contain at least one attribute (column) that identifies each tuple (row) uniquely
-- reconstructed: the original listing was truncated; the table name and commit option are assumed
CREATE GLOBAL TEMPORARY TABLE toys_gtt (
  toy_name VARCHAR2(100)
) ON COMMIT DELETE ROWS;
The global temp table is accessible to everyone. You create it once, it is registered in the data dictionary, and it
lives "forever"; "global" pertains to the schema definition.
Private/Local Temporary Tables
Starting in Oracle Database 18c, you can create private temporary tables. These tables are only visible in your
session. Other sessions can't see the table!
Temporary tables can be very useful for keeping temporary data. A local temporary table is created "on the fly"
and disappears after its use; you never see it in the data dictionary.
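A minimal private temporary table sketch (Oracle 18c syntax; by default the name must start with the ORA$PTT_ prefix, and the table and columns here are illustrative):
CREATE PRIVATE TEMPORARY TABLE ora$ptt_my_temp (
  id   NUMBER,
  note VARCHAR2(50)
) ON COMMIT DROP DEFINITION;  -- the table definition vanishes at commit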
Details of temp tables:
A temporary table is owned by the person who created it and can only be accessed by that user.
A global temporary table is accessible to everyone and will contain data specific to the session using it;
multiple sessions can use the same global temporary table simultaneously. It is a global definition for a temporary
table that all can benefit from.
Local temporary table – These tables are invisible when there is a connection and are deleted when it is closed.
Temporary tables are available in MySQL version 3.23 onwards.
Clone Tables
There may be a situation when you need an exact copy of a table, and the CREATE TABLE ... or SELECT ... commands
do not suit your purposes because the copy must include the same indexes, default values, and so forth.
There are Magic Tables (virtual tables) in SQL Server that hold the temporal information of recently inserted and
recently deleted data in the virtual table.
The INSERTED magic table stores the new (after) version of the row, and the DELETED table stores the old (before)
version of the row for any INSERT, UPDATE, or DELETE operation.
A record is a collection of data objects kept in fields, each with its own name and datatype. A record can be
thought of as a variable that can store a table row or a set of columns from a table row; the record's fields
correspond to the table's columns.
External Tables
An external table is a read-only table whose metadata is stored in the database but whose data is
stored outside the database.
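A minimal Oracle external table sketch (the directory object my_dir and the file emp.csv are assumed to exist):
CREATE TABLE emp_ext (
  emp_id   NUMBER,
  emp_name VARCHAR2(100)
)
ORGANIZATION EXTERNAL (
  TYPE ORACLE_LOADER
  DEFAULT DIRECTORY my_dir
  ACCESS PARAMETERS (
    RECORDS DELIMITED BY NEWLINE
    FIELDS TERMINATED BY ','
  )
  LOCATION ('emp.csv')
);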
Table Partitioning: in this way, large tables can be broken down into smaller, more manageable parts.
In some systems, a non-partitioned table cannot store more than 2 billion rows. It is possible to overcome this
limit by distributing the rows across several partitions, as long as each partition contains no more than 2 billion rows.
Parallelization
Partitioning allows operations to be parallelized by using several execution threads for each table.
Horizontal partitioning divides a table into multiple tables that contain the same number of columns, but fewer rows.
Vertical partitioning splits a table into two or more tables containing different columns.
Table partitioning vertically (Table columns)
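A minimal horizontal (range) partitioning sketch in Oracle syntax (the sales table and partition names are illustrative):
CREATE TABLE sales (
  sale_id   NUMBER,
  sale_date DATE,
  amount    NUMBER
)
PARTITION BY RANGE (sale_date) (
  PARTITION p2021 VALUES LESS THAN (DATE '2022-01-01'),
  PARTITION p2022 VALUES LESS THAN (DATE '2023-01-01'),
  PARTITION pmax  VALUES LESS THAN (MAXVALUE)  -- catch-all for later dates
);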
Collections vs. Records
Collections: all items are of the same data type; same-type items are called elements; to declare a collection
variable you can use %TYPE; lists and arrays are examples.
Records: items may be of different data types; different-type items are called fields; to declare a record variable
you can use %ROWTYPE or %TYPE; tables and columns are examples.
By default, tables are heap-organized. This means the database is free to store rows wherever there is space. You
can add the "organization heap" clause if you want to be explicit.
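For example (Oracle syntax; the table is illustrative):
CREATE TABLE toys_heap (
  toy_name VARCHAR2(100)
) ORGANIZATION HEAP;  -- explicit, though heap is already the default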
LEFT JOIN and LEFT OUTER JOIN are the same; likewise, RIGHT JOIN and RIGHT OUTER JOIN are the same, and
FULL JOIN and FULL OUTER JOIN are the same.
A full outer join returns all rows from both tables, e.g., the child and persons tables.
select * from persons ps cross join child cd; -- returns 3 * 4 = 12 rows if the
left table has 3 rows and the right table has 4 rows
select * from child ps cross join persons cd where ps.childid <> 4; -- a
comparison operator filters the product: it returns all row combinations except
those with childid 4; a logical operator such as NOT works the same way, as the
queries below show.
select * from child CD cross join persons PS where PS.PERSONID BETWEEN 1 AND 4;
select * from child CD cross join persons PS where PS.PERSONID NOT BETWEEN 1
AND 4;
"TOPIC_NAME" VARCHAR2(255),
"DESCRIPTION" VARCHAR2(255),
CONSTRAINT "TOPIC_ID_PK" PRIMARY KEY ("TOPIC_ID")
USING INDEX PCTFREE 10 INITRANS 2 MAXTRANS 255
STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1
BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
TABLESPACE "USERS" ENABLE
) SEGMENT CREATION IMMEDIATE
PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255
NOCOMPRESS LOGGING
STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1
BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
TABLESPACE "USERS" ;
CREATE UNIQUE INDEX "EMR"."TOPIC_ID_PK" ON "EMR"."TOPIC" ("TOPIC_ID")
PCTFREE 10 INITRANS 2 MAXTRANS 255
STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1
BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
TABLESPACE "USERS" ;
ALTER TABLE "EMR"."TOPIC" MODIFY ("TOPIC_ID" NOT NULL ENABLE);
ALTER TABLE "EMR"."TOPIC" ADD CONSTRAINT "TOPIC_ID_PK" PRIMARY KEY
("TOPIC_ID")
USING INDEX PCTFREE 10 INITRANS 2 MAXTRANS 255
STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1
BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
TABLESPACE "USERS" ENABLE;
Scalar Subqueries
Scalar subqueries return one column and at most one row. You can replace a column with a scalar subquery in most
cases.
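A minimal sketch (the employees and departments tables are assumed):
SELECT e.emp_name,
       (SELECT d.dept_name
          FROM departments d
         WHERE d.dept_id = e.dept_id) AS dept_name  -- one column, at most one row
  FROM employees e;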
We can once again be faced with possible ambiguity among attribute names if attributes of the same name exist—
one in a relation in the FROM clause of the outer query, and another in a relation in the FROM clause of the nested
query. The rule is that a reference to an unqualified attribute refers to the relation declared in the innermost nested
query.
On the other hand, when we TRUNCATE a table, the table structure remains the same, so you will not face any of
the above problems.
In general, ANSI SQL permits the use of ON DELETE and ON UPDATE clauses to cover
CASCADE, SET NULL, or SET DEFAULT.
MS Access, SQL Server, and Oracle support ON DELETE CASCADE.
MS Access and SQL Server support ON UPDATE CASCADE.
Oracle does not support ON UPDATE CASCADE.
Oracle supports SET NULL.
MS Access and SQL Server do not support SET NULL.
Refer to your product manuals for additional information on referential constraints.
While MS Access does not support ON DELETE CASCADE or ON UPDATE CASCADE at the SQL command-line level,
cascading updates and deletes can be enabled when defining relationships through its graphical interface.
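A minimal referential-action sketch (the customers table is assumed to exist):
CREATE TABLE orders (
  order_id    NUMBER PRIMARY KEY,
  customer_id NUMBER,
  CONSTRAINT fk_orders_customer
    FOREIGN KEY (customer_id)
    REFERENCES customers (customer_id)
    ON DELETE CASCADE  -- deleting a customer deletes that customer's orders
);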
A view is a virtual relation: one that does not physically exist but is dynamically derived. It can be constructed by
performing operations (i.e., select, project, join, etc.) on the values of existing base relations (named relations
designed in the conceptual schema, whose tuples are physically stored in the database). Views are visible in the
external schema.
Types of View
1. User-defined view
a. Simple view (Single table view)
b. Complex View (Multiple tables having joins, group by, and functions)
c. Inline View (Based on a subquery in from clause to create a temp table and form a complex
query)
d. Materialized View (stores physical data and table definitions; holds query results, so re-execution
is not required for repeated queries)
e. Dynamic view
f. Static view
2. Database View
3. System Defined Views
4. Information Schema View
5. Catalog View
6. Dynamic Management View
7. Server-scoped Dynamic Management View
8. Sources of Data Dictionary Information View
a. General Views
b. Transaction Service Views
c. SQL Service Views
Advantages of views:
Provide security
Hide specific parts of the database from certain users
Customize base relations based on users' needs
Support the external model
Provide logical independence
Views don't store data in a physical location.
Views can provide access restriction, since data insertion, update, and deletion are not always possible through a
view.
We can perform DML on a view if it is derived from a single base relation and contains the primary key or a
candidate key. A simple sketch follows.
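A minimal sketch (the employees table is assumed):
CREATE VIEW high_paid_emps AS
SELECT emp_id, emp_name, salary
  FROM employees
 WHERE salary > 10000;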
Advantages of materialized views
MVs are schema objects with storage.
In MVs, the underlying query results are stored in separate storage.
Data in MVs is refreshed periodically, depending on the requirement.
The data in MVs might therefore not be the latest.
MVs are mostly used for data warehousing, business intelligence, and reporting purposes.
MVs can be set to refresh manually or on a schedule, as in the sketch below.
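A minimal sketch (Oracle syntax; the employees table is assumed):
CREATE MATERIALIZED VIEW emp_summary
BUILD IMMEDIATE                 -- populate the MV at creation time
REFRESH COMPLETE ON DEMAND      -- refresh manually, e.g., via DBMS_MVIEW.REFRESH
AS
SELECT dept_id, COUNT(*) AS emp_count, AVG(salary) AS avg_salary
  FROM employees
 GROUP BY dept_id;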
END
In most cases, if you can place your relations in the third normal form (3NF), then you will have avoided most of
the problems common to bad relational designs. Boyce-Codd normal form (BCNF) and the fourth normal form
(4NF) handle special situations that arise only occasionally.
Denormalization in Databases
Denormalization is a database optimization technique in which we add redundant data to one or more tables. This
can help us avoid costly joins in a relational database. Note that denormalization does not mean not doing
normalization. It is an optimization technique that is applied after normalization.
Types of Denormalization
The two most common types of denormalization are two entities in a one-to-one relationship and two entities in a
one-to-many relationship.
Pros of denormalization:
Retrieving data is faster since we do fewer joins.
Queries to retrieve data can be simpler (and therefore less likely to have bugs), since we need to look at fewer
tables.
Cons of denormalization:
Updates and inserts are more expensive, and denormalization can make update and insert code harder to write.
Data may be inconsistent: which is the "correct" value for a piece of data?
Data redundancy necessitates more storage. A small sketch follows.
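A minimal denormalization sketch (the employees and departments tables are assumed): store dept_name redundantly on employees so reads avoid a join.
-- add the redundant column, then populate it from the normalized source
ALTER TABLE employees ADD (dept_name VARCHAR2(100));

UPDATE employees e
   SET e.dept_name = (SELECT d.dept_name
                        FROM departments d
                       WHERE d.dept_id = e.dept_id);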
Relational Decomposition
Decomposition is used to eliminate some of the problems of bad design like anomalies, inconsistencies, and
redundancy.
When a relation in the relational model is not in an appropriate normal form, decomposition of the relation is
required: the table is broken into multiple tables.
Types of Decomposition
1 Lossless Decomposition
If no information is lost from the relation that is decomposed, then the decomposition is lossless. The
process of normalization depends on being able to factor or decompose a table into two or more smaller tables, in
such a way that we can recapture the precise content of the original table by joining the decomposed parts.
2 Lossy Decomposition
In a lossy decomposition, information is lost: joining the decomposed tables cannot reconstruct the original
relation exactly.
END
Functional Dependency (FD) is a constraint that determines the relation of one attribute to another attribute. A
functional dependency is denoted by an arrow "→"; X → Y means that Y is functionally dependent on X.
For example, if we know the value of the employee number, we can obtain the employee name, city, salary, etc. By
this, we can say that city, employee name, and salary are functionally dependent on the employee number.
Key terms for functional dependency in a database:
Axiom – Axioms are a set of inference rules used to infer all the functional dependencies on a relational database.
Inclusion Dependency
Multivalued dependency and join dependency can be used to guide database design although they both are less
common than functional dependencies. The inclusion dependency is a statement in which some columns of a
relation are contained in other columns.
Transitive Dependency
When an indirect relationship causes functional dependency it is called Transitive Dependency.
Fully-functionally Dependency
An attribute is fully functionally dependent on another attribute if it is functionally dependent on that attribute
and not on any of its proper subsets.
Trivial functional dependency
A → B is a trivial functional dependency if B is a subset of A.
The following dependencies are also trivial: A → A, B → B
Example: {DeptId, DeptName} → DeptId
Non-trivial functional dependency
A → B is a non-trivial functional dependency if B is not a subset of A, e.g., DeptId → DeptName.
Completely non-trivial − If an FD X → Y holds where X ∩ Y = Φ (X and Y share no attributes), it is said to be a
completely non-trivial FD.
Multivalued dependency and related dependency types
1. Join dependency – join decomposition is a further generalization of multivalued dependencies.
2. Inclusion dependency
Example of Dependency diagrams and flow
Dependency Preserving
If a relation R is decomposed into relations R1 and R2, then the dependencies of R either must be a part of R1 or
R2 or must be derivable from the combination of functional dependencies of R1 and R2.
For example, suppose there is a relation R(A, B, C, D) with functional dependency set {A → BC}. R is
decomposed into R1(ABC) and R2(AD); this decomposition is dependency preserving because FD A → BC is
contained in relation R1(ABC).
Find the canonical cover.
Solution: Given FD = {B → A, AD → BC, C → ABD}, first decompose the FDs using the decomposition rule (Armstrong's
axioms):
B → A
AD → B, AD → C
C → A, C → B, C → D
Next remove the redundant dependencies: C → A follows from C → B and B → A, and AD → B follows from AD → C
and C → B. Recombining the remaining dependencies gives the canonical cover {B → A, AD → C, C → BD}.
END
Consistency: the word consistency means that values are always preserved; the database remains
consistent before and after the transaction.
Isolation and levels of isolation: the term 'isolation' means separation. Changes made by a
transaction are not visible to other transactions until the change is committed.
A transaction isolation level is defined by the following phenomena:
The five concurrency problems that can occur in the database are:
1. Temporary Update Problem
2. Incorrect Summary Problem
3. Lost Update Problem
4. Unrepeatable Read Problem
5. Phantom Read Problem
Dirty Read – A Dirty read is a situation when a transaction reads data that has not yet been committed. For
example, Let’s say transaction 1 updates a row and leaves it uncommitted, meanwhile, Transaction 2 reads the
updated row. If transaction 1 rolls back the change, transaction 2 will have read data that is considered never to
have existed. (Dirty Read Problems (W-R Conflict))
Lost Updates occur when multiple transactions select the same row and update the row based on the value
selected (Lost Update Problems (W - W Conflict))
Non Repeatable read – Non Repeatable read occurs when a transaction reads the same row twice and gets a
different value each time. For example, suppose transaction T1 reads data. Due to concurrency, another
transaction T2 updates the same data and commits, Now if transaction T1 rereads the same data, it will retrieve a
different value. (Unrepeatable Read Problem (W-R Conflict))
Phantom Read – Phantom Read occurs when two same queries are executed, but the rows retrieved by the two,
are different. For example, suppose transaction T1 retrieves a set of rows that satisfy some search criteria. Now,
Transaction T2 generates some new rows that match the search criteria for transaction T1. If transaction T1 re-
executes the statement that reads the rows, it gets a different set of rows this time.
Based on these phenomena, the SQL standard defines four isolation levels :
Read Uncommitted – Read Uncommitted is the lowest isolation level. In this level, one transaction may read
not yet committed changes made by another transaction, thereby allowing dirty reads. At this level, transactions
are not isolated from each other.
Read Committed – This isolation level guarantees that any data read was committed at the moment it is read.
Thus it does not allow dirty reads. The transaction holds a read or write lock on the current row, and thus
prevents other transactions from reading, updating, or deleting it.
Repeatable Read – A more restrictive isolation level. The transaction holds read locks on all rows it
references and write locks on all rows it inserts, updates, or deletes. Since other transactions cannot read, update,
or delete these rows, it avoids non-repeatable reads.
Serializable – This is the highest isolation level. A serializable execution is defined to be an execution of
operations in which concurrently executing transactions appear to be serially executing. A sketch of selecting a
level follows.
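A minimal sketch of setting the level (standard syntax; Oracle supports only READ COMMITTED and SERIALIZABLE):
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;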
Durability: durability ensures permanency. In a DBMS, durability ensures that after the successful execution of an
operation the data becomes permanent in the database: if a transaction is committed, its effects survive errors,
power loss, etc.
ACID Example:
States of Transaction
Begin, active, partially committed, failed, committed, end, aborted
Aborted details are necessary
If any of the checks fail and the transaction has reached a failed state then the database recovery system will make
sure that the database is in its previous consistent state. If not then it will abort or roll back the transaction to bring
the database into a consistent state.
If a transaction fails in the middle, all the operations it has executed are rolled back to return the database to its
consistent state. After aborting the transaction, the database recovery module selects one of two operations:
1) restart the transaction, or 2) kill the transaction.
The scheduler
A module that schedules the transaction’s actions, ensuring serializability
Two main approaches
1. Pessimistic: locks
2. Optimistic: time stamps, MV, validation
Scheduling
A scheduler is responsible for maintaining jobs/transactions when many jobs are entered at the
same time (by multiple users), controlling the order in which their read/write operations execute.
A schedule is a sequence of interleaved actions from all transactions; it executes several transactions while
preserving the order of the R(A) and W(A) operations within each individual transaction.
Note: Two schedules are equivalent if:
Two Schedules are equivalent if they have the same dependencies.
They contain the same transactions and operations
They order all conflicting operations of non-aborting transactions in the same way
A schedule is serializable if it is equivalent to a serial schedule
Process Scheduling handles the selection of a process for the processor on the basis of a
scheduling algorithm and also the removal of a process from the processor. It is an important part of
multiprogramming in operating system.
Process scheduling involves short-term scheduling, medium-term scheduling and long-term scheduling.
The major differences between long term, medium term and short term scheduler are as follows –
Long-term scheduler: also called the job scheduler. It is slower than the short-term scheduler. It controls the
degree of multiprogramming. It is almost nil or minimal in a time-sharing system. It selects processes from the
pool and loads them into memory for execution.
Medium-term scheduler: a process-swapping scheduler. Its speed lies between the short-term and long-term
schedulers. It reduces the degree of multiprogramming. It is a part of the time-sharing system. It can reintroduce a
process into memory so that execution can be continued.
Short-term scheduler: also called the CPU scheduler. It is the fastest of the three. It provides less control over the
degree of multiprogramming. It is also minimal in a time-sharing system. It selects those processes that are ready
to execute.
Serial Schedule
The serial schedule is a type of schedule where one transaction is executed completely before starting another
transaction.
Example of Serial Schedule
Non-Serial Schedule
If interleaving of operations is allowed, then there will be a non-serial schedule.
Serializability is a guarantee about transactions over one or more objects
Doesn’t impose real-time constraints
The schedule is serializable if the precedence graph is acyclic
The serializability of schedules is used to find non-serial schedules that allow the transaction to execute
concurrently without interfering with one another.
Example of Serializable
A serializable schedule always leaves the database in a consistent state. A serial schedule is always a
serializable schedule because, in a serial schedule, a transaction only starts when the other transaction finished
execution. However, a non-serial schedule needs to be checked for Serializability.
A non-serial schedule of n transactions is said to be a serializable schedule if it is equivalent to a serial
schedule of those n transactions. A serial schedule doesn't allow concurrency: only one transaction executes at a
time, and the next starts when the already running transaction has finished.
Linearizability: a guarantee about single operations on single objects Once the write completes, all later reads
(by wall clock) should reflect that write.
Types of Serializability
There are two types of Serializability.
Conflict Serializability
View Serializability
Conflict Serializable: a schedule is conflict serializable if it is equivalent to some serial schedule;
non-conflicting operations can be reordered to get a serial schedule. If a schedule is conflict serializable, then it is
also view serializable, but not vice versa.
View serializability/view equivalence is a concept that is used to compute whether schedules are View-
Serializable or not. A schedule is said to be View-Serializable if it is view equivalent to a Serial Schedule (where no
interleaving of transactions is possible).
The non-serializable schedule is divided into two types, Recoverable and Non-recoverable Schedules.
1. Recoverable Schedule (subtypes: cascading schedule, cascadeless schedule, strict schedule). In a recoverable
schedule, if a transaction T commits, then any other transaction that T read from must also have committed.
That is, a schedule is recoverable if, whenever a transaction T commits, all transactions that have written elements
read by T have already committed.
2. Non-Recoverable Schedule
The relation between various types of schedules can be depicted as:
3. Serial schedules satisfy the constraints of recoverable, cascadeless, and strict schedules, and hence form a
subset of strict schedules.
Note: Linearizability + serializability = strict serializability
Transaction behavior equivalent to some serial execution
And that serial execution agrees with real-time
Serializability Theorems
Wormhole Theorem: A history is isolated if, and only if, it has no wormhole transactions.
Locking Theorem: If all transactions are well-formed and two-phase, then any legal history will be isolated.
Locking Theorem (converse): If a transaction is not well-formed or is not two-phase, then it is possible to write
another transaction, such that the resulting pair is a wormhole.
Rollback Theorem: An update transaction that does an UNLOCK and then a ROLLBACK is not two-phase.
The Thomas write rule preserves a serializable order for the protocol and improves the basic timestamp
ordering algorithm. For a write operation by transaction T on item X, the rules are:
If TS(T) < R_TS(X), a later transaction has already read X: transaction T is aborted and rolled back, and the
operation is rejected.
If TS(T) < W_TS(X), the write is obsolete: don't execute the W_item(X) operation and continue
processing (this skip is the rule's key relaxation).
Otherwise, execute the write and set W_TS(X) to TS(T).
Different Types of reading Write Conflict in DBMS
As mentioned earlier, the read operation is safe because it does not modify any information, so there is no
Read-Read (RR) conflict in the database. There are therefore three types of conflicts in database transactions.
Problem 1: Reading Uncommitted Data (WR Conflicts)
Reading the value of an uncommitted object might yield an inconsistency
Dirty Reads or Write-then-Read (WR) Conflicts.
Problem 2: Unrepeatable Reads (RW Conflicts)
Reading the same object twice might yield an inconsistency
Read-then-Write (RW) Conflicts (Write-After-Read)
Problem 3: Overwriting Uncommitted Data (WW Conflicts)
Overwriting an uncommitted object might yield an inconsistency
What is Write-Read (WR) conflict?
This conflict occurs when a transaction reads data that was written by another transaction that has not yet committed.
What is Read-Write (RW) conflict?
Transaction T2 is Writing data that is previously read by transaction T1.
Here if you look at the diagram above, data read by transaction T1 before and after T2 commits is different.
What is Write-Write (WW) conflict?
Here Transaction T2 is writing data that is already written by other transaction T1. T2 overwrites the data written
by T1. It is also called a blind write operation.
Data written by T1 has vanished. So it is data update loss.
Phase Commit (PC)
One-phase commit
The Single Phase Commit protocol is more efficient at run time because all updates are done without any explicit
coordination.
BEGIN
INSERT INTO CUSTOMERS (ID,NAME,AGE,ADDRESS,SALARY)
VALUES (1, 'Ramesh', 32, 'Ahmedabad', 2000.00 );
INSERT INTO CUSTOMERS (ID,NAME,AGE,ADDRESS,SALARY)
VALUES (2, 'Khilan', 25, 'Delhi', 1500.00 );
COMMIT;
Two-Phase Commit (2PC)
The most commonly used atomic commit protocol is a two-phase commit. You may notice that is very similar to
the protocol that we used for total order multicast. Whereas the multicast protocol used a two-phase approach to
allow the coordinator to select a commit time based on information from the participants, a two-phase commit
lets the coordinator decide whether a transaction will be committed or aborted based on information from the
participants.
Three-phase Commit
Another real-world atomic commit protocol is a three-phase commit (3PC). This protocol can reduce the amount of
blocking and provide for more flexible recovery in the event of failure. Although it is a better choice in unusually
failure-prone environments, its complexity makes 2PC the more popular choice.
Transaction atomicity using a two-phase commit
Transaction serializability using distributed locking.
All lock requests are made to the concurrency-control manager. Transactions proceed only once the lock request is
granted. A lock is a variable, associated with the data item, which controls the access of that data item. Locking is
the most widely used form of concurrency control.
Deadlock Example:
1. Binary Locks: A Binary lock on a data item can either be locked or unlocked states.
2. Shared/exclusive: This type of locking mechanism separates the locks in DBMS based on their uses. If a
lock is acquired on a data item to perform a write operation, it is called an exclusive lock.
3. Simplistic Lock Protocol: This type of lock-based protocol allows transactions to obtain a lock on every
object before beginning operation. Transactions may unlock the data item after finishing the ‘write’
operation.
4. Pre-claiming Locking: the transaction requests all the locks it will need before execution begins. By contrast,
the Two-Phase Locking protocol (2PL) requires that a transaction never acquire a new lock after it has released
one; it has two phases, growing and shrinking.
5. Shared lock: These locks are referred to as read locks, and denoted by 'S'.
If a transaction T has obtained Shared-lock on data item X, then T can read X, but cannot write X. Multiple Shared
locks can be placed simultaneously on a data item.
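Continuing the shared/exclusive lock discussion, a minimal explicit-locking sketch (Oracle syntax; the accounts table is assumed):
-- exclusive row lock: no other transaction can update this row until commit/rollback
SELECT balance FROM accounts WHERE acct_id = 1 FOR UPDATE;

-- shared table lock
LOCK TABLE accounts IN SHARE MODE;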
A deadlock is an unwanted situation in which two or more transactions are waiting indefinitely for one another to
give up locks.
No preemption -- resources cannot be preempted; a resource can be released only voluntarily by the
process holding it.
Circular wait – one waits for others, others wait for one.
The Bakery algorithm is one of the simplest known solutions to the mutual exclusion problem for the general case
of N processes. It is a critical-section solution for N processes, and it preserves the first-come,
first-served property.
Before entering its critical section, the process receives a number. The holder of the smallest number enters the
critical section.
Deadlock detection
This technique allows deadlock to occur, but then, it detects it and solves it. Here, a database is periodically checked
for deadlocks. If a deadlock is detected, one of the transactions, involved in the deadlock cycle, is aborted. Other
transactions continue their execution. An aborted transaction is rolled back and restarted.
When a transaction waits more than a specific amount of time to obtain a lock (called the deadlock timeout),
Derby can detect whether the transaction is involved in a deadlock.
If deadlocks occur frequently in your multi-user system with a particular application, you might need to do some
debugging.
A deadlock where two transactions are waiting for one another to give up locks.
In the diagram below, the second graph has a deadlock; we abort one transaction to remove it. Transaction
execution proceeds T28 -> T26 -> T27 -> T25. In the second graph we can abort the transaction on the edge from
T28 to T27.
Phantom deadlock detection is the condition where the deadlock does not exist but due to a delay in propagating
local information, deadlock detection algorithms identify the locks that have been already acquired.
There are three alternatives for deadlock detection in a distributed system, namely.
Centralized Deadlock Detector − One site is designated as the central deadlock detector.
Hierarchical Deadlock Detector − Some deadlock detectors are arranged in a hierarchy.
Distributed Deadlock Detector − All the sites participate in detecting deadlocks and removing them.
The deadlock detection algorithm uses three data structures:
Available – a vector of length m that indicates the number of available resources of each type.
Allocation – an n x m matrix; Allocation[i, j] indicates the number of resources of type j currently allocated to
process Pi.
Request – an n x m matrix that indicates the outstanding request of each process; Request[i, j] is the number of
instances of resource type j that process Pi is requesting.
Deadlock Avoidance
Acquire locks in a pre-defined order
Acquire all locks at once before starting transactions
Aborting a transaction is not always a practical approach. Instead, deadlock avoidance mechanisms can be used to
detect any deadlock situation in advance.
The deadlock prevention technique avoids the conditions that lead to deadlocking. It requires that every
transaction lock all data items it needs in advance. If any of the items cannot be obtained, none of the items are
locked.
The transaction is then rescheduled for execution. The deadlock prevention technique is used in two-phase locking.
To prevent any deadlock situation in the system, the DBMS aggressively inspects all the operations, where
transactions are about to execute. If it finds that a deadlock situation might occur, then that transaction is never
allowed to be executed.
Deadlock Prevention Algorithm/protocols
1. Wait-Die scheme
2. Wound wait scheme
Note! Deadlock prevention is stricter than deadlock avoidance.
Wait-die: the older transaction may wait for the younger. Example − if T1 is older than T2, T1 is allowed to wait;
if T1 is younger than T2, T1 is aborted (dies) and later restarted.
Wound-wait: the younger transaction may wait for the older. Example − if T1 is older than T2, T2 is aborted
(wounded) and later restarted; if T1 is younger than T2, T1 is allowed to wait.
Note: In a bulky system, deadlock prevention techniques may work well.
Here, we want to develop an algorithm to avoid deadlock by making the right choice all the time
Dijkstra's Banker's Algorithm is an approach to trying to give processes as much as possible while guaranteeing
no deadlock.
Safe state -- a state is safe if the system can allocate resources to each process in some order and still avoid a
deadlock.
The Banker's Algorithm for a single resource type is a resource allocation and deadlock avoidance algorithm. The
name comes from an analogy: like a banker lending limited cash, the system never allocates resources in a way
that could leave it unable to satisfy all its customers.
In this, as a new process P1 enters, it declares the maximum number of resources it needs.
The system looks at those and checks if allocating those resources to P1 will leave the system in a safe state or not.
If after allocation, it will be in a safe state, the resources are allocated to process P1.
Otherwise, P1 should wait till the other processes release some resources.
This is the basic idea of Banker’s Algorithm.
A state is safe if the system can allocate all resources requested by all processes ( up to their stated maximums )
without entering a deadlock state.
Resource Preemption:
To eliminate deadlocks using resource preemption, we preempt some resources from processes and give those
resources to other processes. This method will raise three issues –
(a) Selecting a victim:
We must determine which resources and which processes are to be preempted and also order to minimize the
cost.
(b) Rollback:
We must determine what should be done with the process from which resources are preempted. One simple idea
is total rollback. That means aborting the process and restarting it.
(c) Starvation:
In a system, the same process may be always picked as a victim. As a result, that process will never complete its
designated task. This situation is called Starvation and must be avoided. One solution is that a process must be
picked as a victim only a finite number of times.
Concurrent execution is done for better transaction throughput and response time, achieved via better utilization
of resources.
What is Concurrency Control?
Concurrent access is quite easy if all users are just reading data. There is no way they can interfere with one another.
Though for any practical Database, it would have a mix of READ and WRITE operations, and hence the concurrency
is a challenge. DBMS Concurrency Control is used to address such conflicts, which mostly occur with a multi-user
system.
The concurrency control protocols ensure the atomicity, consistency, isolation, durability and serializability of the
concurrent execution of the database transactions.
Two Phase Locking Protocol is also known as 2PL protocol is a method of concurrency control in DBMS that
ensures serializability by applying a lock to the transaction data which blocks other transactions to access the same
data simultaneously. Two Phase Locking protocol helps to eliminate the concurrency problem in DBMS. Every 2PL
schedule is serializable.
Theorem: 2PL ensures/enforces conflict-serializable schedules,
but it does not enforce recoverable schedules.
2PL rule: once a transaction has released a lock, it is not allowed to obtain any other locks.
This locking protocol divides the execution of a transaction into three parts.
In the first part, when the transaction begins to execute, it requests permission for the locks it needs.
In the second part, the transaction acquires all the locks. When the transaction releases its first lock, the third
part starts.
In this third part, the transaction cannot demand any new locks; it only releases the acquired locks.
The Two-Phase Locking protocol allows each transaction to make a lock or unlock request Growing Phase and
Shrinking Phase.
The 2PL protocol indeed offers serializability. However, it does not ensure that deadlocks do not happen.
In the above-given diagram, you can see that local and global deadlock detectors are searching for deadlocks and
solving them by resuming transactions to their initial states.
To deal with conflicts in timestamp-based algorithms, some transactions involved in conflicts are made to wait,
and others are aborted.
Following are the main strategies of conflict resolution in timestamps:
Wait-die:
The older transaction waits for the younger if the younger has accessed the granule first.
The younger transaction is aborted (dies) and restarted if it tries to access a granule after an older concurrent
transaction.
Wound-wait:
The older transaction pre-empts the younger by suspending (wounding) it if the younger transaction tries to access
a granule after an older concurrent transaction.
A younger transaction will wait for an older one if the older has first accessed a granule that both want.
Timestamp Ordering:
Following are the three basic variants of timestamp-based methods of concurrency control:
1. Total timestamp ordering
2. Partial timestamp ordering
3. Multiversion timestamp ordering
Multi-version concurrency control
Multiversion Concurrency Control (MVCC) enables snapshot isolation. Snapshot isolation means that whenever a
transaction would take a read lock on a page, it makes a copy of the page instead, and then performs its
operations on that copied page. This frees other writers from blocking due to read lock held by other transactions.
Maintain multiple versions of objects, each with its timestamp. Allocate the correct version to reads. Multiversion
schemes keep old versions of data items to increase concurrency.
The main difference between MVCC and standard locking:
Read locks do not conflict with write locks ⇒ reading never blocks writing, and writing never blocks reading
Advantage of MVCC
The locking needed for serializability is considerably reduced, which increases performance and throughput
compared with 2PL. With MVCC, read operations never lead to a conflict. Under MVCC, the serializable isolation
level does not permit dirty reads, non-repeatable reads, or phantom reads.
Disadvantages of MVCC
visibility-check overhead (on every tuple read/write)
Validation-Based Protocols
The validation-based protocol in DBMS, also known as the optimistic concurrency control technique, is a method
for handling concurrency in transactions. In this protocol, local copies of the transaction data are updated rather
than the data itself, which results in less interference during the execution of the transaction.
Optimistic Methods of Concurrency Control:
The optimistic method of concurrency control is based on the assumption that conflicts in database operations are
rare and that it is better to let transactions run to completion and only check for conflicts before they commit.
The Validation based Protocol is performed in the following three phases:
Read Phase
Validation Phase
Write Phase
Read Phase
In the Read Phase, the data values from the database can be read by a transaction but the write operation or
updates are only applied to the local data copies, not the actual database.
Validation Phase
In the Validation Phase, the data is checked to ensure that there is no violation of serializability while applying the
transaction updates to the database.
Write Phase
In the Write Phase, the updates are applied to the database if the validation is successful, else; the updates are not
applied, and the transaction is rolled back.
Laws of concurrency control
1. First Law of Concurrency Control
Concurrent execution should not cause application programs to malfunction.
2. Second Law of Concurrency Control
Concurrent execution should not have lower throughput or much higher response times than serial
execution.
Lock Thrashing is the point where system performance(throughput) decreases with increasing load
(adding more active transactions). It happens due to the contention of locks. Transactions waste time on lock waits.
The default concurrency control mechanism depends on the table type
Disk-based tables (D-tables) are by default optimistic.
Main-memory tables (M-tables) are always pessimistic.
Pessimistic locking (Locking and timestamp) is useful if there are a lot of updates and relatively high chances
of users trying to update data at the same time.
Optimistic (Validation) locking is useful if the possibility for conflicts is very low – there are many records but
relatively few users, or very few updates and mostly read-type operations.
Optimistic concurrency control is based on the idea of conflict detection and transaction restart, while pessimistic
concurrency control uses locking as the basic serialization mechanism (it assumes that two or more users will want
to update the same record at the same time, and then prevents that possibility by locking the record, no matter
how unlikely conflicts are).
Properties
Optimistic locking is useful in stateless environments (such as mod_plsql and the like); not only useful but critical.
With optimistic locking, you read data out and only update it if it did not change in the meantime.
Optimistic locking only works well when developers modify the same object. A problem occurs when multiple
developers modify different objects on the same page at the same time: modifying one object may affect the
processing of the entire page, which the other developers may not be aware of.
With pessimistic locking, you lock the data as you read it out and then modify it.
Lock Granularity:
A database is represented as a collection of named data items. The size of the data item chosen as the unit of
protection by a concurrency control program is called granularity. Locking can take place at the following levels:
Database level.
Table level(Coarse-grain locking).
Page level.
Row (Tuple) level.
Attributes (fields) level.
Multiple Granularity
Let's start by understanding the meaning of granularity.
Granularity: It is the size of the data item allowed to lock.
It can be defined as hierarchically breaking up the database into blocks that can be locked.
The Multiple Granularity protocol enhances concurrency and reduces lock overhead.
It maintains the track of what to lock and how to lock.
It makes it easy to decide either to lock a data item or to unlock a data item. This type of hierarchy can be
graphically represented as a tree.
In our example:
– T1: reads the list of products
– T2: inserts a new product
– T1: re-reads: a new product appears!
Dealing With Phantoms
Lock the entire table, or
Lock the index entry for ‘blue’
– If the index is available
Or use predicate locks
– A lock on an arbitrary predicate
Dealing with phantoms is expensive
END
What is an “Algebra”?
Answer: A set of operands and operations that is "closed" under all compositions
What is the basis of Query Languages?
Answer: The two formal query languages that form the basis of "real" query languages (e.g., SQL) are:
1) Relational Algebra: operational; it provides a recipe for evaluating the query and is useful for representing
execution plans. It is a language based on operators and a domain of values; operators map values taken from the
domain into other domain values. Domain: the set of relations/tables.
2) Relational Calculus: Let users describe what they want, rather than how to compute it. (Nonoperational, Non-
Procedural, declarative.)
SQL is an abstraction of relational algebra. It makes using it much easier than writing a bunch of math. Effectively,
the parts of SQL that directly relate to relational algebra are:
SQL -> Relational Algebra
Select columns -> Projection
Select row -> Selection (Where Clause)
INNER JOIN -> Join (⋈)
CROSS JOIN -> Cartesian Product (×)
UNION -> Set Union (∪)
EXCEPT/MINUS -> Set Difference (−)
Inner Join Inner join includes only those tuples that satisfy the matching criteria.
Outer Join In an outer join, along with the tuples that satisfy the matching criteria, unmatched tuples are also included.
Left Outer Join (⟕) In the left outer join, the operation keeps all tuples of the left
relation.
Right Outer Join (⟖) In the right outer join, the operation keeps all tuples of the right
relation.
Full Outer Join (⟗) In a full outer join, all tuples from both relations are included in the result,
irrespective of the matching condition.
Select Operation (σ) The SELECT operation is used for selecting a subset of the tuples according to a given
selection condition (unary operator).
Notation: σp(r), where p is called the selection predicate
Project Operation
Projection (π) The projection eliminates all attributes of the input relation except those mentioned in the
projection list (unary operator). The projection operator has to eliminate duplicates!
Notation: πA1,..., Ak (r)
The result is defined as the relation of k columns obtained by deleting the columns that are not listed
Union Operation
Notation: r U s
Note: The semi join and the Bloom join are two data-fetching techniques used in distributed databases.
Relational Calculus
There is an alternate way of formulating queries known as relational calculus. Relational calculus is a non-procedural
query language: the user is not concerned with the details of how to obtain the results. The relational calculus
tells what to do but never explains how to do it. Most commercial relational languages are based on aspects of
relational calculus, including SQL, QBE, and QUEL.
It is based on predicate calculus, a name derived from a branch of symbolic logic. A predicate is a truth-valued
function with arguments.
Notations of RC
Differences in RA and RC
Objective – Relational algebra targets how to obtain the result; relational calculus targets what result to obtain.
Tuple relational calculus: the variables represent tuples from specified relations. A tuple is a single element of a
relation; in database terms, it is a row.
Notation: {T | P (T)} or {T | Condition (T)}
Example: {T | EMPLOYEE (T) AND T.DEPT_ID = 10}
Domain relational calculus: the variables represent values drawn from specified domains. A domain is equivalent
to a column's data type and any constraints on the values of the data.
Notation: {a1, a2, a3, ..., an | P (a1, a2, a3, ..., an)}
Example: { | < EMPLOYEE > DEPT_ID = 10 }
Examples of RC:
Query Block in RA
SQL, Relational Algebra, Tuple Calculus, and domain calculus examples: Comparisons
Select Operation
R = (A, B)
Relational Algebra: σB=17 (r)
Tuple Calculus: {t | t ∈ r ∧ t[B] = 17}
Domain Calculus: {<a, b> | <a, b> ∈ r ∧ b = 17}
Project Operation
R = (A, B)
Relational Algebra: ΠA(r)
Tuple Calculus: {t | ∃ p ∈ r (t[A] = p[A])}
Domain Calculus: {<a> | ∃ b ( <a, b> ∈ r )}
Combining Operations
R = (A, B)
Relational Algebra: ΠA(σB=17 (r))
Tuple Calculus: {t | ∃ p ∈ r (t[A] = p[A] ∧ p[B] = 17)}
Domain Calculus: {<a> | ∃ b ( <a, b> ∈ r ∧ b = 17)}
Natural Join
R = (A, B, C, D) S = (B, D, E)
Relational Algebra: r ⋈ s
Πr.A,r.B,r.C,r.D,s.E(σr.B=s.B ∧ r.D=s.D (r × s))
Tuple Calculus: {t | ∃ p ∈ r ∃ q ∈ s (t[A] = p[A] ∧ t[B] = p[B] ∧
t[C] = p[C] ∧ t[D] = p[D] ∧ t[E] = q[E] ∧
p[B] = q[B] ∧ p[D] = q[D])}
Domain Calculus: {<a, b, c, d, e> | <a, b, c, d> ∈ r ∧ <b, d, e> ∈ s}
i. Left-Deep Tree: A join B, B join C, C join D, D join E, etc…This is a query in which most tables are
sequentially joined one after another.
ii. Bushy Tree: A join B, A join C, B join D, C join E, etc…This is a query in which tables branch out into
multiple logical units within each branch of the tree.
What are the different optimizers that are used to optimize the database?
Answer: There are two types of optimizers:
Rule-Based Optimizer (RBO): If the referenced objects don’t maintain any internal statistics, RBO is used.
Cost-Based Optimizer (CBO): If the referenced objects maintain internal statistics, CBO will check all the possible
execution plans and select the one with the lowest cost.
The query processing works in the following way:
Parsing and Translation
As query processing includes certain activities for data retrieval.
select emp_name from Employee where salary>10000;
Thus, to make the system understand the user query, it needs to be translated in the form of relational algebra.
We can bring this query into relational algebra form as:
πemp_name (σsalary>10000 (Employee))
Equivalently, when the projection retains the salary attribute, the selection and projection can be applied in
either order, e.g., σsalary>10000 (πemp_name, salary (Employee)).
After translating the given query, we can execute each relational algebra operation by using different algorithms.
So, in this way, query processing begins its working.
Query processor
Query optimizer
Query executor
The parsing of a query is performed within the database using the Optimizer component. Taking all of these inputs
into consideration, the Optimizer decides the best possible way to execute the query. This information is stored
within the SGA in the Library Cache – a sub-pool within the Shared Pool.
The memory area within the Library Cache in which the information about a query’s processing is kept is called the
Cursor. Thus, if a reusable cursor is found within the library cache, it’s just a matter of picking it up and using it to
execute the statement. This is called Soft Parsing. If it’s not possible to find a reusable cursor or if the query has
never been executed before, query optimization is required. This is called Hard Parsing. The query processor thus
follows one of these two paths.
Hard parsing means that either the cursor was not found in the library cache or it was found but was invalidated for
some reason. For whatever reason, Hard Parsing would mean that work needs to be done by the optimizer to ensure
the most optimal execution plan for the query.
Before the process of finding the best plan starts, some preliminary tasks are completed. These tasks are
executed every time, even if the same query runs in the same session N times:
1. Syntax Check
2. Semantics Check
3. Hashing the query text and generating a hash key-value pair
There are various phases of query execution in the system. The query first travels from the client process to the
server process, into the SQL area of the PGA; then the following phases start:
1. Parsing (parse the query tree: syntax check, semantic check, shared pool check) – used for the soft parse
2. Transformation (binding) – the library cache and dictionary cache are checked for metadata, and then the
database buffer cache
3. Estimation / query optimization
Query Evaluation
5. Evaluate the SELECT clause. Discard columns that are not specified in the SELECT clause.
6. Perform any unions. Combine result tables as specified in the UNION clause. (In the case of SELECT FIRST n ...
UNION SELECT ..., the first n rows of the result from the union are chosen.)
7. Apply the ORDER BY clause. Sort the result rows as specified.
Steps to process a query: parsing, validation, resolution, optimization, plan compilation, execution.
Cost Estimation
The cost of a query evaluation plan is estimated in terms of various resources, including the number of
disk accesses and the execution time taken by the CPU to execute the query.
Query Optimization
Summary of steps of processing an SQL query:
Lexical analysis, parsing, validation, Query Optimizer, Query Code Generator, Runtime Database Processor
The term optimization here has the meaning “choose a reasonably efficient strategy” (not necessarily the best
strategy)
Query optimization: choosing a suitable strategy to execute a particular query more efficiently
An SQL query undergoes several stages: lexical analysis (scanning, LEX), parsing (YACC), validation
Scanning: identify SQL tokens
Parser: check the query syntax according to the SQL grammar
Validation: check that all attributes/relation names are valid in the particular database being queried
Then create the query tree or the query graph (these are internal representations of the query)
Main techniques to implement query optimization
Heuristic rules (to order the execution of operations in a query)
Computing cost estimates of different execution strategies
Optimizing Queries AND tuning the database query for best performance:
Always use a WHERE clause in SELECT queries when you do not need all rows returned. This helps narrow
the returned rows; otherwise the query performs a whole-table scan, wasting server resources and increasing
network bandwidth.
While running a query, the operators used with the WHERE clause directly affect performance. The operators
below are listed in decreasing order of performance:
1. =
2. >, >=, <, <=
3. LIKE
4. <>
Queries written with NOT IN perform poorly, because the optimizer has to use a nested table scan; such queries can
usually be rewritten with NOT EXISTS.
When using LIKE, it is better to have one or more leading literal characters in the pattern instead of starting with a
wildcard character, so that an index can be used.
Where there is a choice between IN and BETWEEN in a query, it is advisable to use BETWEEN for better
results.
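A hedged sketch of the NOT IN to NOT EXISTS rewrite (the EMP and DEPT tables are used only for illustration; note
that the two forms differ when the column can be NULL, so check the data before rewriting):
-- Slower form: the optimizer may resort to a nested table scan
SELECT e.empno FROM emp e
WHERE e.deptno NOT IN (SELECT d.deptno FROM dept d);
-- Usually faster: correlated existence check
SELECT e.empno FROM emp e
WHERE NOT EXISTS (SELECT 1 FROM dept d WHERE d.deptno = e.deptno);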
Avoid using the SUBSTRING/SUBSTR function in the WHERE clause of a query. SUBSTR returns a specific portion of a
string, whereas INSTR returns the numeric position at which a pattern is found in a string.
The queries having WHERE clause connected by AND operators are evaluated from left to right in the order they
are written.
Do not use the COUNT() aggregate in a subquery to do an existence check
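A hedged sketch of this existence-check rewrite (EMP and DEPT again hypothetical):
-- Avoid: counts every matching row just to test existence
SELECT d.deptno FROM dept d
WHERE (SELECT COUNT(*) FROM emp e WHERE e.deptno = d.deptno) > 0;
-- Prefer: stops scanning at the first matching row
SELECT d.deptno FROM dept d
WHERE EXISTS (SELECT 1 FROM emp e WHERE e.deptno = d.deptno);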
Try to avoid dynamic SQL. Unless really required, avoid dynamic SQL because:
It is hard to debug and troubleshoot.
When user input is fed into dynamic SQL, there is a possibility of SQL injection attacks; this is an application-level
security risk.
Clustered Index Scan: sometimes considered equivalent to a Table Scan. It happens when no non-clustered
index exists on an eligible column. Many times, creating a non-clustered index will let you
get rid of it.
Hash Join: the most expensive join method; it takes place when the joining columns between
two tables are not indexed. Creating indexes on those columns will let you get rid of it.
Nested Loops: in most cases, this happens when a non-clustered index does not include (cover) a column that is
used in the SELECT column list. In that case, for each row found via the non-clustered index, the database
server has to seek into the clustered index to retrieve the other columns specified in the SELECT list.
Creating a covering index will let you get rid of it.
RID Lookup: takes place when you have a non-clustered index but the same table has no clustered
index. The database engine has to look up the actual row using the row ID, which is an
expensive operation. Creating a clustered index on the corresponding table will let you get rid of it.
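Since these plan operators are SQL Server terms, here is a hedged T-SQL sketch of the fixes just described (table
and column names are hypothetical):
-- Covering index: INCLUDE the selected column so nested-loop key lookups disappear
CREATE NONCLUSTERED INDEX ix_orders_custid
ON dbo.Orders (CustomerID)
INCLUDE (OrderTotal);
-- Clustered index: removes RID lookups on a heap
CREATE CLUSTERED INDEX ix_orders_orderid
ON dbo.Orders (OrderID);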
Database tuning
END
File Organization
File Organization defines how file records are mapped onto disk blocks. We have four types of File Organization to
organize file records −
Sorted Files: Best if records must be retrieved in some order, or only a 'range' of records is needed.
Sequential File Organization
Records are stored in sequential order, based on the value of the search key of each record. Organizing records by an
index or key in this way is called sequential file organization, and it makes it much faster to find records based on
that key.
Hashing File Organization
A hash function is computed on some attribute of each record; the result specifies in which block of the file the
record is placed. Organizing records via trees or hashing on some key is called a hashing file
organization.
File Operations
Operations on database files can be broadly classified into two categories −
1. Update Operations
2. Retrieval Operations
Update operations change the data values by insertion, deletion, or update. Retrieval operations, on the other
hand, do not alter the data but retrieve them after optional conditional filtering. In both types of operations,
selection plays a significant role. Other than the creation and deletion of a file, there could be several operations,
which can be done on files.
Open − A file can be opened in one of the two modes, read mode or write mode. In read mode, the operating
system does not allow anyone to alter data. In other words, data is read-only. Files opened in reading mode can be
shared among several entities. Write mode allows data modification. Files opened in write mode can be read but
cannot be shared.
Locate − Every file has a file pointer, which tells the current position where the data is to be read or written. This
pointer can be adjusted accordingly. Using the find (seek) operation, it can be moved forward or backward.
Read − By default, when files are opened in reading mode, the file pointer points to the beginning of the file. There
are options where the user can tell the operating system where to locate the file pointer at the time of opening a
file. The very next data to the file pointer is read.
Write − Users can select to open a file in write mode, which enables them to edit its contents. It can be deletion,
insertion, or modification. The file pointer can be located at the time of opening or can be dynamically changed if
the operating system allows it to do so.
Close − This is the most important operation from the operating system’s point of view. When a request to close a
file is generated, the operating system removes all the locks (if in shared mode).
Tree-Structured Indexing
Indexing is a data structure technique to efficiently retrieve records from the database files based on some attributes
on which the indexing has been done. Indexing in database systems is like what we see in books.
Indexing is defined based on its indexing attributes.
An index is a small table having only two columns. The first column contains a copy of the primary or candidate
key of a table. Its second column contains a set of pointers holding the address of the disk block where that
specific key value is stored. Oracle automatically maintains and uses indexes: when any change is made to the
table data, Oracle automatically propagates it to the relevant indexes. You cannot update the index itself.
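A minimal sketch in Oracle syntax (the EMP table and ENAME column are illustrative):
SQL> create index emp_ename_idx on emp (ename);  -- B-tree index on one column
SQL> drop index emp_ename_idx;
From then on, Oracle maintains EMP_ENAME_IDX automatically as rows in EMP are inserted, updated, or deleted.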
4 Non-Clustering: non-clustering indexes are used to quickly find all records whose values in a certain
field satisfy some condition. A non-clustering index has an order different from the physical order of the
data; its search key specifies an order different from the sequential order of the file. Non-clustering indexes
are also called secondary indexes.
Dense Index
In a dense index, there is an index record for every search key value in the database. This makes searching faster
but requires more space to store index records themselves. Index records contain a search key value and a pointer
to the actual record on the disk.
Sparse Index
In a sparse index, index records are not created for every search key. An index record here contains a search key
and an actual pointer to the data on the disk. To search a record, we first proceed by index record and reach the
actual location of the data. If the data we are looking for is not where we directly reach by following the index,
then the system starts a sequential search until the desired data is found.
Multilevel Index
Index records comprise search-key values and data pointers. The multilevel index is stored on the disk along with
the actual database files. As the size of the database grows, so does the size of the indices. There is an immense
need to keep the index records in the main memory to speed up the search operations. If the single-level index is
used, then a large size index cannot be kept in memory which leads to multiple disk accesses.
A multi-level Index helps in breaking down the index into several smaller indices to make the outermost level so
small that it can be saved in a single disk block, which can easily be accommodated anywhere in the main memory.
B+ Tree
A B+ tree is a balanced multiway search tree that follows a multi-level index format. The leaf nodes of a B+ tree
hold the actual data pointers. A B+ tree ensures that all leaf nodes remain at the same height, thus balanced.
Additionally, the leaf nodes are connected in a linked list; therefore, a B+ tree can support random access as well as
sequential access.
Structure of B+ Tree
Every leaf node is at an equal distance from the root node. A B+ tree is of the order n where n is fixed for every
B+ tree.
Internal nodes −
Internal (non-leaf) nodes contain at least ⌈n/2⌉ pointers, except the root node.
At most, an internal node can contain n pointers.
Leaf nodes −
Leaf nodes contain at least ⌈n/2⌉ record pointers and ⌈n/2⌉ key values.
At most, a leaf node can contain n record pointers and n key values.
Every leaf node contains one block pointer P to point to the next leaf node and forms a linked list.
An index is an on-disk structure associated with a table or view that speeds the retrieval of rows from the table or
view. An index contains keys built from one or more columns in the table or view. Indexes are automatically created
when PRIMARY KEY and UNIQUE constraints are defined on table columns. An index on a file speeds up selections
on the search key fields for the index.
The index is a collection of buckets.
Bucket = primary page plus zero or more overflow pages. Buckets contain data entries.
Types of Indexes
1 Clustered Index (table records are sorted physically and stored in a particular order. The leaf node of a
clustered index holds the data pages)
2 Non-Clustered Index (Secondary index=> While in a non-clustered index, logical sorting happens which
does not match the physical order of the records. The non-clustered index holds the index rows.)
3 Column Store Index
4 Filtered Index (an index that has fewer rows than its table)
5 Hash-based Index
6 Dense Index (primary index)
7 sparse index (Primary Index)
8 b or b+ tree index
9 FK index
10 Outer and Inner Index
11 Secondary index
12 File Indexing – B+ Tree
13 Bitmap Indexing
14 Inverted Index
15 Forward Index
16 Function-based index
17 Spatial index
18 Bitmap Join Index
19 Composite index
20 Ordered index
21 Primary key index If the search key contains a primary key, then it is called a primary index.
22 Unique index: Search key contains a candidate key.
23 Multilevel index(A multilevel index considers the index file, which we will now refer to as the first (or
base) level of a multilevel index, as an ordered file with a distinct value for each K(i))
24 Inner index: The main index file for the data
25 Outer index: A sparse index on the index
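Hedged sketches of a few of the index types above, in Oracle syntax (all object names are hypothetical):
SQL> create unique index emp_id_uidx on emp (empno);        -- unique index
SQL> create index emp_dept_job_idx on emp (deptno, job);    -- composite index
SQL> create bitmap index emp_gender_bidx on emp (gender);   -- bitmap index
SQL> create index emp_uname_idx on emp (upper(ename));      -- function-based index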
An inverted index is an index data structure storing a mapping from content, such as words or numbers, to its
locations in a document or a set of documents. In simple words, it is a hashmap like data structure that directs you
from a word to a document or a web page.
There are two types of inverted indexes: A record-level inverted index contains a list of references to documents for
each word. A word-level inverted index additionally contains the positions of each word within a document. The
latter form offers more functionality, but needs more processing power and space to be created.
Hash Organization
Hashing uses hash functions with search keys as parameters to generate the address of a data record.
Bucket − A hash file stores data in bucket format. The bucket is considered a unit of storage. A bucket typically
stores one complete disk block, which in turn can store one or more records.
Hash Function − A hash function, h, is a mapping function that maps all the set of search keys K to the address
where actual records are placed. It is a function from search keys to bucket addresses.
Types of Hashing Techniques
There are mainly two types of hashing methods/techniques:
1 Static Hashing
2 Dynamic Hashing/Extendible hashing
Static Hashing
In static hashing, when a search-key value is provided, the hash function always computes the same address.
Static hashing is further divided into:
1. Open hashing
2. Closed hashing.
Dynamic Hashing or Extendible hashing
Dynamic hashing offers a mechanism in which data buckets are added and removed dynamically and on demand.
In this hashing, the hash function helps you to create a large number of values.
The problem with static hashing is that it does not expand or shrink dynamically as the size of the database grows
or shrinks. Dynamic hashing provides a mechanism in which data buckets are added and removed dynamically and
on-demand. Dynamic hashing is also known as extended hashing.
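The same idea of hashing a key to a bucket shows up in SQL DDL. A hedged Oracle sketch of a hash-partitioned
table (table and column names are hypothetical):
CREATE TABLE accounts (
  acct_id NUMBER,
  owner   VARCHAR2(40)
)
PARTITION BY HASH (acct_id) PARTITIONS 4;  -- the hash function maps each key to one of 4 buckets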
Key terms when dealing with hashing the records
Hashing function h(r) Mapping from the index’s search key to a bucket in which the (data entry for) record r
belongs.
What is Collision?
A hash collision is a state in which the resulting hashes of two or more data items in the data set map to the same
place in the hash table.
How to deal with Hashing Collision?
There are two techniques you can use to resolve a hash collision:
1. Rehashing: this method invokes a secondary hash function, which is applied repeatedly until an empty slot is
found, where the record is then placed.
2. Chaining: the chaining method builds a linked list of items whose keys hash to the same value. This method
requires an extra link field in each table position.
END
Duties of the DBA
A database administrator has some very precisely defined duties, which need to be performed very religiously. A
short account of these jobs is listed below:
1. Schema definition
2. Granting data access
3. Routine Maintenance
4. Backups Management
5. Monitoring jobs running
6. Installation and integration
7. Configuration and migration
8. Optimization and maintenance
9. Administration and customization
10. Upgradation and backup recovery
11. Database storage reorganization
12. Performance monitoring
13. Tablespace and Monitoring disk storage space
Roles Category
Normally Organization hires DBA in three roles:
1. L1=Junior/fresher dba, having 1–2-year exp.
2. L2=Intermediate dba, having 2+ to 4-year exp.
3. L3=Advanced/Expert dba, having 4+ to 6-year exp.
Component modules of a DBMS and their interactions.
ACTIVATE A ROLE
SCOTT> set role SHARIF identified by devdb;
TO DISABLING ALL ROLE
SCOTT> set role none;
GRANT A PRIVILEGE
SYS> grant create any table to SHARIF;
REVOKE A PRIVILEGE
SYS> revoke create any table from SHARIF;
SET ALL ROLES ASSIGNED TO scott AS DEFAULT
SYS> alter user scott default role all;
SYS> alter user scott default role SHARIF;
User altered.
SHAM> grant all on EMP to SCOTT;
Grant succeeded.
SHAM> grant references on EMP to SCOTT;
Grant succeeded.
SQL> revoke all on suppliers from public;
SHAM> revoke all on EMP from SCOTT;
SHAM> revoke references on EMP from SCOTT CASCADE CONSTRAINTS;
Revoke succeeded.
SHAM> grant select on EMP to PUBLIC;
SYS> grant create session to PUBLIC;
Grant succeeded.
Note: If a privilege has been granted to PUBLIC, all users in the database can use it.
Note: Public acts like a ROLE, sometimes acts like a USER.
Note: Is there a DROP TABLE privilege in Oracle? No, DROP TABLE is not a privilege: users can always drop tables in
their own schema, and dropping tables in other schemas requires the DROP ANY TABLE system privilege.
What is a Privilege
A privilege is a special right or permission. Privileges are granted to perform operations in a database.
Example of a privilege: the CREATE SESSION privilege allows a user to connect to the Oracle database.
The syntax for revoking privileges on a table in Oracle is:
REVOKE privileges ON object FROM user;
Privileges can be assigned to a user or a role. Privileges are given to users with GRANT command and taken away
with REVOKE command.
There are two distinct types of privileges.
1. SYSTEM PRIVILEGES (Granted by DBA like ALTER DATABASE, ALTER SESSION, ALTER SYSTEM, CREATE USER)
2. SCHEMA OBJECT PRIVILEGES.
SYSTEM privileges are NOT directly related to any specific object or schema.
Two types of users can GRANT and REVOKE system privileges to others:
Users who have been granted a specific system privilege WITH ADMIN OPTION.
Users who have been granted the GRANT ANY PRIVILEGE privilege.
You can GRANT and REVOKE system privileges to the users and roles.
Powerful system privileges and roles: DBA, SYSDBA, SYSOPER (roles or privileges); SYS and SYSTEM (administrative users; SYSTEM is also the name of a tablespace)
Alter session and alter database statements
You enable and disable the recycle bin by changing the recyclebin initialization
parameter. This parameter is not dynamic, so a database restart is required when you
change it with an ALTER SYSTEM statement.
To enable the recycle bin:
1. Issue one of the following statements:
ALTER SESSION SET recyclebin = ON;
ALTER SYSTEM SET recyclebin = ON SCOPE = SPFILE;
If you used ALTER SYSTEM, restart the database.
To disable the recycle bin:
1. Issue one of the following statements:
ALTER SESSION SET recyclebin = OFF;
ALTER SYSTEM SET recyclebin = OFF SCOPE = SPFILE;
If you used ALTER SYSTEM, restart the database.
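With the recycle bin enabled, a dropped table can be brought back. A minimal sketch (the table name EMP_TEST is
hypothetical):
SQL> drop table EMP_TEST;
SQL> flashback table EMP_TEST to before drop;
SQL> purge recyclebin;  -- permanently removes everything in the current user's recycle bin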
Alter session/alter database/alter systems statements and their differences
alter session affects the current session only, alter system affects the entire system.
Use the ALTER SESSION statement to specify or modify any of the conditions or parameters that affect your connection to the
database. The statement stays in effect until you disconnect from the database.
The alter database and alter system commands are very similar, but there are some subtle differences between when you use
alter system and alter database:
- Use the "alter database" statement to modify, maintain, or recover an existing database.
- Use the "alter stsrem" statement to dynamically alter your Oracle Database instance. The settings stay in - effect as long as the
database is mounted unless the "scope" command is used.
- "alter database" is already possible in MOUNT status whereas "alter system" usually requires the database to be in OPEN status.
- As a general rule, "alter system" is an instance command, used to alter parameters (e.g. alter system set db_cache_size=xxx) and
alter processes. "alter database", conversely, is used to change the database structure (e.g. alter database
backup controlfile to trace).
alter database commands cannot be audited, whereas alter system commands can be audited.
alter database changes may need the database bounced, but alter system does not.
An "alter system" command is (mostly) only possible in status OPEN. The only exception from this I recall presently is
"alter system set param=value" to modify initialization parameters. That is already possible in NOMOUNT status.
"alter database" on the other hand is already possible in status MOUNT, where tablespaces including the system tablespace that
contains audit information is not accessable. That is why "alter database" cannot be audited probably.
As a rule of thumb: You can do "alter database" in MOUNT already, but "alter system" only in status OPEN.
Mostly you would see the alter database command used when a structural modification is needed in the database,
for example, adding a redo log member, removing a member, or taking a datafile offline/online, all related to physical
structures. This command is generally used when the system is not open, for example, putting the database into archivelog
or flashback mode, or cancelling a backup.
Alter system is basically similar, but many times it is for logical structure changes, for example, growing or shrinking a memory
parameter. This command, most of the time, does require that the system be open and running.
Use the ALTER DATABASE statement to modify, maintain, or recover an existing database.
Use the ALTER SYSTEM statement to dynamically alter your Oracle Database instance. The settings stay in effect as long as the
database is mounted.
SONY can access the SHAM.EMP table because the SELECT privilege was given to PUBLIC, so SHAM.EMP is available
to every user of the database. SONY has created a view EMP_VIEW based on SHAM.EMP.
Note: If you revoke an object privilege from a user, that privilege is also revoked from anyone that user had granted it to.
Note: If you grant the RESOURCE role to a user, this privilege overrides all explicit tablespace quotas. The UNLIMITED
TABLESPACE system privilege lets the user allocate as much space as needed in any tablespace that makes up the database.
Database account locks and unlock
ALTER USER admin IDENTIFIED BY admin ACCOUNT LOCK;
SELECT u.username FROM all_users u WHERE u.username LIKE 'INFO%';
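A minimal lock/unlock round trip with a status check in between (the ADMIN account is illustrative; querying
DBA_USERS requires DBA privileges):
SQL> alter user admin account lock;
SQL> select username, account_status from dba_users where username = 'ADMIN';
SQL> alter user admin account unlock;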
Database security and non-database security
Contingency plan: it is good to have one, though you hope never to use it. It is the disaster recovery plan for when
disaster control and management have failed. It gives a data backup plan, recovery plan, emergency-mode operation plan,
business impact analysis, incident response plan, and business continuity plan. It is also called Plan B.
Example: work from home is an alternative in resource planning, whereas planning for a pandemic is contingency planning.
Four-Step Planning Process for Your Business Continuity Plan
1 Threat Assessment
2 Business Critical Impact Analysis
3 Prevention and Mitigation Strategies
4 Testing, Practice and Continuous Improvement
The Contingency plan should address the following issues:
1. Operation Risk assessment
2. Contingency planning
3. Software errors outside of normal working hours
Example:
The National Institute of Standards and Technology (NIST) standard for IT disaster recovery planning includes
contingency in its title.
A popular IT contingency plan model is defined in NIST SP 800-34 Rev. 1 (2010), "Contingency Planning Guide for
Federal Information Systems."
It includes the following four steps:
Contingency planning policy statement. This policy provides the outline and authorization to develop a contingency
plan.
Business impact analysis. BIA identifies and prioritizes the systems that are important to an organization's business
functions.
Preventive controls. Proactive measures that prevent system outages and disruptions can ensure system availability
and reduce costs related to contingency measures and lifecycle.
Contingency strategies. Thorough recovery strategies ensure that a system may be recovered fast and completely
after a disruption.
Business continuity plan / contingency plan / Plan B
“A continuity plan is in place to respond to threats to data security, including significant data breaches or near
misses, and it is tested once a year as a minimum, with a report to senior management.”
A business continuity policy is the set of standards and guidelines an organization enforces to ensure resilience and
proper risk management. Business continuity policies vary by organization and industry and require periodic
updates as technologies evolve and business risks change.
There are 9 Policies To Reduce IT Security And Compliance Risks
1. Acceptable Use Policy (AUP)
2. Information Security
3. Security Awareness
4. Remote Access
5. Business Continuity
6. Change Management
7. Data Backup, Retention, And Disposal Policy
8. Incident Response
9. Bring Your Own Device Policy
Standards for Security
The ISO/IEC 27001 family of standards, also known as the ISO 27000 series, is a series of best practices to help
organisations improve their information security.
Published by ISO (the International Organization for Standardization) and the IEC (International Electrotechnical
Commission), the series explains how to implement best-practice information security practices.
It does this by setting out ISMS (information security management system) requirements.
An ISMS is a systematic approach to risk management, containing measures that address the three pillars of
information security: people, processes and technology.
The series consists of 46 individual standards, including ISO 27000, which provides an introduction to the family as
well as clarifying key terms and definitions.
That’s why organisations are increasingly investing heavily in their defences, using ISO 27001 as a guideline for
effective security.
Discover our bestselling standards:
ISO/IEC 27001:2013 and ISO/IEC 27002:2013 Information technology – Security techniques – ISO 27001 and ISO
27002 standards bundle
ISO/IEC 27017:2015 (ISO 27017) Information technology – Security techniques – Code of practice for information
security controls based on ISO/IEC 27002 for cloud services
ISO/IEC 27031:2011 (ISO 27031) Information technology – Security techniques – Guidelines for information and
communication technology readiness for business continuity
ISO/IEC 27000:2018 (ISO 27000) Information technology – Security techniques – Information security management
systems – Overview and vocabulary
A business continuity plan (BCP) is concerned with how you keep the organisation running, including by relocating
and reshaping services.
Continuous Data Protection (CDP) provides high availability and data repair.
Vertical Partitioning => different columns of a table at different sites (requires joins across partitions; queries are
more complex)
Horizontal Partitioning => different rows of a table at different sites (requires unions across partitions; queries are easier)
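A hedged sketch of horizontal partitioning in Oracle syntax (table, column, and partition names are hypothetical);
vertical partitioning has no single DDL keyword and is usually modelled as two tables sharing the same primary key,
reassembled with a join:
CREATE TABLE sales (
  sale_id NUMBER,
  region  VARCHAR2(10),
  amount  NUMBER
)
PARTITION BY LIST (region) (
  PARTITION p_east VALUES ('EAST'),  -- rows kept at the eastern site
  PARTITION p_west VALUES ('WEST')   -- rows kept at the western site
);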
Synchronous Replication
In a synchronous replication, the receiving system acknowledges every single change received from the sending
system. Adopting this method requires maintenance of a “hot” backup site, and it is most effective in combination
with “hot” failover solutions and Global Server Load Balancing (GSLB) solutions. We will refer to replication with
this semantics as synchronous replication; before an update transaction commits, it synchronizes all copies of
modified data. There are two basic techniques for ensuring that transactions see the same value regardless of
which copy of an object they access. In the first technique, called voting, a transaction must write a majority of
copies in order to modify an object and read at least enough copies to make sure that one of the copies is current
(for example, with 10 copies, a writer updates at least 6 and a reader reads at least 5, so every read overlaps the
latest write).
In the second technique, called read-any write-all, to read an object, a transaction can read any one copy, but to
write an object, it must write all copies. Reads are fast, especially if we have a local copy, but writes are slower,
relative to the first technique. This technique is attractive when reads are much more frequent than writes, and it
is usually adopted for implementing synchronous replication.
Client-side load balancing is defined in your client connection definition (tnsnames.ora file, for example) by setting
the parameter LOAD_BALANCE=ON.
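A minimal tnsnames.ora sketch with client-side load balancing turned on (host names and service name are
hypothetical):
SALESDB =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (LOAD_BALANCE = ON)
      (ADDRESS = (PROTOCOL = TCP)(HOST = node1)(PORT = 1521))
      (ADDRESS = (PROTOCOL = TCP)(HOST = node2)(PORT = 1521))
    )
    (CONNECT_DATA = (SERVICE_NAME = salesdb))
  )
With LOAD_BALANCE=ON, the client picks an address from the list at random, spreading new connections across the
listeners.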
Semi-Synchronous Replication
The receiving system sends an acknowledgement only after a series of changes have been received. This method
of synchronization is parallel to the “warm” failover approach and may be the right choice for services that — in
the event of a disaster — can allow for some loss of data and a reasonable amount of downtime.
Asynchronous Replication
This method’s data replication is faster but less secure, as the sending system simply continues to send data,
without receiving any response. Parallel to the “cold” failover approach, this method is best suited for static
resources or scenarios in which data loss is acceptable.
Using Replication’s horizontal and vertical partitioning capabilities to manage publications in a distributed
database environment. An alternative approach to replication, called asynchronous replication, has come to be
widely used in commercial distributed DBMSs. Copies of a modified relation are updated only periodically in this
approach, and a transaction that reads different copies of the same relation may see different values.
SQL Server has three types of replication: Snapshot, Merge, and Transaction. Snapshot replication creates a
snapshot of the data (point-in-time picture of the data)
Horizontal Scaling
“Scaling out”, or horizontal scaling, is the practice of adding more instances or servers, spreading the database across
more machines to deal with low capacity or increased demand.
Vertical Scaling
Vertical scaling, or “scaling up”, involves adding more resources to a smaller number of server instances - the
opposite approach to a horizontal system.
Horizontal = a predicate was applied to replicate only SOME rows.
Database link:
END
Inmon’s approach – designing centralized storage first and then creating data marts from the summarized data
warehouse data and metadata.
Grain
Declaring the grain is the pivotal step in a dimensional design. The grain establishes exactly what a single fact table
row represents. The grain declaration becomes a binding contract on the design. The grain must be declared
before choosing dimensions or facts because every candidate dimension or fact must be consistent with the grain.
Atomic grain refers to the lowest level at which data is captured by a given business process. We strongly
encourage you to start by focusing on atomic-grained data because it withstands the assault of unpredictable user
queries; rolled-up summary grains are important for performance tuning, but they pre-suppose the business’s
common questions. Each proposed fact table grain results in a separate physical table; different grains must not be
mixed in the same fact table.
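A hedged sketch of a fact table whose declared grain is one row per order line (all names are hypothetical):
CREATE TABLE sales_fact (
  date_key    NUMBER NOT NULL,  -- foreign key to the date dimension
  product_key NUMBER NOT NULL,  -- foreign key to the product dimension
  store_key   NUMBER NOT NULL,  -- foreign key to the store dimension
  order_id    NUMBER,           -- degenerate dimension
  quantity    NUMBER,           -- additive fact, consistent with the grain
  amount      NUMBER            -- additive fact, consistent with the grain
);
Every candidate fact or dimension is checked against the grain: anything true of one order line may go in; anything
at a coarser grain belongs in a separate fact table.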
Type is normalized.
Focuses on data reorganization using relational database management systems (RDBMS).
Holds simple relational data between a core data repository and data marts, or subject-oriented databases.
Ad-hoc SQL queries needed to access data are simple.
Kimball’s approach – creating data marts first and then developing a data warehouse database incrementally from
independent data marts.
Type is Denormalized.
Focuses on infrastructure functionality using multidimensional database management systems (MDBMS) like star
schema or snowflake schema
On the one hand there is data that can be measured, e.g. costs, temperatures, speeds. Such data, in DW
terminology called facts, has no business value if it is not used in the context of time intervals and/or other
describing attributes. On the other hand, only the describing data, in DW terminology called dimensions,
gives meaning to the facts.
“A data warehouse is a relational database that is designed for query and analysis rather than for transaction
processing.”
Data Mart
A data mart(s) can be created from an existing data warehouse—the top-down approach—or other sources, such as
internal operational systems or external data. Similar to a data warehouse, it is a relational database that stores
transactional data (time value, numerical order, reference to one or more objects) in columns and rows making it
easy to organize and access.
Data marts and data warehouses are both highly structured repositories where data is stored and managed until it
is needed. Data marts are designed for a specific line of business, whereas a DWH is designed for enterprise-wide
use. A data mart is typically smaller than 100 GB, while a DWH is typically larger than 100 GB; a data mart covers a
single subject, but a DWH is a multiple-subject repository. Data marts may be either independent or dependent data marts.
Data mart contains a subset of organization-wide data. This subset of data is valuable to specific groups of an
organization.
Types of facts/measures:
Additive: measures that can be added across all dimensions.
Semi-Additive: in this type of fact, measures may be added across some dimensions and not across others.
Non-Additive: measures that cannot be added across any dimension. (A fact table stores some basic units of
measurement of a business process; real-world examples include sales, phone calls, and orders.)
Types of dimensions:
Conformed Dimensions: a conformed dimension means the same thing in relation to every fact table it joins to; it is
used in more than one star schema or data mart.
Outrigger Dimensions: a dimension may have a reference to another dimension table; these secondary dimensions
are called outrigger dimensions. This kind of dimension should be used carefully.
Shrunken Rollup Dimensions: a subdivision of the rows and columns of a base dimension. These kinds of dimensions
are useful for developing aggregated fact tables.
Dimension-to-Dimension Table Joins: dimensions may have references to other dimensions; however, these
relationships can be modeled with outrigger dimensions.
Role-Playing Dimensions: a single physical dimension referenced multiple times in a fact table, with each reference
linking to a logically distinct role for the dimension.
Junk Dimensions: a collection of random transactional codes, flags, or text attributes that may not logically belong
to any specific dimension.
Degenerate Dimensions: a degenerate dimension is one without a corresponding dimension table. It is used in
transaction and collecting-snapshot fact tables; it does not have its own dimension table because it is derived from
the fact table.
Swappable Dimensions: used when the same fact table is paired with different versions of the same dimension.
Step Dimensions: sequential processes, like web page events, usually have a separate row in a fact table for every
step in a process; a step dimension tells where the specific step should be used in the overall session.
Although most measurement events capture numerical results, it is possible that the event merely records a set of
dimensional entities coming together at a moment in time.
Data Capture
Data capture is an advanced extraction process. It enables the extraction of data from documents, converting it
into machine-readable data. This process is used to collect important organizational information when the source
systems are in the form of paper/electronic documents (receipts, emails, contacts, etc.)
OLAP Model and Its types
Online Analytical Processing (OLAP) is a tool that enables users to perform data analysis from various database
systems simultaneously. Users can use this tool to extract, query, and retrieve data. OLAP enables users to analyze
the collected data from diverse points of view.
Data analysis techniques:
Characteristics of OLAP
The FASMI characteristics of OLAP methods, a term derived from the first letters of the characteristics, are:
Fast
It means the system is targeted to deliver most responses to the client within about five seconds, with the
simplest analyses taking no more than one second and very few taking more than 20 seconds.
Analysis
It means the system can cope with any business logic and statistical analysis that is relevant for the application and
the user, while keeping it easy enough for the target user. Although some pre-programming may be needed, the
user must be able to define new ad hoc calculations as part of the analysis and to report on the data in any desired
way, without having to program. This excludes products (like Oracle Discoverer) that do not allow adequate
end-user-oriented calculation flexibility.
Share
It means the system implements all the security requirements for confidentiality and, if multiple write access
is needed, concurrent update locking at an appropriate level. Not all applications need users to write data
back, but for the increasing number that do, the system should be able to handle multiple updates in a timely,
secure manner.
Multidimensional
This is the basic requirement. OLAP system must provide a multidimensional conceptual view of the data, including
full support for hierarchies, as this is certainly the most logical method to analyze businesses and organizations.
OLAP Operations
Since OLAP servers are based on a multidimensional view of data, we will discuss OLAP operations in
multidimensional data. Its operations are the same as data warehouse operations.
Roll-up
Roll-up performs aggregation on a data cube in any of the following ways −
By climbing up a concept hierarchy for a dimension
By dimension reduction
The following diagram illustrates how roll-up works.
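In SQL terms, a roll-up is an aggregation up a hierarchy. A hedged sketch (table and column names hypothetical):
SELECT year, quarter, SUM(amount) AS total_sales
FROM sales_fact
GROUP BY ROLLUP (year, quarter);
-- returns totals per (year, quarter), per year, and a grand total:
-- climbing the hierarchy quarter -> year -> all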
Drill-down
Drill-down is performed by stepping down a concept hierarchy for the dimension time.
Initially, the concept hierarchy was "day < month < quarter < year."
On drilling down, the time dimension descended from the level of the quarter to the level of the month.
When drill-down is performed, one or more dimensions from the data cube are added.
It navigates the data from less detailed data to highly detailed data.
Slice
The slice operation selects one particular dimension from a given cube and provides a new sub-cube. Consider the
following diagram that shows how a slice works.
Here Slice is performed for the dimension "time" using the criterion time = "Q1".
It will form a new sub-cube by selecting one or more dimensions.
Dice
Dice selects two or more dimensions from a given cube and provides a new sub-cube. Consider the following
diagram that shows the dice operation.
The dice operation on the cube based on the following selection criteria involves three dimensions.
(location = "Toronto" or "Vancouver")
(time = "Q1" or "Q2")
(item =" Mobile" or "Modem")
Pivot
The pivot operation is also known as rotation. It rotates the data axes in view to provide an alternative
presentation of data. Consider the following diagram that shows the pivot operation.
There are many techniques used by data mining technology to make sense of your business data. Here are a few of
the most common:
Association rule learning:
Also known as market basket analysis, association rule learning looks for interesting relationships between variables
in a dataset that might not be immediately apparent, such as determining which products are typically purchased
together. This can be incredibly valuable for long-term planning.
Classification: This technique sorts items in a dataset into different target categories or classes based on common
features. This allows the algorithm to neatly categorize even complex data cases.
Clustering:
This approach groups similar data in a cluster. The outliers may be undetected or they will fall outside the clusters.
To help users understand the natural groupings or structure within the data, you can apply the process of partitioning
a dataset into a set of meaningful sub-classes called clusters. This process looks at all the objects in the dataset and
groups them together based on similarity to each other, rather than on predetermined features.
Modeling is what people often think of when they think of data mining. Modeling is the process of taking some data
(usually) and building a model that reflects that data. Usually, the aim is to address a specific problem through
modeling the world in some way and from the model develop a better understanding of the world.
Clustering is an integral part of grid infrastructure and focuses on a specific objective.
While grid, which may or may not consist of multiple clusters, possesses a wider framework that enables sharing of
storage systems, data resources, and remaining others across different geographical locations.
A cluster will have single ownership but the grid can have multiple ownership based on the number of clusters it
holds.
Decision tree: Another method for categorizing data is the decision tree. This method asks a series of cascading
questions to sort items in the dataset into relevant classes.
Regression: This technique is used to predict a range of numeric values, such as sales, temperatures, or stock prices,
based on a particular data set.
Here data can be made smooth by fitting it to a regression function. The regression used may be linear (having one
independent variable) or multiple (having multiple independent variables).
Regression is a technique that conforms data values to a function. Linear regression involves finding the “best” line
to fit two attributes (or variables) so that one attribute can be used to predict the other.
Outlier detection:
This type of data mining technique refers to the observation of data items in the dataset that do not match an
expected pattern or expected behavior. It can be used in a variety of domains, such as intrusion detection,
fraud detection, and fault detection. Outlier detection is also called outlier analysis or outlier mining.
Sequential Patterns:
This data mining technique helps to discover or identify similar patterns or trends in transaction data for a certain
period.
Prediction:
Here past patterns in the data are used to predict the most likely future values or events.
Data integration: integration of multiple databases, data cubes, or files.
Information Retrieval (IR) can be defined as a software program that deals with the organization, storage,
retrieval, and evaluation of information from document repositories, particularly textual information.
An Information Retrieval (IR) model selects and ranks the document that is required by the user or the user has
asked for in the form of a query.
Information retrieval deals with the organization, storage, retrieval, and evaluation of information from document
repositories, particularly textual information; small errors in its results are likely to go unnoticed.
Data retrieval deals with obtaining data from a database management system such as an ODBMS; it is a process of
identifying and retrieving the data from the database, based on the query provided by the user or application, and
a single erroneous object means total failure.
END
The BPM lifecycle is considered to have five stages: design, model, execute, monitor, and optimize; process
reengineering is sometimes added as a further stage.
The difference between BPM and BPMS: BPM is a discipline that uses various methods to discover, model,
analyze, measure, improve, and optimize business processes.
BPM is a method, technique, or way of being/doing and BPMS is a collection of technologies to help build software
systems or applications to automate processes.
BPMS is a software tool used to improve an organization’s business processes through the definition, automation,
and analysis of business processes. It also acts as a valuable automation tool for businesses to generate a competitive
advantage through cost reduction, process excellence, and continuous process improvement. As BPM is a discipline
used by organizations to identify, document, and improve their business processes; BPMS is used to enable aspects
of BPM.
BPMN Task
A logical unit of work that is carried out as a single whole
Resource
A person or a machine that can perform specific tasks
Activity
The performance of a task by a resource
Case
A sequence of activities performed to achieve some goal, an order, an insurance claim, a car assembly
Work item
The combination of a case and a task that is just to be carried out
Process
Describes how a particular category of cases shall be managed
Control flow construct ->sequence, selection, iteration, parallelisation
BPMN concepts
Events
Things that happen instantaneously (e.g. the arrival of an invoice)
Activities
Units of work that have a duration (e.g. an activity to process an invoice)
Processes, events, and activities are logically related
Sequence
The most elementary form of relation is Sequence, which implies that one event or activity A is followed by
another event or activity B.
Start event
Circles used with a thin border
End event
Circles used with a thick border
Label
Give a name or label to each activity and event
Token
Once a process instance has been spawned/born, we use a token to identify the progress (or state) of that
instance.
Gateway
There is a gating mechanism that either allows or disallows the passage of tokens through the gateway
Split gateway
A point where the process flow diverges
Have one incoming sequence flow and multiple outgoing sequence flows (representing the branches that diverge)
Join gateway
A point where the process flow converges
Mutually exclusive
Only one of them can be true every time the XOR split is reached by a token
Exclusive (XOR) split
To model the relation between two or more alternative activities, like in the case of the approval or rejection of a
claim.
Exclusive (XOR) join
To merge two or more alternative branches that may have previously been forked with an XOR-split
Indicated with an empty diamond or empty diamond marked with an “X”
Naming/Label Conventions in BPMN:
The label will begin with a verb followed by a noun.
The noun may be preceded by an adjective
The verb may be followed by a complement to explain how the action is being done.
The flow of a process with Big Database
Model the depicted BPMN process by combining the specific elements as shown below.
Also observe the following hints:
All BPMN elements can be accessed via the toolbar on the left. Click the needed element and position it on the
modelling area. After placing an element, you can access the different types by clicking the wrench icon displayed
next to it after selection. In this example you need User, Service and Script tasks. Apart from that, this flow
contains Start, Boundary and End Events, Exclusive Gateways and Swim Lanes (modelled by using the Create
Pool/Participant function on the toolbar).
The shown Timer Boundary Event can be modelled by dragging a boundary event to the border of a task and
changing its type afterwards. Set the Timer Definition Type to Duration and enter PT30S for the Timer Definition to
define a 30 second duration using the ISO 8601 syntax. The source of the depicted arrow pointing to the Send
Reminder task must be set to that Timer Boundary Event.
The Text Annotation as shown at the Review for payment task can be used to give extensive information about
elements in the process.
The Pizza Collaboration Example
END
Disk Array: Arrangement of several disks that gives abstraction of a single, large disk.
RAID techniques:
What are the two different types of I/O slaves that RMAN supports?
Ans. Disk I/O slaves and tape I/O slaves.
What are the two types of channels used in the RMAN process?
Ans. Disk channels and tape channels.
Key to lower I/O cost: reduce seek/rotation delays! Hardware vs. software solutions?
1. Access time: the time from when a read/write request is issued until the data transfer begins (seek time plus
rotational latency)
2. Data-transfer rate: the rate at which data can be retrieved from or stored on disk (e.g., 25-100 MB/s)
3. Mean time to failure (MTTF): the average time the disk is expected to run continuously without any failure
A block is also called a physical record on hard drives and floppies. Any data transferred between the hard disk and
the RAM is usually sent in blocks. The default NTFS block size is 4096 bytes.
Records that have no fixed size, depending on the data types of the columns, are called variable-length records and
have a complex structure; fixed-length records have an inflexible structure in memory.
Each block/page consists of some records: 4 tuples fit in one block if the block size is 2 KB, and 30 tuples fit in one
block if the block size is 8 KB.
A block is the smallest unit of logical memory, used to read a file or write data to a file, and it stores table rows and
records logically in its segments; a page is a physical memory unit that stores data physically in a disk file. A page is
loaded into the processor from the main memory; pages manage data that is stored in RAM, and pages can be seen
as virtual blocks. A disk can read/write a page faster, and processing with pages is easier/faster than with blocks;
the OS prefers pages to blocks, but both are storage units.
A hard disk platter has many concentric circles on it, called tracks. Every track is further divided into sectors.
If I insert a new row/record, it goes into an existing block/page if that block/page has space; otherwise, it is assigned
a new block within the file.
Block Diagram depicting paging. Page Map Table(PMT) contains pages from page number 0 to 7
Pinned block: Memory block that is not allowed to be written back to disk.
Toss immediate strategy: Frees the space occupied by a block as soon as the final tuple of that block has been
processed
Example: say we have an employee table with columns such as email, name, and CNIC: Empid = 12 bytes, name = 59
bytes, CNIC = 15 bytes, and so on, with all of the employee table's columns totalling 230 bytes. That means each row
in the employee table takes 230 bytes, so a 2 KB block can store about 8 such rows (2048 / 230 ≈ 8). As another
example, say your hard drive has a block size of 4K and you have a 4.5K file: it requires 8K to store on the hard drive
(2 whole blocks), but only 4.5K on a floppy (9 floppy-size 512-byte blocks).
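The arithmetic can be checked from SQL; a hedged Oracle sketch (querying V$PARAMETER requires appropriate
privileges):
SQL> select value from v$parameter where name = 'db_block_size';  -- actual block size in bytes
SQL> select floor(2048 / 230) as rows_per_2k_block from dual;     -- about 8 rows of 230 bytes per 2 KB block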
Architecture: The buffer manager stages pages from external storage to the main memory buffer pool. File and
index layers make calls to the buffer manager.
What is the steal approach in DBMS? What are the Buffer Manager Policies/Roles? Data
storage on disk?
Note: Buffer manager moves pages between the main memory buffer pool (volatile memory) from the external
storage disk (in non-volatile storage). When execution starts, the file and index layer make the call to the buffer
manager.
The steal approach is used when the buffer manager replaces an existing page in the cache that has been updated
by a not-yet-committed transaction with another page requested by another transaction.
No-force. The force rule means that REDO will never be needed during recovery, since any committed transaction
will have all its updates on disk before it is committed.
The deferred update (NO-UNDO) recovery scheme uses a no-steal approach. However, typical database systems employ
a steal/no-force strategy. The advantage of steal is that it avoids the need for very large buffer space.
Steal/No-Steal
Similarly, it would be easy to ensure atomicity with a no-steal policy. The no-steal policy states
that pages cannot be evicted from memory (and thus written to disk) until the transaction commits.
Need support for undo: removing the effects of an uncommitted transaction on the disk
Force/No Force
Durability can be a very simple property to ensure if we use a force policy. The force policy states
when a transaction executes, force all modified data pages to disk before the transaction commits.
Data on a hard disk is stored in microscopic areas called magnetic domains on the magnetic material. Each domain
stores either 1 or 0 values.
When the computer is switched off, then the head is lifted to a safe zone normally termed a safe parking zone to
prevent the head from scratching against the data zone on a platter when the air bearing subsides. This process is
called parking. The basic difference between the magnetic tape and magnetic disk is that magnetic tape is used for
backups whereas, the magnetic disk is used as secondary storage.
Memory allocation
Logical address space and Physical address space
Static and dynamic loading
Static and dynamic linking
To perform a linking task a linker is used. A linker is a program that takes one or more object files generated by a
compiler and combines them into a single executable file.
Static linking: In static linking, the linker combines all necessary program modules into a single executable
program. So there is no runtime dependency. Some operating systems support only static linking, in which system
language libraries are treated like any other object module.
Dynamic linking: The basic concept of dynamic linking is similar to dynamic loading. In dynamic linking, “Stub” is
included for each appropriate library routine reference. A stub is a small piece of code. When the stub is executed,
it checks whether the needed routine is already in memory or not. If not available then the program loads the
routine into memory.
Memory Allocation
First Fit
Best Fit
Worst Fit
Space Allocation
In the Operating system, files are always allocated disk spaces.
Three types of space allocation methods are:
1. Linked Allocation
2. Indexed Allocation
3. Contiguous Allocation
There are various methods which can be used to allocate disk space to the files. Selection of an appropriate
allocation method will significantly affect the performance and efficiency of the system. Allocation method
provides a way in which the disk will be utilized and the files will be accessed.
Contiguous Allocation
In this method,
Every file uses a contiguous address space on memory.
Here, the OS assigns disk addresses in linear order.
In the contiguous allocation method, external fragmentation is the biggest issue.
Linked Allocation
In this method, every file includes a list of links.
The directory contains a link or pointer in the first block of a file.
With this method, there is no external fragmentation
This File allocation method is used for sequential access files.
This method is not ideal for a direct access file.
Indexed Allocation
In this method, Directory comprises the addresses of index blocks of the specific files.
An index block is created, having all the pointers for specific files.
All files should have individual index blocks to store the addresses for disk space.
Dynamic Storage-Allocation Problem/Algorithms
Memory allocation is a process by which computer programs are assigned memory or space. It is of four types:
First Fit Allocation
The first hole that is big enough is allocated to the program. In this type of fit, the partition allocated is the
first sufficient block from the beginning of the main memory.
Fragmentation
As processes are loaded and removed from memory, the free memory space is broken into little pieces. It happens
after sometimes that processes cannot be allocated to memory blocks considering their small size and memory
blocks remain unused. This problem is known as Fragmentation.
Fragmentation Category −
1. External fragmentation
Total memory space is enough to satisfy a request or to reside a process in it, but it is not contiguous, so it cannot
be used.
2. Internal fragmentation
The memory block assigned to the process is bigger. Some portion of memory is left unused, as it cannot be used
by another process.
For distributed data, three types of fragmentation are possible:
1. Horizontal fragmentation
2. Vertical fragmentation
3. Hybrid (mixed) fragmentation, achieved by performing horizontal and vertical partitioning together; mixed
fragmentation is a group of rows and columns in a relation.
Reconstruction of Hybrid Fragmentation
The original relation in hybrid fragmentation is reconstructed by performing union and full outer join operations.
● I/O problem
- Latch job in memory while it is involved in I/O
- Do I/O only into OS buffers
Segmentation
Segmentation is a memory management technique in which each job is divided into several segments of different
sizes, one for each module that contains pieces that perform related functions. Each segment is a different logical
address space of the program or A segment is a logical unit.
Segmentation with Paging
Both paging and segmentation have their advantages and disadvantages, so it is better to combine these two
schemes to improve on each. The combined scheme is known as segmentation with paging (paged segmentation).
Each segment in this scheme is divided into pages, and each segment maintains its own page table. So the logical
address is divided into the following 3 parts:
1. Segment numbers(S)
2. Page number (P)
3. The displacement or offset number (D)
As shown in the following diagram, the Intel 386 uses segmentation with paging for memory management with a
two-level paging scheme
Swapping
Swapping is a mechanism in which a process can be swapped temporarily out of the main memory (or move) to
secondary storage (disk) and make that memory available to other processes. At some later time, the system
swaps back the process from the secondary storage to the main memory.
Though performance is usually affected by the swapping process it helps in running multiple and big processes in
parallel and that's the reason Swapping is also known as a technique for memory compaction.
Note: Bring a page into memory only when it is needed. The same page may be brought into memory several times
Paging
A page is also a unit of data storage. A page is loaded into the processor from the main memory. A page is made up
of unit blocks or groups of blocks. Pages have fixed sizes, usually 2k or 4k. A page is also called a virtual page or
memory page. When the transfer of pages occurs between main memory and secondary memory it is known as
paging.
Paging is a memory management technique in which process address space is broken into blocks of the same size
called pages (size is the power of 2, between 512 bytes and 8192 bytes). The size of the process is measured in the
number of pages.
Divide logical memory into blocks of the same size called pages.
Similarly, main memory is divided into small fixed-sized blocks of (physical) memory called frames and the size of a
frame is kept the same as that of a page to have optimum utilization of the main memory and to avoid external
fragmentation.
Divide physical memory into fixed-sized blocks called frames (size is the power of 2, between 512 bytes and 8192
bytes)
Hard disk stores information in the form of magnetic fields. Data is stored digitally in the form of tiny magnetized
regions on the platter where each region represents a bit.
Microsoft SQL Server databases are stored on disk in two files: a data file and a log file
Note: To run a program of size n pages, need to find n free frames and load the program
Implementation of Page Table
The page table is kept in the main memory
Page-table base register (PTBR) points to the page table
Page-table length register (PTLR) indicates the size of the page table
In this scheme, every data/instruction access requires two memory accesses. One for the page table and one for
the data/instruction.
The two memory access problems can be solved by the use of a special fast-lookup hardware cache called
associative memory or translation look-aside buffers (TLBs)
The concept of a logical address space that is bound to separate physical address space is central to proper
memory management
Logical address – generated by the CPU; also referred to as virtual address
Physical address – address seen by the memory unit
Logical and physical addresses are the same in compile-time and load-time address-binding schemes; logical
(virtual) and physical addresses differ in the execution-time address-binding scheme
The user program deals with logical addresses; it never sees the real physical addresses
The logical address space of a process can be noncontiguous; the process is allocated physical memory whenever
the latter is available
1. Oracle 18c (new name) = Oracle Database 12c Release 2, 12.2.0.2 (a patch set for 12c Release 2).
2. Oracle 19c (new name) = Oracle Database 12c Release 2, 12.2.0.3 (the terminal patch set for 12c Release 2).
Database Instances and Automatic Storage Management (ASM): Database instances and ASM instances will be
restarted if they crash somehow.
Oracle NET Listener: Oracle NET Listener will be started if it crashes and stops listening for an incoming
connection.
ASM Disk Groups: Oracle Restart will mount ASM Disk groups if they are dismounted.
Database Services: Non-default database services will be started by the Oracle Restart feature.
Oracle Notification Services (ONS): This is another Oracle component that can be protected by Oracle Restart.
Single instance can be converted into RAC using one of the below methods:
Enterprise Manager
DBCA i.e. Database Configuration Assistant
RCONFIG Utility
Administration Task: Update the password of an Oracle Home User. Preferred Tool: Oracle Home User Control. Other Tools: None.
Administration Task: Load data with SQL*Loader's conventional and direct path load methods. Preferred Tool: Oracle Enterprise Manager Load Wizard. Other Tools: SQL*Loader (SQLLDR), OCOPY.
A latch can be defined as an object that ensures data integrity on other objects in SQL Server memory, particularly
pages.
Binding in-lists in 10g: The newly introduced MEMBER OF collection condition can be used in 10g as an alternative to binding an IN-list.
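A minimal sketch of the MEMBER OF alternative (assuming the SCOTT-style EMP demo table; the name_tab type is created here purely for illustration):
CREATE OR REPLACE TYPE name_tab AS TABLE OF VARCHAR2(30);
/
SET SERVEROUTPUT ON
DECLARE
  v_names name_tab := name_tab('SMITH', 'JONES');
  v_count PLS_INTEGER;
BEGIN
  -- One static query handles any number of values; no dynamic IN-list is built
  SELECT COUNT(*)
    INTO v_count
    FROM emp e
   WHERE e.ename MEMBER OF v_names;
  DBMS_OUTPUT.PUT_LINE(v_count);
END;
/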
Partition-wise dependencies possible in 10g release 2: Partitions can be modified without the requirement of
invalidating dependent objects.
The collect function in 10g: String aggregation can be used along with the newly introduced 10g COLLECT group
function.
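A sketch of aggregation with COLLECT, reusing the illustrative name_tab type from the previous sketch:
SELECT deptno,
       CAST(COLLECT(ename) AS name_tab) AS employees
FROM   emp
GROUP  BY deptno;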
PL/SQL optimization in Oracle 10g: The newly introduced optimizing compiler makes PL/SQL execution much quicker in Oracle 10g.
SQL plan enhancements: New Oracle 10g features allow for enhanced SQL performance investigations that are
simpler to use.
Dml error logging performance: The Release 2 of Oracle 10g features add-on performance characteristics of DML
error logging.
Flashback restore points enable the capturing of a point in time for affecting flashback operations in release 2 of
Oracle 10g.
Oracle now "fixes" DBMS_OUTPUT via new enhancements responsible for impacting dbms_output in the Oracle
10g release 2.
The auto-trace enhancement feature allows the use of DBMS_XPLAN for creating output for its explain plans in
Oracle 10g release 2.
The evolution of Oracle data can now be viewed as a flashback version query in Oracle 10g.
Oracle 10g enables DBAs and developers to enqueue/dequeue in bulk with its new array-based advanced queuing
tool in 10g.
External tables can be used for unloading/ write/read data in 10g.
Oracle offers new SQL tuning recommendations with 10g.
Exceptions are capable of being traced back to the source in Oracle 10g
----------------
New Features of 11g
Some of the most used and popular features of Oracle 11g include the following:
The Database Replay tool helps in capturing the production database workload. It replays the workload in a test database (or the same database) for assessing the overall impact of a change. The SQL statements thus captured can be replayed at will.
The SQL Performance Analyzer predicts the impact and performance of changes made to SQL even before the
modifications take place. This feature accurately predicts the results of the actual SQL statements that are issued
against a database – besides, it monitors SQL performance metrics in real-time.
Edition-based Redefinition, which was introduced in Release 2, enables the patching and updating process of
various data objects even as the application is online.
The new features related to Referential, Internal, Virtual Column partitioning as well as other sub-partitioning
options are useful for handling the partitioned tables in Oracle 11g with ease.
The revolutionary Edition-Based Redefinition feature helps in the patching/ updating of application data objects
even as the application remains in a state of uninterrupted use (this relates to Release 2 only).
The tools and features for schema management help in the easy addition of columns containing default values.
These new features of the 11g version of Oracle aid the exploration of virtual columns, invisible indexes, and read-
only tables.
Oracle 11g presents new tools for RAC One Node, patching, and upgrades, Clusterware, etc. that enables the use
of a unique name for clusters, enables HA for single-instance databases, places OCRs/voting disks on the ASM, etc.
The new features of Oracle 11g in the areas connected with OLAP and data warehousing include the Analytic Workspace Manager, Cube Organized MVs, Query Rewrites that are extendible to subqueries/remote tables, and more.
Features such as the SIMPLE_INTEGER data type, "real" native compilation, inlining of code, the PLS timer, etc. are beneficial for the enhancement of PL/SQL performance.
The PL/SQL trigger enhancements fire on different events and give Oracle 11g the ability to force triggers belonging to the same type to fire in a defined sequence.
The transaction management features of Oracle 11g explore Enterprise Manager's LogMiner interface and help in the usage of the Flashback Data Archive.
The new security features of Oracle 11g take care of data masking, case-sensitive passwords, Tablespace Encryption, etc.
The Oracle Exadata Simulator, when used in Oracle Database 11g Release 2 EE databases, predicts how statements
will react in the Oracle Exadata Database Machine while using the SQL Performance Analyzer, etc.
With the SQL Plan Management new feature in Oracle 11g, DBAs and developers may pick up the right plan every
time. The incorporation of bind variables ensures new execution plans that serve to be less cumbersome for use by
DBAs.
The new features of Oracle 11g include smart tools for multicolumn statistics, online patching, automatic memory
management, etc.
Oracle 11g new features offer help via SQL Access Advisor for the actual usage of the tables along with their data.
The Pivot and Unpivot SQL operations give off information in the form of spreadsheet-type crosstab reports
belonging to relational tables that use simple SQL as well as store data from crosstab tables to different relational
tables.
The features of the Data Recovery Advisor allow for parallel backup of the same files, creation, and management
of virtual catalogs, undropping of tablespaces, etc.
The resiliency tools and features of Oracle 11g lay down the platform for the Automatic Diagnostic Repository, Automatic Health Monitoring, and other new resiliency features.
The Automatic Storage Management features of Oracle 11g encompass variable extent sizes, the new SYSASM role, and many ASM improvements.
Oracle Database 11g supports data compression with new features of Advanced /Hybrid Columnar Compression.
Oracle 11g allows for brand new features related to PL/SQL Function Cache, SQL Result Cache, Database Resident
Connection Pooling, and so forth.
New features of 11g in Oracle provide next-generation LOB tools for LOB encryption, deduplication, compression,
and asynchronicity.
------------------------------
New Features of 12c
Oracle Database 12c features make it easy for developers and DBAs to make their transition to cloud applications.
For instance, its multitenant architecture has been designed for simplifying consolidation to forms without
necessitating any changes. The consolidation tools of Oracle 12c are beneficial for cloud readiness. Besides, its
pluggable databases are backed by rapid provisioning, portability capabilities, etc. Overall, Oracle Database 12c is
very useful for self-service provisioning and database as a service. The new features of Oracle 12c include:
With 12c, Oracle is addressing the problems related to multitenancy via the functionality of pluggable databases backed by data consolidation tools. This feature has led to significant changes in database architecture with the help of Container Databases (CDBs) and Pluggable Databases (PDBs). The container database owns the processes and memory. The PDB takes care of user data while the container holds the metadata. Including the seed PDB, up to 253 PDBs can be created. The pluggable database feature helps upgrade, patch, monitor, tune, adjust, back up, and guard the data of a single instance, so that each PDB can hold its own separate HR or SCOTT schema. The CPU percentage can also be allocated for each PDB.
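A hedged sketch of creating a PDB from the seed (the PDB name, admin user, and file paths are hypothetical):
CREATE PLUGGABLE DATABASE hr_pdb
  ADMIN USER pdb_admin IDENTIFIED BY MyPassword1
  FILE_NAME_CONVERT = ('/u01/app/oracle/oradata/cdb1/pdbseed/',
                       '/u01/app/oracle/oradata/cdb1/hr_pdb/');
ALTER PLUGGABLE DATABASE hr_pdb OPEN;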
Another new feature of Oracle 12c is its Redaction Policy. Data redaction, or masking of data, can be set up via a redaction policy that uses a package named DBMS_REDACT. This package extends the capabilities of FGAC and VPD as present in earlier versions.
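A minimal sketch of such a policy (the schema, table, column, and policy names are hypothetical):
BEGIN
  DBMS_REDACT.ADD_POLICY(
    object_schema => 'HR',
    object_name   => 'EMPLOYEES',
    column_name   => 'SALARY',
    policy_name   => 'redact_salary',
    function_type => DBMS_REDACT.FULL,  -- replace values with the default full-redaction value
    expression    => '1=1');            -- apply to every query
END;
/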
Oracle Database 12c introduces row-limiting FETCH and OFFSET clauses that replace the traditional Top-N query technique based on ROWNUM. This new SQL syntax simplifies the fetching of the first few rows; "FETCH FIRST x ROWS ONLY" is the new syntax that can be used for this purpose.
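For instance (using the HR sample schema for illustration):
SELECT employee_id, salary
FROM   hr.employees
ORDER  BY salary DESC
FETCH FIRST 5 ROWS ONLY;

-- Combine with OFFSET to skip rows before fetching:
SELECT employee_id, salary
FROM   hr.employees
ORDER  BY salary DESC
OFFSET 10 ROWS FETCH NEXT 5 ROWS ONLY;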
The Online Stats Gathering and Adaptive Query Optimization features of the 12c version of Oracle allow the optimizer to make runtime adjustments to execution plans, leading to more useful statistics. For IAS (Insert As Select) and CTAS (Create Table As Select) statements, the statistics can be gathered online for immediate availability.
The new Oracle 12c RECOVER TABLE command simplifies the task of restoring any particular table in RMAN. Users no longer need to restore a whole tablespace or use the export/import utilities for this purpose.
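A hedged sketch of this table-level recovery (the object names, point in time, and auxiliary destination are hypothetical):
RMAN> RECOVER TABLE hr.employees
        UNTIL TIME 'SYSDATE - 1'
        AUXILIARY DESTINATION '/u01/aux'
        REMAP TABLE 'HR'.'EMPLOYEES':'EMPLOYEES_RECOVERED';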
The limits allocated earlier on the data types NVarchar2, Varchar2, Raw Data Types, etc. have been increased from
4K to 32,767 bytes in Oracle 12C.
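The 32,767-byte limit applies only after extended data types are enabled, roughly as follows (a summary, not an exhaustive procedure; the database must be restarted in UPGRADE mode):
SQL> ALTER SYSTEM SET max_string_size = EXTENDED SCOPE = SPFILE;
-- restart the instance in UPGRADE mode, then run:
SQL> @?/rdbms/admin/utl32k.sql
-- restart normally; VARCHAR2 columns can now be declared up to VARCHAR2(32767)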
The inline PL/SQL capabilities have been greatly enhanced in Oracle 12c: PL/SQL procedures and functions can now be declared inline, in the WITH clause of a query, alongside inline views. Because such functions do not actually exist as objects in the database, they cannot be found through ALL_OBJECTS; the query is simply written as if a real stored procedure were being called.
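A minimal sketch of an inline WITH-clause function (the function name and format mask are illustrative):
WITH
  FUNCTION fmt_sal(p_sal NUMBER) RETURN VARCHAR2 IS
  BEGIN
    RETURN TO_CHAR(p_sal, 'FM999,999');
  END;
SELECT fmt_sal(salary) AS formatted_salary
FROM   hr.employees;
/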
A column can now be declared GENERATED AS IDENTITY as a sequence replacement with the new Oracle 12c feature. This amounts to creating a separate sequence and performing sequence.NEXTVAL for every row automatically. Known as the no-sequence auto-increment primary key, this feature is helping the developer community in many more ways than one.
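For example (a hypothetical table):
CREATE TABLE app_user (
  id   NUMBER GENERATED ALWAYS AS IDENTITY,  -- no separate sequence or trigger needed
  name VARCHAR2(50)
);
INSERT INTO app_user (name) VALUES ('Amir');  -- id is assigned automatically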
Before the introduction of version 12 of Oracle, a column was not capable of being in multiple indexes. Now a column can be added to a B-tree index and a bitmap index at the same time, even though only one index would be usable at a time.
The new feature of Oracle 12c related to the online migration of sub-partition or partition of tables from a
tablespace to another is beneficial for DBAs. Just as online movement could be achieved for non-partitioned tables
in the earlier releases, table partitions/ sub-partitions can now be moved either online or offline to other
tablespaces. The ONLINE clause allows all DML operations to be performed uninterrupted to the partition or sub-
partition involved in any given procedure. Do note that no DML operations are permitted in case the partition or
sub-partition is taken offline.
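A sketch of an online partition move (the table, partition, and tablespace names are hypothetical):
ALTER TABLE sales MOVE PARTITION sales_q1 ONLINE
  TABLESPACE new_ts
  UPDATE INDEXES;  -- DML continues uninterrupted and indexes are maintained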
With the temporary undo feature in Oracle 12c, the undo records for temporary tables are capable of being stored in temporary segments rather than the UNDO tablespace. This leads to reduced undo tablespace usage and less redo log space being used.
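The feature is switched on with a single parameter, for example:
ALTER SESSION SET temp_undo_enabled = TRUE;  -- can also be set with ALTER SYSTEM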
The new database archiving feature of Oracle 12c enables the archiving of rows found in a table by stating them as
inactive. The inactive rows remain in the database; they are capable of being optimized with the help of
compression but aren’t visible to applications.
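A sketch of this In-Database Archiving feature (the table name and predicate are hypothetical):
ALTER TABLE orders ROW ARCHIVAL;
UPDATE orders
SET    ora_archive_state = '1'           -- mark old rows as inactive
WHERE  order_date < DATE '2020-01-01';
-- By default, sessions see only rows whose ora_archive_state is '0'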
Oracle 12c allows for invisible columns in a table; they are not found in generic queries.
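For example (a hypothetical column):
ALTER TABLE hr.employees ADD (internal_notes VARCHAR2(200) INVISIBLE);
-- SELECT * does not return invisible columns; they must be named explicitly:
SELECT employee_id, internal_notes FROM hr.employees;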
It is now possible to create limits on the PGA by activating automatic PGA management, which necessitates a PGA_AGGREGATE_LIMIT-based parameter setting. The limits set on the PGA help in avoiding excessive usage of it.
DDL statements get automatically logged in xml/log files in case ENABLE_DDL_LOGGING has been set to True.
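Both of the settings just described are plain initialization parameters; illustrative values:
ALTER SYSTEM SET pga_aggregate_limit = 4G;     -- hard cap on total PGA usage
ALTER SYSTEM SET enable_ddl_logging  = TRUE;   -- DDL is then recorded in XML/log files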
If possible, users should connect to an instance via a dispatcher. This keeps the number of processes required for the running instance low. Only when necessary should a user explicitly connect to an instance using a dedicated server process.
The listener process is started when the server is started (or whenever the instance is started). The listener is only
required for connections from other servers, and the DBA performs the creation of the listener process. When a
new connection comes in over the network, the listener passes the connection to Oracle.
In SQL Server, DBCC procedures perform database consistency checks. In Oracle, the DBVERIFY utility checks for
data block corruption. Oracle also has an ANALYZE command that will perform structure checks.
I want to make one of the tablespaces READ ONLY, and guarantee its state to be the same as it was before startup.
> save the scripts of making the datafile online and tablespace to read only in .sql file.
> alter database datafile 'd:\data_file_location' online;
> alter tablespace <tablespace_name> read only;
Physical Standby Database
A physical standby database is kept synchronized with the primary database by performing recovery using redo data that is received from the primary database. Oracle Database 12c enables a physical standby database to receive and apply redo while it is open in read-only mode.
Logical Standby Database
A logical standby database contains the same logical information (unless configured to skip certain objects) as the
production database, although the physical organization and structure of the data can be different. The logical
standby database is kept synchronized with the primary database by transforming the data in the redo received from
the primary database into SQL statements and then executing the SQL statements on the standby database. This is
done with the use of LogMiner technology on the redo data received from the primary database. The tables in a
logical standby database can be used simultaneously for recovery and other tasks such as reporting, summations,
and queries.
A standby database is a transactionally consistent copy of the primary database. Using a backup copy of the primary
database, you can create up to nine standby databases and incorporate them in a Data Guard configuration.
A standby database is a database replica created from a backup of a primary database. By applying archived redo
logs from the primary database to the standby database, you can keep the two databases synchronized.
A standby database has the following main purposes:
1. Disaster protection
2. Protection against data corruption
Oracle Data Guard is one of the best data protection software out there for the oracle database. It works in a very
simple manner by maintaining an exact physical replica of the production copy remotely. Oracle Data Guard works
without any issue and performs active-passive data replication for high availability of data.
In Oracle Data Guard, data replication can happen only on homogeneous platforms that use identical database management systems (DBMS) and operating systems. Such systems are a network of two or more Oracle databases residing on one or more machines. Data Guard performs one-way physical replication, and these replications can be configured only between Oracle databases.
The Oracle Data Guard uses Active Data Guard, known for its simplicity, data availability, best data protection, and
high performance. As a result, it passes for the simplest and the fastest one-way replication of a complete Oracle
database. Unlike GoldenGate, Data Guard is very simple to use and supports all applications and workloads. It has
no data type restrictions and it’s very transparent to operate. There are no requirements for supplemental logging.
Also, there are no performance implications for tables without a unique index or primary key with Data Guard. In
addition, the need for performance tuning is also zero to none at the standby database.
More than just for disaster recovery
Oracle’s disaster recovery solution for Oracle data
Automates the creation and maintenance of one or more synchronized copies (standby) of the production (or
primary) database
For GoldenGate, supplemental logging should be enabled. You might also face some performance issues if the table doesn't have any primary key. Oracle GoldenGate is the most advanced logical replication product from Oracle. It is especially well known for its cross-platform operating capabilities.
The basic features of the two products may look similar, but GoldenGate uses logical replication while Data Guard does not (it ships and applies redo).
Data Guard is best for disaster recovery and data protection problems, GoldenGate is a more flexible
heterogeneous replication mechanism and is also able to transform the data while it is being replicated.
Data Guard is an Oracle-specific technology, while GoldenGate supports heterogeneous database systems, including all the major RDBMSs such as DB2, Sybase, and MySQL.
Data Guard supports active-passive replication. One of the databases is the primary database and the other one is in an inactive Data Guard mode.
GoldenGate supports an active-active replication mode and allows both systems to work simultaneously while
maintaining the data integrity.
GoldenGate allows transformation of the data, with conflict management while it is being replicated between both
database systems.
GoldenGate allows replication across platforms. Data can be extracted from a Unix platform and replicated to an Oracle database running on Windows.
GoldenGate has many cases of utilization. The use of flat files for data transportation and the support of heterogeneous systems make the technology very versatile.
Oracle Active Data Guard provides the best data protection and availability for Oracle Database in the simplest most
economical manner by maintaining an exact physical replica of the production copy at a remote location that is
open read-only while replication is active.
GoldenGate is an advanced logical replication product that supports multi-master replication, hub and spoke
deployment and data transformation, providing customers very flexible options to address the complete range of
replication requirements. GoldenGate also supports replication between a broad range of heterogeneous hardware
platforms and database management systems.
1. Basic: With GoldenGate, data replication can happen between heterogeneous database platforms; with Data Guard, data replication can happen only between homogeneous database platforms.
4. Transparency of backups: With GoldenGate, only the data that is replicated is similar on the two sides; there is no transparency of backups. With Oracle Data Guard, the primary and standby are physically exact copies of each other.
What is Cloning?
Database Cloning is a procedure that can be used to create an identical copy of an existing Oracle database. DBAs occasionally need to clone databases to test backup and recovery strategies, or to export a table that was dropped from the production database and import it back into production. Cloning can be done on a different host or on the same host; a clone, unlike a standby database, is simply an exact copy of the database.
Database Cloning can be done using the following methods,
Cold Cloning
Hot Cloning
RMAN Cloning
Clone an Oracle Database using Cold Physical Backup
The datafiles from the production database can be from a hot backup, a cold backup or an RMAN backup.
Source Database side: (Troy database)
Cold Backup Steps:
1. Get the file path information using below query
Select name from v$datafile;
select member from v$logfile;
select name from v$controlfile;
2. Parameter file backup
If the troy database is running on an spfile:
Create pfile='/u01/backup/inittroy.ora' from spfile;
If the database is running on a pfile, use an OS command to copy the pfile into the backup path.
3. Take the control file backup
Alter database backup controlfile to trace as '/u01/backup/control01.ora';
4. Shutdown immediate
5. Copy all the data files/log files using OS commands and place them in the backup path.
6. Start up the database.
Clone Database side: (Clone database)
Database Name: Clone
Clone Database Steps:
1. Create the appropriate folders in the corresponding path and place the backup files in the corresponding folders (bdump, udump, create, pfile, cdump, oradata).
2. Change the init.ora parameters such as the control file path, db name, instance name, etc.
3. Create the password file using the orapwd utility.
(For a database on Windows, we also need to create the service using the oradim utility.)
RMAN Cloning
RMAN performs the following steps automatically to duplicate the database:
1. Allocates an automatic auxiliary channel
In Oracle, Recovery Manager (RMAN) has the ability to duplicate or clone a database from a backup or from an
active database using DUPLICATE command to copy all the data in a source database.
Why to use RMAN over traditional OS duplicate command? The answer is that, if you copy a database with
operating system utilities instead of the DUPLICATE command, then the DBID (an internal, uniquely generated
number that differentiates databases) of the copied database remains the same as the original database but the
DUPLICATE command automatically assigns the duplicate database a different DBID so that it can be registered in
the same recovery catalog as the source database.
In active database duplication, RMAN connects as TARGET to the source database instance and as AUXILIARY to the auxiliary instance. The auxiliary instance is a database instance used in the recovery process to perform the work of recovery. RMAN copies the live source database over the network to the auxiliary instance; no backups of the source database are required.
In backup-based duplication, RMAN creates the duplicate database by using pre-existing RMAN backups and
copies. RMAN can perform backup-based duplication with or without either of the following connections:
* Target
* Recovery catalog
——————————————-
(1) Create a backup of the source database by setting auto backup ON, by default CONTROLFILE AUTOBACKUP is
OFF.
$ rman target=/
RMAN> CONFIGURE CONTROLFILE AUTOBACKUP ON;
RMAN> BACKUP DATABASE PLUS ARCHIVELOG
(2) Create a password file for the duplicate instance.
$ orapwd file=filename password=password entries=max_users
Notes: The Oracle orapwd command line utility assists the DBA with granting SYSDBA and SYSOPER privileges to
other users.
(3) Add the appropriate entries into the “tnsnames.ora” file in the “$ORACLE_HOME/network/admin” directory to
allow connections to the target database from the duplicate server.
(4) Create pfile on source and get that file copied to target.
(5) Make the backup files from the source database available to the destination server using the scp command (secure copy over SSH).
(6) Now connect to the duplicate instance as
$ ORACLE_SID=DB11G; export ORACLE_SID
$ sqlplus / as sysdba
(7) Start the database in NOMOUNT mode using pfile which is created earlier.
SQL> STARTUP NOMOUNT;
(8) Now we need to connect auxiliary instance (db instance used in the recovery process to perform the recovery
work)
$ rman AUXILIARY /      # no target or catalog; the metadata comes from the backups
(9) And then finally duplicate the database as:
RMAN> DUPLICATE DATABASE TO newdb
        SPFILE
        BACKUP LOCATION '/source/app/oracle/fast_recovery_area/kkdb'
        NOFILENAMECHECK;
Oracle allocates logical database space for all data in a database. The units of database space allocation are data
blocks, extents, and segments.
The Relationships Among Segments, Extents, Data Blocks in the data file, Oracle block, and OS block:
A schema is a collection of database objects, including logical structures such as tables, views, sequences, stored
procedures, synonyms, indexes, clusters, and database links.
Data block: Oracle manages the storage space in the data files of a database in units called data blocks. A data
block is the smallest unit of data used by a database.
Oracle blocks and data blocks describe the same unit of storage viewed logically and physically, respectively, just as a table's (logical) data is stored in its (physical) data segment.
The high water mark is the boundary between used and unused space in a segment.
The high water mark of a table is the maximum number of database blocks used so far by a segment. The HWM is increased when we insert data, but it is not decreased automatically when we delete data. In manual segment space management, during a full scan, all blocks under the high water mark are read and processed.
In automatic segment space management (ASSM), during a full scan, all blocks under the "low high water mark" are read and processed; blocks between this low high water mark and the high water mark may or may not be formatted yet.
The high water mark (HWM) for an Oracle table is a construct that shows the table at its greatest size. Just as a lake has a high-water mark after a drought, an Oracle table has a high water mark that shows the greatest size of the table, the point at which it consumed the most extents.
Operating system block: The data consisting of the data block in the data files are stored in operating system
blocks.
OS Page: The smallest unit of storage that can be atomically written to non-volatile storage is called a page. One block can hold 2 pages. The minimum database page size is 512 bytes, and the maximum database page size is 65,536 bytes.
In PostgreSQL and SQL Server the default page size is 8 KB, in MySQL it is 16 KB, and in IBM DB2 and Oracle it is only 4 KB.
A formula based on the block size and AVG_ROW_LEN (an estimate using it is shown further below) is valid for average rows per block as of your last analyze of the table (with dbms_stats).
You can also use the dbms_rowid package to count the rows stored in a single data block in Oracle:
select
count(*)
from
TOPIC
where
dbms_rowid.rowid_block_number(rowid) =
(
select
min(dbms_rowid.rowid_block_number(rowid))
from
TOPIC
);
Question: What query do I need to display the number of data blocks consumed by an Oracle table?
Answer: To see the number of blocks used by a table you can issue this query:
select
blocks,
bytes/1024/1024 as MB
from
user_segments
where
segment_name = 'MYTAB';
Something like this might estimate average rows per data block (SQL below is not tested):
select
(blocksize - (blocksize*(pctfree/100))) / avg_row_len
from
user_tables
where
table_name = 'MYTAB';
QUERY 1: Check table size from user_segments. When you are connected to your own schema/user.
select segment_name,sum(bytes)/1024/1024/1024 GB from user_segments where segment_type='TABLE' and
segment_name=upper('&TABLE_NAME') group by segment_name;
QUERY 2: Check table size from dba_segments if you are connected using sysdba.
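A sketch of the corresponding dba_segments query (mirroring QUERY 1 above):
select owner, segment_name, sum(bytes)/1024/1024/1024 GB
from dba_segments
where segment_type='TABLE' and segment_name=upper('&TABLE_NAME')
group by owner, segment_name;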
ARCHIVELOG MODE
Advantages
1. You can perform hot backups (backups when the database is online).
2. The archive logs and the last full backup (offline or online) or an older backup can completely recover the
database without losing any data because all changes made in the database are stored in the log file.
Disadvantages
1. It requires additional disk space to store archived log files. However, the agent offers the option to purge
the logs after they have been backed up, giving you the opportunity to free disk space if you need it.
NO-ARCHIVELOG MODE
Advantages
Disadvantages
1. If you must recover a database, you can only restore the last full offline backup. As a result, any changes
made to the database after the last full offline backup are lost.
2. Database downtime is significant because you cannot back up the database online. This limitation
becomes a very serious consideration for large databases.
Note: Because NOARCHIVELOG mode does not guarantee Oracle database recovery if there is a disaster, the Agent
for Oracle does not support this mode. If you need to maintain Oracle Server in NOARCHIVELOG mode, then you
must backup full Oracle database files without the agent using CA ARCserve Backup while the database is offline to
ensure disaster recovery.
LOG_ARCHIVE_FORMAT: This parameter names the archive logs in this format. For example, if your format is:
arch%s.arc, your log files will be called: arch1.arc, arch2.arc, arch3.arc where the ‘1’, ‘2’, ‘3’, etc is the sequence
number.
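For example (an illustrative format; note that current releases require the %t, %s, and %r variables to be present):
SQL> ALTER SYSTEM SET log_archive_format = 'arch_%t_%s_%r.arc' SCOPE = SPFILE;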
Switching Database Archiving Mode
1. Shut down the database instance.
SQL> shutdown immediate
An open database must be closed and dismounted and any associated instances shut down before the database’s
archiving mode can be switched. Archiving cannot be disabled if any datafiles need media recovery.
2. Backup the database. This backup can be used with the archive logs that you will generate.
3. Perform any operating system specific steps (optional).
4. Start up a new instance and mount, but do not open the database.
SQL> startup mount
NOTE: If you are using the Real Application Cluster (RAC), then you must mount the database exclusively using one
instance to switch the database’s archiving mode.
5. Put the database into archivelog mode
SQL> alter database archivelog;
NOTE: You can also use below shown query to take the database out of archivelog mode.
SQL> alter database noarchivelog;
6. Open the database.
SQL> alter database open;
7. Verify your database is now in archivelog mode.
SQL> archive log list
Database log mode Archive Mode
Automatic archival Enabled
Archive destination USE_DB_RECOVERY_FILE_DEST
Oldest online log sequence 22
Next log sequence to archive 24
Current log sequence 24
8. Archive all your redo logs at this point.
SQL> archive log all;
9. Ensure these newly created Archive log files are added to the backup process.
A bigfile tablespace eases database administration because it consists of only one data file. The single data file can be up to 128 TB in size if the tablespace block size is 32 KB; if you use the more common 8 KB block size, 32 TB is the maximum size of a bigfile tablespace.
Oracle Database must use logical space management to track and allocate the extents in a tablespace. When a
database object requires an extent, the database must have a method of finding and providing it. Similarly, when
an object no longer requires an extent, the database must have a method of making the free extent available.
Oracle Database manages space within a tablespace based on the type that you create.
SGA (System Global Area) is an area of memory (RAM) allocated when an Oracle Instance starts up. The SGA's size
and function are controlled by initialization (INIT.ORA or SPFILE) parameters.
In general, the SGA consists of the following subcomponents, as can be verified by querying V$SGAINFO:
SELECT * FROM v$sgainfo;
The common components are:
Data buffer cache - cache data and index blocks for faster access.
Shared pool - cache parsed SQL and PL/SQL statements.
Dictionary Cache - information about data dictionary objects. (The dictionary cache is a memory area in the shared pool used to hold blocks of recently used data dictionary tables. The data dictionary contains information about all tables, indexes, triggers, and other database objects.)
Redo Log Buffer - committed transactions that are not yet written to the redo log files.
JAVA pool - caching parsed Java programs.
Streams pool - cache Oracle Streams objects.
Large pool - used for backups, UGAs, etc.
Automatic Shared Memory Management simplifies the configuration of the SGA and is the recommended
memory configuration. To use Automatic Shared Memory Management, set the SGA_TARGET initialization
parameter to a nonzero value and set the STATISTICS_LEVEL initialization parameter to TYPICAL or ALL. The value
of the SGA_TARGET parameter should be set to the amount of memory that you want to dedicate to the SGA. In
response to the workload on the system, the automatic SGA management distributes the memory appropriately
for the following memory pools:
SGA: The size is indirectly determined by the size of the memory areas contained.
Buffer Pool: DB_BLOCK_BUFFERS (unit: Blocks) or DB_CACHE_SIZE when you use the dynamic SGA as described in
Note 617416.
Shared Pool : SHARED_POOL_SIZE
Java Pool: JAVA_POOL_SIZE
Large Pool : LARGE_POOL_SIZE
Streams Pool (Oracle 10g or later): STREAMS_POOL_SIZE
Redo Buffer: LOG_BUFFER
In addition, in the context of the dynamic SGA (Note 617416), you can define the parameter SGA_MAX_SIZE which
sets an upper limit for the total size of the SGA. In general, you can only increase the size of parameters, such as
DB_CACHE_SIZE or SHARED_POOL_SIZE, up to the size defined by SGA_MAX_SIZE
PGA: The PGA allocation is dynamic and can be affected by the parameters SORT_AREA_SIZE, HASH_AREA_SIZE,
BITMAP_MERGE_AREA_SIZE and CREATE_BITMAP_AREA_SIZE or PGA_AGGREGATE_TARGET when you use the
automatic PGA administration
How can I determine the chronological sequence of the PGA size?
Up to and including Oracle 9i, there was no standard way of determining the chronological sequence of the PGA
allocation.
As of Oracle 10g, you can determine the chronological sequence of the overall PGA consumption using
DBA_HIST_PGASTAT:
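For example, a sketch of such a query against DBA_HIST_PGASTAT:
SELECT snap_id,
       ROUND(value/1024/1024) AS pga_mb
FROM   dba_hist_pgastat
WHERE  name = 'total PGA allocated'
ORDER  BY snap_id;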
How do I determine Oracle's current memory requirements?
Ans: Oracle Memory = Buffer Pool + Shared Pool + PGA + Process Memory
Auditing can be used to:
Notify an auditor that an unauthorized user is manipulating or deleting data, and that the user has more privileges than expected, which can lead to reassessing user authorizations
Monitor and gather data about specific database activities
Detect problems with an authorization or access control implementation
The two general types of auditing are standard auditing, which is based on privileges, schemas, objects, and
statements, and fine-grained auditing. Standard audit records can be written either to DBA_AUDIT_TRAIL (the
sys.aud$ table) or to the operating system. Fine-grained audit records are written to DBA_FGA_AUDIT_TRAIL (the
sys.fga_log$ table) and the DBA_COMMON_AUDIT_TRAIL view, which combines standard and fine-grained audit log
records.
Auditing is the monitoring and recording of selected user database actions. It can be based on individual actions,
such as the type of SQL statement executed, or on combinations of factors that can include user name, application,
time, and so on. Security policies can trigger auditing when specified elements in an Oracle database are accessed
or altered, including the contents within a specified object.
Components of database audit
Audit access and authentication: This component measures and helps you understand the core security design; it gathers details about who accessed which systems, when, and how
Audit user and administrator: It lists details about the activities that were performed in the database by application users and administrators
Monitor security activity: This component identifies and flags any suspicious activity and unusual or abnormal access to sensitive data or critical systems
Database audit vulnerability and threat detection: This detects vulnerabilities in the database and monitors every user who is attempting to exploit them
Change Auditing: In this stage, the baseline policy for the database is established. The policy includes configuration
change, schema change, user access, privileges elevation and file structure validation, and then track any
deviations from that baseline metrics.
END
Exports/Imports: run a Data Pump export by issuing the following command at the system command prompt (use FULL=Y in place of SCHEMAS=EMR for a full export):
EXPDP emr/1234567@HMSDB SCHEMAS=EMR DIRECTORY=DATA_PUMP_DIR DUMPFILE=schema.dmp LOGFILE=expschema.log
IMPDP emr/1234567@HMSDB DIRECTORY=DATA_PUMP_DIR DUMPFILE=sharif.dmp SCHEMAS=EMR
(To import into a different schema, use REMAP_SCHEMA=MIS:EMR, the Data Pump equivalent of the legacy imp FROMUSER/TOUSER options.)
2) Physical backups (preferred as the primary method for production databases): Physical backups, which are the primary concern in a backup and recovery strategy, are copies of physical database files. This is also called a file system backup because it uses operating system file backup commands. You can make physical backups with either the Oracle Recovery Manager (RMAN) utility or operating system utilities. These are copies of physical database files. For example, a physical backup might copy database content from a local disk drive to another secure location.
Physical backup types: offline or cold (the database is in a shutdown state, or is running in NOARCHIVELOG mode; it is a point-in-time snapshot of the database), online or hot (the database is open and running in ARCHIVELOG mode), full, and incremental.
3) User-managed backups using SQL*Plus and OS commands: Back up your database manually by executing commands specific to your operating system.
If you do not want to use RMAN, you can use operating system commands such as the UNIX cp command to make backups. You can also automate backup operations by writing scripts. User-managed backups include hot and cold backups, which are also called manual backups. No tool is required for these types of backup; recovery is likewise performed manually when a manual backup was taken.
Recovery Window
A recovery window is a period of time that begins with the current time and extends backward in time to the point
of recoverability.
Two terms that are very important when it comes to RMAN backup validation are:
1. Expired Backups
2. Obsolete Backups
Expired backups
Suppose you trigger an RMAN backup and someone deletes a backup set or backup piece at the OS level. The database CONTROLFILE has the details of the backup on disk, but at the OS level the backup file does not exist.
We can run the RMAN crosscheck command to check whether the backup files exist at the OS level or not. If the backup files are not found, RMAN will mark them as EXPIRED.
RMAN> crosscheck backup;
RMAN> delete expired backup;
Obsolete backups
The general meaning of OBSOLETE is no longer used or required.
We use below command to list all the obsolete backups inside RMAN
RMAN> report obsolete;
RMAN> delete obsolete;
RMAN Backup (A backup is considered expired only when RMAN performs a crosscheck and cannot find the file.
In short, obsolete means "not needed," whereas expired means "not found.")
What are the benefits of RMAN over user-managed backup-recovery process?
Answer:
1. powerful Data Recovery Advisor feature
2. simpler backup and recovery commands
20. It selects the most appropriate backup for database recovery and renders it very easily through use of
simple commands.
21. Using RMAN, you can automatically backup the database to tape.
22. Using RMAN, you can easily clone the database to the remote host by using the duplicate command
provided by RMAN.
23. Moreover, databases can be cloned to any point in time.
24. Because RMAN does not take a backup of the temporary tablespace, during recovery it is created
automatically by RMAN.
25. Using the cross platform tablespace conversion functionality, you can convert the tablespace which was
created in one OS to another.
26. Using the Encryption feature, you can create encrypted backup sets, which will make the backup more secure.
27. Using compression, you can easily create binary compressed backup sets.
28. Most production databases are big in size and change frequently. It is not the right option to backup the
whole database each time (every day). By using the incremental backup functionality, you only take a
backup of changed data blocks by reducing time of backup and future recover process.
29. Using RMAN Block Media Recovery, you can recover your database in the data block level.
30. Using RMAN, you can easily create a physical standby database just following simple steps
A backup set is one or more datafiles, control files, or archived redo logs that are written in an RMAN-specific
format; it requires you to use the RMAN restore command for recovery operations. In contrast, when you use the
copy command to create an image copy of a file, it is in an instance-usable format--you do not need to invoke
RMAN to restore or recover it.
When you issue RMAN commands such as backup or copy, RMAN establishes a connection to an Oracle server
session. The server session then backs up the specified datafile, control file, or archived log from the target
database.
By default, RMAN creates backup sets rather than image copies. A backup set consists of one or more backup pieces,
which are physical files written in a format that only RMAN can access. A multiplexed backup set contains the blocks
from multiple input files. RMAN can write backup sets to disk or tape.
If you specify BACKUP AS COPY, then RMAN copies each file as an image copy, which is a bit-for-bit copy of a database
file created on disk. Image copies are identical to copies created with operating system commands like cp on Linux
or COPY on Windows, but are recorded in the RMAN repository and so are usable by RMAN. You can use RMAN to
make image copies while the database is open.
In a differential level n incremental backup, you back up all blocks that have changed since the most recent level n
or lower backup. For example, in a differential level 2 backup, RMAN determines which level 1 or level 2 backup
occurred most recently and backs up all blocks modified since that backup.
In a cumulative level n backup, RMAN backs up all the blocks changed since the most recent backup at level n-1 or lower. For example, in a cumulative level 3 backup, RMAN determines which level 2 or level 1 backup occurred most recently and backs up all blocks changed since that backup.
Backup Type and Definition
Full: A backup of a datafile that includes every allocated block in the file being backed up. A full backup of a datafile can be an image copy, in which case every data block is backed up. It can also be stored in a backup set, in which case datafile blocks not in use may be skipped, according to rules in Oracle Database Backup and Recovery Reference. A full backup cannot be part of an incremental backup strategy; that is, it cannot be the parent for a subsequent incremental backup.
Incremental: An incremental backup is either a level 0 backup, which includes every block in the file except blocks compressed out because they have never been used, or a level 1 backup, which includes only those blocks that have been changed since the parent backup was taken.
Open: A backup of online, read/write datafiles when the database is open.
Closed: A backup of any part of the target database when it is mounted but not open. Closed backups can be consistent or inconsistent.
Consistent: A backup taken when the database is mounted (but not open) after a normal shutdown. The checkpoint SCNs in the datafile headers match the header information in the control file. None of the datafiles has changes beyond its checkpoint. Consistent backups can be restored without recovery. Note: If you restore a consistent backup and open the database in read/write mode without recovery, transactions after the backup are lost. You still need to perform an OPEN RESETLOGS.
Inconsistent: A backup of any part of the target database when it is open, or when a crash occurred or SHUTDOWN ABORT was run prior to mounting.
What is SCN Number in Oracle Database? And what is use in incremental backup?
Each data block in a datafile contains a system change number (SCN), which is the SCN at which the most recent
change was made to the block. During an incremental backup, RMAN reads the SCN of each data block in the input
file and compares it to the checkpoint SCN of the parent incremental backup. (If block change tracking is enabled,
RMAN does not read the portions of the file known to have not changed since the parent incremental backup.) If
the SCN in the input data block is greater than or equal to the checkpoint SCN of the parent, then RMAN copies the
block. One consequence of this mechanism is that RMAN applies all blocks containing changed data during
recovery—even if the change is to an object created with the NOLOGGING option. Hence, making incremental
backups is a safeguard against the loss of changes made by NOLOGGING operations.
RMAN does not need to restore a base incremental backup of a datafile in order to apply incremental backups to
the datafile during recovery. For example, you can restore non-incremental image copies of the datafiles in the
database, and RMAN can recover them with incremental backups.
Level 0 backups are a base for subsequent backups. A level 0 backup copies all blocks containing data, similar to a full backup, with the only difference that full backups are never included in an incremental strategy. Level 0 backups can be backup sets or image copies. Level 1 backups are subsequent backups of a level 0.
A level 1 incremental backup can be either of the following types:
A differential backup, which backs up all blocks changed after the most recent incremental backup at level 1 or 0.
In a differential level 1 backup, RMAN backs up all blocks that have changed since the most recent incremental
backup at level 1 (cumulative or differential) or level 0. For example, in a differential level 1 backup, RMAN
determines which level 1 backup occurred most recently and backs up all blocks modified after that backup. If no
level 1 is available, RMAN copies all blocks changed since the base level 0 backup.
A cumulative backup, which backs up all blocks changed after the most recent incremental backup at level 0.
Cumulative backups are preferable to differential backups when recovery time is more important than disk space,
because fewer incremental backups need to be applied during recovery.
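The two variants map directly onto RMAN commands (differential is the default when neither keyword is given):
RMAN> BACKUP INCREMENTAL LEVEL 0 DATABASE;              # base backup
RMAN> BACKUP INCREMENTAL LEVEL 1 DATABASE;              # differential
RMAN> BACKUP INCREMENTAL LEVEL 1 CUMULATIVE DATABASE;   # cumulative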
The size of the backup file depends solely upon the number of blocks modified and the incremental backup level.
A level 0 incremental backup is physically identical to a full backup. The only difference is that the level 0 backup is
recorded as an incremental backup in the RMAN repository, so it can be used as the parent for a level 1 backup.
There are two types of incremental backups, "differential" and "cumulative". The goal of an incremental backup is
to back up only those data blocks that have changed since a previous backup. You can use RMAN to create
incremental backups of datafiles, tablespaces, or the whole database. RMAN does not need to restore a base
incremental backup of a datafile in order to apply incremental backups to the datafile during recovery. For example,
you can restore non-incremental image copies of the datafiles in the database, and RMAN can recover them with
incremental backups. Backup sets are logical entities produced by the RMAN BACKUP command.
You can make a backup of the whole database at once or supplement a whole database backup with backups of
individual tablespaces, datafiles, control files, and archived logs. You can use O/S commands to perform these
backups. Because incremental backups are not as big as full backups, you can create them on disk more easily.
Full backup—Creates a copy of data that can include parts of a database such as the control file, transaction files
(redo logs), tablespaces, archive files, and data files. Regular cold full physical backups are recommended. The
database must be in archive log mode for a full physical backup.
Incremental—Captures only changes made after the last full physical backup. Incremental backup can be done with
a hot backup.
Cold-full backup - A cold-full backup is when the database is shut down, all of the physical files are backed up, and
the database is started up again.
Cold-partial backup - A cold-partial backup is used when a full backup is not possible due to some physical
constraints.
Hot-full backup - A hot-full backup is one in which the database is not taken off-line during the backup process.
Rather, the tablespace and data files are put into a backup state.
Hot-partial backup - A hot-partial backup is one in which the database is not taken off-line during the backup
process, plus different tablespaces are backed up on different nights.
RMAN Backup/Restore (full, level 0, level 1)
A full backup and a level 0 backup have the same content; the difference between them is that the level 0 backup is the root backup for its incremental backups, as maintained in the RMAN repository.
Although the content is the same, both are parts of different backup strategies: if you plan to take incremental backups and restore them, full backups cannot be used. In other words, you cannot restore a level 1 incremental backup on top of a full backup; you can only restore a level 1 backup on top of a level 0 backup.
If you lost the APEX tablespace but your database is currently functioning, and assuming your APEX tablespace does not span multiple datafiles, you can attempt to swap out the datafile. Force a backup in RMAN before trying any of this.
The RMAN client: An Oracle Database executable that interprets commands, directs server sessions to execute those
commands, and records its activity in the target database control file. The RMAN executable is automatically installed
with the database and is typically located in the same directory as the other database executables. For example, the
RMAN client on Linux is located in $ORACLE_HOME/bin. Oracle base is the main or root directory of an Oracle installation, whereas ORACLE_HOME is located beneath the base directory and is where all Oracle products reside.
RMAN client is the client application that performs all the backup and recovery operations for the target database.
It uses Oracle net to connect to the target database so that its location can be found on any host that is connected
to the target host using Oracle Net. It is a command line interface which helps in issuing the backup, recover, SQL
and special RMAN commands. It is a mandatory component for RMAN.
Recovery catalog schema: It is the user present in the recovery catalog database that has the metadata tables made
by RMAN. RMAN periodically shifts metadata from the control file of the target database to the recovery catalog. It
is an optional component.
Recovery catalog database: It is a database that contains the recovery catalog that contains metadata which is used
by RMAN to perform backup and recovery tasks. One recovery catalog can be created for containing metadata of
multiple target databases. It is also an optional component.
RMAN obtains the information it needs from either the control file or the optional recovery catalog. The recovery
catalog is a central repository containing a variety of information useful for backup and recovery. Conveniently,
RMAN automatically establishes the names and locations of all the files that you need to back up.
Using RMAN, you can perform two types of incremental backups: a differential backup or a cumulative backup.
A recovery catalog: A separate database schema used to record RMAN activity against one or more target databases.
A recovery catalog preserves RMAN repository metadata if the control file is lost, making it much easier to restore
and recover following the loss of the control file. The database may overwrite older records in the control file, but
RMAN maintains records forever in the catalog unless the records are deleted by the user.
The SCN is an Oracle server–assigned number that indicates a committed version of the database.
Every datafile and control file in the database carries an SCN at a given point in time; it changes as the database changes. When the SCNs recorded in all the datafiles and the control file match, the database has a common, consistent SCN.
This synchronization of the SCNs makes sure we have a consistent backup of the database.
Oracle will try to recover the
instance as close as possible to the time that you specify for the fast_start_mttr_target parameter. The maximum
value of this parameter is 3600 seconds (1 hour).
A recovery catalog is a database schema that holds the metadata used by RMAN for restoration and recovery processes. It basically stores information on backups, datafiles, archived redo logs, and stored scripts.
An alternative is to update the control file parameter in the initialization parameter file (copying the online log from the secondary non-FRA location) and restart the instance, but you will have an interruption of production service, which is very undesirable.
Scenario A)
Besides the FRA, we have multiplexed control files to two other separate locations, so the risk of losing a control file (and the fear of not being able to do complete recovery) is minimized.
We won’t be putting even a single control file in the FRA.
Scenario B)
Besides the FRA, we have multiplexed control files to only one other separate location, so the risk of losing a control file (and the fear of not being able to do complete recovery) is higher. Complete recovery of the database is of greater importance to you than the interruption of the database.
Binary Logs : Point In Time Recovery (PITR)
Binary logs record all changes to the databases, which are important if you need to do a Point In Time Recovery
(PITR). Without the binary logs, you can only recover the database to the point in time of a specific backup. The
binary logs allow you to wind forward from that point by applying all the changes that were written to the binary
logs. Unless you have a read-only system, it is likely you will need to enable the binary logs.
For example, enter the following commands to guarantee that the database is in a consistent state for a backup:
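One standard sequence for reaching a consistent, mounted state (the exact commands vary by environment) is:
RMAN> SHUTDOWN IMMEDIATE;
RMAN> STARTUP MOUNT;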
For example, enter the following command at the RMAN prompt to back up the database to the default backup
device:
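A typical form of that command is:
RMAN> BACKUP DATABASE;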
The following variation of the command creates image copy backups of all datafiles in the database:
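A typical form is:
RMAN> BACKUP AS COPY DATABASE;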
FORMAT: Specifies a location and name for backup pieces and copies. You must use substitution variables to generate unique filenames. The most common substitution variable is %U, which generates a unique name. Others include %d for the DB_NAME, %t for the backup set time stamp, %s for the backup set number, and %p for the backup piece number. Example:
BACKUP FORMAT 'AL_%d/%t/%s/%p' ARCHIVELOG LIKE '%arc_dest%';
TAG: Specifies a user-defined string as a label for the backup. If you do not specify a tag, then RMAN assigns a default tag with the date and time. Tags are always stored in the RMAN repository in uppercase. Example:
BACKUP TAG 'weekly_full_db_bkup' DATABASE MAXSETSIZE 10M;
Data Replication
Replication is the process of copying and maintaining database objects in multiple databases that make up a
distributed database system. Replication can improve the performance and protect the availability of applications
because alternate data access options exist.
Oracle provides its own set of tools to replicate Oracle and integrate it with other databases. In this post, you will
explore the tools provided by Oracle as well as open-source tools that can be used for Oracle database replication
by implementing custom code.
The catalog is needed to keep track of the location of each fragment & replica
SQL Server has three types of replication:
Snapshot, Merge, and Transaction. Snapshot replication creates a snapshot of the data (point-in-time picture of
the data)
Data replication techniques
Synchronous vs. asynchronous
Synchronous: all replicas are up-to-date
Asynchronous: cheaper but delay in synchronization
Regarding the timing of data transfer, there are two types of data replication:
Asynchronous replication is when the data is sent first to the model server -- the server from which the replicas take the client's data. The model server then pings the client with a confirmation saying the data has been received, and from there it goes about copying the data to the replicas at an unspecified or monitored pace.
Synchronous replication is when data is copied from the client-server to the model server and then replicated to
all the replica servers before the client is notified that data has been replicated. This takes longer to verify than the
asynchronous method, but it presents the advantage of knowing that all data was copied before proceeding.
Asynchronous database replication offers flexibility and ease of use, as replications happen in the background.
Methods to Setup Oracle Database Replication
You can easily set up the Oracle Database Replication using the following methods:
Method 1: Oracle Database Replication Using Hevo Data
Method 2: Oracle Database Replication Using A Full Backup And Load Approach
Method 3: Oracle Database Replication Using a Trigger-Based Approach
Method 4: Oracle Database Replication Using Oracle Golden Gate CDC
Method 5: Oracle Database Replication Using Custom Script-Based on Binary Log
Oracle types of data replication and integration in OLAP
Three main architectures:
Consolidation database: All data is moved into a single database and managed from a central location. Oracle Real
Application Clusters (Oracle RAC), Grid computing, and Virtual Private Database (VPD) can help you consolidate
information into a single database that is highly available, scalable, and secure.
Federation: Data appears to be integrated into a single virtual database while remaining in its current distributed
locations. Distributed queries, distributed SQL, and Oracle Database Gateway can help you create a federated
database.
Sharing (mediation): Multiple copies of the same information are maintained in multiple databases and application data stores. Data replication and messaging can help you share information among multiple databases.
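As an illustration of replication at the schema level, a read-only copy of a remote table can be maintained with a materialized view over a database link; the names emp and remote_db and the one-hour refresh interval below are placeholder assumptions:
-- On the master site: create a materialized view log so fast refresh ships only changes
CREATE MATERIALIZED VIEW LOG ON emp;
-- On the replica site: refresh incrementally every hour over the database link remote_db
CREATE MATERIALIZED VIEW emp_mv
  REFRESH FAST
  START WITH SYSDATE NEXT SYSDATE + 1/24
  AS SELECT * FROM emp@remote_db;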
Types of Recovery:
Complete recovery
Recovering the database exactly till the point of failure.
Incomplete Recovery
Recovering the database not to the point of failure, but only up to the time of the last backup (or another earlier point in time).
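For example, a minimal RMAN sketch of an incomplete (point-in-time) recovery might look like the following; the timestamp is a placeholder, the database must be mounted first, and it is opened with RESETLOGS afterwards:
RUN
{
  SET UNTIL TIME "TO_DATE('09-09-2022 13:00:00','DD-MM-YYYY HH24:MI:SS')";
  RESTORE DATABASE;
  RECOVER DATABASE;
}
ALTER DATABASE OPEN RESETLOGS;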
Oracle Administration Commands
This example connects to a local database as user SYSBACKUP with the SYSBACKUP
privilege. SQL*Plus prompts for the SYSBACKUP user password.
connect sysbackup as sysbackup
Use the SQL*Plus CONNECT command to initially connect to the Oracle instance or to
reconnect to the Oracle instance.
Syntax
CONN[ECT] [logon] [AS {SYSOPER | SYSDBA | SYSBACKUP | SYSDG | SYSKM | SYSRAC}]
A sample query follows. (You can also query the V$VERSION view to see component-level information.) Other product release levels may increment independently of the database server.
You need to query the “v$pwfile_users” view to get information about the existing users in the password file. Execute the SQL
query below:
SQL> SELECT * FROM v$pwfile_users;
The query above will return four columns for each user in the password file. The column names are USERNAME, SYSDBA,
SYSOPER, and SYSASM.
The USERNAME column shows the username of the user in the password file.
The SYSDBA column shows whether the user has SYSDBA privileges or not.
The SYSOPER column shows whether the user has SYSOPER privileges or not.
The SYSASM column shows whether the user has SYSASM privileges or not.
To identify the release of Oracle Database that is currently installed and to see the
release levels of other database components you are using, query the data dictionary
view PRODUCT_COMPONENT_VERSION.
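For example (the column list below is the commonly available one; newer releases add further columns):
SELECT product, version, status
FROM   product_component_version;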
The following administrative user accounts are automatically created when Oracle
Database is installed:
• SYS
• SYSTEM
• SYSBACKUP
• SYSDG
• SYSKM
• SYSRAC
Grant the SYSDBA, SYSOPER, SYSBACKUP, SYSDG, or SYSKM administrative privilege to
the user. For example:
GRANT SYSDBA to mydba;
This statement adds the user to the password file, thereby enabling connection AS
SYSDBA, AS SYSOPER, AS SYSBACKUP, AS SYSDG, or AS SYSKM.
However, if user mydba has not been granted the SYSOPER privilege, then the following command fails:
CONNECT mydba AS SYSOPER
The V$PWFILE_USERS view contains information about users that have been granted
administrative privileges.
Your database is open. You don’t want to interrupt currently connected users but you want to temporarily disable further logons.
What would you do to achieve this and how would you revert the database back to its normal state after that?
Ans. I would put the database in “restricted mode”. While in restricted mode, only users with the “RESTRICTED SESSION” privilege can make a connection. I would run the command below to put the database in restricted mode:
SQL> ALTER SYSTEM ENABLE RESTRICTED SESSION;
After executing this command, regular users won’t be able to log on to the database. Once I want to revert the database to normal, I execute this command:
SQL> ALTER SYSTEM DISABLE RESTRICTED SESSION;
By contrast, if you “suspend” the database, Oracle halts I/O operations to the datafiles until the database is reverted back to normal mode.
It is also possible to create a database via an SQL script. In this script I would specify:
Name of the database
The password of the SYS user
The password of the SYSTEM user
At least three online redo log groups. I would also specify at least two members for each redo log group.
Character set and the national character set of the database.
Location and size of the SYSTEM and SYSAUX tablespaces. These tablespaces will be used for holding system data.
I would specify a normal tablespace to use as the default tablespace of the database.
I would specify a temporary tablespace to use as the default temporary tablespace of the database.
I would specify an undo tablespace.
To create the database, use a command such as the following:
CREATE DATABASE mynewdb
USER SYS IDENTIFIED BY sys_password
USER SYSTEM IDENTIFIED BY system_password
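A fuller sketch covering the items listed above follows; the file paths, sizes, and names are placeholders patterned on the Oracle documentation example:
CREATE DATABASE mynewdb
   USER SYS IDENTIFIED BY sys_password
   USER SYSTEM IDENTIFIED BY system_password
   LOGFILE GROUP 1 ('/u01/logs/redo01a.log','/u02/logs/redo01b.log') SIZE 100M,
           GROUP 2 ('/u01/logs/redo02a.log','/u02/logs/redo02b.log') SIZE 100M,
           GROUP 3 ('/u01/logs/redo03a.log','/u02/logs/redo03b.log') SIZE 100M
   CHARACTER SET AL32UTF8
   NATIONAL CHARACTER SET AL16UTF16
   EXTENT MANAGEMENT LOCAL
   DATAFILE '/u01/oradata/mynewdb/system01.dbf' SIZE 325M
   SYSAUX DATAFILE '/u01/oradata/mynewdb/sysaux01.dbf' SIZE 325M
   DEFAULT TABLESPACE users
      DATAFILE '/u01/oradata/mynewdb/users01.dbf' SIZE 200M AUTOEXTEND ON
   DEFAULT TEMPORARY TABLESPACE tempts1
      TEMPFILE '/u01/oradata/mynewdb/temp01.dbf' SIZE 20M
   UNDO TABLESPACE undotbs1
      DATAFILE '/u01/oradata/mynewdb/undotbs01.dbf' SIZE 200M;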
You can cancel FORCE LOGGING mode using the following SQL statement:
ALTER DATABASE NO FORCE LOGGING;
Use the ALTER DATABASE BACKUP CONTROLFILE statement to back up your control files.
You have two options:
• Back up the control file to a binary file (duplicate of existing control file) using the
following statement:
ALTER DATABASE BACKUP CONTROLFILE TO '/oracle/backup/control.bkp';
• Produce SQL statements that can later be used to re-create your control file:
ALTER DATABASE BACKUP CONTROLFILE TO TRACE;
For example, the following statement adds a new group of redo logs to the database:
ALTER DATABASE
ADD LOGFILE ('/oracle/dbs/log1c.rdo', '/oracle/dbs/log2c.rdo') SIZE 100M;
You can also specify the number that identifies the group using the GROUP clause:
ALTER DATABASE
ADD LOGFILE GROUP 10 ('/oracle/dbs/log1c.rdo', '/oracle/dbs/log2c.rdo') SIZE 100M;
When using the ALTER DATABASE statement, you can alternatively identify the target
group by specifying all of the other members of the group in the TO clause, as shown in
the following example:
ALTER DATABASE ADD LOGFILE MEMBER '/oracle/dbs/log2c.rdo'
TO ('/oracle/dbs/log2a.rdo', '/oracle/dbs/log2b.rdo');
The following statement drops the redo log /oracle/dbs/log3c.rdo:
ALTER DATABASE DROP LOGFILE MEMBER '/oracle/dbs/log3c.rdo';
A redo log file might become corrupted while the database is open, and ultimately stop
database activity because archiving cannot continue.
In this situation, to reinitialize the file without shutting down the database:
• Run the ALTER DATABASE CLEAR LOGFILE SQL statement.
The following statement clears the log files in redo log group number 3:
ALTER DATABASE CLEAR LOGFILE GROUP 3;
This statement overcomes two situations where dropping redo logs is not possible:
• If there are only two log groups
• The corrupt redo log file belongs to the current group
If the corrupt redo log file has not been archived, use the UNARCHIVED keyword in the
statement.
ALTER DATABASE CLEAR UNARCHIVED LOGFILE GROUP 3;
This statement clears the corrupted redo logs and avoids archiving them. The cleared
redo logs are available for use even though they were not archived.
For example, the following statement creates a locally managed tablespace named
lmtbsb and specifies AUTOALLOCATE:
CREATE TABLESPACE lmtbsb DATAFILE '/u02/oracle/data/lmtbsb01.dbf' SIZE 50M
EXTENT MANAGEMENT LOCAL AUTOALLOCATE;
AUTOALLOCATE causes the tablespace to be system managed with a minimum extent
size of 64K.
The alternative to AUTOALLOCATE is UNIFORM, which specifies that the tablespace is managed with extents of uniform size. You can specify that size in the SIZE clause of UNIFORM. If you omit SIZE, then the default size is 1M.
The following example creates a tablespace with uniform 128K extents. (In a database
with 2K blocks, each extent would be equivalent to 64 database blocks). Each 128K
extent is represented by a bit in the extent bitmap for this file.
CREATE TABLESPACE lmtbsb DATAFILE '/u02/oracle/data/lmtbsb01.dbf' SIZE 50M
EXTENT MANAGEMENT LOCAL UNIFORM SIZE 128K;
You cannot specify the DEFAULT storage clause, MINIMUM EXTENT, or TEMPORARY when
you explicitly specify EXTENT MANAGEMENT LOCAL. To create a temporary locally
managed tablespace, use the CREATE TEMPORARY TABLESPACE statement.
For example, the following statement creates tablespace lmtbsb with automatic segment space management:
CREATE TABLESPACE lmtbsb DATAFILE '/u02/oracle/data/lmtbsb01.dbf' SIZE 50M
EXTENT MANAGEMENT LOCAL
SEGMENT SPACE MANAGEMENT AUTO;
A bigfile tablespace is a tablespace with a single, but potentially very large (up to 4G
blocks) data file. Traditional smallfile tablespaces, in contrast, can contain multiple
data files, but the files cannot be as large.
The benefits of bigfile tablespaces are the following:
• A bigfile tablespace with 8K blocks can contain a 32 terabyte data file. A bigfile
tablespace with 32K blocks can contain a 128 terabyte data file. The maximum
number of data files in an Oracle Database is limited (usually to 64K files).
Therefore, bigfile tablespaces can significantly enhance the storage capacity of an
Oracle Database.
• Bigfile tablespaces can reduce the number of data files needed for a database. An
additional benefit is that the DB_FILES initialization parameter and MAXDATAFILES
parameter of the CREATE DATABASE and CREATE CONTROLFILE statements can be
adjusted to reduce the amount of SGA space required for data file information and
the size of the control file.
The following statement creates an encrypted tablespace with the default encryption
algorithm:
CREATE TABLESPACE securespace
DATAFILE '/u01/app/oracle/oradata/orcl/secure01.dbf' SIZE 100M
ENCRYPTION ENCRYPT;
The following statement creates the same tablespace with the AES256 algorithm:
CREATE TABLESPACE securespace
DATAFILE '/u01/app/oracle/oradata/orcl/secure01.dbf' SIZE 100M
ENCRYPTION USING 'AES256' ENCRYPT;
To determine the current default temporary tablespace for the database, run the
following query:
SELECT PROPERTY_NAME, PROPERTY_VALUE FROM DATABASE_PROPERTIES WHERE
PROPERTY_NAME='DEFAULT_TEMP_TABLESPACE';
The following statement creates a temporary tablespace in which each extent is 16M.
Each 16M extent (which is the equivalent of 8000 blocks when the standard block size
is 2K) is represented by a bit in the bitmap for the file.
CREATE TEMPORARY TABLESPACE lmtemp TEMPFILE '/u02/oracle/data/lmtemp01.dbf'
SIZE 20M REUSE
EXTENT MANAGEMENT LOCAL UNIFORM SIZE 16M;
For example, if neither group1 nor group2 exists, then the following statements create those groups, each of which has only the specified tablespace as a member:
CREATE TEMPORARY TABLESPACE lmtemp2 TEMPFILE '/u02/oracle/data/lmtemp201.dbf'
SIZE 50M
TABLESPACE GROUP group1;
ALTER TABLESPACE lmtemp TABLESPACE GROUP group2;
The following statement also adds a tablespace to an existing group, but in this case
because tablespace lmtemp2 already belongs to group1, it is in effect moved from
group1 to group2:
ALTER TABLESPACE lmtemp2 TABLESPACE GROUP group2;
Now group2 contains both lmtemp and lmtemp2, while group1 consists of only lmtemp3.
You can remove a tablespace from a group as shown in the following statement:
ALTER TABLESPACE lmtemp3 TABLESPACE GROUP '';
Tablespace lmtemp3 no longer belongs to any group. Further, since there are no
longer any members of group1, this results in the implicit deletion of group1.
For example:
ALTER DATABASE DEFAULT TEMPORARY TABLESPACE group2;
Any user who has not explicitly been assigned a temporary tablespace will now use
tablespaces lmtemp and lmtemp2.
The following statement creates tablespace lmtbsb, but specifies a block size that
differs from the standard database block size (as specified by the DB_BLOCK_SIZE
initialization parameter):
CREATE TABLESPACE lmtbsb DATAFILE '/u02/oracle/data/lmtbsb01.dbf' SIZE 50M
EXTENT MANAGEMENT LOCAL UNIFORM SIZE 128K
BLOCKSIZE 8K;
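Note that using a nonstandard block size requires the matching buffer cache to be configured first; for an 8K tablespace in a database with a different standard block size, a parameter like the following must be set (the 32M value is an arbitrary example):
ALTER SYSTEM SET DB_8K_CACHE_SIZE = 32M;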
For example, the following statement brings the users tablespace online:
ALTER TABLESPACE users ONLINE;
For example the following statement makes the flights tablespace read-only:
ALTER TABLESPACE flights READ ONLY;
For example, the following statement makes the flights tablespace writable:
ALTER TABLESPACE flights READ WRITE;
Two clauses of the ALTER TABLESPACE statement support data file transparency when
you are using bigfile tablespaces:
• RESIZE: The RESIZE clause lets you resize the single data file in a bigfile
tablespace to an absolute size, without referring to the data file. For example:
ALTER TABLESPACE bigtbs RESIZE 80G;
• AUTOEXTEND (used outside of the ADD DATAFILE clause):
With a bigfile tablespace, you can use the AUTOEXTEND clause outside of the ADD
DATAFILE clause. For example:
ALTER TABLESPACE bigtbs AUTOEXTEND ON NEXT 20G;
You can use ALTER TABLESPACE to add a temp file, take a temp file offline, or bring a
temp file online, as illustrated in the following examples:
ALTER TABLESPACE lmtemp
ADD TEMPFILE '/u02/oracle/data/lmtemp02.dbf' SIZE 18M REUSE;
ALTER TABLESPACE lmtemp TEMPFILE OFFLINE;
ALTER TABLESPACE lmtemp TEMPFILE ONLINE;
The following example shrinks the locally managed temporary tablespace lmtemp1 while
ensuring a minimum size of 20M.
ALTER TABLESPACE lmtemp1 SHRINK SPACE KEEP 20M;
The following example shrinks the temp file lmtemp02.dbf of the locally managed
temporary tablespace lmtemp2. Because the KEEP clause is omitted, the database
attempts to shrink the temp file to the minimum possible size.
ALTER TABLESPACE lmtemp2 SHRINK TEMPFILE '/u02/oracle/data/lmtemp02.dbf';
Drop the tablespace and then try to flash the database back to a previously captured SCN value:
SQL> SELECT object_name, original_name, type FROM recyclebin;
SQL> CREATE TABLE junk (c1 INT, c2 INT) ENABLE ROW MOVEMENT;
SQL> DROP TABLESPACE tbs INCLUDING CONTENTS AND DATAFILES;
Tablespace dropped.
SQL> select DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER from dual;
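To then flash the database back to that SCN, a minimal sketch follows; the SCN value is a placeholder, and the database must be mounted with flashback logging enabled:
SQL> SHUTDOWN IMMEDIATE;
SQL> STARTUP MOUNT;
SQL> FLASHBACK DATABASE TO SCN 1234567;
SQL> ALTER DATABASE OPEN RESETLOGS;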
Recovering from loss of REDO is completely dependent on the STATUS of the member/s that are corrupted or lost.
If the media failure is temporary, then correct the problem so that the database can reuse the group when required. If the media
failure is not temporary, then use the following procedure.
Begin incomplete media recovery, recovering up through the log before the damaged log.
Ensure that the current name of the lost redo log can be used for a newly created file. If not, then rename the members of the damaged online redo log group to a new location.
View: Description
V$TABLESPACE: Name and number of all tablespaces, from the control file.
V$ENCRYPTED_TABLESPACES: Name and encryption algorithm of all encrypted tablespaces.
DBA_TABLESPACES, USER_TABLESPACES: Descriptions of all (or user-accessible) tablespaces.
DBA_TABLESPACE_GROUPS: The tablespace groups and the tablespaces that belong to them.
DBA_SEGMENTS, USER_SEGMENTS: Information about segments within all (or user-accessible) tablespaces.
DBA_EXTENTS, USER_EXTENTS: Information about data extents within all (or user-accessible) tablespaces.
DBA_FREE_SPACE, USER_FREE_SPACE: Information about free extents within all (or user-accessible) tablespaces.
DBA_TEMP_FREE_SPACE: The total allocated and free space in each temporary tablespace.
V$DATAFILE: Information about all data files, including the tablespace number of the owning tablespace.
V$TEMPFILE: Information about all temp files, including the tablespace number of the owning tablespace.
DBA_DATA_FILES: Data files belonging to tablespaces.
DBA_TEMP_FILES: Temp files belonging to temporary tablespaces.
V$TEMP_EXTENT_MAP: Information for all extents in all locally managed temporary tablespaces.
V$TEMP_EXTENT_POOL: For locally managed temporary tablespaces, the state of temporary space cached and used by each instance.
V$TEMP_SPACE_HEADER: Space used/free for each temp file.
DBA_USERS: Default and temporary tablespaces for all users.
DBA_TS_QUOTAS: Tablespace quotas for all users.
V$SORT_SEGMENT: Information about every sort segment in a given instance. The view is only updated when the tablespace is of the TEMPORARY type.
The following example enables automatic extension for a data file added to the users
tablespace:
ALTER TABLESPACE users
ADD DATAFILE '/u02/oracle/rbdb1/users03.dbf' SIZE 10M
AUTOEXTEND ON
NEXT 512K
MAXSIZE 250M;
The value of NEXT is the minimum size of the increments added to the file when it
extends. The value of MAXSIZE is the maximum size to which the file can automatically
extend.
The next example disables the automatic extension for the data file.
ALTER DATABASE DATAFILE '/u02/oracle/rbdb1/users03.dbf'
AUTOEXTEND OFF;
To bring an individual data file online or take an individual data file offline, issue the
ALTER DATABASE statement and include the DATAFILE clause.
The following statement brings the specified data file online:
ALTER DATABASE DATAFILE '/u02/oracle/rbdb1/stuff01.dbf' ONLINE;
To take the same file offline, issue the following statement:
ALTER DATABASE DATAFILE '/u02/oracle/rbdb1/stuff01.dbf' OFFLINE;
The following statement takes the specified data file offline and marks it to be dropped:
ALTER DATABASE DATAFILE '/u02/oracle/rbdb1/users03.dbf' OFFLINE FOR DROP;
This example renames the data file user1.dbf to user01.dbf while keeping the data
file in the same location.
ALTER DATABASE MOVE DATAFILE '/u01/oracle/rbdb1/user1.dbf'
TO '/u01/oracle/rbdb1/user01.dbf';
This example moves the data file user1.dbf from the /u01/oracle/rbdb1/ directory to
the /u02/oracle/rbdb1/ directory. After the operation, the file is no longer in the /u01/
oracle/rbdb1/ directory.
ALTER DATABASE MOVE DATAFILE '/u01/oracle/rbdb1/user1.dbf'
TO '/u02/oracle/rbdb1/user1.dbf';
This example copies the data file user1.dbf from the /u01/oracle/rbdb1/ directory to
the /u02/oracle/rbdb1/ directory. After the operation, the old file is retained in the /u01/
oracle/rbdb1/ directory.
ALTER DATABASE MOVE DATAFILE '/u01/oracle/rbdb1/user1.dbf'
TO '/u02/oracle/rbdb1/user1.dbf' KEEP;
This example moves the data file user1.dbf from the /u01/oracle/rbdb1/ directory to
the /u02/oracle/rbdb1/ directory. If a file with the same name exists in the /u02/oracle/
rbdb1/ directory, then the statement overwrites the file.
ALTER DATABASE MOVE DATAFILE '/u01/oracle/rbdb1/user1.dbf'
TO '/u02/oracle/rbdb1/user1.dbf' REUSE;
For example, the following statement renames the data files /u02/oracle/rbdb1/
user1.dbf and /u02/oracle/rbdb1/user2.dbf to/u02/oracle/rbdb1/
users01.dbf and /u02/oracle/rbdb1/users02.dbf, respectively:
ALTER TABLESPACE users
RENAME DATAFILE '/u02/oracle/rbdb1/user1.dbf',
'/u02/oracle/rbdb1/user2.dbf'
TO '/u02/oracle/rbdb1/users01.dbf',
'/u02/oracle/rbdb1/users02.dbf';
Use ALTER DATABASE to rename the file pointers in the database control file.
For example, the following statement renames the data files/u02/oracle/rbdb1/
sort01.dbf and /u02/oracle/rbdb1/user3.dbf to /u02/oracle/rbdb1/
temp01.dbf and /u02/oracle/rbdb1/users03.dbf, respectively:
ALTER DATABASE
RENAME FILE '/u02/oracle/rbdb1/sort01.dbf',
'/u02/oracle/rbdb1/user3.dbf'
TO '/u02/oracle/rbdb1/temp01.dbf',
'/u02/oracle/rbdb1/users03.dbf';
The following example drops the data file identified by the alias example_df3.f in the
Oracle ASM disk group DGROUP1. The data file belongs to the example tablespace.
ALTER TABLESPACE example DROP DATAFILE '+DGROUP1/example_df3.f';
The next example drops the temp file lmtemp02.dbf, which belongs to the lmtemp
tablespace.
ALTER TABLESPACE lmtemp DROP TEMPFILE '/u02/oracle/data/lmtemp02.dbf';
This is equivalent to the following statement:
ALTER DATABASE TEMPFILE '/u02/oracle/data/lmtemp02.dbf' DROP
INCLUDING DATAFILES;
To disable the trigger reorder on the inventory table, enter the following statement:
ALTER TRIGGER reorder DISABLE;
You can disable all triggers associated with a table at the same time using the ALTER
TABLE statement with the DISABLE ALL TRIGGERS option. For example, to disable all
triggers defined for the inventory table, enter the following statement:
ALTER TABLE inventory
DISABLE ALL TRIGGERS;
Recovering Tablespaces
Use the RESTORE TABLESPACE and RECOVER TABLESPACE commands on individual tablespaces when the database is open. In this case, you must take the tablespace that needs recovery offline, restore and then recover the tablespace, and bring the recovered tablespace online.
If you cannot restore a datafile to a new location, then use the RMAN SET NEWNAME command within
a RUN command to specify the new filename. Afterward, use a SWITCH DATAFILE ALL command, which is
equivalent to using the SQL statement ALTER DATABASE RENAME FILE, to update the control file to reflect
the new names for all datafiles for which a SET NEWNAME has been issued in the RUN command.
Unlike in user-managed media recovery, you should not place an online tablespace in backup mode.
Unlike user-managed tools, RMAN does not require extra logging or backup mode because it knows the
format of data blocks.
To recover an individual tablespace when the database is open:
Prepare for recovery
Take the tablespace to be recovered offline:
The following example takes the users tablespace offline:
RMAN> SQL 'ALTER TABLESPACE users OFFLINE';
Restore and recover the tablespace.
The following RUN command, which you execute at the RMAN prompt, sets a new name for the datafile
in the users tablespace:
RUN
{
SET NEWNAME FOR DATAFILE '/disk1/oradata/prod/users01.dbf'
TO '/disk2/users01.dbf';
RESTORE TABLESPACE users;
SWITCH DATAFILE ALL; # update control file with new filenames
RECOVER TABLESPACE users;
}
Bring the tablespace online, as shown in the following example:
RMAN> SQL 'ALTER TABLESPACE users ONLINE';
Managing Space in Tablespaces
Tablespaces allocate space in extents. Tablespaces can use two different methods to keep track of their free and used space: locally managed tablespaces (extent management by bitmaps) and dictionary-managed tablespaces (extent management by the data dictionary). A free extent consists of a collection of contiguous free blocks. When allocating new extents to a tablespace segment, the database uses the free extent closest in size to the required extent. In some cases, when segments are dropped, their extents are deallocated and marked as free, but adjacent free extents are not immediately recombined into larger free extents. The result is fragmentation that makes allocation of larger extents more difficult. Oracle Database addresses fragmentation in several ways:
When attempting to allocate a new extent for a segment, the database first tries to find a free extent large enough for the new extent. Whenever the database cannot find a free extent that is large enough for the new extent, it coalesces adjacent free extents in the tablespace and looks again.
The SMON background process periodically coalesces neighboring free extents when the PCTINCREASE value for a tablespace is not zero. If you set PCTINCREASE=0, no coalescing of free extents occurs. If you are concerned about the overhead of ongoing coalesce operations by SMON, an alternative is to set PCTINCREASE=0 and periodically coalesce free space manually. To manually coalesce any adjacent free extents, use this command:
ALTER TABLESPACE tablespace_name COALESCE;
While Oracle permits queries against objects stored in the recycle bin, you cannot use DML or DDL statements on
objects in the recycle bin.
You can perform Flashback Query on tables in the recycle bin, but only by using the recycle bin name. You cannot
use the original name of the table.
A table and all of its dependent objects (indexes, LOB segments, nested tables, triggers, constraints and so on) go
into the recycle bin together, when you drop the table. Likewise, when you perform Flashback Drop, the objects are
generally all retrieved together.
Question: I accidentally deleted a production data file and we have no backups of the datafile, except an old one from a month ago. How can you recover Oracle when a data file has been deleted?
Answer: Recovering a lost datafile, especially when you do not have a backup, requires experts. DO NOT shutdown
the database, and call BC for emergency recovery support. You have little chance of a fast recovery without an
expert and BC can be in your system within minutes using vpn or ssh to get your recovery done right.
If all the copies of control files are lost or if a user is maintaining only one copy of the control file which gets lost,
then a user can
Manually create a control file.
Restore it from the backup control file using the below command.
ALTER DATABASE BACKUP CONTROLFILE TO TRACE;
UNIX Inode recovery: On UNIX/Linux (Solaris, AIX, HPUX), when a file is deleted, an Oracle background process still
has the file open. The deleted file is still there in the filesystem disk, and only the inode is removed. By replacing the
inode entry you can recover the lost data file.
Oracle Dictionary Recovery: If you dropped the datafile using the Oracle "drop datafile" command, the dictionary
can be restored to re-enable the data file. Re-adding a dropped datafile is tricky and un-supported but it can work,
requiring tools such as BBED tool and an in-depth understanding of Oracle internals.
As with Flashback Table, you can use Flashback Drop while the database is open. Also, you can perform the flashback
without undoing changes in objects not affected by the Flashback Drop operation. Flashback Table is more
convenient than forms of media recovery that require taking the database offline and restoring files from backup.
Important Recovery Data Structures
The following list describes important data structures involved in recovery processes. Be familiar with these data structures before starting any recovery procedure.
Control File: The control file contains records that describe and maintain information about the physical structure of a database. The control file is updated continuously during database use, and must be available for writing whenever the database is open. If the control file is not accessible, the database will not function properly.
System Change Number (SCN): The system change number is a clock value for the Oracle database that describes a committed version of the database. The SCN functions as a sequence generator for a database, and controls concurrency and redo record ordering. Think of the SCN as a timestamp that helps ensure transaction consistency.
Redo Records: A redo record is a group of change vectors describing a single, atomic change to the database. Redo records are constructed for all data block changes and saved on disk in the redo log. Redo records allow multiple database blocks to be changed so that either all changes occur or no changes occur, despite arbitrary failures.
Redo Logs: All changes to the Oracle database are recorded in redo logs, which consist of at least two redo log files that are separate from the datafiles. During database recovery from an instance or media failure, Oracle applies the appropriate changes in the database's redo log to the datafiles; this updates database data to the instant that the failure occurred.
Backup: A database backup consists of operating system backups of the physical files that constitute the Oracle database. To begin database recovery from a media failure, Oracle uses file backups to restore damaged datafiles or control files.
Checkpoint: A checkpoint is a data structure in the control file that defines a consistent point of the database across all threads of a redo log. Checkpoints are similar to SCNs, and also describe which threads exist at that SCN. Checkpoints are used by recovery to ensure that Oracle starts reading the log threads for the redo application at the correct point. For Parallel Server, each checkpoint has its own redo information.
Log: The log is a sequence of records. The log of each transaction is maintained in some stable storage so that if
any failure occurs, then it can be recovered from there.
The LSN of the most recent checkpoint is stored in a master record on disk.
Checkpoint
The checkpoint is like a bookmark. While the execution of the transaction, such checkpoints are marked, and the
transaction is executed then using the steps of the transaction, the log files will be created.
A checkpoint declares a point before which all the logs are stored permanently on the storage disk and the database is in a consistent state. In the case of a crash, work and time are saved because the system can restart from the checkpoint. Checkpointing is a quick way to limit the number of logs to scan on recovery.
When the user logs back into the site, the password they use is compared to the unique hash, to determine if it is correct.
Symmetric key encryption: a private key is applied to data, changing it so that it cannot be read without being decrypted. Data is encrypted when saved, and decrypted when retrieved, provided the user or application supplies the key. Symmetric encryption is considered inferior to asymmetric encryption because there is a need to transfer the key from sender to recipient.
Asymmetric encryption—incorporates two encryption keys: private and public. A public key can be retrieved by
anyone and is unique to one user. A private key is a concealed key that is only known by one user. In most cases, the
public key is the encryption key and the private key is the decryption key.
Symmetric and asymmetric encryption are cryptography terms that describe the relationship between ciphertext
and decryption keys.
Symmetric: In this case, data is encrypted when it is saved to the database and decrypted when it is called back.
Sharing data requires the receiver to have a copy of the decryption key.
Asymmetric: In this relatively new and more secure type of encryption, there is both a private and public key.
Encryption levels
Cell-Level: In this case, each individual cell of data has its own unique password.
Column-Level: This is the most commonly known encryption level and is typically included by database vendors (see the sketch after this list).
Tablespace-Level: This method provides a different level of control over encryption, allowing encryption across
tables, even if accessed by multiple columns. This method doesn’t have as much of an impact on performance but
can cause issues if improperly implemented.
File-Level: This approach works not by encrypting rows or columns, but by scrambling entire files. The files can be
moved to reports, spreadsheets, or emails and still retain their protection, meaning fewer transformations or
encryption mechanisms are required. This type of encryption holds the least potential for performance
degradation
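As a sketch of column-level encryption in Oracle (Transparent Data Encryption, which assumes a keystore/wallet is already configured), a sensitive column can be declared encrypted at table creation; the table and column names are placeholders:
CREATE TABLE customers (
  id      NUMBER PRIMARY KEY,
  name    VARCHAR2(100),
  card_no VARCHAR2(19) ENCRYPT USING 'AES256'  -- stored encrypted, decrypted transparently on read
);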
END
Modifying SCAN Configuration in Oracle 11g Release 2 RAC - Some notes on modifying SCAN configuration
after installation of Oracle RAC.
VMware ESX Server 3.5 Update 2 Installation - This article describes the bare-metal installation and basic
usage of VMware ESX Server 3.5 Update 2.
Application Server Installation Matrix
For installations on RHEL clones, like Oracle Linux and CentOS, use the instructions provided below for the
appropriate RHEL release.
OS: 9iAS | AS10g R1 | AS10g R2 | AS10g R3 | WebLogic 11g | WebLogic 12cR1 (12.1.1) | WebLogic 12cR1 (12.1.2) | WebLogic 12cR1 (12.1.3) | WebLogic 12cR2 (12.2.1)
Red Hat Enterprise Linux 2.1 (RHEL2): Yes Yes
Red Hat Enterprise Linux 3 (RHEL3): Yes Yes Yes
Red Hat Enterprise Linux 4 (RHEL4): Yes
Oracle Linux 5 (OL5): Yes Yes Yes Yes
Oracle Linux 6 (OL6): Yes Yes Yes Yes Yes
Oracle Linux 7 (OL7): Yes
Fedora Core 1 (FC1), Fedora 36 (F36): Yes
Installation
Processor: 550 MHz minimum. On Windows Vista, the minimum requirement is 800 MHz.
In particular, here I will discuss the following:
1. CPU, RAM, Heap Size, and Hard Disk Space Requirements for OMS
2. CPU, RAM, and Hard Disk Space Requirements for Standalone Management Agent
3. CPU, RAM, and Hard Disk Space Requirements for Management Repository
CPU, RAM, Heap Size, and Hard Disk Space Requirements for OMS
Host: Small / Medium / Large
CPU Cores/Host: 2 / 4 / 8
RAM: 4 GB / 6 GB / 8 GB
RAM with ADP, JVMD: 6 GB / 10 GB / 14 GB
Oracle WebLogic Server JVM Heap Size: 512 MB / 1 GB / 2 GB
Hard Disk Space: 7 GB / 7 GB / 7 GB
Hard Disk Space with ADP, JVMD: 10 GB / 12 GB / 14 GB
Note: While installing an additional OMS (by cloning an existing one), if you have installed BI publisher on the source
host, then ensure that you have 7 GB of additional hard disk space on the destination host, so a total of 14 GB.
CPU, RAM, and Hard Disk Space Requirements for Standalone Management Agent
For a standalone Oracle Management Agent, ensure that you have 2 CPU cores per host, 512 MB of RAM, and 1 GB
of hard disk space.
CPU, RAM, and Hard Disk Space Requirements for Management Repository
This table lists the RAM and hard disk space requirements for the Management Repository.
Host: Small / Medium / Large
CPU Cores/Host: 2 / 4 / 8
RAM: 4 GB / 6 GB / 8 GB
Hard Disk Space: 50 GB / 200 GB / 400 GB
Virtual memory (swap): if physical memory is between 2 GB and 16 GB, set virtual memory to 1 times the size of the RAM; if physical memory is more than 16 GB, set virtual memory to 16 GB.
* Refers to the contents of the admin, cfgtoollogs, flash_recovery_area, and oradata directories in the ORACLE_BASE directory.
The minimum size for the index is 8,216,576 bytes (8 MB). To calculate the size of a database index, including all index files, perform the following calculation:
number of existing blocks * 112 bytes = the size of the database index
For example, 100,000 blocks * 112 bytes = 11,200,000 bytes, roughly 11 MB.
Maximum possible file size with 16 KB sized blocks: 64 gigabytes (GB) (4,194,304 * 16,384 = 64 GB).
Type: Size
2 KB: 20,000
4 KB: 40,000
8 KB: 65,536
16 KB: 65,536
This section describes installing the Oracle Database and creating an Oracle Home User account. Here OUI (Oracle Universal Installer) is used to install the Oracle software.
1 Expand the database folder that you extracted in the previous section. Double-click setup.
2 Click Yes in the User Account Control window to continue with the installation.
3 The Configure Security Updates window appears. Enter your email address and My Oracle Support password to receive security issue notifications via email. If you do not wish to receive notifications via email, deselect the option.
6. The Typical Install Configuration window appears. Click on a text field and then the balloon icon to learn more about the field. Note that by default, the installer creates a container database along with a pluggable database called "pdborcl". The pluggable database contains the sample HR schema.
7. Change the Global database name to orcl. Enter the “Administrative password” as Oracle_1. This
password will be used later to log into administrator accounts such as SYS and SYSTEM. Click Next.
8. The prerequisite checks are performed and a Summary window appears. Review the settings and click
Install.
9. Note: Depending on your firewall settings, you may need to grant permissions to allow java to access the
network.
10. The progress window appears.
11. The Database Configuration Assistant starts and creates your database.
12. After the Database Configuration Assistant creates the database, you can navigate to
https://localhost:5500/em as a SYS user to manage the database using Enterprise Manager Database
Express. You can click “Password Management…” to unlock accounts. Click OK to continue.
13. The Finish window appears. Click Close to exit the Oracle Universal Installer.
13.5 To verify the installation, navigate to C:\Windows\system32 using Windows Explorer. Double-click services. The Services window appears, displaying a list of services.
14. Note: In the advanced installation steps you can also control how memory is allocated.
To create a CDB you can use an ETL tool such as Oracle Data Integrator; such supporting tools can also be used to perform operations on a CDB database.
There is no need to spend time on the GUI at the very beginning; the developer can start directly with implementing the business logic. This is the reason why Oracle APEX is well suited to creating rapid GUI prototypes without logic, so prospective customers can get an idea of how their future application will look.
And the cool thing is, it’s going to get even better with time. Oracle’s roadmap for the technology is extensive and
mentions things such as:
As you can see, there are a lot of things worth waiting for. Oracle APEX is going to get a lot more powerful, and that’s even more of a reason to get to know it and start using it.
Distinguishing Characteristics
Before APEX there was WebDB, which was based on the same techniques. WebDB became part of Oracle Portal and
disappeared in silence. The difference between APEX and WebDB is that WebDB generates packages that generate
the HTML pages, while APEX generates the HTML pages at runtime from the repository. Despite this approach APEX
is amazingly fast.
APEX became available to the public in 2003; releases ran from HTML DB 1.5 through HTML DB 2.1 and then APEX 2.2, and it was part of version 10g of the database. At that time it was called HTML DB, and the first version was 1.5. Before HTML DB, it was called Oracle Flows, Oracle Platform, and Project Marvel.
Note: Starting with Oracle Database 12c Release 2 (12.2), Oracle Application Express is included in the Oracle
Home on disk and is no longer installed by default in the database.
Oracle Application Express is included with the following Oracle Database releases:
Oracle Database 19c – Oracle Application Express Release 18.1.
Oracle Database 18c – Oracle Application Express Release 5.1.
Oracle Database 12c Release 2 (12.2)- Oracle Application Express Release 5.0.
Oracle Database 12c Release 1 (12.1) – Oracle Application Express Release 4.2.
Oracle Database 11g Release 2 (11.2) – Oracle Application Express Release 4.2.
Oracle Database 11g Release 1 (11.1) – Oracle Application Express Release 3.0.
The Oracle Database releases less frequently than Oracle Application Express.
Within each application, you can also specify a Compatibility Mode in the Application Definition.
The Compatibility Mode attribute controls the backward compatibility of the Application Express runtime engine when executing an application. Compatibility Mode options include Pre 4.1, 4.1, 4.2, 5.0, 5.1/18.1, 18.2, 19.1, 19.2, and later versions.
Version 22
This release of Oracle APEX introduces Approvals and the Unified Task List, Simplified Create Page wizards, Readable Application Export formats, and the Data Generator. APEX 22.1 also brings several enhancements to existing components, such as tokenized row search and an easy way to sort regions.
Version 21
This release of Oracle APEX introduces Smart Filters, Progressive Web Apps, and REST Service Catalogs. APEX 21.2
also brings greater UI flexibility with Universal Theme. APEX shared a lot of the characteristics of cloud computing,
even before cloud computing became popular.
Elasticity
Note: All web services are APIs, but not all APIs are web services. ORDS is used to build REST services for third-party APIs.
Oracle Database Exadata Express Cloud Service supports Simple Oracle Document Access (SODA) using
Representational State Transfer (REST).
SODA for REST can be used from any modern programming language capable of making HTTP requests. For further
details including a complete list of SODA for REST HTTP operations available for the SODA for REST API, see REST
Data Services SODA for REST Developer’s Guide.
Before a user can access SODA for REST, users must be assigned the predefined roles of Database Administrator or
Database Developer and your service must be enabled for SODA for REST. It is also possible to create custom roles
for accessing SODA for REST. SODA allows the Oracle Database to be used as a powerful NoSQL store, supporting
key-based access to all documents, and query-based access to JSON documents, all without needing to use SQL.
Because SODA is built on top of the Oracle database, you get proven Oracle enterprise-grade reliability and many
features typically lacking in NoSQL stores, such as transactions. If desired, you can still access SODA documents
directly with SQL.
The primary abstraction provided by SODA is a collection of documents. SODA is particularly powerful when it comes
to JSON documents, though all other types of documents are supported. JSON documents in SODA can be queried
using intuitive template-like queries, without needing to write SQL. These queries are also expressed in JSON, and
called QBEs (query-by-example).
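For instance, a QBE that matches JSON documents whose name field equals a given value looks like the following sketch (the field name and value are placeholders):
{ "name" : { "$eq" : "Alice" } }
Posted against a collection's query endpoint, this returns only the matching documents, without any SQL.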
The following must be performed or known before attempting to use SODA for REST:
You must provide the appropriate roles to users using SODA for REST.
REST, RESTful, and SOAP compared:
REST (Representational State Transfer) is an architectural style used for communication between applications and servers, working with URL-based requests and responses. REST uses HTTP requests like GET, PUT, POST, and DELETE to manage CRUD operations, and REST over HTTP is almost always the basis for modern microservices development and communications.
RESTful applications are completely based on REST principles. The format of the data is based on HTTP, text, and JSON, and RESTful APIs use only HTTP requests to GET, PUT, POST, and DELETE data.
SOAP (Simple Object Access Protocol) is a protocol designed to exchange data: a rigid set of messaging patterns that supports SMTP, HTTP, TCP, and the Internet. SOAP is used to create web APIs, usually with Extensible Markup Language (XML), and SOAP messages are XML documents with three building blocks: an Envelope, a Header, and a Body.
Oracle APEX is a part of the Oracle RAD architecture and technology stack. What does it mean?
“R” stands for REST, or rather ORDS – Oracle REST Data Services. ORDS is responsible for asking the database for the
page and rendering it back to the client;
“A” stands for APEX, Oracle Application Express, the topic of this article;
“D” stands for Database, which is the place an APEX application resides in.
Other methodologies that work well with Oracle Application Express include:
Spiral – This approach is actually a series of short waterfall cycles. Each waterfall cycle yields new requirements and
enables the development team to create a robust series of prototypes.
Rapid application development (RAD) life cycle – This approach has a heavy emphasis on creating a prototype that
closely resembles the final product. The prototype is an essential part of the requirements phase. One disadvantage
of this model is that the emphasis on creating the prototype can cause scope creep; developers can lose sight of
their initial goals in the attempt to create the perfect application.
Oracle REST Data Services is a Java EE-based alternative for Oracle HTTP Server and mod_plsql.
The Java EE implementation offers increased functionality including a command-line based configuration, enhanced
security, file caching, and RESTful web services.
Oracle REST Data Services also provides increased flexibility by supporting deployments using Oracle WebLogic
Server, GlassFish Server, Apache Tomcat, and a standalone mode. Oracle now supports Oracle REST Data Services
(ORDS) running in standalone mode using the built-in Jetty web server, so you no longer need to worry about
installing WebLogic, Glassfish or Tomcat unless you have a compelling reason to do so. Removing this extra layer
means one less layer to learn and one less layer to patch.
ORDS can run as a standalone app with a built in webserver. This is perfect for local development purposes but in
the real world you will want a decent java application server (Tomcat, Glassfish or Weblogic) with a webserver in
front of it (Apache or Nginx).
The WebLogic Server is an application server. It is a platform used to develop and deploy multitier distributed
enterprise applications.
The Oracle Application Express architecture requires some form of the webserver to proxy requests between a web
browser and the Oracle Application Express engine. Oracle REST Data Services satisfies this need but its use goes
beyond that of Oracle Application Express configurations. It centralizes application services such as Web server
functionality and business components and is used to access backend enterprise systems.
Oracle REST Data Services simplifies the deployment process because there is no Oracle home required, as
connectivity is provided using an embedded JDBC driver.
Oracle REST Data Services is a Java Enterprise Edition (Java EE) based data service that provides enhanced security,
file caching features, and RESTful Web Services. Oracle REST Data Services also increases flexibility through support
for deployment in standalone mode, as well as using servers like Oracle WebLogic Server and Apache Tomcat.
ORDS, a Java-based application, enables developers with SQL and database skills to develop REST APIs for Oracle
Database. You can deploy ORDS on web and application servers, including WebLogic®, Tomcat®, and Glassfish®, as
shown in the following image:
ORDS is our middle tier JAVA application that allows you to access your Oracle Database resources via REST APIs.
Use standard HTTP(s) calls (GET|POST|PUT|DELETE) via URIs that ORDS makes available
(/ords/database123/user3/module5/something/)
ORDS will route your request to the appropriate database, and call the appropriate query or PL/SQL anonymous
block), and return the output and HTTP codes.
For most calls, that’s going to be the results of a SQL statement – paginated and formatted as JSON.
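As a sketch of how a schema and a table are exposed through ORDS from SQL (the schema name HR and table EMPLOYEES are placeholder assumptions):
BEGIN
  -- Make the HR schema addressable under the /ords/hr/ path
  ORDS.ENABLE_SCHEMA(
    p_enabled             => TRUE,
    p_schema              => 'HR',
    p_url_mapping_type    => 'BASE_PATH',
    p_url_mapping_pattern => 'hr',
    p_auto_rest_auth      => FALSE);
  -- AutoREST-enable one table: GET/POST/PUT/DELETE map to CRUD on it
  ORDS.ENABLE_OBJECT(
    p_enabled      => TRUE,
    p_schema       => 'HR',
    p_object       => 'EMPLOYEES',
    p_object_type  => 'TABLE',
    p_object_alias => 'employees');
  COMMIT;
END;
/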
Oracle Cloud
You can run APEX in an Autonomous Database (ADB) – an elastic database that you can scale up. It’s self-driving,
self-healing, and can repair and upgrade itself. It comes in two flavours:
1. Autonomous Transaction Processing (ATP) – basically transaction processing, it’s where APEX sees most use;
2. Autonomous Data Warehouse (ADW) – for more query-driven APEX applications. Reporting data is also a
common use of Oracle APEX.
You can also use the new Database Cloud Service (DCS) – an APEX-only solution. For a fee, you can have a
commercial application running on a database cloud service.
You can also run Oracle APEX on-premise or in a Private Cloud – anywhere where a database runs. It can be a physical,
dedicated server, a virtualized machine, a docker image (you can run it on your laptop, fire it up on a train or a plane
– it’s very popular among Oracle Application Express developers). You can also use it on Exadata – a super-powerful
APEX physical server on cloud services.
View: Description (related view)
APEX_APPLICATION_COMPUTATIONS: Identifies application computations, which can run for every page or on login (APEX_APPLICATIONS).
#NAVIGATION_BAR# substitution string
APEX_APPLICATION_PAGE_IR: Identifies attributes of an interactive report (APEX_APPLICATION_PAGE_REGIONS).
APEX_APPL_DEVELOPER_COMMENTS: Developer comments of an application (APEX_APPLICATIONS).
APEX_APPL_LOAD_TABLE_RULES: Identifies a collection of transformation rules that are to be used on the load tables (APEX_APPLICATIONS).
For APEX (HTML DB) versions 1.5 – 3.1, the schema name is: FLOWS_XXXXXX.
For example: FLOWS_010500 for HTML DB version 1.5.x
For APEX (HTML DB) versions 3.2.x and above, the schema name is: APEX_XXXXXX.
For example: APEX_210100 for APEX version 21.1.
If the query returns 0, it is a runtime-only installation, and apxrtins.sql should be used for the upgrade. If the query returns 1, this is a development install and apexins.sql should be used.
The full download is needed if the first two digits of the APEX version are different. For example, the full Application Express download is needed to go from 20.0 to 21.1. The patch is needed if only the third digit of the version changes, so when upgrading from 21.1.0 to 21.1.2 you apply a patch.
The fastest way of accessing data is by using ROWID. Accessing data is unrelated to ROWNUM.
Patching
Patching involves copying a small collection of files over an existing installation. A patch is normally associated with
a particular version of an Oracle product and involves updating from one minor version of the product to a newer
minor version of the same product (for example, from version 11.1.1.2.0 to version 11.1.1.3.0).
A patch set is a single patch that contains a collection of patches designed to be applied together.
Oracle Applications includes the Oracle 9.2.0.6 (9i) Database. However, Oracle Life Sciences Data Hub (Oracle LSH) 2.1.4 requires the Oracle 11gR2 Database Server, which requires Oracle Applications ATG RUP7, which is not supported on Oracle Database 9.2.0.6 but is supported on 9.2.0.8.
To upgrade the 9.2.0.6 database you installed during the Oracle Applications installation, apply patch set 9.2.0.8 (4547809) for your operating system.
Downloading Patches From My Oracle Support
This section describes how to download patches from My Oracle Support. For additional information, enter
document ID 1302053.1 in the Knowledge Base search field on My Oracle Support.
Opatch is typically used to patch the software on your system by copying a small collection of files over your
existing installation.
In Oracle Fusion Middleware, Opatch is used to patch an existing Oracle Fusion Middleware 11g installation.
When to install and when to patch only (Oracle Apex, and Oracle Database)
In previous versions an upgrade was required when a release affected the first two numbers of the version (4.2 to
5.0 or 5.1 to 18.1), but if the first two numbers of the version were not affected (5.1.3 to 5.1.4) you had to
download and apply a patch, rather than do the full installation. This is no longer the case.
Step One
Create a new tablespace to act as the default tablespace for APEX.
-- For non-OMF.
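A minimal sketch for a non-OMF database follows; the tablespace name, file path, and sizes are placeholders:
CREATE TABLESPACE apex
  DATAFILE '/u01/app/oracle/oradata/ORCL/apex01.dbf'
  SIZE 100M AUTOEXTEND ON NEXT 1M;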
Step two
Connect to SQL*Plus as the SYS user and run the “apexins.sql” script, specifying the relevant tablespace names and image URL.
Logon to database as SYSDBA and switch to pluggable database orclpdb1 and run installation script. You can install
apex on dedicated tablespaces if required.
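For example, using the tablespace created in step one for both the APEX and files tablespaces and the default image prefix (the script arguments are: APEX tablespace, files tablespace, temporary tablespace, image URL):
SQL> @apexins.sql APEX APEX TEMP /i/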
Step three
If you want to add the user silently, you could run the following code, specifying the required password and email.
BEGIN
  APEX_UTIL.set_security_group_id( 10 );
  APEX_UTIL.create_user(
    p_user_name       => 'ADMIN',           -- placeholder values
    p_email_address   => 'me@example.com',
    p_web_password    => 'PutPasswordHere',
    p_developer_privs => 'ADMIN');
  APEX_UTIL.set_security_group_id( null );
  COMMIT;
END;
/
Note:
Oracle Application Express is installed in the APEX_210200 schema.
The structure of the link to the Application Express administration services is as follows:
http://host:port/ords/apex_admin
The structure of the link to the Application Express development interface is as follows:
http://host:port/ords
Or
When Oracle Application Express installs, it creates three new database accounts all with status LOCKED in
database:
APEX_210200– The account that owns the Oracle Application Express schema and metadata.
FLOWS_FILES – The account that owns the Oracle Application Express uploaded files.
APEX_PUBLIC_USER – The minimally privileged account is used for Oracle Application Express configuration with
ORDS.
Create and change password for ADMIN account. When prompted enter a password for the ADMIN account.
SQL> @apxchpwd.sql
This script can be used to change the password of an Application Express instance administrator. If the user does
not yet exist, a user record will be created.
Step four
Create the APEX_LISTENER and APEX_REST_PUBLIC_USER users by running the “apex_rest_config.sql” script.
SQL> @apex_rest_config.sql
Configure RESTful Services. When prompted enter a password for the APEX_LISTENER, APEX_REST_PUBLIC_USER
account.
sqlplus / as sysdba
@apex_rest_config.sql
Step five
Step Six
Now you need to decide which gateway to use to access APEX. The Oracle recommendation is ORDS.
Note: Oracle REST Data Services (ORDS), formerly known as the APEX Listener, allows APEX applications to be
deployed without the use of Oracle HTTP Server (OHS) and mod_plsql or the Embedded PL/SQL Gateway. ORDS
version 3.0 onward also includes JSON API support to work in conjunction with the JSON support in the database.
ORDS can be deployed on WebLogic, Tomcat or run in standalone mode. This article describes the installation of ORDS on Tomcat 8 and 9. The JDK must be installed before installing Apache Tomcat. Set the port to 8181, as the default of 8080 is used by APEX.
For Lone-PDB installations (a CDB with one PDB), or for CDBs with small numbers of PDBs, ORDS can be installed directly into the PDB (connect with: sqlplus / as sysdba). If you are using many PDBs per CDB, you may prefer to install ORDS into the CDB to allow all PDBs to share the same connection pool.
This Oracle REST Data Services instance has not yet been configured.
Enter 1 to specify the database service name, or 2 to specify the database SID [1]:
Confirm password:
Requires to login with administrator privileges to verify Oracle REST Data Services schema.
Confirm password:
Retrieving information.
If using Oracle Application Express or migrating from mod_plsql then you must enter 1 [1]:
Confirm password:
Confirm password:
Confirm password:
[5] None
Choose [1]:1
Completed installation for Oracle REST Data Services version 21.4.2.r0621806. Elapsed time: 00:00:12.611
Choose [1]:1
As a result ORDS will be running in standalone mode and configured so you can try to logon to apex.
Run the “apex_epg_config.sql” script, passing in the base directory of the installation software as a parameter.
Change the password and unlock the APEX_PUBLIC_USER account. This will be used for any Database Access
Descriptors (DADs).
Step Seven
Step Eight
Starting/Stopping ORDS Under Tomcat
ORDS is started or stopped by starting or stopping the Tomcat instance it is deployed to.
Assuming you have the CATALINA_HOME environment variable set correctly, the following commands should be
used.
$ $CATALINA_HOME/bin/startup.sh
$ $CATALINA_HOME/bin/shutdown.sh
ORDS Validate
You can validate/fix the current ORDS installation using the validate option.
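For example, with the WAR-based ORDS distribution this is run from the shell; the location of ords.war is an assumption:
$ $JAVA_HOME/bin/java -jar ords.war validate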
Confirm password:
Retrieving information.
Step Nine
1. Workspace administrators are users who perform administrator tasks specific to a workspace.
2. Instance administrators are superusers that manage an entire hosted Oracle Application Express instance
which may contain multiple workspaces.
Oracle APEX is a full spectrum technology. It can be used by so-called citizen developers, who can use the wizard to
create some simple applications to get going. However, these people can team up with a technical developer to
create a more complex application together, and in such a case it also goes full spectrum – code by code, line by
line, back-end development, front-end development, database development. If you get a perfect mix of front-end
and back-end developers, then you can create a truly great APEX application.
Our methodology is composed of different elements related to all aspects of an APEX development project.
This methodology is referred to as a waterfall because the output from one stage is the input for the next stage. A
primary problem with this approach is that it is assumed that all requirements can be established in advance.
Unfortunately, requirements often change and evolve during the development process.
The Oracle Application Express development environment enables developers to take a more iterative approach to
development.
Click Next.
Click Upload Another File if you have more XML files, otherwise click Create.
Now let’s review each component in the upload forms to determine proper regions to use in the APEX Application.
Also, let’s review the Triggers and Program Units in order to identify the business logic in your Forms Application
and determine if it will need to be replicated or not.
Oracle Forms applications still play a vital role, but many are looking for ways to modernize their
applications. Modernize your Oracle Forms applications by migrating them to Oracle Application Express (Oracle
APEX) in the cloud.
Your stored procedures and PL/SQL packages work natively in Oracle APEX, making it the clear platform of choice
for easily transitioning Oracle Forms applications to modern web applications with more capabilities, less complexity,
and lower development and maintenance costs.
Oracle APEX is a low-code development platform that enables you to build scalable, secure enterprise apps, with
world-class features, that you can deploy anywhere. You can quickly develop and deploy compelling apps that solve
real problems and provide immediate value. You won’t need to be an expert in a vast array of technologies to deliver
sophisticated solutions.
Architecture
This architecture shows the process of migrating on-premises Oracle Forms applications to Oracle Application Express (APEX) applications with the help of an XML converter, and then moving them to the cloud. The following diagram illustrates this reference architecture.
VCN
When you create a VCN, determine how many IP addresses your cloud resources in each subnet require. Using
Classless Inter-Domain Routing (CIDR) notation, specify a subnet mask and a network address range large enough
for the required IP addresses. Use CIDR blocks that are within the standard private IP address space.
After you create a VCN, you can change, add, and remove its CIDR blocks.
When you design the subnets, consider functionality and security requirements. All compute instances within the
same tier or role should go into the same subnet.
Security lists
Use security lists to define ingress and egress rules that apply to the entire subnet.
Cloud Guard
Clone and customize the default recipes provided by Oracle to create custom detector and responder recipes. These
recipes enable you to specify what type of security violations generate a warning and what actions are allowed to
be performed on them. For example, you might want to detect Object Storage buckets that have visibility set to
public.
Apply Cloud Guard at the tenancy level to cover the broadest scope and to reduce the administrative burden of
maintaining multiple configurations.
You can also use the Managed List feature to apply certain configurations to detectors.
Security Zones
For resources that require maximum security, Oracle recommends that you use security zones. A security zone is a
compartment associated with an Oracle-defined recipe of security policies that are based on best practices. For
example, the resources in a security zone must not be accessible from the public internet and they must be
encrypted using customer-managed keys. When you create and update resources in a security zone, Oracle Cloud
Infrastructure validates the operations against the policies in the security-zone recipe, and denies operations that
violate any of the policies.
Schema
Retain the database structure that Oracle Forms was built on, as is, and use that as the schema for Oracle APEX.
Business Logic
Most of the business logic for Oracle Forms is in triggers, program units, and events. Before starting the migration
of Oracle Forms to Oracle APEX, migrate the business logic to stored procedures, functions, and packages in the
database.
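As a hedged illustration of this step (the table, columns, and business rule are hypothetical), logic that lived in a Forms trigger such as WHEN-VALIDATE-ITEM can be moved into a database procedure that both the old Forms application and the new APEX application can call:
CREATE OR REPLACE PROCEDURE validate_salary (
p_job IN VARCHAR2,
p_sal IN NUMBER
) AS
BEGIN
-- Hypothetical business rule: clerks may not earn more than 5000.
IF p_job = 'CLERK' AND p_sal > 5000 THEN
RAISE_APPLICATION_ERROR(-20001, 'Salary too high for a clerk.');
END IF;
END validate_salary;
/
An APEX validation or process can then simply invoke validate_salary, keeping the rule in one place in the database.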
Considerations
Consider the following key items when migrating Oracle Forms Object navigator components to Oracle Application
Express (APEX):
Data Blocks
A data block from Oracle Forms relates to Oracle APEX with each page broken up into several regions and
components. Review the Oracle APEX Component Templates available in the Universal Theme.
Triggers
In Oracle Forms, triggers control almost everything. In Oracle APEX, control is based on flexible conditions that are
activated when a page is submitted and are managed by validations, computations, dynamic actions, and processes.
Alerts
Most messages in Oracle APEX are generated when you submit a page.
Attached Libraries
Oracle APEX takes care of the JavaScript and CSS libraries that support the Universal Theme, which supports all of
the components that you need for flexible, dynamic applications. You can include your own JavaScript and CSS in
several ways, mostly through page attributes. You can add inline code, or reference files that either exist in the database as a BLOB (#APP_IMAGES#) or sit on the middle tier, typically served by Oracle REST Data Services (ORDS). When a reference file is on an Oracle WebLogic Server, the file location is prefixed with #IMAGE_PREFIX#.
Editors
Oracle APEX has a text area and a rich text editor, which is equivalent to Editors in Oracle Forms.
List of Values (LOV)
In APEX, the LOV is coupled with the item type. A Radio Group works well with a small handful of values; use a Select List for middle-sized sets, and a Popup LOV for large data sets. You can reuse the queries from Record Groups in Oracle Forms as the LOV queries in Oracle APEX. LOVs in Oracle APEX can be driven dynamically by a SQL query or be statically defined. A static definition allows a variety of conditions to be applied to each entry. These LOVs can then be associated with items such as Radio Groups and Select Lists, or with a column in a report, to translate a code into a label.
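A minimal sketch of a dynamic LOV query, assuming the classic emp table (ename, empno); APEX expects the display value first and the return value second:
SELECT ename AS display_value,
       empno AS return_value
FROM emp
ORDER BY 1;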
Parameters
Page Items in Oracle APEX are populated between pages to pass information to the next page, such as the selected
record in a report. Larger forms with a number of items are generally submitted as a whole, where the page process
handles the data, and branches to the next page. These values can be protected from URL tampering by session state
security, at item, page, and application levels, often by default.
Popup Menus
Popup Menus are not available out of the box in Oracle APEX, but you can build them by using Lists and associating
a button with the menu.
Program Units
Migrate the stored procedures and functions defined in program units in Oracle Forms into database stored procedures and functions, and use them in Oracle APEX processes, validations, and computations.
Property Classes
Property Classes in Oracle Forms allow the developer to utilize common attributes among each instance of a
component. In APEX you can define User Interface Defaults in the data dictionary, so that each time items or reports
are created for specific tables or columns, the same features are applied by default. As for the style of the application,
you can apply classes to components that carry a particular look and feel. The Universal Theme has a default skin
that you can reconfigure declaratively.
Record Groups
Use queries in Record Groups to define the Dynamic LOV in Oracle APEX.
Reports
Interactive Reports in Oracle APEX come with a number of runtime manipulation options that give users the power to customize and manipulate the reports. Classic Reports are simple reports that don't provide runtime manipulation options, but are based on SQL.
Menus
Oracle Forms have specific menu files, controlled by database roles. Updating the .mmx file required that there be
no active users. The menu in Oracle APEX can either be across the top, or down the left side. These menus can be
statically defined, or dynamically driven. Static navigation entries can be controlled by authorization schemes, or
custom conditions. Dynamic menus can have security tables integrated within the SQL.
Properties
The Page Designer introduced in Oracle APEX is similar to Oracle Forms, particularly in the ability to edit multiple components at once; when you do, only their intersecting attributes are shown.
The Application Express engine uses two logs to track user activity. At any given time, one log is designated as
current. For each rendered page view, the Application Express engine inserts one row into the log file. A log switch
occurs at the interval listed on the Page View Activity Logs page. At that point, the Application Express engine
removes all entries in the noncurrent log and designates it as current.
Delete SQL Workshop log entries. The SQL Workshop maintains a history of SQL statements run in the SQL
Commands.
Workspace administrators are users who perform administrator tasks specific to a workspace and have access to various types of activity reports.
Instance administrators are superusers who manage an entire hosted instance using the Application Express Administration Services application.
Use a different workspace and same schema. Export and then import the application into a different workspace.
This is an effective way to prevent a production application from being modified by developers.
Use a different workspace and different schema. Export and then import the application into a different workspace
and install it so that it uses a different schema. This new schema needs to have the database objects required by
your application.
Use a different database with all its variations. Export and then import the application into a different Oracle
Application Express instance and install it using a different workspace, schema, and database.
Version Size Supported Databases Notes
Oracle APEX version 21.1.v1 125 MiB All Includes patch 32598392: PSE BUNDLE FOR APEX 21.1, PATCH_VERSION 3.
Oracle APEX version 20.2.v1 148 MiB All except 21c Includes patch 32006852: PSE BUNDLE FOR APEX 20.2, PATCH_VERSION 2020.11.12.
Oracle APEX version 20.1.v1 173 MiB All except 21c Includes patch 30990551: PSE BUNDLE FOR APEX 20.1, PATCH_VERSION 2020.07.15.
You can see the patch number and date by running the following query:
SELECT PATCH_VERSION, PATCH_NUMBER
FROM APEX_PATCHES;
The available authorization scheme types are:
Exists SQL Query - Enter a query that causes the authorization scheme to pass if it returns at least one row, and to fail if it returns no rows.
NOT Exists SQL Query - Enter a query that causes the authorization scheme to pass if it returns no rows, and to fail if it returns one or more rows.
PL/SQL Function Returning Boolean - Enter a function body. If the function returns true, the authorization succeeds.
Item in Expression 1 is NULL - Enter an item name. If the item is null, the authorization succeeds.
Item in Expression 1 is NOT NULL - Enter an item name. If the item is not null, the authorization succeeds.
Value of Item in Expression 1 Equals Expression 2 - Enter an item name and a value. The authorization succeeds if the item's value equals the authorization value.
Value of Item in Expression 1 Does NOT Equal Expression 2 - Enter an item name and a value. The authorization succeeds if the item's value is not equal to the authorization value.
Value of Preference in Expression 1 Does NOT Equal Expression 2 - Enter a preference name and a value. The authorization succeeds if the preference's value is not equal to the authorization value.
Value of Preference in Expression 1 Equals Expression 2 - Enter a preference name and a value. The authorization succeeds if the preference's value equals the authorization value.
Is In Group - Enter a group name. The authorization succeeds if the group is enabled as a dynamic group for the session.
Is Not In Group - Enter a group name. The authorization succeeds if the group is not enabled as a dynamic group for the session.
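For the PL/SQL Function Returning Boolean type, a minimal sketch of a function body might look like the following; the :APP_USER bind variable holds the current APEX user, and the user name ADMIN is just an assumed example:
BEGIN
IF :APP_USER = 'ADMIN' THEN
RETURN TRUE;
ELSE
RETURN FALSE;
END IF;
END;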
END
WebLogic Server 5.1 (The code name of this version was Denali. It was the first version
supporting hot deployment for applications via the command line.)
Oracle WebLogic Server (WLS) 11gR1 (10.3.5 and 10.3.6) Installation on Oracle Linux 5 and 6 - This article
describes the installation of Oracle WebLogic Server (WLS) 11gR1 (10.3.5 and 10.3.6) on Oracle Linux 5
and 6.
Oracle Forms and Reports 11gR2 Silent Installation on Oracle Linux 6 - An overview of the silent
installation of Oracle Forms and Reports 11gR2 on Oracle Linux 6.
Oracle WebLogic Server (WLS) 12c Release 1 (12.1.1) Development-Only Installation on Oracle Linux 5 and
6 - This article describes the development-only installation of Oracle WebLogic Server (WLS) 12c Release 1
(12.1.1) on Oracle Linux 5 and 6.
Oracle WebLogic Server (WLS) 12c Release 1 (12.1.2) Installation on Oracle Linux 5 and 6 - This article
describes the installation of Oracle WebLogic Server (WLS) 12c Release 1 (12.1.2) on Oracle Linux 5 and 6.
WebLogic Server 12cR1 (12.1.2 and 12.1.3) : ADF Application Development Runtime - Repository
Configuration Utility (RCU) - This article provides a brief example of using the Repository Configuration
Utility (RCU) from the ADF Application Development Runtime.
WebLogic Server 12cR2 (12.2.1) : ADF Application Development Runtime - Repository Configuration Utility
(RCU) in Silent Mode - This article provides a brief example of using the Repository Configuration Utility
(RCU) from the ADF Application Development Runtime in silent mode.
Amend the SSL Keystore Settings Using WebLogic Scripting Tool (WLST) - This article shows how to amend
the SSL keystore settings for a managed server in an existing domain using WebLogic Scripting Tool
(WLST).
Oracle HTTP Server (OHS) 12cR2 (12.2.1) Installation on Oracle Linux 6 and 7 - This article demonstrates
how to perform an installation of Oracle HTTP Server (OHS) on Oracle Linux.
Client interaction with WebLogic Server and the database server:
Other supported standards include SOAP, UDDI, Web Services Description Language (WSDL), and JSR-181.
WebLogic is an Application Server that runs on a middle tier, between back-end databases and related applications
and browser-based thin clients. WebLogic Server mediates the exchange of requests from the client tier with
responses from the back-end tier.
WebLogic Server is based on Java Platform, Enterprise Edition (Java EE) (formerly known as Java 2 Platform,
Enterprise Edition or J2EE), the standard platform used to create Java-based multi-tier enterprise applications.
Oracle WebLogic Server vs. Apache Tomcat
The Apache Tomcat web server is often compared with WebLogic Server. Tomcat serves static content and web applications built with Java servlets and JavaServer Pages, whereas WebLogic Server is a full Java EE application server.
What are the different supported installation modes available for WebLogic Server?
Following are the three supported installation modes available for WebLogic Server:
Console mode: an installation mode based on interactive text messages.
Graphical mode: an installation mode based on an interactive GUI.
Silent mode: a non-interactive installation mode driven by a properties file provided to the installer.
The default port of the WebLogic Admin server is 7001 (7002 for SSL).
Stage and non-stage deployments: In stage deployment, the Admin server receives a copy of the application, which is then distributed among the available instances. Non-stage deployment, on the other hand, requires each instance to contact the source directly for the necessary deployment files. The auto-deployment feature of the WebLogic server works only in development mode.
Command line: When a domain is created at the command line, details such as the username and password are prompted for by the configuration wizard.
Via boot.properties file: If a domain was created in development mode, the encrypted credentials are stored in an identity file (boot.properties). If this identity file is not available during the boot-up process, we must enter the password manually.
Java option: On a new boot we can create a new identity file with the credentials, or create a new domain in production mode.
What are the different thread states in a WebLogic server?
Following is a list of several thread states used in a WebLogic server:
1. ACTIVE
2. IDLE
3. STUCK
4. HOGGER
5. STANDBY
What is the difference between UNDO and REDO in the Oracle database?
Ans: UNDO records the before-images of changed data and is used to roll back transactions and provide read consistency. REDO records the after-images of changes and is used to roll forward (recover) committed work after an instance or media failure.
appc compiler compiles JSPs and translates them into servlets. WebLogic Server automatically compiles JSPs if the
servlet class file is not present or is older than the JSP source file. See Using Ant Tasks to Create Compile Scripts.
You can also precompile JSPs and package the servlet class in a Web archive (WAR) file to avoid compiling in the
server. Servlets and JSPs may require additional helper classes that must also be deployed with the Web application.
WebLogic Resource Types
WebLogic resources are hierarchical. Therefore, the level at which you define security roles and security policies is
up to you. For example, you can define security roles and security policies for an entire Enterprise Application (EAR),
an Enterprise JavaBean (EJB) JAR containing multiple EJBs, a particular EJB within that JAR, or a single method within
that EJB.
Administrative Resources
An Administrative resource is a type of WebLogic resource that allows users to perform administrative tasks.
Examples of Administrative resources include the WebLogic Server Administration Console, the weblogic.Admin
tool, and Mbean APIs.
Administrative resources are limited in scope.
Application Resources
An Application resource is a type of WebLogic resource that represents an Enterprise Application, packaged as an
EAR (Enterprise Application aRchive) file. Unlike the other types of WebLogic resources, the hierarchy of an
Application resource is a mechanism for containment, rather than a type hierarchy. You secure an Application
resource when you want to protect multiple WebLogic resources that constitute the Enterprise Application (for
example, EJB resources, URL resources, and Web Service resources). In other words, securing an Enterprise
Application will cause all the WebLogic resources within that application to inherit its security configuration.
You can also secure, on an individual basis, the WebLogic resources that constitute an Enterprise Application (EAR).
Enterprise Information Systems (EIS) Resources
A J2EE Connector is a system-level software driver used by an application server such as WebLogic Server to connect
to an Enterprise Information System (EIS). BEA supports Connectors developed by EIS vendors and third-party
application developers that can be deployed in any application server supporting the Sun Microsystems J2EE
Platform Specification, Version 1.3. Connectors, also known as Resource Adapters, contain the Java, and if necessary,
the native components required to interact with the EIS.
An Enterprise Information System (EIS) resource is a specific type of WebLogic resource that is designed as a
Connector.
COM Resources
WebLogic jCOM is a software bridge that allows bidirectional access between Java/J2EE objects deployed in
WebLogic Server, and Microsoft ActiveX components available within the Microsoft Office family of products, Visual
Basic and C++ objects, and other Component Object Model/Distributed Component Object Model (COM/DCOM)
environments.
A COM resource is a specific type of WebLogic resource that is designed as a program component object according
to Microsoft’s framework.
Java DataBase Connectivity (JDBC) Resources
A Java DataBase Connectivity (JDBC) resource is a specific type of WebLogic resource that is related to JDBC. To
secure JDBC database access, you can create security policies and security roles for all connection pools as a group,
individual connection pools, and MultiPools.
Oracle’s service oriented architecture (SOA)
SOA is not a new concept. Sun defined SOA in the late 1990s to describe Jini, which is an environment for dynamic
discovery and use of services over a network. Web services have taken the concept of services introduced by Jini
technology and implemented it as services delivered over the web using technologies such as XML, Web Services
Description Language (WSDL), Simple Object Access Protocol (SOAP), and Universal Description, Discovery, and
Integration(UDDI). SOA is emerging as the premier integration and architecture framework in today’s complex and
heterogeneous computing environment.
SOA uses the find-bind-execute paradigm as shown in Figure. In this paradigm, service providers register their
service in a public registry. This registry is used by consumers to find services that match certain criteria. If the
registry has such a service, it provides the consumer with a contract and an endpoint address for that service.
SOA is typically realized in systems using web services technology. A web service is a standard approach to making a reusable component (a
piece of software functionality) available and accessible across the web and can be thought of as a repeatable
business task such as checking a credit balance, determining if a product is available or booking a holiday. Web
services are typically the way in which a business process is implemented. BPM is about providing a workflow layer
to orchestrate the web services. It provides the context to SOA essentially managing the dynamic execution of
services and allows business users to interact with them as appropriate.
SOA can be thought of as an architectural style which formally separates services (the business functionality) from
the consumers (other business systems). Separation is achieved through a service contract between the consumer
and producer of the service. This contract should address issues such as availability, version control, security,
performance, etc. Having said this, many web services are freely available over the internet, but using them is risky without a service level agreement, as they may not exist in the future; this may not be an issue, however, if similar alternative web services are available for use. In addition to a service contract, there must be a way for providers to
publish service contracts and for consumers to locate service contracts. These typically occur through standards
such as the Universal Description, Discovery and Integration (UDDI 1993) which is an XML (XML 2003) based
markup language from W3C that enables businesses to publish details of services available on the internet. The
Web Services Description Language (WSDL 2007) provides a way of describing web services in an XML format. Note
that WSDL tells you how to interact with the web service but says nothing about how it actually works behind the
interface. The standard for communication is via SOAP (Simple Object Access Protocol) (SOAP 2007) which is a
specification for exchanging information in web services. These standards are not described in detail here as
information about them is commonly available so the reader is referred elsewhere for further information. The
important issue to understand about SOA in this context, is that it separates the contract from the implementation
of that contract thus producing an architecture which is loosely coupled resulting in easily reconfigurable systems,
which can adapt to changes in business processes easily.
There has been a convergence in recent times towards integrating various approaches such as SOA with SaaS
(Software as a Service) (Bennett et al., 2000) and the Web with much talk about Web Oriented Architectures
(WOA). This approach extends SOA to web-based applications in order to allow businesses to open up relevant parts
of their IT systems to customers, vendors etc. as appropriate. This has now become a necessity in order to address
competitive advantage. WOA (Hinchcliffe 2006) is often considered to be a light-weight version of SOA using
RESTful Web services, open APIs and integration approaches such as mashups.
In order to manage the lifecycle of business processes in an SOA architecture, software is needed that will enable you to, for example:
expose services without the need for programming;
compose services from other services;
deploy services on any platform (hardware and operating system);
maintain security and usage policies;
orchestrate services, i.e. centrally coordinate the invocation of multiple web services;
automatically generate the WSDL;
provide a graphical design tool, a distributable runtime engine, and service monitoring capabilities;
graphically design transformations to and from non-XML formats.
These are all typical functions provided by SOA middleware, along with a runtime environment that should include, for example, event detection, service hosting, intelligent routing, message transformation processing, security capabilities, and synchronous and asynchronous message delivery. Often these functions are divided across several products. An enterprise service bus (ESB) is typically at the core of an SOA tool, providing an event-driven, standards-based messaging engine.
Oracle Fusion Applications Architecture
Memory Requirements for Installing Oracle Fusion Middleware
Fusion Middleware products fall under the middleware umbrella. Oracle Fusion Middleware (OFM) includes application servers, business process management (BPM), service-oriented architecture (SOA), and a cloud appliance.
Oracle Fusion Middleware is a middle layer of software that sits between the system level and the application level. While the system level comprises the OS and virtualization software, the application level includes products such as E-Business Suite, Fusion Applications, Siebel, etc.
Oracle Fusion Middleware is a collection of standards-based software products that spans a range of tools and services, from Java EE and developer tools to integration, identity management, and business intelligence services.
Middleware is the software layer that lies between the operating system and the applications on each side of a
distributed computer network. It is especially integral to information technology based on Extensible Markup
Language (XML), Simple Object Access Protocol (SOAP), Web services, SOA, Unicode, Web 2.0 infrastructure, and
Lightweight Directory Access Protocol (LDAP). Textual data is represented in the Unicode character set to support
data exchange in any language. UTF-8 is used as the standard encoding for transporting data for optimal
compatibility and efficiency, while traditional non-Unicode encodings can also be used where supported.
Operating System Minimum Physical Memory Required Minimum Available Memory Required
Linux 4 GB 8 GB
UNIX 4 GB 8 GB
Windows 4 GB 8 GB
Below are the Fusion Middleware technologies:
Oracle offers three distinct products as part of the Oracle WebLogic Server 11g family:
Oracle WebLogic Server Standard Edition (SE)
Oracle WebLogic Server Enterprise Edition (EE)
Oracle WebLogic Suite
Oracle WebLogic 11g Server Standard Edition The WebLogic Server Standard Edition (SE) is a full-featured server,
but is mainly intended for developers to develop enterprise applications quickly. WebLogic Server SE implements all
the Java EE standards and offers management capabilities through the Administration Console.
Oracle WebLogic 11g Server Enterprise Edition Oracle WebLogic Server EE is designed for mission-critical
applications that require high availability and advanced diagnostic capabilities. The EE version contains all the
features of the SE version, of course, but in addition supports clustering of servers for high availability and the ability
to manage multiple domains, plus various diagnostic tools.
Oracle WebLogic Suite 11g
Oracle WebLogic Suite offers support for dynamic scale-out applications with features such as in-memory data grid
technology and comprehensive management capabilities.
It consists of the following components, which include the core components of the Oracle WebLogic Server:
Domains
Node Manager
Admin server
Managed server
WebLogic server cluster
Enterprise Grid Messaging
JMS Messaging Standard
JRockit
Oracle Coherence
Oracle TopLink
Oracle WebLogic Server Web Services
Tuxedo
Load balancing for RAC involves extensive manual configuration to distribute the load among the instances in round-robin fashion. Load balancing clustered databases isn't really load balancing, but rather a way to create a highly available infrastructure across database clusters. Load balancing provides automated traffic distribution from one entry point to multiple servers reachable from your virtual cloud network.
The Oracle Cloud Infrastructure Load Balancing service provides automated traffic distribution from one entry
point to multiple servers reachable from your virtual cloud network (VCN). The service offers a load balancer with
your choice of a public or private IP address, and provisioned bandwidth.
A load balancer improves resource utilization, facilitates scaling, and helps ensure high availability.
Each service in Oracle Cloud Infrastructure integrates with IAM for authentication and authorization, for all
interfaces (the Console, SDK or CLI, and REST API).
Each load balancer has the following configuration limits:
One IP address
16 backend sets
512 backend servers per backend set
512 backend servers total
16 listeners
Round-robin load balancing is one of the simplest methods for distributing client requests across a group of servers. Going down the list of servers in the group, the round-robin load balancer forwards a client request to each server in turn. When it reaches the end of the list, it goes back to the top and repeats.
In a nutshell, round-robin network load balancing rotates connection requests among web servers in the order that
requests are received. For a simplified example, assume that an enterprise has a cluster of three servers: Server A,
Server B, and Server C.
• The first request is sent to Server A.
• The second request is sent to Server B.
• The third request is sent to Server C.
The load balancer continues passing requests to servers based on this order. This ensures that the server load is
distributed evenly to handle high traffic.
The fourth request is then sent to Server A again, the fifth to Server B, and so on.
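The rotation can be sketched in a single SQL query; this is only an illustration of MOD-based rotation over three assumed servers, not how a real load balancer is configured:
SELECT level AS request_no,
       CASE MOD(level - 1, 3)
         WHEN 0 THEN 'Server A'
         WHEN 1 THEN 'Server B'
         ELSE 'Server C'
       END AS target_server
FROM dual
CONNECT BY level <= 6;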
Add the TNS entries of both databases to the tnsnames.ora file on the DESTINATION host:
-- Source DB TNS entry:
PRODDB =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = proddb.dbaclass.com)(PORT = 1532))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = PRODDB)
)
)
-- Target DB TNS entry:
TESTDB =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = testdb.dbaclass.com)(PORT = 1522))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = TESTDB)
)
)
HMSDB =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = localhost)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = HMSDB)
)
)
ORACLR_CONNECTION_DATA =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC1521))
)
(CONNECT_DATA =
(SID = CLRExtProc)
(PRESENTATION = RO)
)
)
LISTENER_HMSDB =
(ADDRESS = (PROTOCOL = TCP)(HOST = localhost)(PORT = 1521))
LISTENER_TESTDB =
(DESCRIPTION_LIST =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = testdb.dbaclass.com)(PORT = 1538))
))
SID_LIST_LISTENER_TESTDB =
(SID_LIST =
(SID_DESC =
(GLOBAL_DBNAME = TESTDB)
(ORACLE_HOME = /oracle/app/oracle/product/12.1.0.2/dbhome_1)
(SID_NAME = TESTDB )
))
-- START THE LISTENER
lsnrctl start LISTENER_TESTDB
# listener.ora Network Configuration File: F:\app\APEXMISSION\product\12.2.0\dbhome_1\network\admin\listener.ora
# Generated by Oracle configuration tools.
SID_LIST_LISTENER =
(SID_LIST =
(SID_DESC =
(SID_NAME = CLRExtProc)
(ORACLE_HOME = F:\app\APEXMISSION\product\12.2.0\dbhome_1)
(PROGRAM = extproc)
(ENVS = "EXTPROC_DLLS=ONLY:F:\app\APEXMISSION\product\12.2.0\dbhome_1\bin\oraclr12.dll")
)
)
LISTENER =
(DESCRIPTION_LIST =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = localhost)(PORT = 1521))
(ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC1521))
)
)
The client is instructed to connect to the protocol address of the first Oracle Connection Manager, as indicated by:
(ADDRESS=(PROTOCOL=tcp)(HOST=host1)(PORT=1630))
The Automatic Diagnostics Repository (ADR) is a hierarchical file-based repository for handling diagnostic
information.
A small panel will ask for confirmation. Click Continue if you do not want to change any of the information provided.
Select your location on the map and install Linux.
Provide the login details.
Complete the installation process.
After the installation is complete you will see a prompt to restart the computer.
Use an ISO file downloaded from the internet and start the VirtualBox VM.
Here we need to allocate RAM to the virtual OS; the minimum requirement is 2 GB.
Choose a type of storage on the physical hard disk, and choose the disk size (minimum 12 GB as per the requirement).
Then choose how much you want to shrink your drive. It is recommended that you set aside at least 20GB
(20,000MB) for Linux.
Select the drive for the OS installation. Select "Erase disk and install Ubuntu" if you want to replace the existing OS; otherwise select the "Something else" option and click INSTALL NOW.
You are almost done. It should take 10-15 minutes to complete the installation. Once the installation finishes,
restart the system.
Some commonly needed intermediate Linux commands are mentioned below:
1. rm: The rm command is mainly used for deleting or removing files, or multiple files. Used recursively, it removes an entire directory.
2. uname: This command displays the current system information, which helps you understand a Linux system's current configuration.
3. uptime: The uptime command is another key command; it reports how long the system has been running.
4. users: This command displays the usernames of the users currently logged in to the Linux system.
5. less: The less command displays a file without opening it in an editor and without using the cat or vi commands. It is essentially a more powerful extension of the 'more' command.
6. more: This command displays output one page at a time. It is mainly useful for reading long files without scrolling.
7. sort: This command sorts the contents of a specified file, which is useful for displaying the critical contents of a big file in sorted order. With the -r option, sort outputs the content in reverse order.
8. vi: This is one of the original editors, available from the earliest days of UNIX and Linux. It provides two modes: normal and insert.
9. free: This command provides detailed information about the free memory (RAM) available on a Linux system.
10. history: This command shows the history of all commands executed on the Linux platform.
END
The declarative part declares PL/SQL variables, exceptions, and cursors. The executable part contains PL/SQL code
and SQL statements, and can contain nested blocks. Exception handlers contain code that is called when the
exception is raised, either as a predefined PL/SQL exception (such as NO_DATA_FOUND or ZERO_DIVIDE) or as an
exception that you define.
Anonymous block
An anonymous block is a PL/SQL program unit that has no name. An anonymous block consists of an optional
declarative part, an executable part, and one or more optional exception handlers.
This PL/SQL anonymous block prints the names of all employees in department 20 in the hr.employees table by
using the DBMS_OUTPUT package:
DECLARE
Last_name VARCHAR2(10);
Cursor c1 IS SELECT last_name
FROM employees
WHERE department_id = 20;
BEGIN
OPEN c1;
LOOP
FETCH c1 INTO Last_name;
EXIT WHEN c1%NOTFOUND;
DBMS_OUTPUT.PUT_LINE(Last_name);
END LOOP;
END;
/
Functions: A function must always return a value, but a procedure may or may not return a value.
CREATE [OR REPLACE] FUNCTION function_name [(parameters)]
RETURN return_datatype {IS | AS}
declaration_section
BEGIN
execution_section
RETURN return_variable;
EXCEPTION
exception_section
END;
CREATE OR REPLACE FUNCTION getsal (no IN NUMBER) RETURN NUMBER IS
sal NUMBER(5);
BEGIN
SELECT salary INTO sal FROM emp WHERE id = no;
RETURN sal;
END;
/
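Since this function is callable from SQL, it can be tested directly; the employee id 101 here is just an assumed value:
SELECT getsal(101) FROM dual;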
Procedure: A procedure is similar to an anonymous PL/SQL Block but it is named for repeated usage.
CREATE OR REPLACE PROCEDURE p1 (id IN NUMBER, sal IN NUMBER) AS
BEGIN
INSERT INTO emp VALUES (id, sal);
DBMS_OUTPUT.PUT_LINE('VALUE INSERTED.');
END;
/
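A procedure, in contrast, must be invoked from a PL/SQL block (or with EXECUTE in SQL*Plus); the values below are assumed for illustration:
BEGIN
p1(101, 50000);
END;
/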
Procedures vs. functions:
A function MUST return a value via RETURN.
A procedure cannot return a value via RETURN (other than control).
Procedures and functions can both return data in OUT and IN OUT parameters.
The RETURN statement in a function returns control to the calling program along with the result of the function.
The RETURN statement of a procedure returns control to the calling program and cannot return a value.
Functions can be called from SQL; procedures cannot.
Functions are considered expressions; procedures are not.
Package: A package is an encapsulated collection of related program objects stored together in the database.
Example
The following example shows a package specification for a package named EMPLOYEE_MANAGEMENT. The package contains one stored function and two stored procedures:
CREATE OR REPLACE PACKAGE employee_management AS
FUNCTION hire_emp (name VARCHAR2, job VARCHAR2, mgr NUMBER, hiredate DATE, sal NUMBER, comm NUMBER, deptno NUMBER) RETURN NUMBER;
PROCEDURE fire_emp (emp_id NUMBER);
PROCEDURE sal_raise (emp_id NUMBER, sal_incr NUMBER);
END employee_management;
/
The body for this package defines the function and the procedures:
CREATE OR REPLACE PACKAGE BODY employee_management AS
The function accepts all arguments for the fields in the employee table except for the employee number. A value
for this field is supplied by a sequence. The function returns the sequence number generated by the call to this
function.
FUNCTION hire_emp (name VARCHAR2, job VARCHAR2, mgr NUMBER, hiredate DATE, sal NUMBER, comm NUMBER, deptno NUMBER) RETURN NUMBER IS
new_empno NUMBER(10);
BEGIN
SELECT emp_sequence.NEXTVAL INTO new_empno FROM dual;
INSERT INTO emp VALUES (new_empno, name, job, mgr,
hiredate, sal, comm, deptno);
RETURN (new_empno);
END hire_emp;
The procedure deletes the employee with an employee number that corresponds to the argument emp_id. If no
employee is found, then an exception is raised.
PROCEDURE fire_emp (emp_id NUMBER) IS
BEGIN
DELETE FROM emp WHERE empno = emp_id;
IF SQL%NOTFOUND THEN
raise_application_error(-20011, 'Invalid Employee Number: ' || TO_CHAR(emp_id));
END IF;
END fire_emp;
The procedure accepts two arguments. Emp_id is a number that corresponds to an employee number. Sal_incr is
the amount by which to increase the employee’s salary.
PROCEDURE sal_raise (emp_id NUMBER, sal_incr NUMBER) IS
BEGIN
UPDATE emp
SET sal = sal + sal_incr
WHERE empno = emp_id;
IF SQL%NOTFOUND THEN
raise_application_error(-20011, 'Invalid Employee Number: ' || TO_CHAR(emp_id));
END IF;
END sal_raise;
END employee_management;
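A hedged usage sketch for this package, assuming the classic emp table and the emp_sequence used above (the argument values are illustrative only):
DECLARE
v_empno NUMBER(10);
BEGIN
v_empno := employee_management.hire_emp('SMITH', 'CLERK', 7902, SYSDATE, 800, NULL, 20);
employee_management.sal_raise(v_empno, 100);
employee_management.fire_emp(v_empno);
END;
/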
DBMS_APPLICATION_INFO Lets you register an application name with the database for auditing or
performance tracking purposes.
DBMS_AQ Lets you add a message (of a predefined object type) onto a queue or to
dequeue a message.
DBMS_AQADM Lets you perform administrative functions on a queue or queue table for
messages of a predefined object type.
DBMS_DDL Provides access to some SQL DDL statements from stored procedures, and
provides special administration operations not available as DDLs.
DBMS_DEBUG A PL/SQL API to the PL/SQL debugger layer, Probe, in the Oracle server.
DBMS_HS_PASSTHROUGH Lets you use Heterogeneous Services to send pass-through SQL statements to
non-Oracle systems.
DBMS_IOT Creates a table into which references to the chained rows for an Index
Organized Table can be placed using the ANALYZE command.
DBMS_JOB Lets you schedule administrative procedures that you want performed at periodic intervals; it is also the interface for the job queue. For example, to remove job 14144:
BEGIN
DBMS_JOB.REMOVE(14144);
COMMIT;
END;
/
(DBMS_SCHEDULER.DROP_JOB does the same work of removing a job by job ID.)
DBMS_LOB Provides general purpose routines for operations on Oracle Large Object
(LOBs) datatypes – BLOB, CLOB (read-write), and BFILEs (read-only).
DBMS_LOCK Lets you request, convert and release locks through Oracle Lock Management
services.
DBMS_LOGMNR Provides functions to initialize and run the log reader.
DBMS_LOGMNR_D Queries the dictionary tables of the current database, and creates a text
based file containing their contents.
DBMS_OFFLINE_OG Provides public APIs for offline instantiation of master groups.
DBMS_ORACLE_TRACE_AGENT Provides client callable interfaces to the Oracle TRACE instrumentation within
the Oracle7 Server.
DBMS_ORACLE_TRACE_USER Provides public access to the Oracle release 7 Server Oracle TRACE
instrumentation for the calling user.
DBMS_PIPE Provides a DBMS pipe service which enables messages to be sent between
sessions.
DBMS_PDB The DBMS_PDB package provides an interface to examine and manipulate
data about pluggable databases.
DBMS_PROFILER Provides a Probe Profiler API to profile existing PL/SQL applications and
identify performance bottlenecks.
DBMS_RANDOM Provides a built-in random number generator.
DBMS_SESSION Provides access to SQL ALTER SESSION statements, and other session
information, from stored procedures.
DBMS_SHARED_POOL Lets you keep objects in shared memory, so that they will not be aged out
with the normal LRU mechanism.
DBMS_SNAPSHOT (synonym DBMS_MVIEW) Lets you refresh snapshots that are not part of the same refresh group and purge logs. Requires the Distributed Option.
DBMS_SPACE_ADMIN Provides tablespace and segment space administration not available through
the standard SQL.
DBMS_SQL Lets you use dynamic SQL to access the database.
DBMS_STANDARD Provides language facilities that help your application interact with Oracle.
DBMS_STATS Provides a mechanism for users to view and modify optimizer statistics
gathered for database objects.
DBMS_TRACE Provides routines to start and stop PL/SQL tracing.
DBMS_TRANSACTION Provides access to SQL transaction statements from stored procedures and
monitors transaction activities.
DBMS_TTS Checks if the transportable set is self-contained.
DEBUG_EXTPROC Lets you debug external procedures on platforms with debuggers that can
attach to a running process.
OUTLN_PKG Provides the interface for procedures and functions associated with
management of stored outlines.
PLITBLM Handles index-table operations.
SDO_ADMIN Provides functions implementing spatial index creation and maintenance for
spatial objects.
SDO_GEOM Provides functions implementing geometric operations on spatial objects.
SDO_MIGRATE Provides functions for migrating spatial data from release 7.3.3 and 7.3.4 to
8.1.x.
SDO_TUNE Provides functions for selecting parameters that determine the behavior of
the spatial indexing scheme used in the Spatial Cartridge.
STANDARD Declares types, exceptions, and subprograms which are available
automatically to every PL/SQL program.
TimeSeries Provides functions that perform operations, such as extraction, retrieval,
arithmetic, and aggregation, on time series data.
TimeScale Provides scaleup and scaledown functions.
UTL_FILE Enables your PL/SQL programs to read and write operating system (OS) text
files and provides a restricted version of standard OS stream file I/O.
UTL_HTTP Enables HTTP callouts from PL/SQL and SQL to access data on the Internet or
to call Oracle Web Server Cartridges.
UTL_PG Provides functions for converting COBOL numeric data into Oracle numbers
and Oracle numbers into COBOL numeric data.
UTL_RAW Provides SQL functions for RAW datatypes that concat, substr, etc. to and
from RAWS.
UTL_REF Enables a PL/SQL program to access an object by providing a reference to the
object.
Vir_Pkg Provides analytical and conversion functions for Visual Information Retrieval.
The DBMS_XPLAN package provides the following display functions:
DISPLAY_AWR - formats and displays the contents of the execution plan of a stored SQL statement in the AWR.
DISPLAY_CURSOR - formats and displays the contents of the execution plan of any loaded cursor.
DISPLAY_SQL_PLAN_BASELINE - displays one or more execution plans for the SQL statement identified by SQL handle.
DISPLAY_SQLSET - formats and displays the contents of the execution plans of statements stored in a SQL tuning set.
DBMS_OUTPUT Package
The DBMS_OUTPUT package allows the display of output produced from PL/SQL subprograms and anonymous blocks. This helps us debug and test our code, and send messages.
The PUT_LINE procedure writes output data to a buffer. The information is displayed with the help of the GET_LINE procedure or by setting SERVEROUTPUT ON in SQL*Plus.
DBMS_OUTPUT package contains the following subprograms:
Name Purpose
DBMS_OUTPUT.DISABLE Disables message output.
DBMS_OUTPUT.ENABLE (buffer IN INTEGER DEFAULT 20000) Enables message output. A NULL buffer size represents an unlimited buffer.
DBMS_OUTPUT.GET_LINE (line OUT VARCHAR2, status OUT NUMBER) Fetches a single line of buffered information.
DBMS_OUTPUT.NEW_LINE Puts an end-of-line marker in the buffer.
DBMS_OUTPUT.PUT (item IN VARCHAR2) Puts a partial line in the buffer.
DBMS_OUTPUT.PUT_LINE (item IN VARCHAR2) Puts a complete line in the buffer.
Code implementation:
SET SERVEROUTPUT ON
BEGIN
DBMS_OUTPUT.PUT_LINE ('Software Testing Help!');
END;
/
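Instead of relying on SERVEROUTPUT, a caller can drain the buffer itself with GET_LINE; this is a minimal sketch, and the message text is arbitrary:
DECLARE
l_line VARCHAR2(32767);
l_status INTEGER;
BEGIN
DBMS_OUTPUT.ENABLE(20000);
DBMS_OUTPUT.PUT_LINE('buffered message');
DBMS_OUTPUT.GET_LINE(l_line, l_status); -- l_status = 0 when a line was fetched
END;
/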
DBMS_CRYPTO Lets you encrypt and decrypt stored data, can be used
in conjunction with PL/SQL programs running network
communications, and supports encryption and hashing
algorithms
DBMS_MONITOR Let you use PL/SQL for controlling additional tracing and
statistics gathering
DBMS_MVIEW Lets you refresh snapshots that are not part of the same
refresh group and purge logs. DBMS_SNAPSHOT is a
synonym.
DBMS_REPCAT_ADMIN Lets you create users with the privileges needed by the
symmetric replication facility. Requires the Replication
Option.
DBMS_RESUMABLE Lets you suspend large operations that run out of space or reach space limits after executing for a long time, fix the problem, and resume execution.
DBMS_SERVER_ALERT Lets you issue alerts when some threshold has been
violated
Overloading a Package
There can be multiple subprograms with the same name within a package. This feature is useful when we want subprograms with the same name to accept parameters of different data types. The concept of overloading within the package allows programmers to state clearly the type of action they want to perform.
Coding Implementation with procedure overloading. (Package created):
CREATE PACKAGE overloadingprocedure AS
Procedure overl_method (p varchar2);
Procedure overl_method (numbr number);
END overloadingprocedure; /
Coding Implementation with procedure overloading. (Package body created)
CREATE OR REPLACE PACKAGE BODY overloadingprocedure AS
--procedure implemented
Procedure overl_method (p varchar2) AS
BEGIN
DBMS_OUTPUT.PUT_LINE ('First Procedure: ' || p);
END;
--procedure implemented
Procedure overl_method (numbr number) AS
BEGIN
DBMS_OUTPUT.PUT_LINE ('Second Procedure: ' || numbr);
END;
END;
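A short usage sketch: PL/SQL resolves each call below to the matching overload by the argument's data type (the literal values are arbitrary):
SET SERVEROUTPUT ON
BEGIN
overloadingprocedure.overl_method('sample text'); -- resolves to the VARCHAR2 version
overloadingprocedure.overl_method(42);            -- resolves to the NUMBER version
END;
/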
Table Functions
The following example assumes an object type t_tf_row (id NUMBER, description VARCHAR2(50)) and a collection type t_tf_tab (a nested table of t_tf_row) have already been created. A regular (non-pipelined) table function builds the entire collection in memory before returning it:
CREATE OR REPLACE FUNCTION get_tab_tf (p_rows IN NUMBER) RETURN t_tf_tab AS
l_tab t_tf_tab := t_tf_tab();
BEGIN
FOR i IN 1 .. p_rows LOOP
l_tab.extend;
l_tab(l_tab.last) := t_tf_row(i, 'Description for ' || i);
END LOOP;
RETURN l_tab;
END;
/
-- Test it.
SELECT *
FROM TABLE(get_tab_tf(10))
ORDER BY id DESC;
Pipelined Table Functions
Pipelining negates the need to build huge collections by piping rows out of the function as they are created, saving
memory and allowing subsequent processing to start before all the rows are generated.
Pipelining enables a table function to return rows faster and can reduce the memory required to cache a table
function's results.
A pipelined table function can return the table function's result collection in subsets. The returned collection
behaves like a stream that can be fetched from on demand. This makes it possible to use a table function like a
virtual table.
Pipelined table functions can be implemented in two ways:
Native PL/SQL approach: The consumer and producers can run on separate execution threads (either in the same
or different process context) and communicate through a pipe or queuing mechanism. This approach is similar to
co-routine execution.
Interface approach: The consumer and producers run on the same execution thread. Producer explicitly returns
the control back to the consumer after producing a set of results. In addition, the producer caches the current
state so that it can resume where it left off when the consumer invokes it again.
Pipelined table functions include the PIPELINED clause and use the PIPE ROW call to push rows out of the function
as soon as they are created, rather than building up a table collection. Notice the empty RETURN call, since there is
no collection to return from the function.
CREATE OR REPLACE FUNCTION get_tab_ptf (p_rows IN NUMBER) RETURN t_tf_tab PIPELINED AS
BEGIN
FOR i IN 1 .. p_rows LOOP
PIPE ROW (t_tf_row(i, 'Description for ' || i));
END LOOP;
RETURN;
END;
/
-- Test it.
SELECT *
FROM TABLE(get_tab_ptf(10))
ORDER BY id DESC;
STATISTICS_LEVEL
The STATISTICS_LEVEL parameter was introduced in Oracle9i Release 2 (9.2) to control all major statistics
collections or advisories in the database. The level of the setting affects the number of statistics and advisories that
are enabled:
BASIC: No advisories or statistics are collected.
TYPICAL: The following advisories or statistics are collected:
Buffer cache advisory
MTTR advisory
Shared Pool sizing advisory
Segment level statistics
PGA target advisory
Timed statistics
ALL: All of TYPICAL, plus the following:
Timed operating system statistics
Row source execution statistics
The parameter is dynamic and can be altered using the following:
ALTER SESSION SET statistics_level = ALL;
ALTER SYSTEM SET statistics_level = TYPICAL;
Oracle can only manage statistics collections and advisories whose parameter setting is undefined in the spfile. By default the TIMED_STATISTICS parameter is set to TRUE, so it must be unset for it to be controlled by the statistics level, along with any other conflicting parameters.
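You can check which statistics and advisories the current setting enables by querying V$STATISTICS_LEVEL, for example:
SELECT statistics_name, activation_level, system_status
FROM v$statistics_level
ORDER BY statistics_name;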
-- Gather optimizer statistics (managed by DBMS_STATS, separate from the instance statistics above) for all non-system tables:
BEGIN
FOR rec IN (SELECT owner, table_name
FROM all_tables
WHERE owner NOT IN ('SYS', 'SYSTEM'))
LOOP
dbms_stats.gather_table_stats(rec.owner, rec.table_name);
END LOOP;
END;
/
GUID
A globally unique identifier (GUID) is a 16-byte binary value. In SQL Server it is represented by the uniqueidentifier data type; in Oracle it is stored as a RAW(16) value. A GUID's main use is as an identifier that must be unique in a network that has many computers at many sites. It is also used as a surrogate key; however, if your tables already have a natural key (a true key), do not replace it with a surrogate.
A GUID should be a 16-byte RAW.
If you want to use SYS_GUID() you will either:
a) deal with the fact that it is RAW(16) and program accordingly, or
b) deal with the fact that you can safely store it in a VARCHAR2(32) using hex characters.
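A minimal sketch of option (b), converting the RAW value to its 32-character hex form with the built-in RAWTOHEX function:
SELECT SYS_GUID() AS guid_raw,
       RAWTOHEX(SYS_GUID()) AS guid_hex
FROM dual;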
The SYS_GUID() function in the Oracle database is a built-in PL/SQL function that generates and returns a globally unique identifier (a RAW value) 16 bytes in size for each row of the table. It does not accept any argument. The GUIDs it generates are supposed to be unique, meaning they should never repeat, and each consists of a host identifier and a process or thread identifier of the process that invoked the function.
This function generates unique identifiers of type RAW that are 128 bits (16 bytes) in size.
SELECT sys_guid() FROM dual; The DUAL table is a one-column table present in the Oracle database. It has a single VARCHAR2(1) column called DUMMY, which has a value of 'X'.
INSERT INTO employee(employee_id, employee_name, city) values(sys_guid(), 'Nilanjan', 'Mumbai');
DBMS_RANDOM: Unlike SYS_GUID(), which returns globally unique values, DBMS_RANDOM generates pseudo-random values that are not guaranteed to be unique.
Operational Notes
DBMS_RANDOM.RANDOM produces integers in [-2^31, 2^31).
DBMS_RANDOM.VALUE produces numbers in [0, 1) with 38 digits of precision.
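These ranges can be seen directly in a query; this sketch just illustrates the documented ranges above:
SELECT DBMS_RANDOM.VALUE         AS value_0_1,     -- number in [0, 1)
       DBMS_RANDOM.VALUE(1, 100) AS value_1_100,   -- number in [1, 100)
       DBMS_RANDOM.RANDOM        AS signed_integer -- integer in [-2^31, 2^31)
FROM dual;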
Example DBMS_Random procedure and Bulk insert into the table:
insert into PERSONS (PERSONID,lastname,firstname,address,city)
values((SELECT dbms_random.random() FROM dual), 'ahmad AHMAD', 'raza', 'hujra', 'PUNJAB,PAKISTAN');
--------------------------------------------------------------
create table EMR.STUDENT
(
student_number NUMBER generated by default on null as identity,
first_name VARCHAR2(255 CHAR),
last_name VARCHAR2(255 CHAR),
phone_type VARCHAR2(30 CHAR),
department VARCHAR2(4000 CHAR),
phone NUMBER,
address_type VARCHAR2(4000 CHAR),
address VARCHAR2(4000 CHAR)
)
tablespace USERS
pctfree 10
initrans 1
maxtrans 255
storage
(
initial 64K
next 1M
minextents 1
maxextents unlimited
);
-- Create/Recreate primary, unique and foreign key constraints
alter table EMR.STUDENT
add constraint STUDENT_STUDENT_NUMBER_PK primary key (STUDENT_NUMBER)
using index
tablespace USERS
pctfree 10
initrans 2
maxtrans 255
storage
(
initial 64K
next 1M
minextents 1
maxextents unlimited
);
------------------------
SELECT * FROM STUDENT
------------------------
BEGIN
FOR i IN 1..200 LOOP
INSERT INTO STUDENT (
FIRST_NAME,
LAST_NAME
) VALUES (
i,
'MY ADDRESS IS SKM' || i
);
END LOOP;
COMMIT;
END;
/
A random email address can be generated with:
SELECT dbms_random.string('l', 8) || '@' || dbms_random.string('l', 7) || '.com' AS email FROM dual;
SYS_GUID() example:
insert into PERSONS (PERSONID, lastname, firstname, address, city)
values (sys_guid(), 'AHMAD HASSAN', 'raza', 'hujra', 'PUNJAB,PAKISTAN');
-- If PERSONID is a NUMBER column, this insert fails with:
-- ORA-00932 inconsistent datatypes: expected NUMBER got BINARY
-- because SYS_GUID() returns a RAW value; use a RAW(16) or VARCHAR2(32) column instead.
===========================END=========================