
Relational Database Management System

Unit 1
Distribute Database
1. Explain the architecture of a distributed processing system. List out the advantages and disadvantages of
distributed systems. (2017, 2016, 2015, 2013, 2012, 2011, 2010, 2009).
Distributed Systems Architecture
Architecture Styles
o Formulated in terms of components, and the way they are connected:
o A component is a modular unit with well-defined interfaces; replaceable; reusable
o A connector is a communication link between modules; it
mediates coordination or cooperation among components

o Four styles that are most important:


1. Layered architecture
2. Object-based architecture
3. Data-centered architecture -- processes communicate through a common repository (passive or
active).
4. Event-based architecture -- processes communicate through the propagation of events
o Organize into logically different components, and distribute those components over the various
machines.

(a) Layered style is used for client-server systems


(b) Object-based style for distributed object systems.
o less structured
o component = object
o connector = RPC or RMI
o Decoupling processes in space (anonymous) and also in time (asynchronous) has led to
alternative styles.
(a) Publish/subscribe [decoupled in space] (event-based architecture)
Event-based arch. supports several communication styles:
Publish-subscribe
Broadcast
Point-to-point
Decouples sender & receiver; asynchronous communication
Event-driven architecture (EDA) promotes the production, detection, consumption of,
and reaction to events.
An event can be defined as "a significant change in state". For example, when a consumer
purchases a car, the car's state changes from "for sale" to "sold". A car dealer's system
architecture may treat this state change as an event to be produced, published, detected and
consumed by various applications within the architecture.
The main advantage of this architecture is that its components are loosely coupled; they need not explicitly
refer to each other. For example, if we have an alarm system that records information when the
front door opens, the door itself doesn't know that the alarm system will add information when
the door opens, just that the door has been opened.

Advantages
 Resource sharing − Sharing of hardware and software resources.
 Openness − Flexibility of using hardware and software of different vendors.
 Concurrency − Concurrent processing to enhance performance.
 Scalability − Increased throughput by adding new resources.
 Fault tolerance − The ability to continue in operation after a fault has occurred.

Disadvantages
 Complexity − They are more complex than centralized systems.
 Security − More susceptible to external attack.
 Manageability − More effort required for system management.
 Unpredictability − Unpredictable responses depending on the system organization and
network load.

2. What is the difference between DBMS and RDBMS? (2017)


Although DBMS and RDBMS are both used to store information in a physical database, there are
some remarkable differences between them. The main differences between DBMS and RDBMS are
given below:

1) DBMS applications store data as files. RDBMS applications store data in a tabular form.

2) In DBMS, data is generally stored in either a hierarchical form or a navigational form. In RDBMS,
the tables have an identifier called a primary key, and the data values are stored in the form of tables.

3) Normalization is not present in DBMS. Normalization is present in RDBMS.

4) DBMS does not apply any security with regard to data manipulation. RDBMS defines integrity
constraints for the purpose of the ACID (Atomicity, Consistency, Isolation and Durability) properties.

5) DBMS uses the file system to store data, so there is no relation between the tables. In RDBMS,
data values are stored in the form of tables, so a relationship between these data values is stored
in the form of a table as well.

6) DBMS has to provide some uniform methods to access the stored information. RDBMS supports a
tabular structure of the data and a relationship between them to access the stored information.

7) DBMS does not support distributed databases. RDBMS supports distributed databases.

8) DBMS is meant for small organizations and deals with small amounts of data; it supports a single
user. RDBMS is designed to handle large amounts of data; it supports multiple users.

9) Examples of DBMS: file systems, XML, etc. Examples of RDBMS: MySQL, PostgreSQL, SQL Server,
Oracle, etc.

After observing the differences between DBMS and RDBMS, you can say that RDBMS is an extension
of DBMS. There are many software products in the market today that are compatible with both DBMS
and RDBMS; in practice, an RDBMS application is also a DBMS application.

3. Explain Relational Database Management System with an example. Explain all the components of a
database management system with a suitable diagram. (2017)
RDBMS stands for "Relational Database Management System." An RDBMS is a DBMS designed
specifically for relational databases; therefore, RDBMSs are a subset of DBMSs.
A relational database refers to a database that stores data in a structured format,
using rows and columns. This makes it easy to locate and access specific values within the database.
It is "relational" because the values within each table are related to each other. Tables may also be
related to other tables. The relational structure makes it possible to run queries across multiple
tables at once.

While a relational database describes the type of database an RDBMS manages, the RDBMS refers to
the database program itself. It is the software that executes queries on the data, including adding,
updating, and searching for values. An RDBMS may also provide a visual representation of the data.
For example, it may display data in tables like a spreadsheet, allowing you to view and even edit
individual values in the table. Some RDBMS programs allow you to create forms that can streamline
entering, editing, and deleting data.

Most well-known DBMS applications fall into the RDBMS category. Examples include Oracle
Database, MySQL, Microsoft SQL Server, and IBM DB2. Some of these programs support non-
relational databases, but they are primarily used for relational database management.
Examples of non-relational databases include Apache HBase, IBM Domino, and Oracle NoSQL
Database. These types of databases are managed by other DBMS programs that support NoSQL,
which do not fall into the RDBMS category.

Components of DBMS
A DBMS has several components, each performing very significant tasks in the database
management system environment. Below is a list of components within the database and its
environment.
Software
This is the set of programs used to control and manage the overall database. This includes the DBMS
software itself, the Operating System, the network software being used to share the data among
users, and the application programs used to access data in the DBMS.

Hardware
Consists of a set of physical electronic devices such as computers, I/O devices, storage devices, etc.,
this provides the interface between computers and the real world systems.

Data
Data is the most important component: the DBMS exists to collect, store, process and provide access
to it. The database contains both the actual or operational data and the metadata.

Procedures
These are the instructions and rules that govern how to use the DBMS and how to design and
run the database; documented procedures guide the users who operate and manage it.

Database Access Language


This is used to move data to and from the database: to enter new data, update existing data, or
retrieve required data from the database. The user writes a set of appropriate commands in a database
access language and submits these to the DBMS, which then processes the data and generates and
displays a set of results in a user-readable form.
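For instance, SQL, the standard database access language, supports commands like the following
(table and column names here are hypothetical):

    INSERT INTO Student (StudID, FirstName) VALUES (4, 'Asha');   -- enter new data
    UPDATE Student SET FirstName = 'Asha K' WHERE StudID = 4;     -- update existing data
    SELECT StudID, FirstName FROM Student WHERE StudID = 4;       -- retrieve data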

Query Processor
This transforms the user queries into a series of low level instructions. This reads the online user’s
query and translates it into an efficient series of operations in a form capable of being sent to the
run time data manager for execution.
Run Time Database Manager
Sometimes referred to as the database control system, this is the central software component of the
DBMS that interfaces with user-submitted application programs and queries, and handles database
access at run time. Its function is to carry out the operations specified in users' queries. It provides
controls to maintain the consistency, integrity and security of the data.

Data Manager
Also called the cache manager, this is responsible for handling data in the database and for providing a
recovery mechanism that allows the system to recover the data after a failure.

Database Engine
The core service for storing, processing, and securing data, this provides controlled access and rapid
transaction processing to address the requirements of the most demanding data consuming
applications. It is often used to create relational databases for online transaction processing or
online analytical processing data.

Data Dictionary
This is a reserved space within a database used to store information about the database itself. A
data dictionary is a set of read-only tables and views containing information about the
data used in the enterprise, ensuring that the database representation of the data follows one standard
as defined in the dictionary.

Report Writer
Also referred to as the report generator, it is a program that extracts information from one or more
files and presents the information in a specified format. Most report writers allow the user to select
records that meet certain conditions and to display selected fields in rows and columns, or also
format the data into different charts.

4. Give short note on (a) distributed database management system. (b). Data warehousing (2017)
(a) A distributed database is a collection of multiple interconnected databases, which are spread
physically across various locations that communicate via a computer network.
Features
 Databases in the collection are logically interrelated with each other. Often they represent a
single logical database.
 Data is physically stored across multiple sites. Data in each site can be managed by a DBMS
independent of the other sites.
 The processors in the sites are connected via a network. They do not have any multiprocessor
configuration.
 A distributed database is not a loosely connected file system.
 A distributed database incorporates transaction processing, but it is not synonymous with a
transaction processing system.

(b) Data warehousing (2017)


Data warehousing is the process of constructing and using a data warehouse. A data warehouse is
constructed by integrating data from multiple heterogeneous sources that support analytical
reporting, structured and/or ad hoc queries, and decision making. Data warehousing involves data
cleaning, data integration, and data consolidation.

Data Warehouse Information


There are decision support technologies that help utilize the data available in a data warehouse.
These technologies help executives to use the warehouse quickly and effectively. They can gather
data, analyze it, and take decisions based on the information present in the warehouse. The
information gathered in a warehouse can be used in any of the following domains −
 Tuning Production Strategies − The product strategies can be well tuned by repositioning the
products and managing the product portfolios by comparing the sales quarterly or yearly.
 Customer Analysis − Customer analysis is done by analyzing the customer's buying preferences,
buying time, budget cycles, etc.
 Operations Analysis − Data warehousing also helps in customer relationship management and in
making environmental corrections. The information also allows us to analyze business
operations.

Integrating Heterogeneous Databases


To integrate heterogeneous databases, we have two approaches −
 Query-driven Approach
 Update-driven Approach

5. Explain ACID property in details. (2017)


A transaction is a single logical unit of work which accesses and possibly modifies the contents of a
database. Transactions access data using read and write operations.
In order to maintain consistency in a database before and after a transaction, certain properties are
followed.

These are called ACID properties.


Atomicity
By this, we mean that either the entire transaction takes place at once or doesn’t happen at all.
There is no midway i.e. transactions do not occur partially. Each transaction is considered as one unit
and either runs to completion or is not executed at all. It involves the following two operations:
—Abort: If a transaction aborts, changes made to the database are not visible.
—Commit: If a transaction commits, changes made are visible.
Atomicity is also known as the 'All or nothing rule'.
Consider the following transaction T, consisting of T1 and T2: a transfer of 100 from account X to
account Y.
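In the read/write notation used in the discussion below, T can be sketched as the following two steps
(this is a reconstruction, not the original figure):

    T1: read(X);  X := X - 100;  write(X)
    T2: read(Y);  Y := Y + 100;  write(Y)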

If the transaction fails after the completion of T1 but before the completion of T2 (say, after write(X)
but before write(Y)), then the amount has been deducted from X but not added to Y. This results in an
inconsistent database state. Therefore, the transaction must be executed in its entirety in order to
ensure the correctness of the database state.
Consistency
This means that integrity constraints must be maintained so that the database is consistent before
and after the transaction. It refers to the correctness of a database. Referring to the example above,
the total amount before and after the transaction must be maintained.
Total before T occurs = 500 + 200 = 700.
Total after T occurs = 400 + 300 = 700.
Therefore, the database is consistent. Inconsistency occurs in case T1 completes but T2 fails; as a
result, T is incomplete.

Isolation
This property ensures that multiple transactions can occur concurrently without leading to
inconsistency of database state. Transactions occur independently without interference. Changes
occurring in a particular transaction will not be visible to any other transaction until that particular
change in that transaction is written to memory or has been committed. This property ensures that
the execution of transactions concurrently will result in a state that is equivalent to a state achieved
if these were executed serially in some order.
Let X = 50,000 and Y = 500.
Consider two transactions T and T''.

Suppose T has been executed till Read(Y) and then T'' starts. As a result, interleaving of operations
takes place, due to which T'' reads the correct value of X but an incorrect value of Y, and the sum
computed by T'':
(X + Y = 50,000 + 500 = 50,500)
is thus not consistent with the sum at the end of transaction T:
(X + Y = 50,000 + 450 = 50,450).
This results in database inconsistency, due to a loss of 50 units. Hence, transactions must take place
in isolation, and changes should be visible only after they have been made to the main memory.

Durability:
This property ensures that once the transaction has completed execution, the updates and
modifications to the database are stored in and written to disk, and they persist even if a system
failure occurs. These updates then become permanent and are stored in non-volatile memory. The
effects of the transaction, thus, are never lost.
The ACID properties, in totality, provide a mechanism to ensure the correctness and consistency of a
database in such a way that each transaction is a group of operations that acts as a single unit,
produces consistent results, acts in isolation from other operations, and makes updates that are
durably stored.

6. What do you mean by recovery? How many techniques are used in SQL to recover data? (2017)
Recovery
When a system crashes, it may have several transactions being executed and various files opened for
them to modify the data items. Transactions are made of various operations, which are atomic in
nature. But according to ACID properties of DBMS, atomicity of transactions as a whole must be
maintained, that is, either all the operations are executed or none.
When a DBMS recovers from a crash, it should maintain the following −
 It should check the states of all the transactions, which were being executed.
 A transaction may be in the middle of some operation; the DBMS must ensure the atomicity of
the transaction in this case.
 It should check whether the transaction can be completed now or it needs to be rolled back.
 No transactions would be allowed to leave the DBMS in an inconsistent state.
There are two types of techniques, which can help a DBMS in recovering as well as maintaining
the atomicity of a transaction −
 Maintaining the logs of each transaction, and writing them onto some stable storage before
actually modifying the database.
 Maintaining shadow paging, where the changes are done on a volatile memory, and later, the
actual database is updated.
Log-based Recovery
Log is a sequence of records, which maintains the records of actions performed by a transaction.
It is important that the logs are written prior to the actual modification and stored on a stable
storage media, which is failsafe.
Log-based recovery works as follows −
 The log file is kept on a stable storage media.
 When a transaction enters the system and starts execution, it writes a log about it.
<Tn, Start>
 When the transaction modifies an item X, it writes a log as follows −
<Tn, X, V1, V2>
It reads: Tn has changed the value of X from V1 to V2.
 When the transaction finishes, it logs −
<Tn, commit>
The database can be modified using two approaches −
 Deferred database modification − All logs are written onto the stable storage, and the database
is updated when a transaction commits.
 Immediate database modification − Each log follows an actual database modification. That is,
the database is modified immediately after every operation.
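As a worked illustration (the values are assumed), a transaction Tn that changes X from 500 to 450
would produce the log sequence:

    <Tn, Start>
    <Tn, X, 500, 450>
    <Tn, commit>

Under deferred modification, the stored value of X is changed only after <Tn, commit> is logged;
under immediate modification, X may be updated on disk as soon as the <Tn, X, 500, 450> record has
been written to stable storage.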
Recovery with Concurrent Transactions
When more than one transaction is being executed in parallel, the logs are interleaved. At the
time of recovery, it would become hard for the recovery system to backtrack through all the logs and
then start recovering. To ease this situation, most modern DBMSs use the concept of 'checkpoints'.
Checkpoint
Keeping and maintaining logs in real time and in real environment may fill out all the memory
space available in the system. As time passes, the log file may grow too big to be handled at all.
Checkpoint is a mechanism where all the previous logs are removed from the system and stored
permanently on a storage disk. A checkpoint declares a point before which the DBMS was in a
consistent state and all the transactions were committed.
Recovery
When a system with concurrent transactions crashes and recovers, it behaves in the following
manner −

 The recovery system reads the logs backwards from the end to the last checkpoint.
 It maintains two lists, an undo-list and a redo-list.
 If the recovery system sees a log with <Tn, Start> and <Tn, Commit> or just <Tn, Commit>, it
puts the transaction in the redo-list.
 If the recovery system sees a log with <Tn, Start> but no commit or abort log found, it puts the
transaction in undo-list.
All the transactions in the undo-list are then undone and their logs are removed. All the
transactions in the redo-list are redone using their log records, and their logs are retained.

7. Explain : a. Three tier architecture system b. Serializability (2016)


Three-tier architecture allows any one of the three tiers to be upgraded or replaced independently.
The user interface is implemented on a desktop PC and uses a standard graphical user interface with
different modules running on the application server. The relational database management system
on the database server contains the computer data storage logic. The middle tiers are usually
multitiered.
The three tiers in a three-tier architecture are:
1. Presentation Tier: Occupies the top level and displays information related to services available
on a website. This tier communicates with other tiers by sending results to the browser and
other tiers in the network.
2. Application Tier: Also called the middle tier, logic tier, or business logic tier, this tier receives
requests from the presentation tier. It controls application functionality by performing detailed
processing.
3. Data Tier: Houses database servers where information is stored and retrieved. Data in this tier
is kept independent of application servers or business logic.
By looking at the below diagram, you can easily identify that 3-tier architecture has three
different layers.
 Presentation layer
 Business Logic layer
 Database layer

3 Tier Architecture Diagram

Here we have taken a simple example of student form to understand all these three layers. It has
information about a student like – Name, Address, Email, and Picture.
Serializability in DBMS-
 Some non-serial schedules may lead to inconsistency of the database.
 Serializability is a concept that helps to identify which non-serial schedules are correct and will
maintain the consistency of the database.
 Serializability is the classical concurrency scheme. It ensures that a schedule for executing
concurrent transactions is equivalent to one that executes the transactions serially in some
order. It assumes that all accesses to the database are done using read and write operations. A
schedule is called "correct" if we can find a serial schedule that is "equivalent" to it. Given a
set of transactions T1...Tn, two schedules S1 and S2 of these transactions are equivalent if the
following conditions are satisfied:
 Read-Write Synchronization: If a transaction reads a value written by another transaction in
one schedule, then it also does so in the other schedule.
 Write-Write Synchronization: If a transaction overwrites the value of another transaction in
one schedule, it also does so in the other schedule.

Types of Serializability-
Serializability is mainly of two types-

1. Conflict Serializability
2. View Serializability

8. Describe : a. Primary key b. Foreign key c. Unique key d. Candidate key e. Composite key (2016)
a. Primary Key
A column or group of columns in a table which helps us to uniquely identify every row in that table
is called a primary key. Its value can't be a duplicate: the same value can't appear more than once
in the table.
Rules for defining a primary key:
 Two rows can't have the same primary key value.
 Every row must have a primary key value.
 The primary key field cannot be null.
 The value in a primary key column can never be modified or updated if any foreign key refers
to that primary key.
Example:
In the following example, StudID is a Primary Key.
StudID Roll No First Name LastName Email

1 11 Tom Price abc@gmail.com

2 12 Nick Wright xyz@gmail.com

3 13 Dana Natan mno@yahoo.com
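A hedged SQL sketch of such a table (names taken from the example above; the column types are
assumptions):

    CREATE TABLE Student (
        StudID    INT          NOT NULL,
        RollNo    INT,
        FirstName VARCHAR(50),
        LastName  VARCHAR(50),
        Email     VARCHAR(100),
        PRIMARY KEY (StudID)   -- StudID must be unique and non-null in every row
    );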


b. Foreign key
A foreign key is a column which is added to create a relationship with another table. Foreign keys
help us to maintain data integrity and also allow navigation between two different instances of an
entity. Every relationship in the model needs to be supported by a foreign key.
Example:
DeptCode DeptName

001 Science

002 English

005 Computer

Teacher ID Fname Lname

B002 David Warner

B017 Sara Joseph

B009 Mike Brunton


In this example, we have two tables, Teacher and Department, in a school. However, there is no way
to see which teacher works in which department.
By adding a foreign key column DeptCode to the Teacher table, we can create a relationship
between the two tables.
Teacher ID DeptCode Fname Lname

B002 002 David Warner

B017 002 Sara Joseph

B009 001 Mike Brunton


This concept is also known as Referential Integrity.
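A minimal SQL sketch of the two tables above (column types are assumptions):

    CREATE TABLE Department (
        DeptCode CHAR(3) PRIMARY KEY,
        DeptName VARCHAR(50)
    );

    CREATE TABLE Teacher (
        TeacherID CHAR(4) PRIMARY KEY,
        DeptCode  CHAR(3),
        Fname     VARCHAR(50),
        Lname     VARCHAR(50),
        FOREIGN KEY (DeptCode) REFERENCES Department (DeptCode)  -- enforces referential integrity
    );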

c. Unique Key
A unique key is a set of one or more fields/columns of a table that uniquely identify a
record in a database table.
You can say that it is a little like a primary key, but it can accept only one null value and it cannot
have duplicate values. The unique key and primary key both provide a guarantee of uniqueness for a
column or a set of columns.
A unique key constraint is automatically defined within a primary key constraint. There may
be many unique key constraints for one table, but only one PRIMARY KEY constraint per table.
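Continuing the Student sketch above, a unique key can be added to an existing table (the constraint
name is hypothetical; how many NULLs a unique column may hold varies by vendor):

    ALTER TABLE Student ADD CONSTRAINT uq_student_email UNIQUE (Email);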

d. Candidate Key
A super key with no redundant attributes is called a candidate key.
The primary key should be selected from the candidate keys. Every table must have at least one
candidate key.
Properties of Candidate key:
 It must contain unique values
 Candidate key may have multiple attributes
 Must not contain null values
 It should contain minimum fields to ensure uniqueness
 Uniquely identify each record in a table
Example: In the given table, StudID, Roll No, and Email are candidate keys which help us to
uniquely identify the student record in the table.
StudID Roll No First Name LastName Email

1 11 Tom Price abc@gmail.com

2 12 Nick Wright xyz@gmail.com

3 13 Dana Natan mno@yahoo.com

e.Composite key
A key which has multiple attributes to uniquely identify rows in a table is called a composite key. The
difference between a compound and a composite key is that any part of a compound key can be a
foreign key, whereas a part of a composite key may or may not be a foreign key.
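A short sketch using a hypothetical enrollment table, where no single column is unique but the pair is:

    CREATE TABLE Enrollment (
        StudID     INT,
        CourseCode CHAR(5),
        Grade      CHAR(2),
        PRIMARY KEY (StudID, CourseCode)   -- the two attributes together identify a row
    );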

9. Define and describe concurrency control in data base system giving suitable example. (2016)
Concurrency-control protocols allow concurrent schedules, but ensure that the schedules are
conflict/view serializable, and are recoverable and maybe even cascadeless.
These protocols do not examine the precedence graph as it is being created; instead, a protocol
imposes a discipline that avoids non-serializable schedules.
Different concurrency control protocols offer different trade-offs between the amount of
concurrency they allow and the amount of overhead that they impose.

We’ll be learning some protocols which are important for GATE CS. Questions from this topic are
frequently asked, and it’s recommended to learn this concept well. (At the end of this series of articles I’ll
try to list all the theoretical aspects of this concept for students to revise quickly and find the
material in one place.) Now, let’s get going:
Different categories of protocols:
 Lock Based Protocol
 Basic 2-PL
 Conservative 2-PL
 Strict 2-PL
 Rigorous 2-PL
 Graph Based Protocol
 Time-Stamp Ordering Protocol
 Multiple Granularity Protocol
 Multi-version Protocol
For GATE we’ll be focusing on the First three protocols.
Lock Based Protocols –
A lock is a variable associated with a data item that describes the status of the data item with respect to
the possible operations that can be applied to it. Locks synchronize the access by concurrent transactions
to the database items. This protocol requires that all the data items must be accessed in a
mutually exclusive manner. Let me introduce you to two common locks which are used, and some
terminology followed in this protocol.
1. Shared Lock (S): also known as a Read-only lock. As the name suggests, it can be shared between
transactions, because while holding this lock a transaction does not have permission to update
data on the data item. An S-lock is requested using the lock-S instruction.
2. Exclusive Lock (X): the data item can be both read and written. This lock is exclusive and cannot be held
simultaneously on the same data item. An X-lock is requested using the lock-X instruction.
Lock Compatibility Matrix –
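For reference, the standard compatibility matrix for shared (S) and exclusive (X) locks is (a requested
mode is granted only where the entry is Yes):

            S     X
    S      Yes    No
    X      No     No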

 A transaction may be granted a lock on an item if the requested lock is compatible with locks already
held on the item by other
transactions.
 Any number of transactions can hold shared locks on an item, but if any transaction holds an
exclusive (X) lock on the item, no other transaction may hold any lock on the item.
 If a lock cannot be granted, the requesting transaction is made to wait till all incompatible locks held
by other transactions have been released. Then the lock is granted.
Upgrade / Downgrade locks: A transaction that holds a lock on an item A is allowed under certain
conditions to change the lock state from one state to another.
Upgrade: An S(A) can be upgraded to X(A) if Ti is the only transaction holding the S-lock on element A.
Downgrade: We may downgrade X(A) to S(A) when we feel that we no longer want to write on data-
item A. As we were holding the X-lock on A, we need not check any conditions.
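Many SQL databases expose these two modes through explicit statements; a hedged sketch in
MySQL/InnoDB-style syntax (table and column names are hypothetical):

    -- Shared (S) lock: other transactions may still read, but not write, the row
    SELECT balance FROM account WHERE acc_no = 1 LOCK IN SHARE MODE;

    -- Exclusive (X) lock: no other transaction may acquire any lock on the row
    SELECT balance FROM account WHERE acc_no = 1 FOR UPDATE;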

So, by now we have been introduced to the types of locks and how to apply them. But wait: if our
problems could have been avoided just by applying locks, life would have been so simple! If you have
done Process Synchronization under OS, you must be familiar with two persistent problems:
starvation and deadlock! We’ll be discussing them shortly, but just so you know, we have to apply
locks, and they must follow a set of protocols to avoid such undesirable problems. Shortly we’ll use
2-Phase Locking (2-PL), which uses the concept of locks to guarantee serializability. So, applying simple
locking alone, we may not always produce serializable results; it may lead to deadlock or inconsistency.
Problem With Simple Locking…
Consider the Partial Schedule:
        T1                  T2

1   lock-X(B)
2   read(B)
3   B := B - 50
4   write(B)
5                       lock-S(A)
6                       read(A)
7                       lock-S(B)
8   lock-X(A)
9   ……                  ……

Deadlock – consider the above execution phase. Now, T1 holds an exclusive lock over B, and T2 holds
a shared lock over A. In statement 7, T2 requests a lock on B, while in statement 8, T1 requests a lock
on A. As you may notice, this imposes a deadlock, as neither can proceed with its
execution.
Starvation – is also possible if concurrency control manager is badly designed. For example: A
transaction may be waiting for an X-lock on an item, while a sequence of other transactions request
and are granted an S-lock on the same item. This may be avoided if the concurrency control manager
is properly designed.
By now you should be familiar with why we study concurrency control protocols, with the basics of
lock-based protocols, and with the problems of simple locking.
Next we’ll discuss 2-PL and its categories and implementation, along with the advantages and pitfalls of
using them. Questions on lock-based protocols are common in GATE; we’ll also further discuss
graph-based and timestamp protocols, and some questions on the Thomas Write Rule.

10. What is transaction? Explain in detail the stages and properties of transaction. (2016)
A transaction is a set of changes that must all be made together. It is a program unit whose
execution may or may not change the contents of a database. A transaction is executed as a single
unit. If the database was in a consistent state before a transaction, then after the execution of the
transaction the database must also be in a consistent state. For example, a transfer of money from one
bank account to another requires two changes to the database; both must succeed or fail together.
Transaction Properties
There are four important properties of transaction that a DBMS must ensure to maintain data in the
case of concurrent access and system failures. These are:
Atomicity: (all or nothing)
A transaction is said to be atomic if it always executes all its actions in one step or does not
execute any actions at all. It means either all or none of the transaction's operations are performed.
Consistency: (No violation of integrity constraints)
A transaction must preserve the consistency of a database after the execution. The DBMS assumes
that this property holds for each transaction. Ensuring this property of a transaction is the
responsibility of the user.
Isolation: (concurrent changes invisibles)
The transactions must behave as if they are executed in isolation. It means that if several
transactions are executed concurrently, the results must be the same as if they were executed serially in
some order. The data used during the execution of a transaction cannot be used by a second
transaction until the first one is completed.
Durability: (committed update persist)
The effect of completed or committed transactions should persist even after a crash. It means once
a transaction commits, the system must guarantee that the result of its operations will never be lost,
in spite of subsequent failures.
The acronym ACID is sometimes used to refer above four properties of transaction that we have
presented here: Atomicity, Consistency, Isolation, and Durability.
Example
In order to understand above properties consider the following example:
Let Ti be a transaction that transfers Rs. 50 from account A to account B. This transaction can be
defined as:
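In the Read/Write notation used in the discussion below (a and b are local variables holding the
values read; this is a reconstruction of the standard definition):

    Ti: Read(A, a);
        a := a - 50;
        Write(A, a);
        Read(B, b);
        b := b + 50;
        Write(B, b);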

Atomicity
Suppose that, just prior to the execution of transaction Ti, the values of accounts A and B are Rs.1000 and
Rs.2000.
Now, suppose that during the execution of Ti, a power failure occurred that prevented Ti from
completing successfully. The point of failure may be after the completion of Write(A, a) and before
Write(B, b). It means that the changes in A are performed but not in B. Thus the values of accounts A
and B are Rs.950 and Rs.2000 respectively. We have lost Rs.50 as a result of this failure.
Now, our database is in an inconsistent state.
The reason for this inconsistent state is that our transaction completed partially and we saved the
changes of an uncommitted transaction. So, in order to get back to a consistent state, the database must be
restored to its original values, i.e. A to Rs.1000 and B to Rs.2000; this leads to the concept of atomicity
of a transaction. It means that in order to maintain the consistency of the database, either all or none of a
transaction's operations are performed.
In order to maintain atomicity of transaction, the database system keeps track of the old values of
any write and if the transaction does not complete its execution, the old values are restored to make
it appear as the transaction never executed.
Consistency
The consistency requirement here is that the sum of A and B must be unchanged by the execution of
the transaction. Without the consistency requirement, money could be created or destroyed by the
transaction. It can be verified easily that, if the database is consistent before an execution of the
transaction, the database remains consistent after the execution of the transaction.
Ensuring consistency for an individual transaction is the responsibility of the application programmer
who codes the transaction.
Isolation
If several transactions are executed concurrently (or in parallel), then each transaction must behave
as if it were executed in isolation. It means that concurrent execution does not result in an inconsistent
state.
For example, consider another transaction T2, which has to display the sum of accounts A and B.
Then, its result should be Rs.3000.
Let’s suppose that both T1 and T2 execute concurrently; their schedule is shown below:
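One interleaving consistent with the Rs.2950 result described below would be (a sketch, using the
account values from the atomicity example):

        T1                      T2
    Read(A, a)
    a := a - 50
    Write(A, a)                              -- A is now 950
                            Read(A, x)
                            Read(B, y)
                            display(x + y)   -- shows 950 + 2000 = 2950
    Read(B, b)
    b := b + 50
    Write(B, b)                              -- B is now 2050; final sum 3000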

The above schedule results in inconsistency of the database: it shows Rs.2950 as the sum of accounts A and
B instead of Rs.3000. The problem occurs because the second concurrently running transaction, T2, reads
A and B at an intermediate point and computes their sum, which yields an inconsistent value. The isolation
property demands that the data used during the execution of a transaction cannot be used by a
second transaction until the first one is completed.
A solution to the problem of concurrently executing transactions is to execute each transaction
serially, that is, one after the other. However, concurrent execution of transactions provides
significant performance benefits, so other solutions have been developed that allow multiple transactions
to execute concurrently.
Durability
Once the execution of the transaction completes successfully, and the user who initiated the
transaction has been notified that the transfer of funds has taken place, it must be the case that no
system failure will result in a loss of data corresponding to this transfer of funds.
The durability property guarantees that, once a transaction completes successfully all the updates
that it carried out on the database persist, even if there is a system failure after the transaction
completes execution. Ensuring durability is the responsibility of a component of the database system
called the recovery-management component.
States of Transaction
A transaction must be in one of the following states:
 Active: the initial state, the transaction stays in this state while it is executing.
 Partially committed: after the final statement has been executed.
 Failed: when the normal execution can no longer proceed.
 Aborted: after the transaction has been rolled back and the database has been restored to its
state prior to the start of the transaction.
 Committed: after successful completion.
The state diagram corresponding to a transaction is shown in Figure.
We say that a transaction has committed only if it has entered the committed state. Similarly, we say
that a transaction has aborted only if it has entered the aborted state. A transaction is said to have
terminated if it has either committed or aborted.
A transaction starts in the active state. When it finishes its final statement, it enters the partially
committed state. At this point, the transaction has completed its execution, but it is still possible
that it may have to be aborted, since the actual output may still be temporarily residing in main
memory and thus a hardware failure may preclude its successful completion.
The database system then writes out enough information to disk that, even in the event of a failure,
the updates performed by the transaction can be recreated when the system restarts after the
failure. When the last of this information is written out, the transaction enters the committed state.

11. Define and describe transaction management. Hence discuss the concept of Serializability and
locks.(2015)
Transaction Management
A transaction in Oracle begins when the first executable SQL statement is encountered.
An executable SQL statement is a SQL statement that generates calls to an instance, including DML
and DDL statements.
When a transaction begins, Oracle assigns the transaction to an available undo tablespace to record
the rollback entries for the new transaction.
A transaction ends when any of the following occurs:
 A user issues a COMMIT or ROLLBACK statement without a SAVEPOINT clause.
 A user runs a DDL statement such as CREATE, DROP, RENAME, or ALTER. If the current transaction
contains any DML statements, Oracle first commits the transaction, and then runs and commits the
DDL statement as a new, single-statement transaction.
 A user disconnects from Oracle. The current transaction is committed.
 A user process terminates abnormally. The current transaction is rolled back.
After one transaction ends, the next executable SQL statement automatically starts the
following transaction.
Commit Transactions
Committing a transaction means making permanent the changes performed by the SQL statements
within the transaction.
Before a transaction that modifies data is committed, the following has occurred:
 Oracle has generated undo information. The undo information contains the old data values
changed by the SQL statements of the transaction.
 Oracle has generated redo log entries in the redo log buffer of the SGA. The redo log record
contains the change to the data block and the change to the rollback block. These changes may
go to disk before a transaction is committed.
 The changes have been made to the database buffers of the SGA. These changes may go to
disk before a transaction is committed.
Rollback of Transactions
Rolling back means undoing any changes to data that have been performed by SQL statements
within an uncommitted transaction. Oracle uses undo tablespaces (or rollback segments) to store
old values. The redo log contains a record of changes.
Oracle lets you roll back an entire uncommitted transaction. Alternatively, you can roll back the
trailing portion of an uncommitted transaction to a marker called a savepoint.
All types of rollbacks use the same procedures:
 Statement-level rollback (due to statement or deadlock execution error)
 Rollback to a savepoint
 Rollback of a transaction due to user request
 Rollback of a transaction due to abnormal process termination
 Rollback of all outstanding transactions when an instance terminates abnormally
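A short sketch of these mechanisms in SQL (Oracle-style syntax; the table, column, and savepoint
names are hypothetical):

    UPDATE account SET balance = balance - 100 WHERE acc_no = 1;
    SAVEPOINT after_debit;                 -- marker within the transaction
    UPDATE account SET balance = balance + 100 WHERE acc_no = 2;
    ROLLBACK TO SAVEPOINT after_debit;     -- undoes only the trailing portion
    COMMIT;                                -- makes the surviving changes permanent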
Serializability is the classical concurrency scheme. It ensures that a schedule for executing
concurrent transactions is equivalent to one that executes the transactions serially in some
order. It assumes that all accesses to the database are done using read and write operations. A
schedule is called "correct" if we can find a serial schedule that is "equivalent" to it. Given a
set of transactions T1...Tn, two schedules S1 and S2 of these transactions are equivalent if the
following conditions are satisfied:
 Read-Write Synchronization: If a transaction reads a value written by another transaction in
one schedule, then it also does so in the other schedule.
 Write-Write Synchronization: If a transaction overwrites the value of another transaction in
one schedule, it also does so in the other schedule.

Lock
A lock is a variable associated with a data item that describes the status of the item with respect to
possible operations that can be applied to it. Generally, there is one lock for each data item in
the database. Locks are used as a means of synchronizing the access by concurrent transactions to
the database item.
Types of Locks
Several types of locks are used in concurrency control. To introduce locking concepts gradually, we
first discuss binary locks, which are simple but restrictive and so are not used in practice. We then
discuss shared/exclusive locks, which provide more general locking capabilities and are used in
practical database locking schemes.
Binary Locks
A binary lock can have two states or values: locked and unlocked.
A distinct lock is associated with each database item A. If the value of the lock on A is 1,
item A cannot be accessed by a database operation that requests the item. If the value of the lock
on A is 0, then the item can be accessed when requested. We refer to the current value of the lock
associated with item A as LOCK(A). Two operations, lock_item and unlock_item, are used
with binary locking. A transaction requests access to an item A by first issuing a lock_item(A)
operation. If LOCK(A) = 1, the transaction is forced to wait. If LOCK(A) = 0, it is set to 1 (the
transaction locks the item) and the transaction is allowed to access item A. When the transaction is
through using the item, it issues an unlock_item(A) operation, which sets LOCK(A) to 0 (unlocks the
item) so that A may be accessed by other transactions. Hence a binary lock enforces mutual exclusion
on the data item.
Share/Exclusive (for Read/Write) Locks
We should allow several transactions to access the same item A if they all access A for reading
purposes only. However, if a transaction is to write an item A, it must have exclusive access to A. For
this purpose, a different type of lock, called a multiple-mode lock, is used. In this scheme,
shared/exclusive or read/write locks are used.

Locking operations
There are three locking operations, called read_lock(A), write_lock(A) and unlock(A), represented as
lock-S(A), lock-X(A) and unlock(A) (here, S indicates a shared lock, X an exclusive lock), that can be
performed on a data item. A lock associated with an item A, LOCK(A), now has three possible states:
"read-locked", "write-locked", or "unlocked". A read-locked item is also called a share-locked item,
because other transactions are allowed to read the item, whereas a write-locked item is called
exclusive-locked, because a single transaction exclusively holds the lock on the item.
Compatibility of Locks
Suppose that A and B are two different locking modes. If a transaction T1 requests a lock of
mode A on item Q, on which transaction T2 currently holds a lock of mode B, and T1 can be
granted the lock in spite of the presence of the mode B lock, then we say mode A is compatible with
mode B. Such compatibility can be shown in a matrix.

12. Present the overview of three tier of client server architecture. What are the advantages of
segregating the three tier? (2015, 2012, 2010)
3-tier Architecture
A 3-tier architecture separates its tiers from each other based on the complexity of the users and
how they use the data present in the database. It is the most widely used architecture to design a
DBMS.

 Database (Data) Tier − At this tier, the database resides along with its query processing
languages. We also have the relations that define the data and their constraints at this level.
 Application (Middle) Tier − At this tier reside the application server and the programs that
access the database. For a user, this application tier presents an abstracted view of the
database. End-users are unaware of any existence of the database beyond the application. At
the other end, the database tier is not aware of any other user beyond the application tier.
Hence, the application layer sits in the middle and acts as a mediator between the end-user
and the database.
 User (Presentation) Tier − End-users operate on this tier and they know nothing about any
existence of the database beyond this layer. At this layer, multiple views of the database can
be provided by the application. All views are generated by applications that reside in the
application tier.
Multiple-tier database architecture is highly modifiable, as almost all its components are
independent and can be changed independently.
Advantages of using three-tier architecture:
 It makes a logical separation between the business layer, the presentation layer, and the database
layer.
 Migration to new graphical environments is faster.
 As each tier is independent it is possible to enable parallel development of each tier by using
different sets of developers.
 Easy to maintain and understand large project and complex project.
 Since the application layer sits between the database layer and the presentation layer, the database
layer will be more secure and the client will not have direct access to the database.
 Posted data from presentation layer can be verified or validated at application layer before
updating it to the database.
 Database Security can be provided at application layer.
 Application layer or middle layer or business layer can be a protection shield to the database.
 New rules or new validation rules can be defined at any time, and changes made to the middle layer
will not affect the presentation layer.
 Logic defined once within the business layer can be shared among any number of
components in the presentation layer.
 We can expose only the necessary methods from the business layer in the presentation layer.
 We can hide unnecessary methods of the business layer from the presentation layer.
 Easy to apply object-oriented concepts.
 Easy to update data provider queries.
13. Explain the rules that a distributed database should follow. (2014, 2010)
These rules apply to any database system that manages stored data using only its
relational capabilities. This is the foundation rule, which acts as a base for all the other rules.
Rule 1: Information Rule
The data stored in a database, may it be user data or metadata, must be a value of some table cell.
Everything in a database must be stored in a table format.
Rule 2: Guaranteed Access Rule
Every single data element (value) is guaranteed to be accessible logically with a combination of
table-name, primary-key (row value), and attribute-name (column value). No other means, such as
pointers, can be used to access data.
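In SQL terms, Rule 2 says every value is reachable by naming a table, a primary-key value, and a
column; a small sketch (names reuse the earlier Student example):

    SELECT Email             -- attribute name
    FROM   Student           -- table name
    WHERE  StudID = 2;       -- primary-key (row) value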
Rule 3: Systematic Treatment of NULL Values
The NULL values in a database must be given a systematic and uniform treatment. This is a very
important rule, because a NULL can be interpreted as one of the following − data is missing, data is not
known, or data is not applicable.
Rule 4: Active Online Catalog
The structure description of the entire database must be stored in an online catalog, known as data
dictionary, which can be accessed by authorized users. Users can use the same query language to
access the catalog which they use to access the database itself.
Rule 5: Comprehensive Data Sub-Language Rule
A database can only be accessed using a language having linear syntax that supports data definition,
data manipulation, and transaction management operations. This language can be used directly or
by means of some application. If the database allows access to data without any help of this
language, then it is considered as a violation.
Rule 6: View Updating Rule
All the views of a database, which can theoretically be updated, must also be updatable by the
system.
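A sketch of a view that most systems can update (names reuse the Teacher/Department example
above; simple single-table views like this one are typically updatable):

    CREATE VIEW science_teachers AS
        SELECT TeacherID, Fname, Lname
        FROM   Teacher
        WHERE  DeptCode = '001';

    -- The system applies this change to the underlying Teacher table
    UPDATE science_teachers SET Fname = 'Michael' WHERE TeacherID = 'B009';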
Rule 7: High-Level Insert, Update, and Delete Rule
A database must support high-level insertion, updation, and deletion. This must not be limited to a
single row, that is, it must also support union, intersection and minus operations to yield sets of data
records.
Rule 8: Physical Data Independence
The data stored in a database must be independent of the applications that access the database.
Any change in the physical structure of a database must not have any impact on how the data is
being accessed by external applications.
Rule 9: Logical Data Independence
The logical data in a database must be independent of its user’s view (application). Any change in
logical data must not affect the applications using it. For example, if two tables are merged or one is
split into two different tables, there should be no impact or change on the user application. This is
one of the most difficult rules to apply.
Rule 10: Integrity Independence
A database must be independent of the application that uses it. All its integrity constraints can be
independently modified without the need of any change in the application. This rule makes a
database independent of the front-end application and its interface.
Rule 11: Distribution Independence
The end-user must not be able to see that the data is distributed over various locations. Users
should always get the impression that the data is located at one site only. This rule has been
regarded as the foundation of distributed database systems.
Rule 12: Non-Subversion Rule
If a system has an interface that provides access to low-level records, then the interface must not be
able to subvert the system and bypass security and integrity constraints.

14. Explain the component of distributed database. (2014)


Components of DBMS
A DBMS has several components, each performing very significant tasks in the database
management system environment. Below is a list of components within the database and its
environment.

Software
This is the set of programs used to control and manage the overall database. This includes the DBMS
software itself, the Operating System, the network software being used to share the data among
users, and the application programs used to access data in the DBMS.

Hardware
Consists of a set of physical electronic devices such as computers, I/O devices, storage devices, etc.,
this provides the interface between computers and the real world systems.

Data
Data is the most important component: the DBMS exists to collect, store, process and provide access
to it. The database contains both the actual or operational data and the metadata.
Procedures
These are the instructions and rules that govern how to use the DBMS and how to design and
run the database; documented procedures guide the users who operate and manage it.

Database Access Language


This is used to move data to and from the database: to enter new data, update existing data, or
retrieve required data from the database. The user writes a set of appropriate commands in a database
access language and submits these to the DBMS, which then processes the data and generates and
displays a set of results in a user-readable form.

Query Processor
This transforms the user queries into a series of low level instructions. This reads the online user’s
query and translates it into an efficient series of operations in a form capable of being sent to the
run time data manager for execution.

Run Time Database Manager


Sometimes referred to as the database control system, this is the central software component of the
DBMS that interfaces with user-submitted application programs and queries, and handles database
access at run time. Its function is to carry out the operations specified in users' queries. It provides
controls to maintain the consistency, integrity and security of the data.

Data Manager
Also called the cache manager, this is responsible for handling data in the database and for providing a
recovery mechanism that allows the system to recover the data after a failure.

Database Engine
The core service for storing, processing, and securing data, this provides controlled access and rapid
transaction processing to address the requirements of the most demanding data consuming
applications. It is often used to create relational databases for online transaction processing or
online analytical processing data.

Data Dictionary
This is a reserved space within a database used to store information about the database itself. A
data dictionary is a set of read-only tables and views containing information about the
data used in the enterprise, ensuring that the database representation of the data follows one standard
as defined in the dictionary.

Report Writer
Also referred to as the report generator, it is a program that extracts information from one or more
files and presents the information in a specified format. Most report writers allow the user to select
records that meet certain conditions and to display selected fields in rows and columns, or also
format the data into different charts.

15. Write the advantage of distributed database over centralized database. (2014)
Comparison between centralized and distributed DBMS:
There are many aspects that let us make a comparison between centralized and distributed DBMS:
 A database management system is any software that manages and controls the storage,
organization, security, retrieval and integrity of data in a specific database, whereas a DDBMS
consists of a single database that is divided into many fragments. Each fragment is stored
on one or more computers and controlled by an independent DBMS (Connolly & Begg,
2004).
 In a DDBMS, the data is distributed across the network computers; the data is stored on many
sites and is under the management responsibility of the DDBMS. In a centralized DBMS, data is
stored and controlled at a central site.
 Both DDBMS and centralized DBMS provide access to the database using the same interface,
but for this function a centralized DBMS faces fewer complications than a DDBMS.
 For distributing data over a network we can use replication or fragmentation. The objective of
replication and fragmentation is to make this allocation transparent, i.e. to keep the details
of implementation hidden from users. In a centralized DBMS there is no need for such
transparency.
 In DDBMS design we find three issues which do not arise in centralized DBMS design. These
issues are: how to split the database into fragments, which fragments to replicate, and at which
locations to place these fragments.
 Consequently, a centralized DBMS is less sophisticated than a DDBMS, because it does not
support the organizational structure of today’s widely distributed enterprises, whereas a DDBMS
is more reactive and reliable (Blurtit, 2010).

16. Explain the different type of failure that can occur in distributed database. (2014)
Designing a reliable system that can recover from failures requires identifying the types of failures
with which the system has to deal. In a distributed database system, we need to deal with four types
of failures: transaction failures (aborts), site (system) failures, media (disk) failures, and
communication line failures. Some of these are due to hardware and others are due to software.

1. Transaction Failures: Transactions can fail for a number of reasons. Failure can be due to an error
in the transaction caused by incorrect input data as well as the detection of a present or potential
deadlock. Furthermore, some concurrency control algorithms do not permit a transaction to
proceed or even to wait if the data that they attempt to access are currently being accessed by
another transaction. This might also be considered a failure. The usual approach to take in cases of
transaction failure is to abort the transaction, thus resetting the database to its state prior to the
start of this transaction.
2. Site (System) Failures: The reasons for system failure can be traced back to a hardware or to a
software failure. The system failure is always assumed to result in the loss of main memory contents.
Therefore, any part of the database that was in main memory buffers is lost as a result of a system
failure. However, the database that is stored in secondary storage is assumed to be safe and correct.
In distributed database terminology, system failures are typically referred to as site failures, since
they result in the failed site being unreachable from other sites in the distributed system. We
typically differentiate between partial and total failures in a distributed system. Total failure refers
to the simultaneous failure of all sites in the distributed system; partial failure indicates the failure of
only some sites while the others remain operational.

3. Media Failures: Media failure refers to the failures of the secondary storage devices that store the
database. Such failures may be due to operating system errors,as well as to hardware faults such as
head crashes or controller failures. The important point is that all or part of the database that is on
the secondary storage is considered to be destroyed and inaccessible. Duplexing of disk storage and
maintaining archival copies of the database are common techniques that deal with this sort of
catastrophic problem. Media failures are frequently treated as problems local to one site and
therefore not specifically addressed in the reliability mechanisms of distributed DBMSs.

4. Communication Failures: There are a number of types of communication failures. The most
common ones are the errors in the messages, improperly ordered messages, lost messages, and
communication line failures. The first two errors are the responsibility of the computer network; we
will not consider them further. Therefore, in our discussions of distributed DBMS reliability, we
expect the underlying computer network hardware and software to ensure that two messages sent
from a process at some originating site to another process at some destination site are delivered
without error and in the order in which they were sent. Lost or undeliverable messages are typically
the consequence of communication line failures or (destination) site failures. If a communication line
fails, in addition to losing the message(s) in transit, it may also divide the network into two or more
disjoint groups. This is called network partitioning. If the network is partitioned, the sites in each
partition may continue to operate. In this case, executing transactions that access data stored in
multiple partitions becomes a major issue.

17. What are the recovery techniques that are followed to recover? (2014)
DATABASE RECOVERY IN DBMS AND ITS TECHNIQUES
Classification of failure:
To determine where the problem has occurred, we generalize failures into the following classes:
 Transaction failure
 System crash
 Disk failure

Types of Failure

1. Transaction failure: A transaction has to abort when it fails to execute or when it reaches a point
from where it cannot proceed any further. This is called transaction failure, where only a few
transactions or processes are affected. The reasons for transaction failure are:
 Logical errors
 System errors
1. Logical errors: Where a transaction cannot complete because of a code error or an internal error
condition.
2. System errors: Where the database system itself terminates an active transaction because the
DBMS is not able to execute it, or it has to stop due to some system condition. For example, in case
of deadlock or resource unavailability, the system aborts an active transaction.
3. System crash: There are problems external to the system that may cause the system to stop
abruptly and crash. For instance, an interruption in the power supply may cause the failure of the
underlying hardware or software. Examples include OS errors.
4. Disk failure: In the early days of technology evolution, disk failure was a common problem, where
hard-disk drives or storage drives used to fail frequently. Disk failures include the formation of
bad sectors, unreachability of the disk, disk crashes, and any other failure that destroys all or a
section of disk storage.
Recovery and Atomicity:
When a system crashes, it may have many transactions being executed and various files
opened for them to modify data items. Transactions are made of multiple operations, which are
atomic in nature. But according to the ACID properties of a database, atomicity of transactions as a
whole must be maintained, that is, either all the operations are executed or none.
When a database management system recovers from a crash, it should maintain the following:
 It should check the states of all the transactions that were being executed.
 A transaction may be in the middle of some operation; the database management system
must ensure the atomicity of the transaction in this case.
 It should check whether the transaction can be completed now or needs to be rolled back.
 No transaction should be allowed to leave the database management system in an
inconsistent state.
There are two types of techniques which can help a database management system in
recovering as well as maintaining the atomicity of a transaction:
 Maintaining the log of each transaction, and writing it onto stable storage before
actually modifying the database.
 Maintaining shadow paging, where the changes are done on volatile memory, and the
actual database is updated later.

Log-based recovery (Manual Recovery):

A log is a sequence of records that maintains a record of the actions performed by a transaction.
It is important that the log is written before the actual modification and is stored on a stable
storage medium, which is failsafe. Log-based recovery works as follows:
 The log file is kept on a stable storage medium.
 When a transaction enters the system and starts execution, it writes a log record about it.
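To make this concrete, here is a minimal undo-logging sketch in Python; the in-memory database
dictionary, the log list standing in for a stable log file, and the write/undo helpers are all
hypothetical simplifications, not a real DBMS interface.

# Minimal undo-log sketch: the old value is appended to the log BEFORE
# the data item is modified, so an abort can restore the previous state.
database = {"A": 100, "B": 200}   # hypothetical data items
log = []                          # stands in for a stable log file

def write(txn, item, new_value):
    log.append((txn, item, database[item]))   # log the old value first
    database[item] = new_value                # then apply the update

def undo(txn):
    # Scan the log backwards, restoring the old values written by txn.
    for t, item, old_value in reversed(log):
        if t == txn:
            database[item] = old_value

write("T1", "A", 50)
undo("T1")                # T1 aborts before committing
print(database["A"])      # 100: the uncommitted update was rolled back

Because the log record reaches stable storage before the data item changes, the same backward
scan can be replayed after a crash to roll back incomplete transactions.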
Recovery with concurrent transactions (Automated Recovery):
When more than one transaction is being executed in parallel, the logs are interleaved. At the time
of recovery, it would become hard for the recovery system to backtrack all the logs and then start
recovering. To ease this situation, most modern DBMSs use the concept of 'checkpoints'. Automated
Recovery is of three types:
 Deferred Update Recovery
 Immediate Update Recovery
 Shadow Paging
18. What is distributed DBMS? Explain why replication of data is useful in distributed DBMS.
(2013)
In a distributed database, there are a number of databases that may be geographically distributed all
over the world. A distributed DBMS manages the distributed database in a manner so that it appears
as one single database to users.

A distributed database is a collection of multiple interconnected databases, which are spread
physically across various locations and communicate via a computer network.
Features
 Databases in the collection are logically interrelated with each other. Often they represent a
single logical database.
 Data is physically stored across multiple sites. Data in each site can be managed by a DBMS
independent of the other sites.
 The processors in the sites are connected via a network. They do not have any multiprocessor
configuration.
 A distributed database is not a loosely connected file system.
 A distributed database incorporates transaction processing, but it is not synonymous with a
transaction processing system.
Data Replication is the process of storing data at more than one site or node. It is useful in improving
the availability of data. It is simply copying data from a database on one server to another server
so that all the users can share the same data without any inconsistency. The result is a distributed
database in which users can access data relevant to their tasks without interfering with the work of
others.
 Data replication encompasses duplication of transactions on an ongoing basis, so that
the replica is in a consistently updated state and synchronized with the source. With
replication, data is available at multiple locations; without it, a particular relation has to reside
at only one location.
 There can be full replication, in which the whole database is stored at every site. There can also
be partial replication, in which some frequently used fragments of the database are replicated
and others are not.

19. Explain how DBMS can be secured. (2013)


Security Of DBMS
Security refers to activities and measures to ensure the confidentiality, integrity, and availability of
an information system and its main asset, data. It is important to understand that securing data
requires a comprehensive, company-wide approach.
• Confidentiality deals with ensuring that data is protected against unauthorized access, and if the
data are accessed by an authorized user, that the data are used only for an authorized purpose. In
other words, confidentiality entails safeguarding data against disclosure of any information that
would violate the privacy rights of a person or organization. Data must be evaluated and classified
according to the level of confidentiality: highly restricted (very few people have access), confidential
(only certain groups have access), and unrestricted (can be accessed by all users).
• Integrity, within the data security framework, is concerned with keeping data consistent and free
of errors or anomalies. The DBMS plays a pivotal role in ensuring the integrity of the data in the
database.
However, from the security point of view, integrity deals not only with the data in the database but
also with ensuring that organizational processes, users, and usage patterns maintain such integrity.
• Availability refers to the accessibility of data whenever required by authorized users and for
authorized purposes. To ensure data availability, the entire system (not only the data component)
must be protected from service degradation or interruption caused by any source (internal or
external).

20. Explain the need of concurrency control in transaction? Explain time stamp ordering protocol
for con currency control. (2013)
Need for Concurrency Control
Process of managing simultaneous operations on the database without having them interfere with
one another.
• Prevents interference when two or more users are accessing the database simultaneously and at
least one is updating data.
• Although two transactions may be correct in themselves, interleaving of operations may produce
an incorrect result.
Need
Several problems can occur when concurrent transactions execute in an uncontrolled manner.
1) The Lost Update Problem
This problem occurs when two transactions that access the same database items have their
operations interleaved in a way that makes the value of some database item incorrect.
Successfully completed update is overridden by another user.
Timestamp Ordering Protocol –
The main idea for this protocol is to order the transactions based on their Timestamps. A schedule in
which the transactions participate is then serializable and the only equivalent serial schedule
permitted has the transactions in the order of their Timestamp Values. Stating simply, the schedule
is equivalent to the particular Serial Order corresponding to the order of the Transaction timestamps.
The algorithm must ensure that, for each item accessed by conflicting operations in the schedule, the
order in which the item is accessed does not violate the ordering. To ensure this, two timestamp
values are maintained for each database item X.
 W_TS(X) is the largest timestamp of any transaction that executed write(X)successfully.
 R_TS(X) is the largest timestamp of any transaction that executed read(X)successfully.
Basic Timestamp Ordering –
Every transaction is issued a timestamp based on when it enters the system. Suppose, if an old
transaction Ti has timestamp TS(Ti), a new transaction Tj is assigned timestamp TS(Tj) such that TS(Ti)
< TS(Tj).The protocol manages concurrent execution such that the timestamps determine the
serializability order. The timestamp ordering protocol ensures that any conflicting read and write
operations are executed in timestamp order. Whenever some Transaction T tries to issue a
R_item(X) or a W_item(X), the Basic TO algorithm compares the timestamp of T with R_TS(X) &
W_TS(X) to ensure that the timestamp order is not violated. The Basic TO protocol is described in the
following two cases.
1. Whenever a Transaction T issues a W_item(X) operation, check the following conditions:

 If R_TS(X) > TS(T) or if W_TS(X) > TS(T), then abort and roll back T and reject the operation;
else,
 Execute the W_item(X) operation of T and set W_TS(X) to TS(T).
2. Whenever a Transaction T issues a R_item(X) operation, check the following conditions:

 If W_TS(X) > TS(T), then abort and roll back T and reject the operation; else
 If W_TS(X) <= TS(T), then execute the R_item(X) operation of T and set R_TS(X) to the larger of
TS(T) and the current R_TS(X).
3. Whenever the Basic TO algorithm detects two conflicting operations that occur in the incorrect
order, it rejects the later of the two operations by aborting the transaction that issued it.
Schedules produced by Basic TO are guaranteed to be conflict serializable. As already discussed,
using timestamps ensures that the schedule will be deadlock free.
4. One drawback of the Basic TO protocol is that cascading rollback is still possible. Suppose we
have transactions T1 and T2, and T2 has used a value written by T1. If T1 is aborted and
resubmitted to the system, then T2 must also be aborted and rolled back. So the problem of
cascading aborts still prevails.
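To make the checks concrete, here is a minimal sketch of the Basic TO tests in Python; the
module-level read_ts/write_ts dictionaries and the default timestamp of 0 are hypothetical
simplifications, not part of any real DBMS API.

read_ts = {}    # R_TS(X) per item
write_ts = {}   # W_TS(X) per item

def try_write(ts_t, x):
    # Reject if a younger transaction already read or wrote X.
    if read_ts.get(x, 0) > ts_t or write_ts.get(x, 0) > ts_t:
        return False                  # abort and roll back T
    write_ts[x] = ts_t                # execute write, set W_TS(X) = TS(T)
    return True

def try_read(ts_t, x):
    # Reject if a younger transaction already wrote X.
    if write_ts.get(x, 0) > ts_t:
        return False                  # abort and roll back T
    read_ts[x] = max(read_ts.get(x, 0), ts_t)   # update R_TS(X)
    return True

print(try_write(5, "X"))   # True: first writer of X
print(try_read(3, "X"))    # False: W_TS(X) = 5 > 3, so the older T aborts

Running the two calls also shows why a restarted transaction is given a new, larger timestamp:
with its old timestamp it would fail the same test again.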

21. Explain : a. Query optimization b. Deadlock detection (2013)


Query Optimization: A single query can be executed through different algorithms or re-written in
different forms and structures. Hence, the question of query optimization comes into the picture –
Which of these forms or pathways is the most optimal? The query optimizer attempts to determine
the most efficient way to execute a given query by considering the possible query plans.

Importance: The goal of query optimization is to reduce the system resources required to fulfill a
query, and ultimately provide the user with the correct result set faster.
 First, it provides the user with faster results, which makes the application seem faster to the
user.
 Secondly, it allows the system to service more queries in the same amount of time, because
each request takes less time than unoptimized queries.
 Thirdly, query optimization ultimately reduces the amount of wear on the hardware (e.g. disk
drives), and allows the server to run more efficiently (e.g. lower power consumption, less
memory usage).

There are broadly two ways a query can be optimized:


1. Analyze and transform equivalent relational expressions: Try to minimize the tuple and column
counts of the intermediate and final query processes (discussed here).
2. Using different algorithms for each operation: These underlying algorithms determine how
tuples are accessed from the data structures they are stored in, indexing, hashing, data
retrieval and hence influence the number of disk and block accesses (discussed in query
processing).

Deadlock Detection
1. If resources have a single instance:
In this case, for deadlock detection we can run an algorithm to check for a cycle in the Resource
Allocation Graph. The presence of a cycle in the graph is a sufficient condition for deadlock. For
example, if resources R1 and R2 have single instances and the graph contains the cycle
R1 -> P1 -> R2 -> P2 -> R1, deadlock is confirmed.
2. If there are multiple instances of resources:
Detection of a cycle is a necessary but not sufficient condition for deadlock; in this case the
system may or may not be in deadlock, depending on the situation.
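A minimal sketch of this cycle check in Python, using depth-first search; the adjacency-list
encoding of the assignment/wait-for edges below is a hypothetical example, not a standard API.

# Edges: R -> P means resource R is assigned to process P;
# P -> R means process P is waiting for resource R.
graph = {
    "R1": ["P1"], "P1": ["R2"],
    "R2": ["P2"], "P2": ["R1"],   # P2 requests R1, closing a cycle
}

def has_cycle(g):
    visited, on_stack = set(), set()
    def dfs(node):
        visited.add(node)
        on_stack.add(node)
        for nxt in g.get(node, []):
            if nxt in on_stack:               # back edge found: a cycle
                return True
            if nxt not in visited and dfs(nxt):
                return True
        on_stack.discard(node)
        return False
    return any(dfs(n) for n in list(g) if n not in visited)

print(has_cycle(graph))   # True: with single-instance resources, deadlock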

22. Write short note on: a. Data mining and Data warehousing b. Serializability (2013,2012)
a.Data warehouse
A data warehouse is a technique for collecting and managing data from varied sources to provide
meaningful business insights. It is a blend of technologies and components which allows the
strategic use of data.
Data Warehouse is electronic storage of a large amount of information by a business which is
designed for query and analysis instead of transaction processing. It is a process of transforming data
into information and making it available to users for analysis.
Data Mining
Data mining is looking for hidden, valid, and potentially useful patterns in huge data sets. Data
Mining is all about discovering unsuspected/ previously unknown relationships amongst the data.
It is a multi-disciplinary skill that uses machine learning, statistics, AI and database technology.
The insights extracted via Data mining can be used for marketing, fraud detection, and scientific
discovery, etc.

Data Mining vs Data Warehouse: Key Differences

 Data mining is the process of analyzing data to discover unknown patterns; a data warehouse
is a database system designed for analytical rather than transactional work.
 Data mining is a method of comparing large amounts of data to find the right patterns; data
warehousing is a method of centralizing data from different sources into one common
repository.
 Data mining is usually done by business users with the assistance of engineers; data
warehousing is a process which needs to occur before any data mining can take place.
 Data mining is considered a process of extracting data from large data sets; data warehousing,
on the other hand, is the process of pooling all relevant data together.
 One of the most important benefits of data mining techniques is the detection and
identification of errors in the system; one of the pros of a data warehouse is its ability to
update consistently, which makes it ideal for business owners who want the best and latest
features.
 Data mining helps to create suggestive patterns of important factors, such as the buying habits
of customers, products and sales, so that companies can make the necessary adjustments in
operation and production; a data warehouse adds extra value to operational business systems
such as CRM systems when the warehouse is integrated.
 Data mining techniques are never 100% accurate and may cause serious consequences in
certain conditions; with a data warehouse, there is a great chance that data required for
analysis by the organization may not be integrated into the warehouse, which can easily lead
to loss of information.
 Information gathered through data mining can be misused against a group of people; data
warehouses are created for huge IT projects and involve high-maintenance systems which can
impact the revenue of medium- to small-scale organizations.
 After successful initial queries, users may ask more complicated queries, which would increase
the workload; a data warehouse is complicated to implement and maintain.
 Organisations can benefit from this analytical tool by being equipped with pertinent and usable
knowledge-based information; a data warehouse stores a large amount of historical data which
helps users to analyze different time periods and trends for making future predictions.
 Organisations need to spend a lot of resources on training and implementation, and data
mining tools work in different manners due to the different algorithms employed in their
design; in a data warehouse, data pooled from multiple sources needs to be cleaned and
transformed, which can be a challenge.
 Data mining methods are cost-effective and efficient compared to other statistical data
applications; a data warehouse's responsibility is to simplify every type of business data, and
most of the work on the user's part is inputting the raw data.
 Another critical benefit of data mining techniques is the identification of errors which can lead
to losses, for example detecting a drop in sales from generated data; a data warehouse allows
users to access critical data from a number of sources in a single place, saving the user's time
in retrieving data from multiple sources.
 Data mining helps to generate actionable strategies built on data insights; once information is
entered into a data warehouse system, it is unlikely to be lost track of again, and a quick
search helps find the right statistical information.

23. Explain the following: a. Serializability b. Recovery technique (2013)


b. Recovery Techniques:
1. Salvation program: Run after a crash to attempt to restore the system to a valid state. No
recovery data used. Used when all other techniques fail or were not used. Good for cases
where buffers were lost in a crash and one wants to reconstruct what was lost...(4,5)
2. Incremental dumping: Modified files copied to archive after job completed or at intervals.
(3,4)
3. Audit trail: Sequences of actions on files are recorded. Optimal for "backing out" of
transactions. (Ideal if trail is written out before changes). (1,2,3)
4. Differential files: Separate file is maintained to keep track of changes, periodically merged with
the main file. (2,3)
5. Backup/current version: Present files form the current version of the database. Files
containing previous values form a consistent backup version. (2,3)
6. Multiple copies: Multiple active copies of each file are maintained during normal operation of
the database. In cases of failure, comparison between the versions can be used to find a
consistent version. (6)
7. Careful replacement: Nothing is updated in place, with the original only being deleted after
operation is complete. (2,6)
(Parens and numbers are used to indicate which levels from above are supported by each
technique).

Combinations of two techniques can be used to offer similar protection against different kinds
of failures. The techniques above, when implemented, force changes to:
 The way data is structured (4,5,6).
 The way data is updated and manipulated (7).
 nothing (available as utilities) (1,2,3).

24. Explain the following terms: a. Concurrency control technique b. Deadlocks (2011, 2010)
25. Consider a file system such as the one on your favorite OS.
A file system is a method of organizing files on physical media, such as hard disks, CDs, and flash
drives. In the Microsoft Windows family of operating systems, users are presented with several
different choices of file systems when formatting such media. These choices depend on the type of
media involved and the situations in which the media is being formatted. The most common file
systems are as follows:
 NTFS
 FAT
 exFAT
 HFS Plus
THE NTFS FILE SYSTEM
NTFS (short for New Technology File System) is a modern, well-formed file system that is most
commonly used by Windows Vista, 7 & 8. It has feature-rich, yet simple organization that allows it to
be used on very large volumes.
NTFS has the following properties:
 NTFS partitions can extend up to 16EB (about 16 million TB).
 Files stored to NTFS partitions can be as large as the partition.
 NTFS partitions occasionally become fragmented and should be defragmented every one to
two months.
 NTFS partitions can be read from and written to by Windows and Linux systems and, can only
be read from by Mac OS X systems (by default). Mac OS X, with the assistance of the NTFS-3G
driver, can write to NTFS partitions. Installation instructions for the NTFS-3G driver can be
found here: Mac OS X - Writing to NTFS drives
It is recommended that NTFS be used on all media whose use is primarily with modern Windows
systems. It should not be used for devices which need to be written to by Mac OS X systems or on
media that is used in devices which are not compatible with NTFS.

THE FAT FILE SYSTEM


The FAT (short for File Allocation Table) file system is a general purpose file system that is
compatible with all major operating systems (Windows, Mac OS X, and Linux/Unix). It has relatively
simple technical underpinnings, and was the default file system for all Windows operating systems
prior to Windows 2000. Because of its overly simplistic structure, FAT suffers from issues such as
over-fragmentation, file corruption, and limits to file names and size.
The FAT file system has the following properties:
 FAT partitions cannot extend beyond 2TB.
o NOTE: Windows cannot format a disc larger than 32 GB to FAT32, but Mac OS X can.
 Files stored to a FAT partition cannot exceed 4GB.
 FAT partitions need to be defragmented often to maintain reasonable performance.
 FAT partitions larger than 32GB are generally not recommended as that amount of space starts
to overwhelm FAT's overly simplistic organization structure.
FAT is generally only used for devices with small capacity where portability between operating
systems is paramount. When choosing a file system for a hard disk, FAT is not recommended
unless you are using an older version of Windows.
NOTE: This section refers to the FAT32 file system. Some early versions of Windows 95 used the
FAT16 file system, which had even more technical issues and stricter limitations. It is recommended
that FAT16 is never used on any modern media.

THE EXFAT FILE SYSTEM


The exFAT (Extended File Allocation Table) is a Microsoft file system that is compatible with
Windows and Mac OS 10.6+. It is also compatible with many media devices such as TVs and portable
media players.
exFAT has the following properties:
 exFAT partitions can extend to extremely large disk sizes. 512 TiB is the recommended
maximum.
 Files up to 16 EiB can be stored on an exFAT partition.
 exFAT is not compatible with Linux/Unix.
 exFAT partitions should be defragmented often.
 exFAT cannot pre-allocate disk space.

THE HFS PLUS FILE SYSTEM


HFS (Hierarchical File System) Plus is a file system developed by Apple for Mac OS X. It is also
referred to as Mac OS Extended.
HFS Plus has the following properties:
 Maximum volume is 8 EB (about 8 million TB).
 Files stored to HFS+ partitions can be as large as the partition.
 Windows users can read HFS+ but not write.
 Drivers are available that allow Linux users to read and write to HFS+ volumes.

THE EXT FILE SYSTEM


The extended file system was created to be used with the Linux kernel. EXT 4 is the most recent
version of EXT.
EXT4 has the following properties:
 EXT4 can support volumes up to 1 EiB.
 16 TB maximum file size.
 Red Hat recommends using XFS (not EXT4) for volumes over 100 TB.
 EXT4 is backwards compatible with EXT2 and EXT3.
 EXT4 can pre-allocate disk space.
 By default, Windows and Mac OS cannot read EXT file systems.

26. What are the steps involved in the creation and deletion of files, and in writing data to a file?
Creating a File
Many people create files using a text editor, but you can use the command cat to create files
without using/learning to use a text editor. To create a practice file (called firstfile) and enter one
line of text in it, type the following at the % prompt:
cat > firstfile
(Press the Enter/Return key.)
This is just a test.
(Press the Enter/Return key.)
Terminate file entry by typing Control-d on a line by itself. (Hold down the Control key and type d.)
On your screen, you will see:
% cat > firstfile
This is just a test.
^D
To examine the contents of a file you have just created, enter this at the % prompt:
cat firstfile
Removing a File
Use the rm command to remove a file. For example,
rm file3
deletes file3 and its contents. You may remove more than one file at a time by specifying a list of
files to be deleted. For example,
rm firstfile secondfile
You will be prompted to confirm whether you really want to remove the files:
rm: remove firstfile (y/n)? y
rm: remove secondfile (y/n)? n
Type y or yes to remove a file; type n or no to leave it intact.

27. Explain how the issues of atomicity and durability are relevant to the creation and deletion of
files, and to writing data to files? (2009)
File creation and deletion should be atomic: either the file is created (or removed) together with its
directory entry and its allocated blocks, or nothing happens at all; a crash in the middle must not
leave a half-created file or a directory entry pointing at freed blocks. Writing data to a file raises
durability concerns: the operating system normally buffers writes in main memory, so data an
application believes has been written can be lost if the system crashes before the buffers are
flushed to disk. This is why database systems force critical data, such as log records, to stable
storage before declaring an operation complete, mirroring the atomicity and durability guarantees
of the ACID properties.

28. Explain the difference between the terms serial schedule and serializable schedule? (2009)
Serial Schedule
• Transactions execute fully.
• One at a time.
• No interleaving.
• Different orders of execution may produce different
final values

Serializable Schedule
• Interleaved.
• Equivalent to SOME serial schedule.
• Equivalence does NOT mean “ending up with the same values
as”.
• Equivalence cannot depend on initial values of database items.
• Cannot depend on values written (the DB doesn't know the logic of the transaction).
• Depends only on the order of operations.
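For illustration (a hypothetical two-transaction example): with T1 = read(A), write(A) and
T2 = read(A), write(A), the interleaving read1(A), write1(A), read2(A), write2(A) is serializable
because it is equivalent to the serial order T1 then T2; the interleaving read1(A), read2(A),
write1(A), write2(A) is not serializable, since the conflicting operations of T1 and T2 are not
ordered consistently with any serial schedule.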

29. What is a recoverable schedule? Why is recoverability of schedules desirable? Are there any
circumstances under which it would be desirable to allow non-recoverable schedules? Explain
your answer. (2009)
 A recoverable schedule is one where, for each pair of transactions Ti and Tj such that Tj reads a
data item previously written by Ti, the commit operation of Ti appears before the commit
operation of Tj.
Consider the schedule below. T9 reads the value of A written by T8, so T9 is dependent on T8;
the schedule is non-recoverable if T9 commits before T8.

T8          T9
read(A)
write(A)
            read(A)
read(B)

 Suppose that the system allows T9 to commit immediately after executing its read(A)
instruction. Thus T9 commits before T8 does.
 Now suppose that T8 fails before it commits. Since T9 has read the value of data item A
written by T8, we must abort T9 to ensure transaction atomicity.
 However, T9 has already committed and cannot be aborted. Thus we have a situation where
it is impossible to recover correctly from the failure of T8.
 Recoverable schedules are desirable because failure of a transaction might otherwise bring the
system into an irreversibly inconsistent state.
 Non-recoverable schedules may sometimes be needed when updates must be made visible
early due to time constraints, even if they have not yet been committed, which may be
required for very long duration transactions.
 Recovery with Concurrent Transactions:
When more than one transaction are being executed in parallel, the logs are interleaved. At the time
of recovery, it would become hard for the recovery system to backtrack all logs, and then start
recovering. To ease this situation, most modern DBMS use the concept of 'checkpoints'.
 Checkpoint:
o Keeping and maintaining logs in real time and in real environment may fill out all the memory
space available in the system. As time passes, the log file may grow too big to be handled at all.
o Checkpoint is a mechanism where all the previous logs are removed from the system and
stored permanently in a storage disk. A checkpoint declares a point before which the DBMS was
in a consistent state, and all the transactions were committed.
o During recovery we need to consider only the most recent transaction Ti that started before
the checkpoint, and transactions that started after Ti.
 Scan backwards from the end of the log to find the most recent <checkpoint> record.
 Continue scanning backwards till a record <Ti start> is found.
 Only the part of the log following the above start record need be considered. The earlier part
of the log can be ignored during recovery, and can be erased whenever desired.
 For all transactions (starting from Ti or later) with no <Ti commit>, execute undo(Ti). (Done only
in case of immediate modification.)
 Scanning forward in the log, for all transactions starting from Ti or later with a
<Ti commit>, execute redo(Ti).
 Recovery:
o We modify the log-based recovery schemes to allow multiple transactions to execute
concurrently.
o All transactions share a single disk buffer and a single log. A buffer block can have data items
updated by one or more transactions.
o We assume concurrency control using strict two-phase locking; i.e. the updates of
uncommitted transactions should not be visible to other transactions.
o Otherwise, how could we perform undo if T1 updates A, then T2 updates A and commits, and
finally T1 has to abort?
o Log records of different transactions may be interspersed in the log. The checkpointing
technique and the actions taken on recovery have to be changed, since several transactions
may be active when a checkpoint is performed.
o Checkpoints are performed as before, except that the checkpoint log record is now of the form
<checkpoint L>, where L is the list of transactions active at the time of the checkpoint.
o We assume no updates are in progress while the checkpoint is carried out.
o When the system recovers from a crash, it first does the following:
 Initialize undo-list and redo-list to empty.
 Scan the log backwards from the end, stopping when the first <checkpoint L> record is
found.
o For each record found during the backward scan:
 if the record is <Ti commit>, add Ti to redo-list
 if the record is <Ti start>, then if Ti is not in redo-list, add Ti to undo-list.
o For every Ti in L, if Ti is not in redo-list, add Ti to undo-list.
o When a system with concurrent transactions crashes and recovers, it behaves in the following
manner −

 The recovery system reads the logs backwards from the end to the last checkpoint.
 It maintains two lists, an undo-list and a redo-list.
 If the recovery system sees a log with <Tn start> but no commit or abort log record, it puts
the transaction in the undo-list.
o All the transactions in the undo-list are then undone and their logs are removed. All the
transactions in the redo-list are then redone using their log records.
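The following Python sketch shows how a recovery manager might build the undo- and redo-lists
from such a log; the tuple-based log records and the checkpoint format are hypothetical
simplifications of the <checkpoint L> scheme described above.

# Log records, oldest first: ("start", T), ("commit", T),
# or ("checkpoint", [transactions active at checkpoint time]).
log = [
    ("start", "T1"), ("start", "T2"),
    ("checkpoint", ["T1", "T2"]),
    ("commit", "T1"), ("start", "T3"),
]   # crash happens here

def build_lists(log):
    undo_list, redo_list = [], []
    # Scan backwards until the most recent checkpoint record.
    for kind, arg in reversed(log):
        if kind == "commit":
            redo_list.append(arg)
        elif kind == "start" and arg not in redo_list:
            undo_list.append(arg)
        elif kind == "checkpoint":
            # Transactions active at the checkpoint that never
            # committed must also be undone.
            for t in arg:
                if t not in redo_list and t not in undo_list:
                    undo_list.append(t)
            break
    return undo_list, redo_list

undo_list, redo_list = build_lists(log)
print(undo_list)   # ['T3', 'T2'] -> to be undone
print(redo_list)   # ['T1']       -> to be redone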

30. When a transaction is rolled back under timestamp ordering, it is assigned a new timestamp.
Why can it not simply keep its old timestamp? (2009)
 A schedule in which the transactions participate is serializable, and the equivalent serial
schedule has the transactions in order of their timestamp values. This is called timestamp
ordering (TO).
 Notice how this differs from 2PL, where a schedule is serializable by being equivalent to some
serial schedule allowed by the locking protocols.
 In timestamp ordering, however, the schedule is equivalent to the particular serial order
corresponding to the order of the transaction timestamps.
 The algorithm must ensure that, for each item accessed by conflicting operations in the
schedule, the order in which the item is accessed does not violate the serializability order.
 To do this, the algorithm associates with each database item X two timestamp (TS) values:
1. Read_TS(X): The read timestamp of item X; this is the largest timestamp among all the
timestamps of transactions that have successfully read item X, that is, read_TS(X) = TS(T),
where T is the youngest transaction that has read X successfully.
2. Write_TS(X): The write timestamp of item X; this is the largest of all the timestamps of
transactions that have successfully written item X, that is, write_TS(X) = TS(T), where T is the
youngest transaction that has written X successfully.
Basic Timestamp Ordering
 Whenever some transaction T tries to issue a read_item(X) or a write_item(X) operation, the
basic TO algorithm compares the timestamp of T with read_TS(X) and write_TS(X) to ensure
that the timestamp order of transaction execution is not violated.
 If this order is violated, then transaction T is aborted and resubmitted to the system as a new
transaction with a new timestamp. If T kept its old timestamp, its conflicting operations would
again violate the timestamp order against the younger transactions that have already read or
written the items, so it would simply be aborted again; a fresh, larger timestamp lets it
re-enter as a younger transaction and make progress.
 If T is aborted and rolled back, any transaction T1 that may have used a value written by T
must also be rolled back. Similarly, any transaction T2 that may have used a value written by T1
must also be rolled back, and so on.
 This effect is known as cascading rollback and is one of the problems associated with basic TO,
since the schedules produced are not recoverable.
 An additional protocol must be enforced to ensure that the schedules are recoverable,
cascadeless, or strict. We first describe the basic TO algorithm here.
 The concurrency control algorithm must check whether conflicting operations violate the
timestamp ordering in the following two cases:
a. Transaction T issues a write_item(X) operation:
o If read_TS(X) >TS(T) or if write_TS(X) > TS(T), then abort and roll back T and reject the
operation. This should be done because some younger transaction with a timestamp greater
than TS(T)—and hence after T in the timestamp ordering—has already read or written the
value of item X before T had a chance to write X, thus violating the timestamp ordering.
o If the condition in part (a) does not occur, then execute the write_item(X) operation of T and
set write_TS(X) to TS(T).
b. Transaction T issues a read_item(X) operation:
o If write_TS(X) >TS(T), then abort and roll back T and reject the operation. This should be done
because some younger transaction with timestamp greater than TS(T)—and hence after T in
the timestamp ordering—has already written the value of item X before T had a chance to read
X.
o If write_TS(X) <= TS(T), then execute the read_item(X) operation of T and set read_TS(X) to the
larger of TS(T) and the current read_TS(X).
 Hence, whenever the basic TO algorithm detects two conflicting operations that occur in the
incorrect order, it rejects the later of the two operations by aborting the transaction that
issued it.
 The schedules produced by basic TO are hence guaranteed to be conflict serializable, like the
2PL protocol.
 However, some schedules are possible under each protocol that is not allowed under the
other. Hence, neither protocol allows all possible serializable schedules.
 As mentioned earlier, deadlock does not occur with timestamp ordering. However, cyclic
restart (and hence starvation) may occur if a transaction is continually aborted and restarted.
Strict Timestamp Ordering
 A variation of basic TO called strict TO ensures that the schedules are both strict (for easy
recoverability) and (conflict) serializable.
 In this variation, a transaction T that issues a read_item(X) or write_item(X) such that TS(T)
> write_TS(X) has its read or write operation delayed until the transaction T' that wrote the
value of X (hence TS(T') = write_TS(X)) has committed or aborted.
 To implement this algorithm, it is necessary to simulate the locking of an item X that has been
written by transaction T' until T' is either committed or aborted. This algorithm does not cause
deadlock, since T waits for T' only if TS(T) > TS(T').
Thomas's Write Rule
A modification of the basic TO algorithm, known as Thomas’s write rule, does not enforce conflict
serializability; but it rejects fewer write operations, by modifying the checks for the write_item(X)
operation as follows:
i. If read_TS(X) >TS(T), then abort and roll back T and reject the operation.
ii. If write_TS(X) >TS(T), then do not execute the write operation but continue processing. This
is because some transaction with timestamp greater than TS(T)—and hence after T in the
timestamp ordering—has already written the value of X. Hence, we must ignore the
write_item(X) operation of T because it is already outdated and obsolete. Notice that any
conflict arising from this situation would be detected by case (i).
iii. If neither the condition in part (i) nor the condition in part (ii) occurs, then execute the
write_item(X) operation of T and set write_TS(X) to TS(T).
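As a small illustration, here is Thomas's write rule as a Python sketch; the read_ts/write_ts
dictionaries standing in for the per-item timestamps are hypothetical simplifications.

read_ts, write_ts = {}, {}   # R_TS(X) and W_TS(X) per item

def thomas_write(ts_t, x):
    if read_ts.get(x, 0) > ts_t:
        return "abort"     # case (i): a younger transaction already read X
    if write_ts.get(x, 0) > ts_t:
        return "ignore"    # case (ii): obsolete write, skip but continue
    write_ts[x] = ts_t     # case (iii): perform the write, set W_TS(X)
    return "write"

print(thomas_write(5, "X"))   # 'write'
print(thomas_write(3, "X"))   # 'ignore': the outdated write is skipped,
                              # where basic TO would have aborted T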

31. Under what conditions is it less expensive to avoid deadlock than to allow deadlocks to occur
and then to detect them? (2009)
Deadlock avoidance is preferable if the consequences of abort are serious (as in interactive
transactions), and if there is high contention and a resulting high probability of deadlock.
Deadlock Avoidance
Deadlock avoidance can be done with Banker’s Algorithm.

Banker’s Algorithm
Banker’s Algorithm is a resource allocation and deadlock avoidance algorithm which tests every
request made by a process for resources and checks for a safe state: if, after granting the request,
the system remains in a safe state, the request is allowed; if no safe state results, the request
made by the process is not allowed.
Inputs to Banker’s Algorithm
1. Maximum need of resources by each process.
2. Resources currently allocated to each process.
3. Maximum free resources available in the system.
A request will only be granted under the conditions below.
1. The request made by the process is less than or equal to the maximum need of that process.
2. The request made by the process is less than or equal to the freely available resources in the system.
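Below is a minimal safety check in the spirit of Banker's Algorithm for a single resource type,
written in Python; the process table and resource counts are hypothetical example values.

max_need  = {"P0": 10, "P1": 4, "P2": 9}   # maximum need per process
allocated = {"P0": 5,  "P1": 2, "P2": 2}   # currently allocated
available = 3                              # free resources in the system

def is_safe(max_need, allocated, available):
    need = {p: max_need[p] - allocated[p] for p in max_need}
    finished, work = set(), available
    while len(finished) < len(max_need):
        # Find a process whose remaining need fits the available resources.
        runnable = [p for p in need if p not in finished and need[p] <= work]
        if not runnable:
            return False           # no safe sequence exists
        p = runnable[0]
        work += allocated[p]       # p runs to completion, releases resources
        finished.add(p)
    return True

print(is_safe(max_need, allocated, available))   # True: a safe sequence exists

A request is granted only if the state that would result from granting it still passes this safety
check; otherwise the requesting process waits.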

Unit 2
Query Optimization
1. What is query optimization in RDBMS? Define with suitable flow diagram. Explain external
sorting in query optimization process. (2016)
Query Optimization: A single query can be executed through different algorithms or re-written in
different forms and structures. Hence, the question of query optimization comes into the picture –
Which of these forms or pathways is the most optimal? The query optimizer attempts to determine
the most efficient way to execute a given query by considering the possible query plans.
Importance: The goal of query optimization is to reduce the system resources required to fulfill a
query, and ultimately provide the user with the correct result set faster.
 First, it provides the user with faster results, which makes the application seem faster to the
user.
 Secondly, it allows the system to service more queries in the same amount of time, because
each request takes less time than unoptimized queries.
 Thirdly, query optimization ultimately reduces the amount of wear on the hardware (e.g. disk
drives), and allows the server to run more efficiently (e.g. lower power consumption, less
memory usage).
There are broadly two ways a query can be optimized:
1. Analyze and transform equivalent relational expressions: Try to minimize the tuple and column
counts of the intermediate and final query processes (discussed here).
2. Using different algorithms for each operation: These underlying algorithms determine how
tuples are accessed from the data structures they are stored in, indexing, hashing, data
retrieval and hence influence the number of disk and block accesses (discussed in query
processing).
Introduction to Query Processing
 Query Processing is a translation of high-level queries into low-level expression.
 It is a step wise process that can be used at the physical level of the file system, query
optimization and actual execution of the query to get the result.
 It requires the basic concepts of relational algebra and file structure.
 It refers to the range of activities that are involved in extracting data from the database.
 It includes translation of queries in high-level database languages into expressions that can be
implemented at the physical level of the file system.
 In query processing, we will actually understand how these queries are processed and how
they are optimized.

External sorting is a class of sorting algorithms that can handle massive amounts of data. External
sorting is required when the data being sorted does not fit into the main memory of a computing
device (usually RAM) and instead must reside in slower external memory, usually a hard
disk drive. External sorting algorithms are therefore external-memory algorithms, applicable in
the external memory model of computation.
External sorting algorithms generally fall into two types, distribution sorting, which
resembles quicksort, and external merge sort, which resembles merge sort. The latter typically uses
a hybrid sort-merge strategy. In the sorting phase, chunks of data small enough to fit in main
memory are read, sorted, and written out to a temporary file. In the merge phase, the sorted
subfiles are combined into a single larger file.
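As a minimal illustration of the sort-merge strategy, here is a Python sketch; the in-memory lists
stand in for temporary run files on disk, and the run_size parameter is a stand-in for available
main memory, so this is a teaching simplification rather than a real external sort.

import heapq

def external_merge_sort(data, run_size):
    # Sorting phase: sort chunks that 'fit in memory', one sorted run each.
    runs = [sorted(data[i:i + run_size])
            for i in range(0, len(data), run_size)]
    # Merge phase: k-way merge of the sorted runs into one output stream.
    return list(heapq.merge(*runs))

data = [9, 4, 7, 1, 8, 2, 6, 3, 5]
print(external_merge_sort(data, run_size=3))   # [1, 2, 3, ..., 9]

In a real DBMS, each run would be written to a temporary file and the merge phase would read
the runs block by block, keeping only one buffer page per run in memory.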

2. Write short note on : a. Multimedia Database system b. Data mining (2016)


a. A multimedia database is a collection of interrelated multimedia data that includes text, graphics
(sketches, drawings), images, animations, video, audio, etc., and holds vast amounts of multisource
multimedia data. The framework that manages different types of multimedia data which can be
stored, delivered and utilized in different ways is known as a multimedia database management
system. There are three classes of multimedia database, which include static media, dynamic
media and dimensional media.

Content of Multimedia Database management system :


1. Media data – The actual data representing an object.
2. Media format data – Information such as sampling rate, resolution, encoding scheme etc.
about the format of the media data after it goes through the acquisition, processing and
encoding phase.
3. Media keyword data – Keywords description relating to the generation of data. It is also
known as content descriptive data. Example: date, time and place of recording.
4. Media feature data – Content dependent data such as the distribution of colors, kinds of
texture and different shapes present in data.

Types of multimedia applications based on data management characteristic are :


1. Repository applications – A Large amount of multimedia data as well as meta-data(Media
format date, Media keyword data, Media feature data) that is stored for retrieval purpose,
e.g., Repository of satellite images, engineering drawings, radiology scanned pictures.
2. Presentation applications – They involve delivery of multimedia data subject to temporal
constraint. Optimal viewing or listening requires DBMS to deliver data at certain rate offering
the quality of service above a certain threshold. Here data is processed as it is delivered.
Example: Annotating of video and audio data, real-time editing analysis.
3. Collaborative work using multimedia information – It involves executing a complex task by
merging drawings, changing notifications. Example: Intelligent healthcare network.

b. "Data Mining" refers to the extraction of useful information from a bulk of data or data
warehouses. One can see that the term itself is a little confusing. In the case of coal or diamond
mining, the result of the extraction process is coal or diamond. But in the case of Data Mining, the
result of the extraction process is not data! Instead, the result of data mining is the patterns and
knowledge that we gain at the end of the extraction process. In that sense, Data Mining is also
known as Knowledge Discovery or Knowledge Extraction.
Data mining is used in almost all places where a large amount of data is stored and processed.
For example, banks typically use data mining to find out which of their prospective customers could
be interested in credit cards, personal loans or insurance. Since banks have the transaction
details and detailed profiles of their customers, they analyze all this data and try to find patterns
which help them predict that certain customers could be interested in personal loans, etc.

Main Purpose of Data Mining


Basically, the information gathered from Data Mining helps to predict hidden patterns, future trends
and behaviors and allowing businesses to take decisions.
Technically, data mining is the computational process of analyzing data from different perspective,
dimensions, angles and categorizing/summarizing it into meaningful information.
Data Mining can be applied to any type of data e.g. Data Warehouses, Transactional Databases,
Relational Databases, Multimedia Databases, Spatial Databases, Time-series Databases, World Wide
Web.

3. What are the advantages associated with relational database design? Discuss Codds’ rules for
relational database. (2015)
Advantages of a relational database
Splitting data into a number of related tables brings many advantages over a flat file database. These
include:
1. Data is only stored once. For example, if the city data is gathered into one table, there is only
one record per city. The advantages of this are
 No multiple record changes needed
 More efficient storage
 Simple to delete or modify details.
 All records in other tables having a link to that entry will show the change.
2. Complex queries can be carried out. A language called SQL has been developed to allow
programmers to 'Insert', 'Update', 'Delete', 'Create', 'Drop' table records. These actions are further
refined by a 'Where' clause. For example
SELECT * FROM Customer WHERE ID = 2
This SQL statement will extract record number 2 from the Customer table. Far more complicated
queries can be written that can extract data from many tables at once.
3. Better security. By splitting data into tables, certain tables can be made confidential. When a
person logs on with their username and password, the system can then limit access only to those
tables whose records they are authorised to view. For example, a receptionist would be able to view
employee location and contact details but not their salary. A salesman may see his team's sales
performance but not competing teams.
4. Cater for future requirements. By having data held in separate tables, it is simple to add records
that are not yet needed but may be in the future. For example, the city table could be expanded to
include every city and town in the country, even though no other records are using them all as yet. A
flat file database cannot do this.

Summary - advantages of a relational database over flat file


 Avoids data duplication
 Avoids inconsistent records
 Easier to change data
 Easier to change data format
 Data can be added and removed easily
 Easier to maintain security.

Codd’s Relational Database Rules:


In 1985, Dr. E. F. Codd published a list of 12 rules to define a relational database system. The reason
Dr. Codd published the list was his concern that many vendors were marketing products as
“relational” even though those products did not meet minimum relational standards. Dr. Codd’s list,
shown below, serves as a frame of reference for what a truly relational database
should be. Bear in mind that even the dominant database vendors do not fully support all 12 rules.

1. Information
All information in a relational database must be logically represented as column values in rows
within tables.
2. Guaranteed Access
Every value in a table is guaranteed to be accessible through a combination of table name, primary
key value, and column name.
3. Systematic Treatment of Nulls
Nulls must be represented and treated in a systematic way, independent of data type.
4. Dynamic Online Catalog Based on the Relational Model
The metadata must be stored and managed as ordinary data, that is, in tables within the database.
Such data must be available to authorized users using the standard database relational language.
5. Comprehensive Data Sublanguage
The relational database may support many languages. However, it must support one well-defined,
declarative language with support for data definition, view definition, data manipulation (interactive
and by program), integrity constraints, authorization, and transaction management (begin, commit,
and rollback).
6. View Updating
Any view that is theoretically updatable must be updatable through the system.
7. High-Level Insert, Update, and Delete
The database must support set-level inserts, updates, and deletes.
8. Physical Data Independence:
Application programs and ad hoc facilities are logically unaffected when physical access methods or
storage structures are changed.
9. Logical Data Independence
Application programs and ad hoc facilities are logically unaffected when changes are made to the
table structures that preserve the original table values (changing order of columns or inserting
columns).
10. Integrity Independence
All relational integrity constraints must be definable in the relational language and stored in the
system catalog, not at the application level.
11. Distribution Independence
The end users and application programs are unaware and unaffected by the data location
(distributed vs. local databases).
12. Nonsubversion
If the system supports low-level access to the data, there must not be a way to bypass the integrity
rules of the database.
Rule Zero All preceding rules are based on the notion that in order for a database to be considered
relational, it must use its relational facilities exclusively to manage the database.

4. What do you understand by query optimization? Elaborate using an example. (2015)


Query Optimization: A single query can be executed through different algorithms or re-written in
different forms and structures. Hence, the question of query optimization comes into the picture –
Which of these forms or pathways is the most optimal? The query optimizer attempts to determine
the most efficient way to execute a given query by considering the possible query plans.
Importance: The goal of query optimization is to reduce the system resources required to fulfill a
query, and ultimately provide the user with the correct result set faster.
 First, it provides the user with faster results, which makes the application seem faster to the
user.
 Secondly, it allows the system to service more queries in the same amount of time, because
each request takes less time than unoptimized queries.
 Thirdly, query optimization ultimately reduces the amount of wear on the hardware (e.g. disk
drives), and allows the server to run more efficiently (e.g. lower power consumption, less
memory usage).
There are broadly two ways a query can be optimized:
1. Analyze and transform equivalent relational expressions: Try to minimize the tuple and column
counts of the intermediate and final query processes (discussed here).
2. Using different algorithms for each operation: These underlying algorithms determine how
tuples are accessed from the data structures they are stored in, indexing, hashing, data
retrieval and hence influence the number of disk and block accesses (discussed in query
processing).
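As an illustration of the first approach, consider a hypothetical bank schema with relations
account and depositor. The expression σ branch_name='Brighton' (account ⋈ depositor) is
equivalent to (σ branch_name='Brighton' (account)) ⋈ depositor. Pushing the selection below the
join means the join processes only the Brighton tuples of account instead of the entire relation, so
the intermediate result is much smaller and far fewer disk block accesses are needed. The
optimizer enumerates such equivalent expressions, estimates their costs from catalog statistics,
and picks the cheapest plan.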

5. Discuss the concept of temporal database. Hence discuss database performance improvements
in temporal database. (2015)
A temporal database is a database that has certain features that support time-sensitive status for
entries. Where some databases are considered current databases and only support factual data
considered valid at the time of use, a temporal database can establish at what times certain entries
are accurate.
Different uses of temporal databases require radically different types of development. For example,
in a database of customer, patient or citizen data, indicators for individual people will follow a kind
of life cycle timeline that can be created according to time frames for comment life events. By
contrast, many industrial processes using temporal databases need extremely short valid time and
transaction time indicators. These are rigidly implemented depending on length of time for various
parts of business processes.
1. Why we need a Temporal Database:
In the last two decades, the relational data model has gained popularity because of its simplicity and
solid mathematical foundation. However, the relational data model as proposed by Codd [Cod70]
does not address the temporal dimension of data. Variation of data over time is treated in the same
way as ordinary data. This is not satisfactory for applications that require past, present, and/or future
data values to be dealt with by the database. In real life such applications abound; in fact, most
applications require temporal data to a certain extent.

2. The Main Goal of Temporal Database:


o Identification of an appropriate data type for time
o Prevent fragmentation of an object description
o Provide a query algebra to deal with temporal data
o Compatibility with old databases without temporal data

3. What we can do by Temporal Database:


o It is easy to deal with temporal data
o Recording data that changes over time is more convenient
o An object description can be well defined without fragmentation
o There is a relational model to describe temporal data
o There is a query algebra to deal with temporal data
o Static data (without a time dimension) can still be handled in a temporal database
o The traditional database algebra still works in a temporal database
o The new query algebra that handles the time dimension is similar to the traditional database algebra
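To make this concrete, a valid-time relation might carry explicit period attributes (a hypothetical
example):

Emp   Dept    ValidFrom    ValidTo
Ram   Sales   2010-01-01   2014-06-30
Ram   Admin   2014-07-01   9999-12-31

A query can then ask which department Ram belonged to on a given date, and indexing the period
attributes lets the DBMS answer such time-slice queries without scanning the full history, which is
one of the performance improvements a temporal database can offer.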

6. What do you understand by data mining? Hence discuss the relevance of business intelligence
and knowledge discovery in database. (2015)
Knowledge Discovery (KD)
 Data Pre-Processing;
 Intelligent Data Analysis;
 Temporal and Spatial KD;
 Data and Knowledge Visualization;
 Machine Learning (e.g. Decision Trees, Neural Networks, Bayesian Learning, Inductive and
Fuzzy Logic) and Statistical Methods;
 Hybrid Learning Models and Methods: Using KD methods and Cognitive Models, Learning
in Ontologies, inductive logic, etc.
 Domain KD: Learning from Heterogeneous, Unstructured (e.g. text) and Multimedia data, Networks, Graphs and Link Analysis;
 Data Mining and Machine Learning: Classification, Regression, Clustering and Association
Rules;
 Ubiquitous Data Mining: Distributed Data Mining, Incremental Learning, Change
Detection, Learning from Ubiquitous Data Streams;
Business Intelligence (BI)/Business Analytics/Data Science
 Methodologies, Architectures or Computational Tools;
 Artificial Intelligence (e.g. KD, Evolutionary Computation, Intelligent Agents, Logic) applied to
BI: Data Warehouse, OLAP, Data Mining, Decision Support Systems, Adaptive BI, Web
Intelligence and Competitive Intelligence.
Real-word Applications
 Prediction/Optimization in Finance, Marketing, Medicine, Sales, Production.
 Mining Big Data and Cloud computing.
 Social Network Analysis; Community detection, Influential nodes.

7. What is data warehouse? Discuss the architecture of data warehouse with staging area. (2015)
The Data Warehouse Staging Area is temporary location where data from source systems is copied.
A staging area is mainly required in a Data Warehousing Architecture for timing reasons. In short, all
required data must be available before data can be integrated into the Data Warehouse.
Due to varying business cycles, data processing cycles, hardware and network resource limitations
and geographical factors, it is not feasible to extract all the data from all Operational databases at
exactly the same time.

Typical Data Warehousing Environment

8. Explain the steps in query processing. (2014)


Basic Steps in Query Processing
1. Parsing and translation
2. Optimization
3. Evaluation

1. Parsing and translation


 Translate the query into its internal form. This is then translated into relational algebra.
 The parser checks syntax and verifies relations.
2. Optimization
 SQL is a very high level language:
o The users specify what to search for- not how the search is actually done
o The algorithms are chosen automatically by the DBMS.
 For a given SQL query there may be many possible execution plans.
 Amongst all equivalent plans choose the one with lowest cost.
 Cost is estimated using statistical information from the database catalog.
3. Evaluation
 The query evaluation engine takes a query evaluation plan, executes that plan and returns the
answer to that query.

9. What are the different measures of query cost? (2014)

10. Explain the data mining technique? Write the advantages of classification. (2014)
CLASSIFICATION ALGORITHM

11. Write short note on temporal database. (2014)

12. Describe the algorithm for external sorting. With the help of example. (2013,2011, 2010)

13. Briefly describe the select and join operation on data, with the help of suitable example.
(2011)
A SQL Join statement is used to combine data or rows from two or more tables based on a common
field between them. Different types of Joins are:
 INNER JOIN
 LEFT JOIN
 RIGHT JOIN
 FULL JOIN
Consider the two tables below:

Student

StudentCourse

The simplest Join is INNER JOIN.


1. INNER JOIN: The INNER JOIN keyword selects all rows from both the tables as long as the condition satisfies. This keyword will create the result-set by combining all rows from both the tables where the condition satisfies, i.e. the value of the common field is the same.
2. LEFT JOIN: This join returns all the rows of the table on the left side of the join and matching rows for the table on the right side of the join. For rows with no matching row on the right side, the result-set will contain null. LEFT JOIN is also known as LEFT OUTER JOIN.
3. RIGHT JOIN: RIGHT JOIN is similar to LEFT JOIN. This join returns all the rows of the table on the right side of the join and matching rows for the table on the left side of the join. For rows with no matching row on the left side, the result-set will contain null. RIGHT JOIN is also known as RIGHT OUTER JOIN.
4. FULL JOIN: FULL JOIN creates the result-set by combining the results of both LEFT JOIN and RIGHT JOIN. The result-set will contain all rows from both tables; for rows with no match, the result-set will contain NULL (a join sketch follows below).
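A minimal sketch of these joins, assuming the Student table has columns (ROLL_NO, NAME) and StudentCourse has (COURSE_ID, ROLL_NO); the column names are assumptions, since the tables above appear only as figures:

-- INNER JOIN: only students that appear in both tables
SELECT s.NAME, sc.COURSE_ID
FROM Student s
INNER JOIN StudentCourse sc ON s.ROLL_NO = sc.ROLL_NO;

-- LEFT JOIN: all students; COURSE_ID is NULL where no course exists
SELECT s.NAME, sc.COURSE_ID
FROM Student s
LEFT JOIN StudentCourse sc ON s.ROLL_NO = sc.ROLL_NO;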

Select Operation (σ)


It selects tuples that satisfy the given predicate from a relation.
Notation − σp(r)
Where σ stands for the selection predicate and r stands for the relation. p is a propositional logic formula which may use connectors like and, or, and not. These terms may use relational operators like −
=, ≠, ≥, < , >, ≤.
For example −
σsubject = "database"(Books)
Output − Selects tuples from books where subject is 'database'.
σsubject = "database" and price = "450"(Books)
Output − Selects tuples from books where subject is 'database' and 'price' is 450.
σsubject = "database" and price = "450" or year > "2010"(Books)
Output − Selects tuples from books where subject is 'database' and 'price' is 450 or those books
published after 2010.

14. Explain multimedia database with the help of suitable examples. (2011)
Multimedia database is the collection of interrelated multimedia data that includes text, graphics (sketches, drawings), images, animations, video, audio, etc., and may involve vast amounts of multisource multimedia data. The framework that manages different types of multimedia data which can be stored, delivered and utilized in different ways is known as a multimedia database management system. There are three classes of multimedia database: static media, dynamic media and dimensional media.

Content of Multimedia Database management system :


1. Media data – The actual data representing an object.
2. Media format data – Information such as sampling rate, resolution, encoding scheme etc.
about the format of the media data after it goes through the acquisition, processing and
encoding phase.
3. Media keyword data – Keyword descriptions relating to the generation of the data. It is also known as content descriptive data. Example: date, time and place of recording.
4. Media feature data – Content dependent data such as the distribution of colors, kinds of
texture and different shapes present in data.

Types of multimedia applications based on data management characteristic are :


1. Repository applications – A large amount of multimedia data as well as meta-data (media format data, media keyword data, media feature data) is stored for retrieval purposes, e.g., repositories of satellite images, engineering drawings, radiology scanned pictures.
2. Presentation applications – They involve delivery of multimedia data subject to temporal
constraint. Optimal viewing or listening requires DBMS to deliver data at certain rate offering
the quality of service above a certain threshold. Here data is processed as it is delivered.
Example: Annotating of video and audio data, real-time editing analysis.
3. Collaborative work using multimedia information – It involves executing a complex task by
merging drawings, changing notifications. Example: Intelligent healthcare network.
A multimedia database is a collection of related multimedia data. Common multimedia data
types that can be found in a multimedia database include the following:
• Text
• Graphics: drawing, sketches, and illustrations
• Images: color and black & white pictures, photographs, maps and paintings
• Animation sequences: animated images or graphic objects
• Video: a sequence of images (frames), typically recording a real-life event and usually
produced by a video recorder
• Audio: generated from an aural recording device
• Composite multimedia: a combination of two or more of the above data types.

15. Explain the following: a. Multimedia Database b. Data Mining c. Data Warehousing (2010)
Unit 4
PL/SQL
1. How many types of SQL statements? Explain all the statements in detail.
Type of SQL Statement (DDL, DML, DCL, TCS, SCS Commands)
SQL statements are divided into five different categories: Data definition language (DDL), Data
manipulation language (DML), Data Control Language (DCL), Transaction Control Statement (TCS),
Session Control Statements (SCS).

Data Definition Language (DDL) Statements


Data definition statements are used to define the database structure or tables.
Statement Description

CREATE Create new database/table.

ALTER Modifies the structure of database/table.

DROP Deletes a database/table.

TRUNCATE Remove all table records including allocated table spaces.

RENAME Rename the database/table.

Data Manipulation Language (DML) Statements


Data manipulation statements are used for managing data within table objects.
Statement Description

SELECT Retrieve data from the table.

INSERT Insert data into a table.

UPDATE Updates existing data with new data within a table.

DELETE Deletes records (rows) from the table.

MERGE MERGE (also called UPSERT) inserts new records or updates existing records depending on whether a condition matches.

LOCK TABLE LOCK TABLE statement to lock one or more tables in a specified mode. Table access is denied to other users for the duration of your table operation.

CALL Calls a PL/SQL program. CALL statements are supported in PL/SQL only when executed dynamically.

EXPLAIN PLAN Shows the access path used to retrieve the data.

Data Control Language (DCL) Statements


Data control statements are used to give privileges to access limited data.
Statement Description
GRANT Gives privileges to user for accessing database data.

REVOKE Takes back previously given privileges.

ANALYZE ANALYZE statement to collect statistics information about index, cluster, table.

AUDIT To track the occurrence of a specific SQL statement or all SQL statements during the
user sessions.

COMMENT Adds a comment on a table into the data dictionary.

Transaction Control Statement (TCS)


Transaction control statements are used to make changes permanent by saving them into the database.
Statement Description

COMMIT Permanently saves work into the database.

ROLLBACK Restores the database to its state as of the last COMMIT.

SAVEPOINT Creates a SAVEPOINT to which the new changes can later be rolled back.

SET TRANSACTION Sets the transaction properties such as read-write/read-only access.

2. Explain: a. Joining in SQL b. Deadlock


a. A SQL Join statement is used to combine data or rows from two or more tables based on a
common field between them. Different types of Joins are:
 INNER JOIN
 LEFT JOIN
 RIGHT JOIN
 FULL JOIN
Consider the two tables below:
Student

StudentCourse
The simplest Join is INNER JOIN.
1. INNER JOIN: The INNER JOIN keyword selects all rows from both the tables as long as the condition satisfies. This keyword will create the result-set by combining all rows from both the tables where the condition satisfies, i.e. the value of the common field is the same.

2. LEFT JOIN: This join returns all the rows of the table on the left side of the join and matching rows for the table on the right side of the join. For rows with no matching row on the right side, the result-set will contain null. LEFT JOIN is also known as LEFT OUTER JOIN.

3. RIGHT JOIN: RIGHT JOIN is similar to LEFT JOIN. This join returns all the rows of the table on the right side of the join and matching rows for the table on the left side of the join. For rows with no matching row on the left side, the result-set will contain null. RIGHT JOIN is also known as RIGHT OUTER JOIN.

4. FULL JOIN: FULL JOIN creates the result-set by combining the results of both LEFT JOIN and RIGHT JOIN. The result-set will contain all rows from both tables; for rows with no match, the result-set will contain NULL.

b. A deadlock is a situation in which two computer programs sharing the same resource are
effectively preventing each other from accessing the resource, resulting in both programs ceasing to
function.
The earliest computer operating systems ran only one program at a time. All of the resources of the
system were available to this one program. Later, operating systems ran multiple programs at once,
interleaving them. Programs were required to specify in advance what resources they needed so
that they could avoid conflicts with other programs running at the same time. Eventually some
operating systems offered dynamic allocation of resources. Programs could request further
allocations of resources after they had begun running.
Every process needs some resources to complete its execution. However, resources are granted in a sequential order.
1. The process requests some resource.
2. The OS grants the resource if it is available; otherwise the process waits.
3. The process uses it and releases it on completion.
A deadlock is a situation where each of the computer processes waits for a resource which is assigned to some other process. In this situation, none of the processes gets executed, since the resource it needs is held by some other process which is also waiting for some other resource to be released.

3. Define Data types. Explain all kinds of data types in SQL.


SQL General Data Types
Each column in a database table is required to have a name and a data type.
SQL developers have to decide what types of data will be stored inside each and every table column
when creating a SQL table. The data type is a label and a guideline for SQL to understand what type
of data is expected inside of each column, and it also identifies how SQL will interact with the stored
data.
The following table lists the general data types in SQL:

Data type Description

CHARACTER(n) Character string. Fixed-length n

VARCHAR(n) or CHARACTER VARYING(n) Character string. Variable length. Maximum length n

BINARY(n) Binary string. Fixed-length n

BOOLEAN Stores TRUE or FALSE values

VARBINARY(n) or BINARY VARYING(n) Binary string. Variable length. Maximum length n

INTEGER(p) Integer numerical (no decimal). Precision p

SMALLINT Integer numerical (no decimal). Precision 5

INTEGER Integer numerical (no decimal). Precision 10

BIGINT Integer numerical (no decimal). Precision 19

DECIMAL(p,s) Exact numerical, precision p, scale s. Example: decimal(5,2) is a number that has
3 digits before the decimal and 2 digits after the decimal

NUMERIC(p,s) Exact numerical, precision p, scale s. (Same as DECIMAL)

FLOAT(p) Approximate numerical, mantissa precision p. A floating number in base 10 exponential notation. The size argument for this type consists of a single number specifying the minimum precision

REAL Approximate numerical, mantissa precision 7


FLOAT Approximate numerical, mantissa precision 16

DOUBLE PRECISION Approximate numerical, mantissa precision 16

DATE Stores year, month, and day values

TIME Stores hour, minute, and second values

TIMESTAMP Stores year, month, day, hour, minute, and second values

INTERVAL Composed of a number of integer fields, representing a period of time, depending on the type of interval

ARRAY A set-length and ordered collection of elements

MULTISET A variable-length and unordered collection of elements

XML Stores XML data

SQL Data Type Quick Reference


However, different databases offer different choices for the data type definition.
The following table shows some of the common names of data types between the various database
platforms:

Data type          Access              SQLServer                 Oracle              MySQL          PostgreSQL

boolean            Yes/No              Bit                       Byte                N/A            Boolean

integer            Number (integer)    Int                       Number              Int, Integer   Int, Integer

float              Number (single)     Float, Real               Number              Float          Numeric

currency           Currency            Money                     N/A                 N/A            Money

string (fixed)     N/A                 Char                      Char                Char           Char

string (variable)  Text (<256),        Varchar                   Varchar,            Varchar        Varchar
                   Memo (65k+)                                   Varchar2

binary object      OLE Object Memo     Binary (fixed up to 8K),  Long, Raw           Blob, Text     Binary,
                                       Varbinary (<8K),                                             Varbinary
                                       Image (<2GB)
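A short sketch showing several of these types together in one table definition (the table and column names are illustrative):

CREATE TABLE Product (
  product_id   INTEGER,
  name         VARCHAR(100),
  price        DECIMAL(8,2),    -- up to 6 digits before and 2 after the decimal point
  in_stock     BOOLEAN,         -- not supported everywhere; e.g. Oracle typically uses NUMBER(1)
  added_on     DATE,
  last_updated TIMESTAMP
);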

4. What is Constraints? Give all kinds of constraints with syntax and example.
Constraints enforce limits to the data or type of data that can be inserted/updated/deleted from a
table. The whole purpose of constraints is to maintain the data integrity during an
update/delete/insert into a table. In this tutorial we will learn several types of constraints that can
be created in RDBMS.
Types of constraints
 NOT NULL
 UNIQUE
 DEFAULT
 CHECK
 Key Constraints – PRIMARY KEY, FOREIGN KEY
 Domain constraints
 Mapping constraints

 NOT NULL constraints


NOT NULL constraints prevent null values from being entered into a column.
 Unique constraints
Unique constraints ensure that the values in a set of columns are unique and not null for all rows in
the table. The columns specified in a unique constraint must be defined as NOT NULL. The database
manager uses a unique index to enforce the uniqueness of the key during changes to the columns of
the unique constraint.

 Primary key constraints


You can use primary key and foreign key constraints to define relationships between tables.

 (Table) Check constraints


A check constraint (also referred to as a table check constraint) is a database rule that specifies the
values allowed in one or more columns of every row of a table. Specifying check constraints is done
through a restricted form of a search condition.

 Foreign key (referential) constraints


Foreign key constraints (also known as referential constraints or referential integrity constraints)
enable definition of required relationships between and within tables.

 Informational constraints
An informational constraint is a constraint attribute that can be used by the SQL compiler to improve
the access to data. Informational constraints are not enforced by the database manager, and are not
used for additional verification of data; rather, they are used to improve query performance.
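For the "syntax and example" part of the question, a combined sketch showing most of these constraint types in one definition (all names are illustrative, and the Department table is assumed to exist):

CREATE TABLE Employee (
  empno   INT NOT NULL,                                  -- NOT NULL constraint
  email   VARCHAR(100) UNIQUE,                           -- UNIQUE constraint
  salary  DECIMAL(8,2) DEFAULT 0 CHECK (salary >= 0),    -- DEFAULT and CHECK constraints
  deptno  INT,
  PRIMARY KEY (empno),                                   -- PRIMARY KEY constraint
  FOREIGN KEY (deptno) REFERENCES Department(deptno)     -- FOREIGN KEY constraint
);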

5. Write short notes on any two of the following: (2015) a. Data Definition Language (DDL) b. Data
Manipulation Language (DML) c. PL/SQL functions d. Error handling in PL/SQL
DDL (Data Definition Language): DDL or Data Definition Language actually consists of the SQL commands that can be used to define the database schema. It simply deals with descriptions of the database schema and is used to create and modify the structure of database objects in the database.
Examples of DDL commands:
 CREATE – is used to create the database or its objects (like table, index, function, views, stored procedures and triggers).
 DROP – is used to delete objects from the database.
 ALTER – is used to alter the structure of the database.
 TRUNCATE – is used to remove all records from a table, including all spaces allocated for the records.
 COMMENT – is used to add comments to the data dictionary.
 RENAME – is used to rename an object existing in the database.
DML (Data Manipulation Language): The SQL commands that deal with the manipulation of data present in the database belong to DML or Data Manipulation Language, and this includes most of the SQL statements.
Examples of DML:
 SELECT – is used to retrieve data from a database.
 INSERT – is used to insert data into a table.
 UPDATE – is used to update existing data within a table.
 DELETE – is used to delete records from a database table.

PL/SQL Function
A function is a standalone PL/SQL subprogram. Like a PL/SQL procedure, a function has a unique name by which it can be referred to. These are stored as PL/SQL database objects. Below are some of the characteristics of functions.
 Functions are standalone blocks that are mainly used for calculation purposes.
 Functions use the RETURN keyword to return a value, and the datatype of the return value is defined at the time of creation.
 A function should either return a value or raise an exception, i.e. RETURN is mandatory in functions.
 A function with no DML statements can be called directly in a SELECT query, whereas a function with DML operations can only be called from other PL/SQL blocks.
 It can have nested blocks, or it can be defined and nested inside other blocks or packages.
 It contains a declaration part (optional), an execution part, and an exception handling part (optional).
 Values can be passed into the function or fetched from it through parameters.
 These parameters should be included in the calling statement.
 A function can also return values through OUT parameters, other than using RETURN.
 Since it always returns a value, the calling statement always accompanies it with an assignment operator to populate a variable.
 CREATE FUNCTION instructs the compiler to create a new function. The keyword 'OR REPLACE' instructs the compiler to replace the existing function (if any) with the current one.
 The function name should be unique.
 The RETURN datatype should be mentioned.
 The keyword 'IS' is used when the function is nested into some other block; if it is standalone then 'AS' is used. Other than this coding standard, both have the same meaning.

6. What is a View? How can it be created? Explain the types of Views and describe its advantages
over tables. (2014)
A View in SQL is a logical subset of data from one or more tables. Views are used to restrict data access. A View contains no data of its own, but is like a window through which data from tables can be viewed or changed. The tables on which a View is based are called BASE tables.

SQL CREATE VIEW Statement


In SQL, a view is a virtual table based on the result-set of an SQL statement.
A view contains rows and columns, just like a real table. The fields in a view are fields from one or more real tables in the database. You can add SQL functions, WHERE, and JOIN statements to a view and present the data as if the data were coming from one single table.
CREATE VIEW Syntax
CREATE VIEW view_name AS
SELECT column1, column2, ...
FROM table_name
WHERE condition;
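As a concrete example, assuming an Employee table with empno, ename and deptno columns (the names are illustrative), a view restricted to one department could be created and queried as follows:

CREATE VIEW sales_staff AS
SELECT empno, ename, deptno
FROM Employee
WHERE deptno = 30;

SELECT * FROM sales_staff;  -- queried like an ordinary table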
There are 2 types of Views in SQL: Simple View and Complex View. Simple views can only contain a single base table. Complex views can be constructed on more than one base table. In particular, complex views can contain: join conditions, a GROUP BY clause, an ORDER BY clause.
The key differences between these types of Views are:
SIMPLE VIEW                                       COMPLEX VIEW

Contains only one single base table or is         Contains more than one base table or is
created from only one table.                      created from more than one table.

We cannot use group functions like MAX(),         We can use group functions.
COUNT(), etc.

Does not contain groups of data.                  It can contain groups of data.

DML operations can be performed through a         DML operations cannot always be
simple view.                                      performed through a complex view.

INSERT, DELETE and UPDATE are directly            We cannot apply INSERT, DELETE and
possible on a simple view.                        UPDATE on a complex view directly.

Does not contain GROUP BY, DISTINCT,              It can contain GROUP BY, DISTINCT,
pseudocolumns like ROWNUM, or columns             pseudocolumns like ROWNUM, and columns
defined by expressions.                           defined by expressions.

Does not include NOT NULL columns from            NOT NULL columns that are not selected by
base tables.                                      the simple view can be included in the
                                                  complex view.

Views over Tables:-


Table : An RDBMS object which can store data.
View : An RDBMS object which is a product of multiple tables; every time you run a "SELECT *" on a view, a query runs in the background to fetch the desired results.
Table : You can add/update/delete data in a table.
View : You cannot add/update/delete any data from a view; to make any changes to the view, you will have to update the data in the source tables that are used to create the view.
Table : You can only create or drop the table; basically you cannot replace the table object directly, as it is a physical entry in the RDBMS storage.
View : You can easily use the REPLACE option to recreate the view, as it is just a pseudo name for the query which runs behind it.
Table : DML operations can be performed.
View : DML operations are not allowed, for the above mentioned reasons.

7. Explain Aggregate functions with example. (2014)


Aggregate Functions are all about
 Performing calculations on multiple rows
 Of a single column of a table
 And returning a single value.
The ISO standard defines five (5) aggregate functions namely;
1) COUNT
2) SUM
3) AVG
4) MIN
5) MAX

Aggregate functions.
From a business perspective, different organization levels have different information requirements. Top-level managers are usually interested in knowing whole figures and not necessarily the individual details.
Aggregate functions allow us to easily produce summarized data from our database.
For instance, from our myflix database, management may require the following reports:
 Least rented movies.
 Most rented movies.
 Average number of times each movie is rented out in a month.
We can easily produce the above reports using aggregate functions.

Let's look into aggregate functions in detail.


COUNT Function
The COUNT function returns the total number of values in the specified field. It works on both numeric and non-numeric data types. All aggregate functions by default exclude null values before working on the data.
COUNT (*) is a special implementation of the COUNT function that returns the count of all the rows in a specified table. COUNT (*) also considers Nulls and duplicates.

MIN function
The MIN function returns the smallest value in the specified table field.
As an example, let's suppose we want to know the year in which the oldest movie in our library was released; we can use MySQL's MIN function to get the desired information.

MAX function
Just as the name suggests, the MAX function is the opposite of the MIN function. It returns the
largest value from the specified table field.
Let's assume we want to get the year that the latest movie in our database was released. We can
easily use the MAX function to achieve that.

SUM function
Suppose we want a report that gives total amount of payments made so far. We can use the
MySQL SUM function which returns the sum of all the values in the specified column. SUM works on
numeric fields only. Null values are excluded from the result returned.

AVG function
MySQL AVG function returns the average of the values in a specified column. Just like the SUM
function, it works only on numeric data types.
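Minimal sketches of the five functions, assuming movies(year_released) and payments(amount_paid) tables from the myflix example (the column names are assumptions):

SELECT COUNT(*) FROM movies;             -- total number of movies
SELECT MIN(year_released) FROM movies;   -- year of the oldest movie
SELECT MAX(year_released) FROM movies;   -- year of the latest movie
SELECT SUM(amount_paid) FROM payments;   -- total amount of payments made so far
SELECT AVG(amount_paid) FROM payments;   -- average payment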

8. Explain the types of constraints using suitable example? Give the condition when the
constraints are implemented as table level only. (2014)
SQL constraints are used to specify rules for the data in a table.
Constraints are used to limit the type of data that can go into a table. This ensures the accuracy and
reliability of the data in the table. If there is any violation between the constraint and the data
action, the action is aborted.
Constraints can be column level or table level. Column level constraints apply to a column, and table
level constraints apply to the whole table.
The types of constraints that you can apply at the table level are as follows:
 Primary Key—Requires that a column (or combination of columns) be the unique identifier of the row. A primary key column does not allow NULL values.
 Unique Key—Requires that no two rows can have duplicate values in a specified column or
combination of columns. The set of columns is considered to be a unique key.
 Check—Requires that a column (or combination of columns) satisfy a condition for every row
in the table. A check constraint must be a Boolean expression. It is evaluated each time that a
row is inserted or updated. An example of a check constraint is: SALARY > 0.
 Foreign Key—Requires that for a particular column (or combination of columns), all column
values in the child table exist in the parent table. The table that includes the foreign key is
called the dependent or child table. The table that is referenced by the foreign key is called
the parent table. An example of a foreign key constraint is where the department column of
the employees table must contain a department ID that exists in the parent department table.
Constraints can be created and usually modified with different statuses. The options include
enabled or disabled, which determine if the constraint is checked when rows are added or
modified, and deferred or immediate, which cause constraint validation to occur at the end of
a transaction or at the end of a statement, respectively.

9. Explain using example how constraints are defined in Alter table command. (2014)
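Constraints can be added to an existing table with ALTER TABLE ... ADD CONSTRAINT. A sketch reusing the Employee/Department names from above (the constraint and column names are illustrative):

ALTER TABLE Employee
  ADD CONSTRAINT pk_emp PRIMARY KEY (empno);

ALTER TABLE Employee
  ADD CONSTRAINT fk_emp_dept FOREIGN KEY (deptno) REFERENCES Department(deptno);

ALTER TABLE Employee
  ADD CONSTRAINT chk_salary CHECK (salary > 0);

ALTER TABLE Employee DROP CONSTRAINT chk_salary;  -- removing a constraint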

10. What are cursors? Explain the attributes of cursor. (2014)


In SQL procedures, a cursor makes it possible to define a result set (a set of data rows) and perform complex logic on a row-by-row basis. Using the same mechanics, an SQL procedure can also define a result set and return it directly to the caller of the SQL procedure or to a client application.
A cursor can be viewed as a pointer to one row in a set of rows. The cursor can only reference one row at a time, but can move to other rows of the result set as needed.

There are mainly 2 types of cursors :


1) Implicit Cursor.
2) Explicit Cursor.

Implicit cursor: Oracle implicitly creates an area for DML operations. The programmer does not have control over implicit cursors. The only useful attribute on an implicit cursor is SQL%ROWCOUNT, which gives the number of rows affected by the most recent DML operation.
The only implicit cursor is SQL.

Explicit Cursor:
Explicit cursors are created by the programmer, and the programmer has control over them. The programmer can
1) Open
2) Close
3) Fetch
and do some manipulations on the values.

Explicit Cursors are classified into


1) Normal cursor
2) Parameterized cursor
3) Cursor For Loops and
4) REF cursors

REF Cursors:
Normally when we create a normal cursor, we cannot change the SELECT query associated with that cursor (the query which is given at the time of definition). But using REF cursors, we can change the cursor statement also. REF cursors are useful when we are sending data from one environment to another.

CURSOR ATTRIBUTES :
a) %ISOPEN: evaluates to true if the cursor is open.
b) %NOTFOUND: evaluates to true if the most recent fetch does not return a row.
c) %FOUND: evaluates to true if the most recent fetch returns a row.
d) %ROWCOUNT: evaluates to the total number of rows returned so far.

To use cursors in SQL procedures, you need to do the following:


 Declare a cursor that defines a result set.
 Open the cursor to establish the result set.
 Fetch the data into local variables as needed from the cursor, one row at a time.
 Close the cursor when done

To work with cursors you must use the following SQL statements:
DECLARE CURSOR
OPEN
FETCH
CLOSE
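Putting these statements together, a minimal sketch of an explicit cursor over an assumed emp(empno, ename) table, using the attributes listed above:

DECLARE
  CURSOR c_emp IS SELECT empno, ename FROM emp;  -- DECLARE the cursor
  v_empno emp.empno%TYPE;
  v_ename emp.ename%TYPE;
BEGIN
  OPEN c_emp;                                    -- establish the result set
  LOOP
    FETCH c_emp INTO v_empno, v_ename;           -- fetch one row at a time
    EXIT WHEN c_emp%NOTFOUND;                    -- cursor attribute ends the loop
    dbms_output.put_line(v_empno || ' ' || v_ename);
  END LOOP;
  dbms_output.put_line('Rows fetched: ' || c_emp%ROWCOUNT);
  CLOSE c_emp;                                   -- CLOSE when done
END;
/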

11. Write down the important features of PL/SQL (2014)


Features of PL/SQL
PL/SQL has the following features −
 PL/SQL is tightly integrated with SQL.
 It offers extensive error checking.
 It offers numerous data types.
 It offers a variety of programming structures.
 It supports structured programming through functions and procedures.
 It supports object-oriented programming.
 It supports the development of web applications and server pages.

12. Explain all of the following with SQL examples. (2013) a. DROP Index b. DROP Trigger c. DROP Procedure d. DROP clause e. TRUNCATE Table
a. Indexes, tables, and databases can easily be deleted/removed with the DROP statement.
The DROP INDEX Statement
The DROP INDEX statement is used to delete an index in a table.
DROP INDEX Syntax for MS Access:
DROP INDEX index_name ON table_name
DROP INDEX Syntax for MS SQL Server:
DROP INDEX table_name.index_name
DROP INDEX Syntax for DB2/Oracle:
DROP INDEX index_name
DROP INDEX Syntax for MySQL:
ALTER TABLE table_name DROP INDEX index_name

The DROP TABLE Statement


The DROP TABLE statement is used to delete a table.
DROP TABLE table_name

The DROP DATABASE Statement


The DROP DATABASE statement is used to delete a database.
DROP DATABASE database_name

The TRUNCATE TABLE Statement


What if we only want to delete the data inside the table, and not the table itself?
Then, use the TRUNCATE TABLE statement:
TRUNCATE TABLE table_name

b. DROP TRIGGER
Purpose
Use the DROP TRIGGER statement to remove a database trigger from the database.
Prerequisites
The trigger must be in your own schema or you must have the DROP ANY TRIGGER system privilege.
To drop a trigger on DATABASE in another user's schema, you must also have
the ADMINISTER DATABASE TRIGGER system privilege.
Syntax
DROP TRIGGER [schema.]trigger_name;


DROP PROCEDURE
Purpose
Use the DROP PROCEDURE statement to remove a standalone stored procedure from the database. Do not use this statement to remove a procedure that is part of a package. Instead, either drop the entire package using the DROP PACKAGE statement, or redefine the package without the procedure using the CREATE PACKAGE statement with the OR REPLACE clause.
Prerequisites
The procedure must be in your own schema or you must have the DROP ANY PROCEDURE system
privilege.
Syntax
DROP PROCEDURE [schema.]procedure_name;

DROP is used to delete a whole database or just a table. The DROP statement destroys objects like an existing database, table, index, or view.
A DROP statement in SQL removes a component from a relational database management system
(RDBMS).
Syntax:
DROP object object_name

Examples:
DROP TABLE table_name;
table_name: Name of the table to be deleted.

DROP DATABASE database_name;


database_name: Name of the database to be deleted.

13. Write the SQL command for grant and revoke. Also explain the role of stored procedure in
DBMS.(2013)
DCL commands are used to enforce database security in a multi-user database environment. Two types of DCL commands are:
• Grant
• Revoke
Database Administrators or owners of a database object can provide/remove privileges on that database object.

SQL Grant Command


• SQL GRANT command is used to provide access or privileges on the database objects to the users.
• The syntax for the GRANT command is:
GRANT privilege_name ON object_name TO {user_name | PUBLIC | role_name} [WITH GRANT OPTION];
Here, privilege_name is the access right or privilege granted to the user.
object_name is the name of the database object like table, view etc.
user_name is the name of the user to whom an access right is being granted. PUBLIC is used to grant rights to all the users. WITH GRANT OPTION allows users to grant access rights to other users.

SQL Revoke Command


• The revoke command removes user access rights or privileges to the database objects.
• The syntax for the REVOKE command is: REVOKE privilege_name ON object_name FROM
{User_name | PUBLIC | Role_name}
• For Example:
(a) GRANT SELECT ON employee TO user1 This command grants a SELECT permission on employee
table to user1.
(b) REVOKE SELECT ON employee FROM user1 This command will revoke a SELECT privilege on
employee table from user1.

Stored procedures have been viewed as the de facto standard for applications to access and
manipulate database information through the use of codified methods, or “procedures.” This is
largely due to what they offer developers: the opportunity to couple the set-based power of SQL
with the iterative and conditional processing control of code development. Developers couldn’t be
happier about this; finally, instead of writing inline SQL and then attempting to manipulate the data
from within the code, developers could take advantage of:
 Familiar Coding Principles
 Iterative Loops
 Conditionals
 Method Calls (the stored procedure itself is built and similarly called like a method)
 One-time, One-place Processing
 Instead of having inline SQL code spread throughout the application, now sections of SQL code
can be encapsulated into chunks of named methods that are easily identifiable and accessible
all within one location – the “Stored Procedure” folder of the database.
 All complex data processing can now be performed on the server, allowing the client
processing to focus more on presentation rather than manipulation of data.

14. Explain with SQL example. (2013) a. Cursor b. Package c. Views


a. Oracle creates a memory area, known as the context area, for processing an SQL statement. A cursor is a pointer to this context area. PL/SQL controls the context area through a cursor. A cursor holds the rows (one or more) returned by a SQL statement. The set of rows the cursor holds is referred to as the active set.
You can name a cursor so that it could be referred to in a program to fetch and process the rows
returned by the SQL statement, one at a time. There are two types of cursors −
 Implicit cursors
 Explicit cursors

Implicit Cursors
Implicit cursors are automatically created by Oracle whenever an SQL statement is executed, when
there is no explicit cursor for the statement. Programmers cannot control the implicit cursors and
the information in it.

Explicit Cursors
Explicit cursors are programmer-defined cursors for gaining more control over the context area. An
explicit cursor should be defined in the declaration section of the PL/SQL Block. It is created on a
SELECT Statement which returns more than one row.

b. Package
A package is a schema object that groups logically related PL/SQL types, variables, constants,
subprograms, cursors, and exceptions. A package is compiled and stored in the database, where
many applications can share its contents.
A package always has a specification, which declares the public items that can be referenced from
outside the package.

If the public items include cursors or subprograms, then the package must also have a body. The
body must define queries for public cursors and code for public subprograms. The body can also
declare and define private items that cannot be referenced from outside the package, but are
necessary for the internal workings of the package. Finally, the body can have an initialization part,
whose statements initialize variables and do other one-time setup steps, and an exception-handling
part. You can change the body without changing the specification or the references to the public
items; therefore, you can think of the package body as a black box.
In either the package specification or package body, you can map a package subprogram to an
external Java or C subprogram by using a call specification, which maps the external subprogram
name, parameter types, and return type to their SQL counterparts.

c. Views in SQL are a kind of virtual table. A view also has rows and columns like a real table in the database. We can create a view by selecting fields from one or more tables present in the database. A view can either have all the rows of a table or specific rows based on certain conditions.
A view is nothing more than a SQL statement that is stored in the database with an associated name.
A view is actually a composition of a table in the form of a predefined SQL query.
A view can contain all rows of a table or select rows from a table. A view can be created from one or
many tables which depends on the written SQL query to create a view.
Views, which are a type of virtual tables allow users to do the following −
 Structure data in a way that users or classes of users find natural or intuitive.
 Restrict access to the data in such a way that a user can see and (sometimes) modify exactly
what they need and no more.
 Summarize data from various tables which can be used to generate reports.

15. Discuss the error handling in PL/SQL (2013)


Errors and Exception Handling
An exception is a PL/SQL error that is raised during program execution, either implicitly by TimesTen
or explicitly by your program. Handle an exception by trapping it with a handler or propagating it to
the calling environment.
Exception types
There are three types of exceptions:
 Predefined exceptions are error conditions that are defined by PL/SQL.
 Non-predefined exceptions include any standard TimesTen errors.
 User-defined exceptions are exceptions specific to your application.

In TimesTen, these three types of exceptions are used in the same way as in Oracle Database.
Exception Description How to handle
Predefined One of approximately 20 You are not required to declare these exceptions.
TimesTen error errors that occur most often They are predefined by TimesTen. TimesTen
in PL/SQL code implicitly raises the error.

Non-predefined Any other standard These must be declared in the declarative section
TimesTen error TimesTen error of your application. TimesTen implicitly raises the
error and you can use an exception handler to
catch the error.

User-defined Error defined and raised by These must be declared in the declarative section.
error the application The developer raises the exception explicitly.

18. Write a short note on Package Procedure. (2013)


PL/SQL package is a group of related functions, procedures, types, cursors, etc. A PL/SQL package is like a library: once written, it is stored in the Oracle database and can be used by many applications.
A PL/SQL package has two parts: package specification and package body.
 A package specification is the public interface of your applications. The public means the
stored function, procedures, types, etc., are accessible from other applications.
 A package body contains the code that implements the package specification.

Creating PL/SQL Package Specification
The package specification is required when you create a new package. The package specification lists
all the objects which are publicly accessible from other applications. The package specification also
provides the information that developers need to know in order to use the interface. In short,
package specification is the package’s API.

If the package specification does not contain any stored functions or procedures and no private code is needed, you do not need a package body. Such packages may contain only type definitions and variable declarations. Those variables are known as package data. The scope of package data is global to applications. It is recommended that you hide as much package data as possible and use get and set functions to read and write that data. By doing this, you can prevent your package data from being changed unintentionally. It is important to note that you must compile the package specification before the package body.

19. What are the different control Structures supported in PL/SQL? Explain. (2012)
PL/SQL Control Structures
Procedural computer programs use the basic control structures.

 The selection structure tests a condition, then executes one sequence of statements instead of
another, depending on whether the condition is true or false. A condition is any variable or
expression that returns a BOOLEAN value (TRUE or FALSE).
 The iteration structure executes a sequence of statements repeatedly as long as a condition
holds true.
 The sequence structure simply executes a sequence of statements in the order in which they
occur.

Testing Conditions: IF and CASE Statements


The IF statement executes a sequence of statements depending on the value of a condition. There
are three forms of IF statements: IF-THEN, IF-THEN-ELSE, and IF-THEN-ELSIF.
The CASE statement is a compact way to evaluate a single condition and choose between many
alternative actions. It makes sense to use CASE when there are three or more alternatives to choose
from.

 Using the IF-THEN Statement


The simplest form of IF statement associates a condition with a sequence of statements enclosed by the keywords THEN and END IF (not ENDIF).
The sequence of statements is executed only if the condition is TRUE. If the condition is FALSE or
NULL, the IF statement does nothing. In either case, control passes to the next statement.

 Using CASE Statements


Like the IF statement, the CASE statement selects one sequence of statements to execute. However,
to select the sequence, the CASE statement uses a selector rather than multiple Boolean
expressions. A selector is an expression whose value is used to select one of several alternatives.
 Using the EXIT Statement
The EXIT statement forces a loop to complete unconditionally. When an EXIT statement is
encountered, the loop completes immediately and control passes to the next statement.

 Using the EXIT-WHEN Statement


The EXIT-WHEN statement lets a loop complete conditionally. When the EXIT statement is
encountered, the condition in the WHEN clause is evaluated. If the condition is true, the loop
completes and control passes to the next statement after the loop.

 Labeling a PL/SQL Loop


Like PL/SQL blocks, loops can be labeled. The optional label, an undeclared identifier enclosed by
double angle brackets, must appear at the beginning of the LOOP statement. The label name can
also appear at the end of the LOOP statement. When you nest labeled loops, use ending label names
to improve readability.

 Using the WHILE-LOOP Statement


The WHILE-LOOP statement executes the statements in the loop body as long as a condition is true:

Using the FOR-LOOP Statement


Simple FOR loops iterate over a specified range of integers. The number of iterations is known
before the loop is entered. A double dot (..) serves as the range operator. The range is evaluated
when the FOR loop is first entered and is never re-evaluated. If the lower bound equals the higher
bound, the loop body is executed once.

Sequential Control: GOTO and NULL Statements


The GOTO statement is seldom needed. Occasionally, it can simplify logic enough to warrant its use.
The NULL statement can improve readability by making the meaning and action of conditional
statements clear.
Overuse of GOTO statements can result in code that is hard to understand and maintain. Use GOTO
statements sparingly. For example, to branch from a deeply nested structure to an error-handling
routine, raise an exception rather than use a GOTO statement.

 Using the GOTO Statement


The GOTO statement branches to a label unconditionally. The label must be unique within its scope
and must precede an executable statement or a PL/SQL block. When executed, the GOTO statement
transfers control to the labeled statement or block. The labeled statement or block can be down or
up in the sequence of statements.

 Using the NULL Statement


The NULL statement does nothing, and passes control to the next statement. Some languages refer
to such an instruction as a no-op (no operation).
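A short block sketching the selection and iteration structures described above (the values are purely illustrative):

DECLARE
  n NUMBER := 5;
BEGIN
  IF n > 3 THEN                      -- IF-THEN-ELSE selection
    dbms_output.put_line('n is greater than 3');
  ELSE
    dbms_output.put_line('n is 3 or less');
  END IF;

  FOR i IN 1..n LOOP                 -- FOR loop over a fixed range
    EXIT WHEN i = 4;                 -- EXIT-WHEN completes the loop early
    dbms_output.put_line('i = ' || i);
  END LOOP;

  WHILE n > 0 LOOP                   -- WHILE loop runs as long as the condition is true
    n := n - 1;
  END LOOP;
END;
/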

20. Explain the error handling in PL/SQL with suitable example. (2012)
Exception Handling in PL/SQL
An exception occurs when the PL/SQL engine encounters an instruction which it cannot execute due to an error that occurs at run-time. These errors are not captured at compile time and hence need to be handled at run-time.
For example, if the PL/SQL engine receives an instruction to divide any number by '0', then the PL/SQL engine will throw it as an exception. The exception is only raised at run-time by the PL/SQL engine.
Exceptions stop the program from executing further, so to avoid such conditions they need to be captured and handled separately. This process is called Exception Handling, in which the programmer handles the exceptions that can occur at run time.

Exception-Handling Syntax
Exceptions are handled at the block, level, i.e., once if any exception occurs in any block then the
control will come out of execution part of that block. The exception will then be handled at the
exception handling part of that block. After handling the exception, it is not possible to resend
control back to the execution section of that block.
The below syntax explains how to catch and handle the exception.
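A general skeleton, followed by a divide-by-zero example (the variable names are illustrative):

DECLARE
  <declarations>
BEGIN
  <executable statements>
EXCEPTION
  WHEN <exception_name> THEN
    <handler statements>
  WHEN OTHERS THEN
    <handler statements>
END;

DECLARE
  result NUMBER;
BEGIN
  result := 10 / 0;                  -- raises the predefined ZERO_DIVIDE exception
EXCEPTION
  WHEN ZERO_DIVIDE THEN
    dbms_output.put_line('Division by zero is not allowed');
  WHEN OTHERS THEN
    dbms_output.put_line('Some other error occurred');
END;
/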

21. Write Short notes on a. SQL stored procedure b. DDL and DML (2012)

22. Describe the various commands of SQL with suitable example.(2012)


Structured Query Language (SQL) is the database language by the use of which we can perform certain operations on an existing database, and we can also use this language to create a database. SQL uses certain commands like CREATE, DROP, INSERT, etc. to carry out the required tasks. These SQL commands are mainly categorized into four categories as discussed below:

1. DDL (Data Definition Language): DDL or Data Definition Language actually consists of the SQL commands that can be used to define the database schema. It simply deals with descriptions of the database schema and is used to create and modify the structure of database objects in the database.
Examples of DDL commands:
 CREATE – is used to create the database or its objects (like table, index, function, views, stored procedures and triggers).
 DROP – is used to delete objects from the database.
 ALTER – is used to alter the structure of the database.
 TRUNCATE – is used to remove all records from a table, including all spaces allocated for the records.
 COMMENT – is used to add comments to the data dictionary.
 RENAME – is used to rename an object existing in the database.

2. DML (Data Manipulation Language): The SQL commands that deal with the manipulation of data present in the database belong to DML or Data Manipulation Language, and this includes most of the SQL statements.
Examples of DML:
 SELECT – is used to retrieve data from a database.
 INSERT – is used to insert data into a table.
 UPDATE – is used to update existing data within a table.
 DELETE – is used to delete records from a database table.
3. DCL (Data Control Language): DCL includes commands such as GRANT and REVOKE which mainly deal with the rights, permissions and other controls of the database system.
Examples of DCL commands:
 GRANT – gives users access privileges to the database.
 REVOKE – withdraws the user's access privileges given by using the GRANT command.

4. TCL (Transaction Control Language): TCL commands deal with transactions within the database.
Examples of TCL commands:
 COMMIT – commits a transaction.
 ROLLBACK – rollbacks a transaction in case any error occurs.
 SAVEPOINT – sets a savepoint within a transaction.
 SET TRANSACTION – specifies characteristics for the transaction.

23. What is SQL? Explain the languages and commands of SQL with suitable examples.

24. Write any five functions of Oracle and SQL with the help of suitable examples.
Oracle Built in Functions
There are two types of functions in Oracle.
1) Single Row Functions: Single row or Scalar functions return a value for every row that is
processed in a query.
2) Group Functions: These functions group the rows of data based on the values returned by the
query. This is discussed in SQL GROUP Functions. The group functions are used to calculate
aggregate values like total or average, which return just one total or one average value after
processing a group of rows.

There are four types of single row functions. They are:


1) Numeric Functions: These are functions that accept numeric input and return numeric values.
2) Character or Text Functions: These are functions that accept character input and can return both
character and number values.
3) Date Functions: These are functions that take values that are of datatype DATE as input and
return values of datatype DATE, except for the MONTHS_BETWEEN function, which returns a
number.
4) Conversion Functions: These are functions that help us to convert a value in one form to another
form. For Example: a null value into an actual value, or a value from one datatype to another
datatype like NVL, TO_CHAR, TO_NUMBER, TO_DATE etc.
You can combine more than one function together in an expression. This is known as nesting of
functions.
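Sketches of five commonly used single row functions, one from each category above plus an extra (the results shown in comments are what these calls return):

SELECT ROUND(125.456, 1) FROM dual;                                     -- 125.5 (numeric)
SELECT UPPER('oracle') FROM dual;                                       -- 'ORACLE' (character)
SELECT MONTHS_BETWEEN(DATE '2014-03-01', DATE '2014-01-01') FROM dual;  -- 2 (date)
SELECT TO_CHAR(SYSDATE, 'DD-MON-YYYY') FROM dual;                       -- current date as text (conversion)
SELECT NVL(NULL, 'N/A') FROM dual;                                      -- 'N/A' (replaces null with a value)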

25. Explain ORACLE Transaction with the help of suitable examples.


A transaction is a unit of work that is performed against a database. Transactions are units or
sequences of work accomplished in a logical order, whether in a manual fashion by a user or
automatically by some sort of a database program.
A transaction is the propagation of one or more changes to the database. For example, if you are
creating a record or updating a record or deleting a record from the table, then you are performing a
transaction on that table. It is important to control these transactions to ensure the data integrity
and to handle database errors.
Practically, you will club many SQL queries into a group and you will execute all of them together as
a part of a transaction.
Properties of Transactions
Transactions have the following four standard properties, usually referred to by the acronym ACID.
 Atomicity − ensures that all operations within the work unit are completed successfully. Otherwise, the transaction is aborted at the point of failure and all the previous operations are rolled back to their former state.
 Consistency − ensures that the database properly changes states upon a successfully committed transaction.
 Isolation − enables transactions to operate independently of and transparent to each other.
 Durability − ensures that the result or effect of a committed transaction persists in case of a system failure.

Transaction Control
The following commands are used to control transactions.
 COMMIT − to save the changes.
 ROLLBACK − to roll back the changes.
 SAVEPOINT − creates points within the groups of transactions to which you can ROLLBACK.
 SET TRANSACTION − places a name on a transaction.

Transactional Control Commands


Transactional control commands are only used with the DML Commands such as - INSERT, UPDATE
and DELETE only. They cannot be used while creating tables or dropping them because these
operations are automatically committed in the database.

The COMMIT Command


The COMMIT command is the transactional command used to save changes invoked by a transaction to the database. The COMMIT command saves all the transactions to the database since the last COMMIT or ROLLBACK command.

The ROLLBACK Command


The ROLLBACK command is the transactional command used to undo transactions that have not
already been saved to the database. This command can only be used to undo transactions since the
last COMMIT or ROLLBACK command was issued.

The SAVEPOINT Command


A SAVEPOINT is a point in a transaction when you can roll the transaction back to a certain point
without rolling back the entire transaction.

The RELEASE SAVEPOINT Command


The RELEASE SAVEPOINT command is used to remove a SAVEPOINT that you have created.

The SET TRANSACTION Command


The SET TRANSACTION command can be used to initiate a database transaction. This command is
used to specify characteristics for the transaction that follows. For example, you can specify a
transaction to be read only or read write.
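A short sketch tying these commands together, assuming an emp(empno, ename, sal) table (the names and values are illustrative):

INSERT INTO emp VALUES (101, 'RAVI', 5000);
SAVEPOINT after_insert;                 -- marker inside the transaction

UPDATE emp SET sal = 6000 WHERE empno = 101;
ROLLBACK TO after_insert;               -- undoes only the UPDATE

COMMIT;                                 -- makes the INSERT permanent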

26. Create the following relations: (2007) Customer (custid, custname); Order (custid, custname, orderid, orderdate); Item (custid, orderid, itemid, itemname, qty, rate, amt). Note: Assumptions can be made; place suitable referential integrity constraints and other constraints such as NOT NULL and UNIQUE.
B) Write syntax for insert, update and delete queries with an example.
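For part B, a minimal sketch using the Customer relation from the question (the sample values are illustrative):

INSERT INTO Customer (custid, custname) VALUES (1, 'Amit');   -- add a row

UPDATE Customer SET custname = 'Amit Kumar' WHERE custid = 1; -- modify the row

DELETE FROM Customer WHERE custid = 1;                        -- remove the row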
27. A) Write a PL/SQL block to illustrate the working of IF-THEN-ELSE. If a number is greater than the other number then it swaps the two numbers, otherwise it doubles them. (2007)
B) Write PL/SQL code to insert a new record in table emp after obtaining values from the user. (2007)
C) Write a PL/SQL block that obtains an empno from the user; if his/her salary is less than 900/- then delete that record from the table. (2007)
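For part A), a minimal sketch with hard-coded sample values (the numbers and variable names are illustrative):

DECLARE
  a NUMBER := 10;
  b NUMBER := 20;
  temp NUMBER;
BEGIN
  IF a > b THEN
    temp := a; a := b; b := temp;   -- swap the two numbers
  ELSE
    a := a * 2; b := b * 2;         -- otherwise double them
  END IF;
  dbms_output.put_line('a = ' || a || ', b = ' || b);
END;
/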

28. What is a package and how can a package be created? Explain it with the help of an example. (The package should contain at least one function and one procedure.) (2007)
PL/SQL packages are schema objects that group logically related PL/SQL types, variables, and subprograms. A PL/SQL package is a logical grouping of related subprograms (procedures/functions) into a single element. A package is compiled and stored as a database object that can be used later.
A package will have two mandatory parts −
 Package specification
 Package body or definition

Package Specification
The specification is the interface to the package. It just DECLARES the types, variables, constants,
exceptions, cursors, and subprograms that can be referenced from outside the package. In other
words, it contains all information about the content of the package, but excludes the code for the
subprograms.
All objects placed in the specification are called public objects. Any subprogram not in the package
specification but coded in the package body is called a private object.
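A sketch of such a package, with one function and one procedure, assuming an emp(empno, ename) table (all names are illustrative):

CREATE OR REPLACE PACKAGE emp_pkg AS
  FUNCTION get_count RETURN NUMBER;                  -- public function
  PROCEDURE add_emp(p_id NUMBER, p_name VARCHAR2);   -- public procedure
END emp_pkg;
/
CREATE OR REPLACE PACKAGE BODY emp_pkg AS
  FUNCTION get_count RETURN NUMBER IS
    n NUMBER;
  BEGIN
    SELECT COUNT(*) INTO n FROM emp;
    RETURN n;
  END get_count;

  PROCEDURE add_emp(p_id NUMBER, p_name VARCHAR2) IS
  BEGIN
    INSERT INTO emp(empno, ename) VALUES (p_id, p_name);
  END add_emp;
END emp_pkg;
/
The subprograms are then called with the package prefix, e.g. emp_pkg.add_emp(102, 'NEHA') and emp_pkg.get_count.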

29. How do we handle error in PL/SQL blocks?


Exception Handling in PL/SQL
An exception occurs when the PL/SQL engine encounters an instruction which it cannot execute due to an error that occurs at run-time. These errors are not captured at compile time and hence need to be handled at run-time.
For example, if the PL/SQL engine receives an instruction to divide any number by '0', then the PL/SQL engine will throw it as an exception. The exception is only raised at run-time by the PL/SQL engine.

Exceptions stop the program from executing further, so to avoid such conditions they need to be captured and handled separately. This process is called Exception Handling, in which the programmer handles the exceptions that can occur at run time.

Exception-Handling Syntax
Exceptions are handled at the block level, i.e., once an exception occurs in a block, control comes out of the execution part of that block. The exception is then handled in the exception handling part of that block. After handling the exception, it is not possible to send control back to the execution section of that block.
The below syntax explains how to catch and handle the exception.
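A sketch of the general form; NO_DATA_FOUND and ZERO_DIVIDE are predefined Oracle exceptions, and WHEN OTHERS catches anything not handled earlier:

DECLARE
  -- declarations
BEGIN
  -- executable statements that may raise an exception
EXCEPTION
  WHEN NO_DATA_FOUND THEN
    dbms_output.put_line('No matching row was found.');
  WHEN ZERO_DIVIDE THEN
    dbms_output.put_line('Attempted to divide by zero.');
  WHEN OTHERS THEN
    dbms_output.put_line('Unexpected error: ' || SQLERRM);
END;
/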
30. Why do we use procedure and function in PL/SQL? Demonstrate with example.
PL/SQL Procedure
The PL/SQL stored procedure or simply a procedure is a PL/SQL block which performs one or more
specific tasks. It is just like procedures in other programming languages.
The procedure contains a header and a body.
o Header: The header contains the name of the procedure and the parameters or variables
passed to the procedure.
o Body: The body contains a declaration section, execution section and exception section similar
to a general PL/SQL block.

How to pass parameters in procedure:


When you want to create a procedure or function, you may have to define parameters. There are three ways to pass parameters to a procedure:
1. IN parameters: The IN parameter can be referenced by the procedure or function. The value of
the parameter cannot be overwritten by the procedure or the function.
2. OUT parameters: The OUT parameter cannot be referenced by the procedure or function, but
the value of the parameter can be overwritten by the procedure or function.
3. INOUT parameters: The INOUT parameter can be referenced by the procedure or function and
the value of the parameter can be overwritten by the procedure or function.
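The INSERTUSER example below uses IN parameters only; for contrast, here is a small sketch of an OUT parameter (GET_USERNAME is a hypothetical procedure against the USERS table created in the next example):

CREATE OR REPLACE PROCEDURE get_username
  (p_id IN NUMBER, p_name OUT VARCHAR2)
IS
BEGIN
  -- the OUT parameter is written by the procedure and read by the caller
  SELECT name INTO p_name FROM users WHERE id = p_id;
END;
/

A caller supplies a variable for p_name and reads its value after the call, e.g. get_username(101, v_name);.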

Create procedure example


In this example, we are going to insert a record in a USERS table, so you need to create the table first (USER is a reserved word in Oracle, so the table is named USERS here).
Table creation:
create table users(id number(10) primary key, name varchar2(100));
Now write the procedure code to insert record in user table.
Procedure Code:
create or replace procedure "INSERTUSER"
(id IN NUMBER,
name IN VARCHAR2)
is
begin
insert into users values(id,name);
end;
/
Output:
Procedure created.
PL/SQL program to call procedure
Let's see the code to call above created procedure.
BEGIN
insertuser(101,'Rahul');
dbms_output.put_line('record inserted successfully');
END;
/

PL/SQL Function
The PL/SQL function is very similar to the PL/SQL procedure. The main difference between a procedure and a function is that a function must always return a value, whereas a procedure may or may not return a value. Apart from this, everything said above about PL/SQL procedures holds for PL/SQL functions too.
Syntax to create a function:
CREATE [OR REPLACE] FUNCTION function_name
[(parameter_name [IN | OUT | IN OUT] type [, ...])]
RETURN return_datatype
{IS | AS}
BEGIN
< function_body >
END [function_name];
Here:
o Function_name: specifies the name of the function.
o [OR REPLACE] option allows modifying an existing function.
o The optional parameter list contains the name, mode and type of each parameter.
o IN represents that a value will be passed from outside, and OUT represents that this parameter will be used to return a value outside of the procedure.
o The function must contain a RETURN statement.
o The RETURN clause specifies the data type you are going to return from the function.
o Function_body contains the executable part.
o The AS keyword is used instead of the IS keyword for creating a standalone function.

PL/SQL Function Example


Let's see a simple example to create a function.
create or replace function adder(n1 in number, n2 in number)
return number
is
n3 number(8);
begin
n3 :=n1+n2;
return n3;
end;
/
Now write another program to call the function.
DECLARE
n3 number(2);
BEGIN
n3 := adder(11,22);
dbms_output.put_line('Addition is: ' || n3);
END;
/
Output:
Addition is: 33

Unit 5
Trigger
1. Define Trigger. How many types of triggers? explain with syntax and example . Give syntax of
following: a. Create a trigger b. Dropping a trigger c. Before Trigger (2017)
Triggers are stored programs that are fired automatically when some event occurs. The code to be fired can be defined as per the requirement.
Oracle also provides the facility to specify the event upon which the trigger needs to be fired and the timing of the execution.

Types of Triggers in Oracle


Triggers can be classified based on the following parameters.
 Classification based on the timing
o BEFORE Trigger: It fires before the specified event has occurred.
o AFTER Trigger: It fires after the specified event has occurred.
o INSTEAD OF Trigger: A special type that fires in place of the triggering statement; it is used mainly on views. (only for DML)
 Classification based on the level
o STATEMENT level Trigger: It fires one time for the specified event statement.
o ROW level Trigger: It fires for each record that got affected by the specified event. (only for DML)
 Classification based on the event
o DML Trigger: It fires when the DML event is specified (INSERT/UPDATE/DELETE).
o DDL Trigger: It fires when the DDL event is specified (CREATE/ALTER).
o DATABASE Trigger: It fires when the database event is specified (LOGON/LOGOFF/STARTUP/SHUTDOWN).

Creating Triggers
The syntax for creating a trigger is −
CREATE [OR REPLACE ] TRIGGER trigger_name
{BEFORE | AFTER | INSTEAD OF }
{INSERT [OR] | UPDATE [OR] | DELETE}
[OF col_name]
ON table_name
[REFERENCING OLD AS o NEW AS n]
[FOR EACH ROW]
WHEN (condition)
DECLARE
Declaration-statements
BEGIN
Executable-statements
EXCEPTION
Exception-handling-statements
END;
Where,
 CREATE [OR REPLACE] TRIGGER trigger_name − Creates or replaces an exis ng trigger with
the trigger_name.
 {BEFORE | AFTER | INSTEAD OF} − This specifies when the trigger will be executed. The INSTEAD OF clause is used for creating a trigger on a view.
 {INSERT [OR] | UPDATE [OR] | DELETE} − This specifies the DML opera on.
 [OF col_name] − This specifies the column name that will be updated.
 [ON table_name] − This specifies the name of the table associated with the trigger.
 [REFERENCING OLD AS o NEW AS n] − This allows you to refer to new and old values for various DML statements, such as INSERT, UPDATE, and DELETE.
 [FOR EACH ROW] − This specifies a row-level trigger, i.e., the trigger will be executed for each
row being affected. Otherwise the trigger will execute just once when the SQL statement is
executed, which is called a table level trigger.
 WHEN (condition) − This provides a condi on for rows for which the trigger would fire. This
clause is valid only for row-level triggers.
Dropping a Trigger
This Oracle tutorial explains how to use the DROP TRIGGER statement to drop a trigger in Oracle with syntax and examples.
Description
Once you have created a trigger in Oracle, you might find that you need to remove it from the database. You can do this with the DROP TRIGGER statement.
Syntax
The syntax to drop a trigger in Oracle/PLSQL is:
DROP TRIGGER trigger_name;

Oracle / PLSQL: BEFORE INSERT Trigger


This Oracle tutorial explains how to create a BEFORE INSERT Trigger in Oracle with syntax and
examples.
Description
A BEFORE INSERT Trigger means that Oracle will fire this trigger before the INSERT operation is
executed.
Syntax
The syntax to create a BEFORE INSERT Trigger in Oracle/PLSQL is:
CREATE [ OR REPLACE ] TRIGGER trigger_name
BEFORE INSERT
ON table_name
[ FOR EACH ROW ]

DECLARE
-- variable declarations

BEGIN
-- trigger code

EXCEPTION
WHEN ...
-- exception handling

END;
Parameters or Arguments
OR REPLACE
Optional. If specified, it allows you to re-create the trigger if it already exists so that you can change the trigger definition without issuing a DROP TRIGGER statement.
trigger_name
The name of the trigger to create.

BEFORE INSERT
It indicates that the trigger will fire before the INSERT operation is executed.
table_name
The name of the table that the trigger is created on.
Restrictions
 You cannot create a BEFORE trigger on a view.
 You can update the :NEW values.
 You cannot update the :OLD values.
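A small sketch consistent with these restrictions; the ORDERS table and its CREATED_BY column are assumed for illustration:

CREATE OR REPLACE TRIGGER orders_before_insert
BEFORE INSERT
ON orders
FOR EACH ROW
BEGIN
  -- a BEFORE trigger may modify the :NEW values before they are stored
  :NEW.created_by := USER;
END;
/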

2. Write short notes on any two of the following: a. Oracle Transactions. b. Database Triggers c.
Declarative integrity constraints d. BEFORE vs. AFTER Triggers.
Declarative integrity constraints
Constraints are a mechanism provided within the DDL SQL standard to maintain the consistency and integrity of a database and, at the same time, enforce certain business rules in the database application. There are five different types of declarative constraints in SQL that can be defined on a database column within a table, and they are as follows:
 PRIMARY KEY
 NOT NULL
 UNIQUE
 CHECK
 FOREIGN KEY

The PRIMARY KEY constraint
The PRIMARY KEY constraint is used to maintain the so-called entity integrity. When such a constraint is declared on a column of a table, the DBMS enforces the following rules:
• The column value must be unique within the table.
• The value must exist for any tuple (a record or a row of data) that is to be stored in the table.

The NOT NULL constraint
The NOT NULL constraint is imposed on any column that must have a value. In the STUDENT table, for example, the attributes DNAME and SLEVEL can have this constraint declared on them to reflect the application requirement that whenever a student is enrolled, he/she must be assigned to a department and be at a certain level.

The UNIQUE constraint
The UNIQUE constraint is the same as the PRIMARY KEY constraint, except NULL values are allowed. In the STUDENT table, for example, the SEMAIL attribute should have this constraint. The reason is that according to the university's policy, a student may or may not be given an email account. However, when one is given, the email account name must be unique.

The CHECK constraint
The CHECK constraint defines a discrete list of values that a column can have. This list of values may be literally expressed within the constraint declaration or may be defined using a mathematical expression. In the STUDENT table, for example, a student must be at a level between 0 and 3.

The FOREIGN KEY constraint
We saw in earlier chapters, when introducing the Relational model, that entities are often linked by a one-to-many relationship. For example, a department may contain many employees, so we say there is a one-to-many relationship between instances of the department entity and instances of the employee entity. Entities related in this way are sometimes referred to as parents and children; in the example above, the parent entity would be the department table, and the employee entity would be the child table.
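A single CREATE TABLE sketch showing all five constraint types, loosely based on the STUDENT example above and assuming a DEPARTMENT table whose DNAME column is its primary key (all names and sizes are illustrative):

CREATE TABLE student (
  sno    NUMBER(6)    PRIMARY KEY,                       -- entity integrity
  sname  VARCHAR2(50) NOT NULL,                          -- value required
  semail VARCHAR2(80) UNIQUE,                            -- unique when given, NULL allowed
  slevel NUMBER(1)    CHECK (slevel BETWEEN 0 AND 3),    -- allowed range
  dname  VARCHAR2(30) NOT NULL,
  FOREIGN KEY (dname) REFERENCES department(dname)       -- parent-child link
);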

Difference Between Before and After Trigger in MySQL


Definition
Before Trigger is a type of trigger that automatically executes before a certain operation occurs on
the table. In contrast, after trigger is a type of trigger that automatically executes after a certain
operation occurs on the table. Hence, these definitions explain the fundamental difference between
before and after trigger in MySQL.
Usage
Usually, the use of Before triggers is to perform validation before accepting data to the table and to
check the values before deleting them from the table. But, usually, the use of After triggers is to
update data in a table due to an occurred change. Hence, the main difference between before and
after trigger in MySQL is where we use them.
Example
In a banking application, before trigger helps to check the values before deleting them while after
trigger helps to update the balance in the accounts table.
Conclusion
The main difference between before and after trigger in MySQL is that Before trigger performs an
action before a certain operation executes on the table while After trigger performs an action after a
certain operation executes on the table.

3. Write down the utility of database triggers. How are triggers different from procedures in ORACLE? Explain the types of triggers. (2014)
utility of database triggers
Triggers supplement the standard capabilities of your database to provide a highly customized
database management system. For example, you can use triggers to:
 Automatically generate derived column values
 Enforce referential integrity across nodes in a distributed database
 Enforce complex business rules
 Provide transparent event logging
 Provide auditing
 Maintain synchronous table replicates
 Gather statistics on table access
 Modify table data when DML statements are issued against views
 Publish information about database events, user events, and SQL statements to subscribing
applications
 Restrict DML operations against a table to those issued during regular business hours
 Enforce security authorizations
 Prevent invalid transactions

Differences between a Stored Procedure and a Trigger


1. We can execute a stored procedure whenever we want with the help of the exec command,
but a trigger can only be executed whenever an event (insert, delete, and update) is fired on
the table on which the trigger is defined.
2. We can call a stored procedure from inside another stored procedure but we can't directly call
another trigger within a trigger. We can only achieve nesting of triggers in which the action
(insert, delete, and update) defined within a trigger can initiate execution of another trigger
defined on the same table or a different table.
3. Stored procedures can be scheduled through a job to execute on a predefined time, but we
can't schedule a trigger.
4. Stored procedure can take input parameters, but we can't pass parameters as input to a
trigger.
5. Stored procedures can return values but a trigger cannot return a value.
6. We can use Print commands inside a stored procedure for debugging purposes but we can't
use print commands inside a trigger.
7. We can use transaction statements like begin transaction, commit transaction, and rollback
inside a stored procedure but we can't use transaction statements inside a trigger.
8. We can call a stored procedure from the front end (.asp files, .aspx files, .ascx files, etc.) but we
can't call a trigger from these files.
9. Stored procedures are used for performing tasks: they are normally used for performing user-specified tasks, can have parameters, and can return multiple result sets.
10. Triggers are used for auditing work: they are normally used for auditing and can be used to trace the activities of table events.

Types of Triggers
Depending upon when a trigger is fired, it may be classified as:
 Statement-level trigger
 Row-level trigger
 Before triggers
 After triggers

Statement-level Triggers
A statement trigger is fired only once for a DML statement, irrespective of the number of
rows affected by the statement.
For example, if you execute the following UPDATE command on the STUDENTS table, the
statement trigger for UPDATE is executed only once.
update students set bcode='b3'
where bcode='b2';

However, statement triggers cannot be used to access the data that is being inserted,
updated or deleted. In other words, they do not have access to the keywords NEW and OLD,
which are used to access data.
Statement-level triggers are typically used to enforce rules that are not related to data. For
example, it is possible to implement a rule that says "nobody can modify the BATCHES table
after 9 P.M.". The statement-level trigger is the default type of trigger; a sketch of such a rule follows.
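A possible sketch of that 9 P.M. rule as a statement-level trigger (no FOR EACH ROW clause; the BATCHES table, trigger name and error number are assumed):

CREATE OR REPLACE TRIGGER batches_office_hours
BEFORE INSERT OR UPDATE OR DELETE ON batches
BEGIN
  -- fires once per statement, regardless of how many rows it touches
  IF TO_NUMBER(TO_CHAR(SYSDATE, 'HH24')) >= 21 THEN
    RAISE_APPLICATION_ERROR(-20001, 'BATCHES cannot be modified after 9 P.M.');
  END IF;
END;
/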

Row-level Trigger
A row trigger is fired once for each row that is affected by a DML command. For example, if
an UPDATE command updates 100 rows, a row-level trigger is fired 100 times, whereas a
statement-level trigger is fired only once.

Row-level triggers are used to check the validity of the data. They are typically used to
implement rules that cannot be implemented by integrity constraints.
Row-level triggers are implemented by using the option FOR EACH ROW in the CREATE
TRIGGER statement, as in the sketch below.
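A possible sketch of a row-level validation trigger, assuming a STUDENTS table with a MARKS column (names are illustrative):

CREATE OR REPLACE TRIGGER students_check_marks
BEFORE INSERT OR UPDATE OF marks ON students
FOR EACH ROW
BEGIN
  -- fires once per affected row and can inspect the :NEW values
  IF :NEW.marks NOT BETWEEN 0 AND 100 THEN
    RAISE_APPLICATION_ERROR(-20002, 'Marks must be between 0 and 100.');
  END IF;
END;
/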

Before Triggers
While defining a trigger, you can specify whether the trigger is to be fired before the command
(INSERT, DELETE, and UPDATE) is executed or after the command is executed.
Before triggers are commonly used to check the validity of the data before the action is
performed. For instance, you can use a before trigger to prevent deletion of a row if deletion
should not be allowed in the given case.

AFTER Triggers
After triggers are fired after the triggering action is completed. For example, if an AFTER
trigger is associated with the INSERT command, it is fired after the row is inserted into the table.
4. Write syntax for creating a trigger. (2014)
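The general CREATE TRIGGER syntax appears under Question 1 above; a minimal concrete example (the EMP table and column names are illustrative):

CREATE OR REPLACE TRIGGER emp_insert_trg
AFTER INSERT ON emp
FOR EACH ROW
BEGIN
  dbms_output.put_line('New employee ' || :NEW.empno || ' inserted.');
END;
/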

5. Write short note on (2013) a. BEFORE vs. AFTER Trigger Combination b. Triggers vs. Declarative
Integrity Constraint
BEFORE vs. AFTER Trigger Combination
The following program creates a row-level BEFORE trigger for the CUSTOMERS table that fires for INSERT, UPDATE or DELETE operations performed on the table. The trigger displays the salary difference between the old and new values −
CREATE OR REPLACE TRIGGER display_salary_changes
BEFORE DELETE OR INSERT OR UPDATE ON customers
FOR EACH ROW
WHEN (NEW.ID > 0)
DECLARE
sal_diff number;
BEGIN
sal_diff := :NEW.salary - :OLD.salary;
dbms_output.put_line('Old salary: ' || :OLD.salary);
dbms_output.put_line('New salary: ' || :NEW.salary);
dbms_output.put_line('Salary difference: ' || sal_diff);
END;
/
When the above code is executed at the SQL prompt, it produces the following result −
Trigger created.

The following points need to be considered here −


 OLD and NEW references are not available for table-level triggers, rather you can use them for
record-level triggers.
 If you want to query the table in the same trigger, then you should use the AFTER keyword,
because triggers can query the table or change it again only after the initial changes are
applied and the table is back in a consistent state.
 The above trigger has been written in such a way that it will fire before any DELETE or INSERT
or UPDATE operation on the table, but you can write your trigger on a single or multiple
operations, for example BEFORE DELETE, which will fire whenever a record will be deleted
using the DELETE operation on the table.

Triggering a Trigger
Let us perform some DML operations on the CUSTOMERS table. Here is one INSERT statement,
which will create a new record in the table −
INSERT INTO CUSTOMERS (ID,NAME,AGE,ADDRESS,SALARY)
VALUES (8, 'sam', 22, 'ajmer', 7800.00 );
When a record is created in the CUSTOMERS table, the above trigger display_salary_changes will be fired and it will display the following result (for an INSERT the :OLD values are null, so the old salary and the difference print as empty) −
Old salary:
New salary: 7800
Salary difference:

Triggers vs. Declarative Integrity Constraints

Trigger: A trigger is a piece of code which gets automatically executed upon occurrence of an event. It may not be meant for enforcing integrity.
Constraint: An integrity constraint defines basic rules for a table's columns.

Trigger: A database trigger is a procedure written in PL/SQL that runs implicitly when data is modified or when some user or system action occurs.
Constraint: An integrity constraint defines a business rule for a table column which Oracle enforces automatically. The integrity constraints are NOT NULL, UNIQUE, CHECK, PRIMARY KEY and FOREIGN KEY.

Trigger: A trigger does not apply to data loaded before the definition of the trigger; therefore, it does not guarantee that all data in a table conforms to the rules established by an associated trigger.
Constraint: A declarative integrity constraint is a statement about the database that is always true. A constraint applies to existing data in the table and to any statement that manipulates the table.

Trigger: Triggers are similar to stored procedures.
Constraint: Integrity constraints serve several purposes in database design, implementation, and run-time.

6. Explain the following: (2012) c. Database trigger d. Statement trigger e. Before trigger
7. What are Database Triggers? Explain the use and type of database triggers.(2011)
8. What is a trigger? Write the types of triggers and give examples of BEFORE and AFTER triggers. (2007)
For the definition and types of triggers, see the answer to Question 1 above; for example code, the BEFORE/AFTER trigger program given under Question 5 above applies here as well.
