
DATABASE MANAGEMENT SYSTEMS

II B.Tech II Semester

UNIT-V PPT SLIDES

1
Transaction Management:
The ACID Properties
Transactions and Schedules
Concurrent Execution of Transactions
Lock Based Concurrency Control.
Concurrency Control:
Serializability
Recoverability
Introduction to Lock Management
Crash Recovery:
Introduction to ARIES
The Log
Other Recovery-related Structures
The Write-Ahead Log Protocol
Checkpointing
Recovering from a System Crash.

2
Transaction Concept

• A transaction is a unit of program execution that
accesses and possibly updates various data items.
• E.g. transaction to transfer $50 from account A to
account B:
1. read(A)
2. A := A – 50
3. write(A)
4. read(B)
5. B := B + 50
6. write(B)

• Two main issues to deal with:
– Failures of various kinds, such as hardware
failures and system crashes
– Concurrent execution of multiple transactions
3
Example of Fund Transfer
• Transaction to transfer $50 from account A to account B:
1. read(A)
2. A := A – 50
3. write(A)
4. read(B)
5. B := B + 50
6. write(B)
• Atomicity requirement
– if the transaction fails after step 3 and before step 6, money will be
“lost” leading to an inconsistent database state
• Failure could be due to software or hardware
– the system should ensure that updates of a partially executed
transaction are not reflected in the database
• Durability requirement — once the user has been notified that the
transaction has completed (i.e., the transfer of the $50 has taken place), the
updates to the database by the transaction must persist even if there are
software or hardware failures.
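
These two requirements are exactly what commit/rollback delivers in practice. Below is a minimal sketch in Python using the standard-library sqlite3 module; the bank.db file and the account table are illustrative assumptions, not part of the slides.

import sqlite3

conn = sqlite3.connect("bank.db")  # illustrative database file
conn.execute("CREATE TABLE IF NOT EXISTS account (name TEXT PRIMARY KEY, balance REAL)")
conn.execute("INSERT OR IGNORE INTO account VALUES ('A', 100), ('B', 100)")
conn.commit()

try:
    # the six steps of the transfer run inside a single transaction
    conn.execute("UPDATE account SET balance = balance - 50 WHERE name = 'A'")
    conn.execute("UPDATE account SET balance = balance + 50 WHERE name = 'B'")
    conn.commit()    # durability: once commit returns, the transfer persists
except Exception:
    conn.rollback()  # atomicity: a failure between the two updates undoes both
    raise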
4
Example of Fund Transfer (Cont.)
• Transaction to transfer $50 from account A to account B:
1. read(A)
2. A := A – 50
3. write(A)
4. read(B)
5. B := B + 50
6. write(B)
• Consistency requirement in above example:
– the sum of A and B is unchanged by the execution of the transaction
• In general, consistency requirements include
• Explicitly specified integrity constraints such as primary keys and
foreign keys
• Implicit integrity constraints
– e.g. sum of balances of all accounts, minus sum of loan amounts
must equal value of cash-in-hand
– A transaction must see a consistent database.
– During transaction execution the database may be temporarily
inconsistent.
– When the transaction completes successfully the database must be
consistent
• Erroneous transaction logic can lead to inconsistency
5
Example of Fund Transfer (Cont.)

• Isolation requirement — if between steps 3 and 6, another
transaction T2 is allowed to access the partially updated database,
it will see an inconsistent database (the sum A + B will be less than
it should be).

T1                            T2
1. read(A)
2. A := A – 50
3. write(A)
                              read(A), read(B), print(A+B)
4. read(B)
5. B := B + 50
6. write(B)
• Isolation can be ensured trivially by running transactions serially
– that is, one after the other.
• However, executing multiple transactions concurrently has
significant benefits, as we will see later.
6
ACID Properties
A transaction is a unit of program execution that accesses and
possibly updates various data items. To preserve the integrity of
data, the database system must ensure:
• Atomicity. Either all operations of the transaction are properly
reflected in the database or none are.
• Consistency. Execution of a transaction in isolation preserves the
consistency of the database.
• Isolation. Although multiple transactions may execute concurrently,
each transaction must be unaware of other concurrently executing
transactions. Intermediate transaction results must be hidden from
other concurrently executed transactions.
– That is, for every pair of transactions Ti and Tj, it appears to Ti that
either Tj finished execution before Ti started, or Tj started
execution after Ti finished.
• Durability. After a transaction completes successfully, the changes it
has made to the database persist, even if there are system failures.

7
Transaction State
• Active – the initial state; the transaction stays in this state while it
is executing
• Partially committed – after the final statement has been executed.
• Failed -- after the discovery that normal execution can no longer
proceed.
• Aborted – after the transaction has been rolled back and the
database restored to its state prior to the start of the transaction.
Two options after it has been aborted:
– restart the transaction
• can be done only if no internal logical error
– kill the transaction
• Committed – after successful completion.

8
Transaction State (Cont.)

9
Implementation of Atomicity and Durability
• The recovery-management component of a database system implements the
support for atomicity and durability.
• E.g. the shadow-database scheme:
– all updates are made on a shadow copy of the database
• db_pointer is made to point to the updated shadow copy after
– the transaction reaches partial commit and
– all updated pages have been flushed to disk.

10
Implementation of Atomicity and Durability (Cont.)
• db_pointer always points to the current consistent copy of
the database.
– In case transaction fails, old consistent copy pointed to by
db_pointer can be used, and the shadow copy can be
deleted.
• The shadow-database scheme:
– Assumes that only one transaction is active at a time.
– Assumes disks do not fail
– Useful for text editors, but
• extremely inefficient for large databases (why?)
– Variant called shadow paging reduces copying of
data, but is still not practical for large databases
– Does not handle concurrent transactions
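
A minimal sketch of the db_pointer idea, using JSON files as the database copies and an atomic rename as the pointer swing. All file names are illustrative, and the sketch assumes a single transaction at a time (one-shot naming, no disk failures), just as the slide does.

import json, os, shutil

POINTER = "db_pointer"           # file whose one line names the current db copy

if not os.path.exists(POINTER):  # illustrative initial state
    json.dump({"A": 100, "B": 100}, open("db_v0.json", "w"))
    open(POINTER, "w").write("db_v0.json")

def run_transaction(update):
    current = open(POINTER).read().strip()
    shadow = "db_v1.json"                  # the shadow copy (illustrative name)
    shutil.copy(current, shadow)           # updates never touch the current copy
    db = json.load(open(shadow))
    update(db)                             # transaction body mutates the shadow
    with open(shadow, "w") as f:
        json.dump(db, f); f.flush(); os.fsync(f.fileno())  # flush updated pages
    with open(POINTER + ".tmp", "w") as f:
        f.write(shadow); f.flush(); os.fsync(f.fileno())
    os.replace(POINTER + ".tmp", POINTER)  # atomic db_pointer swing = commit

def transfer(db):
    db["A"] -= 50; db["B"] += 50

run_transaction(transfer)
print(open(POINTER).read())      # db_v1.json now holds the committed state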
11
Concurrent Executions
• Multiple transactions are allowed to run concurrently in the system.
Advantages are:
– increased processor and disk utilization, leading to better
transaction throughput
• E.g. one transaction can be using the CPU while another is
reading from or writing to the disk
– reduced average response time for transactions: short
transactions need not wait behind long ones.
• Concurrency control schemes – mechanisms to achieve isolation
– that is, to control the interaction among the concurrent
transactions in order to prevent them from destroying the
consistency of the database
• Will study in Chapter 16, after studying notion of correctness
of concurrent executions.
12
Schedules
• Schedule – a sequence of instructions that specify the
chronological order in which instructions of concurrent
transactions are executed
– a schedule for a set of transactions must consist of all
instructions of those transactions
– must preserve the order in which the instructions
appear in each individual transaction.
• A transaction that successfully completes its execution will
have a commit instruction as the last statement
– by default transaction assumed to execute commit
instruction as its last step
• A transaction that fails to successfully complete its
execution will have an abort instruction as the last
statement
13
Schedule 1
• Let T1 transfer $50 from A to B, and T2 transfer 10% of the balance from A to B.
• A serial schedule in which T1 is followed by T2 :

14
Schedule 2

• A serial schedule where T2 is followed by T1

15
Schedule 3
• Let T1 and T2 be the transactions defined previously. The following schedule is
not a serial schedule, but it is equivalent to Schedule 1.

In Schedules 1, 2 and 3, the sum A + B is preserved.


16
Schedule 4
• The following concurrent schedule does not
preserve the value of (A + B ).

17
Serializability
• Basic Assumption – Each transaction preserves database
consistency.
• Thus serial execution of a set of transactions preserves database
consistency.
• A (possibly concurrent) schedule is serializable if it is equivalent to
a serial schedule. Different forms of schedule equivalence give
rise to the notions of:
1. conflict serializability 2. view serializability
• Simplified view of transactions
– We ignore operations other than read and write instructions
– We assume that transactions may perform arbitrary
computations on data in local buffers in between reads and
writes.
– Our simplified schedules consist of only read and write
instructions.
18
Conflicting Instructions

• Instructions li and lj of transactions Ti and Tj respectively
conflict if and only if there exists some item Q accessed by
both li and lj, and at least one of these instructions wrote Q.
1. li = read(Q), lj = read(Q). li and lj don’t conflict.
2. li = read(Q), lj = write(Q). They conflict.
3. li = write(Q), lj = read(Q). They conflict.
4. li = write(Q), lj = write(Q). They conflict.
• Intuitively, a conflict between li and lj forces a
(logical) temporal order between them.
– If li and lj are consecutive in a schedule and they do not
conflict, their results would remain the same even if
they had been interchanged in the schedule.
19
Conflict Serializability

• If a schedule S can be transformed into a schedule S´ by
a series of swaps of non-conflicting instructions, we say
that S and S´ are conflict equivalent.
• We say that a schedule S is conflict serializable if it is
conflict equivalent to a serial schedule

20
Conflict Serializability (Cont.)
• Schedule 3 can be transformed into Schedule 6, a serial
schedule where T2 follows T1, by series of swaps of non-
conflicting instructions.
– Therefore Schedule 3 is conflict serializable.

Schedule 3 Schedule 6
21
Conflict Serializability (Cont.)

• Example of a schedule that is not conflict serializable:

• We are unable to swap instructions in the above
schedule to obtain either the serial schedule < T3, T4 >,
or the serial schedule < T4, T3 >.
22
View Serializability
• Let S and S´ be two schedules with the same set of transactions. S and
S´ are view equivalent if the following three conditions are met, for
each data item Q,
1. If in schedule S, transaction Ti reads the initial value of Q, then in
schedule S’ also transaction Ti must read the initial value of Q.
2. If in schedule S transaction Ti executes read(Q), and that value was
produced by transaction Tj (if any), then in schedule S’ also
transaction Ti must read the value of Q that was produced by the
same write(Q) operation of transaction Tj .
3. The transaction (if any) that performs the final write(Q) operation
in schedule S must also perform the final write(Q) operation in
schedule S’.
As can be seen, view equivalence is also based purely on reads
and writes.
23
View Serializability (Cont.)
• A schedule S is view serializable if it is view equivalent to a
serial schedule.
• Every conflict serializable schedule is also view serializable.
• Below is a schedule which is view-serializable but not conflict
serializable.

• What serial schedule is the schedule above equivalent to?


• Every view serializable schedule that is not conflict serializable
has blind writes.

24
Other Notions of Serializability
• The schedule below produces the same outcome as the
serial schedule < T1, T5 >, yet is not conflict equivalent
or view equivalent to it.

• Determining such equivalence requires analysis of
operations other than read and write.
25
Recoverable Schedules
Need to address the effect of transaction failures on concurrently
running transactions.
• Recoverable schedule — if a transaction Tj reads a data item
previously written by a transaction Ti, then the commit
operation of Ti appears before the commit operation of Tj.
• The following schedule (Schedule 11) is not recoverable if T9
commits immediately after the read:

• If T8 should abort, T9 would have read (and possibly shown to the user)
an inconsistent database state. Hence, the database must ensure that
schedules are recoverable.
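
Recoverability of a finished schedule can be checked mechanically. A small sketch, assuming a schedule is a list of (op, txn, item) events with op one of 'r', 'w', 'c' (commit, item ignored), and that a read reads the most recent earlier write:

def is_recoverable(schedule):
    last_writer = {}             # item -> txn that wrote it most recently
    reads_from = set()           # (reader, writer) dependencies
    committed = []               # commit order
    for op, txn, item in schedule:
        if op == "w":
            last_writer[item] = txn
        elif op == "r" and item in last_writer and last_writer[item] != txn:
            reads_from.add((txn, last_writer[item]))
        elif op == "c":
            committed.append(txn)
    pos = {t: i for i, t in enumerate(committed)}
    # violation: a reader commits although its writer has not committed earlier
    return not any(r in pos and (w not in pos or pos[w] > pos[r])
                   for r, w in reads_from)

# Schedule 11 flavor: T9 reads A written by T8, then commits before T8 does
print(is_recoverable([("w", "T8", "A"), ("r", "T9", "A"), ("c", "T9")]))  # False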
26
Cascading Rollbacks

• Cascading rollback – a single transaction failure leads to a
series of transaction rollbacks. Consider the following
schedule where none of the transactions has yet
committed (so the schedule is recoverable)

• If T10 fails, T11 and T12 must also be rolled back.


• Can lead to the undoing of a significant amount of work
27
Concurrency Control

• A database must provide a mechanism that will
ensure that all possible schedules are
– either conflict or view serializable, and
– recoverable and preferably cascadeless
• A policy in which only one transaction can
execute at a time generates serial schedules,
but provides a poor degree of concurrency
– Are serial schedules recoverable/cascadeless?
• Testing a schedule for serializability after it has
executed is a little too late!
• Goal – to develop concurrency control
protocols that will assure serializability.
28
Concurrency Control vs. Serializability Tests
• Concurrency-control protocols allow concurrent schedules,
but ensure that the schedules are conflict/view serializable,
and are recoverable and cascadeless .
• Concurrency control protocols generally do not examine
the precedence graph as it is being created
– Instead a protocol imposes a discipline that avoids
nonserializable schedules.
– We study such protocols in Chapter 16.
• Different concurrency control protocols provide different
tradeoffs between the amount of concurrency they allow
and the amount of overhead that they incur.
• Tests for serializability help us understand why a
concurrency control protocol is correct.

29
Levels of Consistency in SQL-92
• Serializable — default
• Repeatable read — only committed records to be read,
repeated reads of same record must return same value.
However, a transaction may not be serializable – it may find
some records inserted by a transaction but not find others.
• Read committed — only committed records can be read,
but successive reads of record may return different (but
committed) values.
• Read uncommitted — even uncommitted records may be
read.

• Lower degrees of consistency are useful for gathering approximate
information about the database
• Warning: some database systems do not ensure serializable schedules
by default
– E.g. Oracle and PostgreSQL by default support a level of
consistency called snapshot isolation (not part of the SQL
standard)
30
Transaction Definition in SQL
• Data manipulation language must include a
construct for specifying the set of actions that
comprise a transaction.
• In SQL, a transaction begins implicitly.
• A transaction in SQL ends by:
– Commit work commits current transaction and begins
a new one.
– Rollback work causes current transaction to abort.
• In almost all database systems, by default, every
SQL statement also commits implicitly if it
executes successfully
– Implicit commit can be turned off by a database
directive
• E.g. in JDBC, connection.setAutoCommit(false);
31
Implementation of Isolation
• Schedules must be conflict or view serializable, and
recoverable, for the sake of database consistency,
and preferably cascadeless.
• A policy in which only one transaction can execute
at a time generates serial schedules, but provides a
poor degree of concurrency.
• Concurrency-control schemes tradeoff between the
amount of concurrency they allow and the amount
of overhead that they incur.
• Some schemes allow only conflict-serializable
schedules to be generated, while others allow view-
serializable schedules that are not conflict-
serializable.

32
Figure 15.6

33
Testing for Serializability

• Consider some schedule of a set of transactions T1,
T2, ..., Tn
• Precedence graph — a directed graph where the
vertices are the transactions (names).
• We draw an arc from Ti to Tj if the two transactions
conflict, and Ti accessed the data item on which the
conflict arose earlier.
• We may label the arc by the item that was accessed.
• Example: a single arc from T1 to T2, labeled x.
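
A schedule is conflict serializable if and only if its precedence graph is acyclic, which is easy to test. A sketch, assuming a schedule is a chronological list of (txn, op, item) triples:

from collections import defaultdict

def precedence_graph(schedule):
    edges = defaultdict(set)
    for i, (ti, op_i, q_i) in enumerate(schedule):
        for tj, op_j, q_j in schedule[i + 1:]:
            # conflict: same item, different txns, at least one write
            if q_i == q_j and ti != tj and "w" in (op_i, op_j):
                edges[ti].add(tj)      # arc Ti -> Tj: Ti accessed Q first
    return edges

def is_conflict_serializable(schedule):
    edges = precedence_graph(schedule)
    seen, on_stack = set(), set()
    def has_cycle(u):
        seen.add(u); on_stack.add(u)
        for v in edges[u]:
            if v in on_stack or (v not in seen and has_cycle(v)):
                return True
        on_stack.discard(u)
        return False
    return not any(has_cycle(t) for t in list(edges) if t not in seen)

# a Schedule 3 style interleaving: no cycle, hence conflict serializable
print(is_conflict_serializable(
    [("T1", "r", "A"), ("T1", "w", "A"), ("T2", "r", "A"), ("T2", "w", "A"),
     ("T1", "r", "B"), ("T1", "w", "B"), ("T2", "r", "B"), ("T2", "w", "B")]))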

34
Example Schedule (Schedule A) + Precedence Graph
[Figure: Schedule A, a schedule of reads and writes of items U, V, W, X, Y, Z by transactions T1–T5, together with its precedence graph over T1, T2, T3, T4, T5.]
35
Lock-Based Protocols

• A lock is a mechanism to control concurrent
access to a data item
• Data items can be locked in two modes:
1. exclusive (X) mode. Data item can be both
read as well as written. X-lock is requested
using the lock-X instruction.
2. shared (S) mode. Data item can only be read.
S-lock is requested using the lock-S instruction.
• Lock requests are made to concurrency-control
manager. Transaction can proceed only after
request is granted.
36
Lock-Based Protocols (Cont.)

• Lock-compatibility matrix:

        S      X
  S   true   false
  X   false  false

• A transaction may be granted a lock on an item if the requested lock is
compatible with locks already held on the item by other transactions
• Any number of transactions can hold shared locks on an item,
– but if any transaction holds an exclusive lock on the item, no other
transaction may hold any lock on the item.
• If a lock cannot be granted, the requesting transaction is made to wait till all
incompatible locks held by other transactions have been released. The lock is
then granted.

37
Lock-Based Protocols (Cont.)
• Example of a transaction performing locking:
T2: lock-S(A);
read (A);
unlock(A);
lock-S(B);
read (B);
unlock(B);
display(A+B)
• Locking as above is not sufficient to guarantee
serializability — if A and B get updated in-between the
read of A and B, the displayed sum would be wrong.
• A locking protocol is a set of rules followed by all
transactions while requesting and releasing locks.
Locking protocols restrict the set of possible
schedules.
38
Pitfalls of Lock-Based Protocols
• Consider the partial schedule below, in which T3 holds lock-X(B)
and T4 holds lock-S(A):

• Neither T3 nor T4 can make progress — executing lock-S(B) causes T4
to wait for T3 to release its lock on B, while executing lock-X(A) causes
T3 to wait for T4 to release its lock on A.
• Such a situation is called a deadlock.
– To handle a deadlock one of T3 or T4 must be rolled back
and its locks released.
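
Deadlock can be detected by finding a cycle in a waits-for graph. A minimal sketch, simplified so that each transaction waits for at most one other:

def find_deadlock(waits_for):
    """waits_for: dict mapping a txn to the txn it is blocked on."""
    for start in waits_for:
        seen, path, t = set(), [], start
        while t is not None and t not in seen:
            seen.add(t); path.append(t)
            t = waits_for.get(t)
        if t is not None:            # revisited a txn: a cycle exists
            return path[path.index(t):]
    return None

# T4 waits for T3 (lock on B) and T3 waits for T4 (lock on A)
print(find_deadlock({"T3": "T4", "T4": "T3"}))   # ['T3', 'T4']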

39
Pitfalls of Lock-Based Protocols (Cont.)

• The potential for deadlock exists in most
locking protocols. Deadlocks are a necessary
evil.
• Starvation is also possible if concurrency
control manager is badly designed. For
example:
– A transaction may be waiting for an X-lock on an
item, while a sequence of other transactions
request and are granted an S-lock on the same
item.
– The same transaction is repeatedly rolled back
due to deadlocks.
• Concurrency control manager can be
designed to prevent starvation.
40
The Two-Phase Locking Protocol
• This is a protocol which ensures conflict-serializable
schedules.
• Phase 1: Growing Phase
– transaction may obtain locks
– transaction may not release locks
• Phase 2: Shrinking Phase
– transaction may release locks
– transaction may not obtain locks
• The protocol assures serializability. It can be proved
that the transactions can be serialized in the order of
their lock points (i.e. the point where a transaction
acquired its final lock).
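
The two phases can be stated as a tiny per-transaction state machine. A sketch that shows only the phase discipline (lock conflicts between transactions are ignored here):

class TwoPhaseTxn:
    """Tracks growing/shrinking phases; rejects any lock after the first unlock."""
    def __init__(self, name):
        self.name, self.locks, self.shrinking = name, set(), False

    def lock(self, item):
        if self.shrinking:
            raise RuntimeError(f"{self.name}: 2PL violation, lock({item}) after unlock")
        self.locks.add(item)         # growing phase: acquiring is allowed

    def unlock(self, item):
        self.shrinking = True        # lock point has passed; shrinking phase
        self.locks.discard(item)

t = TwoPhaseTxn("T1")
t.lock("A"); t.lock("B")             # growing
t.unlock("A")                        # shrinking begins
try:
    t.lock("C")                      # not allowed under two-phase locking
except RuntimeError as e:
    print(e)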

41
The Two-Phase Locking Protocol (Cont.)

• Two-phase locking does not ensure freedom from
deadlocks
• Cascading roll-back is possible under two-phase
locking. To avoid this, follow a modified protocol
called strict two-phase locking. Here a transaction
must hold all its exclusive locks till it
commits/aborts.
• Rigorous two-phase locking is even stricter: here
all locks are held till commit/abort. In this protocol
transactions can be serialized in the order in which
they commit.

42
The Two-Phase Locking Protocol (Cont.)
• There can be conflict serializable schedules that
cannot be obtained if two-phase locking is used.
• However, in the absence of extra information
(e.g., ordering of access to data), two-phase
locking is needed for conflict serializability in the
following sense:
Given a transaction Ti that does not follow two-
phase locking, we can find a transaction Tj that
uses two-phase locking, and a schedule for Ti and
Tj that is not conflict serializable.

43
Lock Conversions
• Two-phase locking with lock conversions:
– First Phase:
– can acquire a lock-S on item
– can acquire a lock-X on item
– can convert a lock-S to a lock-X (upgrade)
– Second Phase:
– can release a lock-S
– can release a lock-X
– can convert a lock-X to a lock-S (downgrade)
• This protocol assures serializability. But still
relies on the programmer to insert the various
locking instructions.
44
Automatic Acquisition of Locks
• A transaction Ti issues the standard read/write
instruction, without explicit locking calls.
• The operation read(D) is processed as:
if Ti has a lock on D
then
read(D)
else begin
if necessary wait until no other
transaction has a lock-X on D
grant Ti a lock-S on D;
read(D)
end
45
Automatic Acquisition of Locks (Cont.)
• write(D) is processed as:
if Ti has a lock-X on D
then
write(D)
else begin
if necessary wait until no other trans. has any
lock on D,
if Ti has a lock-S on D
then
upgrade lock on D to lock-X
else
grant Ti a lock-X on D
write(D)
end;
• All locks are released after commit or abort
46
Implementation of Locking
• A lock manager can be implemented as a separate
process to which transactions send lock and unlock
requests
• The lock manager replies to a lock request by sending a
lock-grant message (or a message asking the
transaction to roll back, in case of a deadlock)
• The requesting transaction waits until its request is
answered
• The lock manager maintains a data-structure called a
lock table to record granted locks and pending
requests
• The lock table is usually implemented as an in-memory
hash table indexed on the name of the data item being
locked
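
A minimal in-memory lock table along those lines: a hash table from item name to a FIFO queue of requests, where a new request is granted only if it is compatible with all earlier requests in the queue (as described on the next slide). The names and queue representation are illustrative.

from collections import defaultdict, deque

def compatible(m1, m2):
    return m1 == "S" and m2 == "S"     # the lock-compatibility matrix

class LockManager:
    def __init__(self):
        # item -> FIFO queue of [txn, mode, granted]
        self.table = defaultdict(deque)

    def lock(self, txn, item, mode):
        q = self.table[item]
        granted = all(compatible(mode, m) for _, m, _ in q)
        q.append([txn, mode, granted])
        return granted                 # False: the transaction must wait

    def unlock(self, txn, item):
        q = deque(e for e in self.table[item] if e[0] != txn)
        for i, entry in enumerate(q):  # re-check waiters in FIFO order
            if not entry[2]:
                entry[2] = all(compatible(entry[1], q[j][1]) for j in range(i))
        self.table[item] = q

lm = LockManager()
print(lm.lock("T1", "A", "S"))   # True
print(lm.lock("T2", "A", "X"))   # False: incompatible with T1's S-lock
lm.unlock("T1", "A")             # T2's request is now granted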

47
Lock Table
• Black rectangles indicate granted locks,
white ones indicate waiting requests
• Lock table also records the type of lock
granted or requested
• New request is added to the end of the
queue of requests for the data item,
and granted if it is compatible with all
earlier locks
• Unlock requests result in the request
being deleted, and later requests are
checked to see if they can now be
granted
• If a transaction aborts, all waiting or granted
requests of the transaction are deleted
– lock manager may keep a list of
locks held by each transaction, to
implement this efficiently

48
Graph-Based Protocols
• Graph-based protocols are an alternative to two-
phase locking
• Impose a partial ordering → on the set D = {d1,
d2 ,..., dh} of all data items.
– If di → dj then any transaction accessing both di and dj
must access di before accessing dj.
– Implies that the set D may now be viewed as a
directed acyclic graph, called a database graph.
• The tree-protocol is a simple kind of graph
protocol.

49
Timestamp-Based Protocols
• Each transaction is issued a timestamp when it enters
the system. If an old transaction Ti has time-stamp
TS(Ti), a new transaction Tj is assigned time-stamp TS(Tj)
such that TS(Ti) <TS(Tj).
• The protocol manages concurrent execution such that
the time-stamps determine the serializability order.
• In order to assure such behavior, the protocol maintains
for each data Q two timestamp values:
– W-timestamp(Q) is the largest time-stamp of any
transaction that executed write(Q) successfully.
– R-timestamp(Q) is the largest time-stamp of any
transaction that executed read(Q) successfully.

50
Timestamp-Based Protocols (Cont.)

• The timestamp-ordering protocol ensures that
any conflicting read and write operations are
executed in timestamp order.
• Suppose a transaction Ti issues a read(Q)
1. If TS(Ti) < W-timestamp(Q), then Ti needs to read
a value of Q that was already overwritten.
 Hence, the read operation is rejected, and Ti is rolled
back.
2. If TS(Ti) ≥ W-timestamp(Q), then the read
operation is executed, and R-timestamp(Q) is set
to max(R-timestamp(Q), TS(Ti)).
51
Timestamp-Based Protocols (Cont.)

• Suppose that transaction Ti issues write(Q).
1. If TS(Ti) < R-timestamp(Q), then the value of Q that Ti is
producing was needed previously, and the system
assumed that that value would never be produced.
 Hence, the write operation is rejected, and Ti is rolled back.
2. If TS(Ti) < W-timestamp(Q), then Ti is attempting to
write an obsolete value of Q.
 Hence, this write operation is rejected, and Ti is rolled back.
3. Otherwise, the write operation is executed, and W-
timestamp(Q) is set to TS(Ti).
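
The read and write rules translate directly into code. A sketch keeping R- and W-timestamps in dictionaries, with 0 as the initial timestamp of every item:

class TimestampScheduler:
    def __init__(self):
        self.rts = {}   # R-timestamp(Q)
        self.wts = {}   # W-timestamp(Q)

    def read(self, ts, q):
        if ts < self.wts.get(q, 0):
            return "rollback"        # Ti would read an already-overwritten value
        self.rts[q] = max(self.rts.get(q, 0), ts)
        return "ok"

    def write(self, ts, q):
        if ts < self.rts.get(q, 0):
            return "rollback"        # the value Ti produces was needed earlier
        if ts < self.wts.get(q, 0):
            return "rollback"        # Ti is writing an obsolete value
        self.wts[q] = ts
        return "ok"

s = TimestampScheduler()
print(s.write(2, "Q"))   # ok: W-timestamp(Q) becomes 2
print(s.read(1, "Q"))    # rollback: TS(T1) < W-timestamp(Q)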

52
Example Use of the Protocol

A partial schedule for several data items for transactions with
timestamps 1, 2, 3, 4, 5:

[Figure: a partial schedule of reads and writes of X, Y, and Z by T1–T5; two of the transactions violate the timestamp-ordering rules and are aborted.]

53
Recovery and Atomicity

• Modifying the database without ensuring that
the transaction will commit may leave the
database in an inconsistent state.
• Consider transaction Ti that transfers $50 from
account A to account B; the goal is either to
perform all database modifications made by Ti
or none at all.
• Several output operations may be required for
Ti (to output A and B). A failure may occur
after one of these modifications has been
made but before all of them are made.
54
Recovery and Atomicity (Cont.)

• To ensure atomicity despite failures, we first
output information describing the
modifications to stable storage, without
modifying the database itself.
• We study two approaches:
– log-based recovery, and
– shadow-paging
• We assume (initially) that transactions run
serially, that is, one after the other.

55
Recovery Algorithms
• Recovery algorithms are techniques to ensure
database consistency and transaction
atomicity and durability despite failures
– Focus of this chapter
• Recovery algorithms have two parts
1. Actions taken during normal transaction
processing to ensure enough information exists to
recover from failures
2. Actions taken after a failure to recover the
database contents to a state that ensures
atomicity, consistency and durability
56
Log-Based Recovery
• A log is kept on stable storage.
– The log is a sequence of log records, and maintains a record of
update activities on the database.
• When transaction Ti starts, it registers itself by writing a
<Ti start>log record
• Before Ti executes write(X), a log record <Ti, X, V1, V2> is written,
where V1 is the value of X before the write, and V2 is the value to be
written to X.
– Log record notes that Ti has performed a write on data item Xj.
Xj had value V1 before the write, and will have value V2 after the
write.
• When Ti finishes its last statement, the log record <Ti commit> is
written.
• We assume for now that log records are written directly to stable
storage (that is, they are not buffered)
• Two approaches using logs
– Deferred database modification
– Immediate database modification
57
Deferred Database Modification
• The deferred database modification scheme records
all modifications to the log, but defers all the writes to
after partial commit.
• Assume that transactions execute serially
• Transaction starts by writing <Ti start> record to log.
• A write(X) operation results in a log record <Ti, X, V>
being written, where V is the new value for X
– Note: old value is not needed for this scheme
• The write is not performed on X at this time, but is
deferred.
• When Ti partially commits, <Ti commit> is written to
the log
• Finally, the log records are read and used to actually
execute the previously deferred writes.

58
Deferred Database Modification (Cont.)
• During recovery after a crash, a transaction needs to be
redone if and only if both <Ti start> and <Ti commit> are
there in the log.
• Redoing a transaction Ti (redo(Ti)) sets the value of all data
items updated by the transaction to the new values.
• Crashes can occur while
– the transaction is executing the original updates, or
– while recovery action is being taken
• Example transactions T0 and T1 (T0 executes before T1):
T0: read(A)            T1: read(C)
    A := A – 50            C := C – 100
    write(A)               write(C)
    read(B)
    B := B + 50
    write(B)
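
Recovery under deferred modification is then a single forward pass: redo Ti iff both <Ti start> and <Ti commit> are in the log. A sketch, with log records as tuples; the concrete balances are purely illustrative (A = 1000, B = 2000, C = 700 initially):

def recover_deferred(log):
    """Redo every transaction with both <start> and <commit> in the log."""
    committed = {rec[1] for rec in log if rec[0] == "commit"}
    db = {}
    for rec in log:
        if rec[0] == "write" and rec[1] in committed:
            _, _, item, new_value = rec
            db[item] = new_value          # redo: install the new value
    return db

# case (c) below: both T0 and T1 committed before the crash
log = [("start", "T0"), ("write", "T0", "A", 950), ("write", "T0", "B", 2050),
       ("commit", "T0"),
       ("start", "T1"), ("write", "T1", "C", 600), ("commit", "T1")]
print(recover_deferred(log))   # {'A': 950, 'B': 2050, 'C': 600}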

59
Deferred Database Modification (Cont.)
• Below we show the log as it appears at three instants of
time.

[Figure: the log at three instants — (a) before <T0 commit> is written, (b) with <T0 commit> but before T1 completes, (c) with both <T0 commit> and <T1 commit>.]

• If the log on stable storage at the time of crash is as in case:


(a) No redo actions need to be taken
(b) redo(T0) must be performed since <T0 commit> is present
(c) redo(T0) must be performed followed by redo(T1) since
<T0 commit> and <T1 commit> are present
60
Immediate Database Modification
• The immediate database modification scheme allows
database updates of an uncommitted transaction to be made
as the writes are issued
– since undoing may be needed, update logs must have both
old value and new value
• Update log record must be written before database item is
written
– We assume that the log record is output directly to stable
storage
– Can be extended to postpone log record output, so long as,
prior to execution of an output(B) operation for a data
block B, all log records corresponding to items in B have
been flushed to stable storage

61
Checkpoints
• Problems in the recovery procedure as discussed
earlier:
1. searching the entire log is time-consuming
2. we might unnecessarily redo transactions which have
already output their updates to the database
• Streamline recovery procedure by periodically
performing checkpointing
1. Output all log records currently residing in main
memory onto stable storage.
2. Output all modified buffer blocks to the disk.
3. Write a log record < checkpoint> onto stable storage.
62
Checkpoints (Cont.)
• During recovery we need to consider only the most recent transaction
Ti that started before the checkpoint, and transactions that started
after Ti.
1. Scan backwards from end of log to find the most recent
<checkpoint> record
2. Continue scanning backwards till a record <Ti start> is found.
3. Need only consider the part of log following above start record.
Earlier part of log can be ignored during recovery, and can be
erased whenever desired.
4. For all transactions (starting from Ti or later) with no <Ti commit>,
execute undo(Ti). (Done only in case of immediate modification.)
5. Scanning forward in the log, for all transactions starting from Ti
or later with a <Ti commit>, execute redo(Ti).
63
Example of Checkpoints
[Figure: timeline with a checkpoint at time Tc and a system failure at time Tf. T1 completes before the checkpoint; T2 and T3 commit between the checkpoint and the failure; T4 is still active when the failure occurs.]

• T1 can be ignored (updates already output to disk due to
checkpoint)
• T2 and T3 redone
• T4 undone
64
Recovery With Concurrent Transactions
• We modify the log-based recovery schemes to allow multiple
transactions to execute concurrently.
– All transactions share a single disk buffer and a single log
– A buffer block can have data items updated by one or
more transactions
• We assume concurrency control using strict two-phase
locking;
– i.e. the updates of uncommitted transactions should not
be visible to other transactions
• Otherwise how to perform undo if T1 updates A, then
T2 updates A and commits, and finally T1 has to abort?
• Logging is done as described earlier.
– Log records of different transactions may be interspersed in the
log.

65
Recovery With Concurrent Transactions (Cont.)
• The checkpointing technique and actions taken on
recovery have to be changed
– since several transactions may be active when a
checkpoint is performed.

• Checkpoints are performed as before, except that
the checkpoint log record is now of the form
< checkpoint L>
where L is the list of transactions active at the time
of the checkpoint
– We assume no updates are in progress while the
checkpoint is carried out (will relax this later)

66
Recovery With Concurrent Transactions (Cont.)

• When the system recovers from a crash, it first does
the following:
1. Initialize undo-list and redo-list to empty
2. Scan the log backwards from the end, stopping
when the first <checkpoint L> record is found.
For each record found during the backward scan:
if the record is <Ti commit>, add Ti to redo-list
if the record is <Ti start>, then if Ti is not in
redo-list, add Ti to undo-list
3. For every Ti in L, if Ti is not in redo-list, add Ti to
undo-list
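
Steps 1–3 amount to one backward scan. A sketch, with log records as tuples and the checkpoint record carrying the active-transaction list L; the example log mirrors the "Example of Recovery" slide below:

def build_lists(log):
    """Backward scan to the most recent <checkpoint L>, as in steps 1-3."""
    redo_list, undo_list = set(), set()
    for rec in reversed(log):
        kind = rec[0]
        if kind == "commit":
            redo_list.add(rec[1])
        elif kind == "start" and rec[1] not in redo_list:
            undo_list.add(rec[1])
        elif kind == "checkpoint":
            for t in rec[1]:               # L: txns active at the checkpoint
                if t not in redo_list:
                    undo_list.add(t)
            break
    return undo_list, redo_list

log = [("start", "T0"), ("commit", "T0"),
       ("start", "T1"), ("start", "T2"),
       ("checkpoint", ["T1", "T2"]),
       ("start", "T3"), ("commit", "T3")]
print(build_lists(log))   # ({'T1', 'T2'}, {'T3'})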
67
Recovery With Concurrent Transactions (Cont.)

• At this point undo-list consists of incomplete
transactions which must be undone, and redo-list
consists of finished transactions that must be redone.
• Recovery now continues as follows:
1. Scan log backwards from the most recent record, stopping
when <Ti start> records have been encountered for every Ti
in undo-list.
 During the scan, perform undo for each log record that belongs to
a transaction in undo-list.
2. Locate the most recent <checkpoint L> record.
3. Scan log forwards from the <checkpoint L> record till the
end of the log.
 During the scan, perform redo for each log record that belongs to
a transaction on redo-list
68
Example of Recovery

• Go over the steps of the recovery algorithm on
the following log:
<T0 start>
<T0, A, 0, 10>
<T0 commit>
<T1 start> /* Scan at step 1 comes up to here */
<T1, B, 0, 10>
<T2 start>
<T2, C, 0, 10>
<T2, C, 10, 20>
<checkpoint {T1, T2}>
<T3 start>
<T3, A, 10, 20>
<T3, D, 0, 10>
<T3 commit>

69
Log Record Buffering
• Log record buffering: log records are buffered in
main memory, instead of being output
directly to stable storage.
– Log records are output to stable storage when a
block of log records in the buffer is full, or a log force
operation is executed.
• Log force is performed to commit a transaction
by forcing all its log records (including the
commit record) to stable storage.
• Several log records can thus be output using a
single output operation, reducing the I/O cost.
70
Log Record Buffering (Cont.)

• The rules below must be followed if log records
are buffered:
– Log records are output to stable storage in the order
in which they are created.
– Transaction Ti enters the commit state only when the
log record
<Ti commit> has been output to stable storage.
– Before a block of data in main memory is output to
the database, all log records pertaining to data in that
block must have been output to stable storage.
• This rule is called the write-ahead logging or WAL rule
– Strictly speaking WAL only requires undo information to be
output
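
The three rules can be enforced by a small buffer manager that forces the log before a block goes out. A sketch with purely in-memory stand-ins for the log buffer, the stable log, and the disk:

class WalBufferManager:
    """Toy buffer manager that enforces the write-ahead-logging rule."""
    def __init__(self):
        self.log_buffer = []     # (record, block_id) pairs, in creation order
        self.stable_log = []     # records already on stable storage
        self.disk = {}           # block_id -> block contents on "disk"

    def log_update(self, record, block_id):
        self.log_buffer.append((record, block_id))

    def log_force(self):
        # rule 1: output records in the order in which they were created
        self.stable_log.extend(rec for rec, _ in self.log_buffer)
        self.log_buffer.clear()

    def output_block(self, block_id, contents):
        # WAL rule: log records pertaining to this block go to stable storage first
        if any(b == block_id for _, b in self.log_buffer):
            self.log_force()
        self.disk[block_id] = contents

bm = WalBufferManager()
bm.log_update("<T1, X, 5, 9>", "B1")
bm.output_block("B1", {"X": 9})
print(bm.stable_log)   # the update record reached the log before the block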

71
Advanced Recovery: Key Features

• Support for high-concurrency locking techniques, such as
those used for B+-tree concurrency control, which release
locks early
– Supports “logical undo”
• Recovery based on “repeating history”, whereby recovery
executes exactly the same actions as normal processing
– including redo of log records of incomplete transactions,
followed by subsequent undo
– Key benefits
• supports logical undo
• easier to understand/show correctness
72
Advanced Recovery: Logical Undo Logging
• Operations like B+-tree insertions and deletions release locks
early.
– They cannot be undone by restoring old values (physical
undo), since once a lock is released, other transactions
may have updated the B+-tree.
– Instead, insertions (resp. deletions) are undone by
executing a deletion (resp. insertion) operation (known as
logical undo).
• For such operations, undo log records should contain the undo
operation to be executed
– Such logging is called logical undo logging, in contrast to
physical undo logging
• Operations are called logical operations

73
Advanced Recovery: Physical Redo

• Redo information is logged physically (that is,
new value for each write) even for operations
with logical undo
– Logical redo is very complicated since database
state on disk may not be “operation consistent”
when recovery starts
– Physical redo logging does not conflict with early
lock release

74
Advanced Recovery: Crash Recovery

The following actions are taken when recovering from a system crash:
1. (Redo phase): Scan log forward from the last < checkpoint L> record till the
end of the log
1. Repeat history by physically redoing all updates of all transactions,
2. Create an undo-list during the scan as follows
• undo-list is set to L initially
• Whenever <Ti start> is found Ti is added to undo-list
• Whenever <Ti commit> or <Ti abort> is found, Ti is deleted from
undo-list
This brings database to state as of crash, with committed as well as
uncommitted transactions having been redone.
Now undo-list contains transactions that are incomplete, that is, have
neither committed nor been fully rolled back.
75
Advanced Recovery: Crash Recovery (Cont.)
Recovery from system crash (cont.)
2. (Undo phase): Scan log backwards, performing
undo on log records of transactions found in undo-
list.
– Log records of transactions being rolled back are
processed as described earlier, as they are found
• Single shared scan for all transactions being undone
– When <Ti start> is found for a transaction Ti in undo-
list, write a <Ti abort> log record.
– Stop scan when <Ti start> records have been found
for all Ti in undo-list
• This undoes the effects of incomplete transactions
(those with neither commit nor abort log records).
Recovery is now complete.
76
Advanced Recovery: Checkpointing

• Checkpointing is done as follows:
1. Output all log records in memory to stable storage
2. Output to disk all modified buffer blocks
3. Output to log on stable storage a < checkpoint L>
record.
Transactions are not allowed to perform any
actions while checkpointing is in progress.
• Fuzzy checkpointing allows transactions to
progress while the most time consuming parts of
checkpointing are in progress
– Performed as described on next slide

77
ARIES
• ARIES is a state-of-the-art recovery method
– Incorporates numerous optimizations to reduce overheads
during normal processing and to speed up recovery
– The “advanced recovery algorithm” we studied earlier is
modeled after ARIES, but greatly simplified by removing
optimizations
• Unlike the advanced recovery algorithm, ARIES
1. Uses log sequence number (LSN) to identify log records
• Stores LSNs in pages to identify what updates have already
been applied to a database page
2. Physiological redo
3. Dirty page table to avoid unnecessary redos during recovery
4. Fuzzy checkpointing that only records information about dirty
pages, and does not require dirty pages to be written out at
checkpoint time
• More coming up on each of the above …
78
ARIES Optimizations
• Physiological redo
– Affected page is physically identified, action
within page can be logical
• Used to reduce logging overheads
– e.g. when a record is deleted and all other records have to
be moved to fill hole
» Physiological redo can log just the record deletion
» Physical redo would require logging of old and new
values for much of the page
• Requires page to be output to disk atomically
– Easy to achieve with hardware RAID, also supported by
some disk systems
– Incomplete page output can be detected by checksum
techniques,
» But extra actions are required for recovery
» Treated as a media failure
79
ARIES Data Structures

• ARIES uses several data structures:
– Log sequence number (LSN) identifies each log
record
• Must be sequentially increasing
• Typically an offset from beginning of log file to allow
fast access
– Easily extended to handle multiple log files
– Page LSN
– Log records of several different types
– Dirty page table

80
ARIES Data Structures: Page LSN

• Each page contains a PageLSN which is the LSN of the
last log record whose effects are reflected on the page
– To update a page:
• X-latch the page, and write the log record
• Update the page
• Record the LSN of the log record in PageLSN
• Unlock page
– To flush page to disk, must first S-latch page
• Thus page state on disk is operation consistent
– Required to support physiological redo
– PageLSN is used during recovery to prevent repeated redo
• Thus ensuring idempotence
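
The PageLSN check that makes redo idempotent is a single comparison. A sketch, assuming redo records are (LSN, page, item, new-value) tuples:

def redo_pass(log, pages):
    for lsn, pid, item, value in log:
        page = pages.setdefault(pid, {"page_lsn": 0})
        if page["page_lsn"] >= lsn:
            continue               # effect already on the page: skip (idempotence)
        page[item] = value         # redo the logged action
        page["page_lsn"] = lsn     # record the LSN of the log record in PageLSN

pages = {"P1": {"page_lsn": 120, "A": 5}}
redo_pass([(100, "P1", "A", 4), (130, "P1", "A", 7)], pages)
print(pages)   # LSN 100 is skipped, LSN 130 applied: page_lsn becomes 130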

81
ARIES Data Structures: Log Record
• Each log record contains LSN of previous log record of the same
transaction
LSN TransID PrevLSN RedoInfo UndoInfo
– LSN in log record may be implicit
• Special redo-only log record called compensation log record (CLR) used
to log actions taken during recovery that never need to be undone
– Serves the role of operation-abort log records used in advanced
recovery algorithm
– Has a field UndoNextLSN to note next (earlier) record to be undone
• Records in between would have already been undone
• Required to avoid repeated undo of already undone actions
LSN TransID UndoNextLSN RedoInfo
[Figure: log records 1, 2, 3, 4 of a transaction being rolled back, with compensation records 4', 3', 2', 1'; each CLR's UndoNextLSN points to the next earlier record still to be undone.]
82
ARIES Data Structures: Checkpoint Log

• Checkpoint log record
– Contains:
• DirtyPageTable and list of active transactions
• For each active transaction, LastLSN, the LSN of the last log record
written by the transaction
– Fixed position on disk notes LSN of last completed
checkpoint log record
• Dirty pages are not written out at checkpoint time
• Instead, they are flushed out continuously, in the background
• Checkpoint is thus very low overhead
– can be done frequently

83
ARIES Recovery Algorithm
ARIES recovery involves three passes
• Analysis pass: Determines
– Which transactions to undo
– Which pages were dirty (disk version not up to date) at
time of crash
– RedoLSN: LSN from which redo should start
• Redo pass:
– Repeats history, redoing all actions from RedoLSN
• RecLSN and PageLSNs are used to avoid redoing actions already
reflected on page
• Undo pass:
– Rolls back all incomplete transactions
• Transactions whose abort was complete earlier are not undone
– Key idea: no need to undo these transactions: earlier undo actions
were logged, and are redone as required
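
A skeleton of the three passes, heavily simplified: LSNs are list indexes, log records are tuples, and the undo pass only reports what it would undo instead of applying undo information and writing CLRs.

def aries_recover(log, checkpoint_lsn, dirty_pages, active_txns, pages):
    # --- Analysis pass: rebuild dirty page table, undo list, and RedoLSN
    undo_list = set(active_txns)
    for lsn in range(checkpoint_lsn, len(log)):
        kind, txn, *rest = log[lsn]
        if kind == "update":
            undo_list.add(txn)
            dirty_pages.setdefault(rest[0], lsn)    # RecLSN = first dirtying LSN
        elif kind in ("commit", "abort"):
            undo_list.discard(txn)
    redo_lsn = min(dirty_pages.values(), default=len(log))

    # --- Redo pass: repeat history from RedoLSN; PageLSN gives idempotence
    for lsn in range(redo_lsn, len(log)):
        kind, txn, *rest = log[lsn]
        if kind == "update":
            page_id, item, value = rest
            page = pages.setdefault(page_id, {"page_lsn": -1})
            if page["page_lsn"] < lsn:
                page[item] = value
                page["page_lsn"] = lsn

    # --- Undo pass: roll back incomplete transactions, newest action first
    for lsn in range(len(log) - 1, -1, -1):
        kind, txn, *rest = log[lsn]
        if kind == "update" and txn in undo_list:
            # a full implementation applies the undo info and writes a CLR here
            print(f"undo LSN {lsn} of {txn}")
    return undo_list

log = [("update", "T1", "P1", "A", 10),   # LSN 0
       ("commit", "T1"),                  # LSN 1
       ("update", "T2", "P2", "B", 20)]   # LSN 2
print(aries_recover(log, 0, {}, [], {}))  # T2 is incomplete: {'T2'}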

84
Text Books: (1) DBMS by Raghu Ramakrishnan
(2) DBMS by Sudarshan and Korth

85
