What Are Necessary Conditions Which Can Lead To A Deadlock Situation in A System?


2. What are necessary conditions which can lead to a deadlock situation in a system?
1. Mutual Exclusion:
When two people meet on a landing, they cannot simply walk through each other because there is space for only one person. This condition, which allows only one person (or process) to use the step between them (or the resource) at a time, is the first condition necessary for a deadlock to occur.
2. Hold and Wait:
When the two people refuse to retreat and hold their ground, it is called hold and wait. This is the second necessary condition for deadlock.
3. No Preemption:
To resolve the deadlock, one could simply cancel one of the processes so that the other can continue. But the operating system does not do so: it allocates a resource to a process for as long as it is needed, until the task is completed. Hence, there is no temporary reallocation of resources. This is the third condition for deadlock.
4. Circular Wait:
When the two people refuse to retreat and each waits for the other to retreat so that they can complete their task, it is called circular wait. It is the last condition for deadlock to occur.
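All four conditions must hold at the same time, so breaking any one of them prevents deadlock. As an illustrative sketch (not from the source), the circular-wait condition can be broken by always acquiring locks in one fixed global order:

```python
import threading

lock_a = threading.Lock()
lock_b = threading.Lock()

def ordered_acquire(first, second):
    # Acquiring locks in one global order (here: by object id) breaks the
    # circular-wait condition, so no cycle of waiting threads can form.
    first, second = sorted((first, second), key=id)
    with first:
        with second:
            pass  # critical work that needs both resources

# Both threads request the two locks in opposite argument order, which
# would risk deadlock without the global ordering above.
t1 = threading.Thread(target=ordered_acquire, args=(lock_a, lock_b))
t2 = threading.Thread(target=ordered_acquire, args=(lock_b, lock_a))
t1.start(); t2.start()
t1.join(); t2.join()
print("both threads finished without deadlock")
```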

3. State the main difference between logical and physical address space.
 Basic: A logical address is the virtual address generated by the CPU. A physical address is a location in a memory unit.
 Address Space: The set of all logical addresses generated by the CPU in reference to a program is referred to as the logical address space. The set of all physical addresses mapped to the corresponding logical addresses is referred to as the physical address space.
 Visibility: The user can view the logical address of a program, but can never view its physical address.
 Access: The user uses the logical address to access the physical address; the physical address cannot be accessed directly.
 Generation: The logical address is generated by the CPU; the physical address is computed by the MMU.

4. What is fragmentation? Types of fragmentation with examples.
Fragmentation is an unwanted problem in which memory blocks cannot be allocated to processes due to their small size, so the blocks remain unused. It can also be understood as follows: when processes are loaded into and removed from memory, they create free spaces or holes in memory, and these small blocks cannot be allocated to new incoming processes, resulting in inefficient use of memory. Basically, there are two types of fragmentation:

 Internal Fragmentation
 External Fragmentation

Internal Fragmentation
In this type of fragmentation, the process is allocated a memory block larger than the size of that process. Due to this, some part of the memory is left unused, and this causes internal fragmentation.

Example: Suppose fixed partitioning (i.e. the memory blocks are of fixed sizes) is used for memory allocation in RAM. These sizes are 2MB, 4MB, 4MB, and 8MB. Some part of this RAM is occupied by the operating system (OS).

Now, suppose a process P1 of size 3MB arrives and gets a memory block of size 4MB. The 1MB that is free in this block is wasted, and this space cannot be utilized for allocating memory to another process. This is called internal fragmentation.
How to remove internal fragmentation?
This problem occurs because the sizes of the memory blocks are fixed. It can be removed by using dynamic partitioning for allocating space to processes. In dynamic partitioning, a process is allocated only as much space as it requires, so there is no internal fragmentation.
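The arithmetic above can be sketched with a tiny allocator (first-fit placement is an assumption; the block sizes match the example):

```python
# Fixed partitions from the example above; a process is placed in the
# first free block large enough to hold it (first-fit, an assumption).
def internal_fragmentation(blocks, process_size):
    for i, block in enumerate(blocks):
        if block >= process_size:
            return i, block - process_size  # chosen block index, wasted MB
    return None, 0

blocks = [2, 4, 4, 8]                            # partition sizes in MB
idx, wasted = internal_fragmentation(blocks, 3)  # process P1 of 3 MB
print(idx, wasted)  # the 4 MB block (index 1) is used and 1 MB is wasted
```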

External Fragmentation
In this type of fragmentation, although the total space available is enough for a process, we are still unable to put that process in memory because the space is not contiguous. This is called external fragmentation.

Example: Suppose, continuing the above example, three new processes P2, P3, and P4 arrive with sizes 2MB, 3MB, and 6MB respectively. They are allocated memory blocks of sizes 2MB, 4MB, and 8MB respectively.

If we analyze this situation closely, processes P3 (unused 1MB) and P4 (unused 2MB) are again causing internal fragmentation. So, a total of 4MB (1MB due to process P1 + 1MB due to process P3 + 2MB due to process P4) is unused due to internal fragmentation.

Now, suppose a new process of 4MB arrives. Though we have a total of 4MB free, we still cannot allocate this memory to the process. This is called external fragmentation.

How to remove external fragmentation?


This problem occurs because memory is allocated to processes contiguously. If we remove this condition, external fragmentation can be reduced. This is what is done in paging and segmentation (non-contiguous memory allocation techniques), where memory is allocated to processes non-contiguously.

Another way to remove external fragmentation is compaction. When dynamic partitioning is used for memory allocation, external fragmentation can be reduced by merging all the free memory together into one large block. This technique is also called defragmentation. This larger block of memory is then used for allocating space according to the needs of new processes.
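A minimal sketch of external fragmentation and compaction (the hole sizes mirror the 4MB example above):

```python
# Free holes left in memory after processes come and go (sizes in MB).
holes = [1, 1, 2]   # 4 MB free in total, but no single 4 MB hole

def can_allocate(holes, size):
    # A request succeeds only if some single contiguous hole is big enough.
    return any(h >= size for h in holes)

def compact(holes):
    # Compaction (defragmentation) merges all free memory into one block.
    return [sum(holes)]

print(can_allocate(holes, 4))           # False: external fragmentation
print(can_allocate(compact(holes), 4))  # True after compaction
```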

5. Give an example of a Process State.

6. What is multitasking? Difference between Multiprogramming and Multitasking.

 Multiprogramming: only the concept of context switching is used. Multitasking: the concepts of context switching and time sharing are used.
 Multiprogramming: in a multiprogrammed system, the operating system simply switches to, and executes, another job when the current job needs to wait. Multitasking: the processor is typically used in time-sharing mode; switching happens either when the allotted time expires or when there is another reason for the current process to wait (for example, the process needs to do I/O).
 Multiprogramming increases CPU utilization by organizing jobs. Multitasking also increases CPU utilization, and it increases responsiveness as well.
 Multiprogramming: the idea is to reduce CPU idle time as much as possible. Multitasking: the idea is to further extend CPU utilization by increasing responsiveness through time sharing.

7. What is context switching? Disadvantages of context switching.
Context switching is a process that involves switching the CPU from one process or task to another. In this phenomenon, the execution of the process in the running state is suspended by the kernel, and another process in the ready state is executed by the CPU.

The disadvantage of context switching is that it takes time: the context switching time. Time is required to save the context of the process in the running state and then load the context of the process that is about to enter the running state. During that time, no useful work is done by the CPU from the user's perspective, so context switching is pure overhead.
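As a rough illustration of this overhead (a sketch, not a precise benchmark), two threads can be forced to hand control back and forth; all of the measured time is switching, not useful work:

```python
import threading, time

N = 10_000
ping, pong = threading.Event(), threading.Event()

def responder():
    for _ in range(N):
        ping.wait(); ping.clear()   # wait until the main thread signals
        pong.set()                  # hand control straight back

t = threading.Thread(target=responder)
t.start()
start = time.perf_counter()
for _ in range(N):
    ping.set()
    pong.wait(); pong.clear()
elapsed = time.perf_counter() - start
t.join()
# Each round trip forces switches between the two threads; none of this
# time is useful computation, which is why it counts as pure overhead.
print(f"avg round trip: {elapsed / N * 1e6:.1f} microseconds")
```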

8. What is a critical section? The basic requirements of a critical section.
The critical section is a code segment where the shared variables can be accessed. An atomic action is
required in a critical section i.e. only one process can execute in its critical section at a time. All the other
processes have to wait to execute in their critical sections.

The critical section is surrounded by an entry section and an exit section. The entry section handles entry into the critical section: it acquires the resources needed for execution by the process. The exit section handles the exit from the critical section: it releases the resources and informs the other processes that the critical section is free.
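A minimal sketch of these requirements, using a mutual-exclusion lock as the entry/exit sections (the counter and thread count are illustrative):

```python
import threading

counter = 0
lock = threading.Lock()

def worker():
    global counter
    for _ in range(100_000):
        with lock:           # entry section: acquire the lock
            counter += 1     # critical section: shared variable access
        # exit section: the lock is released on leaving the `with` block,
        # letting exactly one waiting process enter next.

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print(counter)   # mutual exclusion keeps the count correct: 400000
```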

9. What is Bounded-buffer problem?
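The source leaves this question unanswered. The bounded-buffer (producer-consumer) problem is the classic synchronization problem in which a producer and a consumer share a fixed-size buffer: the producer must wait when the buffer is full, and the consumer must wait when it is empty. A minimal sketch using a blocking queue (sizes and item counts are illustrative):

```python
import queue, threading

buffer = queue.Queue(maxsize=3)   # bounded buffer of capacity 3
produced, consumed = [], []

def producer():
    for item in range(10):
        buffer.put(item)          # blocks while the buffer is full
        produced.append(item)

def consumer():
    for _ in range(10):
        consumed.append(buffer.get())  # blocks while the buffer is empty

threads = [threading.Thread(target=producer), threading.Thread(target=consumer)]
for t in threads: t.start()
for t in threads: t.join()
print(consumed)  # all ten items arrive in order, despite the tiny buffer
```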

10.What is dispatcher?
A dispatcher is a special program which comes into play after the scheduler. When the scheduler
completes its job of selecting a process, it is the dispatcher which takes that process to the desired
state/queue. The dispatcher is the module that gives a process control over the CPU after it has been
selected by the short-term scheduler. This function involves the following:
 Switching context
 Switching to user mode
 Jumping to the proper location in the user program to restart that program
11. What is turnaround, waiting, and response time?
 Burst time is the total time taken by the process for its execution on the CPU.
 Arrival time is the time when a process enters into the ready state and is ready for its
execution.
 Waiting time is the total time spent by the process in the ready state waiting for
CPU.
Waiting time = Turnaround time - Burst time
 Response time is the time spent when the process is in the ready state and gets the
CPU for the first time.
Response time = Time at which the process gets the CPU for the first
time - Arrival time

 Turnaround time is the total amount of time spent by the process from
coming in the ready state for the first time to its completion.

Turnaround time = Burst time + Waiting time

or

Turnaround time = Exit time - Arrival time
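The formulas above can be checked with a small worked example (the process set and FCFS order are assumptions for illustration):

```python
# FCFS order sketch (an assumption; the formulas below hold for any
# non-preemptive schedule). Each process: (name, arrival_time, burst_time).
processes = [("P1", 0, 5), ("P2", 1, 3), ("P3", 2, 8)]

results = {}
clock = 0
for name, arrival, burst in processes:
    start = max(clock, arrival)          # the process first gets the CPU here
    exit_time = start + burst
    turnaround = exit_time - arrival     # Turnaround = Exit time - Arrival time
    waiting = turnaround - burst         # Waiting = Turnaround - Burst
    response = start - arrival           # Response = first CPU grant - Arrival
    results[name] = (turnaround, waiting, response)
    clock = exit_time

print(results)  # without preemption, response time equals waiting time
```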

12. What is starvation? Solution for starvation.


Starvation, or indefinite blocking, is a phenomenon associated with priority scheduling algorithms, in which a process that is ready to run can wait indefinitely because of its low priority. In a heavily loaded computer system, a steady stream of higher-priority processes can prevent a low-priority process from ever getting the CPU.

Solution to Starvation : Aging


Aging is a technique of gradually increasing the priority of processes that wait in the system for a long time.
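A toy sketch of aging (the priority values, aging step, and "lower number = higher priority" convention are all assumptions): a steady stream of high-priority arrivals would starve the low-priority process, but aging eventually lets it run:

```python
def schedule_tick(ready):
    # Run the highest-priority process (lowest number), then age the rest.
    ready.sort(key=lambda p: p[1])
    name, _ = ready.pop(0)
    for i, (n, prio) in enumerate(ready):
        ready[i] = (n, prio - 1)     # aging: waiting raises priority
    return name

ready = [("low", 10)]
history = []
for tick in range(12):
    ready.append((f"hi{tick}", 2))   # a steady stream of high-priority work
    history.append(schedule_tick(ready))

print("low" in history)  # True: aging prevents the starvation of "low"
```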

13. What is Thrashing?


Thrashing is a condition or situation in which the system spends a major portion of its time servicing page faults, while the actual processing done is negligible.

14. Difference between paging and segmentation.

1. Memory Size: In paging, a process address space is broken into fixed-sized blocks called pages. In segmentation, a process address space is broken into varying-sized blocks called segments.
2. Accountability: The operating system divides memory into pages. The compiler is responsible for calculating the segment size, the virtual address, and the actual address.
3. Size: Page size is determined by the available memory. Segment size is determined by the user.
4. Speed: Paging is faster in terms of memory access. Segmentation is slower than paging.
5. Fragmentation: Paging can cause internal fragmentation, as some pages may go underutilized. Segmentation can cause external fragmentation, as some memory blocks may not be used at all.
6. Logical Address: During paging, a logical address is divided into a page number and a page offset. During segmentation, a logical address is divided into a segment number and a segment offset.
7. Data Storage: The page table stores the page data. The segment table stores the segment data.

15. What are the methods for handling deadlock?
Deadlock Detection
Deadlock can be detected by the resource scheduler, as it keeps track of all the resources that are allocated to different processes. After a deadlock is detected, it can be handled using the following methods −

 All the processes that are involved in the deadlock are terminated. This approach is not that useful
as all the progress made by the processes is destroyed.
 Resources can be preempted from some processes and given to others until the deadlock
situation is resolved.
Deadlock Prevention
It is important to prevent a deadlock before it can occur. So, the system checks each transaction before it is
executed to make sure it does not lead to deadlock. If there is even a slight possibility that a transaction
may lead to deadlock, it is never allowed to execute.

Some deadlock prevention schemes that use timestamps in order to make sure that a deadlock does not
occur are given as follows −

 Wait-Die Scheme: In the wait-die scheme, if a transaction T1 requests a resource that is held by transaction T2, one of the following two scenarios may occur −
o TS(T1) < TS(T2) − If T1 is older than T2, i.e. T1 came into the system earlier than T2, then it is allowed to wait for the resource, which will be free when T2 has completed its execution.
o TS(T1) > TS(T2) − If T1 is younger than T2, i.e. T1 came into the system after T2, then T1 is killed. It is restarted later with the same timestamp.
 Wound-Wait Scheme: In the wound-wait scheme, if a transaction T1 requests a resource that is held by transaction T2, one of the following two possibilities may occur −
o TS(T1) < TS(T2) − If T1 is older than T2, then it is allowed to wound T2, i.e. roll T2 back. T1 then takes the resource and completes its execution. T2 is later restarted with the same timestamp.
o TS(T1) > TS(T2) − If T1 is younger than T2, then it is allowed to wait for the resource, which will be free when T2 has completed its execution.
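The two decision rules can be condensed into a pair of small functions (the names are illustrative; a smaller timestamp means an older transaction):

```python
# Timestamp-based decision sketches; smaller timestamp = older transaction.
def wait_die(ts_requester, ts_holder):
    # Older requester waits; younger requester dies (is rolled back).
    return "wait" if ts_requester < ts_holder else "die"

def wound_wait(ts_requester, ts_holder):
    # Older requester wounds (preempts) the holder; younger one waits.
    return "wound" if ts_requester < ts_holder else "wait"

print(wait_die(1, 5), wait_die(5, 1))      # wait die
print(wound_wait(1, 5), wound_wait(5, 1))  # wound wait
```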
Deadlock Avoidance
It is better to avoid a deadlock rather than take measures after it has occurred. The wait-for graph can be used for deadlock avoidance. However, this is only useful for smaller databases, as it can get quite complex in larger databases.

Wait-for graph

The wait-for graph shows the relationship between resources and transactions. If a transaction requests a resource, or if it already holds a resource, this is visible as an edge in the wait-for graph. If the wait-for graph contains a cycle, then there may be a deadlock in the system; otherwise not.
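Detecting a cycle in the wait-for graph can be sketched with a depth-first search (the adjacency-list representation is an assumption):

```python
# Wait-for graph as an adjacency list: an edge T1 -> T2 means transaction
# T1 waits for a resource held by T2. A cycle implies a possible deadlock.
def has_cycle(graph):
    visited, in_stack = set(), set()

    def dfs(node):
        visited.add(node)
        in_stack.add(node)
        for nxt in graph.get(node, []):
            if nxt in in_stack:            # back edge: cycle found
                return True
            if nxt not in visited and dfs(nxt):
                return True
        in_stack.discard(node)
        return False

    return any(dfs(n) for n in graph if n not in visited)

print(has_cycle({"T1": ["T2"], "T2": ["T3"], "T3": ["T1"]}))  # True
print(has_cycle({"T1": ["T2"], "T2": ["T3"]}))                # False
```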

Ostrich Algorithm
The ostrich algorithm means that the deadlock is simply ignored and it is assumed that it will never occur.
This is done because in some systems the cost of handling the deadlock is much higher than simply
ignoring it as it occurs very rarely. So, it is simply assumed that the deadlock will never occur and the
system is rebooted if it occurs by any chance.

16. What is Belady's anomaly?


Generally, on increasing the number of frames allocated to a process's virtual memory, its execution becomes faster, as fewer page faults occur. Sometimes the reverse happens, i.e. more page faults occur when more frames are allocated to a process. This unexpected result is termed Belady's anomaly.
