Gate Os 2 2017
http://quiz.geeksforgeeks.org/gate-cs-notes/
http://quiz.geeksforgeeks.org/cpu-scheduling/
Most operating systems (for example Windows and Linux) use segmentation with paging: a process is divided into segments, and individual segments have pages.
In Partition Allocation, when more than one partition is freely available to accommodate a process's request, a particular partition must be selected. To choose one, a partition allocation method is needed. A partition allocation method is considered better if it avoids internal fragmentation.
Below are the various partition allocation schemes:
1. First Fit: allocate the first partition from the top of main memory that is large enough.
2. Best Fit: allocate the smallest partition that is large enough among the free partitions.
3. Worst Fit: allocate the largest partition that is large enough among the freely available partitions in main memory.
4. Next Fit: similar to first fit, but the search for the first sufficient partition starts from the last allocation point.
Exercise: Consider requests from processes in the given order 300K, 25K, 125K and 50K. Let there be two blocks of memory available, of size 150K followed by a block of size 350K.
Which of the following partition allocation schemes can satisfy the above requests?
A) Best fit but not first fit.
B) First fit but not best fit.
C) Both first fit and best fit.
D) Neither first fit nor best fit.
Solution: Let us try the two policies.
Best Fit:
300K is allocated from the block of size 350K. 50K is left in the block.
25K is allocated from the remaining 50K block. 25K is left in the block.
125K is allocated from the 150K block. 25K is left in this block as well.
50K can't be allocated even though 25K + 25K space is available, because the two pieces are not contiguous.
First Fit:
The 300K request is allocated from the 350K block; 50K is left over.
25K is allocated from the 150K block; 125K is left over.
Then 125K and 50K are allocated to the remaining left-over partitions.
So first fit can handle all the requests.
So option B is the correct choice.
Operating System | Page Replacement Algorithms
In an operating system that uses paging for memory management, a page replacement algorithm is needed to decide which page should be replaced when a new page comes in. Whenever a new page is referenced and is not present in memory, a page fault occurs.
Page Fault
A page fault is a type of interrupt, raised by the hardware when a running program accesses a memory
page that is mapped into the virtual address space, but not loaded in physical memory.
Page Replacement Algorithms
First In First Out
This is the simplest page replacement algorithm. In this algorithm, the operating system keeps track of all pages in memory in a queue, with the oldest page at the front. When a page needs to be replaced, the page at the front of the queue is selected for removal.
For example, consider the page reference string 1, 3, 0, 3, 5, 6 and 3 page slots.
Initially all slots are empty, so when 1, 3, 0 come they are allocated to the empty slots -> 3 page faults.
When 3 comes, it is already in memory, so -> 0 page faults.
Then 5 comes; it is not in memory, so it replaces the oldest page, i.e. 1 -> 1 page fault.
Finally 6 comes; it is also not in memory, so it replaces the oldest page, i.e. 3 -> 1 page fault.
Bélády's anomaly
Bélády's anomaly shows that it is possible to have more page faults when increasing the number of page frames while using the First In First Out (FIFO) page replacement algorithm. For example, if we consider the reference string 3, 2, 1, 0, 3, 2, 4, 3, 2, 1, 0, 4 and 3 slots, we get 9 total page faults, but if we increase the slots to 4, we get 10 page faults.
Optimal Page Replacement
In this algorithm, the page replaced is the one that will not be used for the longest duration of time in the future.
Let us consider the page reference string 7 0 1 2 0 3 0 4 2 3 0 3 2 and 4 page slots.
Initially all slots are empty, so when 7, 0, 1, 2 come they are allocated to the empty slots -> 4 page faults.
0 is already there, so -> 0 page faults.
When 3 comes, it takes the place of 7 because 7 is not used for the longest duration of time in the future -> 1 page fault.
0 is already there, so -> 0 page faults.
4 takes the place of 1 -> 1 page fault.
For the rest of the reference string -> 0 page faults, because those pages are already in memory. That makes 4 + 1 + 1 = 6 page faults in total.
Optimal page replacement is perfect, but not possible in practice, as the operating system cannot know future requests. The use of optimal page replacement is to set up a benchmark against which other replacement algorithms can be analyzed.
Explanation: Bélády's anomaly shows that it is possible to have more page faults when increasing the number of page frames while using the First In First Out (FIFO) page replacement algorithm. See the example given on the Wikipedia page.
In computer storage, Bélády's anomaly is the phenomenon in which increasing the number of page frames results in an increase in the number of page faults for certain memory access patterns. This phenomenon is commonly experienced when using the First In First Out (FIFO) page replacement algorithm. László Bélády demonstrated this in 1969.[1]
In common computer memory management, information is loaded in specific sized chunks. Each chunk
is referred to as a page. Main memory can only hold a limited number of pages at a time. It requires a
frame for each page it can load. A page fault occurs when a page is not found, and might need to be
loaded from disk into memory.
When a page fault occurs and all frames are in use, one must be cleared to make room for the new page.
A simple algorithm is FIFO: whichever page has been in the frames the longest is the one that is cleared.
Until Bélády's anomaly was demonstrated, it was believed that an increase in the number of page frames would always result in the same number or fewer page faults.
2. Page fault occurs when
(A) When a requested page is in memory
(B) When a requested page is mapped in virtual address space but not present in memory
(C) When a page is corrupted
Answer: (B)
Explanation: Page fault occurs when a requested page is mapped in virtual address space but not present
in memory.
3. Assume that there are 3 page frames which are initially empty. If the page reference string is 1, 2, 3, 4,
2, 1, 5, 3, 2, 4, 6, the number of page faults using the optimal replacement policy is__________.
(A) 5
(B) 6
(C) 7
(D) 8
Answer: (C)
Explanation: In the optimal page replacement policy, we replace the page which is not used for the longest duration in the future.
Given three page frames.
Reference string is 1, 2, 3, 4, 2, 1, 5, 3, 2, 4, 6
Initially, there are three page faults and the entries are 1 2 3
Page 4 causes a page fault and replaces 3 (3 is the most distant in the future); entries become 1 2 4
Total page faults = 3 + 1 = 4
Pages 2 and 1 don't cause any fault.
5 causes a page fault and replaces 1 (1 is not used again); entries become 5 2 4
Total page faults = 4 + 1 = 5
3 causes a page fault and replaces 5 (5 is not used again); entries become 3 2 4
Total page faults = 5 + 1 = 6
Pages 2 and 4 don't cause any fault.
6 causes a page fault.
Total page faults = 6 + 1 = 7
4. On a demand-paged virtual memory system running on a computer with a main memory of 3 page frames that are initially empty, let LRU, FIFO and OPTIMAL denote the number of page faults under the corresponding page replacement policies for a given reference string. Then
(A) OPTIMAL < LRU < FIFO
(B) OPTIMAL < FIFO < LRU
(C) OPTIMAL = LRU
(D) OPTIMAL = FIFO
Answer: (B)
The OPTIMAL will be 5, FIFO 6 and LRU 9.
(http://www.geeksforgeeks.org/operating-systems-set-5/)
5. A virtual memory system uses First In First Out (FIFO) page replacement policy and allocates a fixed
number of frames to a process. Consider the following statements:
P: Increasing the number of page frames allocated to a process sometimes increases the page fault rate.
Q: Some programs do not exhibit locality of reference.
Which one of the following is TRUE?
(A) Both P and Q are true, and Q is the reason for P
(B) Both P and Q are true, but Q is not the reason for P.
(C) P is false, but Q is true
(D) Both P and Q are false
Answer: (B)
P is true. Increasing the number of page frames allocated to a process may increase the number of page faults (see Bélády's anomaly).
Q is also true, but Q is not the reason for P, as Bélády's anomaly occurs only for some specific patterns of page references.
6. A process has been allocated 3 page frames. Assume that none of the pages of the process are available
in the memory initially. The process makes the following sequence of page references (reference
string): 1, 2, 1, 3, 7, 4, 5, 6, 3, 1
If optimal page replacement policy is used, how many page faults occur for the above reference string?
(A) 7
(B) 8
(C) 9
(D) 10
Answer: (A)
Explanation: The optimal replacement policy looks forward in time to see which frame to replace on a page fault.

1, 2, 3       -> 3 page faults (frames: 1 2 3)
7 replaces 2  -> 1 page fault  (frames: 1 7 3)
4 replaces 7  -> 1 page fault  (frames: 1 4 3)
5 replaces 4  -> 1 page fault  (frames: 1 5 3)
6 replaces 5  -> 1 page fault  (frames: 1 6 3)
3 and 1 are already in memory -> 0 page faults
Total = 7
So the answer is (A).
http://quiz.geeksforgeeks.org/operating-systems-memory-management-question-1/
http://quiz.geeksforgeeks.org/operating-systems-memory-management-question-7/
http://quiz.geeksforgeeks.org/gate-gate-cs-2014-set-1-question-43/
http://quiz.geeksforgeeks.org/gate-gate-cs-2012-question-42/
http://quiz.geeksforgeeks.org/gate-gate-cs-2007-question-56/
http://quiz.geeksforgeeks.org/gate-gate-cs-2007-question-82/
http://quiz.geeksforgeeks.org/gate-gate-cs-2007-question-83/
http://quiz.geeksforgeeks.org/gate-gate-cs-2014-set-3-question-30/
http://quiz.geeksforgeeks.org/gate-gate-cs-2002-question-23/
http://quiz.geeksforgeeks.org/gate-gate-cs-2001-question-21/
http://quiz.geeksforgeeks.org/gate-gate-cs-2010-question-24/
The basic unit of CPU utilization is a thread. A thread consists of a program counter (PC), a register set, and a stack space. Threads are not independent of one another the way processes are; as a result, threads share with the other threads of their task their code section, data section, and OS resources such as open files and signals.
Processes Vs Threads
As mentioned earlier, in many respects threads operate in the same way as processes.
Some of the similarities and differences are:
Similarities
Like processes, threads share the CPU, and only one thread is active (running) at a time.
Like processes, threads within a process execute sequentially.
Like processes, threads can create children.
And like processes, if one thread is blocked, another thread can run.
Differences
Unlike processes, threads are not independent of one another.
Unlike processes, all threads can access every address in the task.
Unlike processes, threads are designed to assist one another. (Processes might or might not assist one another, because processes may originate from different users.)
Why Threads?
Following are some reasons why threads are used in designing operating systems:
1. A process with multiple threads makes a great server, for example a printer server.
2. Because threads can share common data, they do not need to use interprocess communication.
3. By their very nature, threads can take advantage of multiprocessors.
Threads are cheap in the sense that:
1. They only need a stack and storage for registers; therefore, threads are cheap to create.
2. Threads use very few operating system resources. That is, threads do not need a new address space, global data, program code or operating system resources.
3. Context switching is fast when working with threads, because only the PC, SP and registers have to be saved and/or restored.
But this cheapness does not come for free: the biggest drawback is that there is no protection between threads.
User-Level Threads
User-level threads are implemented in user-level libraries, rather than via system calls, so thread switching does not need to call the operating system or cause an interrupt to the kernel. In fact, the kernel knows nothing about user-level threads and manages them as if they were single-threaded processes.
Advantages:
The most obvious advantage of this technique is that a user-level threads package can be
implemented on an Operating System that does not support threads. Some other advantages are
User-level threads do not require modification to the operating system.
Simple Representation:
Each thread is represented simply by a PC, registers, stack and a small control block, all stored
in the user process address space.
Simple Management:
This simply means that creating a thread, switching between threads and synchronization
between threads can all be done without intervention of the kernel.
Fast and Efficient:
Thread switching is not much more expensive than a procedure call.
Disadvantages:
There is a lack of coordination between threads and the operating system kernel. Therefore, the process as a whole gets one time slice, irrespective of whether it has one thread or 1000 threads within it. It is up to each thread to relinquish control to the other threads.
User-level threads require non-blocking system calls, i.e. a multithreaded kernel. Otherwise, the entire process will be blocked in the kernel, even if there are runnable threads left in the process. For example, if one thread causes a page fault, the whole process blocks.
Kernel-Level Threads
In this method, the kernel knows about and manages the threads. No runtime system is needed in this case. Instead of a thread table in each process, the kernel has a thread table that keeps track of all threads in the system. In addition, the kernel also maintains the traditional process table to keep track of processes. The operating system kernel provides system calls to create and manage threads.
<DIAGRAM>
Advantages:
Because the kernel has full knowledge of all threads, the scheduler may decide to give more time to a process having a large number of threads than to a process having a small number of threads.
Kernel-level threads are especially good for applications that frequently block.
Disadvantages:
Kernel-level threads are slow and inefficient. For instance, thread operations are hundreds of times slower than user-level thread operations.
Since the kernel must manage and schedule threads as well as processes, it requires a full thread control block (TCB) for each thread to maintain information about it. As a result there is significant overhead and increased kernel complexity.
Advantages of Threads over Multiple Processes
Context Switching: threads are very inexpensive to create and destroy, and they are inexpensive to represent. For example, they require space to store the PC, the SP, and the general-purpose registers, but they do not require space for memory-management information, information about open files or I/O devices in use, etc. With so little context, it is much faster to switch between threads; in other words, a context switch using threads is comparatively cheap.
Sharing: threads allow the sharing of many resources that cannot be shared between processes, for example the code section, the data section, and operating system resources such as open files.
Disadvantages of Threads over Multiprocesses
Blocking: the major disadvantage is that if the kernel is single-threaded, a system call by one thread will block the whole process, and the CPU may be idle during the blocking period.
Security: since there is extensive sharing among threads, there is a potential security problem: it is quite possible that one thread overwrites the stack of another thread (or damages shared data), although this is very unlikely since threads are meant to cooperate on a single task.
Applications that Benefit from Threads
A proxy server satisfying the requests of a number of computers on a LAN would benefit from a multi-threaded process. In general, any program that has to do more than one task at a time can benefit from multithreading. For example, a program that reads input, processes it, and writes output could have three threads, one for each task.
Applications that Cannot Benefit from Threads
Any sequential process that cannot be divided into parallel tasks will not benefit from threads, as each would block until the previous one completes. For example, a program that displays the time of day would not benefit from multiple threads.
Resources used in Thread Creation and Process Creation
When a new thread is created it shares its code section, data section and operating system resources
like open files with other threads. But it is allocated its own stack, register set and a program counter.
The creation of a new process differs from that of a thread mainly in the fact that all the shared
resources of a thread are needed explicitly for each process. So though two processes may be running
the same piece of code they need to have their own copy of the code in the main memory to be able to
run. Two processes also do not share other resources with each other. This makes the creation of a
new process very costly compared to that of a new thread.
Context Switch
To give each process on a multiprogrammed machine a fair share of the CPU, a hardware clock
generates interrupts periodically. This allows the operating system to schedule all processes in main
memory (using scheduling algorithm) to run on the CPU at equal intervals. Each time a clock
interrupt occurs, the interrupt handler checks how much time the current running process has used. If
it has used up its entire time slice, then the CPU scheduling algorithm (in kernel) picks a different
process to run. Each switch of the CPU from one process to another is called a context switch.
Major Steps of Context Switching
The values of the CPU registers are saved in the process table of the process that was running just
before the clock interrupt occurred.
The registers are loaded from the process picked by the CPU scheduler to run next.
In a multiprogrammed uniprocessor computing system, context switches occur frequently enough that
all processes appear to be running concurrently. If a process has more than one thread, the Operating
System can use the context switching technique to schedule the threads so they appear to execute in
parallel. This is the case if threads are implemented at the kernel level. Threads can also be
implemented entirely at the user level in run-time libraries. Since in this case no thread scheduling is
provided by the Operating System, it is the responsibility of the programmer to yield the CPU
frequently enough in each thread so all threads in the process can make progress.
Action of Kernel to Context Switch Among Threads
Threads share many resources with the peer threads belonging to the same process, so a context switch among threads of the same process is easy. It involves switching the register set, the program counter and the stack. It is relatively easy for the kernel to accomplish this task.
Action of kernel to Context Switch Among Processes
Context switches among processes are expensive. Before a process can be switched its process
control block (PCB) must be saved by the operating system. The PCB consists of the following
information:
The process state.
The program counter, PC.
The values of the different registers.
The CPU scheduling information for the process.
Memory management information regarding the process.
Possible accounting information for this process.
I/O status information of the process.
Once the PCB of the currently executing process has been saved, the operating system loads the PCB of the next process that is to run on the CPU. This is a heavy task and it takes a lot of time.
fork() in C
The fork() system call is used to create a new process. The newly created process becomes the child of the caller process. fork() takes no parameters and returns an integer value. Below are the different values returned by fork():
Negative value: creation of a child process was unsuccessful.
Zero: returned to the newly created child process.
Positive value: returned to the parent or caller; the value is the process ID of the newly created child process.
Examples:
1) Output of below program.
#include <stdio.h>
#include <sys/types.h>
#include <unistd.h>

int main()
{
    pid_t pid = fork();

    if (pid == 0)
        printf("Child process created\n");
    else
        printf("Parent process created\n");

    return 0;
}
Output:
Parent process created
Child process created
In the above code, a child process is created; fork() returns 0 in the child process and a positive integer in the parent process. (The relative order of the two output lines may vary, since the parent and the child run concurrently.)
The number of times hello is printed is equal to the number of processes created. Total number of processes = 2^n, where n is the number of fork() system calls. So here n = 3, and 2^3 = 8.
Let us put some label names for the three lines:
fork (); // Line 1
fork (); // Line 2
fork (); // Line 3
              L1
            /    \
          L2      L2
         /  \    /  \
       L3   L3  L3   L3    // There will be 4 child processes created by line 3
So there are a total of eight processes (seven new child processes and one original process).
Please note that the above programs don't compile in a Windows environment.
fork() vs exec()
The fork system call creates a new process. The new process created by fork() is a copy of the current process except for the returned value. The exec system call replaces the current process with a new program.
Exercise:
1) A process executes the following code
for (i = 0; i < n; i++)
fork();
The total number of child processes created is: (GATE CS 2008)
(A) n
(B) 2^n - 1
(C) 2^n
(D) 2^(n+1) - 1
See this for solution.
Answer: (B)
Explanation:
Each call to fork() doubles the number of processes: every existing process creates one child. After the i-th iteration of the loop there are 2^i processes, so after n iterations there are 2^n processes in total. Summing the new children created over all levels of the process tree for i = 0 to n-1, we get 2^n - 1. So there will be 2^n - 1 child processes.
fork() and Binary Tree
Given a program on fork() system call.
#include <stdio.h>
#include <unistd.h>
int main()
{
fork();
fork() && fork() || fork();
fork();
printf("forked\n");
return 0;
}
How many processes will be spawned after executing the above program?
A fork() system call spawns processes as leaves of a growing binary tree. If we call fork() twice, it will spawn 2^2 = 4 processes. All these 4 processes form the leaf children of the binary tree. In general, if we are at level l and fork() is called unconditionally, we will have 2^l processes at level (l+1). This equals the maximum number of child nodes in a binary tree at level (l+1).
As another example, assume that we have invoked fork() 3 times unconditionally. We can represent the spawned processes using a full binary tree with 3 levels. At level 3, we will have 2^3 = 8 nodes, which corresponds to the number of processes running.
A note on C/C++ logical operators:
The logical operator && has higher precedence than ||, and both have left-to-right associativity. After the left operand is evaluated, the final result may already be determined; whether the right operand is evaluated at all depends on the outcome of the left operand as well as the type of operation.
In case of AND (&&), after evaluation of left operand, right operand will be evaluated only if left
operand evaluates to non-zero. In case of OR (||), after evaluation of left operand, right operand will be
evaluated only if left operand evaluates to zero.
Return value of fork():
The man pages of fork() cites the following excerpt on return value,
On success, the PID of the child process is returned in the parent, and 0 is returned in the child. On
failure, -1 is returned in the parent, no child process is created, and errno is set appropriately.
A PID is like a handle of a process and is represented as an unsigned int. We can conclude that fork() returns non-zero in the parent and zero in the child. Let us analyse the program. For easy notation, label each fork() as shown below,
#include <stdio.h>
#include <unistd.h>
int main()
{
fork(); /* A */
( fork() /* B */ &&
fork() /* C */ ) || /* B and C are grouped according to precedence */
fork(); /* D */
fork(); /* E */
printf("forked\n");
return 0;
}
The following diagram provides a pictorial representation of forking new processes. All newly created processes are propagated on the right side of the tree, and parents are propagated on the left side of the tree, in consecutive levels.
At level 3, we have m, C1, C2, C3 as running processes and C4, C5 as children. The expression is now simplified to ((B && C) || D), and at this point the value of (B && C) is obvious: in the parents it is non-zero and in the children it is zero. Hence the parents, already knowing the outcome of the overall B && C || D, will skip execution of fork() D. Since (B && C) evaluated to zero in the children, they will execute fork() D.
We should note that children C2 and C3, created at level 2, will also run fork() D as mentioned above.
At level 4, we will have m, C1, C2, C3, C4, C5 as running processes and C6, C7, C8 and C9 as child processes. All these processes unconditionally execute fork() E, and each spawns one child.
At level 5, we will have 20 processes running. The program (on Ubuntu Maverick, GCC 4.4.5) printed "forked" 20 times: once by the root parent (main) and the rest by its descendants. Overall, 19 processes are spawned.
A note on order of evaluation:
The evaluation order of expressions in binary operators is unspecified. For details read the
post Evaluation order of operands. However, the logical operators are an exception. They are guaranteed
to evaluate from left to right.
2) Consider the following code fragment:
if (fork() == 0)
{
a = a + 5;
printf("%d,%d\n", a, &a);
}
else
{
a = a - 5;
printf("%d, %d\n", a, &a);
}
Let u, v be the values printed by the parent process, and x, y be the values printed by the
child process. Which one of the following is TRUE? (GATE-CS-2005)
(A) u = x + 10 and v = y
(B) u = x + 10 and v != y
(C) u + 10 = x and v = y
(D) u + 10 = x and v != y
See this for solution.
Answer (C)
fork() returns 0 in the child process and the process ID of the child in the parent process. In the child (x), a = a + 5; in the parent (u), a = a - 5. Therefore x = u + 10. The physical addresses of a in the parent and the child must be different, but our program prints virtual addresses (assuming we are running on an OS that uses virtual memory). The child process gets an exact copy of the parent process, and the virtual address of a doesn't change in the child process. Therefore, we get the same addresses in both parent and child.
3) Predict output of below program.
#include <stdio.h>
#include <unistd.h>
int main()
{
fork();
fork() && fork() || fork();
fork();
printf("forked\n");
return 0;
}
See this for solution
References:
http://www.csl.mtu.edu/cs4411.ck/www/NOTES/process/fork/create.html
Logical Address or Virtual Address (represented in bits): An address generated by the CPU
Logical Address Space or Virtual Address Space( represented in words or bytes): The set of all
logical addresses generated by a program
Physical Address Space (represented in words or bytes): The set of all physical addresses
corresponding to the logical addresses
Example:
If Logical Address = 31 bits, then Logical Address Space = 2^31 words = 2 G words (1 G = 2^30)
If Logical Address Space = 128 M words = 2^7 * 2^20 words, then Logical Address = log2(2^27) = 27 bits
If Physical Address = 22 bits, then Physical Address Space = 2^22 words = 4 M words (1 M = 2^20)
If Physical Address Space = 16 M words = 2^4 * 2^20 words, then Physical Address = log2(2^24) = 24 bits
The mapping from virtual to physical address is done by the memory management unit (MMU) which is
a hardware device and this mapping is known as paging technique.
The Physical Address Space is conceptually divided into a number of fixed-size blocks, called
frames.
The Logical Address Space is also split into fixed-size blocks, called pages.
Page number(p): Number of bits required to represent the pages in Logical Address Space or
Page number
Page offset(d): Number of bits required to represent particular word in a page or page size of
Logical Address Space or word number of a page or page offset.
Frame number(f): Number of bits required to represent the frame of Physical Address Space or
Frame number.
Frame offset(d): Number of bits required to represent particular word in a frame or frame size of
Physical Address Space or word number of a frame or frame offset.
The hardware implementation of the page table can be done using dedicated registers, but the use of registers is satisfactory only if the page table is small. If the page table contains a large number of entries, we can use a TLB (translation look-aside buffer), a special, small, fast-lookup hardware cache. When this memory is used, an item is compared with all tags simultaneously. If the item is found, the corresponding value is returned.
Segment Table: it maps a two-dimensional logical address into a one-dimensional physical address. Each of its table entries has:
Base Address: the starting physical address where the segment resides in memory.
Segment offset (d): the number of bits required to represent the size of the segment.
Advantages of Segmentation:
No Internal fragmentation.
Disadvantage of Segmentation:
As processes are loaded and removed from the memory, the free memory space is broken into
little pieces, causing External fragmentation.
The Banker's algorithm is a resource allocation and deadlock avoidance algorithm that tests for safety by simulating the allocation of the predetermined maximum possible amounts of all resources, then makes a safe-state check to test for possible activities before deciding whether the allocation should be allowed to continue.
The following data structures are used to implement the Banker's Algorithm:
Let n be the number of processes in the system and m be the number of resources types.
Available :
It is a 1-d array of size m indicating the number of available resources of each type.
Max :
It is a 2-d array of size n*m that defines the maximum demand of each process in a
system.
Allocation :
It is a 2-d array of size n*m that defines the number of resources of each type
currently allocated to each process.
Need :
It is a 2-d array of size n*m that indicates the remaining resource need of each
process.
Allocation_i specifies the resources currently allocated to process Pi, and Need_i specifies the additional resources that process Pi may still request to complete its task.
The Banker's algorithm consists of a Safety algorithm and a Resource-Request algorithm.
Safety Algorithm
Resource-Request Algorithm
Let Requesti be the request array for process Pi. Requesti [j] = k means process Pi wants k instances of
resource type Rj. When a request for resources is made by process Pi, the following actions are taken:
Example:
Consider a system with five processes P0 through P4 and three resource types A, B and C. Resource type A has 10 instances, B has 5 instances and C has 7 instances. Suppose at time t0 the following snapshot of the system has been taken:
Question 2. Is the system in a safe state? If yes, what is the safe sequence?
Applying the Safety algorithm on the given system,
Question 3. What will happen if process P1 requests one additional instance of resource type A and two instances of resource type C?
We must determine whether this new system state is safe. To do so, we again execute the Safety algorithm on the above data structures.
The new system state is safe, so we can immediately grant the request of process P1.
Gate question:
http://quiz.geeksforgeeks.org/gate-gate-cs-2014-set-1-question-41/
Reference:
Operating System Concepts 8th Edition by Abraham Silberschatz, Peter B. Galvin, Greg Gagne
If one of the people tries editing the file, no other person should be reading or writing at the same time; otherwise the changes will not be visible to them.
However, if some person is reading the file, then others may read it at the same time.
Once a writer is ready, it performs its write. Only one writer may write at a time.
Here, priority means that no reader should wait if the file is currently opened for reading.
Three variables are used to implement the solution: mutex, wrt and readcnt.
1. semaphore mutex, wrt; // semaphore mutex is used to ensure mutual exclusion when readcnt is updated, i.e. when any reader enters or exits the critical section; semaphore wrt is used by both readers and writers
2. int readcnt; // readcnt gives the number of processes performing reads in the critical section, initially 0
Functions for semaphores:
wait(): decrements the semaphore value.
signal(): increments the semaphore value.
Writer process:
do {
    // writer requests entry to the critical section
    wait(wrt);
    // performs the write
    // leaves the critical section
    signal(wrt);
} while(true);
Reader process:
1. A reader requests entry to the critical section.
2. If allowed:
o it increments the count of readers inside the critical section. If this reader is the
first one entering, it locks the wrt semaphore to restrict the entry of writers while any
reader is inside.
o It then signals mutex, since other readers are allowed to enter while it is reading.
o After reading, it exits the critical section. When exiting, it checks whether any reader
is still inside; if not, it signals the semaphore wrt so that a writer can now enter the
critical section.
3. If not allowed, it keeps waiting.
do {
// Reader wants to enter the critical section
wait(mutex);
// The number of readers has now increased by 1
readcnt++;
// The first reader locks wrt so that
// no writer can enter while at least
// one reader is reading
if (readcnt == 1)
wait(wrt);
signal(mutex); // other readers may enter
// current reader performs reading here
wait(mutex); // a reader wants to leave
readcnt--;
// that is, no reader is left in the critical section,
if (readcnt == 0)
signal(wrt);
// writers can enter
signal(mutex); // reader leaves
} while(true);
Thus, the semaphore wrt is shared between readers and writers in a manner that gives preference to
readers when writers are also waiting. No reader waits simply because a writer has requested to enter
the critical section.
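The pseudocode above maps directly onto POSIX semaphores. Below is a minimal runnable sketch (Linux, compiled with -pthread); the run_demo helper and the shared_data counter are illustrative additions, not part of the classic solution:

```c
#include <pthread.h>
#include <semaphore.h>

sem_t mutex, wrt;
int readcnt = 0;
int shared_data = 0;   /* stands in for the shared file */

void *writer(void *arg) {
    (void)arg;
    sem_wait(&wrt);         /* wait(wrt): exclusive access */
    shared_data += 1;       /* writing is performed */
    sem_post(&wrt);         /* signal(wrt) */
    return NULL;
}

void *reader(void *arg) {
    (void)arg;
    sem_wait(&mutex);       /* protect readcnt */
    if (++readcnt == 1)
        sem_wait(&wrt);     /* first reader locks out writers */
    sem_post(&mutex);

    int seen = shared_data; /* reading is performed */
    (void)seen;

    sem_wait(&mutex);
    if (--readcnt == 0)
        sem_post(&wrt);     /* last reader lets writers back in */
    sem_post(&mutex);
    return NULL;
}

/* spawn a mix of readers and writers (at most 64 total), join them all,
   and return the final shared_data */
int run_demo(int nreaders, int nwriters) {
    pthread_t t[64];
    int k = 0;
    sem_init(&mutex, 0, 1);
    sem_init(&wrt, 0, 1);
    readcnt = 0;
    shared_data = 0;
    for (int i = 0; i < nwriters; i++) pthread_create(&t[k++], NULL, writer, NULL);
    for (int i = 0; i < nreaders; i++) pthread_create(&t[k++], NULL, reader, NULL);
    for (int i = 0; i < k; i++) pthread_join(t[i], NULL);
    return shared_data;     /* equals nwriters: writes were mutually exclusive */
}
```

Because every write happens while wrt is held, the final counter always equals the number of writers, regardless of how the threads interleave.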
Let us first put priority inversion in the context of the big picture, i.e., where it comes from.
In operating systems, one of the important concepts is task scheduling. There are several scheduling
methods such as First Come First Serve, Round Robin, Priority based scheduling, etc. Each scheduling
method has its pros and cons. As you might have guessed, Priority Inversion comes under Priority based
Scheduling. Basically, it's a problem which sometimes arises when Priority based scheduling is used by
the OS. In Priority based scheduling, different tasks are given different priorities so that higher priority
tasks can preempt lower priority tasks when needed. So, if a lower priority task (L) is running and a
higher priority task (H) also needs to run, the lower priority task (L) will be preempted by the higher
priority task (H). Now, suppose both the lower and higher priority tasks need to share a common
resource (say access to the same file or device) to achieve their respective work. In this case, since
there's resource sharing, task synchronization is needed, and several methods/techniques can be used for
handling such scenarios. For the sake of our topic on Priority Inversion, let us pick one synchronization
method, say mutex. Just to recap on mutex: a task acquires the mutex before entering the critical section
(CS) and releases the mutex after exiting the critical section (CS). While running in the CS, a task
accesses this common resource. Now, say both L and H share a common critical section (CS), i.e., the
same mutex is needed for this CS.
Coming to our discussion of priority inversion, let us examine some scenarios.
1) L is running but not in CS; H needs to run; H preempts L; H starts running; H relinquishes or
releases control; L resumes and starts running.
2) L is running in CS; H needs to run but not in CS; H preempts L; H starts running; H relinquishes
control; L resumes and starts running.
3) L is running in CS ; H also needs to run in CS ; H waits for L to come out of CS ; L comes out of CS ;
H enters CS and starts running
Please note that the above scenarios don't show the problem of priority inversion (not even scenario
3). Basically, as long as the lower priority task isn't running in the shared CS, the higher priority task can
preempt it. But if L is running in the shared CS and H also needs to run in the CS, H waits until L comes
out of the CS. The idea is that the CS should be small enough that it doesn't result in H waiting a long
time while L is in the CS. That's why writing CS code requires careful consideration. In none of the
above scenarios did priority inversion (i.e., reversal of priority) occur, because the tasks ran as per the
design.
Now let us add another task of middle priority, say M. The task priorities are now in the order L < M <
H. In our example, M doesn't share the same critical section (CS). In this case, the following sequence
of task execution results in the Priority Inversion problem.
4) L is running in CS; H also needs to run in CS; H waits for L to come out of CS; M preempts L and
starts running; M runs till completion and relinquishes control; L resumes and runs to the end of the
CS; H enters the CS and starts running.
Note that neither L nor H shares a CS with M.
Here, we can see that the running of M has delayed the running of both L and H. Precisely speaking, H
is of higher priority and doesn't share a CS with M, yet H had to wait for M. This is where Priority based
scheduling didn't work as expected: the priorities of M and H got inverted in spite of not sharing any
CS. This problem is called Priority Inversion. So that's what the heck Priority Inversion is! In a system
with priority based scheduling, higher priority tasks can face this problem and it can result in unexpected
behavior. In a general purpose OS, it can result in slower performance. In an RTOS, it can result in
more severe outcomes. The most famous Priority Inversion problem was what happened at the Mars
Pathfinder.
If we have a problem, there has to be a solution for it. For Priority Inversion as well, there are different
solutions such as Priority Inheritance. This is going to be our next article.
Process Management
Question 1
Consider the following code fragment:
if (fork() == 0)
{ a = a + 5; printf("%d,%d\n", a, &a); }
else { a = a - 5; printf("%d, %d\n", a, &a); }
Let u, v be the values printed by the parent process, and x, y be the values printed by the child process.
Which one of the following is TRUE?
A) u = x + 10 and v = y
B) u = x + 10 and v != y
C) u + 10 = x and v = y
D) u + 10 = x and v != y
Answer: (C)
Question 1 Explanation:
fork() gives the child an identical copy of the parent's address space. The child adds 5 while the parent
subtracts 5 from the same initial value of a, so x = u + 10. Both processes print the same virtual address
of a (even though the physical pages differ after the write), so v = y.
Question 2
The atomic fetch-and-set x, y instruction unconditionally sets the memory location x to 1 and fetches the
old value of x in y without allowing any intervening access to the memory location x. Consider the
following implementation of the P and V functions on a binary semaphore.
void P (binary_semaphore *s) {
unsigned y;
unsigned *x = &(s->value);
do {
fetch-and-set x, y;
} while (y);
}
void V (binary_semaphore *s) {
s->value = 0;
}
Which one of the following is true?
A) The implementation may not work if context switching is disabled in P.
B) Instead of using fetch-and-set, a pair of normal load/store can be used.
C) The implementation of V is wrong.
D) The code does not implement a binary semaphore.
Question 2 Explanation:
Let us talk about the operation P(). It makes x point to s->value; fetch-and-set then copies the old value
of x into y and sets x to 1. The while loop of a process keeps spinning as long as y is 1, i.e., until some
other process executes V() and sets the value of s back to 0. If context switching is disabled in P, the
while loop will run forever, as no other process will be able to execute V(). So option (A) is correct.
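For reference, C11's atomic_flag provides exactly this fetch-and-set (test-and-set) primitive, so a busy-wait binary semaphore can be sketched as follows (a minimal illustration, not the GATE code itself):

```c
#include <stdatomic.h>

typedef struct {
    atomic_flag value;   /* clear = free (0), set = taken (1) */
} binary_semaphore;

/* P: spin until the old value fetched by test-and-set is 0 */
void P(binary_semaphore *s) {
    while (atomic_flag_test_and_set(&s->value))
        ;  /* busy wait: some other task holds the semaphore */
}

/* V: release by clearing the flag back to 0 */
void V(binary_semaphore *s) {
    atomic_flag_clear(&s->value);
}
```

Note that the spinning in P is exactly why disabling context switches would hang it: the spinner would never yield the CPU to the task that must call V.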
Question 3
Three concurrent processes X, Y, and Z execute three different code segments that access and update
certain shared variables. Process X executes the P operation (i.e., wait) on semaphores a, b and c; process
Y executes the P operation on semaphores b, c and d; process Z executes the P operation on semaphores
c, d, and a before entering the respective code segments. After completing the execution of its code
segment, each process invokes the V operation (i.e., signal) on its three semaphores. All semaphores are
binary semaphores initialized to one. Which one of the following represents a deadlock-free order of
invoking the P operations by the processes? (GATE CS 2013)
A) X: P(a)P(b)P(c) Y: P(b)P(c)P(d) Z: P(c)P(d)P(a)
B) X: P(b)P(a)P(c) Y: P(b)P(c)P(d) Z: P(a)P(c)P(d)
C) X: P(b)P(a)P(c) Y: P(c)P(b)P(d) Z: P(a)P(c)P(d)
D) X: P(a)P(b)P(c) Y: P(c)P(b)P(d) Z: P(c)P(d)P(a)
Question 3 Explanation:
Option A can cause deadlock. Imagine a situation where process X has acquired a, process Y has acquired
b, and process Z has acquired c and d. There is a circular wait now. Option C can also cause deadlock:
imagine process X has acquired b, process Y has acquired c, and process Z has acquired a. There is a
circular wait now. Option D can also cause deadlock: imagine process X has acquired a and b, and
process Y has acquired c. X and Y are circularly waiting for each other. See
http://www.eee.metu.edu.tr/~halici/courses/442/Ch5%20Deadlocks.pdf. Considering option A, for example:
all 3 processes are concurrent, so X will get semaphore a, Y will get b and Z will get c; now X is
blocked on b, Y is blocked on c, and Z gets d and is blocked on a. Thus it leads to deadlock. Similarly one
can figure out that for option B the completion order is Z, X, then Y, so the answer is (B). This question
is a duplicate of http://geeksquiz.com/gate-gate-cs-2013-question-16/
Question 4
A shared variable x, initialized to zero, is operated on by four concurrent processes W, X, Y, Z as follows.
Each of the processes W and X reads x from memory, increments by one, stores it to memory, and then
terminates. Each of the processes Y and Z reads x from memory, decrements by two, stores it to memory,
and then terminates. Each process before reading x invokes the P operation (i.e., wait) on a counting
semaphore S and invokes the V operation (i.e., signal) on the semaphore S after storing x to memory.
Semaphore S is initialized to two. What is the maximum possible value of x after all processes complete
execution? (GATE CS 2013)
A) -2
B) -1
C) 1
D) 2
Question 4 Explanation:
The processes can run in many ways; below is one case in which x attains its maximum value.
Semaphore S is initialized to 2.
Process W executes P(S) (S=1) and reads x=0, computing x+1 = 1, but does not yet store it back.
Then process Y executes P(S) (S=0), decrements x twice to -2, stores it, and signals the semaphore (S=1).
Now process Z executes P(S) (S=0), decrements x to -4, stores it, and signals the semaphore (S=1).
Now process W resumes and stores its stale value: x=1, then signals (S=2).
Then process X executes, incrementing x to 2.
So the correct option is (D).
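That interleaving can be replayed in plain C, with a local variable standing in for the register where W holds its stale value (a sketch of the schedule above, not of real concurrency):

```c
/* Replay the schedule that maximizes x. w_reg models the value
   W read and incremented before being preempted. */
int max_trace(void) {
    int x = 0;
    int w_reg = x + 1;  /* W reads x=0, computes 1, is preempted before storing */
    x = x - 2;          /* Y runs to completion: x = -2 */
    x = x - 2;          /* Z runs to completion: x = -4 */
    x = w_reg;          /* W resumes and stores its stale result: x = 1 */
    x = x + 1;          /* X runs to completion: x = 2 */
    return x;
}
```

The lost updates from Y and Z are exactly what lets x end at 2 rather than 0 (the value all four updates would produce if run atomically).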
Question 6
A certain computation generates two arrays a and b such that a[i] = f(i) for 0 <= i < n and b[i] = g(a[i])
for 0 <= i < n. Suppose this computation is decomposed into two concurrent processes X and Y such that
X computes the array a and Y computes the array b. The processes employ two binary semaphores R and
S, both initialized to zero. The array a is shared by the two processes. The structures of the processes are
shown below.
Process X:
private i;
for (i=0; i < n; i++) {
a[i] = f(i);
ExitX(R, S);
}
Process Y:
private i;
for (i=0; i < n; i++) {
EntryY(R, S);
b[i]=g(a[i]);
}
Which one of the following represents the CORRECT implementations of ExitX and EntryY?
(A)
ExitX(R, S) {
P(R);
V(S);
}
EntryY (R, S) {
P(S);
V(R);
}
(B)
ExitX(R, S) {
V(R);
V(S);
}
EntryY(R, S) {
P(R);
P(S);
}
(C)
ExitX(R, S) {
P(S);
V(R);
}
EntryY(R, S) {
V(S);
P(R);
}
(D)
ExitX(R, S) {
V(R);
P(S);
}
EntryY(R, S) {
V(S);
P(R);
}
A) A
B) B
C) C
D) D
Question 6 Explanation:
The requirement here is that neither should deadlock occur nor should the binary semaphores ever take a
value greater than one.
Option A leads to deadlock: both R and S start at 0, so X blocks on P(R) and Y blocks on P(S) immediately.
Option B can raise the value of the semaphores to anywhere between 1 and n.
Option D may raise the value of semaphores R and S to 2 in some interleavings.
Hence option C is the answer.
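Option C can be checked with POSIX semaphores: the two threads below run in lockstep, so every b[i] ends up equal to g(f(i)). This is a sketch; f, g and run_xy are placeholder choices not in the question, and it assumes Linux with -pthread:

```c
#include <pthread.h>
#include <semaphore.h>

#define N 5
static int a[N], b[N];
static sem_t R, S;   /* both initialized to 0, as in the question */

static int f(int i) { return 2 * i; }   /* placeholder computation */
static int g(int v) { return v + 1; }   /* placeholder computation */

static void *proc_x(void *arg) {
    (void)arg;
    for (int i = 0; i < N; i++) {
        a[i] = f(i);
        sem_wait(&S);   /* ExitX: P(S) — wait until Y is ready for a[i] */
        sem_post(&R);   /* V(R) — announce that a[i] is available */
    }
    return NULL;
}

static void *proc_y(void *arg) {
    (void)arg;
    for (int i = 0; i < N; i++) {
        sem_post(&S);   /* EntryY: V(S) — let X hand over a[i] */
        sem_wait(&R);   /* P(R) — wait until a[i] has been written */
        b[i] = g(a[i]);
    }
    return NULL;
}

/* returns 1 if every b[i] == g(f(i)) after both threads finish */
int run_xy(void) {
    pthread_t x, y;
    sem_init(&R, 0, 0);
    sem_init(&S, 0, 0);
    pthread_create(&x, NULL, proc_x, NULL);
    pthread_create(&y, NULL, proc_y, NULL);
    pthread_join(x, NULL);
    pthread_join(y, NULL);
    for (int i = 0; i < N; i++)
        if (b[i] != g(f(i))) return 0;
    return 1;
}
```

Because X blocks on P(S) right after writing each a[i] and Y blocks on P(R) before reading it, neither semaphore ever exceeds 1 and neither thread can run ahead of the other.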
http://quiz.geeksforgeeks.org/operating-systems/process-synchronization/
http://quiz.geeksforgeeks.org/cpu-scheduling/
http://quiz.geeksforgeeks.org/operating-systems/memory-management/
http://quiz.geeksforgeeks.org/operating-systems/iinput-output-systems/
What is a process and a process table? What are the different states of a process?
A process is an instance of a program in execution. For example, a web browser is a process, and a shell
(or command prompt) is a process.
The operating system is responsible for managing all the processes that are running on a computer and
allocates each process a certain amount of time to use the processor. In addition, the operating system
also allocates various other resources that processes will need, such as memory or disks. To
keep track of the state of all the processes, the operating system maintains a table known as the process
table. In this table, every process is listed along with the resources it is using and its current state.
Processes can be in one of three states: running, ready, or waiting. The running state means that the
process has all the resources it needs for execution and has been given permission by the operating
system to use the processor. Only one process can be in the running state at any given time. The
remaining processes are either in a waiting state (i.e., waiting for some external event to occur such as
user input or a disk access) or a ready state (i.e., waiting for permission to use the processor). In a real
operating system, the waiting and ready states are implemented as queues that hold the processes in
these states. A simple representation of the life cycle of a process can be found at
http://courses.cs.vt.edu/csonline/OS/Lessons/Processes/index.html
What is a Thread? What are the differences between a process and a thread?
A thread is a single sequence stream within a process. Because threads have some of the properties of
processes, they are sometimes called lightweight processes. Threads are a popular way to improve
applications through parallelism. For example, in a browser, multiple tabs can be different threads; MS
Word uses multiple threads, one thread to format the text, another to process inputs, etc.
A thread has its own program counter (PC), register set, and stack space. Threads are not independent
of one another like processes are; as a result, threads share with other threads their code section, data
section and OS resources like open files and signals. See
http://www.personal.kent.edu/~rmuhamma/OpSystems/Myos/threads.htm for more details.
What is deadlock?
Deadlock is a situation where two or more processes wait for each other to finish and none of them ever
finishes. Consider an example where two trains are approaching each other on the same track and there
is only one track: neither train can move once they are in front of each other. A similar situation occurs
in operating systems when two or more processes hold some resources and wait for resources held by
the other(s).
What are the necessary conditions for deadlock?
Mutual Exclusion: There is a resource that cannot be shared.
Hold and Wait: A process is holding at least one resource and waiting for another resource which is held
by some other process.
No Preemption: The operating system is not allowed to take a resource back from a process until the
process gives it back.
Circular Wait: A set of processes are waiting for each other in circular form.
What is Virtual Memory? How is it implemented?
Virtual memory creates an illusion that each user has one or more contiguous address spaces, each
beginning at address zero. Such virtual address spaces are generally very large.
The idea of virtual memory is to use disk space to extend the RAM. Running processes don't need to care
whether the memory comes from RAM or disk. The illusion of such a large amount of memory is created
by subdividing the virtual memory into smaller pieces, which can be loaded into physical memory
whenever they are needed by a process.
What is Thrashing?
Thrashing is a situation where the performance of a computer degrades or collapses. Thrashing occurs
when a system spends more time servicing page faults than executing transactions. While servicing
page faults is necessary in order to realize the benefits of virtual memory, thrashing has a negative
effect on the system. As the page-fault rate increases, more transactions need servicing from the paging
device, so the queue at the paging device grows, resulting in increased service time for a page fault.
(Source: http://cs.gmu.edu/cne/modules/vm/blue/thrash.html)
What is Belady's Anomaly?
Belady's anomaly is an anomaly of some page replacement policies where increasing the number of
page frames results in an increase in the number of page faults. It occurs when First In First Out (FIFO)
page replacement is used. See the wiki page for an example and more details.
Differences between mutex and semaphore?
See http://www.geeksforgeeks.org/mutex-vs-semaphore/
OS articles
Batch OS: A set of similar jobs are stored in the main memory for execution. A job gets assigned
to the CPU, only when the execution of the previous job completes.
Multiprogramming OS: The main memory consists of jobs waiting for CPU time. The OS
selects one of the processes and assigns it the CPU time. Whenever the executing process needs to
wait for any other operation (like I/O), the OS selects another process from the job queue and
assigns it the CPU. This way, the CPU is never kept idle and the user gets the flavor of getting
multiple tasks done at once.
Time Sharing OS: Time sharing systems require interaction with the user to instruct the OS to
perform various tasks. The OS responds with an output. The instructions are usually given
through an input device like the keyboard.
Real Time OS : Real Time OS are usually built for dedicated systems to accomplish a specific set
of tasks within deadlines.
Threads
A thread is a lightweight process and forms a basic unit of CPU utilization. A process can perform more
than one task at the same time by including multiple threads.
o A thread has its own program counter, register set, and stack.
o A thread shares with other threads of the same process the code section, the data section,
files and signals.
A child process of a given process can be created using the fork() system call. A program with n fork()
system calls generates 2^n - 1 child processes.
There are two types of threads:
User threads
Kernel threads
Process:
A process is a program under execution. The value of program counter (PC) indicates the address of the
current instruction of the process being executed. Each process is represented by a Process Control Block
(PCB).
Process Scheduling:
Below are different times with respect to a process.
Arrival Time:
Time at which the process arrives in the ready queue.
Completion Time:
Time at which process completes its execution.
Burst Time:
Time required by a process for CPU execution.
Turn Around Time:
Time Difference between completion time and arrival time.
Turn Around Time = Completion Time - Arrival Time
Waiting Time(W.T): Time Difference between turn around time and burst time.
Waiting Time = Turn Around Time - Burst Time
Shortest Job First (SJF): The process with the shortest burst time is scheduled first.
Shortest Remaining Time First (SRTF): The preemptive mode of the SJF algorithm, in which jobs are
scheduled according to the shortest remaining time.
Round Robin Scheduling: Each process is assigned a fixed time in cyclic way.
Priority Based Scheduling (Non-Preemptive): Processes are scheduled according to their priorities, i.e.,
the highest priority process is scheduled first. If the priorities of two processes match, they are
scheduled according to arrival time.
Highest Response Ratio Next (HRRN): The process with the highest response ratio is scheduled first.
This algorithm avoids starvation.
Response Ratio = (Waiting Time + Burst Time) / Burst Time
Multilevel Queue Scheduling: According to the priority of a process, processes are placed in different
queues. Generally, high priority processes are placed in the top-level queue. Only after completion of
the processes in the top-level queue are the lower-level queued processes scheduled.
Multi level Feedback Queue Scheduling: It allows the process to move in between queues. The idea is to
separate processes according to the characteristics of their CPU bursts. If a process uses too much CPU
time, it is moved to a lower-priority queue.
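As a worked illustration of the turnaround/waiting-time formulas above, here is a small FCFS sketch. The fcfs helper and the sample numbers are illustrative, and it assumes the processes are listed in arrival order:

```c
/* FCFS: serve processes in arrival order; fill turnaround and waiting times */
void fcfs(int n, const int arrival[], const int burst[], int tat[], int wt[]) {
    int t = 0;  /* current time on the CPU */
    for (int i = 0; i < n; i++) {
        if (t < arrival[i]) t = arrival[i];  /* CPU idles until the process arrives */
        t += burst[i];                        /* t is now the completion time */
        tat[i] = t - arrival[i];              /* Turnaround = Completion - Arrival */
        wt[i] = tat[i] - burst[i];            /* Waiting = Turnaround - Burst */
    }
}
```

For example, with arrival times {0, 1, 2} and burst times {4, 3, 1}, the completion times are 4, 7 and 8, giving turnaround times 4, 6, 6 and waiting times 0, 3, 5.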
A solution to the critical section problem must satisfy the following three conditions:
1. Mutual Exclusion: If a process Pi is executing in its critical section, then no other process is
allowed to enter the critical section.
2. Progress: If no process is executing in its critical section and some processes wish to enter, then
only processes not executing in their remainder section can participate in deciding which will enter
next, and this selection cannot be postponed indefinitely.
3. Bounded Waiting: There exists a bound on the number of times other processes can enter the
critical section after a process has made a request to enter the critical section and before that
request is granted.
Synchronization Tools
Semaphores: A semaphore is an integer variable that is accessed only through two atomic operations,
wait () and signal (). An atomic operation is executed in a single CPU time slice without any pre-emption.
Semaphores are of two types:
1. Counting Semaphore: A counting semaphore is an integer variable whose value can range over
an unrestricted domain.
2. Mutex: Binary semaphores are called mutexes. These can have only two values, 0 or 1. The
operations wait() and signal() operate on them in a similar fashion.
Deadlock
A situation where a set of processes are blocked because each process is holding a
resource and waiting for another resource acquired by some other process.
Deadlock can arise if following four conditions hold simultaneously (Necessary
Conditions)
Mutual Exclusion: One or more resources are non-sharable (only one process can use a
resource at a time).
Hold and Wait: A process is holding at least one resource and waiting for additional resources.
No Preemption: A resource cannot be taken from a process unless the process releases the
resource.
Circular Wait: A set of processes are waiting for each other in circular form.
Methods for handling deadlock
There are three ways to handle deadlock
1) Deadlock prevention or avoidance: The idea is to not let the system enter a deadlock state.
2) Deadlock detection and recovery: Let deadlock occur, then do preemption to handle it
once occurred.
3) Ignore the problem all together: If deadlock is very rare, then let it happen and reboot
the system. This is the approach that both Windows and UNIX take.
Bankers Algorithm:
This algorithm handles multiple instances of the same resource.
Example: The snapshot of the system at a given instant:
Memory Management:
These techniques allow the memory to be shared among multiple processes.
Overlays: The memory should contain only those instructions and data that are required at
a given time.
Swapping: In a multiprogramming environment, processes whose time slice has expired are
swapped out of memory.
1. Paging: The physical memory is divided into equal-sized frames, and the logical memory of
a process is divided into pages of the same size, so that any page can be loaded into any
free frame.
2. Segmentation: Segmentation is implemented to match the user's view of memory. The
logical address space is a collection of segments. Segmentation can be implemented with
or without the use of paging.
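The paging scheme above boils down to splitting each logical address into a page number and an offset within the page; a minimal sketch (the 4 KB page size is an assumed example):

```c
/* Split a logical address into page number and offset, assuming 4 KB pages.
   The page number indexes the page table; the offset is kept as-is. */
#define PAGE_SIZE 4096u

unsigned page_number(unsigned vaddr) { return vaddr / PAGE_SIZE; }
unsigned page_offset(unsigned vaddr) { return vaddr % PAGE_SIZE; }
```

Because the page size is a power of two, real hardware does this with a shift and a mask rather than division.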
Page Fault
A page fault is a type of interrupt, raised by the hardware when a running program
accesses a memory page that is mapped into the virtual address space, but not loaded in
physical memory.
Beladys anomaly
Belady's anomaly shows that it is possible to have more page faults when increasing the
number of page frames while using the First In First Out (FIFO) page replacement
algorithm. For example, with the reference string 3 2 1 0 3 2 4 3 2 1 0 4 and 3 frames we
get 9 total page faults, but if we increase the number of frames to 4, we get 10 page faults.
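The FIFO fault counts for the reference string 3 2 1 0 3 2 4 3 2 1 0 4 can be verified mechanically; the fifo_faults helper below is an illustrative sketch (it assumes at most 16 frames):

```c
/* Count page faults for FIFO replacement with the given number of frames
   (frames <= 16 assumed). mem holds resident pages; head is the FIFO victim. */
int fifo_faults(const int *ref, int n, int frames) {
    int mem[16];
    int head = 0, used = 0, faults = 0;
    for (int i = 0; i < n; i++) {
        int hit = 0;
        for (int j = 0; j < used; j++)
            if (mem[j] == ref[i]) { hit = 1; break; }
        if (!hit) {
            faults++;
            if (used < frames) {
                mem[used++] = ref[i];           /* fill an empty frame */
            } else {
                mem[head] = ref[i];             /* evict the oldest page */
                head = (head + 1) % frames;
            }
        }
    }
    return faults;
}
```

Running it on the reference string above yields 9 faults with 3 frames but 10 faults with 4 frames, reproducing the anomaly.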
For Optimal page replacement, consider for example the reference string 7 0 1 2 0 3 0 4 2 3 0 3 2
(the one the steps below trace) with 4 page frames:
Initially all slots are empty, so 7, 0, 1 and 2 are allocated to the empty slots —> 4 page faults.
0 is already there —> 0 page faults.
When 3 comes, it takes the place of 7 because 7 is not used for the longest duration of
time in the future —> 1 page fault.
0 is already there —> 0 page faults.
4 takes the place of 1 —> 1 page fault.
For the rest of the reference string —> 0 page faults, because those pages are already
available in memory.
Optimal page replacement is perfect, but not possible in practice, as the operating system
cannot know future requests. The use of Optimal page replacement is to set up a
benchmark against which other replacement algorithms can be analyzed.
SOLUTION
// C++ program to print a pattern that first reduces 5 one
// by one, then adds 5. Without any loop or extra variable.
#include <iostream>
using namespace std;
// Recursive function to print the pattern without any extra
// variable
void printPattern(int n)
{
// Base case (when n becomes 0 or negative)
if (n <= 0)
{
cout << n << " ";
return;
}
// First print decreasing order
cout << n << " ";
printPattern(n-5);
// Then print increasing order
cout << n << " ";
}
// Driver Program
int main()
{
int n = 16;
printPattern(n);
return 0;
}
Output:
16 11 6 1 -4 1 6 11 16