Chapter - 1
6. What is a process? State and explain in brief the different process states. [5m]
• A process is defined as an entity that represents the basic unit of work to be
implemented in the system.
• 1. New: A process is said to be in the new state if it is being created.
• 2. Ready: A process is said to be in the ready state if it is ready for execution and waiting
for the CPU to be allocated to it.
• 3. Running: A process is said to be in the running state if the CPU has been
allocated to it and it is currently being executed.
• 4. Waiting or Blocked: A process is said to be in the waiting state if it has been blocked by
some event. Unless that event occurs, the process cannot continue its execution.
• 5. Terminated: A process is said to be in the terminated state if it has completed its execution
normally, or if it has been terminated abnormally by the OS because of some error or
killed by some other process.
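Below is a minimal C sketch (not taken from any particular kernel) of how these five states might be represented in a process control block; the enum values, struct fields, and transition order are illustrative assumptions only.

```c
#include <stdio.h>

/* Illustrative process states matching the five states described above. */
enum proc_state { NEW, READY, RUNNING, WAITING, TERMINATED };

/* A toy process control block holding only the fields needed here. */
struct pcb {
    int pid;
    enum proc_state state;
};

int main(void) {
    struct pcb p = { .pid = 1, .state = NEW };
    p.state = READY;      /* admitted and waiting for the CPU        */
    p.state = RUNNING;    /* CPU allocated by the dispatcher         */
    p.state = WAITING;    /* blocked on an I/O request               */
    p.state = READY;      /* I/O completed, waiting for the CPU again */
    p.state = TERMINATED; /* finished execution                      */
    printf("final state = %d\n", p.state);
    return 0;
}
```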
7. List any two operating system examples that use the one-to-one model. [1m]
• OS/2, Windows NT, and Windows 2000
10. Differentiate between user-level threads and kernel-level threads. [2m]
• User-level threads are created and managed by a thread library in user space without
kernel support, so they are fast to create and switch, but if one thread blocks on a
system call the whole process may block. Kernel-level threads are created and managed
directly by the operating system, which can schedule them independently, at the cost
of slower creation and switching.
• If all processes are I/O bound, the ready queue will be almost empty and the
short-term scheduler will have little to do. I/O-bound processes spend more
time doing I/O than computation.
10. Explain multilevel queue scheduling with a diagram.
• A multilevel queue scheduling algorithm partitions the ready queue into several
separate queues.
• In multilevel queue scheduling, processes are permanently assigned to one queue,
depending upon their properties such as memory size, process type, or process
priority. Each queue can follow its own scheduling algorithm.
• In the multilevel queue scheduling algorithm, the processes are classified into
different groups such as system processes, interactive processes, interactive editing
processes, batch processes, user processes, etc., as shown in Fig. 3.9.
• Here, each queue gets a certain portion of the CPU time, which it can then schedule
among its various processes; a small code sketch of this queue selection is given after this list.
• The time taken by the dispatcher to stop one process and start another is called the
dispatch latency.
• The following functions are performed by the dispatcher:
• 1. Loading the registers of the process.
• 2. Switching the operating system to user mode.
• 3. Restarting the program by jumping to the proper location in the user program.
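The following is a hedged C sketch of the queue-selection idea mentioned above: three illustrative queues (system, interactive, batch) are served in strict priority order, with FCFS inside each queue. The queue names, sizes, contents, and per-queue policy are assumptions for illustration, not a prescribed implementation.

```c
#include <stdio.h>

#define NQUEUES 3   /* 0: system, 1: interactive, 2: batch */
#define MAXP    8

/* Each queue holds process IDs; a higher-indexed (lower-priority) queue
   only runs when all lower-indexed queues are empty. */
struct queue { int pids[MAXP]; int count; };

static int pick_next(struct queue q[NQUEUES]) {
    for (int level = 0; level < NQUEUES; level++) {
        if (q[level].count > 0) {
            int pid = q[level].pids[0];          /* FCFS within one queue */
            for (int i = 1; i < q[level].count; i++)
                q[level].pids[i - 1] = q[level].pids[i];
            q[level].count--;
            return pid;
        }
    }
    return -1;   /* all queues empty */
}

int main(void) {
    struct queue q[NQUEUES] = {
        { {0},      0 },            /* system queue: empty     */
        { {11, 12}, 2 },            /* interactive: P11, P12   */
        { {21},     1 },            /* batch: P21              */
    };
    int pid;
    while ((pid = pick_next(q)) != -1)
        printf("dispatching P%d\n", pid);
    return 0;
}
```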
CHAPTER -4
1. What is a race condition? [1m]
• A race condition is a situation where multiple processes access and manipulate the
same shared data concurrently, and the outcome of the execution depends on the
particular order in which the instructions execute.
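A small POSIX-threads sketch of a race condition: two threads increment the same counter without any synchronization, so the final value is usually less than expected. The thread count and iteration count are arbitrary example values.

```c
#include <pthread.h>
#include <stdio.h>

/* Two threads increment the same counter without synchronization.
   The final value is often less than 2,000,000 because the
   read-modify-write of `counter++` can interleave between threads. */
static long counter = 0;

static void *worker(void *arg) {
    (void)arg;
    for (int i = 0; i < 1000000; i++)
        counter++;                 /* unsynchronized access: race condition */
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld (expected 2000000)\n", counter);
    return 0;
}
```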
2. State two general approaches that are used to handle the critical section problem in an
operating system. [1m]
• Peterson Solution
o In Peterson's solution, when one process is executing in its critical section, the
other process can execute only the remainder of its code, and the opposite can happen;
see the sketch after this list.
o This method also helps to make sure that only a single process runs in the
critical section at a specific time.
• Bakery algorithm
o The Bakery algorithm is one of the simplest known solutions to the mutual
exclusion problem for the general case of n (multiple) processes.
o The Bakery algorithm is a critical section solution for n processes. The Bakery
algorithm preserves the First Come First Serve (FCFS) property.
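A minimal C sketch of Peterson's solution for two processes, referenced above. The flag/turn variables follow the usual textbook presentation; on modern hardware additional memory barriers would be needed, so this is illustrative only.

```c
#include <stdbool.h>
#include <stdio.h>

/* Peterson's solution for two processes (0 and 1).
   flag[i] says process i wants to enter; turn yields to the other process. */
static volatile bool flag[2] = { false, false };
static volatile int  turn    = 0;

static void enter_critical_section(int i) {
    int other = 1 - i;
    flag[i] = true;          /* announce intent to enter          */
    turn    = other;         /* give the other process priority   */
    while (flag[other] && turn == other)
        ;                    /* busy-wait until it is safe to enter */
}

static void exit_critical_section(int i) {
    flag[i] = false;         /* leave: allow the other process in */
}

int main(void) {
    enter_critical_section(0);
    printf("process 0 inside critical section\n");
    exit_critical_section(0);
    return 0;
}
```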
3. What is a semaphore? [5m]
• A semaphore is a variable used to control access to a common resource by multiple
processes and to avoid the critical section problem.
• A semaphore is an integer variable which can be accessed only through two
operations, Wait() and Signal().
• Semaphores allow only one process into the critical section. They follow the mutual
exclusion principle strictly and are much more efficient than some other methods of
synchronization.
• When semaphores are implemented without busy waiting, there is no wastage of
processor time in repeatedly checking whether a condition is fulfilled to allow a process to
access the critical section.
• The two types of semaphores are binary semaphores and counting semaphores. A binary
semaphore is restricted to the values zero (0) and one (1), while a counting semaphore
can assume any non-negative integer value, i.e. it can range over an unrestricted domain.
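A short usage sketch with POSIX semaphores (assuming a Linux-style environment), where sem_wait() plays the role of Wait() and sem_post() plays the role of Signal(); the shared counter and loop bounds are arbitrary example values.

```c
#include <semaphore.h>
#include <pthread.h>
#include <stdio.h>

/* A binary semaphore (initial value 1) guarding a shared counter. */
static sem_t mutex;
static long shared = 0;

static void *worker(void *arg) {
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        sem_wait(&mutex);   /* Wait(): decrement, block if the value is 0 */
        shared++;           /* critical section                           */
        sem_post(&mutex);   /* Signal(): increment, wake a waiter         */
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    sem_init(&mutex, 0, 1);     /* initial value 1 -> binary semaphore */
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("shared = %ld\n", shared);
    sem_destroy(&mutex);
    return 0;
}
```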
4. Explain the bounded buffer problem. Give the structure of the producer and the consumer. [5m]
• The bounded buffer problem is also called the producer-consumer problem. The solution
to this problem is to create two counting semaphores, "full" and "empty", to keep track of
the current number of full and empty buffers respectively.
• Producers produce items and consumers consume them, but both use one of the buffer
slots each time.
• Producer
o Creates an item and adds it to the buffer.
o Must not overflow the buffer.
• Consumer
o Removes items from the buffer (consumes them).
o Must not get ahead of the producer.
• The bounded-buffer problem assumes that there is a fixed buffer size. The consumer waits
for a new item, and the producer waits if the buffer is full. Producers and consumers behave
much like Unix pipes.
• The bounded buffer problem can be handled using semaphores: a mutex semaphore
provides mutual exclusion, while the empty and full semaphores count the number of empty
and full buffers respectively. The structure of the producer and consumer is sketched after this list.
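A possible structure of the producer and consumer using the mutex, empty, and full semaphores described above, written with POSIX threads and semaphores; the buffer size and item counts are arbitrary choices for illustration.

```c
#include <semaphore.h>
#include <pthread.h>
#include <stdio.h>

#define N 5                      /* fixed buffer size (bounded buffer) */

static int   buffer[N];
static int   in = 0, out = 0;    /* next slot to fill / to empty       */
static sem_t empty_slots;        /* counts empty slots, starts at N    */
static sem_t full_slots;         /* counts full slots, starts at 0     */
static sem_t mutex;              /* binary semaphore protecting buffer */

static void *producer(void *arg) {
    (void)arg;
    for (int item = 1; item <= 10; item++) {
        sem_wait(&empty_slots);          /* wait if the buffer is full   */
        sem_wait(&mutex);
        buffer[in] = item;               /* add the item to the buffer   */
        in = (in + 1) % N;
        sem_post(&mutex);
        sem_post(&full_slots);           /* signal one more full slot    */
    }
    return NULL;
}

static void *consumer(void *arg) {
    (void)arg;
    for (int i = 0; i < 10; i++) {
        sem_wait(&full_slots);           /* wait if the buffer is empty  */
        sem_wait(&mutex);
        int item = buffer[out];          /* remove an item from buffer   */
        out = (out + 1) % N;
        sem_post(&mutex);
        sem_post(&empty_slots);          /* signal one more empty slot   */
        printf("consumed %d\n", item);
    }
    return NULL;
}

int main(void) {
    pthread_t p, c;
    sem_init(&empty_slots, 0, N);
    sem_init(&full_slots,  0, 0);
    sem_init(&mutex,       0, 1);
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;
}
```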
5. Counting semaphore can be implemented by using binary semaphore. True/ False.
Justify. [1m]
• True. A counting semaphore can be implemented using binary semaphores: one binary
semaphore protects an integer counter holding the semaphore's value, and a second binary
semaphore (initialized to 0) is used to block and wake up processes when the value goes
negative, as sketched below.
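A sketch of the classic textbook construction: an integer value protected by one binary semaphore, plus a second binary semaphore used purely for blocking and waking. POSIX sem_t objects stand in for binary semaphores here (they are only ever given the values 0 and 1); the names and structure are illustrative.

```c
#include <semaphore.h>
#include <stdio.h>

/* Counting semaphore built from two binary semaphores and an integer. */
struct counting_sem {
    sem_t lock;     /* binary: protects `value`           */
    sem_t queue;    /* binary: blocks waiting processes   */
    int   value;    /* the counting semaphore's value     */
};

static void csem_init(struct counting_sem *s, int value) {
    sem_init(&s->lock, 0, 1);
    sem_init(&s->queue, 0, 0);
    s->value = value;
}

static void csem_wait(struct counting_sem *s) {
    sem_wait(&s->lock);
    s->value--;
    if (s->value < 0) {
        sem_post(&s->lock);
        sem_wait(&s->queue);   /* block until a signal arrives */
    } else {
        sem_post(&s->lock);
    }
}

static void csem_signal(struct counting_sem *s) {
    sem_wait(&s->lock);
    s->value++;
    if (s->value <= 0)
        sem_post(&s->queue);   /* wake one blocked process */
    sem_post(&s->lock);
}

int main(void) {
    struct counting_sem s;
    csem_init(&s, 2);
    csem_wait(&s);
    csem_wait(&s);
    csem_signal(&s);
    printf("value = %d\n", s.value);
    return 0;
}
```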
6. What is external fragmentation? What are the various ways to avoid external fragmentation?
7. What are the advantages of the paging with segmentation model?
1. Paging allows jobs to be allocated in non-contiguous memory locations, as illustrated in
the sketch after this list.
2. In paging, memory is used more efficiently.
3. Paging does not require any support for dynamic relocation, because paging itself is
a form of dynamic relocation.
4. Paging supports a higher degree of multiprogramming.
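A small worked sketch of how paging translates a logical address into a physical address through a page table, which is what allows non-contiguous allocation. The page size, page-table contents, and addresses are made-up example values.

```c
#include <stdio.h>

/* Illustrative paging translation, assuming a 4 KB page size and a tiny
   made-up page table; the frame numbers are arbitrary examples. */
#define PAGE_SIZE 4096u

int main(void) {
    unsigned page_table[4] = { 7, 2, 9, 5 };   /* page -> frame (example)     */
    unsigned logical = 0x2ABC;                 /* logical address to translate */

    unsigned page   = logical / PAGE_SIZE;     /* 0x2ABC / 4096 = 2            */
    unsigned offset = logical % PAGE_SIZE;     /* 0x2ABC % 4096 = 0xABC        */
    unsigned frame  = page_table[page];        /* frame may be anywhere in RAM */
    unsigned physical = frame * PAGE_SIZE + offset;

    printf("logical 0x%X -> page %u, offset 0x%X -> physical 0x%X\n",
           logical, page, offset, physical);
    return 0;
}
```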
8. What is fragmentation? Explain internal and external fragmentation in detail.
• As processes are loaded into and removed from memory, the free memory space is broken
into little pieces. After some time, processes cannot be allocated to memory blocks because
the remaining blocks are too small, and these memory blocks remain unused. This problem
is known as fragmentation.
• External Fragmentation: 1) It arises when the free memory areas existing in the system
are too small to be allocated to processes, even though their total size may be sufficient.
2) It can be eliminated by the compaction technique.
• Internal Fragmentation: 1) It arises when memory is allocated in fixed-sized blocks and
the block given to a process is slightly larger than the memory it actually requested; the
unused space inside the block is wasted. 2) It can be reduced by allocating the smallest
block that is large enough for the request. A small numeric sketch of both kinds follows.
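A small numeric sketch of both kinds of fragmentation; all sizes are made-up example values.

```c
#include <stdio.h>

/* Illustrative numbers only: internal fragmentation when memory is handed
   out in fixed 4 KB blocks, and external fragmentation when free memory
   exists but no single hole is large enough. */
#define BLOCK 4096u

int main(void) {
    /* Internal fragmentation: a process asks for 10,000 bytes but receives
       3 whole blocks (12,288 bytes); the 2,288 unused bytes inside the
       allocation are wasted. */
    unsigned request = 10000;
    unsigned blocks  = (request + BLOCK - 1) / BLOCK;
    printf("internal fragmentation = %u bytes\n", blocks * BLOCK - request);

    /* External fragmentation: three free holes of 300, 200 and 150 KB total
       650 KB, yet a 400 KB request cannot be satisfied because no single
       hole is big enough (compaction would fix this). */
    unsigned holes[] = { 300, 200, 150 };      /* hole sizes in KB */
    unsigned need = 400, total = 0, largest = 0;
    for (int i = 0; i < 3; i++) {
        total += holes[i];
        if (holes[i] > largest) largest = holes[i];
    }
    printf("free = %u KB, largest hole = %u KB, request %u KB %s\n",
           total, largest, need, largest >= need ? "fits" : "does not fit");
    return 0;
}
```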
16. Explain demand paging with an example. Discuss the hardware required for demand paging.
• Demand paging is a method of virtual memory management.
• With demand-paged virtual memory, pages are loaded only when they are demanded
during program execution; pages that are never accessed are thus never loaded into
physical memory.
• The hardware required to support demand paging:
o Page table (typically with a valid/invalid bit in each entry)
o Secondary memory (swap space)
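A minimal sketch of the valid/invalid bit in a page-table entry that demand paging relies on: the first access to a page causes a page fault and the page is brought in from secondary memory, while later accesses find it resident. The frame numbers and the "disk read" are simulated assumptions for illustration.

```c
#include <stdbool.h>
#include <stdio.h>

/* Toy page-table entry with the valid/invalid bit used by demand paging. */
struct pte {
    bool     valid;   /* true only if the page is in physical memory */
    unsigned frame;   /* frame number, meaningful only when valid    */
};

static unsigned access_page(struct pte table[], unsigned page) {
    if (!table[page].valid) {
        printf("page fault on page %u: reading it from secondary memory\n", page);
        table[page].frame = page + 100;   /* pretend a free frame was found */
        table[page].valid = true;         /* the page is now resident        */
    }
    return table[page].frame;
}

int main(void) {
    struct pte table[4] = { {false, 0}, {false, 0}, {false, 0}, {false, 0} };
    access_page(table, 2);   /* first touch: page fault, loaded on demand      */
    access_page(table, 2);   /* second touch: already in memory, no page fault */
    return 0;
}
```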