OS Assignment
a) Define a file system. What are various components of a file system? State and explain
various file allocation methods.
A file system is the part of the operating system responsible for file management. It provides mechanisms to store data and to access file contents, including both data and programs.
File Structure
A file structure should follow a required format that the operating system can understand.
A file has a certain defined structure according to its type.
A text file is a sequence of characters organized into lines.
A source file is a sequence of procedures and functions.
An object file is a sequence of bytes organized into blocks that are understandable by the machine.
When an operating system defines different file structures, it must also contain the code to support these file structures. UNIX and MS-DOS support a minimum number of file structures.
File Type
File type refers to the ability of the operating system to distinguish different types of files, such as text files, source files, and binary files. Many operating systems support many types of files. Operating systems like MS-DOS and UNIX have the following types of files −
Ordinary files
These contain user information, such as text, databases, or executable programs.
Directory files
These contain lists of file names and other information related to those files.
Special files
These represent physical devices such as disks, terminals, and printers.
1. Contiguous Allocation: A single contiguous set of blocks is allocated to a file at the time of
file creation. Thus, this is a pre-allocation strategy, using variable size portions. The file
allocation table needs just a single entry for each file, showing the starting block and the length
of the file. This method is best from the point of view of the individual sequential file. Multiple
blocks can be read in at a time to improve I/O performance for sequential processing. It is also
easy to retrieve a single block. For example, if a file starts at block b, and the ith block of the file
is wanted, its location on secondary storage is simply b+i-1.
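As a small illustration of this address computation, here is a Python sketch; the file-allocation-table entry and block numbers below are made up for the example:

```python
# Contiguous allocation: the allocation table needs only (start, length)
# per file, and the i-th block (1-indexed) of the file is at b + i - 1.
fat_entry = {"start": 14, "length": 5}      # file occupies blocks 14..18

def block_address(start_block: int, i: int) -> int:
    """Disk address of the i-th block (1-indexed) of a contiguous file."""
    return start_block + i - 1
```

So the 5th block of a file starting at block 14 sits at block 18, with no per-block pointers to follow.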
Disadvantages
External fragmentation will occur, making it difficult to find contiguous blocks of space of sufficient length. A compaction algorithm will be necessary to free up additional space on the disk.
Also, with pre-allocation, it is necessary to declare the size of the file at the time of creation.
2. Linked (Chained) Allocation: Allocation is on an individual block basis; each block contains a pointer to the next block in the chain, so the blocks of a file may be scattered across the disk.
Disadvantages:
Internal fragmentation exists in the last disk block of the file.
There is an overhead of maintaining the pointer in every disk block.
If the pointer of any disk block is lost, the file will be truncated.
It supports only the sequential access of files.
3. Indexed Allocation:
It addresses many of the problems of contiguous and chained allocation. In this case, the file
allocation table contains a separate one-level index for each file: The index has one entry for
each block allocated to the file. Allocation may be on the basis of fixed-size blocks or variable-
sized blocks. Allocation by blocks eliminates external fragmentation, whereas allocation by
variable-size blocks improves locality. This allocation technique supports both sequential and
direct access to the file and thus is the most popular form of file allocation.
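A minimal sketch of the per-file one-level index, assuming a made-up in-memory allocation table (the file name and block numbers are hypothetical):

```python
# Indexed allocation: the file allocation table holds, for each file, an
# index with one entry per allocated block. Blocks need not be contiguous.
fat = {
    "report.txt": [9, 16, 1, 10, 25],   # logical block i -> physical block
}

def lookup(filename: str, i: int) -> int:
    """Direct access: physical block holding logical block i (0-indexed)."""
    return fat[filename][i]
```

Direct access is a single index lookup, while sequential access simply walks the index list in order.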
b) What problems could occur if a system allowed a file system to be mounted simultaneously at more than one location?
There are no problems with this beyond those that would occur when more than one program accesses a local file system at the same time, or even when you have a single program accessing the same files from multiple threads.
1. File semantics are not super tight. For a counter-example to file semantics look at
Apache Zookeeper. With Zookeeper, all znodes (similar to files or directories) are
updated completely or not at all and all updates to any znodes are completely ordered
for all updaters and observers. This makes it much easier to write distributed programs
and makes Zookeeper much slower than a high performance file system. MapR FS, in
contrast, only globally orders updates to overlapping byte ranges in a single file, or to
the same row in a table or to the same topic in a stream.
2. In addition, your program may be buffering updates for you. This is really good for
performance and can be really confusing for correctness. All your updates should have
been persisted when a flush returns, but you don’t know that no other updates got in
front of you and you don’t know if anything happened after your flush.
3. Failure modes in distributed systems are complex. If you do a write, do a flush and
the flush doesn’t return (because your program crashes) or returns an error, you really
don’t know if your update occurred.
4. For files, it is allowable for the contents of a large write to occur out of order and
possibly even in different orders for different readers. The only guarantee, really, is
that after a successful flush, all updates will have happened and before the first write
after a flush, no updates will have happened. If you write a megabyte, the last block of
the write could be visible first to one reader, but the first block might be visible first to
another reader. This can be damned confusing if you find out about this after you bake
all kinds of assumptions into your program.
Note that pretty much all modern computers are really distributed systems. They have multiple
hardware threads and many have multiple sockets. Memory caches can lead to contradictory
views of what is in memory so don’t be surprised when the situation is more difficult for
persisted data.
Also, to repeat, this can happen even when you have one program running as a single process on
a single computer. Even if you don’t think you have multiple threads going, you probably do
have multiple threads in your I/O system. Your single-threaded program is a distributed system
that only gives you an illusion of being anything else, especially when it comes to failure modes.
3. Write short notes on:
4.
a) Write a detailed note on device management policies.
Device Management is another important function of the operating system. Device management
is responsible for managing all the hardware devices of the computer system. It may also include
the management of the storage device as well as the management of all the input and output
devices of the computer system. It is the responsibility of the operating system to keep track of
the status of all the devices in the computer system. The status of any computing device, internal or external, may be either free or busy. If a device requested by a process is free at a specific instant of time, the operating system allocates it to the process.
An operating system manages the devices in a computer system with the help of device controllers and device drivers. Each device in the computer system is equipped with a device controller. For example, the various device controllers in a computer system may be the disk controller, printer controller, tape-drive controller, and memory controller. All these device controllers are connected with each other through a system bus. The device controllers are actually hardware components that contain buffer registers to store data temporarily. The transfer of data between a running process and the various devices of the computer system is accomplished only through these device controllers.
Wait
The wait operation decrements the value of its argument S if it is positive. If S is zero or negative, the process executing the wait is blocked until S becomes positive.
Signal
The signal operation increments the value of its argument S.
Types of Semaphores
There are two main types of semaphores i.e. counting semaphores and binary semaphores.
Details about these are given as follows:
Counting Semaphores
These are integer value semaphores and have an unrestricted value domain. These
semaphores are used to coordinate the resource access, where the semaphore count is the
number of available resources. If the resources are added, semaphore count automatically
incremented and if the resources are removed, the count is decremented.
Binary Semaphores
The binary semaphores are like counting semaphores but their value is restricted to 0 and
1. The wait operation only works when the semaphore is 1 and the signal operation
succeeds when semaphore is 0. It is sometimes easier to implement binary semaphores
than counting semaphores.
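As a sketch of counting-semaphore behaviour, Python's built-in threading.Semaphore can coordinate access to a pool of resources; the pool size and worker count below are arbitrary:

```python
import threading

pool = threading.Semaphore(2)       # count = number of available resources
acquired = []

def worker(name):
    pool.acquire()                  # wait(): decrement, block while count is 0
    try:
        acquired.append(name)       # use one of the pooled resources
    finally:
        pool.release()              # signal(): increment the count

threads = [threading.Thread(target=worker, args=(i,)) for i in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

A binary semaphore is the same construct with the count restricted to 1, which is why it behaves like a mutual-exclusion lock.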
Advantages of Semaphores
Some of the advantages of semaphores are as follows:
Semaphores allow only one process into the critical section. They follow the mutual
exclusion principle strictly and are much more efficient than some other methods of
synchronization.
There is no resource wastage because of busy waiting in semaphores as processor time is
not wasted unnecessarily to check if a condition is fulfilled to allow a process to access
the critical section.
Semaphores are implemented in the machine independent code of the microkernel. So
they are machine independent.
Disadvantages of Semaphores
Some of the disadvantages of semaphores are as follows −
Semaphores are complicated so the wait and signal operations must be implemented in
the correct order to prevent deadlocks.
Semaphores are impractical for large scale use as their use leads to loss of modularity. This
happens because the wait and signal operations prevent the creation of a structured layout
for the system.
Semaphores may lead to a priority inversion where low priority processes may access the
critical section first and high priority processes later.
Multiprocessing is the use of two or more central processing units within a single computer
system. These systems have multiple processors working in parallel that share the computer
clock, memory, bus, and peripheral devices.
Types of Multiprocessors
There are mainly two types of multiprocessors i.e. symmetric and asymmetric multiprocessors.
Details about them are as follows:
Symmetric Multiprocessors
In these types of systems, each processor contains a similar copy of the operating system and
they all communicate with each other. All the processors are in a peer to peer relationship i.e. no
master - slave relationship exists between them.
An example of the symmetric multiprocessing system is the Encore version of Unix for the
Multimax Computer.
Asymmetric Multiprocessors
In asymmetric systems, each processor is given a predefined task. There is a master processor
that gives instruction to all the other processors. Asymmetric multiprocessor system contains a
master slave relationship.
Asymmetric multiprocessor was the only type of multiprocessor available before symmetric
multiprocessors were created. Now also, this is the cheaper option.
Advantages of Multiprocessor Systems:
There are multiple advantages to multiprocessor systems. Some of these are:
Enhanced Throughput
Reduced Cost (processors share peripherals and power supplies, which is cheaper in the long run than multiple single-processor systems)
First In First Out (FIFO) –
The oldest page in memory is chosen for replacement. Example: consider the page reference string 1, 3, 0, 3, 5, 6, 3 with 3 page frames.
Initially all slots are empty, so when 1, 3, 0 come they are allocated to the empty slots —> 3 Page Faults.
When 3 comes, it is already in memory so —> 0 Page Faults.
Then 5 comes; it is not available in memory, so it replaces the oldest page slot, i.e. 1 —> 1 Page Fault.
6 comes; it is also not available in memory, so it replaces the oldest page slot, i.e. 3 —> 1 Page Fault.
Finally, when 3 comes it is not available, so it replaces 0 —> 1 Page Fault.
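The FIFO counting described above can be sketched as a short Python function; the reference string and frame count passed to it are only for illustration:

```python
from collections import deque

def fifo_page_faults(refs, frames):
    """Count page faults when the oldest resident page is always evicted."""
    memory = deque()                # leftmost entry is the oldest page
    faults = 0
    for page in refs:
        if page not in memory:      # page fault
            faults += 1
            if len(memory) == frames:
                memory.popleft()    # evict the oldest page
            memory.append(page)
    return faults
```

With the reference string 1, 3, 0, 3, 5, 6, 3 and 3 frames this reports 6 page faults, matching the 3 + 0 + 1 + 1 + 1 count above.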
Least Recently Used –
In this algorithm, the page that has been least recently used is replaced.
Example 3: Consider the page reference string 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2 with 4 page frames. Find the number of page faults.
Initially all slots are empty, so when 7, 0, 1, 2 come they are allocated to the empty slots —> 4 Page Faults.
0 is already there so —> 0 Page Fault. When 3 comes it takes the place of 7 because 7 is least recently used —> 1 Page Fault.
When 4 comes it takes the place of 1 —> 1 Page Fault.
Now for the further page reference string —> 0 Page Faults because the pages are already available in the memory.
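A sketch of LRU counting for the same style of trace (list-based for clarity rather than efficiency):

```python
def lru_page_faults(refs, frames):
    """Count page faults when the least recently used page is evicted."""
    memory = []                     # index 0 holds the least recently used page
    faults = 0
    for page in refs:
        if page in memory:
            memory.remove(page)     # hit: refresh recency
        else:
            faults += 1             # fault
            if len(memory) == frames:
                memory.pop(0)       # evict the least recently used page
        memory.append(page)         # page is now the most recently used
    return faults
```

For the reference string 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2 with 4 frames it reports 6 page faults.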
7.
Program execution
I/O operations
File System manipulation
Communication
Error Detection
Resource Allocation
Protection
Program execution
Operating systems handle many kinds of activities from user programs to system programs like
printer spooler, name servers, file server, etc. Each of these activities is encapsulated as a
process.
A process includes the complete execution context (code to execute, data to manipulate,
registers, OS resources in use). Following are the major activities of an operating system with
respect to program management −
I/O Operation
An I/O subsystem comprises I/O devices and their corresponding driver software. Drivers
hide the peculiarities of specific hardware devices from the users.
An Operating System manages the communication between user and device drivers.
I/O operation means read or write operation with any file or any specific I/O device.
Operating system provides the access to the required I/O device when required.
A file represents a collection of related information. Computers can store files on the disk
(secondary storage), for long-term storage purpose. Examples of storage media include
magnetic tape, magnetic disk and optical disk drives like CD, DVD. Each of these media has its
own properties like speed, capacity, data transfer rate and data access methods.
A file system is normally organized into directories for easy navigation and usage. These directories may contain files and other directories. Following are the major activities of an
operating system with respect to file management −
Communication
In case of distributed systems which are a collection of processors that do not share memory,
peripheral devices, or a clock, the operating system manages communications between all the
processes. Multiple processes communicate with one another through communication lines in
the network.
The OS handles routing and connection strategies, and the problems of contention and security.
Following are the major activities of an operating system with respect to communication −
Error handling
Errors can occur anytime and anywhere. An error may occur in CPU, in I/O devices or in the
memory hardware. Following are the major activities of an operating system with respect to
error handling −
Resource Management
Protection
Considering a computer system having multiple users and concurrent execution of multiple
processes, the various processes must be protected from each other's activities.
Protection refers to a mechanism or a way to control the access of programs, processes, or users
to the resources defined by a computer system. Following are the major activities of an
operating system with respect to protection −
Algorithm:
1. Sort all the processes according to their arrival time.
2. Select the process that has the minimum arrival time and minimum burst time.
3. After a process completes, form a pool of the processes that arrived before the completion of the previous process, and select from that pool the process with the minimum burst time.
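The three steps can be sketched as a small simulation; the process tuples below (name, arrival time, burst time) are invented for illustration:

```python
def sjf_schedule(processes):
    """Non-preemptive shortest-job-first: returns the execution order.

    `processes` is a list of (name, arrival_time, burst_time) tuples.
    """
    # Step 1: sort by arrival time (burst time breaks ties).
    remaining = sorted(processes, key=lambda p: (p[1], p[2]))
    order, time = [], 0
    while remaining:
        # Step 3: pool of processes that have arrived by the current time;
        # if none has arrived yet, jump to the earliest arrival.
        pool = [p for p in remaining if p[1] <= time] or [remaining[0]]
        nxt = min(pool, key=lambda p: p[2])     # minimum burst time
        time = max(time, nxt[1]) + nxt[2]
        order.append(nxt[0])
        remaining.remove(nxt)
    return order

jobs = [("P1", 0, 7), ("P2", 2, 4), ("P3", 4, 1), ("P4", 5, 4)]
```

Here P1 runs first (the only arrival at t = 0); when it finishes at t = 7 the pool is {P2, P3, P4}, and P3's 1-unit burst wins.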
The definition of a multilevel feedback queue scheduler makes it the most general CPU-
scheduling algorithm. It can be configured to match a specific system under design.
Unfortunately, it also requires some means of selecting values for all the parameters to define the
best scheduler. Although a multilevel feedback queue is the most general scheme, it is also
the most complex.
8.a)Explain with an example the concept of shared pages in detail.
It is very common for many computer users to be running the same program at the same time in
a large multiprogramming computer system.
Now, to avoid having two copies of same page in the memory at the same time, just share the
pages.
But a problem arises that not all the pages are shareable.
Generally, read-only pages are shareable, for example, program text; but data pages are not
shareable.
With shared pages, a problem occurs, whenever two or more than two processes (multiple
processes) share some code.
Let's suppose that the process X and process Y, both are running the editor and sharing its pages.
Now if the scheduler decides to remove the process X from the memory, evicting all its pages and filling the empty page frames with another program will cause the process Y to generate a large number of page faults just to bring them back again.
Similarly, whenever the process X terminates, it is essential to be able to discover that the pages are still in use so that their disk space is not freed by accident.
With the 2001 release of Windows XP, Microsoft united its various Windows packages under a
single banner, offering multiple editions for consumers, businesses, multimedia developers, and
others. Windows XP abandoned the long-used Windows 95 kernel for a more powerful code
base and offered a more practical interface and improved application and memory management.
The highly successful XP standard was succeeded in late 2006 by Windows Vista, which
experienced a troubled rollout and met with considerable marketplace resistance, quickly
acquiring a reputation for being a large, slow, and resource-consuming system. Responding to
Vista’s disappointing adoption rate, Microsoft in 2009 released Windows 7, an OS whose
interface was similar to that of Vista but was met with enthusiasm for its noticeable speed
improvement and its modest system requirements.
Windows 8 in 2012 offered a start screen with applications appearing as tiles on a grid and the
ability to synchronize settings so users could log on to another Windows 8 machine and use their
preferred settings. In 2015 Microsoft released Windows 10, which came with Cortana, a digital
personal assistant like Apple’s Siri, and the Web browser Microsoft Edge, which replaced
Internet Explorer. Microsoft also announced that Windows 10 would be the last version of
Windows, meaning that users would receive regular updates to the OS but that no more large-
scale revisions would be done.
9.Explain the security attacks on operating system. Write the steps to protect the system
from various attacks.
Security refers to providing a protection system to computer system resources such as CPU,
memory, disk, software programs and most importantly data/information stored in the computer
system. If a computer program is run by an unauthorized user, then he/she may cause severe
damage to computer or data stored in it. So a computer system must be protected against
unauthorized access, malicious access to system memory, viruses, worms etc.
Program Threats
An operating system's processes and kernel perform their designated tasks as instructed. If a user program makes these processes do malicious tasks, it is known as a program threat. One common example of a program threat is a program installed on a computer that can store and send user credentials via the network to some hacker. Following is a list of some well-known program threats.
Trojan Horse − Such a program traps user login credentials and stores them to send to a malicious user, who can later log in to the computer and access system resources.
Trap Door − If a program that is designed to work as required has a security hole in its code and performs illegal actions without the user's knowledge, it is said to have a trap door.
Logic Bomb − A logic bomb is a program that misbehaves only when certain conditions are met; otherwise it works as a genuine program. This makes it harder to detect.
Virus − A virus, as the name suggests, can replicate itself on a computer system. Viruses are highly dangerous and can modify/delete user files and crash systems. A virus is generally a small piece of code embedded in a program. As the user accesses the program, the virus starts getting embedded in other files/programs and can make the system unusable for the user.
System Threats
System threats refer to the misuse of system services and network connections to put the user in trouble. System threats can be used to launch program threats on a complete network, called a program attack. System threats create an environment in which operating system resources and user files are misused. Following is a list of some well-known system threats.
Worm − A worm is a process that can choke down system performance by using system resources to extreme levels. A worm process generates multiple copies of itself, where each copy uses system resources and prevents all other processes from getting the required resources. Worm processes can even shut down an entire network.
Port Scanning − Port scanning is a mechanism by which a hacker can detect system vulnerabilities in order to attack the system.
Denial of Service − Denial of service attacks normally prevent users from making legitimate use of the system. For example, a user may not be able to use the internet if a denial of service attack targets the browser's content settings.
There are some disadvantages as well to multiprocessor systems. Some of these are:
Increased Expense
Even though multiprocessor systems are cheaper in the long run than using multiple computer
systems, still they are quite expensive. It is much cheaper to buy a simple single processor
system than a multiprocessor system.
Complicated Operating System Required
There are multiple processors in a multiprocessor system that share peripherals, memory, etc. So, it is much more complicated to schedule processes and impart resources to processes than in
single processor systems. Hence, a more complex and complicated operating system is required
in multiprocessor systems.
Large Main Memory Required
All the processors in the multiprocessor system share the memory. So a much larger pool of
memory is required as compared to single processor systems.
b)What are the basic components of linux?
Linux is one of the popular versions of the UNIX operating system. It is open source, as its source code is freely available, and it is free to use. Linux was designed considering UNIX compatibility, and its functionality list is quite similar to that of UNIX. Its basic components are the kernel (the core of the operating system, responsible for all major activities), system libraries (the functions through which application programs access the kernel's features), and system utilities (programs responsible for specialized, individual-level tasks).
If we model deadlock as a table standing on its four legs, then we can model the four legs as the four conditions which, when they occur simultaneously, cause deadlock. However, if we break one of the legs of the table, the table will definitely fall. The same happens with deadlock: if we can violate one of the four necessary conditions and not let them occur together, then we can prevent the deadlock.
Let's see how we can prevent each of the conditions.
1. Mutual Exclusion
Mutual exclusion, from the resource point of view, is the fact that a resource can never be used by more than one process simultaneously, which is fair enough, but it is the main reason behind deadlock. If a resource could be used by more than one process at the same time, the processes would never be left waiting for a resource.
However, if we can prevent resources from behaving in a mutually exclusive manner, then the deadlock can be prevented.
Spooling
For a device like printer, spooling can work. There is a memory associated with the printer which
stores jobs from each of the process into it. Later, Printer collects all the jobs and print each one
of them according to FCFS. By using this mechanism, the process doesn't have to wait for the
printer and it can continue whatever it was doing. Later, it collects the output when it is
produced.
Although spooling can be an effective approach to violating mutual exclusion, it suffers from two kinds of problems.
2. Hold and Wait
The hold and wait condition arises when a process holds a resource while waiting for some other resource to complete its task. Deadlock occurs because there can be more than one process holding one resource and waiting for another in cyclic order.
However, we have to find some mechanism by which a process either doesn't hold any resource or doesn't wait. That means a process must be assigned all the necessary resources before its execution starts, and it must not wait for any resource once execution has begun.
!(Hold and wait) = !hold or !wait (the negation of hold and wait is: either you don't hold or you don't wait)
This can be implemented if a process declares all its resources initially. However, although this sounds practical, it can't be done in a computer system, because a process can't determine its necessary resources initially.
A process is a set of instructions executed by the CPU. Each instruction may demand multiple resources at multiple times, so the need cannot be fixed by the OS.
3. No Preemption
Deadlock arises due to the fact that a resource can't be taken away from a process once it has been allocated. However, if we take the resource away from the process which is causing deadlock, then we can prevent deadlock.
This is not a good approach at all since if we take a resource away which is being used by the
process then all the work which it has done till now can become inconsistent.
Consider a printer is being used by any process. If we take the printer away from that process
and assign it to some other process then all the data which has been printed can become
inconsistent and ineffective and also the fact that the process can't start printing again from
where it has left which causes performance inefficiency.
4. Circular Wait
To violate circular wait, we can assign a priority number to each resource. A process can't request a resource with a lower priority number than one it already holds. This ensures that resources are always requested in increasing order, so no cycle can be formed in the resource allocation graph.
Among all the methods, violating Circular wait is the only approach that can be implemented
practically.
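The resource-ordering idea can be sketched with locks; the priority numbers and the resource set below are made up for illustration:

```python
import threading

# Hypothetical resources, keyed by an assigned priority number.
resources = {1: threading.Lock(), 2: threading.Lock(), 3: threading.Lock()}

def acquire_in_order(needed):
    """Acquire resources strictly in ascending priority order."""
    held = sorted(needed)           # never request a lower number later
    for rid in held:
        resources[rid].acquire()
    return held

def release_all(held):
    for rid in reversed(held):
        resources[rid].release()

held = acquire_in_order({3, 1})     # taken as 1 then 3, so no cycle can form
release_all(held)
```

Because every process climbs the numbering in the same direction, a cycle of processes each waiting on the next is impossible.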
17)
a)Explain the paging scheme for memory management in detail.
Paging is a memory management scheme that eliminates the need for contiguous allocation of
physical memory. This scheme permits the physical address space of a process to be non –
contiguous.
Logical Address or Virtual Address (represented in bits): An address generated by the
CPU
Logical Address Space or Virtual Address Space (represented in words or bytes): The set of all logical addresses generated by a program
Physical Address (represented in bits): An address actually available on memory unit
Physical Address Space (represented in words or bytes): The set of all physical addresses
corresponding to the logical addresses.
Example:
If Logical Address = 31 bits, then Logical Address Space = 2^31 words = 2 G words (1 G = 2^30)
If Logical Address Space = 128 M words = 2^7 * 2^20 words, then Logical Address = log2(2^27) = 27 bits
If Physical Address = 22 bits, then Physical Address Space = 2^22 words = 4 M words (1 M = 2^20)
If Physical Address Space = 16 M words = 2^4 * 2^20 words, then Physical Address = log2(2^24) = 24 bits
The mapping from virtual to physical address is done by the memory management unit (MMU)
which is a hardware device and this mapping is known as paging technique.
The Physical Address Space is conceptually divided into a number of fixed-size blocks,
called frames.
The Logical Address Space is also split into fixed-size blocks, called pages.
Page Size = Frame Size
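A minimal sketch of the page-to-frame translation, with an invented page size and page table:

```python
PAGE_SIZE = 1024                    # page size = frame size (bytes), assumed

page_table = {0: 5, 1: 2, 2: 7}     # page number -> frame number (made up)

def translate(logical_address: int) -> int:
    """Split the logical address into (page, offset), then map page -> frame."""
    page, offset = divmod(logical_address, PAGE_SIZE)
    frame = page_table[page]        # a missing entry would mean a page fault
    return frame * PAGE_SIZE + offset
```

Logical address 2049 is page 2, offset 1; page 2 lives in frame 7, so the physical address is 7 * 1024 + 1 = 7169.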
18)
a)Explain the different views of an operating system in brief.
User View of Operating System:
The Operating System is an interface that hides the details which must be performed and presents a virtual machine to the user, making the system easier to use. The Operating System provides the following services to the user.
Execution of a program
Access to I/O devices
Controlled access to files
Error detection (Hardware failures, and software errors)
Hardware View of Operating System:
The Operating System manages the resources efficiently in order to offer its services to the user programs. The Operating System acts as a resource manager:
Allocation of resources
Controlling the execution of a program
Controlling the operations of I/O devices
Protection of resources
Monitoring the data
System View of Operating System:
The Operating System is a program that functions in the same way as other programs. It is a set of instructions that are executed by the processor. The Operating System acts as a program to perform the following.
Hardware upgrades
New services
Fixes the issues of resources
Controls the user and hardware operations
Deadlock is a situation where a set of processes are blocked because each process is holding a
resource and waiting for another resource acquired by some other process.
There are four different conditions that result in Deadlock. These four conditions are also
known as Coffman conditions and these conditions are not mutually exclusive. Let's look at
them one by one.
Mutual Exclusion: A resource can be held by only one process at a time. In other
words, if a process P1 is using some resource R at a particular instant of time, then
some other process P2 can't hold or use the same resource R at that particular instant of
time. The process P2 can make a request for that resource R but it can't use that
resource simultaneously with process P1.
Hold and Wait: A process can hold a number of resources at a time and at the same
time, it can request for other resources that are being held by some other process. For
example, a process P1 can hold two resources R1 and R2 and at the same time, it can
request some resource R3 that is currently held by process P2.
No preemption: A resource can't be preempted from the process by another process,
forcefully. For example, if a process P1 is using some resource R, then some other
process P2 can't forcefully take that resource. If that were allowed, what would be the need for the various scheduling algorithms? The process P2 can request the resource R and can wait for
that resource to be freed by the process P1.
Circular Wait: Circular wait is a condition when the first process is waiting for the
resource held by the second process, the second process is waiting for the resource held
by the third process, and so on. At last, the last process is waiting for the resource held
by the first process. So, every process is waiting for each other to release the resource
and no one is releasing their own resource. Everyone is waiting here for getting the
resource. This is called a circular wait.
Deadlock will happen if all the above four conditions happen simultaneously.
The secondary-storage structure concerns the computer's storage peripherals, which have become more reliable and dependable than older versions. Storage devices are becoming more powerful with the passage of time, along with the development of newer technologies, and the functions and facilities of these storage devices are also increasing with the evolution of better and faster machines.
Disk Scheduling
Disk scheduling is a scheduling process that mainly focuses on the servicing of the
disk input and output requests in a proper order.
Disk Structure
Disk structures, for example hard disks, compact disks, removable compact disks, and DVDs, are used to strengthen the user's system and to store data and information for a longer period. As various types of disks are in use in large numbers these days, it is very important for users to know about the structure of these various types of disks.
Swap-Space Management
Swap-space management is a method by which a single page of memory can be copied to a preconfigured space on the hard disk. This process is used to free up memory pages that were occupied earlier. For example, Linux breaks the physical RAM into chunks of memory called pages.
Disk Management
Disk management is managing the disk space by creating the different disk drives installed and the partitions associated with those drives. There can be several layers of caching between the main memory of the computer and the disk platters.
20) Explain in detail the various algorithms of disk scheduling with an example.
Disk scheduling is done by operating systems to schedule I/O requests arriving for the disk.
Disk scheduling is also known as I/O scheduling.
Disk scheduling is important because:
Multiple I/O requests may arrive from different processes, and only one I/O request can be served at a time by the disk controller. The other I/O requests must therefore wait in a queue and be scheduled.
Two or more requests may be far apart on the disk, which can result in greater disk arm movement.
Hard drives are among the slowest parts of a computer system and therefore need to be accessed in an efficient manner.
Seek Time: Seek time is the time taken to move the disk arm to the track where the data is to be read or written. A disk scheduling algorithm that gives a lower average seek time is better.
Rotational Latency: Rotational latency is the time taken for the desired sector of the disk to rotate into position under the read/write head. A disk scheduling algorithm that gives lower rotational latency is better.
Transfer Time: Transfer time is the time to transfer the data. It depends on the rotational speed of the disk and the number of bytes to be transferred.
Disk Response Time: Response time is the time a request spends waiting to perform its I/O operation. Average response time is the mean of the response times of all requests, and variance of response time measures how individual requests are serviced relative to that average. A disk scheduling algorithm that gives a lower variance of response time is better.
1. FCFS: In FCFS (First Come First Serve), requests are addressed in the order they arrive in the disk queue.
Advantages:
Every request gets a fair chance
No indefinite postponement
Disadvantages:
Does not try to optimize seek time
May not provide the best possible service
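The FCFS behaviour described above can be sketched in a few lines. This is an illustrative example only: the request queue and starting head position below are made-up values, not taken from the text.

```python
def fcfs(requests, head):
    """Service disk requests strictly in arrival order; return total head movement in tracks."""
    total = 0
    for track in requests:
        total += abs(track - head)  # move the arm to the next request in the queue
        head = track
    return total

# Illustrative queue of track numbers, head initially at track 50.
print(fcfs([82, 170, 43, 140, 24, 16, 190], 50))  # → 642
```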
2. SSTF: In SSTF (Shortest Seek Time First), requests having shortest seek time are
executed first. So, the seek time of every request is calculated in advance in the queue and
then they are scheduled according to their calculated seek time. As a result, the request near
the disk arm will get executed first. SSTF is certainly an improvement over FCFS as it
decreases the average response time and increases the throughput of the system.
Advantages:
Average Response Time decreases
Throughput increases
Disadvantages:
Overhead to calculate seek time in advance
Can cause Starvation for a request if it has higher seek time as compared to incoming
requests
High variance of response time as SSTF favours only some requests
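As a sketch of SSTF, the function below repeatedly picks the pending request closest to the current head position (the queue and head position are the same illustrative values as before):

```python
def sstf(requests, head):
    """Repeatedly service the pending request nearest the head; return total head movement."""
    pending, total = list(requests), 0
    while pending:
        nearest = min(pending, key=lambda t: abs(t - head))  # shortest seek first
        total += abs(nearest - head)
        head = nearest
        pending.remove(nearest)
    return total

# Same illustrative queue, head initially at track 50.
print(sstf([82, 170, 43, 140, 24, 16, 190], 50))  # → 208
```

Note the lower total movement (208 vs. 642 for FCFS on the same queue), at the cost of computing seek distances for every pending request.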
3. SCAN: In the SCAN algorithm, the disk arm moves in a particular direction, servicing
the requests in its path; after reaching the end of the disk, it reverses direction
and services the requests along the return path. Because this works like an elevator,
it is also known as the elevator algorithm. As a result, requests in the mid-range of
the disk are serviced more frequently, while those arriving just behind the disk arm have to wait.
Advantages:
High throughput
Low variance of response time
Low average response time
Disadvantages:
Long waiting time for requests for locations just visited by disk arm
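A minimal sketch of SCAN's seek distance, assuming the arm moves toward higher tracks first and always travels to the last track before reversing (disk size and request values are illustrative):

```python
def scan(requests, head, disk_size=200):
    """SCAN (elevator) moving toward higher tracks first; return total head movement.

    The arm sweeps up to the last track of the disk, then reverses down to the
    lowest pending request, if any.
    """
    lower = [t for t in requests if t < head]
    total = disk_size - 1 - head                 # sweep up to the end of the disk
    if lower:
        total += (disk_size - 1) - min(lower)    # reverse down to the lowest request
    return total

# Illustrative queue, head at track 50, on a 200-track disk (tracks 0-199).
print(scan([82, 170, 43, 140, 24, 16, 190], 50))  # → 332
```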
4. CSCAN: In the SCAN algorithm, the disk arm re-scans the path it has just scanned
after reversing its direction. It may therefore happen that many requests are waiting at the
other end while zero or few requests are pending in the recently scanned area.
These situations are avoided in CSCAN algorithm in which the disk arm instead of reversing its
direction goes to the other end of the disk and starts servicing the requests from there. So, the
disk arm moves in a circular fashion and this algorithm is also similar to SCAN algorithm and
hence it is known as C-SCAN (Circular SCAN).
Advantages:
Provides more uniform wait time compared to SCAN
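The circular sweep can be sketched as below. This version counts the return jump from the last track back to track 0 in the total movement; some textbooks omit that jump, so treat the counting convention as an assumption of this example.

```python
def cscan(requests, head, disk_size=200):
    """C-SCAN moving toward higher tracks; the jump back to track 0 is counted.

    The arm sweeps up to the last track, jumps back to track 0, then continues
    up to the highest pending request that was below the starting position.
    """
    lower = [t for t in requests if t < head]
    total = disk_size - 1 - head        # sweep up to the end of the disk
    if lower:
        total += disk_size - 1          # jump from the last track back to track 0
        total += max(lower)             # continue up to the last remaining request
    return total

# Same illustrative queue, head at track 50, 200-track disk.
print(cscan([82, 170, 43, 140, 24, 16, 190], 50))  # → 391
```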
5. LOOK: It is similar to the SCAN disk scheduling algorithm except that, instead of
going all the way to the end of the disk, the disk arm goes only as far as the last request
to be serviced in front of the head and then reverses its direction from there. This
avoids the extra delay caused by unnecessary traversal to the end of the disk.
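LOOK can be sketched the same way as SCAN, but stopping at the farthest pending request instead of the disk edge (again using the same illustrative queue and head position):

```python
def look(requests, head):
    """LOOK moving toward higher tracks first; return total head movement.

    Unlike SCAN, the arm travels only as far as the highest pending request,
    then reverses down to the lowest one.
    """
    upper = [t for t in requests if t >= head]
    lower = [t for t in requests if t < head]
    total = 0
    if upper:
        total += max(upper) - head      # go up only as far as the farthest request
        head = max(upper)
    if lower:
        total += head - min(lower)      # reverse down to the lowest request
    return total

# Same illustrative queue, head at track 50.
print(look([82, 170, 43, 140, 24, 16, 190], 50))  # → 314
```

Compared with SCAN's 332 tracks on the same queue, LOOK saves the trip from track 190 to track 199 and back.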
21) a) Explain in detail the layered architecture of an OS.
There are six layers in the layered operating system, from the hardware at the bottom to user programs at the top.
Details about the six layers are:
Hardware
This layer interacts with the system hardware and coordinates with all the peripheral devices used, such as printers, mice, keyboards, and scanners. The hardware layer is the lowest layer in the layered operating system architecture.
CPU Scheduling
This layer deals with scheduling the processes for the CPU. There are many scheduling queues
that are used to handle processes. When the processes enter the system, they are put into the job
queue. The processes that are ready to execute in the main memory are kept in the ready queue.
Memory Management
Memory management deals with memory and the moving of processes from disk to primary
memory for execution and back again. This is handled by the third layer of the operating system.
Process Management
This layer is responsible for managing processes, i.e., assigning the processor to a process at a
time. This is known as process scheduling. The different algorithms used for process scheduling
are FCFS (first come first served), SJF (shortest job first), priority scheduling, round-robin
scheduling, etc.
I/O Buffer
I/O devices are very important in computer systems. They provide users with the means of
interacting with the system. This layer handles the buffers for the I/O devices and makes sure
that they work correctly.
User Programs
This is the highest layer in the layered operating system. This layer deals with the many user
programs and applications that run in an operating system such as word processors, games,
browsers etc.
There are several different criteria to consider when judging the "best" scheduling algorithm:
CPU Utilization
To make the best use of the CPU and not waste any CPU cycles, the CPU should be kept busy most
of the time (ideally 100% of the time). In a real system, CPU usage should range from
40% (lightly loaded) to 90% (heavily loaded).
Throughput
It is the total number of processes completed per unit time, i.e., the total amount of work
done in a unit of time. This may range from 10 per second to 1 per hour, depending on the specific
processes.
Turnaround Time
It is the amount of time taken to execute a particular process, i.e., the interval from the time of
submission of the process to the time of its completion (wall-clock time).
Waiting Time
It is the sum of the periods a process spends waiting in the ready queue before it gets control of the CPU.
Load Average
It is the average number of processes residing in the ready queue waiting for their turn to get into
the CPU.
Response Time
It is the amount of time from when a request is submitted until the first response is produced.
Remember, it is the time until the first response, not the completion of process execution (final
response).
In general, CPU utilization and throughput are maximized while the other factors are minimized
for proper optimization.
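As an illustration of turnaround and waiting time, the sketch below computes both for three hypothetical processes scheduled FCFS. The process names, arrival times, and burst times are made up for the example.

```python
def fcfs_metrics(processes):
    """processes: list of (name, arrival, burst) tuples, scheduled FCFS.

    Returns {name: (turnaround, waiting)}, where turnaround is completion
    time minus submission time, and waiting is turnaround minus burst time.
    """
    clock, metrics = 0, {}
    for name, arrival, burst in sorted(processes, key=lambda p: p[1]):
        start = max(clock, arrival)   # the CPU may still be busy with an earlier process
        finish = start + burst
        turnaround = finish - arrival
        metrics[name] = (turnaround, turnaround - burst)
        clock = finish
    return metrics

print(fcfs_metrics([("P1", 0, 5), ("P2", 1, 3), ("P3", 2, 8)]))
# → {'P1': (5, 0), 'P2': (7, 4), 'P3': (14, 6)}
```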