OS-UNIT-I

(Syllabus: Operating System - Introduction, Structures - Simple Batch, Multiprogrammed, Time-shared,


Personal Computer, Parallel, Distributed Systems, Real-Time Systems, System components, Operating
System services, System Calls)

Introduction to Operating Systems


An operating system is a program that manages the computer hardware. It also provides a basis for
application programs and acts as an intermediary between a user of a computer and the computer hardware.
A computer system can be divided roughly into four components: the hardware, the operating system, the
application programs, and the users. The hardware, consisting of the CPU, memory, and I/O devices, provides
the basic computing resources for the system. The application programs define the ways in which these
resources are used to solve the users' computing problems.
Fig: Abstract view of the components of a computer system

The operating system controls and co-ordinates the use of hardware among the various application programs
for the various users.
Different views of Operating System
 User view of an Operating System: The user’s view of the computer varies according to the
interface being used. While designing a PC for one user, the goal is to maximize the work that the
user is performing. Here the OS is designed mostly for ease of use. In another case, the user sits at a
terminal connected to a mainframe or minicomputer. Other users can access the same computer
through other terminals. OS here is designed to maximize resource utilization to assure that all
available CPU time, memory and I/O are used efficiently. In other cases, users sit at workstations
connected to networks of other workstations and servers. These users have dedicated resources but
they also share resources such as networking and servers. Here OS is designed to compromise
between individual usability and resource utilization.
 System view of an Operating System: From the computer’s point of view, the OS can be viewed as a
resource allocator, where the resources are CPU time, memory space, file storage space, I/O devices,
etc. The OS must decide how to allocate these resources to specific programs and users so that it can
operate the computer system efficiently. OS is also a control program. A control program manages
the execution of user programs to prevent errors and improper use of computer. It is concerned with
the operation and control of I/O devices.
Another role of the OS is that of resource manager. Resource management includes
multiplexing resources in two ways: in time and in space. When a resource is time multiplexed,
different programs or different users get their turn to use that resource (for example, a printer). When a
resource is space multiplexed, instead of taking turns, the resource is shared among them, i.e. each one
gets a part of the resource (for example, main memory or the hard disk).
OBJECTIVES OF AN OPERATING SYSTEM
 Convenience - An operating system makes a computer more convenient to use.
 Efficiency - An operating system allows the computer system resources to be used in an efficient
manner.

 Ability to Evolve - An operating system should permit effective development, testing, and introduction of
new system features and functions without interfering with service.
DIFFERENT OPERATING SYSTEMS: Over the years, several different operating systems have been
developed for different purposes. The most typical operating systems in ordinary computers are Windows,
Linux and Mac OS.
WINDOWS: The name of the Windows OS comes from the fact that programs are run in “windows”: each
program has its own window, and you can have several programs open at the same time. Windows is the
most popular OS for home computers, and there are several versions of it. The newest version is Windows
10.

LINUX AND UNIX: Linux is an open-source OS, which means that its program code is freely available to
software developers. This is why thousands of programmers around the world have developed Linux, and it
is considered the most tested OS in the world. Linux has been very much influenced by the commercial Unix
OS.

MAC OS X: Apple’s Mac computers have their own operating system, OS X. Apple’s lighter portable
devices (iPads, iPhones) use a light version of the same operating system, called iOS. Mac computers are
popular because OS X is considered fast, easy to learn and very stable and Apple’s devices are considered
well-designed—though rather expensive.

ANDROID: Android is an operating system designed for phones and other mobile devices. Android is not
available for desktop computers, but on mobile devices it is extremely popular: more than half of all mobile
devices in the world run on Android.

Computer system:
A common computer system consists of a CPU and a number of device controllers that are connected
through a common bus that provides access to shared memory. Each device controller is in charge of a
specific type of device as shown. The CPU and the device controllers can execute concurrently, competing
for memory cycles. To ensure orderly access to the shared memory, a memory controller is provided whose
function is to synchronize access to the memory.

A bootstrap program is stored in read-only memory (ROM) within the computer hardware and it initializes
all aspects of the system, from CPU registers to device controllers to memory contents. In order to load the
operating system, the bootstrap program must locate and load into memory the operating-system kernel. The
OS then starts executing the first process, such as “init”, and waits for some event to occur. The occurrence
of an event is usually signaled by an interrupt from either the hardware or the software.
o Hardware may trigger an interrupt at any time by sending a signal to the CPU, usually by way of the
system bus.
o Software may trigger an interrupt by executing a special operation called a system call (also called a
monitor call).
o Events are almost always signaled by the occurrence of an interrupt or a trap. A trap (or an
exception) is a software-generated interrupt caused either by an error (for example, division by zero
or invalid memory access) or by a specific request from a user program that an operating-system
service be performed.
o For each type of interrupt, separate segments of code in the operating system determine what action
should be taken. An interrupt service routine is provided that is responsible for dealing with the
interrupt.
o Interrupts are an important part of computer architecture. Each computer design has its own interrupt
mechanism, but several functions are common.
o The interrupt must transfer control to the appropriate interrupt service routine. The interrupt vector
contains the addresses of the service routines. Incoming interrupts are disabled while another interrupt is
being processed, to prevent a lost interrupt.

I/O Structure
A device controller maintains some local buffer storage and a set of special-purpose registers. The device
controller is responsible for moving the data between the peripheral devices that it controls and its local
buffer storage. The size of the local buffer within a device controller varies from one controller to another.

I/O Interrupts
Once the I/O is started, two kinds of I/O methods exist. In synchronous I/O, control returns to the user
program only upon I/O completion. A special wait instruction idles the CPU until the next interrupt, or some
machines may use a wait loop (Loop: jmp Loop). No simultaneous I/O processing is possible, so there is at most one
outstanding I/O request at a time. Another possibility, called asynchronous I/O, returns control to the user
program without waiting for the I/O to complete. The I/O then can continue while other system operations
occur. A system call is needed to allow the user program to wait for I/O completion.
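
The contrast can be sketched with ordinary POSIX calls. In the sketch below, read() is the synchronous case (the caller blocks until the data arrive), while POSIX asynchronous I/O (aio_read) starts the transfer and returns at once; the file name "data.txt" and the buffer size are illustrative assumptions, not part of the original notes.

/* A minimal sketch contrasting synchronous and asynchronous I/O using
 * POSIX calls. Older glibc versions may need linking with -lrt. */
#include <aio.h>
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void) {
    char buf[512];
    int fd = open("data.txt", O_RDONLY);        /* illustrative input file */
    if (fd < 0) { perror("open"); return 1; }

    /* Synchronous I/O: read() blocks; control returns only on completion. */
    ssize_t n = read(fd, buf, sizeof buf);
    printf("synchronous read returned %zd bytes\n", n);

    /* Asynchronous I/O: aio_read() queues the request and returns at once;
     * the program could do other work while the transfer proceeds. */
    struct aiocb cb;
    memset(&cb, 0, sizeof cb);
    cb.aio_fildes = fd;
    cb.aio_buf    = buf;
    cb.aio_nbytes = sizeof buf;
    cb.aio_offset = 0;
    if (aio_read(&cb) == 0) {
        while (aio_error(&cb) == EINPROGRESS) {
            /* other useful work could be done here */
        }
        printf("asynchronous read returned %zd bytes\n", aio_return(&cb));
    }
    close(fd);
    return 0;
}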

DMA Structure

Direct Memory Access is used for high speed I/O devices that are able to transmit information at close to memory
speeds. Device controller transfers blocks of data from buffer storage directly to main memory without CPU
intervention. Only one interrupt is generated per block, rather than one per byte (or word).

Storage Structure
Computer programs must be in main memory (also called random-access memory or RAM) to be executed.
Main memory is implemented in a semiconductor technology called dynamic random-access memory
(DRAM), which forms an array of memory words. A typical instruction-execution cycle, as executed on a
system with a von Neumann architecture, will first fetch an instruction from memory and will store that
instruction in the instruction register. The instruction is then decoded and may cause operands to be fetched
from memory and stored in some internal register. After the instruction on the operands has been executed,
the result may be stored back in memory. Since main memory is too small to hold all programs and data
permanently, and since it is volatile, most computer systems provide secondary storage as an extension. The main idea is to store large quantities of data
permanently. The most common secondary-storage device is a magnetic disk, which provides storage of both
programs and data. Most programs (web browsers, compilers, word processors, spreadsheets, and so on) are
stored on a disk until they are loaded into memory. Many programs then use the disk as both a source and a
destination of the information for their processing. Hence, the proper management of disk storage is of
central importance to a computer system. Main memory and the registers built into the processor itself are the
only storage that the CPU can access directly.

Storage Hierarchy:
The wide variety of storage systems in a computer system can be organized in a hierarchy according to speed
and cost. The higher levels are expensive, but they are fast. As we move down the hierarchy, the cost per bit
generally decreases, whereas the access time and storage capacity increase. The top three levels of memory in
this hierarchy may be constructed using semiconductor memory. In the hierarchy, the storage
systems above the electronic disk are volatile, whereas those below are nonvolatile. An electronic disk can be
designed to be either volatile or nonvolatile. Volatile storage loses its contents when the power to the device
is removed.

Computer-System Architecture
A computer system may be organized in a number of different ways, which we can categorize roughly
according to the number of general-purpose processors used.

Single-Processor Systems: On a single-processor system, there is one main CPU capable of executing a
general-purpose instruction set, including instructions from user processes.
 Almost all systems have other special-purpose processors as well. They may come in the form of
device-specific processors, such as disk, keyboard, and graphics controllers; or, on mainframes, they

may come in the form of more general-purpose processors, such as I/O processors that move data
rapidly among the components of the system.
 All of these special-purpose processors run a limited instruction set and do not run user processes.
Sometimes they are managed by the operating system, in that the operating system sends them
information about their next task and monitors their status.
Multiprocessor Systems: Multiprocessor systems (also known as parallel systems or tightly coupled
systems) are growing in importance. Such systems have two or more processors in close communication,
sharing the computer bus and sometimes the clock, memory, and peripheral devices.
Multiprocessor system's advantages:
 Increased throughput. By increasing the number of processors, we expect to get more work done in
less time. When multiple processors cooperate on a task, a certain amount of overhead is incurred in
keeping all the parts working correctly. This overhead, plus contention for shared resources, lowers
the expected gain from additional processors. The speed-up ratio with N processors is not N,
however; rather, it is less than N.
 Economy of scale. Multiprocessor systems can cost less than equivalent multiple single-processor
systems, because they can share peripherals, mass storage, and power supplies. If several programs
operate on the same set of data, it is cheaper to store those data on one disk and to have all the
processors share them than to have many computers with local disks and many copies of the data.
 Increased reliability. If functions can be distributed properly among several processors, then the
failure of one processor will not halt the system, only slow it down. If we have ten processors and one
fails, then each of the remaining nine processors can pick up a share of the work of the failed
processor. Thus, the entire system runs only 10 percent slower, rather than failing altogether. The
ability to continue providing service proportional to the level of surviving hardware is called graceful
degradation. Some systems go beyond graceful degradation and are called fault tolerant, because they
can suffer a failure of any single component and still continue operation.

Classification of Multi-Processor System


The multiple-processor systems are of two types.
1. Asymmetric multiprocessing, in which each processor is assigned a specific task. A master
processor controls the system; the other processors either look to the master for instruction or have
predefined tasks. This scheme defines a master-slave relationship. The master processor schedules
and allocates work to the slave processors.

2. Symmetric multiprocessing (SMP), in which each processor performs all tasks within the operating
system. SMP means that all processors are peers; no master-slave relationship exists between
processors.

Clustered Systems: Another type of multiple-CPU system is the clustered system. Like multiprocessor
systems, clustered systems gather together multiple CPUs to accomplish computational work. Clustered
systems differ from multiprocessor systems, however, in that they are composed of two or more
individual systems coupled together. The clustered computers share storage and are closely linked via a
local-area network (LAN).

Clustering is usually used to provide high-availability service; that is, service will continue even if one
or more systems in the cluster fail.

Clustering Classification
 Clustering can be structured asymmetrically or symmetrically.
 In asymmetric clustering, one machine is in hot-standby mode while the other is running the
applications. The hot-standby host machine does nothing but monitor the active server. If that server
fails, the hot-standby host becomes the active server.
 In symmetric mode, two or more hosts are running applications, and are monitoring each other. This
mode is obviously more efficient, as it uses all of the available hardware.

Operating-System Structure

An operating system provides the environment within which programs are executed. Internally, operating
systems vary greatly in their makeup, since they are organized along many different lines. There are,
however, many commonalities among them; multiprogrammed and time-sharing systems are the most common.

Multiprogrammed systems
One of the most important aspects of an OS is its ability to multiprogram. Multiprogramming increases CPU
utilization by organizing jobs (code and data) so that the CPU always has one to execute. The OS keeps several
jobs in memory.
 This set of jobs can be a subset of jobs kept in the job pool which contains all jobs that enter the
system. OS picks and begins to execute one of the jobs in memory.
 The job may have to wait for some task, such as an I/O operation, to complete. In a non-multiprogrammed
system, the CPU would sit idle. Here, however, the OS simply switches to and executes
another job. When that job needs to wait, the CPU is switched to another job, and so on.
 As long as at least one job needs to execute, the CPU is never idle. Multiprogramming is the first instance
where the operating system must make decisions for the users. Multiprogrammed operating systems
are therefore fairly sophisticated. All the jobs that enter the system are kept in the job pool.

 This Job pool consists of all processes residing on disk awaiting allocation of main memory. If
several jobs are ready to be brought into memory, and if there is not enough room for all of them,
then the system must choose among them. Making this decision is job scheduling.
 If several jobs are ready to run at the same time, the system must choose among them. Making this
decision is CPU scheduling.
 Multiprogrammed systems provide an environment in which the various system resources are
utilized effectively but they do not provide for user interaction with the computer system.

Time- Sharing systems

 Time-sharing or multi-tasking is a logical extension of multi programming. In time sharing systems,
CPU executes multiple jobs by switching among them but the switches occur so frequently that the
users can interact with each program while it is running.

 Time-sharing requires an interactive computer system which provides direct communication between
the user and the system.
 A time-shared operating system allows many users to share the computer simultaneously. It uses CPU
scheduling and multiprogramming to provide each user with a small portion of a time-shared
computer.
 A program loaded into memory and executing is called a process. Time sharing and multiprogramming
require several jobs to be kept simultaneously in memory. Since main memory is too
small to accommodate all jobs, the jobs are kept initially on the disk in the job pool.
 Time-sharing operating systems are even more complex than multi-programmed operating systems.
In Time sharing systems, to obtain a reasonable response time, jobs may have to be swapped in and
out of main memory to the disk that now serves as a backing store for main memory.
 A common method for achieving this goal is virtual memory, which is a technique that allows the
execution of a job that may not be completely in memory. The main advantage of the virtual-memory
scheme is that programs can be larger than physical memory.
Operating system Operations
Such a large and complex operating system can be created by partitioning it into smaller pieces, and each
piece should be a well-delineated portion of the system, with carefully defined inputs, outputs, and functions.
The most common system components are process management, main memory management, I/O system
management, file management, secondary storage management, networking,
protection, and the command-interpreter system.

Dual-Mode Operation

Since the operating system and the users share the hardware and software resources of the computer system,
we need to make sure that an error in a user program cannot cause problems for operating-system code
or for other programs. A properly designed operating system must ensure that an incorrect (or malicious)
program cannot cause other programs to execute incorrectly. In order to ensure the proper execution of the
operating system, we must be able to distinguish between the execution of operating-system code and user
defined code. The approach taken by most computer systems is to provide hardware support that allows us to
differentiate among various modes of execution.

A program can execute in one of two modes, called user mode and kernel mode (also known as system mode). A bit,
called the mode bit is added to the hardware of the computer to indicate the current mode: kernel (0) or user
(1). With the mode bit, we are able to distinguish between a task that is executed on behalf of the operating
system and one that is executed on behalf of the user. When the computer system is executing on behalf of a
user application, the system is in user mode. However, when a user application requests a service from the
operating system (via a system call), it must transition from user to kernel mode to fulfill the request.

The dual mode of operation provides us with the means for protecting the operating system from errant users.
The hardware allows privileged instructions to be executed only in kernel mode. If an attempt is made to
execute a privileged instruction in user mode, the hardware does not execute the instruction but rather treats
it as illegal and traps it to the operating system.

Timer:
We must ensure that the operating system maintains control over the CPU. We must prevent a user program
from getting stuck in an infinite loop or not calling system services and never returning control to the
operating system.
 To accomplish this goal, we can use a timer. A timer can be set to interrupt the computer after a
specified period.
 The period may be fixed (for example, 1/60 second) or variable (for example, from 1 millisecond to 1
second). A variable timer is generally implemented by a fixed-rate clock and a counter.
 The operating system sets the counter. Every time the clock ticks, the counter is decremented. When
the counter reaches 0, an interrupt occurs.

 Before turning over control to the user, the operating system ensures that the timer is set to interrupt.
If the timer interrupts, control transfers automatically to the operating system, which may treat the
interrupt as a fatal error or may give the program more time.
Clearly, instructions that modify the content of the timer are privileged. Thus, we can use the timer to prevent
a user program from running too long. A simple technique is to initialize a counter with the amount of time
that a program is allowed to run. A program with a 7-minute time limit, for example, would have its counter
initialized to 420. Every second, the timer interrupts and the counter is decremented by 1. As long as the
counter is positive, control is returned to the user program. When the counter becomes negative, the
operating system terminates the program for exceeding the assigned time limit.
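
The same idea can be observed at user level with the POSIX alarm() call, which asks the kernel's timer machinery to deliver SIGALRM after a given number of seconds. The sketch below is only an analogy for the kernel's own timer-interrupt mechanism; the 5-second limit and the message text are illustrative assumptions.

/* Sketch: a user-level analogue of the timer mechanism. alarm() requests
 * SIGALRM after 5 seconds; the handler then terminates the runaway program. */
#include <signal.h>
#include <unistd.h>

static void on_alarm(int sig) {
    (void)sig;
    static const char msg[] = "time limit exceeded, terminating\n";
    write(STDOUT_FILENO, msg, sizeof msg - 1);   /* async-signal-safe output */
    _exit(1);                                    /* terminate the program    */
}

int main(void) {
    signal(SIGALRM, on_alarm);        /* install the timer handler        */
    alarm(5);                         /* request SIGALRM after 5 seconds  */

    volatile unsigned long spins = 0;
    for (;;)                          /* simulate a program stuck in a loop */
        spins++;
    return 0;
}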

Process Management
A process is a program in execution. A process needs certain resources, including CPU time, memory, files,
and I/O devices, to accomplish its task. These resources are either given to the process when it is created, or
allocated to it when it is running. Here program is a passive entity, such as the contents of a file stored on
disk, whereas process is an active entity, with a program counter specifying the next instruction to execute.
The execution of a process must be sequential. The operating system is responsible for the following
activities in connection with process management:
 Creating and deleting both user and system processes.
 Suspending and resuming processes.
 Providing mechanisms for process synchronization.
 Providing mechanisms for process communication.
 Providing mechanisms for deadlock handling.
Main Memory Management
Main memory is a large array of words or bytes each with its own address. Main memory is a repository of
quickly accessible data shared by the CPU and I/O devices. Many different memory management schemes
are available and the effectiveness of the different algorithms depends on the particular situation. The
operating system is responsible for the following activities in connection with memory management:
 Keeping track of which parts of memory are currently being used and by whom.
 Deciding which processes are to be loaded into memory when memory space becomes available.
 Allocating and deallocating memory space as needed.
File Management
File management is one of the most visible components of operating systems. Computers can store
information on several different types of physical media each having its own characteristics and physical
organization. A file is a collection of related information defined by its creator. It represents programs and
data. The OS implements the abstract concept of a file by managing mass storage media, such as disks and
tapes, and the devices that control them.
The operating system is responsible for the following activities in connection with file management:
 Creating and deleting files
 Creating and deleting directories
 Supporting primitives for manipulating files and directories
 Mapping files onto secondary storage
 Backing up files on stable (nonvolatile) storage media.
I/O –System Management
One of the purposes of an operating system is to hide the peculiarities of specific hardware devices from the
user. The I/O subsystem consists of
 A memory management component that includes buffering, caching and spooling.
 A general device – driver interface.
 Drivers for specific hardware devices

Secondary storage management


The main purpose of a computer system is to execute programs. These programs, with the data they access,
must be in main memory, or primary storage, during execution. Because main memory is too small and
volatile, the computer system must provide secondary storage to back up main memory. The operating
system is responsible for the following activities in connection with disk management:
 Free – space management
 Storage allocation
 Disk scheduling.
Caching
Caching is an important principle of computer systems. Information is normally kept in some storage system
(such as main memory). As it is used, it is copied into a faster storage system-the cache-on a temporary basis.
When we need a particular piece of information, we first check whether it is in the cache. If it is, we use the
information directly from the cache; if it is not, we use the information from the source, putting a copy in the
cache under the assumption that we will need it again soon.

For example, most CPUs have an instruction cache to hold the instructions expected to be executed next;
without this cache, the CPU would have to wait several cycles while an instruction was fetched from main
memory. For similar reasons, most systems have one or more high-speed data caches in the memory
hierarchy. Because caches have limited size, cache management is an important design problem. Careful
selection of the cache size and of a replacement policy can result in greatly increased performance. Main
memory can be viewed as a fast cache for secondary storage, since data in secondary storage must be copied
into main memory for use, and data must be in main memory before being moved to secondary storage for
safekeeping.
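
The check-the-cache-first rule can be expressed in a few lines of C. The sketch below only illustrates the principle: slow_lookup() stands in for the slower storage level, and the tiny direct-mapped cache is an assumption of the example, not a description of real hardware.

/* Sketch of the caching principle: check the fast copy first, fall back to
 * the slower source on a miss, and keep a copy for next time. */
#include <stdbool.h>
#include <stdio.h>

#define CACHE_SLOTS 16

struct slot { bool valid; int key; int value; };
static struct slot cache[CACHE_SLOTS];

static int slow_lookup(int key) {         /* stands in for the slower level */
    return key * key;                     /* pretend this is expensive      */
}

static int cached_lookup(int key) {
    struct slot *s = &cache[key % CACHE_SLOTS];
    if (s->valid && s->key == key)        /* hit: use the cached copy       */
        return s->value;
    int v = slow_lookup(key);             /* miss: go to the source         */
    s->valid = true; s->key = key; s->value = v;   /* remember it           */
    return v;
}

int main(void) {
    printf("%d\n", cached_lookup(7));     /* miss: fetched from the source  */
    printf("%d\n", cached_lookup(7));     /* hit: served from the cache     */
    return 0;
}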

Networking
A distributed system is a collection of processors that do not share memory, peripheral devices, or a clock.
Each processor has its own local memory and clock, and the processors communicate with one another
through various communication lines, such as high – speed buses or networks. The processors in the system
are connected through a communication network, which can be configured in a number of different ways. A
distributed system collects physically separate, possibly heterogeneous systems into a single coherent system,
providing the user with access to the various resources that the system maintains. Access to a shared resource
allows computation speedup, increased functionality, increased data availability, and enhanced reliability.
Operating systems usually generalize network access as a form of file access, with the details of networking
being contained in the network interface's device driver.
Protection System
If a computer system has multiple users and allows the concurrent execution of multiple processes, then the
various processes must be protected from one another’s activities. Protection is any mechanism for
controlling the access of programs, processes, or users to the resources defined by a computer system.
Protection can improve reliability by detecting latent errors at the interfaces between component subsystems.
Early detection of interface errors can often prevent contamination of a healthy subsystem by another
subsystem that is malfunctioning.
Command-Interpreter System
One of the most important systems programs for an operating system is the command interpreter, which is
the interface between the user and the operating system. Some of the operating systems include the command
interpreter in the kernel. When a new job is started in a batch system, or when a user logs on to a time –
shared system, a program that reads and interprets control statements is executed automatically. This
program is sometimes called the control – card interpreter or the command- line interpreter, and is often
known as the shell. Its function is simple: To get the next command statement and execute it. The command
statements themselves deal with process creation and management, I/O handling, secondary-storage
management, main-memory management, file-system access, protection, and networking.

Different Types of Computing Environments:


Traditional Computing
Traditional computing, as the name suggests, is the practice of using physical data centers for storing digital
assets and running a complete networking system for daily operations. In this model, users' access to data,
software, or storage is limited to the device or the official network they are connected to.
Batch systems
The early computers were physically enormous machines run from a console. The common
input devices were card readers and tape drives, and the common output devices were line printers, tape
drives, and card punches. The user did not interact directly with the computer system.

A typical job flow: bring the cards to the 1401; read the cards onto an input tape; put the input tape on the
7094 and perform the computation, writing the results to an output tape; put the output tape on the 1401, which prints the output.
The operating system in these early computers was fairly simple. Its major task was to transfer control
automatically from one job to the next. The operating system was always resident in memory. To speed up
processing, operators batched together jobs with similar needs and ran them through the computer as a group.
Thus, the programmers would leave their programs with the operator. The operator would sort programs into
batches with similar requirements and, as the computer became available, would run each batch. The output
from each job would be sent back to the appropriate programmer. The introduction of disk technology
allowed the operating system to keep all jobs on a disk, rather than in a serial card reader. With direct access to
several jobs, the operating system could perform job scheduling, to use resources and perform tasks efficiently.
Desktop systems
When personal computers (PCs) appeared in the 1970s, the CPUs in these machines at first lacked the features
needed to protect an operating system from user programs, so PC operating systems were neither multiuser nor
multitasking. However, as time passed, the goals of these operating systems shifted towards maximizing
user convenience and responsiveness. The PCs started running Microsoft Windows and Apple Macintosh.
Linux, a UNIX-like operating system available for PCs, has also become popular recently.
File protection was not a priority for these systems at first, but as these computers are now often tied to
other computers over local-area networks or Internet connections, enabling other computers and users
to access the files on a PC, file protection again becomes a necessary feature of the operating system. The lack
of such protection has made it easy for malicious programs to destroy data on systems such as MS-DOS and
the Macintosh operating system. These programs may be self-replicating, and may spread rapidly via worm
or virus mechanisms and disrupt entire companies or even worldwide networks. Security mechanisms
capable of countering these attacks are to be implemented.
Distributed systems
Distributed systems depend on networking for their functionality. By being able to communicate, distributed
systems are able to share computational tasks and provide a rich set of features to users. They are also called
loosely coupled systems: each processor has its own local memory, and processors communicate with one
another through various communication lines, such as high-speed buses or telephone lines. These systems
have many advantages such as resource sharing, computation speedup (load sharing), reliability and
communications. Distributed systems require networking infrastructure such as Local Area Networks (LAN)
or Wide Area Networks (WAN). They may be client-server or peer-to-peer systems.
Client -Server Systems

As PCs have become very powerful, faster and cheaper, the design interest has shifted from a centralized
system to that of a client-server system. Centralized systems today act as server systems to satisfy requests
generated by client systems. The general structure of a client-server system is depicted below:
Server systems can be broadly categorized as compute servers and file servers.
 Compute-server systems provide an interface to which clients can send requests to perform an action,
in response to which they execute the action and send back results to the client.
 File-server systems provide a file-system interface where clients can create, update, read, and delete
files.

Peer-to-Peer Systems
Peer-to-peer network operating systems allow users to share resources and files located on their computers
and to access shared resources found on other computers. In a peer-to-peer network, all computers are
considered equal; they all have the same ability to use the resources available on the network.

Web-Based Computing
In brief, Web-based computing is an environment that consists of ultra-thin clients networked over
the Internet or intranet. The implementation of web-based computing has given rise to new categories of
devices, such as load balancers, which distribute network connections among a pool of similar servers.
Operating systems like Windows 95, which acted as web clients, have evolved into Linux and Windows XP,
which can act as web servers as well as clients. Generally, the Web has increased the complexity of devices,
because their users require them to be web-enabled.

Special-Purpose Systems
Unlike general purpose computers, there are different classes of computer systems whose functions are more
limited and whose objective is to deal with limited computation domains. Such special purpose systems are
 Real-Time Embedded Systems
 Multimedia Systems
 Hand held systems
Real-time systems
A real-time system is used when rigid time requirements have been placed on the operation of a processor or
the flow of data; thus, it is often used as a control device in a dedicated application. A real-time system has
well-defined, fixed time constraints. Processing must be done within the defined constraints, or the system
will fail. Real-Time systems are of two types. A hard real-time system guarantees that critical tasks be
completed on time. Due to the stringent time constraints, hard real-time systems conflict with the operation
of time-sharing systems, and the two cannot be mixed. A less restrictive type of real-time system is a soft
real-time system, where a critical real-time task gets priority over other tasks, and retains that priority until it
completes. Though soft real-time systems are an achievable goal, they have more limited utility than hard real-
time systems and are therefore risky to use in industrial control and robotics. They are useful in several
areas, including multimedia, virtual reality, and advanced scientific projects-such as undersea exploration and
planetary rovers.
Multimedia Systems

Multimedia data consist of audio and video files as well as conventional files. These data differ from
conventional data in that multimedia data-such as frames of video-must be delivered (streamed) according to
certain time restrictions (for example, 30 frames per second). Multimedia describes a wide range of
applications that are in popular use today. These include audio files such as MP3, DVD movies, video
conferencing, and short video clips of movie previews or news stories downloaded over the Internet.
Multimedia applications may also include live webcasts (broadcasting over the World Wide Web) of
speeches or sporting events. Most operating systems are designed to handle multimedia along with
conventional data such as text files, programs, word-processing documents, and spreadsheets.

Hand held systems:


Handheld systems include personal digital assistants (PDAs), such as Palm and Pocket PC devices, and cellular
telephones. The main challenge for this type of system is its limited size, so it holds only a small amount of
storage space. One approach for displaying web content is web clipping, where only a small
subset of a web page is delivered and displayed on the handheld device.

SERVICES PROVIDED BY AN OPERATING SYSTEM


An Operating System provides services to both the users and to the programs. It provides programs, an
environment to execute. It provides users, services to execute the programs in a convenient manner.
Following are few common services provided by operating systems.
 User Interface
 Program execution
 I/O operations
 File System manipulation
 Communication
 Error Detection
 Resource Allocation
 Protection and Security
 Accounting
UI: A user interface refers to the part of an operating system, program, or device that allows a user to enter
and receive information. A text-based user interface (CUI) displays text, and its commands are usually
typed on a command line using a keyboard. With a graphical user interface (GUI), the functions are carried
out by clicking or moving buttons, icons and menus by means of a pointing device.
Program execution: Operating system handles many kinds of activities from user programs to system
programs like printer spooler, name servers, file server etc. Each of these activities is encapsulated as a
process. A process includes the complete execution context. Following are the major activities of an
operating system with respect to program management.
 Loads a program into memory.
 Executes the program.
 Handles program's execution.
 Provides a mechanism for process synchronization.
 Provides a mechanism for process communication.
 Provides a mechanism for deadlock handling.
I/O Operation: The I/O subsystem comprises the I/O devices and their corresponding driver software. Drivers
hide the peculiarities of specific hardware devices from the user, since the device driver knows the peculiarities
of the specific device. The operating system manages the communication between users and device drivers.
Following are the major activities of an operating system with respect to I/O Operation.
 I/O operation means read or write operation with any file or any specific I/O device.
 Program may require any I/O device while running.
 Operating system provides the access to the required I/O device when required.
File system manipulation: A file represents a collection of related information. A computer can store files on
disk (secondary storage) for long-term storage. A few examples of storage media are magnetic
tape, magnetic disk, and optical disks such as CD and DVD. Each of these media has its own properties, such as
speed, capacity, data transfer rate, and data access method. A file system is normally organized into
directories for easy navigation and usage. These directories may contain files and other directories. Following
are the major activities of an operating system with respect to file management.
 Program needs to read a file or write a file.
 The operating system gives the permission to the program for operation on file.
 Permission varies from read-only, read-write, denied and so on.
 Operating System provides an interface to the user to create/delete files.
 Operating System provides an interface to the user to create/delete directories.
 Operating System provides an interface to create the backup of file system.
Communication: In the case of distributed systems, which are collections of processors that do not share
memory, peripheral devices, or a clock, the operating system manages communication between processes.
Multiple processes communicate with one another through communication lines in the network. OS handles
routing and connection strategies, and the problems of contention and security. Following are the major
activities of an operating system with respect to communication.
 Two processes often require data to be transferred between them.
 Both processes can be on the same computer or on different computers connected through a
computer network.

 Communication may be implemented by two methods either by Shared Memory or by Message
Passing.
Error handling: Errors can occur anytime and anywhere. An error may occur in the CPU, in I/O devices, or in the
memory hardware. Following are the major activities of an operating system with respect to error handling.
 OS constantly remains aware of possible errors.
 OS takes the appropriate action to ensure correct and consistent computing.
Resource Management: In a multi-user or multi-tasking environment, resources such as main
memory, CPU cycles, and file storage must be allocated to each user or job. Following are the major
activities of an operating system with respect to resource management.
 OS manages all kind of resources using schedulers.
 CPU scheduling algorithms are used for better utilization of CPU.
Protection: In a computer system that has multiple users and allows the concurrent execution of multiple
processes, the various processes must be protected from one another's activities. Protection refers to a
mechanism or a way to control the access of programs, processes, or users to the resources defined by a
computer system. Security refers to providing protection to the system as a whole. Following are the major activities
of an operating system with respect to protection (a short sketch follows the list below).
 OS ensures that all access to system resources is controlled.
 OS ensures that external I/O devices are protected from invalid access attempts.
 OS provides authentication feature for each user by means of a password.
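
At the file level, this kind of access control can be observed with ordinary POSIX calls. The sketch below uses stat() to read the permission bits the OS enforces and access() to ask whether the calling user may write a file; the path /etc/passwd is only an illustrative example.

/* Sketch: observing file-level access control enforced by the OS. */
#include <stdio.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void) {
    const char *path = "/etc/passwd";      /* illustrative example path     */
    struct stat st;

    if (stat(path, &st) == 0)              /* permission bits kept by the OS */
        printf("%s has permission bits %o\n", path, st.st_mode & 0777);

    if (access(path, W_OK) == 0)           /* may this user write the file?  */
        printf("write access granted\n");
    else
        perror("write access denied");      /* the OS refuses invalid access */
    return 0;
}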

Accounting: This service of the operating system keeps track of which users are using how much and what
kinds of computer resources have been used for accounting or simply to accumulate usage statistics.
System Calls
System calls provide an interface to the services made available by an operating system. These calls are
generally available as routines written in C and C++, although certain low-level tasks may need to be written
using assembly-language instructions.
Most programming languages provide a system-call interface that serves as the link to system calls made
available by the operating system. The system-call interface intercepts function calls in the API and invokes
the necessary system call within the operating system. Typically, a number is associated with each system
call, and the system-call interface maintains a table indexed according to these numbers. The system call
interface then invokes the intended system call in the operating system kernel and returns the status of the
system call and any return values.

Fig: The handling of a user application invoking the open() system call.
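
A minimal user program corresponding to the figure might look like the sketch below: the call to open() goes through the C library wrapper, which traps into the kernel and returns either a file descriptor or an error status. The file name is an illustrative assumption.

/* Sketch: a user program invoking open() through the C library wrapper,
 * as in the figure above. */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void) {
    int fd = open("example.txt", O_RDONLY);  /* traps into the kernel        */
    if (fd < 0) {
        perror("open");                      /* the kernel returned an error */
        return 1;
    }
    printf("open() succeeded, descriptor %d\n", fd);
    close(fd);                               /* another system call          */
    return 0;
}
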
Three general methods are used to pass parameters between a running program and the operating system:
 Pass parameters in registers
 Block or Table of Parameters
 The Stack Approach
The simplest approach is to pass the parameters in registers. In some cases, however, there may be more
parameters than registers. In these cases, the parameters are generally stored in a block, or table, in
memory, and the address of the block is passed as a parameter in a register. This is the approach taken by
Linux and Solaris.

Parameters also can be placed, or pushed, onto the stack by the program and popped off the stack by the
operating system. Some operating systems prefer the block or stack method, because those approaches do
not limit the number or length of parameters being passed.
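
On Linux, the register-passing convention can be made visible with the generic syscall() wrapper, which takes the system-call number followed by the parameters that are loaded into registers. The sketch below is Linux-specific and simply re-expresses write(1, msg, len); it is an illustration of the idea, not a portable technique.

/* Sketch (Linux-specific): passing the call number and parameters through
 * the generic syscall() wrapper; equivalent to write(1, msg, sizeof msg - 1). */
#define _GNU_SOURCE
#include <sys/syscall.h>
#include <unistd.h>

int main(void) {
    const char msg[] = "hello via syscall()\n";
    syscall(SYS_write, 1, msg, sizeof msg - 1);  /* number + 3 parameters in registers */
    return 0;
}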

Types of System Calls
System calls can be grouped roughly into five major categories:
1. Process control
2. File manipulation
3. Device manipulation
4. Information maintenance and
5. Communications.
Process control: These system calls perform tasks such as process creation and process termination; a short sketch follows the list below.
 end, abort
 load, execute
 create process, terminate process
 get process attributes, set process attributes
 wait for time
 wait event, signal event
 allocate and free memory
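
The sketch below strings several of these calls together: fork() creates a process, execlp() loads and executes a program in it, and waitpid() waits for it to terminate. The choice of the ls command is illustrative.

/* Sketch: create a process, load and execute a program in it, and wait
 * for it to terminate. */
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    pid_t pid = fork();                    /* create process                */
    if (pid < 0) { perror("fork"); return 1; }

    if (pid == 0) {                        /* child: load and execute       */
        execlp("ls", "ls", "-l", (char *)NULL);
        perror("execlp");                  /* reached only if exec failed   */
        _exit(127);
    }
    int status;
    waitpid(pid, &status, 0);              /* parent waits for termination  */
    printf("child exited with status %d\n", WEXITSTATUS(status));
    return 0;
}
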
File management: These system calls handle file-manipulation jobs such as creating a file, reading,
and writing; a short sketch follows the list below.
 create file, delete file
 open, close
 read, write, reposition
 get file attributes, set file attributes
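
A short sketch combining these calls: the file is created and opened, written, repositioned with lseek(), read back, and closed. The file name and its contents are illustrative.

/* Sketch: create a file, write to it, reposition, read it back, and close it. */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void) {
    char buf[32];
    int fd = open("notes.txt", O_CREAT | O_RDWR | O_TRUNC, 0644);  /* create file */
    if (fd < 0) { perror("open"); return 1; }

    write(fd, "hello, file\n", 12);          /* write                        */
    lseek(fd, 0, SEEK_SET);                  /* reposition to the beginning  */
    ssize_t n = read(fd, buf, sizeof buf);   /* read                         */
    printf("read back %zd bytes\n", n);
    close(fd);                               /* close                        */
    return 0;
}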

Device management: These system calls perform device manipulation, such as reading from and writing into
device buffers.
 request device, release device
 read, write, reposition
 get device attributes, set device attributes
 logically attach or detach devices

Information maintenance: These system calls handle the transfer of information between the OS and the user program.
 get time or date, set time or date
 get system data, set system data
 get process, file, or device attributes
 set process, file, or device attributes

Communications: These system calls are used for interprocess communication.
 create, delete communication connection
 send, receive messages
 transfer status information
 attach or detach remote devices

There are two common models for Inter Process Communication


1. Message passing model
2. Shared memory model
In the message-passing model, information is exchanged through an interprocess-communication facility
provided by the OS. Before communication can take place, a connection must be opened. The name of the
other communicator must be known, be it another process on the same CPU or a process on another computer
connected by a communications network.

In the shared-memory model, processes use memory-mapping system calls to gain access to regions of memory owned by
other processes. They may then exchange information by reading and writing data in the shared area. Message
passing is useful when smaller amounts of data need to be exchanged, because no conflicts need to be
avoided. It is also easier to implement than shared memory for inter-computer communication. Shared
memory allows maximum speed and convenience of communication, as it can be done at memory speeds
when within a computer.
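
As an illustration of the message-passing model, the sketch below uses a pipe between a parent and a child process; the pipe is the interprocess-communication facility provided by the OS, and the message text is illustrative. (A shared-memory version would instead map a common region, e.g. with mmap.)

/* Sketch of message passing: a parent and child exchange a message through
 * a pipe provided by the OS. */
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    int fds[2];
    if (pipe(fds) < 0) { perror("pipe"); return 1; }

    if (fork() == 0) {                          /* child: the sender        */
        close(fds[0]);
        const char msg[] = "hello from the child";
        write(fds[1], msg, sizeof msg);         /* send (includes the NUL)  */
        close(fds[1]);
        _exit(0);
    }
    close(fds[1]);                              /* parent: the receiver     */
    char buf[64];
    ssize_t n = read(fds[0], buf, sizeof buf);  /* receive the message      */
    printf("received %zd bytes: %s\n", n, buf);
    close(fds[0]);
    wait(NULL);
    return 0;
}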

Operating System Design and Implementation

An operating system is a construct that allows the user application programs to interact with the system
hardware. There are many problems that can occur while designing and implementing an operating system. It
is quite complicated to define all the goals and specifications of the operating system while designing it.
System design is dominated by the choice of hardware and the type of system, so it may change from system to
system.

There are basically two types of goals while designing an operating system. These are −

1. User Goals: The operating system should be convenient, easy to use, reliable, safe and fast according
to the users. However, these specifications are not very useful as there is no set method to achieve
these goals.

2. System Goals: The operating system should be easy to design, implement and maintain. These are
specifications required by those who create, maintain and operate the operating system. But there is
no specific method to achieve these goals either.

Operating System Mechanisms and Policies


There is no specific way to design an operating system as it is a highly creative task. However, there are
general software principles that are applicable to all operating systems.

A subtle difference between mechanism and policy is that a mechanism determines how to do something, while a
policy determines what will be done. For example, the timer is a mechanism for CPU protection, but deciding how
long the timer is set for a particular user is a policy decision. Policies may change over time, and this could lead
to changes in mechanism. So, it is better to have a general mechanism that would require few changes even when
a policy changes.

Operating System Implementation


The operating system needs to be implemented after it is designed. Earlier, operating systems were written in assembly
language, but now higher-level languages are used. The first system not written in assembly language was the
Master Control Program (MCP) for Burroughs computers.

Advantages of Higher Level Language


There are multiple advantages to implementing an operating system using a higher level language such as:
the code can be written faster, it is more compact, and it is easier to debug and understand. Also, the operating system
can be ported more easily from one hardware platform to another if it is written in a high-level language.

Disadvantages of Higher Level Language


Using a high-level language for implementing an operating system leads to some loss in speed and an increase in
storage requirements. However, in modern systems only a small amount of the code is performance-critical,
such as the CPU scheduler and memory manager. Also, the bottleneck routines in the system
can be replaced by assembly-language equivalents if required.

Operating-System Structure

A modern operating system is large in size and complex in structure, and must be engineered carefully if it is to
function properly and be modified easily. A common approach is to partition the task into small components
rather than have one monolithic system. Each of these modules should be a well-defined portion of the
system, with carefully defined inputs, outputs, and functions.

Simple Structure:
Many commercial systems do not have well-defined structures. Frequently, such operating systems started as
small, simple, and limited systems and then grew beyond their original scope. MS-DOS is an example of
such a system. It was originally designed and implemented by a few people who had no idea that it would

become so popular. It was written to provide the most functionality in the least space, so it was not divided
into modules carefully. The following Figure shows its structure.

Layered Approach

Layered Structure is a type of system structure in which the different services of the operating system are
split into various layers, where each layer has a specific well-defined task to perform. It was created to
improve on the pre-existing structures like the Monolithic structure (UNIX) and the Simple structure (MS-DOS).

Example – The Windows NT operating system uses this layered approach as a part of it.

Design Analysis: The whole operating system is separated into several layers (from 0 to n) as the
diagram shows. Each of the layers must have its own specific function to perform. There are some rules in
the implementation of the layers as follows.
1. The outermost layer must be the User Interface layer.
2. The innermost layer must be the Hardware layer.
3. A particular layer can access all the layers present below it, but it cannot access the layers present
above it. That is layer n-1 can access all the layers from n-2 to 0 but it cannot access the nth layer.

Thus, if the user layer wants to interact with the hardware layer, the request will travel through all
the layers from n-1 down to 1. Each layer must be designed and implemented such that it needs only the
services provided by the layers below it.
Advantages :
There are several advantages to this design :
1. Modularity: This design promotes modularity as each layer performs only the tasks it is scheduled to
perform.
2. Easy debugging: As the layers are discrete so it is very easy to debug. Suppose an error occurs in the
CPU scheduling layer, so the developer can only search that particular layer to debug, unlike the
Monolithic system in which all the services are present together.
3. Easy update: A modification made in a particular layer will not affect the other layers.
4. No direct access to hardware: The hardware layer is the innermost layer present in the design. So a
user can use the services of hardware but cannot directly modify or access it, unlike the Simple system
in which the user had direct access to the hardware.
5. Abstraction: Every layer is concerned with its own functions. So the functions and implementations of
the other layers are abstract to it.
Disadvantages :
Though this system has several advantages over the Monolithic and Simple design, there are also some
disadvantages as follows.
1. Complex and careful implementation: Since a layer can access the services of the layers below it, the
arrangement of the layers must be done carefully.
2. Slower in execution: If a layer wants to interact with another layer, it sends a request that has to travel
through all the layers present in between the two interacting layers. Thus it increases response time,
unlike the Monolithic system which is faster than this. Thus an increase in the number of layers may
lead to a very inefficient design.

Virtual Machines

The fundamental idea behind a virtual machine is to abstract the hardware of a single computer into several
different execution environments, thereby creating the illusion that each separate execution environment is
running its own private computer. We can create a virtual machine for several reasons, all of which are
fundamentally related to the ability to share the same basic hardware yet can also support different
execution environments, i.e., different operating systems simultaneously.

Advantages of virtual machines

 VMs can run multiple operating system environments on a single physical computer, saving physical
space, time and management costs.
 Virtual machines support legacy applications, reducing the cost of migrating to a new operating system.

Disadvantages of virtual machines:

While virtual machines have several advantages over physical machines, there are also
some potential disadvantages:
 Running multiple virtual machines on one physical machine can result in unstable performance if
infrastructure requirements are not met.
 Virtual machines are less efficient and run slower than a full physical computer.

Most enterprises use a combination of physical and virtual infrastructure to balance the corresponding
advantages and disadvantages.

What is Microkernel?

The kernel is the core part of an operating system; it manages system resources. It also acts as a bridge
between the applications and the hardware of the computer, and it is one of the first programs loaded on start-up
(after the bootloader). A microkernel is one classification of kernel. Being a kernel, it manages all system
resources, but in a microkernel the user services and the kernel services are implemented in different
address spaces. The user services are kept in user address space, and the kernel services are kept in kernel
address space, which also reduces the size of the kernel and of the operating system as a whole. A microkernel
provides minimal services for process and memory management. Communication between the client
program/application and the services running in user address space is established through message passing,
which reduces the execution speed of a microkernel.

Questions from Previous Exams:
1. List the objectives of operating system(2M)
2. What are the services provided by operating system? Explain.(5M)
3. Discuss in detail about computer system architecture.(5M)
4. What are the goals of protection in operating system? Differentiate between protection and
security(5M)
5. Draw a Modern computer system(2M)
6. What is Linux and why it is used(2M)
7. Define the essential properties of the following operating systems (14M)
a) Batch b) Interactive c) Time Sharing d) Real time e) Parallel f) Distributed g)Hand held
8. Explain the importance of real time embedded system(2M)
9. Explain the Time sharing operating system(6M)
10. Explain different categories of system calls with examples( 7M)
11. Explain briefly the layered operating system structure with neat sketch(7M)
12. Define Operating System(2M)
13. Define System call and list out any 4 process control system calls(3M)
14. Distinguish between client-server and peer-to-peer models of distributed system (7M)
15. Explain Dual Mode Operation of the operating system(7M)
16. List out the services provided by the operating system(2M)
17. Explain the objectives and functions of operating system(7M)
18. Explain in-detail about system call interface(7M)
19. Define the essential properties of Time-sharing systems and Clustered systems.(3M)
20. What do you mean by Virtual Memory(2M)
21. What is the difference between operating system for mainframe computer and operating system for
personal computer. (5M)
22. What are the goals of operating system(2M)
23. What is operating system and what are its components(3M)
24. Describe evolution of operating system in detail(10M)
25. What is the need for system calls? Explain the types of system calls provided by an operating system
with respect to memory management.(10M)
26. Distinguish between symmetric and asymmetric multi processor systems.(2M)
27. Define briefly about virtual machines and micro kernels(5M)

THE END

