
Review of Computer Organization

• Without its software, a computer is basically a useless lump of metal.


• With its software, a computer can store, process, and retrieve information; play music and videos; send e-mail,
search the Internet; and engage in many other valuable activities to earn its keep.
• Computer software:
1) System programs, which manage the operation of the computer itself.
2) Application programs, which perform the actual work the user wants.

• The most fundamental system program is the operating system, which controls all the computer's resources and provides a base upon which the application programs can be written.

• An operating system is a program that manages the computer hardware. It also provides a basis for application
programs and acts as an intermediary between the computer user and the computer hardware.
✔ Mainframe operating systems are designed primarily to optimize utilization of hardware.
✔ Personal computer (PC) operating systems support complex games, business applications, and everything in
between.
Operating systems are designed to provide an environment in which a user can easily interface with the computer to execute programs.

Some operating systems are designed to be convenient, others to be efficient, and others to be some combination of the two.
Objective of OS

• Convenience: An OS makes a computer more convenient to use.

• Efficiency: An OS allows the computer system resources to be used in an efficient manner.

• Ability to evolve: An OS should be constructed in such a way as to permit the effective development, testing, and
introduction of new system functions without interfering with service.

Operating system is used for:


• Managing Resources: Programs that manage the resources of a computer such as the printer, mouse, keyboard,
memory and monitor.
• Providing User Interface: A graphical user interface (GUI) lets users start tasks by clicking icons without having to understand how the underlying code works. Each icon on the desktop is linked to the program or location it represents, which makes the system very easy to use, even for non-technical users.
• Running Applications: the OS runs an application such as a word processor by locating it and loading it into primary memory. Most operating systems can multitask by running many applications at once.
• Support for built-in Utility Programs: these are programs that find and fix errors in the operating system.
• Control Computer Hardware: all programs that need computer hardware must go through the operating system, which accesses the hardware through the BIOS (basic input output system) or the device drivers.
Review of Computer Organization

Abstract view of the components of a computer system; block diagram of a computer system.

A modern computer system consists of one or more processors, main memory, disks, printers, a keyboard, a display, network interfaces, and other input/output devices. A computer system can be divided roughly into four components: the hardware, the operating system, the application programs, and the users.
Review of Computer Organization

• Writing programs that keep track of all these components and use them correctly, let alone optimally, is an
extremely difficult job. If every programmer had to be concerned with how disk drives work, and with all the dozens
of things that could go wrong when reading a disk block, it is unlikely that many programs could be written at all.
• The way that has evolved gradually is to put a layer of software on top of the bare hardware, to manage all parts of
the system, and present the user with an interface or virtual machine that is easier to understand and program. This
layer of software is the operating system.
• At the bottom is the hardware, which is itself composed of two or more levels (or layers). The lowest level contains physical devices, consisting of integrated circuit chips, wires, power supplies, cathode ray tubes, and similar physical devices. How these are constructed and how they work is the province of the electrical engineer.

User View
The user's view of the computer varies according to the interface being used.
1) One user monopolizes the computer's resources. Most computer users sit in front of a PC, consisting of a monitor, keyboard, mouse, and system unit.
The goal is to maximize the work (or play) that the user is performing. In this case, the operating system is designed mostly for ease of use, with some attention paid to performance and none paid to resource utilization (how various hardware and software resources are shared).
Performance is important to the user, but such systems are optimized for the single-user experience rather than the requirements of multiple users.
Review of Computer Organization

2) A user sits at a terminal connected to a mainframe or a minicomputer.
Other users access the same computer through other terminals. These users share resources and may exchange information.
The operating system in such cases is designed to maximize resource utilization: it assures that every user gets CPU time, that memory and I/O are used efficiently, and that no individual user takes more than her fair share.

Review of Computer Organization

3) Users sit at workstations connected to networks of other workstations and servers.
These users have dedicated resources at their disposal, but they also share resources such as networking and servers (file, compute, and print servers).
The operating system is designed to compromise between individual usability and resource utilization.
Review of Computer Organization

4) A recent variation is the handheld computer.
Most of these devices are standalone units for individual users. Some are connected to networks, either directly by wire or (more often) through wireless modems and networking. Because of power, speed, and interface limitations, they perform relatively few remote operations.
The operating systems are designed mostly for individual usability, but performance per unit of battery life is important as well.
Review of Computer Organization

5) Some computers have little or no user view.
Embedded computers in home devices and automobiles may have numeric keypads and may turn indicator lights on or off to show status, but they and their operating systems are designed primarily to run without user intervention.
Review of Computer Organization

System View
From the computer's point of view, the operating system is the program most intimately involved with the hardware.
A computer system has many resources that may be required to solve a problem: CPU time, memory space, file-storage space, I/O devices, and so on.
1) The operating system as a resource allocator.
Facing numerous and possibly conflicting requests for resources, the operating system must decide how to allocate them to specific programs and users so that it can operate the computer system efficiently and fairly. The operating system acts as the manager of these resources. Resource allocation is especially important where many users access the same mainframe or minicomputer.
2) An operating system is a control program.
An operating system emphasizes the need to control the various I/O devices and user programs. A control program
manages the execution of user programs to prevent errors and improper use of the computer. It is especially concerned
with the operation and control of I/O devices.

Operating systems exist because they offer a reasonable way to solve the problem of creating a usable computing
system. The fundamental goal of computer systems is to execute user programs and to make solving user problems
easier. Toward this goal, computer hardware is constructed. Since bare hardware alone is not particularly easy to use,
application programs are developed. These programs require certain common operations, such as those controlling the
I/O devices. The common functions of controlling and allocating resources are then brought together into one piece of
software: the operating system.
OS Structure

An operating system provides the environment within which programs are executed.
1) One of the most important aspects of operating systems is the ability to multiprogram. A single program cannot, in general, keep either the CPU or the I/O devices busy at all times; single users frequently have multiple programs running. Multiprogramming increases CPU utilization by organizing jobs (code and data) so that the CPU always has one to execute.
The operating system keeps several jobs in memory simultaneously, as shown in the figure. Since, in general, main memory is too small to accommodate all jobs, the jobs are kept initially on the disk in the job pool. This pool consists of all processes residing on disk awaiting allocation of main memory.
The set of jobs in memory can be a subset of the jobs kept in the job pool. The operating system picks and begins to execute one of the jobs in memory. Eventually, the job may have to wait for some task, such as an I/O operation, to complete.
In a non-multiprogrammed system, the CPU would sit idle.
In a multiprogrammed system, the operating system simply switches to, and executes,
another job. When that job needs to wait, the CPU is switched to another job, and so on.
Eventually the first job finishes waiting and gets the CPU back. As long as at least one job
needs to execute, the CPU is never idle.
Multiprogrammed systems provide an environment in which the various system
resources (for example, CPU, memory, and peripheral devices) are utilized effectively, but
they do not provide for user interaction with the computer system.
OS Structure

2) Time sharing or multitasking is a logical extension of multiprogramming.


In time-sharing systems, the CPU executes multiple jobs by switching among them, but the switches occur so frequently that the users can interact with each program while it is running. Time sharing requires an interactive (hands-on) computer system, which provides direct communication between the user and the system.
A time-shared operating system allows many users to share the computer simultaneously.
A time-shared operating system uses CPU scheduling and multiprogramming to provide each user with a small portion
of a time-shared computer. Each user has at least one separate program in memory. A program loaded into memory
and executing is called a process.
Time sharing and multiprogramming require that several jobs be kept simultaneously in memory. If several jobs are
ready to be brought into memory, and if there is not enough room for all of them, then the system must choose among
them. Making this decision is Job scheduling.
In a time-sharing system, the operating system must ensure reasonable response time, which is sometimes
accomplished through swapping.
Time-sharing systems must also provide a file system. The file system resides on a collection of disks; hence, disk
management must be provided.
OS Structure

The five designs are monolithic systems, layered systems, virtual machines, exokernels, and client-server systems.

Simple systems
Operating systems such as MS-DOS and the original UNIX did not have well-defined structures.
There was no CPU Execution Mode (user and kernel), and so errors in applications could cause the whole system to
crash.

In MS-DOS, applications may bypass the operating system.
OS Structure

Monolithic Approach
Functionality of the OS is invoked with simple function calls within the kernel, which is one large program.
Device drivers are loaded into the running kernel and become part of the kernel. It is the oldest architecture used for developing operating systems. The operating system resides in the kernel, available to any program through system calls. A system call is invoked by switching from user mode to kernel mode and transferring control to the operating system (event 1). Many CPUs have two modes: kernel mode, for the operating system, in which all instructions are allowed, and user mode, for user programs, in which I/O and certain other instructions are not allowed. The operating system then examines the parameters of the call to determine which system call is to be carried out (event 2). Next, the operating system indexes into a table that contains the procedures that carry out the system calls (event 3). Finally, the procedure is called; when the work has been completed and the system call is finished, control is returned to user mode (event 4).

A monolithic kernel, such as Linux and other Unix systems.


OS Structure
Layered system
The layered architecture of operating systems was developed in the 1960s. In this approach, the operating system is broken up into a number of layers. The bottom layer (layer 0) is the hardware layer and the highest layer (layer n) is the user interface layer, as shown in the figure. If an error is found during the debugging of a particular layer, the error must be in that layer, because the layers below it have already been debugged. Examples: OS/2 and early versions of Windows NT.
This approach breaks up the operating system into different layers.
•This allows implementers to change the inner workings, and increases modularity.
•As long as the external interface of the routines doesn't change, developers have more freedom to change the inner workings of the routines.
•With the layered approach, the bottom layer is the hardware, while the highest layer is the user interface.
•The main advantage is simplicity of construction and debugging.
•The main difficulty is defining the various layers.
•The main disadvantage is that the OS tends to be less efficient than other implementations. It requires an appropriate definition of the various layers and careful planning of the proper placement of each layer.

The Microsoft Windows NT operating system: the lowest level is a monolithic kernel, but many OS components are at a higher level, yet still part of the OS.
Layered Structure

Layered Structure is a type of system structure in which the different services of the operating system are split into
various layers, where each layer has a specific well-defined task to perform. It was created to improve the pre-existing
structures like the Monolithic structure ( UNIX ) and the Simple structure ( MS-DOS ).

Example – The Windows NT operating system uses this layered approach as a part of it.

Design Analysis :
The whole Operating System is separated into several layers ( from 0 to n ) as
the diagram shows. Each of the layers must have its own specific function to
perform. There are some rules in the implementation of the layers as follows.

The outermost layer must be the User Interface layer.
The innermost layer must be the Hardware layer.
A particular layer can access all the layers present below it, but it cannot access the layers present above it. That is, layer n-1 can access all the layers from n-2 to 0, but it cannot access the nth layer.

Thus if the user layer wants to interact with the hardware layer, the request (and its response) travels through all the layers from n-1 to 1. Each layer must be designed and implemented such that it needs only the services provided by the layers below it.
Layered Structure

Advantages :
There are several advantages to this design :
Modularity :
This design promotes modularity as each layer performs only the tasks it is scheduled
to perform.
Easy debugging :
As the layers are discrete so it is very easy to debug. Suppose an error occurs in the
CPU scheduling layer, so the developer can only search that particular layer to debug,
unlike the Monolithic system in which all the services are present together.
Easy update :
A modification made in a particular layer will not affect the other layers.
No direct access to hardware :
The hardware layer is the innermost layer present in the design. So a user can use the
services of hardware but cannot directly modify or access it, unlike the Simple system in
which the user had direct access to the hardware.
Layered Structure

Disadvantages :
Though this system has several advantages over the Monolithic and Simple design,
there are also some disadvantages as follows.

Complex and careful implementation :


As a layer can access the services of the layers below it, the arrangement of the layers must be done carefully. For example, the backing storage layer uses the
services of the memory management layer. So it must be kept below the memory
management layer. Thus with great modularity comes complex implementation.
Slower in execution :
If a layer wants to interact with another layer, it sends a request that has to travel
through all the layers present in between the two interacting layers. Thus it increases
response time, unlike the monolithic system, which is faster. Thus an increase in the number of layers may lead to a very inefficient design.
OS Structure

Virtual machine architecture of operating systems


A virtual machine (VM) is a virtual environment which functions as a virtual computer system with
its own CPU, memory, network interface, and storage, created on a physical hardware system.

VMs are isolated from the rest of the system, and multiple VMs can exist on a single piece of hardware, like a server. That means a VM is a simulated image of application software and an operating system which is executed on a host computer or a server.

Each VM has its own operating system and software, and the host facilitates (provides) the resources to the virtual computers.
Characteristics of virtual machines
• Multiple operating systems use the same hardware and partition resources between virtual computers.
• Separate Security and configuration identity.
• Ability to move the virtual computers between the physical host computers as holistically
integrated files.
OS Structure
Benefits
❑ Multiple operating system environments exist simultaneously on the same machine, isolated from each other.
❑ A virtual machine can offer an instruction set architecture which differs from that of the real computer.
❑ Using virtual machines, there is easy maintenance, application provisioning, availability and
convenient recovery.

The operating system achieves virtualization with the help of a specialized software called a
hypervisor, which emulates the PC client or server CPU, memory, hard disk, network and other
hardware resources completely, enabling virtual machines to share resources.

The hypervisor can emulate multiple virtual hardware platforms that are isolated from each other, allowing virtual machines to run Linux and Windows Server operating systems on the same underlying physical host.
OS Structure

Microkernels
This structures the operating system by removing all nonessential portions of the kernel and implementing them as system
and user level programs.
•Generally they provide minimal process and memory management, and a communications facility.
•Communication between components of the OS is provided by message passing.
The benefits of the microkernel are as follows:
•Extending the operating system becomes much easier.
•Any changes to the kernel tend to be fewer, since the kernel is smaller.
•The microkernel also provides more security and reliability.
Main disadvantage is poor performance due to increased system overhead from message passing.

A Microkernel architecture.
OS Structure

What is Microkernel?
A microkernel is one of the classifications of the kernel. Being a kernel it manages all
system resources. But in a microkernel, the user services and kernel services are
implemented in different address spaces. The user services are kept in user address
space, and kernel services are kept under kernel address space, which also reduces the size of the kernel and the size of the operating system.
OS Structure

❑ It provides minimal services of process and memory management.


❑ The communication between client programs/applications and services running in user address space is established through message passing, which reduces the execution speed of the microkernel.
❑ The Operating System remains unaffected as user services and kernel services are
isolated so if any user service fails it does not affect kernel service. Thus it adds to one
of the advantages of a microkernel. It is easily extendible i.e. if any new services are to
be added they are added to user address space and hence require no modification in
kernel space. It is also portable, secure, and reliable.
OS Structure

Microkernel Architecture –
Since the kernel is the core part of the operating system, it is meant for handling only the most important services. Thus in this architecture, only the most important services are inside the kernel and the rest of the OS services are present inside the system application program. Thus users are able to interact with those not-so-important services within the system application. The microkernel is solely responsible for the most important services of the operating system, which are named as follows:
Inter process-Communication
Memory Management
CPU-Scheduling

Advantages of Microkernel –
The architecture of this kernel is small and isolated, hence it can function better.
Expansion of the system is easier: a new service is simply added to the system application without disturbing the kernel.
OS Structure

Client/server architecture of operating system


A trend in modern operating systems is to move as much code as possible into the higher levels and remove it from the operating system, minimizing the work of the kernel. The basic approach is to implement most of the operating system functions in user processes. To request a service, such as reading a particular file, the user (client) sends a request to the server process; the server checks whether the parameters are valid, processes the request, and sends back the answer to the client. The client-server model works on a request-response technique: the client sends a request to the server side in order to perform a task, and the server processes that request and sends back a response. The figure below shows the client-server architecture.
In this model, the main task of the kernel is to handle all the communication between the clients and the servers. The operating system is split into a number of parts, each of which handles only a specific task, e.g. the file server, process server, terminal server, and memory server.
Another advantage of the client-server model is its adaptability to use in distributed systems. If the client communicates with the server by sending it a message, the client need not know whether the message is handled locally on its own machine or sent across a network to a server on a remote machine. In either case, the same thing happens from the client's point of view: a request was sent and a reply came back.
System Calls

To understand system calls, first one needs to understand the difference between kernel mode and user mode of a CPU.
Every modern operating system supports these two modes.
Modes supported by the operating system
Kernel Mode
•When CPU is in kernel mode, the code being executed can access any memory address and any hardware resource.
•Hence kernel mode is a very privileged and powerful mode.
•If a program crashes in kernel mode, the entire system will be halted.
User Mode
•When CPU is in user mode, the programs don't have direct access to memory and hardware resources.
•In user mode, if any program crashes, only that particular program is halted.
•That means the system will be in a safe state even if a program in user mode crashes.
•Hence, most programs in an OS run in user mode.
When a program in user mode requires access to RAM or a hardware resource, it must ask the kernel to provide access to
that resource. This is done via something called a system call.
System Calls
When a program makes a system call, the mode is switched from user mode to kernel mode. This is called a context
switch.
Then the kernel provides the resource which the program requested. After that, another context switch happens which
results in change of mode from kernel mode back to user mode.
Generally, system calls are made by the user level programs in the following situations:
•Creating, opening, closing and deleting files in the file system.
•Creating and managing new processes.
•Creating a connection in the network, sending and receiving packets.
•Requesting access to a hardware device, like a mouse or a printer

•System calls provide a means for user or application programs to call upon the services of the operating system.
•They are generally written in C or C++, although some are written in assembly for optimal performance.
•Figure 2.5 illustrates the sequence of system calls required to copy a file:
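Since the figure is not reproduced in this copy, the C program below is a rough sketch of the same sequence, using the POSIX open(), read(), write(), and close() system calls; the file names are placeholders and error handling is kept minimal.

#include <fcntl.h>     /* open() */
#include <stdlib.h>    /* exit() */
#include <unistd.h>    /* read(), write(), close() */

int main(void)
{
    char buf[4096];
    ssize_t n;

    /* Ask the kernel to open the input file and create the output file. */
    int in  = open("input.txt", O_RDONLY);
    int out = open("output.txt", O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (in < 0 || out < 0)
        exit(1);                           /* abnormal termination */

    /* Copy loop: each read() and write() is a separate system call. */
    while ((n = read(in, buf, sizeof buf)) > 0)
        write(out, buf, (size_t)n);

    close(in);                             /* release both files */
    close(out);
    return 0;                              /* normal termination */
}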
System Calls

•Most programmers do not use the low-level system calls directly, but instead use an "Application Programming Interface" (API). The following sidebar shows the read( ) call available in the API on UNIX-based systems:
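The sidebar itself is not reproduced in this copy; for reference, the standard POSIX read() API, declared in <unistd.h>, has the following form:

#include <unistd.h>

ssize_t read(int fd, void *buf, size_t count);
/* fd    - the file descriptor of the file to be read
 * buf   - a buffer into which the data will be read
 * count - the maximum number of bytes to read
 * Returns the number of bytes actually read, 0 at end of file,
 * or -1 if an error occurred. */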
System Calls

The use of APIs instead of direct system calls provides for greater program portability between different systems. The API
then makes the appropriate system calls through the system call interface, using a table lookup to access specific
numbered system calls, as shown in Figure 2.6:
System Calls

•Parameters are generally passed to system calls via registers, or less commonly, by values pushed onto the stack. Large
blocks of data are generally accessed indirectly, through a memory address passed in a register or on the stack, as
shown in Figure 2.7:
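As a small, hedged illustration of passing a block of data by address, the POSIX stat() call below receives the address of a caller-supplied structure, and the kernel fills that memory in with the file's attributes (the file name is a placeholder):

#include <stdio.h>
#include <sys/stat.h>

int main(void)
{
    struct stat st;                 /* block of data filled in by the kernel */

    /* Only the address of st is passed to the system call. */
    if (stat("example.txt", &st) == 0)
        printf("size = %lld bytes\n", (long long)st.st_size);
    return 0;
}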
System Calls

Types of System Calls


•Standard library calls may also generate system calls.
System Calls

Process Control
•Process control system calls include end, abort, load, execute, create process, terminate process, get/set process attributes,
wait for time or event, signal event, and allocate and free memory.
•Processes must be created, launched, monitored, paused, resumed, and eventually stopped.
•When one process pauses or stops, then another must be launched or resumed
•When processes stop abnormally it may be necessary to provide core dumps and/or other diagnostic or recovery tools.
•Compare DOS ( a single-tasking system ) with UNIX ( a multi-tasking system ).
•When a process is launched in DOS, the command interpreter first unloads as much of itself as it can to free up memory, then loads the process and transfers control to it. The interpreter does not resume until the process has completed, as shown in Figure 2.9:
System Calls

•Because UNIX is a multi-tasking system, the command interpreter remains completely resident when executing a process, as shown in Figure 2.10 below. The user can switch back to the command interpreter at any time, and can place the running process in the background even if it was not originally launched as a background process.
•In order to do this, the command interpreter first executes a "fork" system call, which creates a second process that is an exact duplicate (clone) of the original command interpreter. The original process is known as the parent, and the cloned process is known as the child, with its own unique process ID and parent ID.
•The child process then executes an "exec" system call, which replaces its code with that of the desired process.
•The parent (command interpreter) normally waits for the child to complete before issuing a new command prompt, but in some cases it can also issue a new prompt right away, without waiting for the child process to complete. (The child is then said to be running in the background.)
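A minimal sketch of this fork-then-exec pattern in C is shown below; "/bin/ls" stands in for the desired process, and the wait() mirrors the command interpreter waiting for the child:

#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();                /* clone the current process            */

    if (pid < 0) {                     /* fork failed                          */
        perror("fork");
        exit(1);
    } else if (pid == 0) {             /* child: replace its code with ls      */
        execlp("ls", "ls", "-l", (char *)NULL);
        perror("execlp");              /* reached only if the exec fails       */
        exit(1);
    } else {                           /* parent: behave like the interpreter  */
        wait(NULL);                    /* wait for the child to complete       */
        printf("child finished; ready for the next command\n");
    }
    return 0;
}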
System Calls
File Management
•File management system calls include create file, delete file, open, close, read, write, reposition, get file attributes, and set
file attributes.
•These operations may also be supported for directories as well as ordinary files.

Device Management
•Device management system calls include request device, release device, read, write, reposition, get/set device attributes,
and logically attach or detach devices.
•Devices may be physical ( e.g. disk drives ), or virtual / abstract ( e.g. files, partitions, and RAM disks ).
•Some systems represent devices as special files in the file system, so that accessing the "file" calls upon the appropriate
device drivers in the OS. See for example the /dev directory on any UNIX system.

Information Maintenance
•Information maintenance system calls include calls to get/set the time, date, system data, and process, file, or device
attributes.
•Systems may also provide the ability to dump memory at any time, single step programs pausing execution after each
instruction, and tracing the operation of programs, all of which can help to debug programs.

Communication
•Communication system calls create/delete communication connection, send/receive messages, transfer status information,
and attach/detach remote devices.
System Calls

The message passing model must support calls to:


• Identify a remote process and/or host with which to communicate.
• Establish a connection between the two processes.
• Open and close the connection as needed.
• Transmit messages along the connection.
• Wait for incoming messages, in either a blocking or non-blocking state.
• Delete the connection when no longer needed.
The shared memory model must support calls to:
• Create and access memory that is shared amongst processes ( and threads. )
• Provide locking mechanisms restricting simultaneous access.
• Free up shared memory and/or dynamically allocate it as needed.
•Message passing is simpler and easier and is generally appropriate for small amounts of data.
•Shared memory is faster, and is generally the better approach where large amounts of data are to be shared.
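As one small, hedged illustration of the message-passing model, the sketch below uses a UNIX pipe to send a short message from a parent process to a child process; the message text is arbitrary:

#include <stdio.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    int fd[2];                         /* fd[0] = read end, fd[1] = write end */
    char buf[64];

    pipe(fd);                          /* establish the connection            */

    if (fork() == 0) {                 /* child: the receiver                 */
        close(fd[1]);
        ssize_t n = read(fd[0], buf, sizeof buf - 1);   /* blocking receive   */
        buf[n > 0 ? n : 0] = '\0';
        printf("child received: %s\n", buf);
        close(fd[0]);
    } else {                           /* parent: the sender                  */
        close(fd[0]);
        const char *msg = "hello from parent";
        write(fd[1], msg, strlen(msg));                  /* transmit message  */
        close(fd[1]);                  /* delete the connection               */
        wait(NULL);
    }
    return 0;
}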

Protection
•Protection provides mechanisms for controlling which users / processes have access to which system resources.
•System calls allow the access mechanisms to be adjusted as needed, and allow non-privileged users to be granted elevated access permissions under carefully controlled temporary circumstances.
•Once only of concern on multi-user systems, protection is now important on all systems, in the present age of network
connectivity.
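For instance (a small sketch; the path is a placeholder), the POSIX chmod() system call adjusts which users may access a file:

#include <stdio.h>
#include <sys/stat.h>

int main(void)
{
    /* Allow the owner to read and write the file, and everyone else only
     * to read it (equivalent to mode 0644). */
    if (chmod("report.txt", S_IRUSR | S_IWUSR | S_IRGRP | S_IROTH) != 0)
        perror("chmod");
    return 0;
}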
System Calls

In a typical UNIX system, there are around 300 system calls. Some important ones are as below.
Fork()
The fork() system call is used to create processes. When a process (a program in execution) makes a fork() call, an exact
copy of the process is created. Now there are two processes, one being the parent process and the other being
the child process.
The process which called the fork() call is the parent process and the process which is created newly is called
the child process. The child process will be exactly the same as the parent. Note that the process state of the parent i.e.,
the address space, variables, open files etc. is copied into the child process. This means that the parent and child
processes have identical but physically different address spaces. A change of values in the parent process doesn't affect the child, and vice versa.
Both processes start execution from the next line of code i.e., the line after the fork() call. Let's look at an example:
//example.c
#include <stdio.h>
#include <unistd.h>    /* needed for fork() */

void main()
{
    int val;

    val = fork();        // line A
    printf("%d", val);   // line B
}

When the above example code is executed and line A is reached, a child process is created. Now both processes start execution from line A. To differentiate between the child process and the parent process, we need to look at the value returned by the fork() call. The difference is that, in the parent process, fork() returns a value which represents the process ID of the child process, but in the child process, fork() returns the value 0. This means that, according to the program, the output of the parent process will be the process ID of the child process and the output of the child process will be 0.
System Calls

Exec()
The exec() system call is also used to create processes. But there is one big difference between fork() and exec() calls.
The fork() call creates a new process while preserving the parent process. But, an exec() call replaces the address space,
text segment, data segment etc. of the current process with the new process.
It means, after an exec() call, only the new process exists. The process which made the system call, wouldn't exist.
There are many flavors of exec() in UNIX, one being execl() which is shown below as an example:

//example2.c
#include <stdio.h>
#include <unistd.h>    /* needed for execl() */

void main()
{
    execl("/bin/ls", "ls", (char *)NULL);   // line A
    printf("This text won't be printed unless an error occurs in exec().");
}

As shown, the first parameter to the execl() function is the address (path) of the program which needs to be executed, in this case the address of the ls utility in UNIX. It is followed by the name of the program, which is ls in this case, and then by optional arguments. The list should be terminated by a NULL pointer. When the above example is executed, at line A the ls program is called and executed, and the current process image is replaced. Hence the printf() function is never called, since the original program no longer exists. The only exception to this is that, if the execl() function causes an error, then the printf() function is executed.
Functions of OS

The main goal of an operating system is to provide the interface between the user and the hardware, i.e., it provides the interface through which the user works on the system.

Functions Performed by the Operating System are

Operating System as a Resource Manager


The operating system manages all the resources attached to the system, such as memory, the processor, and all the input/output devices. The operating system determines at which time the CPU will perform which operation and at which time the memory is used by which programs, and decides which input device will respond to which request of the user, i.e., when the input and output devices are used by which programs. So the OS manages all the resources attached to the computer system.

Storage Management
The operating system also controls all storage operations: how data or files are stored on the computer, how files are accessed by users, and so on. All the operations responsible for storing and accessing files are determined by the operating system. The operating system also allows creation of files and directories, reading and writing the data of files and directories, and copying the contents of files and directories from one place to another.
Functions of OS

1)Process Management :
• Every program running on a computer is a process, whether it is in the background or in the foreground. The operating system is responsible for making multiple tasks run at the same time (multitasking).
• The operating system keeps track of the status of the processor and of processes; it chooses a job (i.e. a process) and allocates the processor to it. The OS de-allocates the processor when the process has executed.
2)Memory Management:
The operating system also manages memory by allocating and de-allocating memory to processes.
3)Extended Machine : provides sharing of files between multiple users, provides graphical environments, provides various languages for communication, and provides many complex operations involving many kinds of hardware and software.
4)Mastermind: the operating system performs multiple functions that would otherwise require a super-intelligent mind, hence the term "Mastermind".
• The OS provides booting of the computer system.
• Provides the facility to increase the logical memory of the computer system beyond its physical memory.
• The OS handles the errors that occur in a program.
• Provides recovery of the system when the system gets damaged.
• The operating system breaks a large program into smaller pieces, also called threads, and executes those threads one by one.
Functions of OS
Evolution of OSs

The first true digital computer was designed by the English mathematician Charles Babbage. Although Babbage spent
most of his life and fortune trying to build his "analytical engine," he never got it working properly because it was
purely mechanical, and the technology of his day could not produce the required wheels, gears, and cogs to the high
precision that he needed. Needless to say, the analytical engine did not have an operating system.
As an interesting historical aside, Babbage realized that he would need software for his analytical engine, so he hired
a young woman named Ada Lovelace, who was the daughter of the famed British poet Lord Byron, as the world's
first programmer. The programming language Ada was named after her.

Serial Processing
Users access the computer in series. From the late 1940s to the mid-1950s, the programmer interacted directly with the computer hardware; there was no operating system. These machines were run from a console consisting of display lights, toggle switches, some form of input device, and a printer. Programs in machine code were loaded with an input device such as a card reader. If an error occurred, the program was halted and the error condition was indicated by the lights. Programmers examined the registers and main memory to determine the error. If the program succeeded, the output appeared on the printer.

The main problem here is the setup time: for a single program, the source program must be loaded into memory, the compiled (object) program saved, and then everything loaded and linked together.
Evolution of OSs
Simple Batch Systems
To speed up processing, jobs with similar needs are batched together and run as a group. Thus, the programmers will
leave their programs with the operator. The operator will sort programs into batches with similar requirements.
The problems with Batch Systems are: Lack of interaction between the user and job. CPU is often idle, because the
speeds of the mechanical I/O devices are slower than CPU.

To overcome this problem, the spooling technique is used. A spool is a buffer that holds output for a device, such as a printer, that cannot accept interleaved data streams. That is, when the job requests the printer to output a line, that
line is copied into a system buffer and is written to the disk. When the job is completed, the output is printed. Spooling
technique can keep both the CPU and the I/O devices working at much higher rates.

Multiprogrammed Batch Systems


Jobs must be run sequentially, on a first-come, first-served basis. However, when several jobs are on a direct-access device like a disk, job scheduling is possible. The main aspect of job scheduling is multiprogramming. A single user cannot keep the CPU or I/O devices busy at all times. Thus multiprogramming increases CPU utilization.
When one job needs to wait, the CPU is switched to another job, and so on. Eventually, the first job finishes waiting
and gets the CPU back.
Evolution of OSs
Time-Sharing Systems
Time-sharing systems became available in the 1960s. Time-sharing or multitasking is a logical extension of multiprogramming: the processor's time is shared among multiple users simultaneously, which is called time-sharing. The main difference between multiprogrammed batch systems and time-sharing systems is that in multiprogrammed batch systems the objective is to maximize processor use, whereas in time-sharing systems the objective is to minimize response time.

Multiple jobs are executed by the CPU by switching between them, but the switches occur so frequently that the user receives an immediate response. For example, in transaction processing, the processor executes each user program in a short burst or quantum of computation. That is, if n users are present, each user gets a time quantum. When the user submits a command, the response time is a few seconds at most.

The operating system uses CPU scheduling and multiprogramming to provide each user with a small portion of the computer's time.
Computer systems that were designed primarily as batch systems have been modified to time-sharing systems.

For example IBM's OS/360.

Time-sharing operating systems are even more complex than multiprogrammed operating systems. As in
multiprogramming, several jobs must be kept simultaneously in memory.
Evolution of OSs

Personal-Computer Systems (PCs)


A computer system dedicated to a single user is called a personal computer; PCs appeared in the 1970s. Microcomputers are considerably smaller and less expensive than mainframe computers. The goals of the operating system have changed with time; instead of maximizing CPU and peripheral utilization, the systems were developed to maximize user convenience and responsiveness.

For e.g., MS-DOS, Microsoft Windows and Apple Macintosh.

Hardware costs for microcomputers are sufficiently low. Decreasing the cost of computer hardware (such as processors and other devices) increases the need to understand operating system concepts. Malicious programs
destroy data on systems. These programs may be self-replicating and may spread rapidly via worm or virus
mechanisms to disrupt entire companies or even worldwide networks.

MULTICS operating system was developed from 1965 to 1970 at the Massachusetts Institute of Technology (MIT) as a
computing utility. Many of the ideas in MULTICS were subsequently used at Bell Laboratories in the design of UNIX OS.
Evolution of OSs

Parallel Systems
Most systems to date are single-processor systems; that is they have only one main CPU. Multiprocessor systems
have more than one processor.

The advantages of parallel system are as follows:


Increased throughput (number of jobs finished in a time period)
Savings by sharing peripherals, cabinets, and power supplies
Increased reliability
Fault tolerance (failure of one processor will not halt the system).

Symmetric multiprocessing model


Each processor runs an identical copy of the operating system, and these copies communicate with one another as needed. Encore's version of the UNIX operating system is a symmetric model.
E.g., two processors may be connected by a bus: one is the primary and the other is the backup. At fixed checkpoints in the execution of the system, the state information of each job is copied from the primary machine to the backup. If a failure is detected, the backup copy is activated and restarted from the most recent checkpoint. But this is expensive.
Evolution of OSs

Asymmetric multiprocessing model


Each processor is assigned a specific task. The system may allow only one CPU to execute operating system code, or may allow only one CPU to perform I/O operations. Personal computers contain a microprocessor in the keyboard to convert the keystrokes into codes to be sent to the CPU.

Distributed Systems
Distributed systems distribute computation among several processors. In contrast to tightly coupled systems (i.e., parallel
systems), the processors do not share memory or a clock. Instead, each processor has its own local memory.
The processors communicate with one another through various communication lines (such as high-speed buses or
telephone lines). These are referred to as loosely coupled systems or distributed systems. Processors in a distributed system may vary in size and function. These processors are referred to as sites, nodes, computers, and so on.

The advantages of distributed systems are as follows:


Resource Sharing: With the resource sharing facility, a user at one site may be able to use the resources available at another.
Communication Speedup: Users can speed up the exchange of data with one another, e.g. via electronic mail.
Reliability: If one site fails in a distributed system, the remaining sites can potentially continue operating.
Evolution of OSs
•Real-time Systems
Real-time systems are used when there are rigid time requirements on the operation of a processor or the flow of
data, and real-time systems can be used as a control device in a dedicated application. A real-time operating system has well-defined, fixed time constraints; otherwise the system will fail.

E.g., scientific experiments, medical imaging systems, industrial control systems, weapon systems, robots, and home-appliance controllers.

There are two types of real-time systems:


Hard real-time systems guarantee that critical tasks complete on time. In hard real-time systems, secondary storage is limited or missing, with data stored in ROM. A hard real-time system must operate within the confines of a stringent deadline. The application may be considered to have failed if it does not complete its function within the allotted time span.

Soft real-time systems are less restrictive. A critical real-time task gets priority over other tasks and retains that priority until it completes. Soft real-time systems have more limited utility than hard real-time systems.
E.g., Multimedia, virtual reality, Advanced Scientific Projects like undersea exploration and planetary rovers.
Unix OS

UNIX is an operating system which was first developed in the 1960s, and has been under constant development ever since.
By operating system, we mean the suite of programs which make the computer work. It is a stable, multi-user, multi-tasking
system for servers, desktops and laptops.
UNIX systems also have a graphical user interface (GUI) similar to Microsoft Windows which provides an easy to use
environment. However, knowledge of UNIX is required for operations which aren't covered by a graphical program, or for
when there is no windows interface available, for example, in a telnet session.
There are many different versions of UNIX, although they share common similarities. The most popular varieties of UNIX
are Sun Solaris, GNU/Linux, and MacOS X.

The UNIX operating system is made up of three parts; the kernel, the shell and the programs.
The kernel
The kernel of UNIX is the hub of the operating system: it allocates time and memory to programs and handles the filestore
and communications in response to system calls.
As an illustration of the way that the shell and the kernel work together, suppose a user types rm myfile (which has the
effect of removing the file myfile). The shell searches the filestore for the file containing the program rm, and then
requests the kernel, through system calls, to execute the program rm on myfile. When the process rm myfile has finished
running, the shell then returns the UNIX prompt % to the user, indicating that it is waiting for further commands.
Unix OS

The shell
The shell acts as an interface between the user and the kernel. When a user logs in, the login program checks the
username and password, and then starts another program called the shell. The shell is a command line interpreter
(CLI). It interprets the commands the user types in and arranges for them to be carried out. The commands are
themselves programs: when they terminate, the shell gives the user another prompt (% on our systems).
The adept user can customise his/her own shell, and users can use different shells on the same machine. Staff and
students in the school have the tcsh shell by default.
The tcsh shell has certain features to help the user input commands.
Filename Completion - By typing part of the name of a command, filename or directory and pressing the [Tab] key,
the tcsh shell will complete the rest of the name automatically. If the shell finds more than one name beginning with
those letters you have typed, it will beep, prompting you to type a few more letters before pressing the tab key again.
History - The shell keeps a list of the commands you have typed in. If you need to repeat a command, use the cursor
keys to scroll up and down the list or type history for a list of previous commands.
Unix OS
Everything in UNIX is either a file or a process.
A process is an executing program identified by a unique PID (process identifier).
A file is a collection of data. They are created by users using text editors, running compilers etc.
Examples of files:
•a document (report, essay etc.)
•the text of a program written in some high-level programming language
•instructions comprehensible directly to the machine and incomprehensible to a casual user, for example, a collection
of binary digits (an executable or binary file);
•a directory, containing information about its contents, which may be a mixture of other directories (subdirectories)
and ordinary files.

All the files are grouped together in the directory structure. The file-system is arranged in a hierarchical structure, like
an inverted tree. The top of the hierarchy is traditionally called root (written as a slash / )

Examples of modern UNIX operating systems include IRIX(from SGI), Solaris (from Sun), Tru64 (from Compaq)
and Linux (from the Free Software community). Even though these different "flavors" of UNIX have unique
characteristics and come from different sources, they all work alike in a number of fundamental ways. If you gain
familiarity with any one of these UNIX-based operating systems, you will also have gained at least some familiarity
with nearly every other variant of UNIX.
Windows

Windows is the operating system sold by the Seattle-based company Microsoft. Microsoft, originally christened
"Traf-O-Data" in 1972, was renamed "Micro-soft" in November 1975, then "Microsoft" on November 26, 1976.

Microsoft entered the marketplace in August 1981 by releasing version 1.0 of the operating system Microsoft
DOS (MS-DOS), a 16-bit command-line operating system

The first version of Microsoft Windows (Microsoft Windows 1.0) came out in November 1985. It had a graphical user
interface, inspired by the user interface of the Apple computers of the time. Windows 1.0 was not successful with the
public, and Microsoft Windows 2.0, launched December 9, 1987, did not do much better.

It was on May 22, 1990 that Microsoft Windows became a success, with Windows 3.0, then Windows 3.1 in 1992,
and finally Microsoft Windows for Workgroups, later renamed Windows 3.11, which included network capabilities.
Windows 3.1 cannot be considered an entirely separate operating system because it was only a graphical user
interface running on top of MS-DOS.

On August 24, 1995, Microsoft launched the operating system Microsoft Windows 95. Windows 95 signified
Microsoft's willingness to transfer some of MS-DOS's capabilities into Windows, but this new version was still based heavily on 16-bit DOS and retained the limitations of the FAT16 file system, apart from the newly added support for long file names.
Computer Organization Interface

Modern operating systems are interrupt driven. If there are no processes to execute, OS will sit idle and wait for some
event to happen. Interrupts could be hardware interrupts or software interrupts. The OS is designed to handle both.
A trap (or an exception) is a software generated interrupt caused either by an error (e.g. divide by zero) or by a
specific request from a user program. A separate code segment is written in the OS to handle different types of
interrupts. These codes are known as interrupt handlers or interrupt service routines. A properly designed OS ensures that an illegal program cannot harm the execution of other programs. To ensure this, the OS operates in dual
mode.
Dual mode of operation
The OS is designed in such a way that it is capable of differentiating between the execution of OS code and user-defined code. To achieve this, the OS needs two different modes of operation, which are controlled by a mode bit added to the hardware of the computer system, as shown in Table 4.
Computer Organization Interface
Transition from User to Kernel mode
When a user application is executing on the computer system, the OS is working in user mode. When a system call is issued by the user application, the OS transitions from user mode to kernel mode to service that request, as shown in Fig. 11.

When the user starts the system, the hardware starts in monitor/kernel mode and loads the operating system. The OS has initial control over the entire system while instructions are executed in kernel mode. The OS then starts user processes in user mode; on the occurrence of a trap, interrupt, or system call, it again switches to kernel mode and gains control of the system. System calls are provided so that user programs can ask the OS to perform tasks reserved for the operating system. System calls usually take the form of a trap. Once the OS services the interrupt, it transfers control back to the user program, hence user mode, by setting the mode bit = 1, as shown in Fig. 12.
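On Linux, for example, this trap into kernel mode can be made explicit with the syscall() wrapper (a hedged sketch; ordinarily the C library issues the trap on the program's behalf):

#define _GNU_SOURCE
#include <sys/syscall.h>
#include <unistd.h>

int main(void)
{
    const char msg[] = "hello from a system call\n";

    /* This call switches from user mode to kernel mode, runs the kernel's
     * write handler, and then returns to user mode. */
    syscall(SYS_write, 1, msg, sizeof msg - 1);
    return 0;
}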
Computer Organization Interface
Benefits of Dual Mode
The dual mode of operation protects the operating system from errant users, and errant users from one another by
designating some of the machine instructions that may cause harm as privileged instructions. These instructions can
execute only in kernel mode. If an attempt is made to execute a privileged instruction in user mode, the hardware does
not execute the instruction, but rather treats the instruction as illegal and traps to the operating system. Examples of
privileged instructions:
1.Switching to kernel mode
2.Managing I/O control
3.Timer Management
4.Interrupt Management
Timer
Since the OS operates in dual mode, it must maintain control over the CPU. The system should not allow a user application:
1. To get stuck in an infinite loop
2. To fail to call system services
3. To never return control to the OS
To achieve this goal, we can use a timer. This timer mechanism interrupts the system after a specified period, thereby preventing a user program from running too long. It can be implemented either as a fixed timer or a variable timer.
The OS must ensure that the timer is set to interrupt before control is passed to the user. If the timer interrupts, control is passed to the OS. Only privileged instructions can modify the contents of the timer. The simplest technique is to set a counter with the amount of time that a program is allowed to run and terminate the user program when the counter becomes negative.
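The same idea can be sketched at user level with the POSIX alarm() call and a SIGALRM handler, which cuts off a runaway loop after a fixed number of seconds; this is only an analogy to the hardware timer, not the OS mechanism itself:

#include <signal.h>
#include <unistd.h>

static void on_timer(int sig)
{
    (void)sig;
    /* Control arrives here when the timer expires; stop the program. */
    const char msg[] = "time slice expired - terminating the program\n";
    write(STDOUT_FILENO, msg, sizeof msg - 1);
    _exit(1);
}

int main(void)
{
    signal(SIGALRM, on_timer);   /* install the timer interrupt handler */
    alarm(2);                    /* set the counter: 2 seconds allowed  */

    for (;;)                     /* a program stuck in an infinite loop */
        ;
}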
