
System Programming & OS Laboratory Third Year(2019) Computer Engineering

GROUP - A
EXPERIMENT NO : 01
1. Title:

Design suitable data structures and implement Pass-I and Pass-II of a two-pass assembler for a
pseudo-machine. The implementation should consist of a few instructions from each category and a few
assembler directives. The output of Pass-I (intermediate code file and symbol table) should be the input
for Pass-II.

2. Objectives :

- To understand the data structures of a Pass-1 assembler
- To understand the Pass-1 assembler concept
- To understand advanced assembler directives

3. Problem Statement :

Design suitable data structures and implement Pass-I and Pass-II of a two-pass assembler for a
pseudo-machine in Java using object-oriented features.

4. Outcomes:
After completion of this assignment students will be able to:
- Implement Pass-I and Pass-II of an assembler
- Implement the symbol table, literal table and pool table, intermediate code, and machine code
- Understand the concept of advanced assembler directives

5. Software Requirements:

Latest JDK, Eclipse

6. Hardware Requirement:

- Machine: Lenovo ThinkCentre M700, Intel Core i3-6100 (6th Gen.), H81 chipset, 4 GB RAM, 500 GB HDD

7. Theory Concepts:

Introduction :-

There are two main classes of programming languages: high level (e.g., C, Pascal) and low
level. Assembly language is a low-level programming language. Programmers write symbolic
instructions, each of which generates machine instructions.


An assembler is a program that accepts as input an assembly language program (source) and
produces its machine language equivalent (object code) along with the information for the loader.

Figure 1. Executable program generation from an assembly source code

Advantages of coding in assembly language are:


- Provides more control over handling particular hardware components
- May generate smaller, more compact executable modules
- Often results in faster execution

Disadvantages:
- Not portable
- More complex
- Requires understanding of hardware details (interfaces)

Pass – 1 Assembler:

An assembler does the following:


1. Generate machine instructions
- evaluate the mnemonics to produce their machine code
- evaluate the symbols, literals, addresses to produce their equivalent machine addresses
- convert the data constants into their machine representations
2. Process pseudo operations

Pass – 2 Assembler:

A two-pass assembler performs two sequential scans over the source code:

Pass 1: symbols and literals are defined


Pass 2: object program is generated

Parsing: scanning program lines to extract op-codes and operands


Data Structures:

- Location counter (LC): points to the next location where the code will be placed

- Op-code translation table: contains symbolic instructions, their lengths and their op-codes (or
subroutine to use for translation)

- Symbol table (ST): contains labels and their values

- String storage buffer (SSB): contains ASCII characters for the strings

- Forward references table (FRT): contains a pointer to the string in SSB and the offset where its value
will be inserted in the object code

Figure 2. A simple two pass assembler.
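The data structures listed above can be modelled directly with small Java classes, in line with the object-oriented requirement of the problem statement. The sketch below is only a minimal illustration: class and field names such as SymbolTableEntry and AssemblerTables are assumptions, and the string storage buffer and forward references table are omitted for brevity.

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// One row of the symbol table (ST): label name and the address assigned to it.
class SymbolTableEntry {
    String symbol;
    int address;                 // value of LC when the label was defined
    SymbolTableEntry(String symbol, int address) {
        this.symbol = symbol;
        this.address = address;
    }
}

// One row of the op-code translation table: mnemonic, its machine op-code and its length.
class OpcodeTableEntry {
    String mnemonic;
    int opcode;
    int length;                  // number of words the instruction occupies
    OpcodeTableEntry(String mnemonic, int opcode, int length) {
        this.mnemonic = mnemonic;
        this.opcode = opcode;
        this.length = length;
    }
}

// Container for the Pass-I data structures.
class AssemblerTables {
    int locationCounter = 0;                                      // LC
    Map<String, SymbolTableEntry> symbolTable = new HashMap<>();  // ST
    Map<String, OpcodeTableEntry> opcodeTable = new HashMap<>();  // op-code table
    List<String> intermediateCode = new ArrayList<>();            // IC lines passed to Pass-II
}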

Elements of Assembly Language :

Assembly language provides three basic features which simplify programming when
compared to machine language.

1. Mnemonic Operation Codes :

Mnemonic operation codes (mnemonic opcodes) for machine instructions eliminate the need to
memorize numeric operation codes. They also enable the assembler to provide helpful error diagnostics,
such as an indication of misspelled operation codes.

2. Symbolic Operands :

Symbolic names can be associated with data or instructions. These symbolic names can be used as
operands in assembly statements. The assembler performs memory binding to these names; the
programmer need not know any details of the memory bindings performed by the assembler.

3. Data declarations :
Data can be declared in a variety of notations, including decimal notation. This avoids manual
conversion of constants into their internal machine representation, for example -5 into (11111010)₂ or
10.5 into (41A80000)₁₆.


Statement format :

An assembly language statement has the following format :

[Label] <Opcode> <Operand Spec> [, <Operand Spec> ..]

where the notation [..] indicates that the enclosed specification is optional.

A Label is associated as a symbolic name with the memory word(s) generated for the statement.

Mnemonic Operation Codes :

Instruction Format :

The sign is not a part of the instruction.

An assembly program and its equivalent machine language program : (solve it properly)

Note : you can also take another example with its solution


Assembly Language Statements :


Three Kinds of Statements
1. Imperative Statements
2. Declaration Statements
3. Assembler Directives

a) Imperative Statements : An imperative statement indicates an action to be performed during the execution of the
assembled program. Each imperative statement typically translates into one machine instruction,
e.g., MOVER, ADD, MULT, etc. (all executable statements).

b) Declaration Statements : The two types of declaration statements are as follows:

[Label] DS <constant>
[Label] DC ‘<Value>’

The DS (Declare Storage) statement reserves areas of memory and associates names with them.
Eg) A DS 1
    B DS 150

The first statement reserves a memory area of 1 word and associates the name A with it.
The second statement reserves a memory area of 150 words and associates the name B with it.

The DC (Declare Constant) statement constructs memory words containing constants.

Eg) ONE DC ‘1’

This associates the name ONE with a memory word containing the value ‘1’. The programmer can declare
constants in decimal, binary, hexadecimal form, etc. These values are not protected by the assembler. In
the above assembly language program the value of ONE can be changed by executing an instruction
MOVEM BREG, ONE

c. Assembler Directives :
Assembler directives instruct the assembler to perform certain actions during the assembly of a
program. Some Assembler directives are described in the following

START <Constant>


Indicates that the first word of the target program generated by the assembler should be placed in
the memory word with address <Constant>

END [ <operand spec>]


It indicates the end of the source program.

Pass Structure of Assembler :

One complete scan of the source program is known as a pass of a Language Processor.

There are two types: 1) Single Pass Assembler 2) Two Pass Assembler.

Single Pass Assembler :

It was the first type of assembler to be developed and is the most primitive: the source code is processed only once.

The operand field of an instruction containing a forward reference is left blank initially.

Eg) MOVER BREG, ONE

This statement can be only partially synthesized, since ONE is a forward reference.

During the scan of the source program, all the symbols are stored in a table called the
SYMBOL TABLE. The symbol table consists of two important fields: symbol name and
address.
All the statements containing forward references are stored in a table called the Table of
Incomplete Instructions (TII).

TII (Table of Incomplete Instructions)

Instruction Address      Symbol
101                      ONE

By the time the END statement is processed, the symbol table contains the addresses of all
symbols defined in the source program.


Two Pass Assembler :

It can handle the forward reference problem easily.

First Phase : (Analysis)

- Symbols are entered in a table called the symbol table.
- Mnemonics and the corresponding opcodes are stored in a table called the mnemonic table.
- LC processing is performed.

Second Phase : (Synthesis)

- Synthesizes the target form using the address information found in the symbol table.
- The first pass constructs an Intermediate Representation (IR) of the source program for use by
the second pass.


Data Structures used during the Synthesis Phase :

1. Symbol table
2. Mnemonics table

The processed form of the source program, called Intermediate Code (IC), is also used.
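To make the two-pass flow concrete, the outline below sketches Pass-I in Java, reusing the tables sketched earlier. It is an illustrative skeleton only: it assumes whitespace-separated source lines and a pre-filled op-code table, and it omits the literal and pool tables; the actual assignment implementation may structure this differently.

import java.util.List;

class PassOne {
    // Analysis pass: builds the symbol table and the intermediate code, doing LC processing.
    static void run(List<String> sourceLines, AssemblerTables t) {
        for (String line : sourceLines) {
            String[] tok = line.trim().split("\\s+");
            int i = 0;

            // A first token that is neither a mnemonic nor a directive is treated as a label.
            if (!t.opcodeTable.containsKey(tok[0]) && !isDirective(tok[0])) {
                t.symbolTable.put(tok[0], new SymbolTableEntry(tok[0], t.locationCounter));
                i = 1;
            }

            String op = tok[i];
            if (op.equals("START")) {
                t.locationCounter = Integer.parseInt(tok[i + 1]);   // set the initial LC
            } else if (op.equals("END")) {
                break;
            } else if (op.equals("DS")) {
                t.locationCounter += Integer.parseInt(tok[i + 1]);  // reserve <constant> words
            } else if (op.equals("DC")) {
                t.locationCounter += 1;                             // one word per constant
            } else if (op.equals("ORIGIN") || op.equals("EQU") || op.equals("LTORG")) {
                // Advanced directives: see the directive-handling sketch in the
                // ADVANCED ASSEMBLER DIRECTIVES section below.
            } else {
                // Imperative statement: record it with its address and advance LC by its length.
                t.intermediateCode.add(t.locationCounter + " " + line.trim());
                t.locationCounter += t.opcodeTable.get(op).length;
            }
        }
    }

    static boolean isDirective(String s) {
        return s.equals("START") || s.equals("END") || s.equals("DS") || s.equals("DC")
                || s.equals("ORIGIN") || s.equals("EQU") || s.equals("LTORG");
    }
}

Pass-II would then read each intermediate-code line, replace mnemonics with op-codes from the op-code table and symbols with addresses from the symbol table, and emit the machine code.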


ADVANCED ASSEMBLER DIRECTIVES

1. ORIGIN
2. EQU
3. LTORG

ORIGIN :

Syntax : ORIGIN <address spec>

<address spec> can be an <operand spec> or a constant.

It indicates that the location counter should be set to the address given by <address spec>.
This statement is useful when the target program does not consist of consecutive memory words.
Eg) ORIGIN Loop + 2

EQU :
Syntax

<symbol> EQU <address spec>

where <address spec> is an <operand spec> or a constant.

EQU simply associates the name <symbol> with the address specification. No
location counter processing is implied.
Eg) Back EQU Loop

LTORG : (Literal Origin)

Where should the assembler place literals ?


Literals should be placed such that control never reaches them during the execution of the program.

By default, the assembler places the literals after the END statement.
The LTORG statement permits a programmer to specify where literals should be placed.
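As an illustration of how these directives affect only the location counter and the symbol table, the fragment below extends the hypothetical Pass-I skeleton shown earlier. The helper names (handleAdvancedDirective, evaluateAddressSpec) are assumptions, and literal pooling for LTORG is only indicated by a comment.

// Sketch of advanced-directive handling inside Pass-I (names are illustrative).
static void handleAdvancedDirective(String op, String operand, String label, AssemblerTables t) {
    if (op.equals("ORIGIN")) {
        // LC is simply reset; e.g. "ORIGIN Loop + 2" sets LC to address(Loop) + 2.
        t.locationCounter = evaluateAddressSpec(operand, t);
    } else if (op.equals("EQU")) {
        // The label gets the value of the address specification; LC is not changed.
        t.symbolTable.put(label, new SymbolTableEntry(label, evaluateAddressSpec(operand, t)));
    } else if (op.equals("LTORG")) {
        // Allocate every literal collected since the last pool at the current LC,
        // advancing LC by one word per literal (literal and pool tables omitted here).
    }
}

// Evaluates "<symbol>", "<symbol>+<k>", "<symbol>-<k>" or a plain constant (assumed formats).
static int evaluateAddressSpec(String spec, AssemblerTables t) {
    spec = spec.replace(" ", "");
    int plus = spec.indexOf('+');
    int minus = spec.indexOf('-');
    if (plus > 0)  return lookup(spec.substring(0, plus), t) + Integer.parseInt(spec.substring(plus + 1));
    if (minus > 0) return lookup(spec.substring(0, minus), t) - Integer.parseInt(spec.substring(minus + 1));
    if (spec.matches("\\d+")) return Integer.parseInt(spec);
    return lookup(spec, t);
}

static int lookup(String symbol, AssemblerTables t) {
    return t.symbolTable.get(symbol).address;   // assumes the symbol is already defined
}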


Note : (you can also write your own theory for this practical)
Solve the example below for Pass-1 & Pass-2 of the two-pass assembler.

START 200
MOVER AREG, =‘5’
MOVEM AREG, X
L1 MOVER BREG, =‘2’
ORIGIN L1+3
LTORG
NEXT ADD AREG, =‘1’
SUB BREG, =‘2’
BC LT, BACK
LTORG
BACK EQU L1
ORIGIN NEXT+5
MULT CREG, =‘4’
STOP
X DS 1
END

Algorithms :

Write Algorithm for Pass-1 & Pass-2 Assembler.

Flowchart :

Draw Flowchart for Pass-1 & Pass-2 Assembler.

8. Conclusion :

Thus, we have implemented Pass-1 & Pass-2 of the assembler with the symbol table, literal table and pool
table, intermediate code and machine code.

Continuous Assessment of Student :


TC (2)   PR (2)   IN (2)   EC (2)   PN (2)   Total Marks (10)   Faculty Signature

– TC - Timely completion, PR - Performance, IN - Innovation, EC - Efficient Code,
PN - Punctuality and Neatness.


GROUP - A

EXPERIMENT NO : 02

1. Title:

Write a program to create Dynamic Link Library for any mathematical operation and write an application
program to test it. (Java Native Interface / Use VB or VC++).

2. Objectives :
- To understand Dynamic Link Libraries Concepts
- To implement dynamic link library concepts
- To study Visual Basic

3. Problem Statement :
Write a program to create Dynamic Link Library for Arithmetic Operation in VB.net

4. Outcomes:
After completion of this assignment students will be able to:
- Understand the concept of Dynamic Link Library
- Understand the Visual Basic programming language

5. Software Requirements:
- Visual Studio 2010

6. Hardware Requirement:

- Machine: Lenovo ThinkCentre M700, Intel Core i3-6100 (6th Gen.), H81 chipset, 4 GB RAM, 500 GB HDD

7. Theory Concepts:

Dynamic Link Library :

A dynamic link library (DLL) is a collection of small programs that can be loaded when needed by
larger programs and used at the same time. Such a small program lets the larger program communicate
with a specific device, such as a printer or scanner. It is often packaged as a DLL program, which is
usually referred to as a DLL file. DLL files that support specific device operation are known
as device drivers.

A DLL file is often given a ".dll" file name suffix. DLL files are dynamically linked with the
program that uses them during program execution rather than being compiled into the main program.


The advantage of DLL files is that space is saved in random access memory (RAM), because the
files are not loaded into RAM together with the main program. When a DLL file is needed, it is
loaded and run. For example, as long as a user is editing a document in Microsoft Word, the printer
DLL file does not need to be loaded into RAM. If the user decides to print the document, the Word
application causes the printer DLL file to be loaded and run.

A program is separated into modules when using a DLL. With modularized components, a program
can be sold by module, have faster load times and be updated without altering other parts of the
program. DLLs help operating systems and programs run faster, use memory efficiently and take up
less disk space.

Feature of DLL :

DLLs are essentially the same as EXEs; the choice of which to produce as part of the linking process
is one of clarity, since it is possible to export functions and data from either.

- It is not possible to directly execute a DLL, since it requires an EXE for the operating system to
load it through an entry point, hence the existence of utilities like RUNDLL.EXE or
RUNDLL32.EXE, which provide the entry point and a minimal framework for DLLs that contain
enough functionality to execute without much support.

- DLLs provide a mechanism for shared code and data, allowing a developer of shared code/data
to upgrade functionality without requiring applications to be re-linked or re-compiled. From the
application development point of view Windows and OS/2 can be thought of as a collection of
DLLs that are upgraded, allowing applications for one version of the OS to work in a later one,
provided that the OS vendor has ensured that the interfaces and functionality are compatible.

- DLLs execute in the memory space of the calling process and with the same access permissions
which means there is little overhead in their use but also that there is no protection for the calling
EXE if the DLL has any sort of bug.

Difference between the Application & DLL :

- An application can have multiple instances of itself running in the system simultaneously,
whereas a DLL can have only one instance.
- An application can own things such as a stack, global memory, file handles, and a message
queue, but a DLL cannot.


Executable file links to DLL :


An executable file links to (or loads) a DLL in one of two ways:
- Implicit linking
- Explicit linking
Implicit linking is sometimes referred to as static load or load-time dynamic linking. Explicit
linking is sometimes referred to as dynamic load or run-time dynamic linking.
With implicit linking, the executable using the DLL links to an import library (.lib file) provided by
the maker of the DLL. The operating system loads the DLL when the executable using it is loaded.
The client executable calls the DLL's exported functions just as if the functions were contained
within the executable.
With explicit linking, the executable using the DLL must make function calls to explicitly load and
unload the DLL and to access the DLL's exported functions. The client executable must call the
exported functions through a function pointer.
An executable can use the same DLL with either linking method. Furthermore, these mechanisms
are not mutually exclusive, as one executable can implicitly link to a DLL and another can attach to
it explicitly.
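The title of this experiment also permits the Java Native Interface (JNI) route. As a rough sketch only: the class below declares a native method and loads a native library at run time; the library name mathops and the add function are hypothetical, and a matching native DLL (built, for example, in VC++) would still have to exist for the program to run.

// Minimal JNI-style client; "mathops" is a hypothetical DLL name used for illustration.
public class MathClient {

    // Declared in Java, implemented inside the native DLL.
    public native int add(int a, int b);

    static {
        // Run-time loading of the native library (e.g. mathops.dll on Windows).
        System.loadLibrary("mathops");
    }

    public static void main(String[] args) {
        MathClient client = new MathClient();
        System.out.println("2 + 3 = " + client.add(2, 3));
    }
}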

DLL’s Advantages :

- Saves memory and reduces swapping. Many processes can use a single DLL simultaneously,
sharing a single copy of the DLL in memory. In contrast, Windows must load a copy of the
library code into memory for each application that is built with a static link library.
- Saves disk space. Many applications can share a single copy of the DLL on disk. In contrast,
each application built with a static link library has the library code linked into its executable
image as a separate copy.
- Upgrades to the DLL are easier. When the functions in a DLL change, the applications that use
them do not need to be recompiled or relinked as long as the function arguments and return
values do not change. In contrast, statically linked object code requires that the application be
relinked when the functions change.
- Provides after-market support. For example, a display driver DLL can be modified to support a
display that was not available when the application was shipped.
- Supports multi language programs. Programs written in different programming languages can
call the same DLL function as long as the programs follow the function's calling convention. The
programs and the DLL function must be compatible in the following ways: the order in which
the function expects its arguments to be pushed onto the stack, whether the function or the
application is responsible for cleaning up the stack, and whether any arguments are passed in
registers.

- Provides a mechanism to extend the MFC library classes. You can derive classes from the
existing MFC classes and place them in an MFC extension DLL for use by MFC applications.
- Eases the creation of international versions. By placing resources in a DLL, it is much easier to
create international versions of an application. You can place the strings for each language
version of your application in a separate resource DLL and have the different language versions
load the appropriate resources.

Disadvantage :
- A potential disadvantage to using DLLs is that the application is not self-contained; it depends
on the existence of a separate DLL module.

Visual Basic :
Visual Basic is a third-generation event-driven programming language first released by Microsoft in
1991. It evolved from the earlier DOS version called BASIC. BASIC means Beginners' All-
purpose Symbolic Instruction Code. Since then Microsoft has released many versions of Visual
Basic, from Visual Basic 1.0 to the final version Visual Basic 6.0. Visual Basic is a user-friendly
programming language designed for beginners, and it enables anyone to develop GUI window
applications easily.
In 2002, Microsoft released Visual Basic .NET (VB.NET) to replace Visual Basic 6. Thereafter,
Microsoft declared VB6 a legacy programming language in 2008. Fortunately, Microsoft still
provides some form of support for VB6. VB.NET is a fully object-oriented programming language
implemented on the .NET Framework. It was created to cater to the development of web as well
as mobile applications. However, many developers still favor Visual Basic 6.0 over its successor
Visual Basic .NET.

8. Design (architecture) :


STEP FOR DLL PROGRAM :

1. Create new project → Project → Visual Basic → Windows Forms Application → name the project →
click OK.
2. Design the form.
3. Right click on the solution → Add → New Project → Windows → Empty Project → name the dll
file (p5dll) → OK.
4. Right click on the dll file (p5dll) → Add → Module → rename the module as p5dll → OK.
5. Right click on the dll file → Properties → Application type (Class Library) → save the file.
6. Right click on the main project → Add References (the created dll file will be displayed) → select the dll file →
OK.
7. Write the code in the button click event of the main project (i.e. the form of step no. 2).
8. Run the application.
9. Display the result.

9. Algorithms(procedure) :

Note: you should write algorithm & procedure as per program/concepts

10. Flowchart :

Note: you should draw flowchart as per algorithm/procedure

11. Conclusion:
Thus, I have studied visual programming and implemented dynamic link library application
for arithmetic operation

Continuous Assessment of Student :


TC (2)   PR (2)   IN (2)   EC (2)   PN (2)   Total Marks (10)   Faculty Signature

– TC - Timely completion, PR - Performance, IN - Innovation, EC - Efficient Code,
PN - Punctuality and Neatness.

GROUP - B

EXPERIMENT NO : 03

1. Title:

Write a program to solve classical problems of synchronization using mutex and semaphore.

2. Objectives :
- To understand the reader-writer synchronization problem
- To solve the reader-writer synchronization problem using mutex and semaphore

3. Problem Statement :
Write a program to solve classical problems of synchronization using mutex and semaphore.
(Reader-Writer Problem)

4. Outcomes:
After completion of this assignment students will be able to:
- Understand the concept of Deadlock, Semaphore, Mutex
- Understand classical synchronization problems

5. Software Requirements:
- Eclipse IDE

6. Hardware Requirement:

- Machine: Lenovo ThinkCentre M700, Intel Core i3-6100 (6th Gen.), H81 chipset, 4 GB RAM, 500 GB HDD

7. Theory Concepts:
There is a data area shared among a number of processes.
• The data area could be a file, a block of main memory, or even a bank of processor
registers.
• There are a number of processes that only read the data area (readers) and a number that
only write to the data area (writers).
• The conditions that must be satisfied are:
➢ Any number of readers may simultaneously read the file.
➢ Only one writer at a time may write to the file.
➢ If a writer is writing to the file, no reader may read it.


Classical Synchronization Problems :


We will see a number of classical problems of synchronization as examples of a large class of
concurrency-control problems. In our solutions to the problems, we use semaphores for
synchronization, since that is the traditional way to present such solutions. However, actual
implementations of these solutions could use mutex locks in place of binary semaphores.

These problems are used for testing nearly every newly proposed synchronization scheme. The
following problems of synchronization are considered as classical problems:

1. Bounded-buffer (or Producer-Consumer) Problem,


2. Dining-Philosophers Problem,
3. Readers and Writers Problem,
4. Sleeping Barber Problem

1. Bounded-buffer (or Producer-Consumer) Problem:


The bounded-buffer problem is also called the producer-consumer problem. This problem is generalized in
terms of the producer-consumer relationship. The solution to this problem is to create two counting
semaphores, “full” and “empty”, to keep track of the current number of full and empty buffers
respectively. Producers produce items and consumers consume items, but both use one of the buffers
each time.
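As a hedged illustration of this counting-semaphore solution (the class name, buffer size and item count below are arbitrary choices, not part of the assignment statement), a compact Java version could look like this:

import java.util.LinkedList;
import java.util.Queue;
import java.util.concurrent.Semaphore;

// Bounded-buffer (producer-consumer) using counting semaphores "empty" and "full".
public class BoundedBuffer {
    static final int SIZE = 5;
    static final Queue<Integer> buffer = new LinkedList<>();
    static final Semaphore empty = new Semaphore(SIZE);  // number of free slots
    static final Semaphore full  = new Semaphore(0);     // number of filled slots
    static final Semaphore mutex = new Semaphore(1);     // protects the buffer itself

    public static void main(String[] args) {
        Runnable producer = () -> {
            for (int item = 1; item <= 10; item++) {
                try {
                    empty.acquire();                     // wait for a free slot
                    mutex.acquire();
                    buffer.add(item);
                    System.out.println("Produced " + item);
                    mutex.release();
                    full.release();                      // signal a filled slot
                } catch (InterruptedException e) { return; }
            }
        };
        Runnable consumer = () -> {
            for (int i = 0; i < 10; i++) {
                try {
                    full.acquire();                      // wait for a filled slot
                    mutex.acquire();
                    System.out.println("Consumed " + buffer.remove());
                    mutex.release();
                    empty.release();                     // signal a free slot
                } catch (InterruptedException e) { return; }
            }
        };
        new Thread(producer).start();
        new Thread(consumer).start();
    }
}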

2. Dining-Philosophers Problem:
The Dining-Philosophers Problem states that K philosophers are seated around a circular table with one
chopstick between each pair of philosophers. A philosopher may eat if he can pick up the two chopsticks
adjacent to him. Each chopstick may be picked up by either of its two adjacent philosophers, but not by
both. This problem involves the allocation of limited resources to a group of processes in a deadlock-free
and starvation-free manner.


3. Readers and Writers Problem:


Suppose that a database is to be shared among several concurrent processes. Some of these processes
may want only to read the database, whereas others may want to update (that is, to read and write)
the database. We distinguish between these two types of processes by referring to the former as
readers and to the latter as writers. In OS terms this situation is called the readers-writers
problem. Problem parameters:
- One set of data is shared among a number of processes.
- Once a writer is ready, it performs its write. Only one writer may write at a time.
- If a process is writing, no other process can read the data.
- If at least one reader is reading, no other process can write.
- Readers only read; they do not write.

4. Sleeping Barber Problem:


A barber shop has one barber, one barber chair and N chairs to wait in. When there are no customers, the barber
goes to sleep in the barber chair and must be woken when a customer comes in. When the barber is cutting
hair, new customers take the empty seats to wait, or leave if there is no vacancy.

Semaphore:
Definition: Semaphores are system variables used for the synchronization of processes.
A semaphore can be used in other synchronization problems besides mutual exclusion.

Two types of Semaphore:


➢ Counting semaphore – integer value can range over an unrestricted domain
➢ Binary semaphore –
Integer value can range only between 0 and 1; can be simpler to implement
Also known as mutex locks


Semaphore functions:
Package: import java.util.concurrent.Semaphore;
1) To initialize a semaphore:
Semaphore Sem1 = new Semaphore(1);

2) To wait on a semaphore:
/* Wait(S):
     while S <= 0
         no-op;
     S--;
*/
Sem1.acquire();

3) To signal on a semaphore:
/* Signal(S):
     S++;
*/
Sem1.release();

8. Algorithms(procedure) :

Note: you should write algorithm & procedure as per program/concepts


(this is a sample algorithm, for reference)
1. import java.util.concurrent.Semaphore;
2. Create a class RW
3. Declare semaphores – mutex and wrt
4. Declare integer variable readcount = 0
5. Create a nested class Reader implements Runnable
a. Override run method (Reader Logic)
i. wait(mutex);
ii. readcount := readcount +1;
iii. if readcount = 1 then
iv. wait(wrt);
v. signal(mutex);
vi. …
vii. reading is performed
viii. …
ix. wait(mutex);
x. readcount := readcount – 1;
xi. if readcount = 0 then signal(wrt);
xii. signal(mutex);
6. Create a nested class Writer implements Runnable
a. Override run method (Writer Logic)
i. wait(wrt);
ii. …

iii. writing is performed


iv. …
v. signal(wrt);
7. Create a class main
a. Create threads for Reader and Writer
b. Start these threads
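A compact runnable version of the above algorithm, given purely as a reference sketch (the number of threads and the simulated read/write work are arbitrary), could be:

import java.util.concurrent.Semaphore;

// Readers-writers solution using a binary semaphore wrt and a mutex protecting readcount.
public class RW {
    static final Semaphore mutex = new Semaphore(1);
    static final Semaphore wrt = new Semaphore(1);
    static int readcount = 0;

    static class Reader implements Runnable {
        public void run() {
            try {
                mutex.acquire();
                readcount = readcount + 1;
                if (readcount == 1) wrt.acquire();     // first reader locks out writers
                mutex.release();

                System.out.println(Thread.currentThread().getName() + " is reading");

                mutex.acquire();
                readcount = readcount - 1;
                if (readcount == 0) wrt.release();     // last reader lets writers in
                mutex.release();
            } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        }
    }

    static class Writer implements Runnable {
        public void run() {
            try {
                wrt.acquire();
                System.out.println(Thread.currentThread().getName() + " is writing");
                wrt.release();
            } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        }
    }

    public static void main(String[] args) {
        new Thread(new Reader(), "Reader-1").start();
        new Thread(new Writer(), "Writer-1").start();
        new Thread(new Reader(), "Reader-2").start();
    }
}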

9. Flowchart :

Note: you should draw flowchart as per algorithm/procedure

10. Conclusion:
Thus, I have studied the classical synchronization problems and implemented the reader-writer
problem using semaphore and mutex.

Continuous Assessment of Student :


TC (2)   PR (2)   IN (2)   EC (2)   PN (2)   Total Marks (10)   Faculty Signature

– TC - Timely completion, PR - Performance, IN - Innovation, EC - Efficient Code,
PN - Punctuality and Neatness.


GROUP - B
EXPERIMENT NO : 04

1. Title:
Write a Java program (using OOP features) to implement following scheduling algorithms: FCFS ,
SJF (Preemptive), Priority (Non-Preemptive) and Round Robin (Preemptive).

2. Objectives :
- To understand OS & scheduling concepts
- To implement the FCFS, SJF, RR & Priority scheduling algorithms
- To study scheduling and schedulers

3. Problem Statement :
Write a Java program (using OOP features) to implement following scheduling algorithms: FCFS ,
SJF, Priority and Round Robin .

4. Outcomes:
After completion of this assignment students will be able to:
- Know different scheduling policies
- Compare different scheduling algorithms

5. Software Requirements:
JDK/Eclipse

6. Hardware Requirement:

- Machine: Lenovo ThinkCentre M700, Intel Core i3-6100 (6th Gen.), H81 chipset, 4 GB RAM, 500 GB HDD

7. Theory Concepts:

CPU Scheduling:

• CPU scheduling refers to a set of policies and mechanisms built into the operating systems that govern
the order in which the work to be done by a computer system is completed.

• The scheduler is an OS module that selects the next job to be admitted into the system and the next process to
run.

• The primary objective of scheduling is to optimize system performance in accordance with the criteria
deemed most important by the system designers.

What is scheduling?

Scheduling is defined as the process that governs the order in which work is to be done. Scheduling
is needed wherever a number of jobs or pieces of work are to be performed; it then requires a plan, i.e.
a schedule, that decides the order in which the jobs are to be performed. CPU scheduling is the best example of
scheduling.

What is scheduler?
1. A scheduler is an OS module that selects the next job to be admitted into the system and the next
process to run.
2. The primary objective of the scheduler is to optimize system performance in accordance with the
criteria deemed important by the system designers. In short, the scheduler is that module of the OS which
schedules the programs in an efficient manner.

Necessity of scheduling
• Scheduling is required when a number of jobs are to be performed by the CPU.
• Scheduling provides a mechanism to give an order to each piece of work to be done.
• The primary objective of scheduling is to optimize system performance.
• Scheduling makes it easy for the CPU to execute the processes in an efficient manner.

Types of schedulers
In general, there are three different types of schedulers which may co-exist in a complex operating
system.
• Long term scheduler
• Medium term scheduler
• Short term scheduler.

Long Term Scheduler
• The long term scheduler, when present, works with the batch queue and selects the next batch job to be
executed.
• Batch is usually reserved for resource-intensive (processor time, memory, special I/O devices), low
priority programs that may be used as fillers during periods of low activity of interactive jobs.
• Batch jobs usually also contain programmer-assigned or system-assigned estimates of their resource
needs, such as memory size, expected execution time and device requirements.
• The primary goal of the long term scheduler is to provide a balanced mix of jobs.

Medium Term Scheduler


• After executing for a while, a running process may become suspended by making an I/O request or by
issuing a system call.
• When a number of processes become suspended, the remaining supply of ready processes (in systems
where all suspended processes remain resident in memory) may become reduced to a level that impairs the
functioning of the scheduler.
• The medium term scheduler is in charge of handling the swapped-out processes.
• It has little to do while a process remains suspended.

Short Term Scheduler


• The short term scheduler allocates the processor among the pool of ready processes resident in
memory.
• Its main objective is to maximize system performance in accordance with the chosen set of criteria.
• Some of the events introduced thus far that cause rescheduling, by virtue of their ability to change the
global system state, are:
• Clock ticks
• Interrupt and I/O completions
• Most operational OS calls
• Sending and receiving of signals
• Activation of interactive programs
• Whenever one of these events occurs, the OS invokes the short term scheduler.

Scheduling Criteria :
- CPU Utilization:

Keep the CPU as busy as possible. It ranges from 0 to 100%. In practice, it ranges from 40 to 90%.

- Throughput:

Throughput is the rate at which processes are completed per unit of time.

- Turnaround time:

This is how long a process takes to complete. It is calculated as the time gap between the
submission of a process and its completion.

- Waiting time:

Waiting time is the sum of the time periods spent waiting in the ready queue.

- Response time:

Response time is the time it takes to start responding, measured from the submission time. It is calculated as the
amount of time from when a request is submitted until the first response is produced.

Non-preemptive Scheduling :

In non-preemptive mode, once a process enters the running state, it continues to execute until it
terminates or blocks itself to wait for Input/Output or to request some operating system service.

Preemptive Scheduling :

In preemptive mode, the currently running process may be interrupted and moved to the ready state by the
operating system when a new process arrives or when an interrupt occurs. Preemptive policies may incur
greater overhead than non-preemptive ones, but they may provide better service.

It is desirable to maximize CPU utilization and throughput, and to minimize turnaround time, waiting
time and response time.

Types of scheduling Algorithms


• In general, scheduling disciplines may be pre-emptive or non-pre-emptive.
• In batch, non-pre-emptive implies that once scheduled, a selected job runs to completion.
There are different types of scheduling algorithms, such as:
- FCFS (First Come First Serve)
- SJF (Shortest Job First)
- Priority scheduling
- Round Robin scheduling

First Come First Serve Algorithm


• FCFS is the simplest scheduling discipline.
• The workload is simply processed in the order of arrival, with no pre-emption.
• FCFS scheduling may result in poor performance.
• Since there is no discrimination on the basis of required service, short jobs may suffer considerable
turnaround delay and waiting time.

Advantages

- Better for long processes
- Simple method (i.e., minimum overhead on the processor)
- No starvation

Disadvantages

- The convoy effect occurs: even a very small process must wait for its turn to utilize the CPU.
  A short process behind a long process results in lower CPU utilization.
- Throughput is not emphasized.

Note : solve a complete example as studied in the practical (the above is just a sample); you can take any
example.

Shortest Job First Algorithm :


- This is also known as shortest job first, or SJF.
- It can be used as a non-preemptive or a pre-emptive scheduling algorithm.
- It is the best approach to minimize waiting time.
- It is easy to implement in batch systems where the required CPU time is known in advance.
- It is impossible to implement in interactive systems where the required CPU time is not known.
- The processor should know in advance how much time the process will take.

Advantages

- It gives superior turnaround time performance to shortest process next, because a short job is given
immediate preference over a running longer job.
- Throughput is high.

Disadvantages

- Elapsed time (i.e., execution-completed time) must be recorded, which results in additional overhead on
the processor.
- Starvation may be possible for the longer processes.

This algorithm is divided into two types:


• Pre-emptive SJF
• Non-pre-emptive SJF

• Pre-emptive SJF Algorithm:

In this type of SJF, the shortest job is executed first. The job having the least arrival time is taken first for
execution, and it runs until the next job arrives, at which point the scheduling decision is taken again.

Note : solve a complete example as studied in the practical (the above is just a sample); you can take any
example.

Non-pre-emptive SJF Algorithm:


In this algorithm, the job having the least burst time is selected first for execution. It is executed for its total
burst time, and then the next job having the least burst time is selected.


Note : solve a complete example as studied in the practical (the above is just a sample); you can take any
example.

Round Robin Scheduling :


- Round Robin is a preemptive process scheduling algorithm.
- Each process is provided a fixed time to execute, called a quantum.
- Once a process has executed for the given time period, it is preempted and another process executes for a
given time period.
- Context switching is used to save the states of preempted processes.


Advantages

- Round-robin is effective in a general-purpose, time-sharing system or transaction-processing
system.
- Fair treatment for all the processes.
- Overhead on the processor is low.
- Good response time for short processes.

Disadvantages

- Care must be taken in choosing the quantum value.
- There is processing overhead in handling the clock interrupt.
- Throughput is low if the time quantum is too small.


Note : solve a complete example as studied in the practical (the above is just a sample); you can take any
example.

Priority Scheduling :
- Priority scheduling is a non-preemptive algorithm and one of the most common scheduling
algorithms in batch systems.
- Each process is assigned a priority. The process with the highest priority is executed first, and so on.
- Processes with the same priority are executed on a first come first served basis.
- Priority can be decided based on memory requirements, time requirements or any other resource
requirement.

Advantage
- Good response for the highest priority processes.

Disadvantage
- Starvation may be possible for the lowest priority processes.

Note : solve a complete example as studied in the practical; you can take any example.


8. Algorithms(procedure) :

FCFS :

Step 1: Start the process


Step 2: Accept the number of processes in the ready Queue
Step 3: For each process in the ready Q, assign the process id and accept the CPU burst time
Step 4: Set the waiting time of the first process as ‘0’ and its burst time as its turnaround time
Step 5: For each process in the ready queue calculate
(a) Waiting time for process(n)= waiting time of process (n-1) + Burst time of process(n-1)
(b) Turn around time for Process(n)= waiting time of Process(n)+ Burst time for process(n)
Step 6: Calculate
(a) Average waiting time = Total waiting Time / Number of process
(b) Average Turnaround time = Total Turnaround Time / Number of process
Step 7: Stop the process
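A minimal Java sketch of the FCFS steps above; the burst times are hard-coded purely for illustration, whereas the actual program would accept them as input:

// FCFS: waiting time of process n = waiting time of process (n-1) + burst time of process (n-1).
public class FCFS {
    public static void main(String[] args) {
        int[] burst = {24, 3, 3};                  // example burst times (assumed input)
        int n = burst.length;
        int[] waiting = new int[n];
        double totalWait = 0, totalTurnaround = 0;

        waiting[0] = 0;                            // the first process never waits
        for (int i = 1; i < n; i++) {
            waiting[i] = waiting[i - 1] + burst[i - 1];
        }
        for (int i = 0; i < n; i++) {
            int turnaround = waiting[i] + burst[i];
            totalWait += waiting[i];
            totalTurnaround += turnaround;
            System.out.println("P" + (i + 1) + "  waiting=" + waiting[i] + "  turnaround=" + turnaround);
        }
        System.out.println("Average waiting time    = " + totalWait / n);
        System.out.println("Average turnaround time = " + totalTurnaround / n);
    }
}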

SJF :
Step 1: Start the process
Step 2: Accept the number of processes in the ready queue
Step 3: For each process in the ready queue, assign the process id and accept the CPU burst time
Step 4: Sort the ready queue according to burst time, from lowest to highest.
Step 5: Set the waiting time of the first process as ‘0’ and its turnaround time as its burst time.
Step 6: For each process in the ready queue, calculate
(a) Waiting time for process(n) = waiting time of process(n-1) + burst time of process(n-1)
(b) Turnaround time for process(n) = waiting time of process(n) + burst time of process(n)
Step 7: Calculate
(a) Average waiting time = Total waiting time / Number of processes
(b) Average turnaround time = Total turnaround time / Number of processes
Step 8: Stop the process
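Sorting by burst time and then applying the same formulas gives a non-preemptive SJF sketch. As before, the data is an arbitrary example, and all processes are assumed to arrive at time 0:

import java.util.Arrays;

// Non-preemptive SJF: sort by burst time, then compute times exactly as in FCFS.
public class SJF {
    public static void main(String[] args) {
        int[] burst = {6, 8, 7, 3};                // example burst times (assumed input)
        Arrays.sort(burst);                        // shortest job first
        int n = burst.length;
        int[] waiting = new int[n];
        double totalWait = 0, totalTurnaround = 0;

        for (int i = 1; i < n; i++) {
            waiting[i] = waiting[i - 1] + burst[i - 1];
        }
        for (int i = 0; i < n; i++) {
            int turnaround = waiting[i] + burst[i];
            totalWait += waiting[i];
            totalTurnaround += turnaround;
            System.out.println("Job " + (i + 1) + " in SJF order:  waiting=" + waiting[i]
                    + "  turnaround=" + turnaround);
        }
        System.out.println("Average waiting time    = " + totalWait / n);
        System.out.println("Average turnaround time = " + totalTurnaround / n);
    }
}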


RR :
Step 1: Start the process
Step 2: Accept the number of processes in the ready Queue and time quantum (or) time slice
Step 3: For each process in the ready Q, assign the process id and accept the CPU burst time
Step 4: Calculate the no. of time slices for each process where
No. of time slice for process(n) = burst time process(n)/time slice
Step 5: If the burst time is less than the time slice then the no. of time slices =1.
Step 6: Consider the ready queue is a circular Q, calculate
(a) Waiting time for process(n) = waiting time of process(n-1)+ burst time of process(n-1 ) +
the time difference in getting the CPU from process(n-1)
(b) Turn around time for process(n) = waiting time of process(n) + burst time of process(n)+
the time difference in getting CPU from process(n).
Step 7: Calculate
(a) Average waiting time = Total waiting time / Number of processes
(b) Average turnaround time = Total turnaround time / Number of processes
Step 8: Stop the process.
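For Round Robin, a small simulation over the remaining burst times is the usual approach. The sketch below (quantum and burst values are arbitrary examples, and all processes are assumed to arrive at time 0) records each process's completion time and derives waiting and turnaround times from it:

// Round Robin simulation: cycle through the processes, running each for at most one quantum.
public class RoundRobin {
    public static void main(String[] args) {
        int[] burst = {10, 5, 8};                  // example burst times (assumed input)
        int quantum = 2;                           // example time slice
        int n = burst.length;
        int[] remaining = burst.clone();
        int[] completion = new int[n];
        int time = 0, finished = 0;

        while (finished < n) {
            for (int i = 0; i < n; i++) {
                if (remaining[i] == 0) continue;
                int run = Math.min(quantum, remaining[i]);
                time += run;
                remaining[i] -= run;
                if (remaining[i] == 0) {           // the process finishes at the current time
                    completion[i] = time;
                    finished++;
                }
            }
        }
        double totalWait = 0, totalTurnaround = 0;
        for (int i = 0; i < n; i++) {
            int turnaround = completion[i];        // arrival time taken as 0
            int waiting = turnaround - burst[i];
            totalWait += waiting;
            totalTurnaround += turnaround;
            System.out.println("P" + (i + 1) + "  waiting=" + waiting + "  turnaround=" + turnaround);
        }
        System.out.println("Average waiting time    = " + totalWait / n);
        System.out.println("Average turnaround time = " + totalTurnaround / n);
    }
}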

Priority Scheduling :
Algorithm :
Step 1: Start the process
Step 2: Accept the number of processes in the ready queue
Step 3: For each process in the ready queue, assign the process id and accept the CPU burst time and priority
Step 4: Sort the ready queue according to priority, so that the process with the highest priority comes first
Step 5: Set the waiting time of the first process as ‘0’ and its turnaround time as its burst time.
Step 6: For each process in the ready queue, calculate
(a) Waiting time for process(n) = waiting time of process(n-1) + burst time of process(n-1)
(b) Turnaround time for process(n) = waiting time of process(n) + burst time of process(n)
Step 7: Calculate
(a) Average waiting time = Total waiting time / Number of processes
(b) Average turnaround time = Total turnaround time / Number of processes
Step 8: Stop the process
Note: you can write algorithm & procedure as per your program/concepts
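Finally, a non-preemptive priority sketch: processes are ordered by priority (a smaller number is assumed to mean higher priority here, which is only a convention chosen for this example) and then the FCFS formulas are applied:

import java.util.Arrays;

// Non-preemptive priority scheduling (all processes assumed to arrive at time 0).
public class PriorityScheduling {
    public static void main(String[] args) {
        // Each row is {process id, burst time, priority}; smaller priority value = higher priority (assumed).
        int[][] p = { {1, 10, 3}, {2, 1, 1}, {3, 2, 4}, {4, 5, 2} };
        Arrays.sort(p, (a, b) -> Integer.compare(a[2], b[2]));   // highest priority first

        int n = p.length;
        int waiting = 0;
        double totalWait = 0, totalTurnaround = 0;
        for (int i = 0; i < n; i++) {
            int turnaround = waiting + p[i][1];
            System.out.println("P" + p[i][0] + "  waiting=" + waiting + "  turnaround=" + turnaround);
            totalWait += waiting;
            totalTurnaround += turnaround;
            waiting += p[i][1];                                  // the next process also waits for this burst
        }
        System.out.println("Average waiting time    = " + totalWait / n);
        System.out.println("Average turnaround time = " + totalTurnaround / n);
    }
}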
