OS (Unit 5)
Unit 5
Memory Management
Memory management must load user programs into user memory, dump those programs out in case of error, perform I/O to and from user memory, and provide many other services.
Address Binding:
The process address space is the set of logical addresses that a process references in its code. The operating system maps these logical addresses to physical addresses when memory is allocated to the program. Address binding refers to this mapping of logical addresses to physical addresses; each binding is a mapping from one address space to another.
For execution, a program must be brought into memory. The programs on disk waiting to be brought into memory form the input queue. In most cases a user program goes through several steps, and during these steps addresses can be represented in different ways:
• Symbolic address: the addresses used in the source code. Variable names, constants and instruction labels are the basic elements of symbolic addresses.
• Relative address: at compile time, the compiler binds these symbolic addresses to relocatable addresses.
• Physical address: the linkage editor or loader in turn binds the relocatable addresses to absolute addresses (such as 74014). The loader generates these addresses when the program is loaded into memory.
The binding of instructions and data to memory addresses can be done at any of these steps:
• Compile Time:
If it is known at compile time where the process will reside in memory, then absolute code (real addresses) can be generated. For example, if it is known that a user process will reside starting at location R, then the generated compiler code will start at that location and extend up from there. If at some later time the starting location changes, it will be necessary to recompile the code.
• Load Time:
If it is not known at compile time where the process will reside in memory, then the compiler must generate relocatable code. In this case final binding is delayed until load time. If the starting address changes, only the user code needs to be reloaded to incorporate the changed value. The loader translates the relocatable addresses into absolute addresses by adding the base address of the process in main memory to every logical address.
• Execution Time:
If the process can be moved during its execution from one memory segment to another, then binding must be delayed until run time. Special hardware (such as an MMU with base and limit registers) must be available for this scheme.
Difference between logical and physical addresses:
• A logical address does not exist physically in memory; a physical address is a location in memory that can be accessed physically.
• The user can view the logical addresses of a program but can never view the physical addresses of memory.
• The user uses logical addresses to access memory; a logical address must be converted to a physical address in order to execute the program.
• Logical addresses are generated by the CPU; physical addresses are computed by the MMU.
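The MMU's conversion of a logical address to a physical address can be sketched as a base-plus-offset translation with a limit check. This is a minimal illustration, not real hardware; the register values (14000 and 3000) are made up for the example.

```python
# A sketch of address translation as performed by an MMU: the
# relocation (base) register is added to every logical address, and
# the limit register guards against out-of-range accesses.
# BASE and LIMIT values here are illustrative, not from any real system.

BASE, LIMIT = 14000, 3000    # process loaded at 14000, size 3000 bytes

def translate(logical):
    if not 0 <= logical < LIMIT:
        raise MemoryError("addressing error: trap to the OS")
    return BASE + logical     # physical address

print(translate(346))         # logical 346 maps to physical 14346
```

A logical address outside the range [0, LIMIT) causes a trap, which is how the hardware protects one process's memory from another.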
Dynamic Loading:
Dynamic loading is a mechanism in which only the required routines are loaded into memory first and the other routines are loaded only when they are required or called, i.e. a routine is not loaded until it is called. Only the main program is loaded into memory at the start; other routines are loaded when they are needed. All the routines are kept on disk in a relocatable load format and loaded into memory whenever they are required.
Advantages:
• A routine is loaded only when it is required.
• Useful when large amounts of code are needed to handle infrequently occurring cases, such as error routines.
• Does not require special support from the OS; it is the responsibility of the user to design the program to take advantage of this method.
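The idea can be sketched in Python, where a module plays the role of a routine on disk: it is not loaded at program start, only at the moment it is first needed. The function name `report_mean` is made up for this illustration; `statistics` is a standard library module.

```python
# A minimal sketch of dynamic loading: the needed "routine" (here, the
# standard library's statistics module) is loaded only when the calling
# code actually reaches it, not when the program starts.
import importlib

def report_mean(values):
    # The statistics module is loaded on demand, at the point of use.
    stats = importlib.import_module("statistics")
    return stats.mean(values)

print(report_mean([1, 2, 3, 4]))   # module is loaded here, on first use
```

If `report_mean` is never called, the module is never loaded, which is exactly the saving dynamic loading aims for with rarely used code such as error routines.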
Dynamic linking:
Dynamic linking is a mechanism in which the required system libraries are linked to a user program only when it runs, i.e. system libraries are not linked until execution time. For example, suppose a program calls a function whose body resides in a separate system library. Only when that function is actually called is the required library routine loaded into memory to provide the body of the function. This mechanism is called dynamic linking.
With dynamic linking, a stub is included in the image for each library-routine reference. A stub is a piece of code that indicates how to locate the appropriate library routine, or how to load the library if the routine is not already present. When a stub is executed, it checks whether the needed routine is already in memory; if it is not, the program loads the routine into memory.
Advantages:
• Without this scheme, each program in the system must include a copy of every library routine it references in its executable image. This wastes both disk space and main memory.
Internal fragmentation: when the memory allocated to a process is slightly larger than the memory it requested, the unused memory internal to the allocated partition is wasted.
III. Worst fit: this strategy allocates the largest hole. The entire list of free holes is searched for the hole that is largest among all the holes. This strategy produces the largest leftover hole.
An example of first fit, best fit and worst fit is as follows:
Given five memory partitions of 100KB, 500KB, 200KB, 300KB and 600KB (in order), how would the first fit, best fit and worst fit algorithms place processes of 212KB, 417KB, 112KB and 426KB (in order) using variable partitioning? Which algorithm makes the most efficient use of memory?
Solution:
First Fit:
Here, the 212KB process comes first and the search is made from the start of memory. The first memory block whose size is greater than or equal to 212KB is allocated to the 212KB process. Since variable partitioning is used, the leftover (unused) space remains available for another process. The allocation continues until no memory block can satisfy a process's memory requirement.
Best Fit:
Here, the 212KB process comes first and the search is made for the memory block that is just slightly greater than or equal to 212KB. In this case the 300KB block is just slightly greater than 212KB, so that block is allocated. The further process is shown in the figure below:
Worst Fit:
Here, the 212KB process comes first and the search is made for the memory block with the largest size; the largest of all the blocks is selected. In this case 600KB is the largest of all, so the 600KB block is allocated to the 212KB process.
Note: further examples of first fit, best fit and worst fit using variable partitioning and fixed partitioning were done in class, so refer to your class notes for further solutions.
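The three placement strategies on the example above can be sketched as one allocator. This is a simulation under the variable-partitioning assumption stated in the problem (a leftover hole stays usable); the function name `allocate` and the index-based return value are choices made for the illustration.

```python
# First-, best-, and worst-fit allocation with variable partitioning.
# Returns, for each process, the index of the hole it was placed in
# (None means no hole was large enough and the process must wait).

def allocate(holes, processes, strategy):
    holes = holes[:]                       # hole sizes, shrunk as we allocate
    placements = []
    for p in processes:
        candidates = [i for i, h in enumerate(holes) if h >= p]
        if not candidates:
            placements.append(None)        # process must wait
            continue
        if strategy == "first":
            i = candidates[0]              # first hole big enough
        elif strategy == "best":
            i = min(candidates, key=lambda i: holes[i])   # tightest fit
        else:                              # "worst": largest hole
            i = max(candidates, key=lambda i: holes[i])
        holes[i] -= p                      # leftover remains a usable hole
        placements.append(i)
    return placements

holes = [100, 500, 200, 300, 600]
procs = [212, 417, 112, 426]
print(allocate(holes, procs, "first"))   # the 426KB process must wait
print(allocate(holes, procs, "best"))    # all four processes fit
print(allocate(holes, procs, "worst"))   # the 426KB process must wait
```

Only best fit places all four processes (212KB in the 300KB block, 417KB in 500KB, 112KB in 200KB, 426KB in 600KB), which is why it makes the most efficient use of memory in this example.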
Working mechanism of swapping:
• The system maintains a ready queue consisting of all processes whose memory images are on the backing store.
• Whenever the CPU scheduler decides to execute a process, it calls the dispatcher. The dispatcher checks whether the next process in the queue is in memory.
• If it is not, and if there is no free memory region, the dispatcher swaps out the process currently in memory and swaps in the desired process.
Here, the process gets exactly the memory size that it needs. Memory is not divided into fixed parts; a whole free block is allocated to the user process.
2. Virtual memory:
Virtual memory is a technique that allows the execution of processes that are not completely in memory. The combined size of the program, data and stack may exceed the amount of physical memory available for it, and the entire program may not all be needed at the same time. So when using virtual memory, the OS keeps those parts of the program that are currently in use or currently needed in physical memory and the rest on secondary disk.
Virtual memory involves the separation of logical memory as perceived by users from physical memory. This separation allows an extremely large virtual memory to be provided for users when only a smaller physical memory is available. The virtual address space of a process refers to the logical view of how the process is stored in memory: the process begins at a certain logical address (say 0) and exists in contiguous memory.
For example, a 512MB program can run on a 256MB system by carefully choosing which part of the 512MB to keep in memory at each instant, with pieces of the program being swapped between disk and memory as needed.
FIFO Page Replacement:
This method replaces the page that was brought into memory first, i.e. the oldest page in memory. Here we have 3 frames which are initially empty. The first three references 7, 0 and 1 cause page faults and are brought into these frames. The next reference 2 replaces 7 because 7 was brought in first. 0 is the next reference; since 0 is already in a frame, it causes no page fault and no replacement. The process continues up to the last reference.
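The FIFO behaviour above can be sketched as a short simulation. The reference string used below is the classic 20-reference example that begins 7, 0, 1, 2, 0, 3, ... as in the description; with 3 frames it produces 15 page faults.

```python
# A sketch of FIFO page replacement: pages enter a queue in arrival
# order, and on a fault with all frames full, the page that was
# brought in first (the front of the queue) is evicted.
from collections import deque

def fifo_faults(refs, n_frames):
    frames = deque()                  # left end = oldest resident page
    faults = 0
    for page in refs:
        if page in frames:
            continue                  # hit: no fault, no replacement
        faults += 1
        if len(frames) == n_frames:
            frames.popleft()          # evict the page loaded first
        frames.append(page)
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
print(fifo_faults(refs, 3))           # 15 page faults with 3 frames
```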
Optimal Page Replacement:
This method replaces the page that will not be used for the longest period of time. It guarantees the lowest possible page-fault rate for a fixed number of frames. It is difficult or impossible to implement, because future knowledge of the reference string would be required, but it serves as a standard against which to compare other algorithms.
The following example illustrates the working of optimal page replacement.
Here we have 3 frames which are initially empty. The first three references 7, 0 and 1 cause page faults and are brought into these frames. The next reference 2 replaces 7 because, out of 7, 0 and 1, 7 will not be used for the longest period of time. This process continues up to the last reference.
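The optimal strategy can be sketched by looking ahead in the reference string: on a fault with all frames full, the resident page whose next use lies farthest in the future (or that is never used again) is evicted. On the same classic reference string, it yields 9 faults with 3 frames, the minimum possible.

```python
# A sketch of optimal (OPT) page replacement, using full knowledge of
# the future reference string to pick the victim page.

def opt_faults(refs, n_frames):
    frames = []
    faults = 0
    for i, page in enumerate(refs):
        if page in frames:
            continue                          # hit: no replacement
        faults += 1
        if len(frames) == n_frames:
            def next_use(p):
                future = refs[i + 1:]
                # Pages never used again sort as farthest of all.
                return future.index(p) if p in future else len(refs)
            frames.remove(max(frames, key=next_use))
        frames.append(page)
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
print(opt_faults(refs, 3))    # 9 page faults: the theoretical minimum here
```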
LRU (Least Recently Used) Page Replacement:
This method replaces the page that has not been used for the longest period of time in the past. Here we have 3 frames which are initially empty. The first three references 7, 0 and 1 cause page faults and are brought into these frames. The next reference 2 replaces 7 because, out of 7, 0 and 1, 7 was used least recently (7 has not been used for the longest time up to now). After this, reference 0 makes a request and no replacement takes place, as 0 is already in a frame. When reference 3 makes a request, 3 replaces 1 because, out of 2, 0 and 1, 1 has been used least recently. This process continues up to the last reference.
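LRU can be sketched with an ordered dictionary that tracks recency: every access moves the page to the "recent" end, so the least recently used page always sits at the front. On the same classic reference string it gives 12 faults with 3 frames, between FIFO's 15 and OPT's 9.

```python
# A sketch of LRU page replacement: an OrderedDict keeps pages in
# recency order, and on a fault with full frames the page at the
# least-recent end is evicted.
from collections import OrderedDict

def lru_faults(refs, n_frames):
    frames = OrderedDict()
    faults = 0
    for page in refs:
        if page in frames:
            frames.move_to_end(page)      # refresh recency on a hit
            continue
        faults += 1
        if len(frames) == n_frames:
            frames.popitem(last=False)    # evict least recently used
        frames[page] = True
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
print(lru_faults(refs, 3))                # 12 page faults with 3 frames
```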
When one copy of an operand is changed, the other copies of that operand must also be changed. Cache coherency ensures that changes in the value of a shared operand are propagated throughout the system in a timely fashion.
There are three classes of multiprocessor:
• Uniform memory access (UMA), in which all the processors share a single centralized primary memory, so each CPU has the same memory access time.
• Non-uniform memory access (NUMA), in which the logical address space is shared but physical memory is distributed among the processors, so the access time to data depends on whether the data resides in local or remote memory.
• Cache-only memory access (COMA), in which data has no specific or permanent location (no fixed memory address) where it stays; it can be read (copied into a local cache) and modified (first in the cache, then in its current location).