Important questions and answers

UNIT – III Operating System

Memory Management

Introduction:

Memory consists of a large array of words or bytes, each with its own address. A
program must be brought into memory for execution. The collection of processes that are
waiting to be brought into memory for execution is called the input queue. The CPU fetches
instructions from memory according to the value of the program counter. After an
instruction has been executed, the result may be stored back in memory.

Address binding:

Address binding is the mapping of the instructions and data of a program to memory
addresses. Addresses in the source program are generally symbolic. A compiler binds these
symbolic addresses to relocatable addresses. The linkage editor or loader then maps the
relocatable addresses to absolute addresses.

Types of address binding:

There are three types of address binding: compile time (early binding), load time
(delayed binding), and execution time (late binding).

Compile time: If it is known at compile time where the process will reside in memory, then
absolute code can be generated. If the starting location changes, this code must be
recompiled.

Load time: If it is not known at compile time where the process will reside in memory, then
the compiler must generate relocatable code. In this case, final binding is delayed until load
time. If the starting address changes, we need only reload the user code.

Execution time: If the process can be moved during its execution from one memory segment
to another, then binding must be delayed until run time. Special hardware support is needed
for this form of address binding (e.g. base and limit registers).

Logical Vs physical address space:

An address generated by the CPU is commonly referred to as a logical address; we
usually also call it a virtual address. The actual address seen by the memory unit is called a
physical address. Logical and physical addresses are the same under the compile-time and
load-time binding schemes, but they differ under the execution-time address-binding
scheme.

The set of all logical addresses generated by the CPU is called the logical-address space.
The set of physical addresses corresponding to these logical addresses is called the
physical-address space.

The mapping from virtual (logical) to physical address is done by a hardware device
called the Memory Management Unit (MMU).

The memory-mapping hardware converts logical addresses into physical addresses.
The value in the relocation register is added to every address generated by a user process
at the time it is sent to memory. The user program deals only with logical addresses; it
never sees the real physical addresses.
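As a concrete illustration, the relocation mapping above can be sketched in Python; the limit check corresponds to the protection scheme described later, and the register values are hypothetical.

```python
def mmu_translate(logical_addr, relocation, limit):
    """Map a logical (virtual) address to a physical address the way a
    dynamic-relocation MMU does: check the address against the limit
    register, then add the relocation (base) register."""
    if not (0 <= logical_addr < limit):
        # Out-of-range address: the hardware would trap to the OS.
        raise ValueError("addressing error: trap to operating system")
    return relocation + logical_addr

# Hypothetical process loaded at physical address 14000, 3000 bytes long:
# logical address 346 maps to physical address 14000 + 346 = 14346.
print(mmu_translate(346, relocation=14000, limit=3000))
```

The user process only ever supplies the first argument; the relocation and limit values are loaded into hardware registers by the operating system at dispatch time.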

Dynamic Loading:

For better memory-space utilization, we can use dynamic loading. With dynamic loading, a
routine is not loaded until it is called; until then, the routines are kept on disk in a
relocatable load format. The main program is loaded into memory and begins execution.
When a routine needs to call another routine, the caller first checks whether the called
routine is in memory. If it is not, the relocatable linking loader is called to load the desired
routine into memory.

The advantage of dynamic loading is that an unused routine is never loaded. This
method is particularly useful when large amounts of code are needed to handle infrequently
occurring cases, such as error routines. It does not require any special support from the
operating system.

Dynamic Linking:

In dynamic linking, linking is postponed until execution time. This feature is particularly
useful with system libraries, such as language subroutine libraries.

A stub is a small piece of code that indicates how to load the library routine into
memory if it is not already present. When the stub is executed, it checks whether the needed
routine is already in memory. If it is not, the stub loads the routine into memory.

Overlay:

Overlays are needed when a process is larger than the amount of memory allocated to it.
The idea of overlays is to keep in memory only those instructions and data that are needed
at any given time. When other instructions are needed, they are loaded into the space
occupied by instructions and data that are no longer needed.

For example, consider a two-pass assembler. During pass 1 it constructs a symbol
table; then during pass 2 it generates machine code. We can partition such an assembler
into pass 1 code, pass 2 code, the symbol table, and common support routines used by both
pass 1 and pass 2. Assume the size of each component is as follows.

Pass 1 code: 70KB
Pass 2 code: 80KB
Symbol table: 20KB
Common routines: 30KB

To load everything at once, we need 200KB of memory. If our system has only 150KB
of memory, we cannot run our process.

We can define two overlays: overlay A holds the symbol table, common routines, and
pass 1 code; overlay B holds the symbol table, common routines, and pass 2 code. We add a
10KB overlay driver and start with overlay A in memory. When we finish pass 1, we jump to
the overlay driver, which reads overlay B into memory, overwriting overlay A. Overlay A
needs only 120KB and overlay B needs only 130KB, so we can now run the program even
though the system has only 150KB of memory.

Swapping:

Swapping means exchanging a process between main memory and a backing store.
For example, in a multiprogramming environment with a round-robin CPU scheduling
algorithm, when a time quantum expires, the memory manager starts to swap out the
process that just finished and to swap in another process for execution.

If a higher-priority process arrives and wants service, the memory manager can swap
out a lower-priority process so that the higher-priority process can be loaded and executed.
When the higher-priority process finishes, the lower-priority process can be swapped back
into memory and continue its execution. This variant of swapping is sometimes called roll
out, roll in.

Swapping requires a backing store. The backing store is commonly a fast disk. It
must be large enough to accommodate copies of all memory images for all users.

Contiguous memory allocation:

The main memory must keep both the operating system and the various user
processes. The memory is usually divided into two partitions: one for the operating system,
and one for the user processes.

Memory protection

Protection means protecting the operating system from the user processes, and
protecting user processes from one another. We can provide this protection by using a
relocation register together with a limit register. Each logical address must be less than the
value in the limit register. The MMU maps a logical address into a physical address by
adding the value of the relocation register to the logical address.

If a device driver is not commonly used, we do not want to keep its code in memory;
we may use that space for other purposes. Such code is sometimes called transient
operating-system code.

Memory allocation:

One of the simplest methods for memory allocation is to divide memory into several
fixed-sized partitions. Each partition may contain exactly one process. In this multiple-
partition method, when a partition is free, a process is selected from the input queue and is
loaded into the free partition. When the process terminates, the partition becomes available
for another process.

The operating system keeps a table indicating which parts of memory are available and
which are occupied. Initially all memory is available for user processes and is considered
one large block of available memory, a hole. When a process arrives and needs memory, the
operating system searches for a hole large enough for the process. If it finds one, it
allocates only as much memory as is needed, keeping the rest available to satisfy future
requests.

The set of holes is searched to determine which hole is best to allocate. The first-fit,
best-fit and worst-fit strategies are the most commonly used to select a free hole.

First fit: allocate the first hole that is big enough. Searching can start either at the beginning
of the set of holes or where the previous first-fit search ended.

Best fit: allocate the smallest hole that is big enough. This strategy produces the smallest
leftover hole.

Worst fit: allocate the largest hole. This strategy produces the largest leftover hole.
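The three placement strategies can be sketched in Python as a single selection routine; the hole sizes and request below are hypothetical.

```python
def select_hole(holes, request, strategy):
    """Return the index of the free hole chosen for a request of
    `request` bytes from `holes` (a list of hole sizes), or None
    if no hole is big enough."""
    # Only holes at least as large as the request are candidates.
    candidates = [(size, i) for i, size in enumerate(holes) if size >= request]
    if not candidates:
        return None
    if strategy == "first":   # first hole that is big enough
        return min(candidates, key=lambda c: c[1])[1]
    if strategy == "best":    # smallest hole that is big enough
        return min(candidates)[1]
    if strategy == "worst":   # largest hole
        return max(candidates)[1]
    raise ValueError("unknown strategy: " + strategy)

holes = [100, 500, 200, 300, 600]   # hypothetical free-hole sizes
# For a 212-byte request: first fit chooses index 1 (size 500),
# best fit index 3 (size 300), worst fit index 4 (size 600).
```

Note that first fit here scans from the beginning of the list; the variant that resumes where the previous search ended would carry the last index as extra state.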

After the processes are loaded and removed from memory, the free memory space is
broken into little pieces.

External fragmentation: External fragmentation occurs when enough total memory space
exists to satisfy a request, but it is not contiguous.

Internal fragmentation: The memory allocated to a process may be slightly larger than the
requested memory; this size difference is called internal fragmentation. This memory is
internal to a partition; no other process can use it.

Compaction: One solution to the external fragmentation problem is compaction. The idea of
compaction is to shuffle memory contents to place all free memory together into one large
block. Compaction is possible only if relocation is dynamic, and is done at execution time.

Another solution to the external fragmentation problem is to permit the logical address
space of a process to be non-contiguous.

Paging:

Paging is a memory-management scheme that permits the physical address space of
a process to be noncontiguous. Physical memory is broken into fixed-sized blocks called
frames. Logical memory is broken into blocks of the same size called pages. When a
process is to be executed, its pages are loaded into any available memory frames from the
backing store.

Basic method:

Every address generated by the CPU is divided into two parts: a page number (p) and a
page offset (d). The page number is used as an index into a page table. The page table
contains the base address of each page in physical memory. This base address is combined
with the page offset to define the physical memory address.

When we use a paging scheme, we have no external fragmentation, but we may have
some internal fragmentation.
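The split of a logical address into page number and offset can be sketched as follows; the page size and page-table contents are assumptions made for illustration.

```python
PAGE_SIZE = 4096  # assumed page size (a power of two)

def paging_translate(logical_addr, page_table):
    """Translate a logical address by splitting it into page number (p)
    and offset (d), then combining the frame's base address with d."""
    p = logical_addr // PAGE_SIZE   # page number: high-order bits
    d = logical_addr % PAGE_SIZE    # page offset: low-order bits
    frame = page_table[p]           # index the page table with p
    return frame * PAGE_SIZE + d

page_table = {0: 5, 1: 6, 2: 1}    # hypothetical page -> frame mapping
# Logical address 4100 lies in page 1 at offset 4; page 1 maps to
# frame 6, so the physical address is 6 * 4096 + 4 = 24580.
```

Because the page size is a power of two, real hardware performs the division and modulo by simply splitting the bit pattern of the address.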

Hardware support:

Each operating system has its own method for storing page tables. In the simplest
case, the page table is implemented as a set of dedicated registers. The use of registers for
the page table is satisfactory only if the page table is reasonably small. Many modern
computers allow the page table to be very large; it must then be kept in main memory, and
the problem with a large, memory-resident page table is that it requires an extra memory
access to locate each page.

A standard solution to this problem is to use a special, small, fast hardware lookup
cache, called the translation look-aside buffer (TLB). The TLB contains only a few of the
page-table entries. When a logical address is generated by the CPU, its page number is
presented to the TLB. If the page number is found (a TLB hit), its frame number is
immediately available and is used to access memory. If the page number is not in the TLB
(a TLB miss), a memory reference to the page table must be made; when the frame number
is obtained, we can use it to access memory.
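A toy model of this lookup, assuming a tiny capacity and FIFO replacement (real TLBs use hardware-specific replacement policies):

```python
class TLB:
    """A small cache of page-table entries with FIFO replacement."""
    def __init__(self, capacity=4):
        self.capacity = capacity
        self.entries = {}   # page number -> frame number
        self.order = []     # insertion order, for FIFO eviction

    def lookup(self, page, page_table):
        """Return (frame, hit). On a miss, consult the page table and
        cache the entry, evicting the oldest entry if the TLB is full."""
        if page in self.entries:        # TLB hit: frame immediately available
            return self.entries[page], True
        frame = page_table[page]        # TLB miss: extra memory reference
        if len(self.order) >= self.capacity:
            self.entries.pop(self.order.pop(0))   # evict the oldest entry
        self.entries[page] = frame
        self.order.append(page)
        return frame, False
```

For example, looking up the same page twice in a row yields a miss and then a hit; the fraction of hits is the hit ratio that determines the effective memory-access time.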

Protection:

Memory protection in a paged environment is accomplished by protection bits
associated with each frame. Normally, these bits are kept in the page table. One bit can
define a page as read-write or read-only.

One more bit is generally attached to each entry in the page table: a valid-invalid bit.
When this bit is set to "valid", the associated page is in the process's logical address space
and is thus a legal page. If the bit is set to "invalid", the page is not in the process's
logical-address space.

Structure of the page table:

Hierarchical paging:
Most modern computer systems support a large logical-address space. In a two-level
paging algorithm, the page table itself is also paged.

Hashed page table:

A common approach for handling address spaces larger than 32 bits is to use a
hashed page table. Each entry in the hash table contains three fields: (a) the virtual page
number, (b) the mapped frame number, and (c) a pointer to the next element in the linked list.
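A sketch of the hashed lookup, representing each bucket's linked list as a Python list of (virtual page number, frame) pairs; the simple modulo hash is an assumption.

```python
def hashed_lookup(hash_table, vpn, table_size):
    """Hash the virtual page number (vpn) to a bucket, then walk that
    bucket's chain comparing virtual page numbers until a match."""
    bucket = hash_table[vpn % table_size]   # assumed hash function
    for entry_vpn, frame in bucket:
        if entry_vpn == vpn:
            return frame
    return None   # unmapped page: a real system would raise a page fault

table_size = 8
hash_table = [[] for _ in range(table_size)]
hash_table[13 % table_size].append((13, 42))   # hypothetical mapping
```

Collisions (two virtual page numbers hashing to the same bucket) are resolved by the chain walk, which is why each entry must store its own virtual page number.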

Inverted page table:

An inverted page table has one entry for each real page (frame) of memory. Each
entry consists of the virtual address of the page stored in that real memory location, with
information about the process that owns the page. Each virtual address in the system
consists of a triple:

<Process-id, page-number, offset>
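The inverted table can be sketched as one (process-id, page-number) entry per physical frame; the index of the matching entry is itself the frame number. The table contents below are hypothetical.

```python
def inverted_lookup(inverted_table, pid, page):
    """Search the inverted page table for <process-id, page-number>.
    The index of the matching entry is the frame number."""
    for frame, entry in enumerate(inverted_table):
        if entry == (pid, page):
            return frame
    return None   # no match: illegal address for this process

# One entry per frame of physical memory (hypothetical contents):
inverted_table = [(1, 0), (2, 3), (1, 7)]
# Process 1's page 7 resides in frame 2.
```

The search is linear here, which shows why real systems pair an inverted page table with a hash table to limit how many entries are examined.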

Shared pages:

An advantage of paging is the possibility of sharing common code. Consider a system
that supports 40 users, each of whom executes a text editor. If the text editor consists of
150KB of code and 50KB of data space, we would need 8000KB to support the 40 users.

Reentrant code is non-self-modifying code: it never changes during execution, so two
or more processes can execute the same code at the same time.

In this example, only one copy of the editor needs to be kept in physical memory. Each
user's page table maps onto the same physical copy of the editor, but data pages are
mapped to different frames.
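The arithmetic of the example above can be checked directly; the shared-copy total is an inference from the text (only the editor code is shared, while each user keeps private data pages).

```python
users, code_kb, data_kb = 40, 150, 50
without_sharing = users * (code_kb + data_kb)   # 40 * 200KB = 8000KB
with_sharing = code_kb + users * data_kb        # 150KB + 40 * 50KB = 2150KB
print(without_sharing, with_sharing)
```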

Segmentation:

An important problem with paging is the separation between the user's view of memory
and the actual physical memory: the way the user thinks about memory is not the way it is
actually laid out in physical memory.

Basic method:

Segmentation is a memory-management scheme that supports the user view of
memory. A logical-address space is a collection of segments. Each segment has a name
and a length. Addresses specify both the segment name and the offset within the segment.

Hardware:

The actual physical memory is still a one-dimensional sequence of bytes. We must
define an implementation to map the two-dimensional user-defined addresses into
one-dimensional physical addresses. This mapping is done with a segment table. Each entry
of the segment table has a segment base and a segment limit. The segment base contains
the starting physical address where the segment resides in memory; the segment limit
specifies the length of the segment.

A logical address consists of two parts: a segment number (s) and an offset (d). The
segment number is used as an index into the segment table. The offset of the logical
address must be between 0 and the segment limit; if it is not, the OS traps with an
addressing error. If the offset is legal, it is added to the segment base to produce the
address in physical memory.

For example, consider the situation shown in the figure below. We have five segments
numbered from 0 through 4. The segment table has a separate entry for each segment,
giving the beginning address of the segment in physical memory (the base) and the length
of the segment (the limit). For example, segment 2 is 400 bytes long and begins at location
4300. Thus, a reference to byte 53 of segment 2 is mapped onto location 4300 + 53 = 4353.
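The segment-table lookup, including the worked example from the text (segment 2: base 4300, limit 400), can be sketched as:

```python
def segment_translate(segment_table, s, d):
    """Translate (segment number s, offset d) using a segment table of
    (base, limit) pairs; trap if the offset exceeds the segment limit."""
    base, limit = segment_table[s]
    if not (0 <= d < limit):
        # Offset beyond the end of the segment: trap to the OS.
        raise ValueError("trap: offset beyond segment limit")
    return base + d

segment_table = {2: (4300, 400)}   # segment 2 from the example
# Byte 53 of segment 2 maps to 4300 + 53 = 4353.
```

An offset of 400 or more in segment 2 would be rejected by the limit check rather than silently reading past the segment.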

Protection and sharing:

A particular advantage of segmentation is the association of protection with the
segments, because segments represent semantically defined portions of the program. In
modern architectures, instructions are non-self-modifying, so instruction segments can be
defined as read-only or execute-only.

Another advantage of segmentation involves the sharing of code or data. For example,
consider the use of a text editor in a time-sharing system. A complete editor might be quite
large, composed of many segments. These segments can be shared among all users: rather
than n copies of the editor, we need only one.

We can also share only a part of program. For example, common subroutine packages
can be shared among many users.

Fragmentation:

Segmentation may cause external fragmentation, when all blocks of free memory are
too small to accommodate a segment. In this case, the process has to wait until more
memory becomes available, or until compaction creates a larger hole. At one extreme, we
could define each process to be one segment; this approach reduces to the variable-sized
partition scheme. At the other extreme, every byte could be put in its own segment and
relocated separately. This arrangement eliminates external fragmentation, but every byte
would need a base register for its relocation, doubling memory use. Generally, if the
average segment size is small, external fragmentation will also be small.

Segmentation with paging:

Both paging and segmentation have advantages and disadvantages, so the two
schemes can be merged into a single scheme. In segmentation with paging, a segment-table
entry contains not the base address of the segment, but the base address of a page table
for that segment.

A virtual address has three parts: a segment number, a page number, and a page
offset. The segment number indexes the segment table; the entry found there gives the
base address of the page table for that segment. The page number then indexes that page
table, and as usual the page-table entry identifies the frame where the information is
actually stored in physical memory; the frame number combined with the page offset yields
the physical address.
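This two-level translation can be sketched as follows, assuming a 1024-byte page size and hypothetical table contents; each segment-table entry here holds a (page table, segment limit) pair.

```python
PAGE_SIZE = 1024   # assumed page size

def seg_paged_translate(segment_table, s, p, d):
    """Translate (segment s, page p, offset d). The segment-table entry
    yields that segment's page table; the page table maps p to a frame."""
    page_table, seg_limit = segment_table[s]
    if p * PAGE_SIZE + d >= seg_limit:      # check against segment length
        raise ValueError("trap: address beyond segment limit")
    return page_table[p] * PAGE_SIZE + d    # frame base + page offset

# Hypothetical segment 0: 1500 bytes long, spanning two pages.
segment_table = {0: ({0: 3, 1: 8}, 1500)}
# (s=0, p=1, d=100) is within the 1500-byte limit; page 1 maps to
# frame 8, giving physical address 8 * 1024 + 100 = 8292.
```

Note how external fragmentation disappears: each segment's memory is supplied in page-sized frames, while the segment limit still gives segmentation-style protection.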
