Ch. 3 Lecture 1 - 3 PDF
Background
Logical versus Physical Address Space
Swapping
Contiguous Allocation
Paging
Segmentation
Segmentation with Paging
Background
Protection:
Prevent access to private memory of other processes
Different pages of memory can be given special behavior (read-only, invisible to
user programs, etc.)
Kernel data protected from User programs
Translation:
Ability to translate accesses from one address space (virtual) to a different one
(physical)
When translation exists, process uses virtual addresses, physical memory
uses physical addresses
Names and Binding
Early binding
compiler - produces efficient code
allows checking to be done early
allows estimates of running time and space
Delayed binding
Linker, loader
produces efficient code, allows separate compilation
portability and sharing of object code
Late binding
VM, dynamic linking/loading, overlaying, interpreting
code less efficient, checks done at runtime
flexible, allows dynamic reconfiguration
Multi-step Processing of a Program for Execution
Preparation of a program for execution involves
components at:
Compile time (i.e., “gcc”)
Link/Load time (unix “ld” does link)
Execution time (e.g. dynamic libs)
Dynamic Libraries
Linking postponed until execution
Small piece of code, stub, used to locate appropriate
memory-resident library routine
Stub replaces itself with the address of the routine,
and executes routine
Dynamic Loading
[Figure: dynamic relocation - the CPU issues a logical address (ma); the hardware adds the base register value (ba) to form the physical address (pa): pa = ba + ma]
Fixed partitions
Contiguous Allocation (cont.)
[Figure: four snapshots of memory under contiguous allocation - the OS and Process 5 remain resident while Process 8 terminates, freeing a hole, and Process 9 and then Process 10 are allocated into the freed space]
External fragmentation
total memory space exists to satisfy a request, but it is
not contiguous.
Internal fragmentation
allocated memory may be slightly larger than requested
memory; this size difference is memory internal to a
partition, but not being used.
Reduce external fragmentation by compaction
Shuffle memory contents to place all free memory
together in one large block
Compaction is possible only if relocation is dynamic, and
is done at execution time.
I/O problem: either (1) latch the job in memory while it is involved in I/O, or (2)
do I/O only into OS buffers.
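A toy allocator makes the external-fragmentation problem concrete. This is a minimal sketch, not from the slides: the hole list, sizes, and the `first_fit` helper are invented for illustration, and first-fit is only one common placement strategy.

```python
def first_fit(holes, request):
    """Allocate `request` units from the first hole large enough.
    Returns the start address, or None if no single hole fits."""
    for i, (start, size) in enumerate(holes):
        if size >= request:
            if size == request:
                holes.pop(i)                 # hole consumed exactly
            else:
                holes[i] = (start + request, size - request)
            return start
    return None

# Three scattered holes totalling 300 free units:
holes = [(0, 100), (250, 100), (500, 100)]
print(first_fit(holes, 150))   # None: enough total space, but not contiguous
# Compaction would shuffle memory so the 300 free units form one block.
```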
Fragmentation example
Compaction
Paging
[Figure: paging hardware - the CPU issues a logical address (p, d); page number p indexes the page table to find frame number f, and (f, d) addresses physical memory]
Example of Paging
[Figure: paging example - the page table maps logical page 0 to frame 1, page 1 to frame 3, page 2 to frame 4, and page 3 to frame 7 of physical memory]
Page Table Implementation
Effective Access Time (EAT) with a TLB: if a memory access takes 1 time unit, a TLB lookup takes ε, and the hit ratio is α, then
EAT = (1 + ε)α + (2 + ε)(1 - α) = 2 + ε - α
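A minimal sketch of the effective-access-time calculation, assuming a memory access costs 1 time unit, a TLB lookup costs ε, and a TLB miss requires a second memory access to fetch the page-table entry (the function name and example values are invented):

```python
def eat(alpha, eps=0.0):
    """Effective access time with a TLB: a hit costs eps + 1,
    a miss costs eps + 2 (extra access for the page-table entry)."""
    return alpha * (eps + 1) + (1 - alpha) * (eps + 2)   # = 2 + eps - alpha

print(eat(0.8, eps=0.2))   # approximately 1.4
```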
Memory Protection
[Figure: two-level page-table scheme - entries of the outer page table point to pages of the page table (e.g., at frames 100, 500, 708, 900, 929), whose entries in turn map to frames of physical memory]
Two Level Paging Example
A logical address (32-bit machine, 4 KB page size) is divided into:
a page number consisting of 20 bits
a page offset consisting of 12 bits
Since the page table is itself paged, the 20-bit page number is further divided into:
a 10-bit outer page number
a 10-bit page offset within the page table
Thus, a logical address is organized as (p1, p2, d), where
p1 is an index into the outer page table
p2 is the displacement within the page of the outer page table
d is the 12-bit page offset
Layout: | p1 (10 bits) | p2 (10 bits) | d (12 bits) |
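The (p1, p2, d) split can be sketched with shifts and masks. The field widths follow the slide (10/10/12 bits); the example address is invented:

```python
OFFSET_BITS, INNER_BITS = 12, 10   # 4 KB pages, 10-bit inner index

def split(addr):
    """Split a 32-bit logical address into (p1, p2, d)."""
    d = addr & ((1 << OFFSET_BITS) - 1)
    p2 = (addr >> OFFSET_BITS) & ((1 << INNER_BITS) - 1)
    p1 = addr >> (OFFSET_BITS + INNER_BITS)
    return p1, p2, d

print(split(0x00403087))   # (1, 3, 135)
```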
Multilevel paging
[Figure: segmentation example - segment table for process P1: segment 0 (editor) has limit 25286 and base 43062; segment 1 (data 1) has limit 4425 and base 68348. In physical memory, the editor segment begins at address 43062 and the data 1 segment occupies 68348 up to 72773]
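Segmented address translation can be sketched as a limit check plus a base add. The segment-table numbers mirror the figure; the `translate` helper and the example offset are invented:

```python
def translate(segtable, s, offset):
    """Translate segmented address (s, offset) into a physical address,
    trapping if the offset exceeds the segment limit."""
    limit, base = segtable[s]
    if offset >= limit:
        raise MemoryError("trap: offset beyond segment limit")
    return base + offset

# Segment table for process P1, as in the figure: segment -> (limit, base)
P1 = {0: (25286, 43062),   # segment 0: editor
      1: (4425, 68348)}    # segment 1: data 1

print(translate(P1, 1, 100))   # 68448
```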
Virtual Memory
Background
Demand paging
Performance of demand paging
Page Replacement
Page Replacement Algorithms
Allocation of Frames
Thrashing
Demand Segmentation
Need for Virtual Memory
Virtual Memory
Separation of user logical memory from physical
memory.
Only PART of the program needs to be in memory for
execution.
Logical address space can therefore be much larger
than physical address space.
Need to allow pages to be swapped in and out.
Virtual Memory can be implemented via
Paging
Segmentation
Paging/Segmentation Policies
Fetch Strategies
When should a page or segment be brought into primary
memory from secondary (disk) storage?
Demand Fetch
Anticipatory Fetch
Placement Strategies
When a page or segment is brought into memory, where
is it to be put?
Paging - trivial
Segmentation - significant problem
Replacement Strategies
Which page/segment should be replaced if there is not
enough room for a required page/segment?
Demand Paging
Handling a Page Fault
FIFO Replacement - Belady's Anomaly: more frames does not always mean fewer page faults.
[Figure: on the reference string 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5, FIFO with 4 frames incurs 10 page faults - one more than with 3 frames]
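The anomaly is easy to reproduce with a small simulator; a sketch using the classic reference string:

```python
from collections import deque

def fifo_faults(refs, nframes):
    """Count page faults under FIFO replacement."""
    resident, queue, faults = set(), deque(), 0
    for p in refs:
        if p not in resident:
            faults += 1
            if len(resident) == nframes:
                resident.discard(queue.popleft())   # evict the oldest page
            resident.add(p)
            queue.append(p)
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_faults(refs, 3))   # 9
print(fifo_faults(refs, 4))   # 10 - more frames, more faults
```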
Optimal Algorithm
Counter Implementation
Every page entry has a counter; every time page is referenced
through this entry, copy the clock into the counter.
When a page needs to be replaced, look at the counters to
determine which page to replace (the page with the smallest time value).
Stack Implementation
Keeps a stack of page numbers in a doubly linked form
Page referenced
move it to the top
requires 6 pointers to be changed
No search required for replacement
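The stack scheme above can be sketched with an ordered dictionary standing in for the doubly linked list (a sketch; the class name is invented):

```python
from collections import OrderedDict

class LRU:
    """LRU replacement: the most recently used page sits at the end
    (the 'top of the stack'); the victim is taken from the front."""
    def __init__(self, nframes):
        self.nframes = nframes
        self.frames = OrderedDict()
        self.faults = 0

    def reference(self, page):
        if page in self.frames:
            self.frames.move_to_end(page)        # move to top; no search
        else:
            self.faults += 1
            if len(self.frames) == self.nframes:
                self.frames.popitem(last=False)  # evict least recently used
            self.frames[page] = True
```

The ordered dictionary gives the same O(1) move-to-top behavior as the doubly linked stack, without managing the six pointer updates by hand.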
LRU Approximation Algorithms
Reference Bit
With each page, associate a bit, initially = 0.
When page is referenced, bit is set to 1.
Replace the one which is 0 (if one exists). Do not know
order however.
Additional Reference Bits Algorithm
Record reference bits at regular intervals.
Keep 8 bits (say) for each page in a table in memory.
Periodically, shift the reference bit into the high-order bit, i.e., shift the
other bits to the right, dropping the lowest bit.
During page replacement, interpret the 8 bits as an unsigned
integer.
The page with the lowest number is the LRU page.
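The shifting step can be sketched in a few lines, assuming an 8-bit history per page as in the slide:

```python
def age(history, ref_bit, nbits=8):
    """Shift the current reference bit into the high-order bit,
    dropping the lowest bit of the history."""
    return (history >> 1) | (ref_bit << (nbits - 1))

h = 0
h = age(h, 1)   # 0b10000000 - referenced in the last interval
h = age(h, 0)   # 0b01000000
h = age(h, 1)   # 0b10100000
# Interpreted as an unsigned integer, the smallest history is the LRU page.
```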
Second Chance
FIFO (clock) replacement algorithm
Need a reference bit.
When a page is selected, inspect the reference bit.
If the reference bit = 0, replace the page.
If page to be replaced (in clock order) has reference bit
= 1, then
set reference bit to 0
leave page in memory
replace next page (in clock order) subject to same rules.
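The rules above can be sketched as a clock sweep (a sketch; frames hold (page, reference-bit) pairs, `hand` is the clock pointer, and the helper names are invented):

```python
def second_chance(refs, nframes):
    """Count page faults under second-chance (clock) replacement."""
    frames, where = [], {}      # frames: [page, ref_bit]; where: page -> slot
    hand, faults = 0, 0
    for p in refs:
        if p in where:
            frames[where[p]][1] = 1      # hit: set the reference bit
            continue
        faults += 1
        if len(frames) < nframes:        # free frame still available
            where[p] = len(frames)
            frames.append([p, 1])
            continue
        while frames[hand][1]:           # bit set: give a second chance
            frames[hand][1] = 0
            hand = (hand + 1) % nframes
        del where[frames[hand][0]]       # bit clear: replace this page
        frames[hand] = [p, 1]
        where[p] = hand
        hand = (hand + 1) % nframes
    return faults
```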
Page Protection
Segmentation Protection
Equal Allocation
E.g., if there are 100 frames and 5 processes, give each process 20 frames.
Proportional Allocation
Allocate according to the size of the process:
sj = size of process Pj
S = Σ sj
m = total number of frames
aj = (sj / S) × m = allocation for Pj
Example with m = 64, s1 = 10, s2 = 127 (so S = 137):
a1 = 10/137 × 64 ≈ 5
a2 = 127/137 × 64 ≈ 59
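A sketch of the proportional split, rounding to the nearest frame as in the ≈ values above (in general, rounded counts may not sum exactly to m; a real allocator would distribute the remainder):

```python
def proportional(sizes, m):
    """Split m frames among processes in proportion to their sizes."""
    S = sum(sizes)
    return [round(s * m / S) for s in sizes]

print(proportional([10, 127], 64))   # [5, 59], matching a1 ≈ 5, a2 ≈ 59
```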
Priority Allocation
Global Replacement
Process selects a replacement frame from the set of all
frames.
One process can take a frame from another.
Process may not be able to control its page fault rate.
Local Replacement
Each process selects from only its own set of allocated
frames.
A process may be slowed down even if less-used pages of memory
belonging to other processes are available.
Global replacement has better throughput
Hence more commonly used.
Thrashing
Working-Set Model
Δ = working-set window: a fixed number of page references, e.g., 10,000 instructions
WSSj (working-set size of process Pj) = total number of
pages referenced in the most recent Δ references (varies in time)
If Δ is too small, it will not encompass the entire locality.
If Δ is too large, it will encompass several localities.
If Δ = ∞, it will encompass the entire program.
D = Σ WSSj = total demand for frames
If D > m (the number of available frames), thrashing occurs.
Policy: if D > m, then suspend one of the processes.
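The working-set size can be sketched as the number of distinct pages among the last Δ references (the reference string below is invented for illustration):

```python
def wss(refs, t, delta):
    """Working-set size at time t: distinct pages among the most
    recent `delta` references (up to and including refs[t])."""
    return len(set(refs[max(0, t - delta + 1): t + 1]))

refs = [1, 2, 1, 3, 2, 4, 4, 4, 3, 3]
print(wss(refs, 5, 4))   # 4: pages {1, 3, 2, 4}
print(wss(refs, 9, 4))   # 2: the locality has shrunk to {4, 3}
# Summing WSS over all processes gives D; if D > m, suspend a process.
```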
Keeping Track of the Working Set
Approximate with an interval timer + a reference bit.
Example: Δ = 10,000 references
Timer interrupts after every 5,000 time units.
Whenever the timer interrupts, copy each reference bit and then
reset all reference bits to 0.
Keep 2 bits in memory for each page (indicating whether the page was
used within the last 10,000 to 15,000 references).
If one of the bits in memory = 1, the page is in the working set.
Not completely accurate - cannot tell where within the interval the
reference occurred.
Improvement - 10 bits and interrupt every 1,000 time units.
Page fault Frequency Scheme
Demand Paging Issues
Prepaging
Tries to prevent high level of initial paging.
E.g. If a process is suspended, keep list of pages in
working set and bring entire working set back before
restarting process.
Tradeoff - page-fault cost vs. prepaging cost - depends on how many of the
pages brought back are actually reused.
Page Size Selection
fragmentation
table size
I/O overhead
locality
Demand Paging Issues
Program Structure
Array A[1024, 1024] of integer
Assume each row is stored on one page
Assume only one frame in memory
Program 1
    for j := 1 to 1024 do
        for i := 1 to 1024 do
            A[i,j] := 0;
1024 * 1024 page faults
Program 2
    for i := 1 to 1024 do
        for j := 1 to 1024 do
            A[i,j] := 0;
1024 page faults
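The two fault counts can be checked with a tiny simulation; a sketch assuming one resident frame and one page per row, as the slide does (the helper name is invented):

```python
def faults_one_frame(n, order):
    """Count page faults for an n x n zeroing loop with a single frame,
    where the page holding row i must be resident to touch A[i, j]."""
    if order == "row":   # Program 2: inner loop walks along a row
        indices = ((i, j) for i in range(n) for j in range(n))
    else:                # Program 1: inner loop walks down a column
        indices = ((i, j) for j in range(n) for i in range(n))
    faults, resident = 0, None
    for i, _ in indices:
        if i != resident:        # row i's page is not in memory: fault
            faults += 1
            resident = i
    return faults

print(faults_one_frame(1024, "col"))   # 1048576 = 1024 * 1024
print(faults_one_frame(1024, "row"))   # 1024
```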