
Chapter 9: Virtual-Memory Management

- Background
- Demand Paging
- Copy-on-Write
- Page Replacement
- Allocation of Frames
- Thrashing
- Memory-Mapped Files
- Allocating Kernel Memory
- Other Considerations


Background

- Virtual memory – separation of user logical memory from physical memory
  - Only part of the program needs to be in memory for execution
  - Logical address space can therefore be much larger than physical
    address space
  - Allows address spaces to be shared by several processes
  - Allows for more efficient process creation

- Virtual memory can be implemented via:
  - Demand paging
  - Demand segmentation


Virtual Memory That is Larger Than Physical Memory



Virtual-address Space



Shared Library Using Virtual Memory



Demand Paging

- Bring a page into memory only when it is needed
  - Less I/O needed
  - Less memory needed
  - Faster response
  - More users

- Page is needed ⇒ reference to it
  - invalid reference ⇒ abort
  - not-in-memory ⇒ bring to memory
    (see the sketch below)
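
A toy C illustration of the two cases above; the page-table layout (a struct pte with separate valid and present bits), reference(), and the frame numbering are assumptions made for this sketch, not a real MMU interface:

    #include <stdio.h>
    #include <stdlib.h>

    /* Toy page-table entry: valid = page is legal, present = page is in memory. */
    struct pte {
        int valid;
        int present;
        int frame;
    };

    static void reference(struct pte *pt, int page) {
        if (!pt[page].valid) {                /* invalid reference => abort     */
            fprintf(stderr, "illegal reference to page %d: abort\n", page);
            exit(1);
        }
        if (!pt[page].present) {              /* valid but not in memory =>     */
            printf("page fault: bringing page %d into memory\n", page);
            pt[page].present = 1;             /* ...demand-page it in           */
            pt[page].frame = page;            /* frame choice is arbitrary here */
        }
        printf("access page %d in frame %d\n", page, pt[page].frame);
    }

    int main(void) {
        struct pte pt[4] = {{1, 0, -1}, {1, 0, -1}, {0, 0, -1}, {1, 0, -1}};
        reference(pt, 0);   /* first touch: page fault, then access */
        reference(pt, 0);   /* already present: no fault            */
        reference(pt, 2);   /* marked invalid: process is aborted   */
        return 0;
    }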


Transfer of a Paged Memory to Contiguous Disk Space



Page Table When Some Pages Are Not in Main Memory



Steps in Handling a Page Fault



A Worst Case Example

- Consider a three-address instruction ADD that adds the contents of A
  and B and places the result in C:
  - Fetch and decode the instruction (ADD)
  - Fetch A
  - Fetch B
  - Add A and B
  - Store the sum in C
- In the worst case, the instruction fetch and each operand reference
  (A, B, and C) touches a page that is not in memory, so a single
  instruction can cause several page faults, each requiring the
  instruction to be restarted


Performance of Demand Paging

- Page Fault Rate 0 ≤ p ≤ 1.0
  - if p = 0, no page faults
  - if p = 1, every reference is a fault

- Effective Access Time (EAT)

      EAT = (1 – p) x memory access time
            + p x page fault time

  where, for example,

      page fault time = page fault overhead
                        + [swap page out]
                        + swap page in
                        + restart overhead


Example

- Memory access time = 200 ns, page fault service time = 8 ms

- EAT = (1 – p) x 200 + p x 8,000,000 = 200 + 7,999,800 x p (in ns)
- If p = 0.1%, EAT = 8.2 μs, a slowdown by a factor of 40
- If we want performance degradation to be less than 10%, we need

      220 > 200 + 7,999,800 x p
       20 > 7,999,800 x p
        p < 0.0000025

- That is, fewer than one memory access out of 399,990 may page-fault
  (a sketch of this calculation follows below)
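
A small C check of the arithmetic above (the constants are the ones given on this slide):

    #include <stdio.h>

    int main(void) {
        const double mem_ns   = 200.0;   /* memory access time, in ns     */
        const double fault_ns = 8e6;     /* page-fault service time: 8 ms */

        double p   = 0.001;              /* page-fault rate of 0.1%       */
        double eat = (1.0 - p) * mem_ns + p * fault_ns;
        printf("EAT at p = 0.1%%: %.1f ns (about %.1f us)\n", eat, eat / 1000.0);

        /* Largest p that keeps degradation under 10%, i.e. EAT < 220 ns. */
        double p_max = (220.0 - mem_ns) / (fault_ns - mem_ns);
        printf("p must stay below %.7f (about 1 fault per %.0f accesses)\n",
               p_max, 1.0 / p_max);
        return 0;
    }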


Copy-on-Write

- Copy-on-Write (COW) allows both parent and child processes to initially
  share the same pages in memory

- If either process modifies a shared page, only then is the page copied

- COW allows more efficient process creation, as only modified pages are
  copied

- Free pages for the copies are allocated from a pool of zeroed-out pages

  (see the fork() illustration below)
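
A minimal user-level illustration in C, assuming a system whose fork() is implemented with copy-on-write (as Linux's is): parent and child share the buffer's physical pages until one of them writes.

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void) {
        /* A large buffer: after fork(), its pages are shared copy-on-write. */
        size_t len = 16 * 1024 * 1024;
        char *buf = malloc(len);
        if (buf == NULL) return 1;
        memset(buf, 'A', len);            /* touch every page in the parent */

        pid_t pid = fork();
        if (pid == 0) {
            /* Child: this store faults and copies only the affected page;
             * the rest of the buffer stays shared with the parent. */
            buf[0] = 'B';
            printf("child sees  buf[0] = %c\n", buf[0]);
            exit(0);
        }
        waitpid(pid, NULL, 0);
        printf("parent sees buf[0] = %c\n", buf[0]);   /* still 'A' */
        free(buf);
        return 0;
    }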


Before Process 1 Modifies Page C



After Process 1 Modifies Page C

Copy of page C



Page Replacement

- Prevent over-allocation of memory by modifying the page-fault service
  routine to include page replacement

- Use the modify (dirty) bit to reduce the overhead of page transfers –
  only modified pages are written back to disk

- Page replacement completes the separation between logical memory and
  physical memory – a large virtual memory can be provided on a smaller
  physical memory


Need For Page Replacement



Basic Page Replacement

1. Find the location of the desired page on disk

2. Find a free frame:
   - If there is a free frame, use it
   - If there is no free frame, use a page-replacement algorithm to
     select a victim frame

3. Read the desired page into the (newly) free frame. Update the page
   and frame tables.

4. Restart the process

(A toy simulation of these steps follows below.)
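
A self-contained toy simulation of these steps in C, with FIFO as the victim-selection policy; page_in[], frame_of[], and the "disk" printouts are inventions for illustration only:

    #include <stdio.h>

    #define NPAGES  8
    #define NFRAMES 3

    static int frame_of[NPAGES];   /* page  -> frame, -1 if not resident      */
    static int page_in[NFRAMES];   /* frame -> page,  -1 if free              */
    static int fifo_next = 0;      /* next frame to evict, in FIFO load order */

    static void access_page(int p) {
        if (frame_of[p] != -1) return;                 /* resident: no fault  */
        int f = -1;
        for (int i = 0; i < NFRAMES; i++)              /* 2a. free frame?     */
            if (page_in[i] == -1) { f = i; break; }
        if (f == -1) {                                 /* 2b. select a victim */
            f = fifo_next;
            fifo_next = (fifo_next + 1) % NFRAMES;
            printf("  evict page %d from frame %d\n", page_in[f], f);
            frame_of[page_in[f]] = -1;
        }
        printf("  page fault: read page %d into frame %d\n", p, f);
        page_in[f] = p;                                /* 3. update tables    */
        frame_of[p] = f;
    }

    int main(void) {
        for (int i = 0; i < NPAGES; i++)  frame_of[i] = -1;
        for (int i = 0; i < NFRAMES; i++) page_in[i]  = -1;
        int refs[] = {0, 1, 2, 3, 0, 1, 4, 0, 1, 2, 3, 4};
        for (int i = 0; i < 12; i++) { printf("ref %d\n", refs[i]); access_page(refs[i]); }
        return 0;
    }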


Page Replacement



Graph of Page Faults Versus The Number of Frames



FIFO Page Replacement



Belady’s Anomaly

- For some page-replacement algorithms, the page-fault rate may increase
  as the number of allocated frames increases
- For the reference string 0, 1, 2, 3, 0, 1, 4, 0, 1, 2, 3, 4:
  - FIFO demonstrates Belady's anomaly
  - The number of faults with four frames is greater than with three
    frames (a hand trace follows below)
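
Tracing FIFO by hand on this string as an illustrative check: with three frames, the references 0, 1, 2, 3, 0, 1, 4 all fault, 0 and 1 then hit, and 2 and 3 fault again, for 9 faults in total. With four frames, the first four references fault, 0 and 1 hit, but then every one of the remaining six references (4, 0, 1, 2, 3, 4) faults, for 10 faults in total.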


FIFO Illustrating Belady’s Anomaly

- FIFO with 3 page frames
- FIFO with 4 page frames
- P's mark the page references that cause page faults


FIFO Illustrating Belady’s Anomaly



Optimal Page Replacement

- Replace the page that will not be used for the longest period of time
- Yields the lowest page-fault rate and never suffers from Belady's anomaly
- Requires future knowledge of the reference string


Optimal Page Replacement



LRU Page Replacement

- Time as the basis for replacement
  - FIFO uses the time when a page was brought into memory
  - OPT uses the time when a page will next be used
  - LRU (least recently used) uses the time when a page was last used
- LRU: replace the page that has not been used for the longest period of
  time
  - Like OPT looking backward in time instead of forward
- Requires hardware support to implement


LRU Page Replacement



LRU Page Replacement

- Two possible implementations
  - Counters
    - Whenever a page is referenced, the CPU clock counter is copied into
      the page's time-of-use field
    - The page with the smallest time value is replaced
      (see the counter-based sketch below)
  - Stack
    - Whenever a page is referenced, it is removed from the stack and put
      on top
    - The page at the bottom of the stack is replaced
- Together with OPT, LRU belongs to the class of stack algorithms, which
  never exhibit Belady's anomaly
  - A stack algorithm is one in which the set of pages in memory with n
    frames is always a subset of the set of pages that would be in memory
    with n + 1 frames
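
A sketch of the counter implementation in C over a small simulated page table; clock_ticks, time_of_use[], and resident[] are stand-ins for the hardware clock and page-table fields, not a real interface:

    #include <stdio.h>

    #define NPAGES  8
    #define NFRAMES 3

    static long clock_ticks = 0;          /* stand-in for the CPU clock counter */
    static long time_of_use[NPAGES];
    static int  resident[NPAGES];
    static int  nresident = 0;

    /* Victim = resident page with the smallest time-of-use value. */
    static int lru_victim(void) {
        int victim = -1;
        for (int p = 0; p < NPAGES; p++)
            if (resident[p] && (victim == -1 || time_of_use[p] < time_of_use[victim]))
                victim = p;
        return victim;
    }

    static void reference(int p) {
        if (!resident[p]) {
            if (nresident == NFRAMES) {
                int v = lru_victim();
                printf("fault on page %d: evict LRU page %d\n", p, v);
                resident[v] = 0; nresident--;
            } else {
                printf("fault on page %d: free frame used\n", p);
            }
            resident[p] = 1; nresident++;
        }
        time_of_use[p] = ++clock_ticks;   /* copy "clock" into time-of-use field */
    }

    int main(void) {
        int refs[] = {7, 0, 1, 2, 0, 3, 0, 4, 2, 3};
        for (int i = 0; i < 10; i++) reference(refs[i]);
        return 0;
    }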


Use Of A Stack to Record The Most Recent Page References



Stack Algorithms

State of the memory array M after each item in the reference string is
processed


LRU Approximation Algorithms

- Needs less hardware support than LRU: a reference bit
  - With each page associate a bit, initially 0
  - When the page is referenced, the bit is set to 1
  - Replace a page whose bit is 0 (if one exists); we do not know the
    order of use, however
- Second chance (clock) algorithm
  - Needs only the reference bit
  - Pages are inspected in clock (circular) order
  - If the page to be replaced has reference bit = 1, then:
    - set the reference bit to 0
    - leave the page in memory
    - consider the next page (in clock order), subject to the same rules
  - (a code sketch follows the figure below)


Second-Chance (clock) Page-Replacement Algorithm

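A sketch of the clock scan in C; the resident pages, their reference bits, and the starting hand position are made-up example values:

    #include <stdio.h>

    #define NFRAMES 4

    static int page_in[NFRAMES] = {3, 8, 5, 2};   /* resident pages (example)    */
    static int ref_bit[NFRAMES] = {1, 0, 1, 1};   /* hardware-set reference bits */
    static int hand = 0;                          /* the clock hand              */

    static int clock_victim(void) {
        for (;;) {
            if (ref_bit[hand] == 0) {             /* second chance already used  */
                int victim = hand;
                hand = (hand + 1) % NFRAMES;
                return victim;
            }
            ref_bit[hand] = 0;                    /* give the page a second chance */
            hand = (hand + 1) % NFRAMES;
        }
    }

    int main(void) {
        int f = clock_victim();
        printf("victim: frame %d holding page %d\n", f, page_in[f]);
        return 0;
    }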


Enhanced Second-Chance Algorithm

- Each page has a reference bit and a modify bit
  - The bits are set when the page is referenced or modified
- Pages are classified into four classes:
  1. (0, 0): not recently referenced, not modified
  2. (0, 1): not recently referenced, modified
  3. (1, 0): recently referenced, not modified
  4. (1, 1): recently referenced, modified
- Scan the circular queue and replace the first page found in the
  lowest-numbered nonempty class
  - May have to scan the circular queue several times


Counting Algorithms

- Keep a counter of the number of references made to each page
  - LFU algorithm: replaces the page with the smallest count
  - MFU algorithm: based on the argument that the page with the smallest
    count was probably just brought in and has yet to be used
- Neither LFU nor MFU is common
  - The implementation is expensive
  - They do not approximate OPT well


Allocation of Frames

- How should the fixed amount of free memory be allocated among the
  various processes?
- Each process needs a minimum number of frames
- Example: IBM 370 – 6 pages to handle the SS MOVE instruction:
  - the instruction is 6 bytes long and might span 2 pages
  - 2 pages to handle the source (from) operand
  - 2 pages to handle the destination (to) operand
- Two major allocation schemes
  - fixed allocation
    - equal allocation, or allocation proportional to process size
      (a worked example follows below)
  - priority allocation
    - high-priority processes may receive more memory
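
For illustration (sizes assumed): with m = 62 free frames and two processes of 10 pages and 127 pages, proportional allocation gives the first process 10/137 x 62 ≈ 4 frames and the second 127/137 x 62 ≈ 57 frames, rather than 31 frames each under equal allocation.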


Local versus Global Allocation

- Local replacement
  - Each process selects only from its own set of allocated frames
- Global replacement
  - A process can select a replacement frame from the set of all frames,
    even if that frame is currently allocated to another process
  - Generally gives better throughput
  - The performance of a process then depends not only on its own paging
    behavior but also on that of other processes: a process cannot
    control its own page-fault rate


Local versus Global Allocation Example

- Original configuration
- Local page replacement
- Global page replacement


Thrashing

- If a process does not have "enough" frames, its page-fault rate is very
  high. Consider the following scenario with global replacement:
  - low CPU utilization
  - the operating system thinks it needs to increase the degree of
    multiprogramming
  - another process is added to the system
  - at some point no process has enough frames; the page-fault rate is
    high and the CPU sits idle waiting for the paging device
  - CPU utilization drops further, and the cycle repeats
- Thrashing ≡ a process is busy swapping pages in and out rather than
  doing useful work


Thrashing (Cont.)



Thrashing (Cont.)

- Thrashing can be limited by using local replacement, but it cannot be
  completely solved that way
- To prevent thrashing, we must provide a process with as many frames as
  it needs
- Question: how many frames does it need?
  - Use the locality model of process execution
- A process tends to execute within some locality for a period of time
  and then move on to another locality
  - Allocate enough frames for the current locality and there will be no
    further page faults during that locality's lifetime
  - If not, the process may thrash, since it cannot keep all the pages it
    is actively using in memory


Locality In A Memory-Reference Pattern



Working-Set Model

- Δ ≡ working-set window ≡ a fixed number of page references
  Example: 10,000 references
- WSSi (working-set size of process Pi) = total number of distinct pages
  referenced in the most recent Δ references (varies in time)
  - if Δ is too small, it will not encompass the entire locality
  - if Δ is too large, it will encompass several localities
  - if Δ = ∞, it will encompass the entire program
- D = Σ WSSi ≡ total demand for frames
- if D > m ⇒ thrashing, where m is the total number of available frames
- Policy
  - if D > m, suspend one of the processes
  - if there are enough extra frames, another process can be initiated
- Prevents thrashing while keeping the degree of multiprogramming as high
  as possible (a sketch of computing WSS follows below)
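
A C sketch of measuring WSS over the most recent Δ references; DELTA, the reference string, and working_set_size() are illustrative, not part of any real OS interface:

    #include <stdio.h>

    #define DELTA  10
    #define NPAGES 16

    /* Number of distinct pages referenced in refs[end-DELTA .. end-1]. */
    static int working_set_size(const int *refs, int end) {
        int seen[NPAGES] = {0}, wss = 0;
        int start = (end > DELTA) ? end - DELTA : 0;
        for (int i = start; i < end; i++)
            if (!seen[refs[i]]) { seen[refs[i]] = 1; wss++; }
        return wss;
    }

    int main(void) {
        int refs[] = {1, 2, 1, 5, 7, 7, 7, 7, 5, 1,    /* one locality     */
                      3, 4, 4, 4, 3, 4, 3, 4, 4, 4};   /* another locality */
        for (int t = DELTA; t <= 20; t += 5)
            printf("WSS over refs[%d..%d) = %d\n",
                   t - DELTA, t, working_set_size(refs, t));
        return 0;
    }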


Working-set model



Page-Fault Frequency Scheme

- Establish an "acceptable" page-fault rate
  - If the actual rate is too low, the process loses frames
  - If the actual rate is too high, the process gains frames


Memory-Mapped Files

- Memory-mapped file I/O allows file I/O to be treated as routine memory
  access by mapping disk blocks to pages in memory

- A file is initially read using demand paging: a page-sized portion of
  the file is read from the file system into a physical page. Subsequent
  reads and writes of the file are then treated as ordinary memory
  accesses.

- Simplifies file access by driving file I/O through memory rather than
  through read() and write() system calls

- Also allows several processes to map the same file, so that the pages
  in memory can be shared (see the mmap() sketch below)
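
A minimal POSIX mmap() example in C; the file name data.bin is an assumption (any existing, non-empty file would do), and error handling is kept short:

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <unistd.h>

    int main(void) {
        int fd = open("data.bin", O_RDWR);         /* assumed existing file     */
        if (fd < 0) { perror("open"); return 1; }

        struct stat st;
        fstat(fd, &st);

        char *p = mmap(NULL, st.st_size, PROT_READ | PROT_WRITE,
                       MAP_SHARED, fd, 0);         /* pages are demand-paged in */
        if (p == MAP_FAILED) { perror("mmap"); return 1; }

        p[0] = 'X';                                /* ordinary memory access,   */
        printf("first byte is now: %c\n", p[0]);   /* no read()/write() calls   */

        munmap(p, st.st_size);                     /* dirty pages written back  */
        close(fd);
        return 0;
    }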


Memory Mapped Files



Allocating Kernel Memory

- Kernel memory is often allocated from a free-memory pool different from
  the one used for ordinary user processes
  - To minimize fragmentation: the kernel requests memory for data
    structures of varying sizes, some of which are less than a page
  - Some of it must be physically contiguous: certain hardware devices
    interact directly with physical memory, without going through the
    virtual-memory interface
- Two strategies for managing free memory for kernel processes:
  - Buddy system (a size-rounding sketch follows the figure below)
  - Slab allocation


Buddy System Allocation

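The buddy system hands out physically contiguous blocks whose sizes are powers of two, splitting a larger block in half repeatedly until a request fits (so a 21 KB request, for example, is satisfied with a 32 KB block). A minimal sketch of just that size-rounding step, not a real allocator:

    #include <stdio.h>

    /* Round a request up to the power-of-two block size the buddy system
       would hand out (sizes here are in KB, purely for illustration). */
    static size_t buddy_block_size(size_t request) {
        size_t size = 1;
        while (size < request)
            size <<= 1;                 /* keep doubling until the request fits */
        return size;
    }

    int main(void) {
        printf("request 21 KB -> %zu KB block\n", buddy_block_size(21));   /* 32 */
        printf("request 33 KB -> %zu KB block\n", buddy_block_size(33));   /* 64 */
        return 0;
    }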


Slab Allocation



Prepaging

- Prepaging
  - Reduces the large number of page faults that occur at process startup
  - Prepage all or some of the pages a process will need, before they are
    referenced
  - But if prepaged pages go unused, the I/O and memory were wasted
  - Assume s pages are prepaged and a fraction α of them is actually used
    - Is the cost of the s * α saved page faults greater or less than the
      cost of prepaging the s * (1 – α) unnecessary pages?
    - If α is near zero, prepaging loses


Page Size

- Page-size selection must take into consideration:
  - fragmentation
  - page-table size
  - I/O overhead
  - locality


Page Size (Cont.)

Small page size

- Advantages
  - less internal fragmentation
  - better fit for various data structures and code sections
  - less unused program in memory
- Disadvantages
  - programs need many pages, hence larger page tables


Page Size (Cont.)

- Overhead due to page-table space and internal fragmentation:

      overhead = (s * e) / p  +  p / 2

  where
    s = average process size in bytes
    p = page size in bytes
    e = page-table entry size in bytes

  The first term is the page-table space and the second the expected
  internal fragmentation. The overhead is minimized when

      p = sqrt(2 * s * e)

  (a worked example follows below)
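
For illustration (values assumed): with an average process size of s = 4 MB and a page-table entry size of e = 8 bytes, the optimal page size is p = sqrt(2 x 4 MB x 8) = sqrt(2^26) bytes = 8 KB.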


TLB Reach

- TLB reach: the amount of memory accessible from the TLB
- TLB reach = (TLB size) x (page size)
- Ideally, the working set of each process fits in the TLB; otherwise
  memory accesses frequently miss the TLB
- Increase the page size: this may lead to an increase in internal
  fragmentation, as not all applications require a large page size
- Provide multiple page sizes: this allows applications that require
  larger page sizes to use them without an increase in fragmentation
  (a worked example follows below)
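
For illustration (typical values assumed): a 64-entry TLB with 4 KB pages has a reach of 64 x 4 KB = 256 KB, while the same TLB with 2 MB pages reaches 64 x 2 MB = 128 MB.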


Program Structure

- Program structure matters for paging behavior
  - int data[128][128];
  - Each row is stored in one page
  - Program 1 (traverses column by column):

        for (j = 0; j < 128; j++)
            for (i = 0; i < 128; i++)
                data[i][j] = 0;

    128 x 128 = 16,384 page faults (assuming the process is allocated
    fewer than 128 frames, so each row's page is evicted before the
    column walk returns to it)

  - Program 2 (traverses row by row):

        for (i = 0; i < 128; i++)
            for (j = 0; j < 128; j++)
                data[i][j] = 0;

    128 page faults


I/O interlock

- I/O interlock – pages must sometimes be locked (pinned) in memory
  - A process doing I/O has some pages set aside as a buffer
  - After issuing the I/O, the process is suspended while the I/O device
    transfers data into the buffer
  - With global replacement, another process could select one of the
    buffer's pages as a victim while the transfer is still in progress
- Pages that are being used to copy a file to or from a device must
  therefore be locked so the page-replacement algorithm cannot evict them


Reason Why Frames Used For I/O Must Be In Memory



End of Chapter 9
