Goals of Memory Management: CSE 451: Operating Systems Spring 2012


CSE 451: Operating Systems
Spring 2012

Module 11
Memory Management

Ed Lazowska
[email protected]
Allen Center 570

Goals of memory management

• Allocate memory resources among competing processes, maximizing memory utilization and system throughput
• Provide isolation between processes
  – We have come to view "addressability" and "protection" as inextricably linked, even though they're really orthogonal
• Provide a convenient abstraction for programming (and for compilers, etc.)

© 2012 Gribble, Lazowska, Levy, Zahorjan

Tools of memory management

• Base and limit registers
• Swapping
• Paging (and page tables and TLB's)
• Segmentation (and segment tables)
• Page faults => page fault handling => virtual memory
• The policies that govern the use of these mechanisms

Today's desktop and server systems

• The basic abstraction that the OS provides for memory management is virtual memory (VM)
  – Efficient use of hardware (real memory)
    • VM enables programs to execute without requiring their entire address space to be resident in physical memory
    • Many programs don't need all of their code or data at once (or ever – branches they never take, or data they never read/write)
    • No need to allocate memory for it; the OS should adjust the amount allocated based on run-time behavior
  – Program flexibility
    • Programs can execute on machines with less RAM than they "need"
    • On the other hand, paging is really slow, so it must be minimized!
  – Protection
    • Virtual memory isolates address spaces from each other
    • One process cannot name addresses visible to others; each process has its own isolated address space

VM requires hardware and OS support

• MMU's, TLB's, page tables, page fault handling, …
• Typically accompanied by swapping, and at least limited segmentation

A trip down Memory Lane …

• Why?
  – Because it's instructive
  – Because embedded processors (98% or more of all processors) typically don't have virtual memory
  – Because some aspects are pertinent to allocating portions of a virtual address space – e.g., malloc()
• First, there was job-at-a-time batch programming
  – programs used physical addresses directly
  – OS loads job (perhaps using a relocating loader to "offset" branch addresses), runs it, unloads it
  – what if the program wouldn't fit into memory?
    • manual overlays!
• An embedded system may have only one program!

• Swapping
  – save a program's entire state (including its memory image) to disk
  – allows another program to be run
  – first program can be swapped back in and re-started right where it was
• The first timesharing system, MIT's "Compatible Time Sharing System" (CTSS), was a uni-programmed swapping system
  – only one memory-resident user
  – upon request completion or quantum expiration, a swap took place
  – bow wow wow … but it worked!

• Then came multiprogramming
  – multiple processes/jobs in memory at once
    • to overlap I/O and computation between processes/jobs, easing the task of the application programmer
  – memory management requirements:
    • protection: restrict which addresses processes can use, so they can't stomp on each other
    • fast translation: memory lookups must be fast, in spite of the protection scheme
    • fast context switching: when switching between jobs, updating memory hardware (protection and translation) must be quick

Virtual addresses for multiprogramming

• To make it easier to manage the memory of multiple processes, make processes use virtual addresses (which is not what we mean by "virtual memory" today!)
  – virtual addresses are independent of the location in physical memory (RAM) where the referenced data lives
    • OS determines location in physical memory
  – instructions issued by the CPU reference virtual addresses
    • e.g., pointers, arguments to load/store instructions, PC …
  – virtual addresses are translated by hardware into physical addresses (with some setup from the OS)
• The set of virtual addresses a process can reference is its address space
  – many different possible mechanisms for translating virtual addresses to physical addresses
    • we'll take a historical walk through them, ending up with our current techniques
• Note: We are not yet talking about paging, or virtual memory
  – Only that the program issues addresses in a virtual address space, and these must be translated to reference memory (the physical address space)
  – For now, think of the program as having a contiguous virtual address space that starts at 0, and a contiguous physical address space that starts somewhere else

Old technique #1: Fixed partitions

• Physical memory is broken up into fixed partitions
  – partitions may have different sizes, but the partitioning never changes
  – hardware requirement: base register, limit register
    • physical address = virtual address + base register
    • base register loaded by OS when it switches to a process
  – how do we provide protection?
    • if (physical address > base + limit) then… ?
• Advantages
  – Simple
• Problems
  – internal fragmentation: the available partition is larger than what was requested
  – external fragmentation: two small partitions left, but one big job – what sizes should the partitions be??

Mechanics of fixed partitions

[Figure: the offset (virtual address) is compared against the limit register; if within the limit, it is added to the base register (e.g., P2's base: 6K) to form the physical address, otherwise a protection fault is raised. Physical memory is shown divided into partitions at 0, 2K, 6K, 8K, and 12K.]
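The base-register translation and limit check above can be sketched in a few lines. This is a minimal sketch, not hardware: the register values are hypothetical, and the check here compares the virtual address against the limit, which is equivalent to checking the physical address against base + limit.

```python
def translate(vaddr, base, limit):
    """Base/limit translation: check the virtual address against the
    limit register, then add the base register to form the physical
    address (physical address = virtual address + base)."""
    if vaddr >= limit:                  # out of bounds: hardware raises a fault
        raise MemoryError("protection fault")
    return base + vaddr

# Hypothetical process loaded into the partition at 6K with a 2K limit:
paddr = translate(0x100, base=6 * 1024, limit=2 * 1024)
assert paddr == 6 * 1024 + 0x100
```

Note that a context switch only has to reload the two registers, which is what makes this scheme's context switching fast.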

Old technique #2: Variable partitions

• Obvious next step: physical memory is broken up into partitions dynamically – partitions are tailored to programs
  – hardware requirements: base register, limit register
  – physical address = virtual address + base register
  – how do we provide protection?
    • if (physical address > base + limit) then… ?
• Advantages
  – no internal fragmentation
    • simply allocate the partition size to be just big enough for the process (assuming we know what that is!)
• Problems
  – external fragmentation
    • as we load and unload jobs, holes are left scattered throughout physical memory
    • slightly different than the external fragmentation for fixed partition systems

Mechanics of variable partitions

[Figure: the virtual address is compared against the limit register (P3's size); if within the limit, it is added to the base register (P3's base) to form the physical address, otherwise a protection fault is raised. Physical memory is shown divided into variable-sized partitions 0 through 4.]

Dealing with fragmentation

• Compact memory by copying
  – Swap a program out
  – Re-load it, adjacent to another
  – Adjust its base register
  – "Lather, rinse, repeat"
  – Ugh

[Figure: partitions 0 through 4 before and after compaction; a partition is moved adjacent to its neighbor, closing the hole between them.]

Modern technique: Paging

• Solve the external fragmentation problem by using fixed sized units in both physical and virtual memory
• Solve the internal fragmentation problem by making the units small

[Figure: a virtual address space of pages 0 through X mapped onto a physical address space of frames 0 through Y.]

Life is easy …

• For the programmer …
  – Processes view memory as a contiguous address space from bytes 0 through N – a virtual address space
  – N is independent of the actual hardware
  – In reality, virtual pages are scattered across physical memory frames – not contiguous as earlier
    • Virtual-to-physical mapping
    • This mapping is invisible to the program
• For the memory manager …
  – Efficient use of memory, because there is very little internal fragmentation
  – No external fragmentation at all
    • No need to copy big chunks of memory around to coalesce free space
• For the protection system
  – One process cannot "name" another process's memory – there is complete isolation
    • The virtual address 0xDEADBEEF maps to different physical addresses for different processes

Note: Assume for now that all pages of the address space are resident in memory – no "page faults"

Address translation

• Translating virtual addresses
  – a virtual address has two parts: virtual page number & offset
  – virtual page number (VPN) is an index into a page table
  – page table entry contains the page frame number (PFN)
  – physical address is PFN::offset
• Page tables
  – managed by the OS
  – one page table entry (PTE) per page in the virtual address space
    • i.e., one PTE per VPN
  – map virtual page number (VPN) to page frame number (PFN)
    • VPN is simply an index into the page table

Paging (1K-byte pages)

[Figure: two processes with 1K-byte pages. Process 0's page table maps page 0 → frame 3 and page 1 → frame 5; process 1's page table maps page 0 → frame 7, page 1 → frame 5, page 2 → (no mapping), and page 3 → frame 1. Physical memory holds page frames 0 through 9 at 1K boundaries (0K–10K). A reference through the unmapped entry? Page fault – next lecture!]

Mechanics of address translation

[Figure: a virtual address is split into a virtual page # and an offset; the virtual page # indexes the page table to obtain a page frame #, which is concatenated with the unchanged offset to form the physical address into one of page frames 0 through Y.]

Example of address translation

• Assume 32 bit addresses
  – assume the page size is 4KB (4096 bytes, or 2^12 bytes)
  – VPN is 20 bits long (2^20 VPNs), offset is 12 bits long
• Let's translate virtual address 0x13325328
  – VPN is 0x13325, and offset is 0x328
  – assume page table entry 0x13325 contains the value 0x03004
    • page frame number is 0x03004
    • VPN 0x13325 maps to PFN 0x03004
  – physical address = PFN::offset = 0x03004328
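The translation walked through above can be checked mechanically. This sketch uses the slide's numbers (4KB pages, so a 12-bit offset): split the address into VPN and offset, look up the PFN, and concatenate.

```python
PAGE_SHIFT = 12                  # 4KB pages: the offset is the low 12 bits
PAGE_MASK = (1 << PAGE_SHIFT) - 1

def translate(vaddr, page_table):
    vpn = vaddr >> PAGE_SHIFT             # VPN is simply an index into the page table
    offset = vaddr & PAGE_MASK            # offset passes through unchanged
    pfn = page_table[vpn]                 # PTE holds the page frame number
    return (pfn << PAGE_SHIFT) | offset   # physical address = PFN::offset

# The slide's example: the PTE for VPN 0x13325 contains PFN 0x03004
page_table = {0x13325: 0x03004}
assert translate(0x13325328, page_table) == 0x03004328
```

A dict stands in for the hardware page table here; a real page table is a dense array indexed by VPN.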

Page Table Entries – an opportunity!

• As long as there's a PTE lookup per memory reference, we might as well add some functionality
  – We can add protection
    • A virtual page can be read-only, and result in a fault if a store to it is attempted
    • Some pages may not map to anything – a fault will occur if a reference is attempted
  – We can add some "accounting information"
    • Can't do anything fancy, since address translation must be fast
    • Can keep track of whether or not a virtual page is being used, though
    • This will help the paging algorithm, once we get to paging

Page Table Entries (PTE's)

  1   1   1    2           20
  V   R   M  prot   page frame number

• PTE's control mapping
  – the valid bit says whether or not the PTE can be used
    • says whether or not a virtual address is valid
    • it is checked each time a virtual address is used
  – the referenced bit says whether the page has been accessed
    • it is set when the page has been read or written
  – the modified bit says whether or not the page is dirty
    • it is set when a write to the page has occurred
  – the protection bits control which operations are allowed
    • read, write, execute
  – the page frame number determines the physical page
    • physical page start address = PFN × page size
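The PTE layout above can be illustrated by packing and unpacking its fields. The field widths (V, R, M = 1 bit each, prot = 2 bits, PFN = 20 bits) are from the slide; the exact bit positions chosen here are an assumption for illustration — real MMUs define their own layouts.

```python
# Assumed layout (high to low): V | R | M | prot | PFN
PFN_BITS = 20

def make_pte(valid, referenced, modified, prot, pfn):
    """Pack the five PTE fields into one integer."""
    return (valid << 24) | (referenced << 23) | (modified << 22) \
         | (prot << PFN_BITS) | pfn

def pte_valid(pte):    return (pte >> 24) & 1          # can this PTE be used?
def pte_modified(pte): return (pte >> 22) & 1          # is the page dirty?
def pte_pfn(pte):      return pte & ((1 << PFN_BITS) - 1)

pte = make_pte(valid=1, referenced=0, modified=0, prot=0b01, pfn=0x03004)
assert pte_valid(pte) == 1 and pte_pfn(pte) == 0x03004
```

The modified (dirty) bit is what lets the paging algorithm skip writing an unchanged page back to disk on eviction.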

Paging advantages

• Easy to allocate physical memory
  – physical memory is allocated from a free list of frames
    • to allocate a frame, just remove it from the free list
  – external fragmentation is not a problem
    • managing variable-sized allocations is a huge pain in the neck
      – "buddy system"
• Leads naturally to virtual memory
  – entire program need not be memory resident
  – take page faults using the "valid" bit
  – all "chunks" are the same size (the page size)
  – but paging was originally introduced to deal with external fragmentation, not to allow programs to be partially resident

Paging disadvantages

• Can still have internal fragmentation
  – Process may not use memory in exact multiples of pages
  – But minor, because of the small page size relative to the address space size
• Memory reference overhead
  – 2 references per address lookup (page table, then memory)
  – Solution: use a hardware cache to absorb page table lookups
    • translation lookaside buffer (TLB) – next class
• Memory required to hold page tables can be large
  – need one PTE per page in the virtual address space
  – 32 bit AS with 4KB pages = 2^20 PTEs = 1,048,576 PTEs
  – 4 bytes/PTE = 4MB per page table
    • OS's have separate page tables per process
    • 25 processes = 100MB of page tables
  – Solution: page the page tables (!!!)
    • (ow, my brain hurts… more later)
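The page-table-size arithmetic above can be checked directly:

```python
# 32-bit address space, 4KB pages, 4 bytes per PTE (the slide's numbers)
address_bits, page_shift, pte_bytes = 32, 12, 4

num_ptes = 1 << (address_bits - page_shift)   # one PTE per virtual page
assert num_ptes == 1_048_576                  # 2^20 PTEs

table_bytes = num_ptes * pte_bytes
assert table_bytes == 4 * 1024 * 1024         # 4MB per page table

# With a separate page table per process:
assert 25 * table_bytes == 100 * 1024 * 1024  # 25 processes = 100MB
```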

Segmentation
(We will be back to paging soon!)

• Paging
  – mitigates various memory allocation complexities (e.g., fragmentation)
  – view an address space as a linear array of bytes
  – divide it into pages of equal size (e.g., 4KB)
  – use a page table to map virtual pages to physical page frames
    • page (logical) => page frame (physical)
• Segmentation
  – partition an address space into logical units
    • stack, code, heap, subroutines, …
  – a virtual address is <segment #, offset>

What's the point?

• More "logical"
  – absent segmentation, a linker takes a bunch of independent modules that call each other and linearizes them
  – they are really independent; segmentation treats them as such
• Facilitates sharing and reuse
  – a segment is a natural unit of sharing – a subroutine or function
• A natural extension of variable-sized partitions
  – variable-sized partition = 1 segment/process
  – segmentation = many segments/process

Hardware support

• Segment table
  – multiple base/limit pairs, one per segment
  – segments named by segment #, used as an index into the table
    • a virtual address is <segment #, offset>
  – the offset of the virtual address is added to the base address of the segment to yield the physical address

Segment lookups

[Figure: the segment # of the virtual address indexes the segment table to obtain a (limit, base) pair; the offset is compared against the limit and, if within it, added to the base to form the physical address into one of segments 0 through 4, otherwise a protection fault is raised.]
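The segment lookup described above might be sketched as follows; it is the base/limit scheme again, just with one (base, limit) pair per segment. The segment table contents here are hypothetical.

```python
# One (base, limit) pair per segment; the segment # indexes the table.
segment_table = [
    (0x10000, 0x2000),   # segment 0: base 0x10000, limit 0x2000 (e.g., code)
    (0x40000, 0x1000),   # segment 1: base 0x40000, limit 0x1000 (e.g., stack)
]

def translate(segment, offset):
    """A virtual address is <segment #, offset>: check the offset against
    the segment's limit, then add the segment's base."""
    base, limit = segment_table[segment]
    if offset >= limit:                 # beyond the segment: protection fault
        raise MemoryError("protection fault")
    return base + offset

assert translate(1, 0x123) == 0x40123
```

Sharing falls out naturally: two processes share a subroutine by pointing a segment table entry at the same base/limit pair.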

Pros and cons

• Yes, it's "logical" and it facilitates sharing and reuse
• But it has all the horror of a variable partition system
  – except that linking is simpler, and the "chunks" that must be allocated are smaller than a "typical" linear address space
• What to do?

Combining segmentation and paging

• Can combine these techniques
  – x86 architecture supports both segments and paging
• Use segments to manage logical units
  – segments vary in size, but are typically large (multiple pages)
• Use pages to partition segments into fixed-size chunks
  – each segment has its own page table
    • there is a page table per segment, rather than per user address space
  – memory allocation becomes easy once again
    • no contiguous allocation, no external fragmentation

  virtual address: | Segment # | Page # | Offset within page |
                               |<- Offset within segment  ->|
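The combined lookup can be sketched as a two-level translation: the segment # selects a per-segment page table, and <page #, offset> is the offset within that segment. The field widths here (10-bit segment #, 10-bit page #, 12-bit offset in a 32-bit address) are an illustrative assumption, not the x86 layout.

```python
SEG_SHIFT, PAGE_SHIFT = 22, 12     # assumed: 10-bit seg #, 10-bit page #, 12-bit offset

def split(vaddr):
    """Break a virtual address into <segment #, page #, offset>."""
    seg = vaddr >> SEG_SHIFT
    page = (vaddr >> PAGE_SHIFT) & ((1 << (SEG_SHIFT - PAGE_SHIFT)) - 1)
    offset = vaddr & ((1 << PAGE_SHIFT) - 1)
    return seg, page, offset

def translate(vaddr, segment_page_tables):
    seg, page, offset = split(vaddr)
    pfn = segment_page_tables[seg][page]   # one page table per segment
    return (pfn << PAGE_SHIFT) | offset    # physical address = PFN::offset

# Hypothetical mapping: segment 1, page 2 -> frame 0x0A0
tables = {1: {2: 0x0A0}}
vaddr = (1 << SEG_SHIFT) | (2 << PAGE_SHIFT) | 0x345
assert translate(vaddr, tables) == 0x0A0345
```

Because each segment's pages need not be contiguous in physical memory, allocation avoids the external fragmentation of pure segmentation.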

• Linux:
– 1 kernel code segment, 1 kernel data segment
– 1 user code segment, 1 user data segment
– all of these segments are paged

• Note: this is a very limited/boring use of segments!

