ch1 PC
Introduction
What is Parallel Architecture?
A parallel computer is a collection of processing elements that
cooperate to solve large problems fast
Some broad issues:
• Resource Allocation:
– how large a collection?
– how powerful are the elements?
– how much memory?
• Data access, Communication and Synchronization
– how do the elements cooperate and communicate?
– how are data transmitted between processors?
– what are the abstractions and primitives for cooperation?
• Performance and Scalability
– how does it all translate into performance?
– how does it scale?
Why Study Parallel Architecture?
Parallelism:
• Provides alternative to faster clock for performance
• Applies at all levels of system design
• Is a fascinating perspective from which to view architecture
• Is increasingly central in information processing
Why Study It Today?
History: diverse and innovative organizational structures, often
tied to novel programming models
Rapidly maturing under strong technological constraints
• The “killer micro” is ubiquitous
• Laptops and supercomputers are fundamentally similar!
• Technological trends cause diverse approaches to converge
Inevitability of Parallel Computing
Application demands: Our insatiable need for computing cycles
• Scientific computing: CFD, Biology, Chemistry, Physics, ...
• General-purpose computing: Video, Graphics, CAD, Databases, TP...
Technology Trends
• Number of transistors on chip growing rapidly
• Clock rates expected to go up only slowly
Architecture Trends
• Instruction-level parallelism valuable but limited
• Coarser-level parallelism, as in MPs, is the most viable approach
Economics
Current trends:
• Today’s microprocessors have multiprocessor support
• Servers and workstations becoming MP: Sun, SGI, DEC, COMPAQ!...
• Tomorrow’s microprocessors are multiprocessors
Application Trends
Demand for cycles fuels advances in hardware, and vice-versa
• This cycle drives exponential increases in microprocessor performance
• Drives parallel architecture harder: most demanding applications
Scientific Computing Demand
Engineering Computing Demand
Large parallel machines a mainstay in many industries
• Petroleum (reservoir analysis)
• Automotive (crash simulation, drag analysis, combustion efficiency)
• Aeronautics (airflow analysis, engine efficiency, structural mechanics, electromagnetism)
• Computer-aided design
• Pharmaceuticals (molecular modeling)
• Visualization
– in all of the above
– entertainment (films like Toy Story)
– architecture (walk-throughs and rendering)
• Financial modeling (yield and derivative analysis)
• etc.
Applications: Speech and Image Processing
[Chart: compute demand from ~1 MIPS to ~10 GIPS for applications including sub-band speech coding, speaker verification, CELP speech coding, isolated speech recognition (200 words), telephone number recognition, ISDN-CD stereo receiver, CIF video, continuous speech recognition (1,000 and 5,000 words), and HDTV receiver]
TPC-C Results for March 1996
[Chart: TPC-C throughput (tpmC), 0 to 25,000, vs. number of processors, 0 to 120, for Tandem Himalaya, DEC Alpha, SGI PowerChallenge, HP PA, IBM PowerPC, and other systems]
• Parallelism is pervasive
• Small to moderate scale parallelism very important
• Difficult to obtain snapshot to compare across vendor platforms
Summary of Application Trends
Transition to parallel computing has occurred for scientific and
engineering computing
Rapid progress also under way in commercial computing
• Database and transactions as well as financial
• Usually smaller-scale, but large-scale systems also used
Technology Trends
[Chart: relative performance, log scale, 1965-1995, for supercomputers, mainframes, minicomputers, and microprocessors; microprocessor performance rises steepest, catching the others]
The natural building block for multiprocessors is now also about the
fastest!
General Technology Trends
• Microprocessor performance increases 50% - 100% per year
• Transistor count doubles every 3 years
• DRAM size quadruples every 3 years
• Huge investment per generation is carried by huge commodity market
[Chart: integer and FP benchmark performance, 1987-1992, for Sun 4/260, MIPS M/120, MIPS M2000, IBM RS6000/540, HP 9000/750, and DEC alpha; performance climbs steeply across the period]
Clock Frequency Growth Rate
[Chart: clock rate (MHz), log scale 0.1 to 1,000, vs. year 1970-2005, for i4004, i8008, i8080, i8086, i80286, i80386, Pentium100, and R10000]
Transistor Count Growth Rate
[Chart: transistor count, log scale 1,000 to 100,000,000, vs. year 1970-2005, for i4004, i8008, i8080, i8086, i80286, i80386, R2000, R3000, Pentium, and R10000]
Architectural Trends
Architecture translates technology’s gifts to performance and capability
Resolves the tradeoff between parallelism and locality
• Current microprocessor: 1/3 compute, 1/3 cache, 1/3 off-chip connect
• Tradeoffs may change with scale and technology advances
Understanding microprocessor architectural trends
• Helps build intuition about design issues for parallel machines
• Shows fundamental role of parallelism even in “sequential” computers
Architectural Trends
Greatest trend in VLSI generation is increase in parallelism
• Up to 1985: bit-level parallelism: 4-bit -> 8-bit -> 16-bit
– slows after 32-bit
– adoption of 64-bit now under way, 128-bit far off (not a performance issue)
– great inflection point when 32-bit micro and cache fit on a chip
• Mid 80s to mid 90s: instruction level parallelism
– pipelining and simple instruction sets, + compiler advances (RISC)
– on-chip caches and functional units => superscalar execution
– greater sophistication: out of order execution, speculation, prediction
• to deal with control transfer and latency problems
Phases in VLSI Generation
[Chart: the same transistor-count growth, 1970-2005, annotated with three phases: bit-level parallelism (the i4004 through i8086 era), instruction-level parallelism (i80286 through Pentium and R10000), and thread-level parallelism (?)]
Architectural Trends: ILP
• Reported speedups for superscalar processors
• Horst, Harris, and Jardine [1990] ...................... 1.37
• Wang and Wu [1988] .......................................... 1.70
• Smith, Johnson, and Horowitz [1989] .............. 2.30
• Murakami et al. [1989] ........................................ 2.55
• Chang et al. [1991] ............................................. 2.90
• Jouppi and Wall [1989] ...................................... 3.20
• Lee, Kwok, and Briggs [1991] ........................... 3.50
• Wall [1991] .......................................................... 5
• Melvin and Patt [1991] ....................................... 8
• Butler et al. [1991] ............................................. 17+
• Large variance due to difference in
– application domain investigated (numerical versus non-numerical)
– capabilities of processor modeled
ILP Ideal Potential
[Charts: (left) fraction of total cycles (%) vs. number of instructions issued, 0 to 6+; (right) speedup vs. instructions issued per cycle, 0 to 15]
• Infinite resources and fetch bandwidth, perfect branch prediction and renaming
– real caches and non-zero miss latencies
Results of ILP Studies
• Concentrate on parallelism for 4-issue machines
[Chart: speedup (1x to 4x) under perfect branch prediction vs. 1 branch unit with real prediction]
[Chart: number of processors in bus-based shared-memory machines over time, from Sequent B8000, Symmetry21, Power, SS1000/SS1000E, SE10/SE30, SGI Challenge, and AS8400 up to CRAY CS6400 and the Sun E10000 with up to 64 processors]
Bus Bandwidth
[Chart: shared bus bandwidth (MB/s), log scale 10 to 100,000, vs. year 1984-1998, from Sequent B8000 and B2100, Symmetry81/21, and SGI PowerSeries near the bottom, through SS690MP 120/140, SS10/SS20, SE10/SE30/SE60/SE70, SS1000/SS1000E, SC2000/SC2000E, AS2100, P-Pro, HPK400, SGI Challenge, and CS6400, up to XL, AS8400, SGI PowerCh, Sun E6000, and Sun E10000 above 10,000 MB/s]
Economics
Commodity microprocessors not only fast but CHEAP
• Development cost is tens of millions of dollars ($5M to $100M typical)
• BUT, many more are sold compared to supercomputers
• Crucial to take advantage of the investment, and use the
commodity building block
• Exotic parallel architectures amount to no more than special-purpose machines
Consider Scientific Supercomputing
Raw Uniprocessor Performance: LINPACK
[Chart: LINPACK MFLOPS, log scale 1 to 10,000, vs. year 1975-2000; CRAY vector machines (CRAY 1s, Xmp/14se, Xmp/416, Ymp, C90, T94) vs. microprocessors (Sun 4/260, MIPS M/120, MIPS M/2000, IBM RS6000/540, HP 9000/750, HP9000/735, DEC Alpha AXP, DEC Alpha, IBM Power2/990, MIPS R4400, DEC 8200), each at n = 100 and n = 1,000; micros close the gap with Crays by the mid-1990s]
Raw Parallel Performance: LINPACK
[Chart: LINPACK performance, log scale 0.1 to 10,000, vs. year 1985-1996; MPP peak (iPSC/860, nCUBE/2 (1024), Delta, CM-2, CM-200, CM-5, Paragon XP/S, T3D, Paragon XP/S MP (1024)) vs. CRAY peak (Xmp/416 (4), Ymp/832 (8), C90 (16), T932 (32))]
• Even vector Crays became parallel: X-MP (2-4), Y-MP (8), C-90 (16), T932 (32)
• Since 1993, Cray produces MPPs too (T3D, T3E)
500 Fastest Computers
[Chart: number of systems by type among the 500 fastest computers, 11/93 to 11/96; MPPs grow steadily (to 319 by 11/96) while PVPs decline sharply (to 63) and SMPs also fall (to 73)]
Summary: Why Parallel Architecture?
Increasingly attractive
• Economics, technology, architecture, application demand
Increasingly central and mainstream
Parallelism exploited at many levels
• Instruction-level parallelism
• Multiprocessor servers
• Large-scale multiprocessors (“MPPs”)
Focus of this class: multiprocessor level of parallelism
Same story from memory system perspective
• Increase bandwidth, reduce average latency with many local memories
Wide range of parallel architectures make sense
• Different cost, performance and scalability
Convergence of Parallel Architectures
History
Historically, parallel architectures tied to programming models
• Divergent architectures, with no predictable pattern of growth.
[Diagram: divergent architectures (systolic arrays, SIMD, dataflow, message passing, shared memory), each defining its own stack of application software, system software, and architecture]
Today
Extension of “computer architecture” to support communication and cooperation
• OLD: Instruction Set Architecture
• NEW: Communication Architecture
Defines
• Critical abstractions, boundaries, and primitives (interfaces)
• Organizational structures that implement interfaces (hw or sw)
Modern Layered Framework
[Diagram: layered framework: compilation or library above the communication abstraction (the user/system boundary), operating systems support below it; beneath the hardware/software boundary sit the communication hardware and the physical communication medium]
Programming Model
What programmer uses in coding applications
Specifies communication and synchronization
Examples:
• Multiprogramming: no communication or synch. at program level
• Shared address space: like bulletin board
• Message passing: like letters or phone calls, explicit point to point
• Data parallel: more regimented, global actions on data
– Implemented with shared address space or message passing
Communication Abstraction
User level communication primitives provided
• Realizes the programming model
• Mapping exists between language primitives of programming model and
these primitives
Supported directly by hw, or via OS, or via user sw
Much debate about what to support in sw and about the gap between layers
Today:
• Hw/sw interface tends to be flat, i.e. complexity roughly uniform
• Compilers and software play important roles as bridges today
• Technology trends exert strong influence
Result is convergence in organizational structure
• Relatively simple, general purpose communication primitives
Communication Architecture
= User/System Interface + Implementation
User/System Interface:
• Comm. primitives exposed to user-level by hw and system-level sw
Implementation:
• Organizational structures that implement the primitives: hw or OS
• How optimized are they? How integrated into processing node?
• Structure of network
Goals:
• Performance
• Broad applicability
• Programmability
• Scalability
• Low Cost
Evolution of Architectural Models
Historically machines tailored to programming models
• Prog. model, comm. abstraction, and machine organization lumped together as the
“architecture”
Evolution helps understand convergence
• Identify core concepts
Shared Address Space Architectures
Any processor can directly reference any memory location
• Communication occurs implicitly as result of loads and stores
Convenient:
• Location transparency
• Similar programming model to time-sharing on uniprocessors
– Except processes run on different processors
– Good throughput on multiprogrammed workloads
Shared Address Space Model
Process: virtual address space plus one or more threads of control
Portions of address spaces of processes are shared
[Diagram: virtual address spaces for a collection of processes communicating via shared addresses; each process P0-Pn has a private portion and a shared portion of its address space, and the shared portions map (via loads and stores) to common physical addresses in the machine physical address space]
• Writes to shared address visible to other threads (in other processes too)
• Natural extension of uniprocessor model: conventional memory operations for comm.; special atomic operations for synchronization
• OS uses shared memory to coordinate processes
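To make the model concrete, here is a minimal C sketch (names and values hypothetical, using pthreads plus C11 atomics): one thread communicates a value through an ordinary shared variable, and a special atomic operation on a flag provides the synchronization.

    #include <pthread.h>
    #include <stdatomic.h>
    #include <stdio.h>

    int shared_data;              /* communicated by an ordinary store and load */
    atomic_int ready = 0;         /* special atomic operation used for synchronization */

    void *producer(void *arg) {
        shared_data = 42;             /* conventional memory operation (store) */
        atomic_store(&ready, 1);      /* flag: publish the store to other threads */
        return NULL;
    }

    int main(void) {
        pthread_t t;
        pthread_create(&t, NULL, producer, NULL);
        while (atomic_load(&ready) == 0)  /* spin until the producer's flag is set */
            ;
        printf("%d\n", shared_data);      /* this load sees the producer's store */
        pthread_join(t, NULL);
        return 0;
    }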
Communication Hardware
Also natural extension of uniprocessor
Already have processor, one or more memory modules and I/O
controllers connected by hardware interconnect of some sort
[Diagrams: a uniprocessor node (processor, memory, I/O devices on an interconnect) beside the same organization with multiple processors on the interconnect]
History
“Mainframe” approach
• Motivated by multiprogramming
• Extends crossbar used for mem bw and I/O
“Minicomputer” approach
• Almost all microprocessor systems have bus
• Motivated by multiprogramming, TP
• Used heavily for parallel computing
• Called symmetric multiprocessor (SMP)
[Diagrams: a crossbar connecting processors (P) and I/O ports to memory modules (M), and a shared bus connecting processors with caches (C) to memory (M) and I/O]
Example: Intel Pentium Pro Quad
[Diagram: four P-Pro modules, each containing a CPU, interrupt controller, 256-KB L2 $, and bus interface, on a shared bus; a memory interface unit (MIU) connects 1-, 2-, or 4-way interleaved DRAM; PCI bridges attach PCI buses with I/O cards]
Example: SUN Enterprise
[Diagram: CPU/mem cards, each holding two processors (P with $ and $2), a memory controller, and a bus interface/switch; I/O cards holding a bus interface with SBUS slots, 100bT, SCSI, and 2 FiberChannel]
• 16 cards of either type: processors + memory, or I/O
• All memory accessed over bus, so symmetric
• Higher bandwidth, higher latency bus
Scaling Up
[Diagrams: a "dance hall" organization with memory modules (M) on the far side of a scalable network from processors with caches ($, P), vs. distributed memory with a memory module beside each processor; detail of a node with P, $, Mem, and a combined mem ctrl and NI attached to an XY switch]
Message Passing Architectures
Complete computer as building block, including I/O
• Communication via explicit I/O operations
Message-Passing Abstraction
[Diagram: process P executes Send X, Q, t and process Q executes Receive Y, P, t; the pair is matched on process and tag t, and data is copied from address X in P's local address space to address Y in Q's local address space]
Example: IBM SP-2
[Diagram: IBM SP-2 node: Power 2 CPU with L2 $ on a memory bus to a memory controller with 4-way interleaved DRAM; on the MicroChannel bus, a NIC containing an i860, NI, DMA, and DRAM; nodes are joined by a general interconnection network formed from 8-port switches]
Example: Intel Paragon
[Diagram: Intel Paragon node: two i860 processors with L1 $, a memory controller with DMA and driver, 4-way interleaved DRAM, and an NI onto a 2D grid network (8 bits, 175 MHz, bidirectional) with a processing node attached to every switch; pictured: Sandia's Intel Paragon XP/S-based supercomputer]
Toward Architectural Convergence
Evolution and role of software have blurred boundary
• Send/recv supported on SAS machines via buffers
• Can construct global address space on MP using hashing
• Page-based (or finer-grained) shared virtual memory
Hardware organization converging too
• Tighter NI integration even for MP (low-latency, high-bandwidth)
• At lower level, even hardware SAS passes hardware messages
Even clusters of workstations/SMPs are parallel systems
• Emergence of fast system area networks (SAN)
Programming models distinct, but organizations converging
• Nodes connected by general network and communication assists
• Implementations also converging, at least in high-end machines
Data Parallel Systems
Programming model
• Operations performed in parallel on each element of data structure
• Logically single thread of control, performs sequential or parallel steps
• Conceptually, a processor associated with each data element
Architectural model
• Array of many simple, cheap processors with little memory each
– Processors don’t sequence through instructions
• Attached to a control processor that issues instructions
• Specialized and general communication, cheap global synchronization
[Diagram: a control processor issuing instructions to an array of PEs]
Application of Data Parallelism
• Each PE contains an employee record with his/her salary
If salary > 100K then
    salary = salary * 1.05
else
    salary = salary * 1.10
• Logically, the whole operation is a single step
• Some processors enabled for arithmetic operation, others disabled
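A minimal sketch of the same computation as a data-parallel loop in C (using an OpenMP pragma in place of a PE array; the array name and size are hypothetical). The if/else predication plays the role of enabling and disabling processors:

    #define N_EMPLOYEES 1024                   /* hypothetical record count */

    void raise_salaries(double salary[N_EMPLOYEES]) {
        /* Logically a single step: every element updated "at once" */
        #pragma omp parallel for
        for (int i = 0; i < N_EMPLOYEES; i++) {
            if (salary[i] > 100000.0)
                salary[i] *= 1.05;             /* PEs enabled for this branch */
            else
                salary[i] *= 1.10;             /* remaining PEs enabled here */
        }
    }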
Other examples:
• Finite differences, linear algebra, ...
• Document searching, graphics, image processing, ...
Evolution and Convergence
Rigid control structure (SIMD in Flynn taxonomy)
• SISD = uniprocessor, MIMD = multiprocessor
Popular when cost savings of centralized sequencer high
• 60s when CPU was a cabinet
• Replaced by vectors in mid-70s
– More flexible w.r.t. memory layout and easier to manage
• Revived in mid-80s when 32-bit datapath slices just fit on chip
• No longer true with modern microprocessors
Other reasons for demise
• Simple, regular applications have good locality, can do well anyway
• Loss of applicability due to hardwiring data parallelism
– MIMD machines as effective for data parallelism and more general
Prog. model converges with SPMD (single program multiple data)
• Contributes need for fast global synchronization
• Structured global address space, implemented with either SAS or MP
Dataflow Architectures
Represent computation as a graph of essential dependences
• Logical processor at each node, activated by availability of operands
• Message (tokens) carrying tag of next instruction sent to next processor
• Tag compared with others in matching store; match fires execution
Example:
a = (b + 1) × (b − c)
d = c × e
f = a × d
[Diagram: dataflow graph for these statements with inputs 1, b, c, e; tokens travel over a network into a token store, matched tokens fetch their instruction from the program store, and enabled instructions wait in the token queue for execution]
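For comparison, the same computation in sequential C (inputs hypothetical); a and d depend on disjoint operands, so a dataflow machine can fire them in parallel, while f fires only once both of its input tokens arrive:

    #include <stdio.h>

    int main(void) {
        double b = 2, c = 3, e = 4;        /* hypothetical input tokens */
        double a = (b + 1) * (b - c);      /* independent of d */
        double d = c * e;                  /* independent of a */
        double f = a * d;                  /* enabled only when a and d are available */
        printf("f = %g\n", f);
        return 0;
    }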
Evolution and Convergence
Key characteristics
• Ability to name operations, synchronization, dynamic scheduling
Problems
• Operations have locality across them, useful to group together
• Handling complex data structures like arrays
• Complexity of matching store and memory units
• Expose too much parallelism (?)
Lasting contributions:
• Integration of communication with thread (handler) generation
• Tightly integrated communication and fine-grained synchronization
• Remained useful concept for software (compilers etc.)
Systolic Architectures
• Replace single processor with array of regular processing elements
• Orchestrate data flow for high throughput with less memory access
[Diagram: conventional organization (memory M feeding a single PE) vs. systolic organization (memory feeding a pipeline of PEs)]
Systolic Arrays (contd.)
Example: systolic array for 1-D convolution
y(i) = w1 × x(i) + w2 × x(i + 1) + w3 × x(i + 2) + w4 × x(i + 3)
[Diagram: inputs x1-x8 stream through four cells holding weights w4, w3, w2, w1; outputs y1, y2, y3 accumulate as partial sums pass from cell to cell]
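A plain C sketch of the function the array computes (written sequentially here; in the systolic version each cell holds one weight and the x values stream through). The array lengths are hypothetical:

    #define NX 8                               /* hypothetical input length */
    #define NW 4                               /* one weight per systolic cell */

    /* y(i) = w[0]*x(i) + w[1]*x(i+1) + w[2]*x(i+2) + w[3]*x(i+3) */
    void convolve(const double x[NX], const double w[NW],
                  double y[NX - NW + 1]) {
        for (int i = 0; i < NX - NW + 1; i++) {
            y[i] = 0.0;
            for (int j = 0; j < NW; j++)
                y[i] += w[j] * x[i + j];       /* one multiply-add per cell */
        }
    }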
Convergence: Generic Parallel Architecture
A generic modern multiprocessor
[Diagram: generic node (processor, $, Mem, and communication assist (CA)) replicated and connected by a scalable network]
Fundamental Design Issues
At any layer, there are interface (contract) aspects and performance aspects
• Naming: How are logically shared data and/or processes referenced?
• Operations: What operations are provided on these data?
• Ordering: How are accesses to data ordered and coordinated?
• Replication: How are data replicated to reduce communication?
• Communication Cost: Latency, bandwidth, overhead, occupancy
Other issues
• Node Granularity: How to split between processors and memory?
• ...
Sequential Programming Model
Contract
• Naming: Can name any variable in virtual address space
– Hardware (and perhaps compilers) does translation to physical addresses
• Operations: Loads and Stores
• Ordering: Sequential program order
Performance
• Rely on dependences on single location (mostly): dependence order
• Compilers and hardware violate other orders without getting caught
• Compiler: reordering and register allocation
• Hardware: out of order, pipeline bypassing, write buffers
• Transparent replication in caches
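A minimal illustration of the contract (variable names hypothetical): only the dependence through a single location constrains order; the two independent stores may be reordered by the compiler or hardware without being caught:

    int x, y, z;

    void update(void) {
        x = 1;         /* these two stores touch different locations ... */
        y = 2;         /* ... so compiler/hardware may reorder them freely */
        z = x + 1;     /* dependence through x: must observe x = 1 */
    }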
SAS Programming Model
Naming: Any process can name any variable in shared space
Operations: loads and stores, plus those needed for ordering
Simplest Ordering Model:
• Within a process/thread: sequential program order
• Across threads: some interleaving (as in time-sharing)
• Additional orders through synchronization
• Again, compilers/hardware can violate orders without getting caught
– Different, more subtle ordering models also possible (discussed later)
Synchronization
Mutual exclusion (locks)
• Ensure certain operations on certain data can be performed by only
one process at a time
• Room that only one person can enter at a time
• No ordering guarantees
Event synchronization
• Ordering of events to preserve dependences
– e.g. producer -> consumer of data
• 3 main types:
– point-to-point
– global
– group
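A minimal sketch of mutual exclusion with POSIX threads (the shared counter is a hypothetical example); the lock is the "room" that only one thread can enter at a time:

    #include <pthread.h>

    long balance = 0;                                   /* shared data */
    pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    void *deposit(void *arg) {
        pthread_mutex_lock(&lock);     /* enter the room; others wait */
        balance += 100;                /* critical section on shared data */
        pthread_mutex_unlock(&lock);   /* leave; no ordering among entrants */
        return NULL;
    }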
Message Passing Programming Model
Naming: Processes can name private data directly.
• No shared address space
Operations: Explicit communication through send and receive
• Send transfers data from private address space to another process
• Receive copies data from process to private address space
• Must be able to name processes
Ordering:
• Program order within a process
• Send and receive can provide pt to pt synch between processes
• Mutual exclusion inherent
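A minimal MPI sketch of this model (MPI is one concrete realization; the payload is hypothetical): send names the destination process and a tag, receive copies into the private address space, and the matched pair also provides point-to-point synchronization.

    #include <mpi.h>

    int main(int argc, char **argv) {
        int rank, value = 42;                        /* hypothetical payload */
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        if (rank == 0)        /* send names the destination process and a tag */
            MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
        else if (rank == 1)   /* receive copies into the private address space */
            MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        MPI_Finalize();
        return 0;
    }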
Ordering
Message passing: no assumptions on orders across processes except
those imposed by send/receive pairs
Replication
Very important for reducing data transfer/communication
Again, depends on naming model
Uniprocessor: caches do it automatically
• Reduce communication with memory
Message Passing naming model at an interface
• A receive replicates, giving a new name; subsequently use new name
• Replication is explicit in software above that interface
Simple Example
Component performs an operation in 100ns
Simple bandwidth: 10 Mops
Internally pipelined with depth 10 => bandwidth 100 Mops
• Rate determined by slowest stage of pipeline, not overall latency
Delivered bandwidth on application depends on initiation frequency
Suppose application performs 100 M operations. What is cost?
• op count * op latency gives 10 sec (upper bound)
• op count / peak op rate gives 1 sec (lower bound)
– assumes full overlap of latency with useful work, so just issue cost
• if application can do 50 ns of useful work before depending on result of
op, cost to application is the other 50ns of latency
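Worked out with the numbers above:

    latency bound:    100 M ops × 100 ns/op = 10 s
    bandwidth bound:  100 M ops ÷ 100 Mops  = 1 s
    50 ns overlapped: 100 M ops × 50 ns exposed latency = 5 s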
Linear Model of Data Transfer Latency
Transfer time (n) = T0 + n/B
• useful for message passing, memory access, vector ops etc
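Two standard consequences of this linear model (same T0 and B as above):

    Effective bandwidth(n) = n / (T0 + n/B)
    Half-power point: at n = T0 × B, delivered bandwidth is exactly B/2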
Communication Cost Model
Comm Time per message = Overhead + Assist Occupancy + Network Delay + Size/Bandwidth + Contention
                      = ov + oc + l + n/B + Tc
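A purely hypothetical numeric instance, to show the units working out:

    ov = 1 µs, oc = 0.5 µs, l = 2 µs, n = 1,000 bytes, B = 100 MB/s, Tc = 0.5 µs
    Comm Time = 1 + 0.5 + 2 + 1,000/100 + 0.5 = 14 µs     (n/B = 10 µs)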
Summary of Design Issues
Functional and performance issues apply at all layers
Functional: Naming, operations and ordering
Performance: Organization, latency, bandwidth, overhead, occupancy
Replication and communication are deeply related
• Management depends on naming model
Recap
Parallel architecture is an important thread in the evolution of architecture
• At all levels
• Multiple processor level now in mainstream of computing
Exotic designs have contributed much, but given way to convergence
• Push of technology, cost and application performance
• Basic processor-memory architecture is the same
• Key architectural issue is in communication architecture
– How communication is integrated into memory and I/O system on node
Fundamental design issues
• Functional: naming, operations, ordering
• Performance: organization, replication, performance characteristics
Design decisions driven by workload-driven evaluation
• Integral part of the engineering focus
Outline for Rest of Class
Understanding parallel programs as workloads
– Much more variation, less consensus, and greater impact than in the sequential case
• What they look like in major programming models (Ch. 2)
• Programming for performance: interactions with architecture (Ch. 3)
• Methodologies for workload-driven architectural evaluation (Ch. 4)
Cache-coherent multiprocessors with centralized shared memory
• Basic logical design, tradeoffs, implications for software (Ch 5)
• Physical design, deeper logical design issues, case studies (Ch 6)
Scalable systems
• Design for scalability and realizing programming models (Ch 7)
• Hardware cache coherence with distributed memory (Ch 8)
• Hardware-software tradeoffs for scalable coherent SAS (Ch 9)
Outline (contd.)
Interconnection networks (Ch 10)
Latency tolerance (Ch 11)
Future directions (Ch 12)