
Computer Organization

T. Ahmed M. Dubais
Lecture 1

Introduction to Computer Systems

Lecture Duration: 2 Hours


Course description
• Course name
– Computer Organization
• Course level
– Level 2
– Equivalent to 3 CH (Credit Hours)
• Assessment
– Continuous Assessment (CA): 50%
• Homework and assignments: 10%
• Laboratory Reports: 15%
• Course Projects: 10%
• 1 MTA (Mid-Term Assessment): 15%
– Final Exam: 50%
Course description

• Reference Book
– The Essentials of Computer Organization and
Architecture, 5th edition, by Linda Null
and Julia Lobur, 2018
• Topics to be covered
– History of computers, Data Representation,
Intro. to a simple computer, Boolean algebra &
Digital logic, Instruction Set Architecture, Real-world
Architectures, and Memory
Chapter 1 Objectives

• Know the difference between computer
organization and computer architecture.
• Understand units of measure common to computer
systems.
• Appreciate the evolution of computers.
• Understand the computer as a layered system.
• Be able to explain the von Neumann architecture
and the function of basic computer components.
5
What is computer organization and architecture?
– In a computer system, hardware and many software
components are fundamentally related
– Computer organization and architecture help us to
understand how hardware and software interact with each
other

6
1.1 Overview

Why study computer organization?


– Design better programs, including system software
such as compilers, operating systems, and device
drivers.
– Optimize program behavior.
– Evaluate (benchmark) computer system performance.
– Understand time, space, and price tradeoffs.

7
1.1 Overview

• Computer organization
– Encompasses all physical aspects of computer systems.
• How are components connected together?
• How do components interact with/talk to each other?
– It addresses issues such as
• Control signals, signaling methods
• Memory types, …
– It helps us to answer the question: How does a
computer work?

8
1.1 Overview

• Computer architecture?
– It focuses on the structure and behavior of the computer
system
– It refers to the logical aspects of system implementation
as seen by the programmer.
– It includes many elements such as
• instruction sets and formats, data types, addressing
modes, number and type of registers, …
– It helps us to answer the question: How do I design a
computer?

9
1.2 Computer Components

• Principle of Equivalence of Hardware and Software


Anything that can be done with software can also be done with
hardware, and anything that can be done with hardware can
also be done with software*
• Should we implement an application at the hardware
or the software level?
– Our knowledge of computer organization and architecture
will help us make the best choice

* Assuming speed is not a concern.


10
1.2 Computer Components

• From Software to Hardware


– Computer scientists design algorithms
– They implement an algorithm using a high-level
programming language (Java, C, etc.)
– Another program runs this program, another one
runs that program, and so on
– We finally get down to the machine level
– The machine level can be thought of as an algorithm
implemented as an electronic device

11
1.2 Computer Components

• Main computer hardware components


– A processor to interpret and execute programs
– A memory to store both data and programs
– A mechanism for transferring data to and from the
outside world (Input/Output)

12
1.3 An Example System

Consider this advertisement:

What does it all mean??


13
1.3 An Example System
• A reminder:
– Computers do computations in base 2 (binary system)
– A binary character is called a bit (0 or 1)
– A byte is a set of 8 bits
– “Bits, bytes, KB, MB, …” are used to measure the storage
capacity of a device (RAM, HDD, Flash memory, etc.)
– We use the table below to convert from one unit to another
• Example:
– 1 KB = 2^10 Bytes = 1024 Bytes = 1024 x 8 bits = 8192 bits
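The conversion above can be sketched in a few lines of Python (the helper name and prefix table are ours, added for illustration; they are not part of the slides):

```python
# Storage-unit conversion using binary (power-of-two) prefixes, as in the
# slide's example: 1 KB = 2**10 bytes, and each byte is 8 bits.
BINARY_EXPONENTS = {"B": 0, "KB": 10, "MB": 20, "GB": 30, "TB": 40}

def to_bits(value, unit):
    """Return the number of bits in `value` units of storage."""
    return value * (2 ** BINARY_EXPONENTS[unit]) * 8

print(to_bits(1, "KB"))   # 8192, matching the example above
print(to_bits(64, "MB"))  # 536870912, the 64 MB SDRAM from the advertisement
```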

14
1.3 An Example System

• Hertz = clock cycles per second (frequency)


– 1MHz = 1,000,000Hz
– Processor speeds are measured in MHz or GHz.
• Byte = a unit of storage
– 1KB = 2^10 Bytes = 1024 Bytes
– 1MB = 2^20 Bytes = 1,048,576 Bytes
– Main memory (RAM) is measured in MB
– Disk storage is measured in GB for small systems, TB
for large systems.

15
1.3 An Example System

Measures of time and space:


• Milli- (m) = 1 thousandth = 10^-3
• Micro- (µ) = 1 millionth = 10^-6
• Nano- (n) = 1 billionth = 10^-9
• Pico- (p) = 1 trillionth = 10^-12
• Femto- (f) = 1 quadrillionth = 10^-15

16
1.3 An Example System

• Millisecond = 1 thousandth of a second


– Hard disk drive access times are often 10 to 20
milliseconds.
• Nanosecond = 1 billionth of a second
– Main memory access times are often 50 to 70
nanoseconds.
• Micron (micrometer) = 1 millionth of a meter
– Circuits on computer chips are measured in microns.

17
1.3 An Example System

• We note that cycle time is the reciprocal of clock
frequency.
• A bus operating at 133MHz has a cycle time of
7.52 nanoseconds:

1 / (133,000,000 cycles/second) ≈ 7.52 ns/cycle
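As a quick check, the reciprocal relationship can be computed directly (the function name is ours, for illustration):

```python
# Cycle time is the reciprocal of clock frequency, converted here to
# nanoseconds: t = 1/f seconds = 1e9/f ns.
def cycle_time_ns(frequency_hz):
    return 1e9 / frequency_hz

print(round(cycle_time_ns(133_000_000), 2))  # 7.52, the bus in the ad
```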

Now back to the advertisement ...

18
1.3 An Example System

The microprocessor is the “brain” of
the system. It executes program
instructions. This one is a Pentium III
(Intel) running at 667MHz.

A system bus moves data within the
computer. The faster the bus, the better.
This one runs at 133MHz.

19
1.3 An Example System

▪ Pentium III 667 MHz?


• Pentium III : The microprocessor type
• 667 MHz: microprocessor’s clock speed
- Each microprocessor has a synchronization clock (it sends electrical pulses)
- Here, the clock speed is 667 million electrical pulses per second
- The number of instructions per second that a processor can execute is
proportional to its clock speed (not equal to it).
▪ 133 MHz 64MB SDRAM?
- SDRAM: Main memory type, “Synchronous Dynamic Random Access
Memory”
- 133 MHz: Speed of the system bus (between the memory and the
microprocessor)
- 64 MB : Memory Capacity (64 x 2^20 x 8 bits = 536,870,912 bits)
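The “proportional, not equal” point above can be made concrete: instruction throughput scales with clock speed times the average instructions per cycle (IPC). The IPC value below is an assumed, illustrative number, not a measured Pentium III figure:

```python
# Instruction throughput is proportional to clock speed, scaled by the
# workload-dependent average instructions per cycle (IPC).
def million_instructions_per_second(clock_hz, ipc):
    return clock_hz * ipc / 1e6

# 667 MHz clock with an assumed IPC of 0.5 (hypothetical value):
print(million_instructions_per_second(667_000_000, 0.5))  # 333.5
```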

20
1.3 An Example System

• Computers with large main memory capacity can
run larger programs with greater speed than
computers having small memories.
• RAM is an acronym for random access memory.
Random access means that memory contents
can be accessed directly if you know their location.
• Cache is a type of temporary memory that can be
accessed faster than RAM.

21
1.3 An Example System

22
1.3 An Example System

▪ 32KB L1 cache, 256KB L2 cache?


- Two cache memories to speed up data transfer between the
main memory and the processor
- 32KB and 256KB are the capacities of the level 1 (L1) and level 2
(L2) caches respectively
▪ 30GB EIDE hard drive (7200 RPM)?
- 30GB: The capacity of the hard drive
- 7200 RPM: Speed of disk rotation is 7200 revolutions per minute
- EIDE: disk interface (connectivity with the rest of the
computer’s components). EIDE stands for Enhanced
Integrated Drive Electronics
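The rotation speed directly determines part of the drive’s access time. The standard back-of-the-envelope formula (not stated on the slide, but it follows from the RPM figure) assumes the desired sector is, on average, half a revolution away:

```python
# Average rotational latency for a disk: half a revolution, on average.
# latency = (60 / RPM) / 2 seconds, converted here to milliseconds.
def avg_rotational_latency_ms(rpm):
    seconds_per_revolution = 60 / rpm
    return seconds_per_revolution / 2 * 1000

print(round(avg_rotational_latency_ms(7200), 2))  # 4.17
```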

23
1.3 An Example System

24
1.3 An Example System

▪ 48X max variable CD-ROM?


- CD-ROM drive
- 48X: the maximum reading data rate the CD drive can achieve
(48 times the traditional audio CD data transfer rate)
▪ 2 USB ports, 1 serial port, 1 parallel port?
- Ports that allow movement of data to and from devices external to the
computer.
- Serial ports send data as a series of pulses along one or two data lines.
- Parallel ports send data as a single pulse along at least eight data lines.
- USB, universal serial bus, is an intelligent serial interface that is self-
configuring. (It supports “plug and play.”)
▪ 19" monitor, .24mm AG, 1280 x 1024 at 85Hz?
- The monitor size (19”), resolution (1280 x 1024), refresh rate (85Hz)
and pixel size (0.24mm).

25
1.3 An Example System

26
1.3 An Example System

▪ Intel 3D AGP graphics card?


– Graphic interface for 3D graphics
– AGP stands for “Accelerated Graphics Port”
▪ 56K PCI voice modem? 64-bit PCI sound card?
– PCI: Dedicated I/O buses; PCI stands for “Peripheral
Component Interconnect”
– PCI voice modem: for internet connection
– PCI sound card : for the system’s stereo speakers

27
1.3 An Example System

28
1.4 Standards Organizations

• Standards organizations?
– A number of government and industry organizations
– Some standards-setting organizations are consortia
made up of industry leaders
– Aims
• Establish common guidelines for a particular type of
equipment
• Why? To ensure “worldwide” interoperability
(compatibility)

29
1.4 Standards Organizations

• There are many organizations that set
computer hardware standards, including
the interoperability of computer components.
• Throughout this book, and in your career,
you will encounter many of them.
• Some of the most important standards-
setting groups are . . .

30
1.4 Standards Organizations

• Some international standards organizations


– IEEE
• Institute of Electrical and Electronic Engineers
• sets standards for various computer components, signaling protocols, and data
representation
– ITU
• International Telecommunications Union
• Sets standards for telecommunications systems, including telephone, telegraph, and
data communication systems
– ISO
• International Organization for Standardization
• coordinates worldwide standards development
• Establishes worldwide standards for everything from screw threads to photographic
film.
• Is influential in formulating standards for computer hardware and software,
including their methods of manufacture.

31
1.4 Standards Organizations

• Other national standards organizations


– ANSI: American National Standards Institute
– CEN: Comité Européen de Normalisation
(European committee for standardization)
– BSI: British Standards Institution

32
1.5 Historical Development

• To fully appreciate the computers of today, it is
helpful to understand how things got the way they
are.
• The evolution of computing machinery has taken
place over several centuries.
• In modern times computer evolution is usually
classified into four generations according to the
salient technology of the era.

We note that many of the following dates are approximate.

33
1.5 Historical Development

▪ Generation Zero: Mechanical Calculating Machines (1642 – 1945)
• Used mechanical technology to do calculations
• Suggested the use of the binary number system rather than the decimal
number system
• First Generation: Vacuum tube computers
(1945–1953)
– Used electrical/electronic technology (much
faster than mechanical technology)
– Binary machines built from vacuum tubes
– Vacuum tube diodes and triodes invented
– Disadvantages: bulky systems, power
consumption and heat dissipation.

34
1.5 Historical Development

• The First Generation: Vacuum Tube Computers (1945 - 1953)

The first mass-produced computer.


The first general-purpose computer.

35
1.5 Historical Development

• The Second Generation: Transistorized Computers (1954–1965)
– Transistors revolutionize computers
• Transistors consume less power than vacuum tubes, are smaller,
and work more reliably,
• the circuitry in computers became smaller and more reliable.

▪ The Third Generation: Integrated Circuit Computers (1965–1980)
• Integrating multiple transistors in a single
silicon/germanium chip
• Explosion in computer use
• Computers became faster, smaller, and cheaper, bringing
huge gains in processing power

36
1.5 Historical Development

DEC PDP-1 (2nd Generation)
IBM 360 and Cray-1 (3rd Generation)

37
Transistors
• Replaced vacuum tubes
• Smaller
• Cheaper
• Less heat dissipation
• Solid State device
• Made from Silicon (Sand)
• Invented 1947 at Bell Labs
• Shockley, Brattain, Bardeen
38
Integrated Circuits

• A self-contained transistor is a discrete
component
– Big, manufactured separately, expensive, hot
when you have thousands of them
• Integrated Circuits
– Transistors “etched” into a substrate, bundled
together instead of discrete components
– Allowed thousands of transistors to be
packaged together efficiently

39
Chip Production
• Ingot of purified silicon – 1 meter
long, sliced into thin wafers
• Chips are etched – much like
photography
– UV light through multiple masks
– Circuits laid down through mask
• Process takes about 3 months

View of
Cross-Section
40
1.5 Historical Development

• The Fourth Generation: VLSI Computers (>1980)


– More integration, more transistors on a single silicon chip (see the table below)
– Computers became smaller: Appearance of micro-computers
– Increased the processing power of all computer types (including supercomputers
and mainframe computers)
– The first was the 4-bit Intel 4004
– Later versions, such as the 8080, 8086, and 8088
spawned the idea of “personal computing.”
Intel 4004

Scale of Integration                  Number of components per chip
SSI: Small Scale Integration          10 – 100
MSI: Medium Scale Integration         100 – 1,000
LSI: Large Scale Integration          1,000 – 10,000
VLSI: Very Large Scale Integration    > 10,000
41
1.5 Historical Development

Size comparison

Vacuum Tube

Transistor
Integrated
circuit chip

Integrated
circuit package
42
1.5 Historical Development

• Moore’s Law (1965)


– Gordon Moore, Intel co-founder
– “The density of transistors in an integrated circuit will double
every year.”
– Higher packing density means shorter electrical paths, giving higher
performance
– Smaller size gives increased flexibility
– Reduced power and cooling requirements
– Fewer interconnections increases reliability
• Contemporary version:
– “The density of silicon chips doubles every 18 months.”
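The contemporary statement can be turned into a small projection sketch. The 2,300-transistor starting point is the commonly cited count for the Intel 4004; treat both it and the function below as illustrative:

```python
# Project transistor count under a fixed doubling period (Moore's Law,
# contemporary version: doubling every 18 months).
def projected_transistors(initial_count, years, doubling_months=18):
    doublings = years * 12 / doubling_months
    return initial_count * 2 ** doublings

# Starting from ~2,300 transistors (Intel 4004), three years gives
# two doublings:
print(projected_transistors(2300, 3))  # 9200.0
```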

43
Moore’s Law

44
The Shrinking Chip
• Human Hair: 100 microns wide
– 1 micron is 1 millionth of a meter
• Bacterium: 5 microns
• Virus: 0.8 microns
• Early microprocessors: 10-15 micron technology
• 1997: 0.35 micron
• 1998: 0.25 micron
• 1999: 0.18 micron
• 2001: 0.13 micron
• 2003: 0.09 micron
• Physical limits are believed to be around 0.02 microns
45
Size

46
1.5 Historical Development

But this “law” cannot hold forever ...

• Rock’s Law
– Arthur Rock, Intel financier
– “The cost of capital equipment to build
semiconductors will double every four years.”
– In 1968, a new chip plant cost about $12,000.
At the time, $12,000 would buy a nice home in
the suburbs.
An executive earning $12,000 per year was
“making a very comfortable living.”
47
1.5 Historical Development

• Rock’s Law
– In 2003, a chip plant under construction
cost over $2.5 billion.
$2.5 billion is more than the gross domestic
product of some small countries, including
Belize, Bhutan, and the Republic of Sierra
Leone.

– For Moore’s Law to hold, Rock’s Law must fall,


or vice versa. But no one can say which will
give out first.

48
1.6 The Computer Level Hierarchy

– The user executes programs on a PC (Paint, word files, games, etc.)


– The user is outside the computer! They use input and output devices to
communicate with the computer.
– Now, what happens INSIDE the computer?
– To understand, we will use a “divide and conquer” approach .

49
1.6 The Computer Level Hierarchy

• Imagine the machine (computer) as a hierarchy of
levels, in which each level has a specific function.
• The highest level – Level 6 – is the “user’s level”
– Level 6 is composed of applications
– User runs programs such as word processors, graphics
packages, or games.
• The lower levels are unseen by the user; they can be
considered “virtual machines”.
• Let us discover these “virtual machines”.

50
1.6 The Computer Level Hierarchy

51
1.6 The Computer Level Hierarchy

• Level 5: High-Level Language Level


– Consists of languages such as C, C++, FORTRAN, Lisp,
Pascal, and Prolog.
– Programmers write programs at this level.
– Compilers translate these languages into a language the
machine can understand (that lower levels can understand):
assembly, then machine language.
• Level 4: Assembly Language Level
– A more “machine dependent” language.
– Assembly language is then translated one-to-one into machine
language (one assembly language instruction is translated to
exactly one machine language instruction).
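The one-to-one property can be sketched as a toy assembler. The mnemonics and the encoding (4-bit opcode, 4-bit address) are hypothetical, made up for illustration, not any real ISA:

```python
# One-to-one translation: each assembly mnemonic maps to exactly one
# machine opcode, so one source line yields one machine word.
OPCODES = {"HALT": 0x0, "LOAD": 0x1, "ADD": 0x2, "STORE": 0x3}

def assemble(line):
    """Translate one assembly instruction into one 8-bit machine word."""
    mnemonic, operand = line.split()
    return (OPCODES[mnemonic] << 4) | int(operand)

print(assemble("ADD 6"))  # 38: one instruction in, one machine word out
```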

52
1.6 The Computer Level Hierarchy

• Level 3: System Software Level


– Deals with operating system instructions (multiprogramming,
protecting memory, synchronizing processes, and various
other important functions)
– Instructions translated from assembly language to machine
language are passed through this level unmodified
• Level 2: Instruction Set Architecture (ISA), or Machine
Level
– Machine language recognized by the particular architecture of
the computer system
– Programs written in machine language can be executed
directly by the electronic circuits without any interpreters,
translators, or compilers.

53
1.6 The Computer Level Hierarchy

• Level 1: The Control Level


– Is where the control unit does its job
• Receives machine instructions from the level above
• decodes and executes those instructions properly
• Moves data to where and when it should be
– The control unit interprets the machine instructions
• Level 0: The Digital Logic Level
– is where we find the physical components of the
computer system: the gates and wires

54
1.7 The von Neumann Model

• A computer architecture model published by a famous
Hungarian mathematician named John von Neumann
• The idea is to store programs’ instructions inside a main
memory in order to avoid rewiring the system each time
it had a new problem to solve, or an old one to debug.
• All stored-program computers have come to be known as
von Neumann systems using the von Neumann
architecture

55
1.7 The von Neumann Model

• On the ENIAC,
all programming
was done at the
digital logic
level.
• Programming
the computer
involved moving
plugs and wires.

56
1.7 The von Neumann Model

• The von Neumann architecture is shown in Figure 1.4
• It satisfies at least the following characteristics
– Consists of three hardware systems (see figure 1.4)
• A central processing unit (CPU) with a control unit, an
arithmetic logic unit (ALU), registers (small storage areas), and
a program counter;
• a main-memory system, which holds programs that control the
computer’s operation;
• and an I/O system.
– Capacity to carry out sequential instruction processing
– Contains a single path between the main memory
system and the control unit of the CPU

57
58
• Program instructions are stored inside the main
memory
• The machine runs programs sequentially
(instruction by instruction, i.e., machine instruction by machine instruction)
• Each machine instruction is fetched, decoded and
executed during one cycle known as the von
Neumann execution cycle (also called the fetch-
decode-execute cycle)

59
• One iteration of the cycle is as follows:
1. The control unit fetches the next program instruction
from the memory, using the program counter to
determine where the instruction is located.
2. The instruction is decoded into a language the ALU
can understand.
3. Any data operands required to execute the instruction
are fetched from memory and placed into registers
within the CPU.
4. The ALU executes the instruction and places the
results in registers or memory.
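The four steps above can be sketched as a toy loop over a made-up three-instruction accumulator machine (no real ISA is implied). Note that, per the von Neumann model, instructions and data share one memory:

```python
# Toy fetch-decode-execute loop for a hypothetical accumulator machine.
# Instructions and data live in the same memory (von Neumann model).
memory = [
    ("LOAD", 5),   # acc <- memory[5]
    ("ADD", 6),    # acc <- acc + memory[6]
    ("HALT", 0),
    0, 0,          # unused padding
    40, 2,         # data operands at addresses 5 and 6
]

pc, acc = 0, 0
while True:
    opcode, operand = memory[pc]   # 1. fetch: the PC locates the instruction
    pc += 1
    if opcode == "LOAD":           # 2. decode the opcode, then ...
        acc = memory[operand]      # 3. fetch the operand into a "register"
    elif opcode == "ADD":
        acc += memory[operand]     # 4. execute; the result stays in a register
    elif opcode == "HALT":
        break

print(acc)  # 42
```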

60
1. Fetch
• The PC (program counter) indicates the next instruction
• The CU (control unit) fills the instruction register
2. Decode
• What should the ALU do (add, multiply, …)?
• Fill registers with the needed data

61
3. Execute
• Execute the instruction
• Place the results in registers or memory

62
1.8 Non-von Neumann Models

• Conventional stored-program computers have
undergone many incremental improvements
over the years.
• These improvements include adding
specialized buses, floating-point units, and
cache memories, to name only a few.
• But enormous improvements in computational
power require departure from the classic von
Neumann architecture.
• Adding processors is one approach.

63
1.8 Non-von Neumann Models

• In the late 1960s, high-performance computer
systems were equipped with dual processors
to increase computational throughput.
• In the 1970s supercomputer systems were
introduced with 32 processors.
• Supercomputers with 1,000 processors were
built in the 1980s.
• In 1999, IBM announced its Blue Gene
system containing over 1 million processors.

64
1.8 Non-von Neumann Models

• Parallel processing is only one method of
providing increased computational power.
• More radical systems have reinvented the
fundamental concepts of computation.
• These advanced systems include genetic
computers, quantum computers, and dataflow
systems.
• At this point, it is unclear whether any of these
systems will provide the basis for the next
generation of computers.
65
Thank You

66
