Lecture 1: EVOLUTION OF COMPUTER SYSTEM
Introduction
• Computers have become part and parcel of our daily lives.
– They are everywhere (embedded systems?)
– Laptops, tablets, mobile phones, intelligent appliances.
• Computer Organization:
  – Design of the components and functional blocks using which computer systems are built.
  – Analogy: civil engineer's task during building construction (cement, bricks, iron rods, and other building materials).
• Computer Architecture:
  – How to integrate the components to build a computer system to achieve a desired level of performance.
  – Analogy: architect's task during the planning of a building (overall layout, floorplan, etc.).

Historical Perspective
• Constant quest of building automatic computing machines has driven the development of computers.
  – Initial efforts: mechanical devices like pulleys, levers and gears.
  – During World War II: mechanical relays were used to carry out computations.
  – Vacuum tubes developed: first electronic computer called ENIAC.
  – Semiconductor transistors developed and the journey of miniaturization began.
  • SSI → MSI → LSI → VLSI → ULSI → … Billions of transistors per chip.
PASCALINE (1642)
• Mechanical calculator invented by B. Pascal.
• Could add and subtract two numbers directly, and multiply and divide by repetition.

Babbage Engine
• First automatic computing engine was designed by Charles Babbage in the 19th century, but he could not build it.
• The first complete Babbage engine was built in 2002, 153 years after it was designed.
• 8000 parts.
• Weighed 5 tons.
• 11 feet in length.
ENIAC (Electronic Numerical Integrator and Calculator)
• Developed at the University of Pennsylvania.
• Used 18,000 vacuum tubes, weighed 30 tons, and occupied a 30 ft x 50 ft space.

Harvard Mark 1
• Built at Harvard University in 1944, with support from IBM.
• Used mechanical relays (switches) to represent data.
• It weighed 35 tons, and required 500 miles of wiring.
Evolution of the Types of Computer Systems

• First (1945-54): Vacuum tubes, relays. Machine & assembly language. ENIAC, IBM-701.
• Second (1955-64): Transistors, memories, I/O processors. Batch processing systems, HLL. IBM-7090.
• Third (1965-74): SSI and MSI integrated circuits, microprogramming. Multiprogramming / time sharing. IBM 360, Intel 8008.
• Fourth (1975-84): LSI and VLSI integrated circuits. Multiprocessors. Intel 8086, 8088.
• Fifth (1984-90): VLSI, multiprocessor on-chip. Parallel computing. Intel 486.
• Sixth (1990 onwards): ULSI, scalable architecture, post-CMOS technologies. Massively parallel processors. Pentium, SUN Ultra workstations.

The future?
• Large-scale IoT based systems.
• Wearable computing.
• Intelligent objects.
Moore’s Law
• Refers to an observation made by Intel co-founder Gordon Moore in 1965. He noticed that the number of transistors per square inch on integrated circuits had doubled every year since their invention.
• Moore's law predicts that this trend will continue into the foreseeable future.
• Although the pace has slowed, the number of transistors per square inch has since doubled approximately every 18 months. This is used as the current definition of Moore's law (see the sketch below).
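The 18-month doubling rate can be turned into a quick back-of-the-envelope projection. The C sketch below only illustrates that rule of thumb: the starting transistor count (1 million) and the 15-year horizon are assumed example values, not figures from this lecture.

    #include <math.h>
    #include <stdio.h>

    /* Rule-of-thumb projection: transistor count doubles every 18 months.
       Starting count and time horizon are illustrative assumptions. */
    int main(void) {
        double count0 = 1e6;              /* assumed starting transistor count */
        double months = 15 * 12;          /* project 15 years ahead            */
        double projected = count0 * pow(2.0, months / 18.0);
        printf("Projected count after %.0f months: %.3g\n", months, projected);
        return 0;
    }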
• Inside the Memory Unit
  – Two main types of memory subsystems.
    • Primary or Main memory, which stores the active instructions and data for the program being executed on the processor.
    • Secondary memory, which is used as a backup and stores all active and inactive programs and data, typically as files.
  – The processor only has direct access to the primary memory.
  – In reality, the memory system is implemented as a hierarchy of several levels.
    • L1 cache, L2 cache, L3 cache, primary memory, secondary memory.
    • Objective is to provide faster memory access at affordable cost.
  – Various different types of memory are possible.
    a) Random Access Memory (RAM), which is used for the cache and primary memory sub-systems. Read and Write access times are independent of the location being accessed.
    b) Read Only Memory (ROM), which is used as part of the primary memory to store some fixed data that cannot be changed.
    c) Magnetic Disk, which uses the direction of magnetization of tiny magnetic particles on a metallic surface to store data. Access times vary depending on the location being accessed, and it is used as secondary memory.
    d) Flash Memory, which is replacing magnetic disks as secondary memory devices. They are faster, but smaller in size as compared to disk.
Input Unit
• Used to feed data to the computer system from the external
environment.
– Data are transferred to the processor/memory after appropriate
encoding.
• Common input devices:
– Keyboard
– Mouse
– Joystick
– Camera
Output Unit
• Used to send the result of some computation to the outside
world.
• Common output devices:
– LCD/LED screen
– Printer and Plotter
– Speaker / Buzzer
– Projection system
END OF LECTURE 1
Lecture 2: BASIC OPERATION OF A COMPUTER
DR. KAMALIKA DATTA
DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING, NIT MEGHALAYA

Introduction
• The basic mechanism through which an instruction gets executed shall be illustrated.
• May be recalled:
  – ALU contains a set of registers, some general-purpose and some special-purpose.
  – First we briefly explain the functions of the special-purpose registers before we look into some examples.
Execution of ADD R1,LOCA
• Assume that the instruction is stored in memory location 1000, the initial value of R1 is 50, and LOCA is 5000.
• Before the instruction is executed, PC contains 1000.
• Content of PC is transferred to MAR.                    MAR ← PC
• READ request is issued to memory unit.
• The instruction is fetched to MDR.                      MDR ← Mem[MAR]
• Content of MDR is transferred to IR.                    IR ← MDR
• PC is incremented to point to the next instruction.     PC ← PC + 4
• The instruction is decoded by the control unit.
[Figure: the fetched instruction word — ADD R1, 5000.]
• LOCA (i.e. 5000) is transferred (from IR) to MAR.       MAR ← IR[Operand]
• READ request is issued to memory unit.
• The data is fetched to MDR.                             MDR ← Mem[MAR]
• The content of MDR is added to R1.                      R1 ← R1 + MDR

The steps being carried out are called micro-operations:
  MAR ← PC
  MDR ← Mem[MAR]
  IR ← MDR
  PC ← PC + 4
  MAR ← IR[Operand]
  MDR ← Mem[MAR]
  R1 ← R1 + MDR
Example trace of ADD R1,LOCA:

  R1: 50 → 125

  Address   Content
  1000      ADD R1, LOCA
  1004      …
  5000      75             (LOCA)

  1. PC = 1000
  2. MAR = 1000
  3. PC = PC + 4 = 1004
  4. MDR = ADD R1, LOCA
  5. IR = ADD R1, LOCA
  6. MAR = LOCA = 5000
  7. MDR = 75
  8. R1 = R1 + MDR = 50 + 75 = 125

Execution of ADD R1,R2
• Assume that the instruction is stored in memory location 1500, the initial value of R1 is 50, and R2 is 200.
• Before the instruction is executed, PC contains 1500.
• Content of PC is transferred to MAR.                    MAR ← PC
• READ request is issued to memory unit.
• The instruction is fetched to MDR.                      MDR ← Mem[MAR]
• Content of MDR is transferred to IR.                    IR ← MDR
• PC is incremented to point to the next instruction.     PC ← PC + 4
• The instruction is decoded by the control unit.
[Figure: the fetched instruction word — ADD R1, R2.]
• R2 is added to R1.                                      R1 ← R1 + R2
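The ADD R1,LOCA sequence traced above can be reproduced with a few lines of C. This is only a minimal sketch under simplifying assumptions: memory is modelled as a plain word-addressable array, and the instruction word at address 1000 is represented just by its operand (5000) rather than by a real instruction encoding.

    #include <stdio.h>

    #define MEM_SIZE 8192

    int main(void) {
        static int mem[MEM_SIZE];

        int LOCA = 5000;
        mem[1000] = LOCA;   /* stands in for the instruction word "ADD R1, LOCA" */
        mem[LOCA] = 75;     /* data stored at LOCA, as in the trace              */

        int PC = 1000, R1 = 50;
        int MAR, MDR, IR;

        /* Fetch phase */
        MAR = PC;           /* MAR <- PC       */
        MDR = mem[MAR];     /* MDR <- Mem[MAR] */
        IR  = MDR;          /* IR  <- MDR      */
        PC  = PC + 4;       /* PC  <- PC + 4   */

        /* Execute phase (instruction decoded as ADD R1, LOCA) */
        MAR = IR;           /* MAR <- IR[Operand]; here the whole word is the operand */
        MDR = mem[MAR];     /* MDR <- Mem[MAR] */
        R1  = R1 + MDR;     /* R1  <- R1 + MDR */

        printf("PC = %d, R1 = %d\n", PC, R1);   /* prints PC = 1004, R1 = 125 */
        return 0;
    }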
Example trace of ADD R1,R2:

  R1: 50 → 250
  R2: 200

  Address   Instruction
  1500      ADD R1, R2
  1504      …

  1. PC = 1500
  2. MAR = 1500
  3. PC = PC + 4 = 1504
  4. MDR = ADD R1, R2
  5. IR = ADD R1, R2
  6. R1 = R1 + R2 = 250

Bus Architecture
• The different functional modules must be connected in an organized manner to form an operational system.
• Bus refers to a group of lines that serves as a connecting path for several devices.
• The simplest way to connect the functional units is to use the single bus architecture.
  – Only one data transfer allowed in one clock cycle.
  – For multi-bus architecture, parallelism in data transfer is allowed (see the sketch below).
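A back-of-the-envelope sketch of the single-bus limitation versus multiple buses: with one bus at most one transfer completes per clock cycle, while k independent buses allow up to k transfers per cycle. The workload size and bus counts below are assumed example values, not figures from the lecture.

    #include <stdio.h>

    /* Clock cycles needed for `transfers` independent data transfers when at
       most `buses` transfers can proceed per cycle (illustrative model only). */
    static int cycles_needed(int transfers, int buses) {
        return (transfers + buses - 1) / buses;   /* ceiling division */
    }

    int main(void) {
        int transfers = 6;                          /* assumed workload */
        printf("single bus : %d cycles\n", cycles_needed(transfers, 1));
        printf("three buses: %d cycles\n", cycles_needed(transfers, 3));
        return 0;
    }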
[Figure: single-bus architecture — Input Device, Output Device, Memory, Processor and I/O Processor connected to a common bus.]
Multi-Bus Architectures
• Modern processors have multiple buses that connect the registers and other functional units.
  – Allows multiple data transfer micro-operations to be executed in the same clock cycle.
  – Results in overall faster instruction execution.
• Also advantageous to have multiple shorter buses rather than a single long bus.
  – Smaller parasitic capacitance, and hence smaller delay.

END OF LECTURE 2
Lecture 3: MEMORY ADDRESSING AND LANGUAGES
DR. KAMALIKA DATTA
DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING, NIT MEGHALAYA

• … which is the basic unit of data storage).
• A memory system with M locations and N bits per location is referred to as an M x N memory.
  – Both M and N are typically some powers of 2.
  – Example: 1024 x 8, 65536 x 32, etc. (see the sketch below).
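A minimal sketch of the 1024 x 8 example: M = 1024 locations of N = 8 bits each, so each location can be modelled as one byte and selecting a location needs log2(1024) = 10 address bits. The array model below is an assumption made only for illustration, not a statement about any particular hardware.

    #include <stdint.h>
    #include <stdio.h>

    #define M 1024                    /* number of locations (a power of 2)  */
    /* N = 8 bits per location, so each location is modelled as a uint8_t.   */

    int main(void) {
        static uint8_t memory[M];     /* a 1024 x 8 memory as a byte array   */

        int addr_bits = 0;            /* log2(M): address bits needed        */
        for (unsigned m = M; m > 1; m >>= 1)
            addr_bits++;

        memory[513] = 0xAB;           /* write one location (address chosen arbitrarily) */
        printf("1024 x 8 memory: %d address bits, %zu bytes total, memory[513] = 0x%02X\n",
               addr_bits, sizeof memory, memory[513]);
        return 0;
    }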
• The two conventions have been named as:
  a) Little Endian
    • The least significant byte is stored at the lower address, followed by the most significant byte. Examples: Intel processors, DEC Alpha, etc.
    • Same concept followed for arbitrary multi-byte data.
  b) Big Endian
    • The most significant byte is stored at the lower address, followed by the least significant byte. Examples: IBM's 370 mainframes, Motorola microprocessors, TCP/IP, etc.
    • Same concept followed for arbitrary multi-byte data.

An Example
• Represent the following 32-bit number in both Little-Endian and Big-Endian in memory from address 2000 onwards:
  01010101 00110011 00001111 11000011

  Little Endian              Big Endian
  Address   Data             Address   Data
  2000      11000011         2000      01010101
  2001      00001111         2001      00110011
  2002      00110011         2002      00001111
  2003      01010101         2003      11000011
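The same layout can be checked with a short C sketch. The 32-bit pattern 01010101 00110011 00001111 11000011 is 0x55330FC3; writing its bytes out in the two orders reproduces the table above. The buffer, helper function and base address used for printing are assumptions made only for illustration.

    #include <stdint.h>
    #include <stdio.h>

    /* Store a 32-bit value into buf[0..3] in little- or big-endian order.
       buf[0] plays the role of address 2000 in the table above. */
    static void store32(uint8_t *buf, uint32_t value, int little_endian) {
        for (int i = 0; i < 4; i++) {
            int shift = little_endian ? 8 * i : 8 * (3 - i);
            buf[i] = (uint8_t)(value >> shift);
        }
    }

    int main(void) {
        uint32_t value = 0x55330FC3;   /* 01010101 00110011 00001111 11000011 */
        uint8_t le[4], be[4];

        store32(le, value, 1);
        store32(be, value, 0);

        for (int i = 0; i < 4; i++)
            printf("address %d: little endian %02X   big endian %02X\n",
                   2000 + i, le[i], be[i]);
        return 0;
    }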
[Figure: translation Alternative 2 — a compiler producing assembly language.]
• Example 1: An 8085 cross-assembler running on a desktop PC generates 8085 machine code.
END OF LECTURE 3
Lecture 4: SOFTWARE AND ARCHITECTURE TYPES
DR. KAMALIKA DATTA
DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING, NIT MEGHALAYA
• Some very commonly used system software:
  – Operating system (WINDOWS, LINUX, MAC/OS, ANDROID, etc.)
    • Instance of a program that never terminates.
    • The program continues running until either the machine is switched off or the user manually shuts down the machine.
  – Compilers and assemblers
  – Linkers and loaders
  – Editors and debuggers

Operating System
• Provides an interface between computer hardware and users.
• Two layers:
  a) Kernel: contains low-level routines for resource management.
  b) Shell: provides an interface for the users to interact with the computer hardware through the kernel.
[Figure: layered view — Users & Application Software, Shell, Kernel, Computer Hardware.]
• The OS is a collection of routines that is used to control sharing of various computer resources as they execute application programs.
  – Typical resources: Processor, Memory, Files, I/O devices, etc.
• These tasks include:
  – Assigning memory and disk space to program and data files.
  – Moving data between I/O devices, memory and disk units.
  – Handling I/O operations, with parallel operations where possible.
  – Handling multiple user programs that are running at the same time.

• Depending on the intended use of the computer system, the goal of the OS may differ.
  – Classical multi-programming systems
    • Several user programs loaded in memory.
    • Switch to another program when one program gets blocked due to I/O.
    • Objective is to maximize resource utilization.
  – Modern time-sharing systems
    • Widely used because every user can now afford to have a separate terminal.
    • Processor time shared among a number of interactive users.
    • Objective is to reduce the user response time.
  – Real-time systems
    • Several applications are running with specific deadlines.
    • Deadlines can be either hard or soft.
    • Interrupt-driven operation – the processor is interrupted when a task arrives.
    • Examples: missile control system, industrial manufacturing plant, patient health monitoring and control system, automotive control system, etc.
  – Mobile (phone) systems
    • Here user responsiveness is the most important.
    • Sometimes a program that makes the system slow or hogs too much memory may be forcibly stopped.

Classification of Computer Architecture
• Broadly can be classified into two types:
  a) Von-Neumann architecture
  b) Harvard architecture
• How is a computer different from a calculator?
  – They have similar circuitry inside (e.g. for doing arithmetic).
  – In a calculator, the user has to interactively give the sequence of commands.
END OF LECTURE 4
Lecture 5: INSTRUCTION SET ARCHITECTURE
DR. KAMALIKA DATTA
DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING, NIT MEGHALAYA
Registers
END OF LECTURE 5