Computer Organization: Virtual Memory
Design goals
The exact form of a computer system depends on its constraints and goals.
Computer architectures usually trade off standards, power versus performance,
cost, memory capacity, latency (the time it takes for information to travel
from one node to another) and throughput. Sometimes other considerations, such
as features, size, weight, reliability, and expandability are also factors.
The most common scheme performs an in-depth power analysis and determines how
to keep power consumption low while maintaining adequate performance.
Performance
Modern computer performance is often described in instructions per cycle (IPC),
which measures the efficiency of the architecture at any clock frequency; a higher
IPC means the computer is faster. Older computers had IPC counts as low as
0.1 while modern processors easily reach near 1. Superscalar processors may reach
three to five IPC by executing several instructions per clock cycle.
Counting machine-language instructions would be misleading because they can do
varying amounts of work in different ISAs. The "instruction" in the standard
measurements is not a count of the ISA's machine-language instructions, but a unit
of measurement, usually based on the speed of the VAX computer architecture.
Many people used to measure a computer's speed by the clock rate (usually in
MHz or GHz). This refers to the cycles per second of the main clock of the CPU.
However, this metric is somewhat misleading, as a machine with a higher clock
rate may not necessarily have greater performance. As a result, manufacturers have
moved away from clock speed as a measure of performance.
Other factors influence speed, such as the mix of functional units, bus speeds,
available memory, and the type and order of instructions in the programs.
There are two main types of speed: latency and throughput. Latency is the time
between the start of a process and its completion. Throughput is the amount of
work done per unit time. Interrupt latency is the guaranteed maximum response
time of the system to an electronic event (like when the disk drive finishes moving
some data).
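The latency/throughput distinction can be illustrated with a small sketch. The stage count and stage time below are made-up numbers for illustration, not figures from the text:

```python
# Hypothetical 4-stage pipeline, each stage taking 1 ns (assumed numbers).
STAGES = 4
STAGE_TIME_NS = 1.0

def unpipelined_time(n_items: int) -> float:
    """Each item passes through all stages before the next one starts."""
    return n_items * STAGES * STAGE_TIME_NS

def pipelined_time(n_items: int) -> float:
    """After the pipeline fills, one item completes every stage time."""
    return (STAGES + (n_items - 1)) * STAGE_TIME_NS

# The latency of a single item is unchanged (4 ns either way),
# but throughput over 1000 items improves by almost 4x:
print(unpipelined_time(1000))  # 4000.0 ns total
print(pipelined_time(1000))    # 1003.0 ns total
```

This mirrors the point made below about pipelining: per-item latency does not improve (and in real designs often worsens slightly), while total work per unit time goes up.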
Performance is affected by a very wide range of design choices — for
example, pipelining a processor usually makes latency worse, but makes
throughput better. Computers that control machinery usually need low interrupt
latencies. These computers operate in a real-time environment and fail if an
operation is not completed in a specified amount of time. For example, computer-
controlled anti-lock brakes must begin braking within a predictable and limited
time period after the brake pedal is sensed or else failure of the brake will occur.
Benchmarking takes all these factors into account by measuring the time a
computer takes to run through a series of test programs. Although benchmarking
shows strengths, it should not be the only basis for choosing a computer. Often
the measured machines split on different measures. For example, one system might handle
scientific applications quickly, while another might render video games more
smoothly. Furthermore, designers may target and add special features to their
products, through hardware or software, that permit a specific benchmark to
execute quickly but don't offer similar advantages to general tasks.
Power efficiency
Power efficiency is another important measurement in modern computers. A
higher power efficiency can often be traded for lower speed or higher cost. The
typical measurement when referring to power consumption in computer
architecture is MIPS/W (millions of instructions per second per watt).
Modern circuits require less power per transistor as the number of transistors
per chip grows.[19] Even so, each transistor that is put in a new chip requires
its own power supply and requires new pathways to be built to power it, and
the number of transistors per chip is starting to increase at a slower rate. Therefore,
power efficiency is starting to become as important as, if not more important than,
fitting more and more transistors into a single chip. Recent processor designs have
shown this emphasis, as they put more focus on power efficiency rather than
cramming as many transistors into a single chip as possible.[20] In the world of
embedded computers, power efficiency has long been an important goal next to
throughput and latency.
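The MIPS/W metric defined above is a simple ratio. As a sketch, with made-up figures for a hypothetical chip:

```python
def mips_per_watt(instructions_per_sec: float, watts: float) -> float:
    """Millions of instructions per second, divided by power draw in watts."""
    return (instructions_per_sec / 1e6) / watts

# Hypothetical chip: 2 billion instructions/s at 10 W -> 200 MIPS/W.
print(mips_per_watt(2e9, 10))  # 200.0
```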
Instruction Representation
Within the computer, each instruction is represented by a sequence of bits. The instruction is divided
into fields, corresponding to the constituent elements of the instruction. During instruction execution,
an instruction is read into an instruction register (IR) in the CPU. The CPU must be able to extract the
instruction fields to perform the required operation. It is difficult for both the programmer and the
reader of textbooks to deal with binary representations of machine instructions. Thus, it has become common
to use a symbolic representation of machine instructions. Opcodes are represented by abbreviations
that indicate the operation. Common examples include:
ADD Add
SUB Subtract
MPY Multiply
DIV Divide
For instance, the instruction
ADD R, Y
may mean add the value contained in data location Y to the contents of register R.
Note that the operation is performed on the contents of a location, not on its address.
The machine instruction cycle describes the order that instructions are
processed in a computer.
Instructions are processed under the direction of the control unit in a step-by-step
manner.
There are four fundamental steps in the instruction cycle:
1. Fetch the instruction The next instruction is fetched from the memory address
that is currently stored in the Program Counter (PC), and stored in the Instruction
register (IR). At the end of the fetch operation, the PC points to the next instruction
that will be read at the next cycle.
2. Decode the instruction The control unit interprets the instruction. During this
cycle the instruction inside the IR (instruction register) gets decoded.
3. Execute The control unit of the CPU passes the decoded information as a sequence
of control signals to the relevant functional units of the CPU to perform the actions
required by the instruction, such as reading values from registers, passing them to
the ALU to perform arithmetic or logic functions on them, and writing the
result back to a register. If the ALU is involved, it sends a condition signal back to
the control unit.
4. Store result The result generated by the operation is stored in main memory
or sent to an output device. Based on feedback from the ALU, the
Program Counter may be updated to a different address, from which the next
instruction will be fetched.
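The four steps above can be sketched as a toy interpreter. The opcodes and the tuple instruction format here are invented for illustration; they are not a real ISA:

```python
# Toy machine sketching fetch, decode, execute, store.
def run(program, memory):
    pc = 0        # Program Counter
    acc = 0       # a single accumulator register
    while pc < len(program):
        ir = program[pc]           # 1. Fetch into the instruction register
        pc += 1                    #    PC now points at the next instruction
        opcode, operand = ir       # 2. Decode the instruction fields
        if opcode == "LOAD":       # 3. Execute
            acc = memory[operand]
        elif opcode == "ADD":
            acc += memory[operand]
        elif opcode == "STORE":    # 4. Store the result back to memory
            memory[operand] = acc
        elif opcode == "HALT":
            break
    return memory

mem = {"X": 5, "Y": 7, "Z": 0}
run([("LOAD", "X"), ("ADD", "Y"), ("STORE", "Z"), ("HALT", None)], mem)
print(mem["Z"])  # 12
```

Real control units implement these steps in hardware as control signals, but the loop structure is the same cycle described above.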
A digital system understands only positional number systems, in which there are a few
symbols called digits, and these symbols represent different values depending on the
position they occupy in the number.
The value of each digit in a number can be determined from
The digit itself
The position of the digit in the number
The base of the number system (where the base is defined as the total number of
digits available in the number system).
Example
Binary Number: 10101₂
Calculating Decimal Equivalent −
10101₂ = ((1 × 2⁴) + (0 × 2³) + (1 × 2²) + (0 × 2¹) + (1 × 2⁰))₁₀ = 21₁₀
Hexadecimal Number: 19FDE₁₆
19FDE₁₆ = ((1 × 16⁴) + (9 × 16³) + (15 × 16²) + (13 × 16¹) + (14 × 16⁰))₁₀ = 106462₁₀
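The positional expansion above can be expressed directly in code. This is a minimal sketch of the digit × base^position rule (written with Horner's method, which gives the same sum):

```python
def to_decimal(digits, base):
    """Accumulate digit-by-digit; equivalent to summing digit * base**position."""
    value = 0
    for d in digits:
        value = value * base + d
    return value

print(to_decimal([1, 0, 1, 0, 1], 2))        # 21    (binary 10101)
print(to_decimal([1, 9, 15, 13, 14], 16))    # 106462 (hex 19FDE)
```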
Number conversion
There are many methods or techniques which can be used to convert numbers from
one base to another. We'll demonstrate here the following −
Converting decimal 29 to binary (divide by 2, keeping each remainder):
Step 1: 29 / 2 = 14, remainder 1
Step 2: 14 / 2 = 7, remainder 0
Step 3: 7 / 2 = 3, remainder 1
Step 4: 3 / 2 = 1, remainder 1
Step 5: 1 / 2 = 0, remainder 1
Reading the remainders from bottom to top: 29₁₀ = 11101₂
Converting decimal 21 to binary:
Step 1: 21 / 2 = 10, remainder 1
Step 2: 10 / 2 = 5, remainder 0
Step 3: 5 / 2 = 2, remainder 1
Step 4: 2 / 2 = 1, remainder 0
Step 5: 1 / 2 = 0, remainder 1
Reading the remainders from bottom to top: 21₁₀ = 10101₂
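The repeated-division steps above translate to a short routine. A minimal sketch:

```python
def decimal_to_binary(n: int) -> str:
    """Repeatedly divide by 2; the remainders, read bottom-up, are the bits."""
    if n == 0:
        return "0"
    bits = []
    while n > 0:
        n, r = divmod(n, 2)   # quotient and remainder, one step of the table
        bits.append(str(r))
    return "".join(reversed(bits))  # reverse = reading remainders bottom-up

print(decimal_to_binary(29))  # 11101
print(decimal_to_binary(21))  # 10101
```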
Binary coding
In coding, when numbers, letters or words are represented by a specific group of
symbols, the number, letter or word is said to be encoded. The group of
symbols is called a code. Digital data is represented, stored and transmitted as
groups of binary bits. Such a group is also called a binary code. Binary codes can
represent numbers as well as alphanumeric characters.
Weighted Codes
Non-Weighted Codes
Binary Coded Decimal Code
Alphanumeric Codes
Error Detecting Codes
Error Correcting Codes
Weighted Codes
Weighted binary codes are those binary codes which obey the positional weight
principle. Each position of the number represents a specific weight. Several such
code systems are used to express the decimal digits 0 through 9. In these codes, each
decimal digit is represented by a group of four bits.
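The most common weighted code is 8421 BCD, where each decimal digit gets its own four-bit group with weights 8, 4, 2, 1. A minimal sketch:

```python
def to_bcd(n: int) -> str:
    """Encode each decimal digit as its own 4-bit 8421 group."""
    return " ".join(format(int(d), "04b") for d in str(n))

print(to_bcd(59))  # 0101 1001  (5 -> 0101, 9 -> 1001)
```

Note that BCD is not the same as plain binary: 59 in pure binary is 111011, but in BCD each digit is encoded separately.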
Non-Weighted Codes
In this type of binary code, positional weights are not assigned. Examples of
non-weighted codes are the Excess-3 code and the Gray code.
Excess-3 code
The Excess-3 code is also called the XS-3 code. It is a non-weighted code used to
express decimal numbers. The Excess-3 code words are derived from the 8421 BCD
code words by adding (0011)₂, i.e. (3)₁₀, to each code word in 8421. For example,
decimal 5 is 0101 in 8421 BCD and 1000 in Excess-3.
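The "add 3, then take the bit pattern" derivation can be sketched directly:

```python
def excess3(digit: int) -> str:
    """Excess-3: add 3 to the decimal digit, then take its 4-bit pattern."""
    return format(digit + 3, "04b")

for d in range(10):
    print(d, excess3(d))
# 0 -> 0011, 1 -> 0100, ..., 9 -> 1100
```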
Gray Code
The Gray code is a non-weighted code and is not an arithmetic code; that is, no
specific weights are assigned to the bit positions. It has a very special feature: only one
bit changes each time the decimal number is incremented, as shown in the figure. As only
one bit changes at a time, the Gray code is called a unit distance code. The Gray
code is a cyclic code. Gray code cannot be used for arithmetic operations.
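The standard binary-reflected Gray code can be computed with one XOR, and the unit-distance property checked directly. A minimal sketch:

```python
def binary_to_gray(n: int) -> int:
    """Binary-reflected Gray code: XOR the number with itself shifted right."""
    return n ^ (n >> 1)

codes = [binary_to_gray(i) for i in range(8)]
for c in codes:
    print(format(c, "03b"))
# 000, 001, 011, 010, 110, 111, 101, 100 -- successive codes differ in one bit
```

Counting the differing bits between consecutive codes (including the wrap-around from the last code back to the first) always gives exactly 1, which is the cyclic unit-distance property described above.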
Alphanumeric codes
A binary digit, or bit, can represent only two symbols, as it has only two states, '0' and '1'.
But this is not enough for communication between two computers, because many
more symbols are needed. These symbols are required to represent
the 26 letters of the alphabet in both capital and small forms, the numbers 0 to 9,
punctuation marks and other symbols.
Alphanumeric codes are codes that represent numbers and alphabetic
characters. Most such codes also represent other characters, such as symbols and
various instructions necessary for conveying information. An alphanumeric code should
represent at least 10 digits and 26 letters of the alphabet, i.e. 36 items in total. Three
alphanumeric codes are very commonly used for data representation.
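ASCII is one widely used alphanumeric code (the text above does not name specific codes, so this is an illustrative choice): 7 bits give 128 code points covering digits, both letter cases, punctuation, and control characters. A quick look at a few code words:

```python
# Each character maps to a 7-bit ASCII code word.
for ch in ("A", "a", "0"):
    print(ch, ord(ch), format(ord(ch), "07b"))
# A 65 1000001
# a 97 1100001
# 0 48 0110000
```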
There are three basic types of digital logic gates: the AND gate, the OR
gate and the NOT gate.
Digital logic gates can be made from discrete components such
as resistors, transistors and diodes to form RTL (resistor-transistor logic)
or DTL (diode-transistor logic) circuits, but today's modern digital 74xxx
series integrated circuits are manufactured using TTL (transistor-transistor
logic) based on NPN bipolar transistor technology, or the much faster and
lower power CMOS-based MOSFET transistor logic used in the 74Cxxx,
74HCxxx, 74ACxxx and the 4000 series logic chips.
2-input AND Gate
B A Q
0 0 0
0 1 0
1 0 0
1 1 1
Boolean Expression Q = A.B    Read as A AND B gives Q
2-input OR Gate
B A Q
0 0 0
0 1 1
1 0 1
1 1 1
Boolean Expression Q = A+B    Read as A OR B gives Q
2-input NAND Gate
B A Q
0 0 1
0 1 1
1 0 1
1 1 0
Boolean Expression Q = NOT(A.B)    Read as A AND B gives NOT Q
2-input NOR Gate
B A Q
0 0 1
0 1 0
1 0 0
1 1 0
Boolean Expression Q = NOT(A+B)    Read as A OR B gives NOT Q
2-input Ex-OR Gate
B A Q
0 0 0
0 1 1
1 0 1
1 1 0
Boolean Expression Q = A ⊕ B    Read as A OR B but NOT BOTH gives Q (odd)
2-input Ex-NOR Gate
B A Q
0 0 1
0 1 0
1 0 0
1 1 1
Boolean Expression Q = NOT(A ⊕ B)    Read as if A AND B the SAME gives Q (even)
Digital Buffer
A Q
0 0
1 1
Boolean Expression Q = A    Read as A gives Q
NOT Gate (Inverter)
A Q
0 1
1 0
Boolean Expression Q = NOT A    Read as the inverse of A gives Q
Summary of the 2-input logic gates:
B A | AND NAND OR NOR Ex-OR Ex-NOR
0 0 |  0    1   0   1    0     1
0 1 |  0    1   1   0    1     0
1 0 |  0    1   1   0    1     0
1 1 |  1    0   1   0    0     1
A | NOT Buffer
0 |  1    0
1 |  0    1
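The truth tables above follow directly from the Boolean definitions, which makes them easy to regenerate and check in code. A minimal sketch:

```python
# Each gate as a function of two bits; NAND/NOR/XNOR invert the basic gates.
GATES = {
    "AND":  lambda a, b: a & b,
    "OR":   lambda a, b: a | b,
    "NAND": lambda a, b: 1 - (a & b),
    "NOR":  lambda a, b: 1 - (a | b),
    "XOR":  lambda a, b: a ^ b,
    "XNOR": lambda a, b: 1 - (a ^ b),
}

def truth_table(gate: str):
    """Rows of (B, A, Q), in the same order as the tables above."""
    return [(b, a, GATES[gate](a, b)) for b in (0, 1) for a in (0, 1)]

print(truth_table("NAND"))
# [(0, 0, 1), (0, 1, 1), (1, 0, 1), (1, 1, 0)]
```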
https://www.tutorialspoint.com/computer_logical_organization/logic_gates.htm