Chapter 08 and Chapter 09


Chapter 09: Memory organization

Microcomputer Memory:

Computer data storage, often called storage or memory, refers to computer
components, devices, and recording media that retain digital data used for
computing for some interval of time. Computer data storage provides one of the
core functions of the modern computer: information retention. It is one of
the fundamental components of all modern computers and, coupled with a central
processing unit (CPU, a processor), implements the basic computer model used
since the 1940s.

In contemporary usage, memory usually refers to a form of semiconductor storage
known as random-access memory (RAM), and sometimes other forms of fast but
temporary storage. Similarly, storage today more commonly refers to mass storage:
optical discs, forms of magnetic storage like hard disk drives, and other types
slower than RAM but of a more permanent nature. Historically, memory and
storage were respectively called main memory and secondary storage. The terms
internal memory and external memory are also used.

The contemporary distinctions are helpful, because they are also fundamental to
the architecture of computers in general. The distinctions also reflect an important
and significant technical difference between memory and mass storage devices,
which has been blurred by the historical usage of the term storage. Nevertheless,
this article uses the traditional nomenclature.

Memory connection to CPU


Purpose of Memory:

Many different forms of storage, based on various natural phenomena, have been
invented. So far, no practical universal storage medium exists, and all forms of
storage have some drawbacks. Therefore, a computer system usually contains
several kinds of storage, each with an individual purpose.

A digital computer represents data using the binary numeral system. Text,
numbers, pictures, audio, and nearly any other form of information can be
converted into a string of bits, or binary digits, each of which has a value of 1 or 0.
The most common unit of storage is the byte, equal to 8 bits. A piece of
information can be handled by any computer whose storage space is large enough
to accommodate the binary representation of the piece of information, or simply
data. For example, using eight million bits, or about one megabyte, a typical
computer could store a short novel.
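The arithmetic behind that example can be checked directly; the characters-per-page figure below is an assumption for illustration, not from the text:

```python
# Rough storage estimate for a short novel, assuming 1 byte per character.
bits = 8_000_000
bytes_ = bits // 8              # 8 bits per byte
megabytes = bytes_ / 1_000_000
chars_per_page = 2_000          # assumed: ~2,000 characters per printed page
pages = bytes_ // chars_per_page

print(megabytes)  # 1.0 -> about one megabyte
print(pages)      # 500 -> roughly the length of a short novel
```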

Traditionally the most important part of every computer is the central processing
unit (CPU, or simply a processor), because it actually operates on data, performs
any calculations, and controls all the other components.

In practice, almost all computers use a variety of memory types, organized in a
storage hierarchy around the CPU, as a tradeoff between performance and cost.
Generally, the lower a storage is in the hierarchy, the lesser its bandwidth and the
greater its access latency from the CPU. This traditional division of storage into
primary, secondary, tertiary and off-line storage is also guided by cost per bit.

Main Memory:

Random Access Memory (RAM) is used to store the programs and data being used by the CPU
in real time. The data in random access memory can be read, written, and erased any number
of times. RAM is a hardware element where the data currently in use is stored. It is a
volatile memory. Types of RAM:

1. Static RAM (SRAM), which stores a bit of data using the state of a six-transistor memory
cell.
2. Dynamic RAM (DRAM), which stores a bit of data using a transistor and a capacitor, which
together constitute a DRAM memory cell.

Read Only Memory (ROM) is a type of memory where the data has been prerecorded. Data
stored in ROM is retained even after the computer is turned off, i.e., it is non-volatile. Types of ROM:
1. Programmable ROM (PROM), where the data is written after the memory chip has been created. It is
non-volatile.
2. Erasable Programmable ROM (EPROM), where the data on this non-volatile memory chip can be erased
by exposing it to high-intensity UV light.
3. Electrically Erasable Programmable ROM (EEPROM), where the data on this non-volatile memory chip
can be electrically erased using field electron emission.
4. Mask ROM, in which the data is written during the manufacturing of the memory chip.
Characteristics of Memory system:

Storage technologies at all levels of the storage hierarchy can be differentiated by
evaluating certain core characteristics, as well as by measuring characteristics specific
to a particular implementation. These core characteristics are volatility, mutability,
accessibility, and addressability. For any particular implementation of any storage
technology, the characteristics worth measuring are capacity and performance.

Volatility

Non-volatile memory will retain the stored information even if it is not constantly
supplied with electric power. It is suitable for long-term storage of information, and is
nowadays used for most secondary, tertiary, and off-line storage. In the 1950s and
1960s, it was also used for primary storage, in the form of magnetic core memory.

Volatile memory requires constant power to maintain the stored information. The
fastest memory technologies of today are volatile ones (though this is not a universal rule).
Since primary storage is required to be very fast, it predominantly uses volatile memory.

Differentiation

Dynamic random access memory

A form of volatile memory which also requires the stored information to be
periodically re-read and re-written, or refreshed; otherwise it would vanish.

Static memory

A form of volatile memory similar to DRAM with the exception that it never needs
to be refreshed.

Mutability

Read/write storage, or mutable storage, allows information to be overwritten at
any time. A computer without some amount of read/write storage for primary
storage purposes would be useless for many tasks. Modern computers typically use
read/write storage for secondary storage as well.

Read only storage retains the information stored at the time of manufacture, and
write once storage (Write Once Read Many) allows the information to be written
only once at some point after manufacture. These are called immutable storage.
Immutable storage is used for tertiary and off-line storage. Examples include CD-ROM
and CD-R.

Slow write, fast read storage is read/write storage which allows
information to be overwritten multiple times, but with the write operation being
much slower than the read operation. Examples include CD-RW and flash memory.

Accessibility

Random access

Any location in storage can be accessed at any moment in approximately the same
amount of time. This characteristic is well suited to primary and
secondary storage.
Sequential access

Pieces of information are accessed in serial order, one after the other;
therefore the time to access a particular piece of information depends upon which
piece of information was last accessed. This characteristic is typical of off-line
storage.

Addressability

Location-addressable

Each individually accessible unit of information in storage is selected with its
numerical memory address. In modern computers, location-addressable storage is
usually limited to primary storage, accessed internally by computer programs, since
location-addressability is very efficient but burdensome for humans.

File addressable

Information is divided into files of variable length, and a particular file is selected
with human-readable directory and file names. The underlying device is still
location-addressable, but the operating system of a computer provides the file
system abstraction to make the operation more understandable. In modern
computers, secondary, tertiary and off-line storage use file systems.

Content-addressable

Each individually accessible unit of information is selected on the basis of
(part of) the contents stored there. Content-addressable storage can be implemented
in software (a computer program) or hardware (a computer device), with hardware
being the faster but more expensive option. Hardware content addressable memory is
often used in a computer's CPU cache.

Capacity

Raw capacity

The total amount of stored information that a storage device or medium can hold. It
is expressed as a quantity of bits or bytes (e.g. 10.4 megabytes).

Memory storage density


The compactness of stored information. It is the storage capacity of a medium
divided by a unit of length, area or volume (e.g. 1.2 megabytes per square inch).

Performance

Latency

The time it takes to access a particular location in storage. The relevant unit of
measurement is typically nanoseconds for primary storage, milliseconds for
secondary storage, and seconds for tertiary storage. It may make sense to separate
read latency and write latency, and in the case of sequential access storage, minimum,
maximum and average latency.

Throughput

The rate at which information can be read from or written to the storage. In
computer data storage, throughput is usually expressed in megabytes per
second (MB/s), though bit rate may also be used. As with latency, read rate and
write rate may need to be differentiated. Also, accessing media sequentially, as
opposed to randomly, typically yields maximum throughput.
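As a rough illustration of how latency and throughput combine, the time to move a block of data is approximately the latency plus the size divided by the throughput; the device figures below are illustrative assumptions, not measurements:

```python
# Estimated time to read a block of data: latency + size / throughput.
def transfer_time(size_bytes, latency_s, throughput_bps):
    return latency_s + size_bytes / throughput_bps

# 10 MB sequential read from a disk: assumed ~5 ms latency, 100 MB/s throughput
t_disk = transfer_time(10_000_000, 0.005, 100_000_000)
# The same read from RAM: assumed ~50 ns latency, 10 GB/s throughput
t_ram = transfer_time(10_000_000, 50e-9, 10_000_000_000)

print(round(t_disk, 3))  # 0.105 -> about a tenth of a second
print(round(t_ram, 4))   # 0.001 -> two orders of magnitude faster
```

For large sequential transfers the throughput term dominates; for small random accesses the latency term dominates, which is why the two are measured separately.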

Magnetic

Magnetic storage uses different patterns of magnetization on a magnetically coated
surface to store information. Magnetic storage is non-volatile. The information is
accessed using one or more read/write heads, which may contain one or more
recording transducers. A read/write head covers only a part of the surface, so
the head or medium, or both, must be moved relative to each other in order to access
data. In modern computers, magnetic storage takes these forms:

o Floppy disk, used for off-line storage

o Hard disk drive, used for secondary storage

o Magnetic tape, used for tertiary and off-line storage

Memory Hierarchy:
Associative memory

An associative memory can be considered as a memory unit whose stored data can be identified
for access by the content of the data itself rather than by an address or memory location.

Associative memory is often referred to as Content Addressable Memory (CAM).

When a write operation is performed on associative memory, no address or memory location is
given for the word. The memory itself is capable of finding an empty unused location to store the
word.

On the other hand, when the word is to be read from an associative memory, the content of the
word, or part of the word, is specified. The words which match the specified content are located
by the memory and are marked for reading.

From the block diagram, we can say that an associative memory consists of a memory array and
logic for 'm' words with 'n' bits per word.

The functional registers like the argument register A and key register K each have n bits, one for
each bit of a word. The match register M consists of m bits, one for each memory word.

The words which are kept in the memory are compared in parallel with the content of the argument
register.

The key register (K) provides a mask for choosing a particular field or key in the argument word.
If the key register contains a binary value of all 1's, then the entire argument is compared with each
memory word. Otherwise, only those bits in the argument that have 1's in their corresponding
position of the key register are compared. Thus, the key provides a mask for identifying a piece of
information which specifies how the reference to memory is made.
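The masked comparison of the argument register against every word can be sketched as follows; the 4-bit words, argument, and mask are made up for illustration and are not from the text's block diagram:

```python
# Associative (content-addressable) search: compare argument A against
# every memory word in parallel, masked by key register K. Bit Mi of the
# match register is set to 1 when word i matches the unmasked field.
def cam_match(words, A, K):
    # Only bit positions where K has a 1 participate in the comparison.
    return [1 if (w & K) == (A & K) else 0 for w in words]

memory = [0b1011, 0b0111, 0b1010, 0b1111]   # m = 4 words, n = 4 bits
A = 0b1010                                   # argument register
K = 0b1100                                   # mask: compare the two high bits only
M = cam_match(memory, A, K)
print(M)  # [1, 0, 1, 0] -> words 0 and 2 match the field 10xx
```

With K all 1's the entire argument would be compared, so only an exact copy of A would match.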
Read Operation
If more than one word in memory matches the unmasked argument field, all the matched words
will have 1's in the corresponding bit position of the match register. It is then necessary to scan
the bits of the match register one at a time.
• The matched words are read in sequence by applying a read signal to each word line whose
corresponding Mi bit is a 1.
• In most applications, the associative memory stores a table with no two identical items under a
given key. In this case, only one word may match the unmasked argument field.

Write Operation
Writing in an associative memory can take different forms, depending on the application.
• If the entire memory is loaded with new information at once prior to a search operation then
the writing can be done by addressing each location in sequence. This will make the device a
random access memory for writing and a content addressable memory for reading.
• The advantage here is that the address for input can be decoded as in a random access memory.

Cache Memory:

– Speed of the main memory is very low in comparison with the speed of the processor.

– For good performance, the processor cannot spend much of its time waiting
to access instructions and data in main memory.

– It is important to devise a scheme that reduces the time to access the information.

– An efficient solution is to use fast cache memory.

– When a cache is full and a memory word that is not in the cache is referenced,
the cache control hardware must decide which block should be removed to create
space for the new block that contains the referenced word.
Cache memory principles:

Cache memory is constructed with SRAM. It is much faster than DRAM, with
access time on the order of 10 ns. It is also much more expensive than the DRAM used for
physical memory.

In the memory hierarchy, cache memory is closest to the microprocessor. A cache
controller copies data from physical memory to cache memory when the CPU needs it.

Cache memory uses parallel searching of data. It first compares the incoming
address to the addresses present in the cache. If the address matches, a hit is said
to have occurred and the corresponding data is read by the CPU. If the address does
not match, a miss is said to have occurred.

Cache memory consists of 2 levels: level 1 cache (internal cache) and level 2
cache (external cache).

Reason for including an L2 cache: if there is no L2 cache and the processor makes an
access request for a memory location not in the L1 cache, then the processor must access
DRAM physical memory across the bus. Due to the typically slow bus speed and slow
memory access time, this results in poor performance. If an L2 cache is used, the data
can be accessed using a zero-wait-state transaction, over a separate bus rather
than the system bus.
Cache Mapping-

• Cache mapping defines how a block from the main memory is mapped to the cache memory
in case of a cache miss.
OR
• Cache mapping is a technique by which the contents of main memory are brought into the
cache memory.
1. Direct Mapping
2. Fully Associative Mapping
3. K-way Set Associative Mapping

1. Direct Mapping-

In direct mapping,
• A particular block of main memory can map only to a particular line of the cache.
• The line number of the cache to which a particular block can map is given by:

Cache line number = (Main memory block number) mod (Number of lines in cache)

Example-

• Consider a cache memory divided into 'n' lines.
• Then, block 'j' of main memory can map only to line number (j mod n) of the cache.
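The (j mod n) rule can be sketched in a couple of lines; the 4-line cache is an assumed size for illustration:

```python
# Direct mapping: block j of main memory can go only to cache line (j mod n).
def direct_map_line(block, n_lines):
    return block % n_lines

n = 4  # assumed: a cache with 4 lines
for j in [0, 3, 4, 9]:
    print("block", j, "-> line", direct_map_line(j, n))
# block 0 -> line 0, block 3 -> line 3, block 4 -> line 0, block 9 -> line 1
```

Blocks 0, 4, 8, ... all compete for line 0, which is why direct mapping needs no replacement algorithm but can suffer conflict misses.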
2. Fully Associative Mapping-

In fully associative mapping,
• A block of main memory can map to any line of the cache that is freely available at
that moment.
• This makes fully associative mapping more flexible than direct mapping.

Example-

Consider the following scenario-

Here,
• All the lines of the cache are freely available.
• Thus, any block of main memory can map to any line of the cache.
• Had all the cache lines been occupied, one of the existing blocks would have to be
replaced.

3. K-way Set Associative Mapping-

In k-way set associative mapping,
• Cache lines are grouped into sets, where each set contains k lines.
• A particular block of main memory can map to only one particular set of the cache.
• However, within that set, the memory block can map to any cache line that is freely
available.

Example-

Consider the following example of 2-way set associative mapping-


Here,
• k = 2 means that each set contains two cache lines.
• Since the cache contains 6 lines, the number of sets in the cache = 6 / 2 = 3 sets.
• Block 'j' of main memory can map only to set number (j mod 3) of the cache.
• Within that set, block 'j' can map to any cache line that is freely available at that
moment.
• If all the cache lines are occupied, then one of the existing blocks will have to be
replaced.
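The 2-way example above (6 lines, 3 sets) can be checked directly:

```python
# 2-way set associative mapping with the figures used above:
# 6 cache lines, k = 2 lines per set, so 6 / 2 = 3 sets.
K = 2
N_LINES = 6
N_SETS = N_LINES // K        # 3 sets

def set_number(block):
    return block % N_SETS    # block j maps to set (j mod 3)

for j in [0, 1, 2, 3, 7]:
    print("block", j, "-> set", set_number(j))
# block 0 -> set 0, block 1 -> set 1, block 2 -> set 2,
# block 3 -> set 0, block 7 -> set 1
# Within its set, a block may occupy either of the k = 2 lines.
```

Direct mapping is the k = 1 special case (one line per set), and fully associative mapping is the other extreme (one set containing all the lines).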

ELEMENTS OF CACHE DESIGN:

The factors to be considered while designing the cache memory:

1) Cache size: the cache size should be small enough that the average cost per bit is close to that
of main memory. The larger the cache, the slower it is.

2) Mapping technique: specifies how the cache is organized. Types: direct, associative, set
associative.

3) Replacement algorithm: when the cache is filled up and a new block is
brought into the cache, one of the existing blocks must be replaced. A replacement
algorithm is needed for associative and set associative mapping. To achieve
high speed, such an algorithm must be implemented in hardware.

The 4 most common replacement algorithms are:

a) LRU (least recently used): the most effective. Replace the block in the set that has
been in the cache longest with no reference to it. Easily implemented for 2-way set
associative mapping.

b) FIFO (first in first out): replace the block in the set that has been in the cache
longest. FIFO is easily implemented as a round-robin or circular buffer technique.

c) LFU (least frequently used): replace the block in the set that has experienced the
fewest references. LFU can be implemented by associating a counter with each line.

d) Random replacement: pick a line at random from among the candidate lines. It has
low performance.

4) Write policy:

i) Write through: when data is written from the CPU into a location in the cache, it is
also written to the corresponding location in physical memory.

ii) Write back: the value written to the cache is not always written to physical memory.
The value is written to physical memory only once, when the data is removed from the
cache.

5) Line size: for HPC systems, line sizes of 64 and 128 bytes are mostly used.

6) Number of caches: cache memory is placed at 2 or 3 levels, called first level,
second level and third level. Some processors contain L1 and L2 within the processor.
Caches within the processor are internal caches; a cache outside the processor is an
external cache.
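The LRU policy described above can be sketched with an ordered structure that tracks recency. This is an illustrative software simulation of the policy (real caches implement it with hardware counters or bit matrices); the 2-line cache and reference string are made up:

```python
from collections import OrderedDict

# LRU replacement simulation: on a miss with a full cache, evict the block
# that has been in the cache longest with no reference to it.
class LRUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.lines = OrderedDict()   # block -> present; order = recency

    def access(self, block):
        if block in self.lines:              # hit: mark most recently used
            self.lines.move_to_end(block)
            return "hit"
        if len(self.lines) >= self.capacity: # miss with full cache: evict LRU
            self.lines.popitem(last=False)
        self.lines[block] = True
        return "miss"

cache = LRUCache(capacity=2)
print([cache.access(b) for b in [1, 2, 1, 3, 2]])
# ['miss', 'miss', 'hit', 'miss', 'miss'] -- block 2 was evicted to admit 3
```

Swapping `move_to_end` out (so hits do not refresh recency) would turn the same structure into FIFO replacement.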
CHAPTER 8:
INPUT-OUTPUT ORGANISATION

Peripheral devices:

The input and output devices and secondary storage units of a computer are
called peripherals. When the term peripheral is used in the wider sense, it also
includes interfacing devices such as I/O ports, programmable
interfaces, interrupt controllers, keyboard interfaces, etc.

Peripherals are classified into input devices and output devices.

I/O modules:

An Input/Output module provides:
• An interface to the CPU and memory
• An interface to one or more peripherals

Isolated vs memory-mapped I/O
Asynchronous data transfer
If the registers in the I/O interface share a common clock with CPU registers, then transfer
between the two units is said to be synchronous. But in most cases, the internal timing in each
unit is independent of each other, so each uses its private clock for its internal registers. In this
case, the two units are said to be asynchronous to each other, and if data transfer occurs between
them, this data transfer is called Asynchronous Data Transfer.

However, asynchronous data transfer between two independent units requires that control signals
be transmitted between the communicating units to indicate when data is sent. Two
methods can achieve this asynchronous way of data transfer:
o Strobe control: A strobe pulse is supplied by one unit to indicate to the other unit when
the transfer has to occur.
o Handshaking: This method is commonly used to accompany each data item being
transferred with a control signal that indicates data in the bus. The unit receiving the data
item responds with another signal to acknowledge receipt of the data.
The strobe pulse and handshaking methods of asynchronous data transfer are not restricted to I/O
transfer. They are used extensively on numerous occasions requiring the transfer of data between
two independent units. Here we consider the transmitting unit as the source and the receiving unit
as the destination.

Asynchronous Data Transfer Methods



1. Strobe Control Method

The strobe control method of asynchronous data transfer employs a single control line to time
each transfer. This control line is also known as a strobe, and it may be activated by either the
source or the destination, depending on which initiates the transfer.

a. Source-initiated strobe: the strobe is initiated by the source; the source unit first places
the data on the data bus and then activates the strobe.

b. Destination-initiated strobe: the strobe is initiated by the destination; the destination unit
first activates the strobe pulse, informing the source to provide the data.

2. Handshaking Method

The strobe method has the disadvantage that the source unit that initiates the transfer has no
way of knowing whether the destination has received the data that was placed on the bus.
Similarly, a destination unit that initiates the transfer has no way of knowing whether the
source unit has placed data on the bus.

This problem is solved by the handshaking method, which introduces a second control signal
line that provides a reply to the unit that initiates the transfer.

Source-initiated handshaking: the two handshaking lines are "data valid", generated by the
source unit, and "data accepted", generated by the destination unit.

Destination-initiated handshaking: the two handshaking lines are "data valid", generated by
the source unit, and "ready for data", generated by the destination unit.

Note that the signal generated by the destination unit has been renamed from "data accepted" to
"ready for data" to reflect its new meaning.
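A rough software analogue of source-initiated handshaking, with threading events standing in for the "data valid" and "data accepted" lines. This is an illustrative sketch of the protocol's ordering, not a hardware model; the transferred word is made up:

```python
import threading

# Source-initiated handshake simulated with two threads. Neither side
# proceeds until it has seen the other's control signal.
bus = {"data": None}
data_valid = threading.Event()       # raised by the source
data_accepted = threading.Event()    # raised by the destination

def source(word):
    bus["data"] = word          # place the data on the bus
    data_valid.set()            # raise data_valid
    data_accepted.wait()        # wait for the acknowledgement
    data_valid.clear()          # transfer complete; drop data_valid

def destination(out):
    data_valid.wait()           # wait until the bus carries valid data
    out.append(bus["data"])     # accept the word
    data_accepted.set()         # raise data_accepted

received = []
t = threading.Thread(target=destination, args=(received,))
t.start()
source(0x5A)
t.join()
print(received)  # [90] -> the word arrived intact
```

The key property the handshake gives over a bare strobe is visible here: the source cannot drop `data_valid` until the destination has confirmed receipt.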
Programmed I/O

• Programmed I/O operations are the result of I/O instructions written in the computer
program.
• In programmed I/O, each data transfer is initiated by instructions in the CPU, and
hence the CPU continuously monitors the interface.
• An input instruction is used to transfer data from an I/O device to the CPU, a store
instruction is used to transfer data from the CPU to memory, and an output instruction
is used to transfer data from the CPU to an I/O device.
• This technique is generally used in very slow-speed computers and is not an efficient
method when the speeds of the CPU and the I/O device differ.



• The I/O device places the data on the I/O bus and enables its data valid signal.

• The interface accepts the data into the data register, sets the F bit of the status
register, and enables the data accepted signal.

• The data valid line is disabled by the I/O device.

• The CPU continuously monitors the interface, checking the F bit of the status register:

o If it is set (1), the CPU reads the data from the data register and resets the F
bit to zero.
o If it is reset (0), the CPU keeps monitoring the interface.

• The interface disables the data accepted signal and the system returns to the initial state,
where the next item of data is placed on the data bus.
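The polling sequence above can be sketched as a busy-wait loop. The dict standing in for the interface's status and data registers is a made-up illustration:

```python
# Programmed I/O polling loop: the CPU repeatedly checks the interface's
# F (flag) bit and reads the data register only when F = 1.
def poll_and_read(interface, max_polls=1000):
    polls = 0
    while not interface["F"]:        # busy-wait on the status flag
        polls += 1
        if polls >= max_polls:       # guard so the sketch cannot hang
            raise TimeoutError("device never became ready")
    word = interface["data"]         # read the data register
    interface["F"] = 0               # reset F so the device can send more
    return word, polls

iface = {"F": 1, "data": 0x41}       # device has already placed a byte
print(poll_and_read(iface))          # (65, 0) -> data read, no wasted polls
```

The `polls` counter makes the cost of this scheme visible: every iteration of the while loop is a CPU cycle spent doing nothing useful, which is the motivation for interrupt-driven I/O below.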

Interrupt-driven I/O

• Polling takes valuable CPU time.
• Open communication only when some data has to be passed -> interrupt.
• The I/O interface, instead of the CPU, monitors the I/O device.
• When the interface determines that the I/O device is ready for data transfer, it
generates an interrupt request to the CPU.
• Upon detecting an interrupt, the CPU momentarily stops the task it is doing, branches
to the service routine to process the data transfer, and then returns to the task it was
performing.

The problem with programmed I/O is that the processor has to wait a long time for the
I/O module of concern to be ready for either reception or transmission of data. The
processor, while waiting, must repeatedly interrogate the status of the I/O module. As a
result, the performance of the entire system is severely degraded. An alternative is for
the processor to issue an I/O command to a module and then go on to do some other
useful work. The I/O module will then interrupt the processor to request service when
it is ready to exchange data with the processor. The processor then executes the data
transfer and resumes its former processing. The interrupt can be initiated either by
software or by hardware.

1. Interrupt-driven I/O basic operation

o CPU issues a read command
o I/O module gets data from the peripheral whilst the CPU does other work
o I/O module interrupts the CPU
o CPU requests the data
o I/O module transfers the data

Interrupt Processing from the CPU viewpoint

o Issue read command
o Do other work
o Check for an interrupt at the end of each instruction cycle
o If interrupted:
  o Save context (registers)
  o Process the interrupt
  o Fetch data & store
Priority Interrupt

• Determines which interrupt is to be served first when two or more requests are made
simultaneously
• Also determines which interrupts are permitted to interrupt the computer while another
is being serviced
• Higher-priority interrupts can make requests while a lower-priority interrupt is being
serviced

Priority Interrupt by Software (Polling)

• Priority is established by the order of polling the devices (interrupt sources); that is,
the highest-priority source is identified by software means
• One common branch address is used for all interrupts
• The program polls the interrupt sources in sequence
• The highest-priority source is tested first
• Flexible, since the priority is established by software
• Low cost, since it needs very little hardware
• Very slow
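A minimal sketch of polling-based priority: because the sources are tested in a fixed order, the first pending source found is, by construction, the highest-priority one. The device names and their order are assumptions for illustration:

```python
# Software priority interrupt: poll the sources in a fixed order and
# service the first pending one found.
def poll_interrupts(pending, sources):
    for src in sources:              # highest priority tested first
        if src in pending:
            return src               # branch to this source's routine
    return None                      # no interrupt pending

# Assumed priority order, highest first.
priority_order = ["power_fail", "disk", "keyboard", "printer"]

print(poll_interrupts({"keyboard", "disk"}, priority_order))  # disk
print(poll_interrupts(set(), priority_order))                 # None
```

The flexibility and the slowness of the scheme are both visible: changing priorities is just reordering the list, but in the worst case every source must be tested before the right service routine is reached.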

Priority Interrupt by Hardware

• Requires a priority interrupt manager which accepts all the interrupt requests and
determines the highest-priority request
• Fast, since the highest-priority interrupt request is identified by hardware
• Fast, since each interrupt source has its own interrupt vector for direct access to its
own service routine
• An interrupt request may come from any device
• The CPU responds with INTACK
• Any requesting device that receives the INTACK signal at its PI input puts its VAD
(vector address) on the bus
• Among the interrupt-requesting devices, the device physically closest to the CPU gets
INTACK and blocks INTACK from propagating to the next device

Direct Memory Access

• Large blocks of data are transferred at high speed to or from high-speed devices:
magnetic drums, disks, tapes, etc.
• A DMA controller is an interface that provides I/O transfer of data directly to and from
memory and the I/O device
• The CPU initializes the DMA controller by sending a memory address and the number of
words to be transferred
• The actual transfer of data is done directly between the device and memory through the
DMA controller -> freeing the CPU for other tasks

The transfer of data between a peripheral and memory without the intervention of the CPU,
letting the peripheral device manage the memory bus directly, is termed Direct
Memory Access (DMA).

The two control signals Bus Request (BR) and Bus Grant (BG) are used to facilitate the DMA
transfer. The bus request input is used by the DMA controller to request the CPU for
control of the buses. When the BR signal is high, the CPU terminates the execution of the
current instruction, places the address, data, read and write lines in the high-impedance
state, and sends the bus grant signal. The DMA controller then takes control of the buses
and transfers the data directly between memory and I/O without processor interaction.
When the transfer is completed, the bus request signal is made low by the DMA controller,
in response to which the CPU disables the bus grant and again takes control of the
address, data, read and write lines.

The transfer of data between memory and I/O can take place in two ways: DMA burst and
cycle stealing.

DMA Burst: a block of data consisting of a number of memory words is transferred at a
time.

Cycle Stealing: DMA transfers one data word at a time, after which it must return control of the
buses to the CPU.

• The CPU is usually much faster than I/O (DMA), so the CPU uses most of the memory
cycles
• The DMA controller steals memory cycles from the CPU
• During those stolen cycles, the CPU remains idle
• For a slow CPU, the DMA controller may steal most of the memory cycles, which may
cause the CPU to remain idle for a long time

DMA Controller

The DMA controller communicates with the CPU through the data bus and control lines.
The DMA select signal is used for selecting the controller, and the register select signal
for selecting a register. When the bus grant signal is zero, the CPU communicates through
the data bus to read or write the DMA registers. When bus grant is one, the DMA controller
takes control of the buses and transfers the data between memory and I/O.

The address register specifies the desired location in memory and is incremented after
each word is transferred. The word count register holds the number of words to be
transferred and is decremented after each transfer until it reaches zero, which indicates
the end of the transfer. The bus grant signal from the CPU is then made low and the CPU
returns to its normal operation. The control register specifies the mode of transfer,
Read or Write.
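The register behaviour just described (address incrementing, word count decrementing to zero) can be sketched as a small simulation; the start address and data words are made up:

```python
# DMA controller register behaviour: the address register is incremented and
# the word count register decremented after each word, until the count is 0.
def dma_transfer(memory, start_addr, words):
    addr_reg = start_addr            # address register
    count_reg = len(words)           # word count register
    while count_reg > 0:
        memory[addr_reg] = words[addr_reg - start_addr]  # move one word
        addr_reg += 1                                    # next location
        count_reg -= 1                                   # one fewer word to go
    return addr_reg, count_reg       # count 0 signals the end of transfer

mem = {}
final_addr, final_count = dma_transfer(mem, 0x100, [11, 22, 33])
print(hex(final_addr), final_count)  # 0x103 0 -> three words written
print(mem[0x100], mem[0x102])        # 11 33
```

When the count reaches zero, a real controller drops its bus request so the CPU can reclaim the buses, as the text describes.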

DMA Transfer

• A DMA request signal is given by the I/O device to the DMA controller.

• The DMA controller sends the bus request signal to the CPU, in response to which the
CPU suspends its current instructions and initializes the DMA controller by sending the
following information:

o The starting address of the memory block where the data are available (for read) or
where data are to be stored (for write)
o The word count, which is the number of words in the memory block
o A control word to specify the mode of transfer
o A bus grant of 1, so that the DMA controller can take control of the buses

• The DMA controller sends the DMA acknowledge signal, in response to which the
peripheral device puts a word on the data bus (for write) or receives a word from the
data bus (for read).

In summary, the CPU tells the DMA controller:
o Read/Write
o Device address
o Starting address of the memory block for the data
o Amount of data to be transferred

The CPU then carries on with other work; the DMA controller deals with the transfer
and sends an interrupt when finished.

I/O Processors

• A processor with direct memory access capability that communicates with I/O devices
• A channel accesses memory by cycle stealing
• A channel can execute a channel program
• The channel program is stored in main memory and consists of Channel Command
Words (CCWs)
• Each CCW specifies the parameters needed by the channel to control the I/O devices
and perform data transfer operations
• The CPU initiates the channel by executing a channel I/O class instruction; once
initiated, the channel operates independently of the CPU

A computer may incorporate one or more external processors and assign them the task of
communicating directly with the I/O devices, so that each interface need not
communicate with the CPU. An I/O processor (IOP) is a processor with direct memory
access capability that communicates with I/O devices. IOP instructions are specifically
designed to facilitate I/O transfer. The IOP can also perform other processing tasks such
as arithmetic, logic, branching and code translation.

The memory unit occupies a central position and can communicate with each processor
by means of direct memory access. The CPU is responsible for processing data needed in
the solution of computational tasks. The IOP provides a path for transferring data
between various peripheral devices and memory unit.

In most computer systems, the CPU is the master while the IOP is a slave processor. The
CPU initiates the IOP, after which the IOP operates independently of the CPU and
transfers data between the peripherals and memory. For example, the IOP receives 5
bytes from an input device at the device rate and bit capacity, after which it packs them
into one block of 40 bits and transfers them to memory. Similarly, an output word
transferred from memory to the IOP is directed from the IOP to the output device at the
device rate and bit capacity.

CPU – IOP Communication

The memory unit acts as a message center where each processor leaves information for
the other. The operation of a typical IOP can be appreciated through an example of
CPU-IOP communication:

The CPU sends an instruction to test the IOP path.
The IOP responds by inserting a status word in memory for the CPU to check. The bits
of the status word indicate the condition of the IOP and the I/O device, such as an IOP
overload condition, device busy with another transfer, or device ready for I/O transfer.
The CPU refers to the status word in memory to decide what to do next.
If everything is in order, the CPU sends the instruction to start the I/O transfer.
The CPU then continues with another program while the IOP is busy with the I/O
program. When the IOP terminates the execution, it sends an interrupt request to the CPU.
The CPU responds by issuing an instruction to read the status from the IOP.
The IOP responds by placing the contents of its status report into a specified memory
location. The status word indicates whether the transfer completed successfully or with
an error.

Data Communication Processor

• Distributes data to and collects data from many remote terminals connected through
telephone and other communication lines.

The end
