
DEPARTMENT OF COMPUTER SCIENCE & ENGINEERING
Nutan College of Engineering & Research, Talegaon Dabhade, Pune - 410507
Computer Architecture And Organization
Unit IV: Memory Organization
1. Internal Memory: Semiconductor main memory
2. Error correction
3. Advanced DRAM organization
4. Virtual memory systems and cache memory systems
5. External Memory: Organization and characteristics of magnetic disk
6. Magnetic tape
7. Optical memory
8. RAID
9. Memory controllers

1. Internal Memory: Semiconductor main memory


Semiconductor memory is a digital electronic semiconductor device used for digital data storage, such as computer memory. It typically refers to MOS memory, where data is stored within metal–oxide–semiconductor (MOS) memory cells on a silicon integrated circuit memory chip. There are numerous types using different semiconductor technologies. The two main types of random-access memory (RAM) are static RAM (SRAM), which uses several transistors per memory cell, and dynamic RAM (DRAM), which uses a single transistor and a MOS capacitor per cell. Non-volatile memory (such as EPROM, EEPROM and flash memory) uses floating-gate memory cells, which consist of a single transistor per cell.
Most types of semiconductor memory have the property of random access, which means that it takes
the same amount of time to access any memory location, so data can be efficiently accessed in any
random order. This contrasts with data storage media such as hard disks and CDs, which read and write data consecutively, so the data can only be accessed in the same sequence in which it was written. Semiconductor memory also has much faster access times than other types of data storage;
a byte of data can be written to or read from semiconductor memory within a few nanoseconds,
while access time for rotating storage such as hard disks is in the range of milliseconds. For these
reasons it is used for main computer memory (primary storage), to hold data the computer is
currently working on, among other uses.
In a semiconductor memory chip, each bit of binary data is stored in a tiny circuit called a memory
cell consisting of one to several transistors. The memory cells are laid out in rectangular arrays on
the surface of the chip. The 1-bit memory cells are grouped in small units called words which are
accessed together as a single memory address. Memory is manufactured in word length that is
usually a power of two, typically N=1, 2, 4 or 8 bits.
Data is accessed by means of a binary number called a memory address applied to the chip's address
pins, which specifies which word in the chip is to be accessed. If the memory address consists
of M bits, the number of addresses on the chip is 2^M, each containing an N-bit word. Consequently,
the amount of data stored in each chip is N·2^M bits.[5] The storage capacity for M address lines is
thus 2^M words, a power of two: 2, 4, 8, 16, 32, 64, 128, 256, 512 and so on, measured in kibibits,
mebibits, gibibits or tebibits. As of 2014 the largest semiconductor memory chips hold a few
gibibits of data, but higher-capacity memory is constantly being developed. By combining several
integrated circuits, memory can be arranged into a larger word length and/or address space than what
is offered by each chip, often but not necessarily a power of two.[5]
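
To make the N·2^M relationship concrete, here is a small Python sketch; the 20-address-line, 8-bit-word chip in the example is made up for illustration:

```python
# Capacity of a memory chip with M address lines and N-bit words.
def chip_capacity_bits(m_address_bits: int, n_word_bits: int) -> int:
    words = 2 ** m_address_bits   # number of addressable words
    return words * n_word_bits    # total storage in bits

# Hypothetical chip: 20 address lines, 8-bit words.
# 2^20 words x 8 bits = 8,388,608 bits = 1 MiB.
print(chip_capacity_bits(20, 8))  # 8388608
```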
The two basic operations performed by a memory chip are "read", in which the data contents of a
memory word are read out (non-destructively), and "write", in which data is stored in a memory word,
replacing any data previously stored there. To increase the data rate, some of the latest types of
memory chips, such as DDR SDRAM, access multiple words with each read or write operation.
In addition to standalone memory chips, blocks of semiconductor memory are integral parts of many
computer and data processing integrated circuits. For example, the microprocessor chips that run
computers contain cache memory to store instructions awaiting execution.
Types:
1. Volatile Memory:

It is memory hardware that fetches/stores data at high speed. It is also referred to as temporary
memory. Data in volatile memory is retained only while the system is powered; once the system is
turned off, the data in volatile memory is deleted automatically. RAM (Random Access Memory) and
cache memory are common examples of volatile memory. Here, data fetch/store is fast and economical.

2. Non-Volatile Memory:

It is the type of memory in which data or information is not lost even when the power is shut off.
ROM (Read Only Memory) is the most common example of non-volatile memory. It is slower and less
economical in fetch/store than volatile memory, but it stores a higher volume of data. All
information that needs to be kept for an extended amount of time is stored in non-volatile memory.
Non-volatile memory has a huge impact on a system's storage capacity.
Below are the differences between volatile and non-volatile memory:

Sr. No. | Key            | Volatile Memory                                                            | Non-Volatile Memory
1       | Data Retention | Data is present only while the power supply is present.                   | Data remains even after the power supply is removed.
2       | Persistence    | Volatile memory data is not permanent.                                    | Non-volatile memory data is permanent.
3       | Speed          | Volatile memory is faster than non-volatile memory.                       | Non-volatile memory access is slower.
4       | Example        | RAM is an example of volatile memory.                                     | ROM is an example of non-volatile memory.
5       | Data Transfer  | Data transfer is easy in volatile memory.                                 | Data transfer is difficult in non-volatile memory.
6       | CPU Access     | The CPU can access data stored in volatile memory directly.               | Data must be copied from non-volatile memory to volatile memory before the CPU can access it.
7       | Storage        | Volatile memory has less storage capacity.                                | Non-volatile memory such as an HDD has very high storage capacity.
8       | Impact         | Volatile memory such as RAM has a high impact on the system's performance. | Non-volatile memory has no direct impact on the system's performance.
9       | Cost           | Volatile memory is costly per unit size.                                  | Non-volatile memory is cheap per unit size.
2. Error Correction
Error correction is the process of detecting errors in transmitted messages and reconstructing the
original error-free data. Error correction ensures that corrected and error-free messages are obtained
at the receiver side. Systems capable of requesting the retransmission of bad messages in response to
error detection include an automatic request for retransmission, or automatic repeat request (ARQ)
processing, in their communication software package. They use acknowledgments, negative
acknowledgment messages and timeouts to achieve better data transmission.

ARQ is an error control (error correction) method that uses error-detection codes and positive and
negative acknowledgments. When the transmitter either receives a negative acknowledgment or a
timeout happens before acknowledgment is received, the ARQ makes the transmitter resend the
message.

Error-correcting code (ECC) or forward error correction (FEC) is a method that involves adding
parity data bits to the message. These parity bits will be read by the receiver to determine whether an
error happened during transmission or storage. In this case, the receiver checks and corrects errors
when they occur. It does not ask the transmitter to resend the frame or message.
A hybrid method that combines both ARQ and FEC functionality is also used for error correction. In
this case, the receiver asks for retransmission only if the parity data bits are not enough for
successful error detection and correction.
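
To make the parity idea concrete, here is a minimal Python sketch of a single even-parity bit, the simplest error-detecting code. It detects any single-bit error but, unlike the Hamming code worked out at the end of this unit, it cannot locate or correct the error:

```python
# Single even-parity bit: the simplest error-detecting code.
def add_parity(bits):
    # Append one bit so the total number of 1s is even.
    return bits + [sum(bits) % 2]

def parity_ok(codeword):
    # True if the codeword still contains an even number of 1s.
    return sum(codeword) % 2 == 0

sent = add_parity([1, 1, 0, 1])   # -> [1, 1, 0, 1, 1]
print(parity_ok(sent))            # True: no error detected

sent[2] ^= 1                      # flip one bit "in transit"
print(parity_ok(sent))            # False: error detected, but not locatable
```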

3. Advanced DRAM organization

The traditional DRAM is constrained both by its internal architecture and by its interface to the processor's memory
bus. The most common enhancements to the DRAM architecture are:
1. Synchronous DRAM (SDRAM)
2. Rambus DRAM (RDRAM)
3. Double Data Rate DRAM (DDR DRAM)
4. Cache DRAM (CDRAM)

1. Synchronous DRAM (SDRAM)


 Exchanges data with the processor synchronized to an external clock signal, running at the full speed of the
processor/memory bus without imposing wait states.
 One word of data is transmitted per clock cycle (single data rate). [All control, address and data signals are only
valid (and latched) on a clock edge.]
 Typical clock frequencies are 100 and 133 MHz.
 SDRAM has a multiple-bank internal architecture that improves opportunities for on-chip parallelism. Generally it
uses two data banks internally: it starts an access in one bank, then in the next, and then receives data from the first and then the second.
 SDRAM performs best when transferring large blocks of data serially, as in applications such as word
processing, spreadsheets and multimedia.

2. Rambus DRAM (RDRAM)


 RDRAM chips are vertical packages with all pins on one side. The chip exchanges data with the processor over 28
wires no more than 12 cm long. The bus can address up to 320 RDRAM chips.
 Entire data blocks are accessed and transferred out over a high-speed bus-like interface (500 Mbps to 1.6 Gbps).
 The system-level design is tricky, since the bus itself defines impedance, clocking and signaling very precisely.
 The memory chips are more expensive.
 Concurrent RDRAMs have been used in video games, while Direct RDRAMs have been used in computers.

3. Double Data Rate DRAM (DDR DRAM)


 Uses both the rising (positive) and falling (negative) edge of the clock for data transfer; that is, DDR SDRAM can
send data twice per clock cycle, once on the rising edge of the clock pulse and once on the falling edge.
 DDR DRAM technology has continued to improve. The later generations increase the data rate by raising the
operational frequency of the RAM chip and by enlarging the prefetch buffer, from 2 bits per chip in DDR to 4 bits
in DDR2 and 8 bits in DDR3.
 DDR can transfer data at rates in the range of 200 MHz to 600 MHz, DDR2 in the range of
400 MHz to 1066 MHz, and DDR3 in the range of 800 MHz to 1600 MHz.
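
As a rough worked example of the double-data-rate arithmetic, assuming a standard 64-bit-wide memory module (the DDR-400 numbers below are the commonly quoted ones):

```python
# Peak bandwidth of a DDR module: two transfers per clock cycle.
def ddr_peak_bandwidth_mb_per_s(bus_clock_mhz, bus_width_bits=64):
    transfers_per_s = bus_clock_mhz * 1e6 * 2          # double data rate
    return transfers_per_s * (bus_width_bits / 8) / 1e6

# DDR-400: 200 MHz bus clock, 400 MT/s, 64-bit module.
print(ddr_peak_bandwidth_mb_per_s(200))   # 3200.0 MB/s (hence "PC-3200")
```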

4. Cache DRAM (CDRAM)


 Integrates a small SRAM cache onto a generic DRAM chip.
 The SRAM on a CDRAM can be used in two ways:
 It can be used as a true cache.
 It can be used as a buffer to support the serial access of a block of data.
 This architecture achieves concurrent operation of the DRAM and SRAM synchronized with an external clock.
Separate control and address input terminals for the two portions enable independent control of the DRAM and
SRAM; thus the system achieves continuous and concurrent operation of the two.
 CDRAM can handle CPU, direct memory access (DMA) and video refresh at the same time by utilizing a high-
speed video interface.
 CDRAM can replace both cache and main memory, and a CDRAM-based system has been shown to have
a 10 to 50 percent performance advantage over a system based on a 256-KByte cache.

4. Virtual memory systems and cache memory systems

Cache Memory:

Cache memory increases the accessing speed of the CPU. It is not a technique but a memory unit,
i.e., a storage device, into which recently used data is copied. When a program is ready to be
executed, it is fetched from main memory and then copied to the cache memory; but if a copy is
already present in the cache memory, the program is executed directly from there.

Virtual Memory:

Virtual memory increases the capacity of main memory. Virtual memory is not a storage unit; it is a
technique. With virtual memory, even programs larger than main memory are allowed to execute.

Difference between Virtual memory and Cache memory:

S.No | Virtual Memory                                                                            | Cache Memory
1    | Virtual memory increases the capacity of main memory.                                     | Cache memory increases the accessing speed of the CPU.
2    | Virtual memory is not a memory unit; it is a technique.                                   | Cache memory is exactly a memory unit.
3    | The size of virtual memory is greater than that of cache memory.                          | The size of cache memory is less than that of virtual memory.
4    | The operating system manages virtual memory.                                              | Hardware manages cache memory.
5    | In virtual memory, programs larger than main memory can be executed.                      | In cache memory, recently used data is copied.
6    | Virtual memory needs a mapping framework to map virtual addresses to physical addresses.  | Cache memory needs no such mapping framework.
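
To illustrate how a cache locates data by address alone, the sketch below splits an address into tag, index, and offset fields for a hypothetical direct-mapped cache; the 1 KiB size, 16-byte lines, and the addresses are made-up example values:

```python
# Tag/index/offset split for a hypothetical direct-mapped cache:
# 64 lines of 16 bytes each (1 KiB total).
LINE_SIZE = 16   # bytes per line -> 4 offset bits
NUM_LINES = 64   # lines in cache -> 6 index bits

def split_address(addr):
    offset = addr % LINE_SIZE                 # byte within the line
    index = (addr // LINE_SIZE) % NUM_LINES   # which cache line
    tag = addr // (LINE_SIZE * NUM_LINES)     # identifies the memory block
    return tag, index, offset

# Two addresses exactly 1 KiB apart share an index but differ in tag,
# so in a direct-mapped cache they would evict each other.
print(split_address(0x1234))   # (4, 35, 4)
print(split_address(0x1634))   # (5, 35, 4)
```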

5. External Memory: Organization and characteristics of Magnetic disk

Internal memory usually consists of chips or modules that attach directly to the motherboard.
External memory often comes in the form of USB flash drives; CD, DVD, and other optical discs; and
portable hard drives.

Below is a list of the advantages of using an external storage device with a computer:
 It is an easy way to add storage or other options to a computer without having to open the computer.
 With an external hard drive, you can store a lot of data for backup or to move between computers.
 An external disc drive can allow a computer without a disc drive to read CDs, DVDs, or other discs.
 Devices like the Drobo can give your computer additional features, such as RAID, to help keep your data protected.

Magnetic Disk
A magnetic disk is a storage device that uses a magnetization process to read, write, rewrite and
access data. The magnetic disk is made of a set of circular platters covered with a magnetic
coating, and it stores data in the form of tracks, spots, and sectors. Hard disks, zip disks, and
floppy disks are common examples of magnetic disks. With the simplest scheme, constant angular
velocity recording, the number of bits stored on each track is the same.

The magnetic disk is the primary computer storage device. Like tape, it is magnetically recorded
and can be re-recorded over and over. Disks are rotating platters with a mechanical arm that moves
a read/write head between the outer and inner edges of the platter's surface. Finding a location
can take as long as one second on a floppy disk or as little as a couple of milliseconds on a fast
hard disk.

Tracks and Spots: The disk surface is divided into concentric tracks (circles within circles). The
thinner the tracks, the more storage. The data bits are recorded as tiny magnetic spots on the
tracks; the smaller the spot, the more bits per inch and the greater the storage.

Sectors: Tracks are further divided into sectors, which hold a block of data that is read or
written at one time; for example, READ SECTOR 782, WRITE SECTOR 5448. To update the disk, one or
more sectors are read into the computer, changed, and written back to disk. The operating system
figures out how to fit data into these fixed spaces. Modern disks have more sectors in the outer
tracks than in the inner ones because the outer radius of the platter is greater than the inner
radius.

Tracks and Sectors


Tracks are concentric circles on the disk, broken up into storage units called "sectors." The sector,
which is typically 512 bytes, is the smallest unit that can be read or written.
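
As a quick worked example of how sectors, tracks, and platter surfaces combine into a capacity figure (the disk geometry below is hypothetical):

```python
# Capacity of a hypothetical disk computed from its geometry.
SECTOR_SIZE = 512   # bytes; the typical smallest read/write unit

def disk_capacity_bytes(cylinders, heads, sectors_per_track):
    # cylinders x heads = total number of tracks; each track holds
    # sectors_per_track sectors of SECTOR_SIZE bytes each.
    return cylinders * heads * sectors_per_track * SECTOR_SIZE

# Example geometry: 1024 cylinders, 16 heads, 63 sectors per track.
print(disk_capacity_bytes(1024, 16, 63))   # 528482304 bytes (~504 MiB)
```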

Magnetic Disk Summary


Several magnetic disk technologies have been discontinued, but they are often still used long after
their official demise; media tend to be manufactured for many years thereafter.

6. Magnetic Tape
'Tape is dead! Long live tape!' Were you around in the 80s when cassette tapes were all the rage?
Many people still say 'mixed tape' sometimes when referring to playlists they make on Spotify or
Pandora, or even CDs that they give to each other. Though the cassette tape has long since fallen out
of favor, it was neither the first nor the last device to use magnetic tape for storage.
A magnetic tape, in computer terminology, is a storage medium that allows for data archiving,
collection, and backup. At first, the tapes were wound in wheel-like reels, but then cassettes and
cartridges came along, which offered more protection for the tape inside.

One side of the tape is coated with a magnetic material. Data on the tape is written and read
sequentially. Finding a specific record takes time because the machine has to read every record in
front of it. Most tapes are used for archival purposes, rather than ad-hoc writing and reading.

Data is written into 'tracks' on the medium. Some run along the edge of the tape, which is
called linear recording, while others are written diagonally, which is called helical recording. Older
magnetic tapes used eight tracks, while more modern ones can handle 128 or more tracks.

Linear Tape File System


The Linear Tape File System (LTFS) mimics the random-access attributes of hard disks and has brought
about a revolution in tape backup in the age of huge files and big data. By storing metadata about an
object separately from the object itself, it improves access and retrieval times. LTFS is an open
standard, meaning it is not proprietary or owned by anyone. This allows different vendors,
architectures, and systems to use the technology.
Linear Tape File Systems have brought a huge leap forward in storage capacity. IBM and Fujifilm
unveiled an LTO tape that can store up to 220 terabytes of data! This is a significant increase over
the technologies mentioned previously, and even over the first prototype of LTFS.

Advantages and Disadvantages


Although magnetic tape is still viable compared to hard discs, external drives, or even cloud
storage, it lacks speed in data retrieval. Although there are fewer tape drives around than disk
drives, tape drives still perform a valuable function. Although disk drives can be faster, smaller,
and hold more data, a physical tape is much more mobile.
A company can back up its data to tapes, remove them, and send them by courier to off-site storage,
a very important step for disaster recovery. In that regard, while disk drives can be used to read
and write data at high speed, tapes are usually used only for writing data. Because of this, they
are a great backup or archival tool.

Magnetic Tape Timeline


Other technologies (such as flash drives and CD-ROM) were being developed alongside tape, but it is
remarkable that the idea of recording data with magnetic tape storage goes all the way back to the
19th century.

Difference Between Magnetic Tape & Magnetic Disk

S.No | Magnetic Tape                                                                | Magnetic Disk
1    | The cost of magnetic tape is less.                                           | The cost of magnetic disk is high.
2    | Reliability of magnetic tape is less.                                        | Reliability of magnetic disk is more.
3    | Access time for magnetic tape is more.                                       | Access time for magnetic disk is less.
4    | The data transfer rate for magnetic tape is comparatively less.              | The data transfer rate for magnetic disk is more.
5    | Magnetic tape is used for backups.                                           | Magnetic disk is used as secondary storage.
6    | In magnetic tape, the data-access rate is slow.                              | In magnetic disk, the data-access rate is high.
7    | In magnetic tape, data cannot be updated after it has been recorded.         | In magnetic disk, data can be updated.
8    | Magnetic tape is more portable.                                              | Magnetic disk is less portable.
9    | Magnetic tape contains reels of tape in the form of a strip of plastic.      | Magnetic disk contains round platters made of plastic or metal.
10   | In magnetic tape, magnetic material is coated on only one side of the tape.  | In magnetic disk, magnetic material is coated on both sides of the platters.

7. Optical Memory

Optical storage is an electronic storage medium that uses low-power laser beams to record and retrieve
digital (binary) data. In optical-storage technology, a laser beam encodes digital data onto an optical,
or laser, disk in the form of tiny pits arranged in a spiral track on the disk’s surface. A low-power
laser scanner is used to “read” these pits, with variations in the intensity of reflected light from the
pits being converted into electric signals. This technology is used in the compact disc, which records
sound; in the CD-ROM (compact disc read-only memory), which can store text and images as well
as sound; in WORM (write-once read-many), a type of disk that can be written on once and read any
number of times; and in newer disks that are totally rewritable.

Optical storage provides greater memory capacity than magnetic storage because laser beams can be
controlled and focused much more precisely than can tiny magnetic heads, thereby enabling the
condensation of data into a much smaller space. An entire set of encyclopedias, for example, can be
stored on a standard 12-centimetre (4.72-inch) optical disk. Besides higher capacity, optical-storage
technology also delivers more authentic duplication of sounds and images. Optical disks are also
inexpensive to make: the plastic disks are simply molds pressed from a master, as phonograph
records are. The data on them cannot be destroyed by power outages or magnetic disturbances, the
disks themselves are relatively impervious to physical damage, and unlike magnetic disks and tapes,
they need not be kept in tightly sealed containers to protect them from contaminants. Optical-
scanning equipment is similarly durable because it has relatively few moving parts.

Early optical disks were not erasable—i.e., data encoded onto their surfaces could be read but not
erased or rewritten. This problem was solved in the 1990s with the development of WORM and of
writable/rewritable disks. The chief remaining drawback to optical equipment is a slower rate
of information retrieval compared with conventional magnetic-storage media. Despite its slowness,
its superior capacity and recording characteristics make optical storage ideally suited to memory-
intensive applications, especially those that incorporate still or animated graphics, sound, and large
quantities of text. Multimedia encyclopedias, video games, training programs, and directories are
commonly stored on optical media.
8. RAID

RAID ("Redundant Array of Inexpensive Disks"[1] or "Redundant Array of Independent Disks") is a


data storage virtualization technology that combines multiple physical disk drive components into
one or more logical units for the purposes of data redundancy, performance improvement, or both.
This was in contrast to the previous concept of highly reliable mainframe disk drives referred to as
"single large expensive disk" (SLED).
Data is distributed across the drives in one of several ways, referred to as RAID levels, depending on
the required level of redundancy and performance. The different schemes, or data distribution
layouts, are named by the word "RAID" followed by a number, for example RAID 0 or RAID 1.
Each scheme, or RAID level, provides a different balance among the key
goals: reliability, availability, performance, and capacity. RAID levels greater than RAID 0 provide
protection against unrecoverable sector read errors, as well as against failures of whole physical
drives.
A number of standard schemes have evolved. These are called levels. Originally, there were five
RAID levels, but many variations have evolved, including several nested levels and many non-
standard levels (mostly proprietary). RAID levels and their associated data formats are standardized
by the Storage Networking Industry Association (SNIA) in the Common RAID Disk Drive Format
(DDF) standard:
RAID 0
RAID 0 consists of striping, but no mirroring or parity. Compared to a spanned volume, the
capacity of a RAID 0 volume is the same; it is the sum of the capacities of the drives in the
set. But because striping distributes the contents of each file among all drives in the set, the
failure of any drive causes the entire RAID 0 volume and all files to be lost. In comparison, a
spanned volume preserves the files on the unfailing drives. The benefit of RAID 0 is that
the throughput of read and write operations to any file is multiplied by the number of drives
because, unlike spanned volumes, reads and writes are done concurrently[11]. The cost is
increased vulnerability to drive failures—since any drive in a RAID 0 setup failing causes the
entire volume to be lost, the average failure rate of the volume rises with the number of
attached drives.
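
A minimal sketch of the striping idea, assuming a hypothetical four-drive array with simple round-robin block placement:

```python
# RAID 0: logical blocks are striped round-robin across the drives.
NUM_DRIVES = 4

def raid0_locate(logical_block):
    drive = logical_block % NUM_DRIVES    # which drive holds the block
    stripe = logical_block // NUM_DRIVES  # block offset within that drive
    return drive, stripe

# Consecutive logical blocks land on different drives, which is why
# large reads and writes can proceed on all drives concurrently.
for block in range(8):
    print(block, raid0_locate(block))
```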

RAID 1
RAID 1 consists of data mirroring, without parity or striping. Data is written identically to
two or more drives, thereby producing a "mirrored set" of drives. Thus, any read request can
be serviced by any drive in the set. If a request is broadcast to every drive in the set, it can be
serviced by the drive that accesses the data first (depending on its seek time and rotational
latency), improving performance. Sustained read throughput, if the controller or software is
optimized for it, approaches the sum of throughputs of every drive in the set, just as for
RAID 0. Actual read throughput of most RAID 1 implementations is slower than the fastest
drive. Write throughput is always slower because every drive must be updated, and the
slowest drive limits the write performance. The array continues to operate as long as at least
one drive is functioning.

RAID 2
RAID 2 consists of bit-level striping with dedicated Hamming-code parity. All disk spindle
rotation is synchronized and data is striped such that each sequential bit is on a different
drive. Hamming-code parity is calculated across corresponding bits and stored on at least one
parity drive. This level is of historical significance only; although it was used on some early
machines (for example, the Thinking Machines CM-2), as of 2014 it is not used by any
commercially available system.

RAID 3
RAID 3 consists of byte-level striping with dedicated parity. All disk spindle rotation is
synchronized and data is striped such that each sequential byte is on a different drive. Parity
is calculated across corresponding bytes and stored on a dedicated parity drive. Although
implementations exist, RAID 3 is not commonly used in practice.

RAID 4
RAID 4 consists of block-level striping with dedicated parity. This level was previously used
by NetApp, but has now been largely replaced by a proprietary implementation of RAID 4
with two parity disks, called RAID-DP.[21] The main advantage of RAID 4 over RAID 2 and
3 is I/O parallelism: in RAID 2 and 3, a single read I/O operation requires reading the whole
group of data drives, while in RAID 4 one I/O read operation does not have to spread across
all data drives. As a result, more I/O operations can be executed in parallel, improving the
performance of small transfers.

RAID 5
RAID 5 consists of block-level striping with distributed parity. Unlike RAID 4, parity
information is distributed among the drives, requiring all drives but one to be present to
operate. Upon failure of a single drive, subsequent reads can be calculated from the
distributed parity such that no data is lost. RAID 5 requires at least three disks.[11] Like all
single-parity concepts, large RAID 5 implementations are susceptible to system failures
because of trends regarding array rebuild time and the chance of drive failure during rebuild
(see "Increasing rebuild time and failure probability" section, below).[22] Rebuilding an array
requires reading all data from all disks, opening a chance for a second drive failure and the
loss of the entire array.
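
The single-parity recovery that RAID 5 (and RAID 4) relies on is plain XOR: the parity block is the XOR of the data blocks in a stripe, so any one lost block can be rebuilt from the survivors. A minimal sketch with made-up block contents:

```python
from functools import reduce

# Parity for one stripe: the XOR of all its data blocks.
def xor_blocks(blocks):
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks)

stripe = [b"\x11\x22", b"\x33\x44", b"\x55\x66"]  # data blocks on 3 drives
parity = xor_blocks(stripe)                       # stored on a 4th drive

# Drive 1 fails: rebuild its block from the survivors plus parity.
rebuilt = xor_blocks([stripe[0], stripe[2], parity])
assert rebuilt == stripe[1]
print(rebuilt.hex())   # 3344
```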

RAID 6
RAID 6 consists of block-level striping with double distributed parity. Double parity
provides fault tolerance up to two failed drives. This makes larger RAID groups more
practical, especially for high-availability systems, as large-capacity drives take longer to
restore. RAID 6 requires a minimum of four disks. As with RAID 5, a single drive failure
results in reduced performance of the entire array until the failed drive has been
replaced.[11] With a RAID 6 array, using drives from multiple sources and manufacturers, it is
possible to mitigate most of the problems associated with RAID 5. The larger the drive
capacities and the larger the array size, the more important it becomes to choose RAID 6
instead of RAID 5. RAID 10 also minimizes these problems.
9. Memory controllers

The memory controller is a digital circuit that manages the flow of data going to and from the
computer's main memory. A memory controller can be a separate chip or integrated into another
chip, such as being placed on the same die or as an integral part of a microprocessor; in the latter
case, it is usually called an integrated memory controller (IMC). A memory controller is
sometimes also called a memory chip controller.

A memory controller is a logical block that performs reads from and writes to a memory according to
the memory technology.

Most commonly, "memory controller" refers to the main-memory (DRAM) controller.


The DRAM memory controller translates addresses coming from a CPU read/write into a
DRAM address (bank, page, row and column bits) based on the type of memory attached, its size,
its organization, and so on; a simplified decode is sketched below.

It also needs to follow and be compliant with the DRAM protocol and timings (the DDR2/DDR3/DDR4
protocols and the various timing requirements between sub-commands).
There is also logic to enhance the performance of memory accesses (e.g., tracking
open/closed pages, read/write ordering, write-to-read data forwarding, and automatic vs. adaptive
page closing).

It also needs to ensure that the DRAMs are refreshed at regular intervals and that data returned
from the DRAMs is checked for errors (and corrected if possible).
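
Below is a simplified sketch of the address decode described above, using a made-up DRAM geometry; a real controller chooses its bit mapping to suit the attached devices and the DDR protocol:

```python
# Hypothetical physical-address -> DRAM (bank, row, column) decode.
# Made-up geometry: 8 banks, 32768 rows, 1024 columns, 64-bit words.
COL_BITS, BANK_BITS, ROW_BITS = 10, 3, 15
OFFSET_BITS = 3   # byte within a 64-bit word

def dram_decode(phys_addr):
    a = phys_addr >> OFFSET_BITS
    col = a & ((1 << COL_BITS) - 1);  a >>= COL_BITS
    bank = a & ((1 << BANK_BITS) - 1); a >>= BANK_BITS
    row = a & ((1 << ROW_BITS) - 1)
    return bank, row, col

# Placing the bank bits below the row bits lets sequential accesses
# spread across banks before forcing a (slow) row change.
print(dram_decode(0x12345678))   # (2, 4660, 719)
```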

Another type of memory in common use is flash memory, found in devices such as USB sticks and SSD
drives. In this case there is a flash/SSD memory controller that performs write/read/erase
operations on the flash memory (NAND or NOR). This is a little more complicated and normally also
involves firmware: flash memory does not support random access, it is typically accessed in
blocks/pages, and the controller also has to take care of reliability and performance.

Example:

Encode the data 1101 in even parity using the Hamming code.


Step 1:
Calculate the required number of parity bits P for n = 4 data bits; P must satisfy 2^P ≥ n + P + 1.
Let P = 2; then

2^P = 2^2 = 4 and n + P + 1 = 4 + 2 + 1 = 7.
Since 4 < 7, 2 parity bits are not sufficient for 4 data bits.
So let's try P = 3; then

2^P = 2^3 = 8 and n + P + 1 = 4 + 3 + 1 = 8.
Since 8 ≥ 8, 3 parity bits are sufficient for 4 data bits.
The total number of bits in the codeword is 4 + 3 = 7.

Step 2:
Construct the bit-location table. The parity bits occupy the power-of-two positions (1, 2 and 4),
and the data bits 1101 fill the remaining positions (7, 6, 5 and 3), most significant bit first:

Bit position | 7  | 6  | 5  | 4  | 3  | 2  | 1
Bit name     | D4 | D3 | D2 | P3 | D1 | P2 | P1
Value        | 1  | 1  | 0  | P3 | 1  | P2 | P1

Step 3:

Determine the parity bits.

For P1 (covering positions 1, 3, 5 and 7): bits 3, 5 and 7 hold 1, 0 and 1, i.e. two 1's, so for even parity, P1 = 0.

For P2 (covering positions 2, 3, 6 and 7): bits 3, 6 and 7 hold 1, 1 and 1, i.e. three 1's, so for even parity, P2 = 1.
For P3 (covering positions 4, 5, 6 and 7): bits 5, 6 and 7 hold 0, 1 and 1, i.e. two 1's, so for even parity, P3 = 0.
By inserting the parity bits at their respective positions, the codeword can be formed and
transmitted. Read from bit position 7 down to 1, it is 1100110.

NOTE: If the syndrome recomputed at the receiver (the parity checks P3 P2 P1) is all zeros (000),
there is no error in the received Hamming codeword; a nonzero syndrome gives the position of the
bit in error.
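
For checking the worked example, here is a compact Python sketch of the same Hamming(7,4) even-parity procedure (bit position 1 is P1, as above):

```python
# Hamming(7,4) even-parity encode/check, positions 1..7 as in the example.
def hamming74_encode(d1, d2, d3, d4):
    # Positions: 1=P1, 2=P2, 3=D1, 4=P3, 5=D2, 6=D3, 7=D4.
    p1 = d1 ^ d2 ^ d4   # covers positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4   # covers positions 2, 3, 6, 7
    p3 = d2 ^ d3 ^ d4   # covers positions 4, 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]   # list index 0 = position 1

def hamming74_syndrome(code):
    c = [None] + code   # 1-based indexing for readability
    s1 = c[1] ^ c[3] ^ c[5] ^ c[7]
    s2 = c[2] ^ c[3] ^ c[6] ^ c[7]
    s3 = c[4] ^ c[5] ^ c[6] ^ c[7]
    return 4 * s3 + 2 * s2 + s1   # 0 = no error, else position in error

# Data 1101 -> D4=1, D3=1, D2=0, D1=1 (written MSB first).
code = hamming74_encode(1, 0, 1, 1)
print(''.join(map(str, reversed(code))))   # 1100110, as derived above
print(hamming74_syndrome(code))            # 0: no error

code[4] ^= 1                               # corrupt position 5 (D2)
print(hamming74_syndrome(code))            # 5: locates the flipped bit
```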

Ms. Shital Shinde


(Subject Co-ordinator)
