VLSI Unit-V


Unit – V VHDL Synthesis

Synthesis Process:
Circuit Design Flow:
The VLSI IC circuit design flow is shown in the figure below. The various levels of design are numbered, and the blocks show processes in the design flow. Specifications come first; they describe abstractly the functionality, interface, and architecture of the digital IC circuit to be designed.

Fig: Simplified VLSI Circuit design flow


Behavioral description is then created to analyze the design in terms of functionality,
performance, compliance to given standards, and other specifications.

RTL description is done using HDLs. This RTL description is simulated to test functionality.
From here onwards we need the help of EDA tools. RTL description is then converted to a
gate-level netlist using logic synthesis tools. A gate-level netlist is a description of the circuit
in terms of gates and connections between them, which are made in such a way that they
meet the timing, power and area specifications.

Finally, a physical layout is made, which will be verified and then sent to fabrication.

The Gajski-Kuhn Y-chart is a model which captures the considerations in designing semiconductor devices.

The three domains of the Gajski-Kuhn Y-chart are on radial axes. Each of the domains can be divided into levels of abstraction, using concentric rings.

• Behavioural domain: which specifies the software implementation of the system's functionality.
• Structural domain: which specifies how modules are connected together to effect the prescribed behaviour.
• Physical domain: which specifies the layout used to build the system, according to the architect's idea, from the transistor level up.


All three domains share a common aim: to achieve the specified behaviour of the system, meeting the customer requirements. Engineers work in all domains at various abstraction levels during the project.

At the top level (outer ring), we consider the architecture of the chip; at the lower levels
(inner rings), we successively refine the design into finer detailed implementation.
Benefits of Using Synthesis (or) Advantages:
1. It forces higher level of abstraction
2. Easy debugging
3. Code portability
4. Designer can guide synthesizer to optimize the design for speed, power or area.
5. Synthesis allows technology independent coding
Simulation:
Types of Simulation:
Design capture tools:
1. HDL Design
2. Schematic Design
3. Layout Design
4. Floor Planning
5. Chip Composition
1.HDL Design:

The behaviour and structure of a system may be captured in a Hardware Description Language.

A major drawback of the traditional design method is the manual description of a design as a group of logic equations. With an HDL, the description can be converted automatically into an implementation by a synthesis tool.

HDLs are used to design two kinds of systems:

(i) integrated circuits, and (ii) programmable logic devices.

HDL design can be used for designing integrated circuits like processor or any other kind of
digital logic chip.

PLD’s like FPGA or CPLD can be designed with HDL.

2. Schematic Design:

The traditional method of capturing a digital system design is via an interactive schematic editor. Schematic editors provide a means to draw and connect components.

A collection of components may be gathered into a module, for which an icon can be defined. The icon is a diagram that stands for the collection of components within the module. The given figure shows a typical schematic for a module and its schematic icon.
Primarily, schematic editors are menu-based graphic editors with operations such as:
Primarily, schematic editors are menu-based graphic editors with operations such as:

• Creating, selecting and deleting parts by pointing or area inclusion.
• Changing the graphic view by panning, zooming, or other means.

To a basic graphic editor, operations are added that pertain to the electrical nature of the schematic, such as:

• Selecting an electrical node and interrogating it for state, connections, capacitance, etc.
• Running an attached simulator or other electrical network-based tools.

3. Layout Design:

Layout, too, can be captured via code or interactive graphics editors. Layout editors, like schematic editors, are based on drawing editors. A layout editor might interface to a design rule checking program to allow interactive checking of DRC errors, and to a layout extraction program to examine circuit connectivity issues.

4. Floor Planning:

Fig: A Floorplan example


5. Chip Composition:

Design Verification Tools:
To verify the functionality of a CMOS chip, a certain set of verification tools is used for testing against the functional specification.

Given figure shows a conventional flow through a set of design tools to produce a working
CMOS chip from a functional specification.

1. Simulation Tools
2. Timing Verifiers
3. Network Isomorphism
4. Netlist Comparison
5. Layout Extraction
6. Back Annotation
7. Design Rule Verification
8. Pattern Generation
1. Simulation Tools:
Simulators are probably the most often used design tools. A simulator uses mathematical
models to represent the behavior of circuit components. Given specific input signals, the
simulator solves for the signals inside the circuit. Simulators come in a wide variety
depending on the level of accuracy and the simulation speed desired:
Circuit level simulation

• The most detailed and accurate simulation technique is referred to as circuit analysis. As the name suggests, these simulators operate at the circuit level. Circuit simulators are used to verify the detailed electrical performance of CMOS circuits; however, because of their run-time cost they are usually applied to small portions of a design rather than complete chips.

Logic Level Simulation

• Logic-level simulators provide the ability to simulate larger designs than circuit-level simulators. Logic simulation is the use of simulation software to predict the behavior of digital circuits described in hardware description languages. Simulation can be performed at varying degrees of physical abstraction, such as at the transistor level, gate level, register-transfer level (RTL), electronic system level (ESL), or behavioral level.

2. Timing Verifiers: Timing verifiers determine the longest delay path in a circuit to
optimize performance and to make sure that the clock cycles are correct.

Designers traditionally simulated with unit-delay simulators to verify functionality, and then ran simulators with delays to check for timing problems. The detection of such problems is pattern dependent. In other words, if the critical timing vector is not exercised, the critical path will not be found. A timing verifier takes a different approach to temporal verification: the delays through all paths in the circuit are evaluated in a pattern-independent manner.
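The pattern-independent evaluation described above amounts to a longest-path computation over the gate-level netlist. The sketch below illustrates the idea; the netlist representation, node names, and delay values are invented for illustration.

```python
from functools import lru_cache

def worst_arrival_times(fanin, delay):
    """Pattern-independent timing check: worst-case arrival time at each node.

    fanin: dict mapping a node to the list of nodes driving it
    delay: dict mapping a node to its gate delay (primary inputs default to 0)
    """
    @lru_cache(maxsize=None)
    def arrival(node):
        drivers = fanin.get(node, [])
        # a node's worst arrival = its own delay + latest driver arrival
        return delay.get(node, 0) + max((arrival(d) for d in drivers), default=0)

    return {node: arrival(node) for node in set(fanin) | set(delay)}

# hypothetical two-gate path: in -> g1 (delay 2) -> g2 (delay 3)
times = worst_arrival_times({"g1": ["in"], "g2": ["g1"]}, {"g1": 2, "g2": 3})
# the critical path through g2 has total delay 2 + 3 = 5
```

Because no input vectors are involved, the result is a conservative bound on every path, which is exactly what distinguishes a timing verifier from delay simulation.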
3. Network Isomorphism:
Network isomorphism is used to prove that two networks are equivalent and therefore should function equivalently. It is often used to ensure that only those circuits requiring detailed simulation expend expensive compute cycles.

An electrical network may be represented by a graph where the vertices of the graph are
devices such as MOS transistors, bipolar transistors, diodes, resistors, and capacitors. The
arcs are the connections between devices. These are the electrical nodes in the circuit.

Two electrical circuits are identical if the graphs representing them are isomorphic.

The matching devices have identical properties such as:

▪ Transistor width and length
▪ Resistance value
▪ The number of connections at each terminal
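The device-matching criteria above can be sketched in code. This is only a necessary-condition check over device signatures, not a full graph-isomorphism test, and the netlist tuple format used here is invented for illustration.

```python
def device_signature(device):
    """device: (type, properties dict, tuple of connected node names).
    Node names are ignored so that two equivalent circuits with different
    node labels still produce matching signatures."""
    dtype, props, nodes = device
    return (dtype, tuple(sorted(props.items())), len(nodes))

def plausibly_isomorphic(netlist_a, netlist_b):
    """Necessary condition for isomorphism: both netlists contain the same
    multiset of devices with identical properties (e.g. transistor W/L)
    and the same terminal counts."""
    return sorted(map(device_signature, netlist_a)) == \
           sorted(map(device_signature, netlist_b))

# two netlists with identical devices but different node labels
a = [("nmos", {"W": 2, "L": 1}, ("d", "g", "s")), ("res", {"R": 10}, ("n1", "n2"))]
b = [("res", {"R": 10}, ("x", "y")), ("nmos", {"W": 2, "L": 1}, ("a", "b", "c"))]
```

A production tool would additionally verify the connectivity (the arcs of the graph), which this sketch deliberately omits.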

4. Netlist Comparison:
In the comparison phase, the verification tool compares the electrical circuits from the
schematic netlist and the layout extracted netlist. The netlist comparison process also uses
the LVS (Layout versus Schematic) rule check.

5. Layout Extraction:

6. Back Annotation:
Back annotation is the term that describes the step of feeding layout information back to the
circuit design. Back annotation is the process of adding the extra delay caused by the
parasitic components back into the original timing analysis, which only has the timing from
the cells’ delay.

Gate-level simulation and static timing analysis (STA) are the two most commonly
used approaches in verifying a chip’s timing performance. Both of the methods can verify the
chip’s operating speed against the design specification.
7. Design Rule Verification:

8. Pattern Generation:
Pattern Generation is the last step in the sequence that starts at architecture for a chip and
ends with a database suitable for manufacture. It is the operation of creating the data that is
used for manufacture. It is the operation of creating the data that is used for mask making.
Now a days most semiconductor operations use electron beam generated masks. These
machines expose the masks in a raster-scan style similar to a television.

A common format is the Electron Beam Exposure System (EBES) format. The following
steps must be completed to create an EBES file.
TEST AND TESTABILITY
Testing:
• Testing is a process of verification: a known input is applied to a unit and a known response can be evaluated. In other words, the response from the circuit is compared with a known, or predictable, response. The testing process is equally applicable to circuits, chips, boards, and systems, from the transistor level through gates, microcells, chips, and printed circuit boards.
• Testing is used not only to find the fault-free devices, PCBs, and systems but also to
improve production yield at the various stages of manufacturing by analyzing the cause of
defects when faults are encountered.
Role of Testing:
• Testing of a system is an experiment in which the system is exercised and its resulting
response is analyzed to ascertain whether it behaved correctly.
• If incorrect behavior is detected, a second goal of a testing experiment may be to diagnose,
or locate, the cause of the misbehaviour.
• The role of testing is to detect whether something went wrong and the role of diagnosis is to
determine exactly what went wrong, and where the process needs to be altered.
• Therefore, correctness and effectiveness of testing is most important for quality products
(another name for perfect products.).
• The benefits of testing are quality and economy.
Testing may be two types
➢ Functionality test
➢ Manufacturing test

Principle of testing
• The response of the circuit is compared with the expected response.
• The circuit is considered good if the responses match. Obviously, the quality of the tested
circuit will depend upon the thoroughness of the test vectors.
• Generation and evaluation of test vectors is one of the important concepts in the testing.
• A testable circuit is defined as a circuit whose internal nodes of interest can be set to 0 or 1 and in which any change to the desired logic value at the node of interest, due to a fault, can be observed externally.
• The VLSI development process is illustrated in Fig. 9.2, where it can be seen that some form of testing is involved at each stage of the process. Based on a customer or project need, a VLSI device requirement is determined and formulated as a design specification. Designers are then responsible for synthesizing a circuit that satisfies the design specification and for verifying the design. Design verification is a predictive analysis that ensures that the synthesized design will perform the required functions when manufactured. When a design error is found, modifications to the design are necessary and design verification must be repeated. As a result, design verification can be considered as a form of testing.
• Once verified, the VLSI design then goes to fabrication. At the same time, test engineers
develop a test procedure based on the design specification and fault models associated with
the implementation technology. A defect is a flaw or physical imperfection that may lead
to a fault. Due to unavoidable statistical flaws in the materials and masks used to
fabricate ICs, it is impossible for 100% of any particular kind of IC to be defect-free.
• Thus, the first testing performed during the manufacturing process is to test the ICs fabricated on the wafer in order to determine which devices are defective. The chips that pass the wafer-level test are extracted and packaged. The packaged devices are retested to eliminate those devices that may have been damaged during the packaging process or put into defective packages. Additional testing is used to assure the final quality before going to market. This final testing includes measurement of such parameters as input/output timing specifications and voltage.
Fault Models:
Fault models are necessary for generating and evaluating a set of test vectors. Generally, a good fault
model should satisfy two criteria:
(1) It should accurately reflect the behaviour of defects.
(2) It should be computationally efficient in terms of fault simulation and test pattern generation.

Types of Fault Models:


1. Stuck-at-fault model
2. Transistor level stuck fault model (or) Stuck-Open and Stuck-Short faults
3. Bridging fault model
4. Delay fault model

1. Stuck-at-fault model:

Example 1:

Example 2:
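The worked examples above are figure-based and not reproduced here. As a minimal sketch of the stuck-at idea, consider an invented two-gate circuit, y = (a AND b) OR c, with a hypothetical fault site n1 on the internal AND output:

```python
def circuit(a, b, c, fault=None):
    """y = (a AND b) OR c; 'fault' optionally forces the internal node n1
    to a fixed value, modelling a stuck-at fault at that site."""
    n1 = a & b
    if fault == "n1/SA0":
        n1 = 0          # node n1 stuck at logic 0
    elif fault == "n1/SA1":
        n1 = 1          # node n1 stuck at logic 1
    return n1 | c

# the vector (a, b, c) = (1, 1, 0) detects n1 stuck-at-0:
good = circuit(1, 1, 0)            # fault-free response is 1
bad = circuit(1, 1, 0, "n1/SA0")   # faulty response is 0
```

The vector both controls n1 to the value opposite the fault (a = b = 1 drives n1 to 1) and makes the fault observable at the output (c = 0, so y follows n1), which is exactly the controllability/observability requirement discussed later in this unit.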
2. Transistor level stuck fault model (or) Stuck-Open and Stuck-Short faults:
3. Bridging fault model:
4. Delay Fault Model:
Fault simulation:
Fault simulation is defined as the process of measuring the quality of a test. It consists of simulating a circuit in the presence of faults. Any input pattern, or sequence of input patterns, that produces a different output response in a faulty circuit from that of the fault-free circuit is a test vector, or sequence of test vectors, that will detect the faults. Fault simulation is performed using gate-level and functional-level models.
The main goals of fault simulation:
Measuring the effectiveness of the test patterns
Guiding the test pattern generator program
Generating fault dictionaries
Fault simulation serves following functions:
1. Confirms detection of fault
2. Computes fault coverage
3. Diagnostics of circuit
4. Identifies areas of circuit where fault coverage is inadequate.

• The mechanics of testing for fault simulation, as illustrated in fig. First, a set of target faults
(fault list) based on the CUT is enumerated. Often, fault collapsing is applied to the
enumerated fault set to produce a collapsed fault set to reduce fault simulation or fault grading
time. Then, input stimuli are applied to the CUT, and the output responses are compared with
the expected fault-free responses to determine whether the circuit is faulty. For fault
simulation, the CUT is typically synthesized down to a gate-level design (or circuit netlist).
• Ensuring that sufficient design verification has been obtained is a difficult step for the designer. Although the ultimate determination is whether or not the design works in the system, fault simulation, illustrated in the figure, can provide a rough quantitative measure of the level of design verification much earlier in the design process.
• Fault simulation also provides valuable information on portions of the design that need further design verification, because design verification vectors are often used as functional vectors (so-called functional testing) during manufacturing test.

Fault simulation may be two types, these are


• Deterministic Fault Simulation
• Nondeterministic Fault Simulation

Deterministic Fault Simulation


• In this fault simulation technique, a set of test vectors are used to simulate a circuit and catch
the faults.
• But if all the faults are not caught by the test vectors; they are modified and the fault
simulation is repeated.
• There are mainly three types of deterministic fault simulations
1. Serial fault simulation
(It is done one after another, the process is inherently slow).
2. Parallel fault simulation
(The faulty circuits are simulated simultaneously and Very Fast)
3. Concurrent fault simulation
(The whole circuit is not simulated, but only a part is simulated where the fault is introduced).
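Serial fault simulation, the first variant above, can be sketched as one full simulation pass per fault; the circuit, fault names, and vectors below are invented for illustration:

```python
def serial_fault_simulation(simulate, fault_list, vectors):
    """Simulate the circuit once per fault; a fault is counted as detected
    (and dropped) as soon as any vector produces a response that differs
    from the fault-free circuit. Inherently slow: |faults| x |vectors|."""
    detected = set()
    for fault in fault_list:
        for v in vectors:
            if simulate(*v, fault=fault) != simulate(*v):
                detected.add(fault)
                break
    return detected

def sim(a, b, fault=None):
    """Tiny AND gate with optional stuck-at faults on its output y."""
    y = a & b
    if fault == "y/SA0":
        y = 0
    if fault == "y/SA1":
        y = 1
    return y

found = serial_fault_simulation(sim, ["y/SA0", "y/SA1"], [(1, 1), (0, 1)])
# (1,1) detects y/SA0 and (0,1) detects y/SA1, so both faults are caught
```

Parallel and concurrent fault simulation improve on this loop by simulating many faulty machines per pass, or only the diverging portions of the circuit, respectively.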

Nondeterministic Fault Simulation


In nondeterministic fault simulation, instead of testing every fault, a subset or sample of the faults is tested, and the fault coverage is generalized from the sample tested.
Test Generation:
The goal of test generation is to find an efficient set of test vectors that detects all faults considered
for that circuit. To test a circuit with n inputs and m outputs, a set of input patterns is applied to the
Circuit Under Test (CUT), and its responses are compared to the good response of fault-free circuit.
Each input pattern is called a test vector. In order to completely test a circuit, many test patterns are
required.
Exhaustive Testing:
It is difficult to know how many test vectors are needed to guarantee a satisfactory reject rate. If the CUT is an n-input combinational logic circuit, we can apply all 2^n possible input patterns for testing stuck-at faults; this approach is called exhaustive testing.
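Exhaustive testing can be sketched directly: enumerate all 2^n input patterns and compare the CUT against a fault-free reference. The NAND example and stuck-at fault below are invented for illustration.

```python
from itertools import product

def exhaustive_test(cut, reference, n_inputs):
    """Apply all 2**n input patterns to the circuit under test (CUT) and
    compare against the fault-free reference; return the first failing
    test vector, or None if every pattern matched."""
    for pattern in product((0, 1), repeat=n_inputs):
        if cut(*pattern) != reference(*pattern):
            return pattern
    return None

good = lambda a, b: 1 - (a & b)   # fault-free 2-input NAND
stuck = lambda a, b: 0            # hypothetical output stuck-at-0
```

The 2^n pattern count is exactly why exhaustive testing is only practical for circuits with few inputs, motivating the structural-testing approach described next.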

Functional Testing:
In this testing every entry in the truth table for the combinational logic circuit is tested to
determine whether it produces the correct response. In practice, functional testing is considered by
many designers and test engineers to be testing the CUT as thoroughly as possible in a system-like
mode of operation. In either case, one problem is the lack of a quantitative measure of the defects
that will be detected by the set of functional test vectors.

Structural Testing:
The approach of structural testing is to select specific test patterns based on circuit structural
information and a set of fault models. Structural testing saves time and improves test efficiency, as
the total number of test patterns is decreased because the test vectors target specific faults that
would result from defects in the manufactured circuit. Structural testing cannot guarantee detection
of all possible manufacturing defects, as the test vectors are generated based on specific fault
models; however, the use of fault models does provide a quantitative measure of the fault-detection
capabilities of a given set of test vectors for a targeted fault model. This measure is called fault
coverage and is defined as:
Fault coverage = (Number of detected faults) / (Total number of faults)

It may be impossible to obtain fault coverage of 100% because of the existence of undetectable faults.
An undetectable fault means there is no test to distinguish the fault-free circuit from a faulty circuit
containing that fault. As a result, the fault coverage can be modified and expressed as the fault
detection efficiency, also referred to as the effective fault coverage, which is defined as:
Fault detection efficiency = (Number of detected faults) / (Total number of faults - Number of undetectable faults)
Fault coverage is linked to the yield and the defect level by the following expression:

Defect level = 1 - yield^(1 - fault coverage)


From this equation, we can show that a PCB with 40 chips, each having 90% fault coverage and 90%
yield, could result in a reject rate of 41.9%, or 419,000 PPM. As a result, improving fault coverage can
be easier and less expensive than improving manufacturing yield because making yield enhancements
can be costly; therefore, generating test stimuli with high fault coverage is very important.
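The defect-level relation above (the Williams-Brown model) is easy to check numerically. The sketch below recomputes the per-chip defect level for the 90% yield, 90% fault coverage case; the PCB-level reject-rate figure quoted above depends on how the per-chip numbers are composed, which is not reproduced here.

```python
def defect_level(process_yield, fault_coverage):
    """Williams-Brown model: DL = 1 - Y**(1 - FC).
    Both arguments are fractions in [0, 1]."""
    return 1 - process_yield ** (1 - fault_coverage)

dl = defect_level(0.90, 0.90)
ppm = dl * 1_000_000
# with 90% yield and 90% coverage, roughly 1% of shipped chips are defective
```

Note the limiting behavior: at 100% fault coverage the defect level drops to zero regardless of yield, which is the sense in which improving coverage can substitute for costly yield enhancement.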
Design Strategies for Testing :
Controllability and Observability:
Controllability: The ability to apply input test vectors to the primary
inputs of a circuit so as to set up an appropriate logic value (logic 0 or
logic 1) at a node is known as controllability. For example, setting a node
to logic 1 in order to test for a stuck-at-0 fault is known as
1-controllability. Controllability is important when assessing the degree of
difficulty of testing a particular signal in a circuit. A node with little
controllability may take hundreds of cycles to get into the right state.

Observability: The ability to observe the response of a fault on an
internal node through the primary outputs of a circuit is known as
observability. If the logic state of a node can reliably be observed, the
node is regarded as observable. Observability is useful when a test engineer
has to measure the output of a gate or chip within a larger circuit to check
its correct operation. Higher observability means fewer cycles are required
to measure an output node's value. Circuits with poor observability include
sequential circuits with long feedback loops.
Whether a circuit node is stuck at 1 or 0 is only testable if that node is
both controllable and observable.
These two major factors, controllability and observability, play a vital role
in testing a circuit under the stuck-at fault model. Controllability and
observability can be achieved through the functionality of the combinational
circuit and the selection of appropriate test vectors. For a gate-level
circuit, the fault can be at the input or output of every gate, and only one
faulty line can be detected at a time. These controllability and
observability measures give the actual behaviour of the circuit.
In a circuit of combinational logic, the logic states of the internal nodes can
be determined if the circuit’s inputs are all known. But for a circuit that
includes sequential elements, such as flip-flops and latches, this is not true.
Some of the nodes' logic states depend on these sequential cells' previous
states. This leads to controllability and observability issues.
Design For Testability (DFT):

• Design for testing or design for testability (DFT) consists of IC design techniques that add
testability features to a hardware product design.
• The added features make it easier to develop and apply manufacturing tests to the designed
hardware.
• The purpose of manufacturing tests is to validate that the product hardware contains no
manufacturing defects that could adversely affect the product's correct functioning.
• DFT plays an important role in the development of test programs and as an interface for test
application and diagnostics.
• Two important attributes related to testability are controllability and observability.
• Controllability is the ability to establish a specific signal value at each node in a circuit by
setting values on the circuit's inputs.
• Observability is the ability to determine the signal value at any node in a circuit by
controlling the circuit's inputs and observing its outputs.

Need of Design for Testability:


• During the fabrication process several types of defects may exist, such as catastrophic and crystalline defects. A catastrophic defect is due to contamination, resulting in destruction of all the transistors on the chip. A crystalline defect results from the destruction of a single transistor on the chip.
• It is necessary to screen chips for these flaws; hence it is mandatory to check each chip's performance and functionality. Identifying faulty chips is a complex and time-consuming job. A faulty chip creates huge difficulty in system debugging, and it also increases the debugging cost. Therefore, Design for Testability (DFT) is necessary.
We have three main approaches to what is commonly called Design for Testability (DFT). These
may be categorized as follows:
1. Ad hoc testing.
2. Scan-based approaches.
3. Built-in Self-test(BIST).

Ad hoc Testable Design Techniques:


One way to increase testability is to make nodes more accessible, at some cost, by physically inserting more access circuitry into the original design. Listed below are some of the ad hoc testable design techniques:
• Partition and Mux Technique
• Initialize Sequential Circuits
• Disable Internal Oscillators and Clocks
• Avoid Asynchronous Logic and Redundant Logic
• Avoid Delay-Dependent Logic

Partition and Multiplexers Technique:


• Since long sequences of serial gates, functional blocks, or large circuits are difficult to test, such circuits can be partitioned, and multiplexers can be inserted so that some of the primary inputs can be fed to the partitioned parts through multiplexers with accessible control signals.
• With this design technique, the number of accessible nodes can be increased and the number of test patterns can be reduced.

Initialize Sequential Circuits:


• When a sequential circuit is powered up, its initial state can be a random, unknown state. In this case, it is not possible to start the test sequence correctly. The state of a sequential circuit can be brought to a known state through initialization.
• In many designs, initialization can be done easily by connecting asynchronous preset or clear input signals, from primary or controllable inputs, to the flip-flops or latches.
Disable Internal Oscillators and Clocks:
To avoid synchronization problems during testing, internal oscillators and clocks should be disabled. For example, rather than connecting the circuit directly to the on-chip oscillator, the clock signal can be ORed with a disabling signal, followed by an insertion of a testing signal, as shown in the figure.

Avoid Asynchronous Logic and Redundant Logic:

• The speed of an asynchronous logic circuit can be faster than that of its synchronous counterpart.
• However, the design and test of an asynchronous logic circuit are more difficult than for a synchronous logic circuit, and its state transition times are difficult to predict.
• The operation of an asynchronous logic circuit is sensitive to the input test patterns, often causing race problems and hazards in which signals momentarily take values opposite to the expected ones.
• Some designed-in logic redundancy is used to mask a static hazard condition for reliability.
• A redundant node cannot be observed, since the primary output value cannot be made dependent on the value of the redundant node.
• This means that certain fault conditions on the node cannot be detected, such as the node being SA1 in the function F.
Avoid Delay-Dependent Logic:

Fig:A pulse-generation circuit using a delay chain of three inverters


Automatic test pattern generators (ATPGs) work in the logic domain, so they view delay-dependent logic as redundant combinational logic. In this case the ATPG will see an AND of a signal with its complement, and will therefore always compute a 0 on the output of the AND gate (instead of a pulse). Adding an OR gate after the AND gate's output permits the ATPG to substitute a clock signal directly.

Scan-Based Techniques
The goal of the scan path technique is to reconfigure a sequential circuit, for the purpose of testing,
into a combinational circuit. Since a sequential circuit is based on a combinational circuit and some
storage elements, the technique of scan path consists in connecting together all the storage elements
to form a long serial shift register. Thus the internal state of the circuit can be observed and
controlled by shifting (scanning) out the contents of the storage elements. The shift register is then
called a scan path.
• The storage elements can be D, J-K, or R-S types of flip-flops, but simple latches cannot be used in a scan path. However, the structure of the storage elements is slightly different from the classical ones. Generally, the selection of the input source is achieved using a multiplexer on the data input, controlled by an external mode signal. When this multiplexer is integrated into a D flip-flop, the flip-flop is called an MD-flip-flop (multiplexed flip-flop).

• The sequential circuit containing a scan path has two modes of operation : a normal mode
and a test mode which configure the storage elements in the scan path.

• In the normal mode, the storage elements are connected to the combinational circuit, in the
loops of the global sequential circuit, which is considered then as a finite state machine.

• In the test mode, the loops are broken and the storage elements are connected together as a serial shift register (the scan path), receiving the same clock signal. The input of the scan path is called scan-in and the output scan-out. Several scan paths can be implemented in one complex circuit if necessary, thus having several scan-in inputs and scan-out outputs.

• A large sequential circuit can be partitioned into sub-circuits, containing combinational sub-
circuits, associated with one scan path each. Efficiency of the test pattern generation for a
combinational sub-circuit is greatly improved by partitioning, since its depth is reduced.

• Before applying test patterns, the shift register itself has to be verified by shifting in all ones
i.e. 111...11, or zeros i.e. 000...00, and comparing.

The method of testing a circuit with the scan path is as follows:

1. Set test mode signal, flip-flops accept data from input scan-in
2. Verify the scan path by shifting in and out test data
3. Set the shift register to an initial state
4. Apply a test pattern to the primary inputs of the circuit
5. Set normal mode, the circuit settles and can monitor the primary outputs of the circuit
6. Activate the circuit clock for one cycle
7. Return to test mode
8. Scan out the contents of the registers, simultaneously scan in the next pattern
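Steps 1, 2, and 8 above amount to shifting bits serially through the register, one flip-flop per clock. A minimal list-based model of the shift (the chain length and patterns are invented for illustration):

```python
def scan_shift(chain, bits_in):
    """Test mode: shift bits_in serially through the scan chain, one bit per
    clock. Returns the bits that appear at scan-out; the chain (a list of
    flip-flop values) is updated in place."""
    scanned_out = []
    for bit in bits_in:
        scanned_out.append(chain[-1])   # last flip-flop drives scan-out
        chain.insert(0, bit)            # scan-in feeds the first flip-flop
        chain.pop()
    return scanned_out

chain = [0, 0, 0, 0]                    # 4 flip-flops, initially cleared
scan_shift(chain, [1, 1, 1, 1])         # step 2: shift in all ones...
out = scan_shift(chain, [0, 0, 0, 0])   # ...then all zeros; the ones come out
```

Observing the ones emerge intact at scan-out verifies the shift-register path itself, which is why the all-ones/all-zeros flush precedes the application of real test patterns.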
Built – in Self-Test (BIST) :
• Built-in self-test (BIST) is a design technique in which parts of a circuit are used to
test the circuit itself.
A test vector generator produces the test vectors to be applied to the circuit under test.
The response of a good circuit may be determined using the simulator tool of a CAD system.
The expected responses must be stored on the chip for comparison during testing.

Fig 1: BIST arrangement


• The related term built-in-test equipment (BITE) refers to the hardware and/or
software incorporated into a unit to provide DFT or BIST capability.
• BIST is mainly focused at reducing
-- The volume of test data
-- Costs involved in test pattern generation
-- Test time
• These points can be covered by integrating an automatic test system into the design of
chip.
BIST architectures consist of several key elements, namely
1. Test-pattern generators;
2. Output-response analyzers;
3. The circuit under test;
4. A BIST controller for controlling the BIST circuitry and CUT during self-test.
Pseudorandom binary sequence generator (PRBSG)
A practical approach for generating the test vectors on-chip is to use pseudorandom tests.
• The linear feedback shift register (LFSR) is used to generate pseudo-random test vectors in
the chip.
• The outputs at each stage of an LFSR are used as the input of the circuit.
• The bit-pattern of the sequence is random in nature but has a periodicity.
• That is why it is called pseudorandom sequence.
• The sequence is constructed using D-flip-flops and a XOR gate.

An n-bit LFSR can generate at most 2^n - 1 test patterns, since the all-0s state is not allowed (the register would lock up in that state).
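The 2^n - 1 property can be demonstrated with a maximal-length 4-bit Fibonacci LFSR; the tap positions below (corresponding to an assumed feedback polynomial x^4 + x^3 + 1) are one illustrative choice.

```python
def lfsr_patterns(seed=0b0001, taps=(3, 2), width=4):
    """Fibonacci LFSR: XOR the tap bits to form the feedback bit, then
    shift left. With maximal-length taps it cycles through every nonzero
    state, i.e. 2**width - 1 patterns, before repeating."""
    state = seed
    patterns = []
    while True:
        patterns.append(state)
        feedback = 0
        for t in taps:
            feedback ^= (state >> t) & 1
        state = ((state << 1) | feedback) & ((1 << width) - 1)
        if state == seed:       # sequence has wrapped around
            break
    return patterns

patterns = lfsr_patterns()
# 15 distinct nonzero patterns (2**4 - 1); the all-zeros state never appears
```

In hardware this is just D flip-flops and one XOR gate, which is why LFSRs are the standard on-chip test-pattern generator for BIST.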
Single-input compressor circuit (SIC):
With a PRBSG, it is not attractive to store a large number of test responses on the chip. A practical solution is to compress the results of the tests into a single pattern, which can be done using an LFSR circuit. Instead of providing only the feedback signals as the input, the circuit's response is fed in as well; the resulting compressor circuit is called the Single-Input Compressor Circuit (SIC).
• After applying a number of test vectors, the resulting values of p drive the SIC and, coupled
with the LFSR functionality, produce a four-bit pattern.
• The pattern generated by the SIC is signature of the tested circuit for the given sequence of
tests.
• The signature can be compared against a predetermined pattern to see if the tested circuit is
working properly.
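The compression step can be sketched as an LFSR whose feedback also XORs in each response bit; the tap choice and bit streams below are illustrative assumptions, not a specific silicon implementation.

```python
def sic_signature(response_bits, taps=(3, 2), width=4):
    """Single-input compressor: each response bit is XORed into the LFSR
    feedback, so the final register state is a width-bit signature of the
    entire response stream."""
    state = 0
    for bit in response_bits:
        feedback = bit
        for t in taps:
            feedback ^= (state >> t) & 1
        state = ((state << 1) | feedback) & ((1 << width) - 1)
    return state

good_sig = sic_signature([1, 0, 1, 1, 0, 0, 1, 0])
bad_sig = sic_signature([1, 0, 1, 1, 0, 0, 1, 1])   # one flipped bit
# differing streams almost always produce differing four-bit signatures
```

Only the final signature needs to be compared against a stored golden value, which is the whole point: long response streams reduce to a few stored bits, at the cost of a small probability of aliasing (a faulty stream compressing to the good signature).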
Multiple input compressor circuit (MIC):
If the circuit under test has more than one output, then an LFSR with multiple inputs can be used.
Four-bit signature provides a good mechanism for distinguishing among different sequences of four-
bit patterns that may appear on the inputs of this multiple-input compressor circuit (MIC).

BIST in a Sequential Circuit:


• The scan-path approach is used to provide a testable circuit. The test patterns that would
normally be applied on the primary inputs W = w1 w2 ........... wn are generated internally as the
pattern on X = x1 x2........... xn. Multiplexers are needed to allow switching from W to X, as
inputs to the combinational circuit.
• A pseudorandom binary sequence generator, PRBSG-X, generates the test pattern for X. The portion of the tests applied via the next-state signals y is generated by the second PRBS generator, PRBSG-y. These patterns are scanned into the flip-flops.
• The test outputs are compressed using the two compressor circuits. The patterns on the
primary outputs, Z = z1 z2 .......... zm are compressed using the MIC circuit, and those on the
next-state wires Y = y1 y2........... yk, by the SIC circuit. These circuits produce the Z-signature
and Y-signature, respectively. At the end of the testing process the two signatures are
compared with the stored patterns.

The effectiveness of the BIST approach depends on the length of the LFSR generator and
compressor circuits. Longer shift registers give better results.

Advantages of BIST:
• Low cost
• High Quality Test
• Faster Fault Detection
• Ease of Diagnostics
• Reduce maintenance and repair cost.
