ASIC Design Flow Tutorial
By
Hima Bindu Kommuru
Hamid Mahmoodi
When designing a chip, the following objectives are taken into consideration:
1. Speed
2. Area
3. Power
4. Time to Market
To design an ASIC, one needs to have a good understanding of the CMOS Technology.
The next few sections give a basic overview of CMOS Technology.
Chips today are predominantly designed in CMOS technology. CMOS stands for
Complementary Metal Oxide Semiconductor; it uses both NMOS and PMOS
transistors. To understand CMOS better, we first need to know about the MOS (FET)
transistor.
The transistor needs a voltage applied to its gate for a conducting channel to form.
When no channel is formed, the transistor is said to be in the ‘cut-off region’. The
gate voltage at which the transistor starts conducting (a channel begins to form between the
source and the drain) is called the threshold voltage. Just above threshold, the transistor is
said to be in the ‘linear region’, where the drain current grows with the drain-source
voltage. The transistor is said to go into the ‘saturation region’ when the channel pinches
off near the drain and the current no longer increases appreciably with drain-source voltage.
Example: Creating a CMOS inverter requires only one PMOS and one NMOS transistor.
The NMOS transistor provides the switch connection (ON) to ground when the input is
logic high; the output load capacitor is discharged and the output is driven to
logic ’0’. The PMOS transistor (ON) provides the connection to the VDD power supply
rail when the input to the inverter circuit is logic low; the output load capacitor is
charged to VDD and the output is driven to logic ’1’.
The output load capacitance of a logic gate comprises:
a. Intrinsic capacitance: gate-drain capacitance (of both NMOS and PMOS
transistors)
b. Extrinsic capacitance: capacitance of the connecting wires and also the input
capacitance of the fan-out gates.
In CMOS, there is only one driver per net, but a gate can drive several fan-out gates. In
CMOS technology, the output of one gate always drives another CMOS gate input.
The charge carriers for PMOS transistors are ‘holes’, and the charge carriers for NMOS
transistors are electrons. The mobility of electrons is about two times that of holes. Due to
this, the output rise and fall times differ. To make them equal, the W/L ratio of the PMOS
transistor is made about twice that of the NMOS transistor. This way, the PMOS and
NMOS transistors have comparable drive strengths, giving equal rise and fall times.
Sequential Element
In CMOS, an element which stores a logic value (by having a feedback loop) is called a
sequential element. The simplest example of a sequential element is two inverters
connected back to back. There are two types of basic sequential elements:
1. Latch: The two inverters connected back to back, when connected to a
transmission gate with a control input, form a latch. When the control input is
high (logic ‘1’), the transmission gate is switched on and whatever value is
at the input ‘D’ passes to the output. When the control input is low, the
transmission gate is off and the back-to-back inverters hold the previous value.
2. Flip-Flop: A flip flop is constructed from two latches in series. The first latch is
called a Master latch and the second latch is called the slave latch. The control
input to the transmission gate in this case is called a clock. The inverted version of
the clock is fed to the input of the slave latch transmission gate.
a. When the clock input is high, the transmission gate of the master latch is
switched on and the input ‘D’ is latched by the 2 inverters connected back
to back (basically master latch is transparent). Also, due to the inverted
clock input to the transmission gate of the slave latch, the transmission
gate of the slave latch is not ‘on’ and it holds the previous value.
b. When the clock goes low, the slave part of the flip flop is switched on and
will update the value at the output with what the master latch stored when
the clock input was high. The slave latch will hold this new value at the
output irrespective of the changes at the input of Master latch when the
clock is low. When the clock goes high again, the value at the output of
the slave latch is held and step ‘a’ is repeated.
Since the master latch in this flip-flop captures data at the rising clock
edge, this type of flip-flop is called a positive-edge-triggered flip-flop. If the latching
happens at the negative edge of the clock, the flip-flop is called a negative-edge-triggered
flip-flop.
[Figure: Master-slave flip-flop — input D, output Q, clock CLK]
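The edge-triggered behavior described above can be sketched behaviorally in Verilog. This is a generic illustrative model, not part of the tutorial's counter example:

```verilog
// Positive-edge-triggered D flip-flop: Q takes the value of D only
// at the rising edge of CLK and holds it otherwise.
module dff (input clk, input d, output reg q);
  always @(posedge clk)
    q <= d;   // non-blocking assignment models the register
endmodule
```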
2.0 Introduction
To design a chip, one needs to have an Idea about what exactly one wants to design. At
every step in the ASIC flow the idea conceived keeps changing forms. The first step to
make the idea into a chip is to come up with the Specifications.
Specifications are nothing but
• Goals and constraints of the design.
• Functionality (what will the chip do)
• Performance figures like speed and power
• Technology constraints like size and space (physical dimensions)
• Fabrication technology and design techniques
The next step in the flow is to come up with the Structural and Functional
Description. At this point one decides what kind of architecture
(structure) to use for the design, e.g. RISC/CISC, ALU, pipelining, etc.
To make a complex system easier to design, it is normally broken down into several
sub systems. The functionality of these subsystems should match the specifications. At
this point, the relationship between different sub systems and with the top level system is
also defined.
The sub systems, top level systems once defined, need to be implemented. It is
implemented using logic representation (Boolean Expressions), finite state machines,
Combinatorial, Sequential Logic, Schematics etc.... This step is called Logic Design /
Register Transfer Level (RTL). Basically the RTL describes the several sub systems. It
should match the functional description. RTL is expressed usually in Verilog or VHDL.
Verilog and VHDL are Hardware Description Languages. A hardware description
language (HDL) is a language used to describe a digital system, for example, a network
switch, a microprocessor or a memory or a simple flip-flop. This just means that, by
using a HDL one can describe any hardware (digital) at any level. Functional/Logical
Verification is performed at this stage to ensure the RTL designed matches the idea.
Once Functional Verification is completed, the RTL is converted into an optimized
Gate Level Netlist. This step is called Logic/RTL synthesis. This is done by Synthesis
Tools such as Design Compiler (Synopsys), Blast Create (Magma), RTL Compiler
(Cadence) etc... A synthesis tool takes an RTL hardware description and a standard cell
library as input and produces a gate-level netlist as output. Standard cell library is the
basic building block for today’s IC design. Constraints such as timing, area, testability,
and power are considered. Synthesis tools try to meet constraints, by calculating the cost
of various implementations. It then tries to generate the best gate level implementation
for a given set of constraints, target process. The resulting gate-level netlist is a
completely structural description with only standard cells at the leaves of the design. At
this stage, it is also verified whether the Gate Level Conversion has been correctly
performed by doing simulation.
The next step in the ASIC flow is the Physical Implementation of the Gate Level
Netlist. The Gate Level Netlist is converted into a geometric representation, i.e. the
actual layout of the chip.
[Figure: ASIC design flow — Idea → Specifications → RTL → Physical Implementation → GDSII → Chip]
There are three main steps in debugging the design: compiling the source code, simulating the design, and viewing the resulting waveforms.
You can interactively do the above steps using the VCS tool. VCS first compiles the
verilog source code into object files, which are nothing but C source files. VCS can
compile the source code into the object files without generating assembly language files.
VCS then invokes a C compiler to create an executable file. We use this executable file to
simulate the design. You can use the command line to execute the binary file which
creates the waveform file, or you can use VirSim.
Below is a brief overview of the VCS tool, shows you how to compile and simulate a
counter. For basic concepts on verification and test bench, please refer to APPENDIX 3A
at the end of this chapter.
SETUP
Before going to the tutorial example, let’s first set up the directory.
You need to do the below 3 steps before you actually run the tool:
1. As soon as you log into your engr account, at the command prompt, please type “csh”
as shown below. This changes the type of shell from bash to C shell. All the commands
work ONLY in C shell.
[hkommuru@hafez ]$csh
This creates a directory structure as shown below. It will create a directory called
“asic_flow_setup”, under which it creates the following directories:
The “asic_flow_setup” directory will contain all generated content including, VCS
simulation, synthesized gate-level Verilog, and final layout. In this course we will always
try to keep generated content from the tools separate from our source RTL. This keeps
our project directories well organized, and helps prevent us from unintentionally
modifying the source RTL. There are subdirectories in the project directory for each
major step in the ASIC Flow tutorial. These subdirectories contain scripts and
configuration files for running the tools required for that step in the tool flow. For this
tutorial we will work exclusively in the vcs directory.
3. Please source “synopsys_setup.tcl”, which sets all the environment variables necessary
to run the VCS tool.
Please source it at the Unix prompt as shown below.
Please note: You have to do all three steps above every time you log in.
In this tutorial, we will use a simple counter example. Find the Verilog code and
testbench at the end of the tutorial.
Setup
3.1.1 Compiling and Simulating
Please note that the –f option means the specified file (main_counter.f) contains a list of
command-line options for vcs. In this case, the command-line options are just a list of the
Verilog file names. Also note that the testbench is listed first. The command below has
the same effect.
The +v2k option is used if you are using Verilog IEEE 1364-2001 (Verilog-2001) syntax;
otherwise there is no need for the option. Please look at Figure 3.a for the output of the
compile command.
By default, the output of compilation is an executable binary file named simv.
You can specify a different name with the -o compile-time option.
For example :
vcs –f main_counter.f +v2k –o counter.simv
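For reference, a main_counter.f for this example would simply list the Verilog sources, testbench first (the file names are assumed to match the counter example):

```
counter_testbench.v
counter.v
```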
VCS compiles the source code on a module by module basis. You can incrementally
compile your design with VCS, since VCS compiles only the modules which have
changed since the last compilation.
2. Now, execute the simv command line with no arguments. You should see the output
from both vcs and simulation and should produce a waveform file called counter.dump in
your working directory.
[hkommuru@hafez vcs]$ ./simv
If you look at the last page of the tutorial, you can see the testbench code, to understand
the above result better.
3. You can do STEP 1 and STEP 2 in one single step below. It will compile and simulate
in one single step. Please take a look at the command below:
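A typical combined invocation (assuming VCS’s -R option, which runs the simulation executable immediately after compilation) is:

```
vcs -f main_counter.f +v2k -R
```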
To compile and simulate your design, please write your verilog code, and copy it to the
vcs directory. After copying your verilog code to the vcs directory, follow the tutorial
steps to simulate and compile.
The -debug_pp option is used to run DVE in simulation mode; -debug_pp creates a
VPD file, which is necessary for waveform debugging. The window below will open up.
4. Now, in the data pane, select the signals with the left mouse button while holding the
Shift key, so that you can select as many signals as you want. Click the right mouse button
to open a new window, and click on “Add to group => New group”. A new window will
open up showing a new group of the selected signals.
In the waveform window, the menu option View => Set Time Scale can be used to
change the display unit and the display precision.
7. You can save your current session and reload the same session next time, or start a new
session again. In the menu option File => Save Session, the window opens as
shown below.
Going to the menu option Simulation => Breakpoints will open up a new window as shown
below. You need to do this before Step 6, i.e. before actually running the simulation.
You can browse which file and also the line number and click on “Create” button to
create breakpoints.
Now when you simulate, click on Simulate => Start; the simulation will stop at your defined
breakpoint. Click on Next to continue.
You can save your session again and exit after you are done with debugging, or in the middle
of debugging your design.
Verilog Code
File : Counter.v
endmodule // counter
// Test bench gets wires for all device under test (DUT) outputs:
initial
begin
reset = 1'b1;
@(posedge clk);#1;
reset = 1'b0;
$finish;
end
endmodule // counter_testbench
RTL is expressed in Verilog or VHDL. This document will cover the basics of Verilog.
Verilog is a Hardware Description Language (HDL). A hardware description language is
a language used to describe a digital system, for example latches, flip-flops, combinatorial
and sequential elements, etc. Basically, you can use Verilog to describe any kind of digital
system. One can design a digital system in Verilog using any level of abstraction. The
most important levels are:
Verilog allows hardware designers to express their designs at the behavioral level,
deferring the details of implementation to a later stage in the design of the chip. The
design is normally written in a top-down approach. The system has a hierarchy, which
makes it easier to debug and design. The basic skeleton of a Verilog module looks like
this:
module example (<ports>);
input <ports>;
output <ports>;
inout <ports>;
// data-type declarations
// (the reg data type stores values)
reg <names>;
<instantiations>
endmodule
The modules can reference other modules to form a hierarchy. If the module contains
references to each of the lower level modules, and describes the interconnections between
them, a reference to a lower level module is called a module instance. Each instance is an
independent, concurrently active copy of a module. Each module instance consists of the
name of the module being instanced (e.g. NAND or INV), an instance name (unique to
that instance within the current module) and a port connection list.
The instance names in the above example are ‘N1’ and ‘V1’, and each has to be unique.
The port connection list consists of the terms within the parentheses ( ). The module port
connections can be given in order (positional mapping), or the ports can be explicitly
named as they are connected (named mapping). Named mapping is usually preferred for
long connection lists as it makes errors less likely.
2. Port mapping by order: one does not have to specify the port names (.in) & (.out);
the ports are connected in the order in which they are declared in the module.
Example:
AND A1 (a, b, aandb);
If ‘a’ and ‘b ‘are the inputs and ‘aandb’ is the output, then the ports must be
mentioned in the same order as shown above for the AND gate. One cannot write
it in this way:
AND A1 (aandb, a, b);
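For comparison, the same AND instance using named mapping (the port names .a, .b and .y are assumed for this illustrative cell):

```verilog
AND A1 (.a(a), .b(b), .y(aandb));  // port order no longer matters
```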
Digital Design can be broken into either Combinatorial Logic or Sequential Logic. As
mentioned earlier, Hardware Description Languages are used to model RTL. RTL again
is nothing but combinational and sequential logic. The most popular language used to
model RTL is Verilog. The following are a few guidelines to code digital logic in
Verilog:
1. Not everything written in Verilog is synthesizable. The synthesis tool does not
synthesize everything that is written. We need to make sure that the logic implied
is synthesized into what we want it to synthesize into and not anything else.
a. Mostly, time-dependent tasks are not synthesizable in Verilog. Some of
the Verilog constructs that are non-synthesizable are tasks, wait, initial
statements, delays, test benches, etc.
b. Some of the Verilog constructs that are synthesizable are assign statements,
always blocks, functions, etc. Please refer to the next section for more
detailed information.
2. One can model level-sensitive and also edge-sensitive behavior in Verilog. This
can be modeled using an always block.
a. For an always block to synthesize to a combinatorial circuit, the outputs
must be completely specified for every combination of the inputs in the
sensitivity list. If the outputs are not completely specified, the logic
will get synthesized to a latch. The following are a few
examples to clarify this:
b. Code which results in level sensitive behavior
c. Code which results in edge sensitive behavior
d. Case Statement Example
i. casex
ii. casez
3. Blocking and Non Blocking statements
a. Example: Blocking assignment
b. Example: Non Blocking assignment
4. Modeling Synchronous and Asynchronous Reset in Verilog
a. Example: With Synchronous reset
b. Example: With Asynchronous reset
5. Modeling State Machines in Verilog
a. Using One Hot Encoding
b. Using Binary Encoding
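A few of the guidelines above can be illustrated with generic snippets (the signal names are assumptions, not taken from the tutorial's own examples):

```verilog
// Level-sensitive (combinational) behavior: y is assigned on every
// path through the block, so no latch is inferred.
always @ (a or b or sel)
  if (sel) y = a;
  else     y = b;

// Edge-sensitive behavior with an asynchronous reset: non-blocking
// assignments (<=) model the register update.
always @ (posedge clk or posedge rst)
  if (rst) q <= 1'b0;
  else     q <= d;
```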
After designing the system, it is vital to verify the logic designed. At the front end,
this is done through simulation. In Verilog, test benches are written to verify the code.
Initial blocks start executing sequentially at simulation time 0. Starting with the first line
between the “begin end pair” each line executes from top to bottom until a delay is
reached. When a delay is reached, the execution of this block waits until the delay time
has passed and then picks up execution again. Each initial and always block executes
concurrently. The initial block in the example starts by printing << Starting the
Simulation >> to the screen, and initializes the reg types clk_50 and rst_l to 0 at time 0.
The simulation time wheel then advances to time index 20, and the value on rst_l changes
to a 1. This simple block of code initializes the clk_50 and rst_l reg types at the beginning
of simulation and causes a reset pulse from low to high for 20 ns in a simulation.
Some system tasks are also used in test benches. These system tasks are ignored by the
synthesis tool, so it is OK to use them. System task names begin with a ‘$’ sign. Some
of the system-level tasks are as follows:
a. $display: Displays text on the screen during simulation.
b. $monitor: Displays the results on the screen whenever a parameter
changes.
c. $strobe: Same as $display, but prints the text only at the end of the time
step.
d. $stop: Halts the simulation at a certain point in the code. The user can add
the next set of instructions to the simulator. After $stop, you get back to
the CLI prompt.
e. $finish: Exits the simulator.
f. $dumpvars, $dumpfile: These dump the variables in a design to a file.
You can dump the values at different points in the simulation.
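A typical use of the dump tasks in a test bench (the file name here follows the counter example's counter.dump):

```verilog
initial begin
  $dumpfile("counter.dump");  // file that receives the value changes
  $dumpvars;                  // dump all variables in the design
end
```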
task load_count;
  input [3:0] load_value;
  begin
    @(negedge clk_50);
    $display($time, " << Loading the counter with %h >>", load_value);
    load_l = 1'b0;
    count_in = load_value;
    @(negedge clk_50);
    load_l = 1'b1;
  end
endtask // of load_count
This task takes one 4-bit input vector, and at the negative edge of the next clk_50, it starts
executing. It first prints to the screen, drives load_l low, and drives the count_in of the
counter with the load_value passed to the task. At the negative edge of clk_50, the load_l
signal is released. The task must be called from an initial or always block. If the
simulation was extended and multiple loads were done to the counter, this task could be
called multiple times with different load values.
The compiler directive `timescale:
`timescale 1ns / 100ps
This line is important in a Verilog simulation, because it sets up the time unit and
operating precision for a module. It causes the unit delays to be in nanoseconds (ns) and
sets the precision at which the simulator rounds events to 100 ps. This causes
a #5 or #1 in a Verilog assignment to be a 5 ns or 1 ns delay respectively, with events
rounded to 0.1 ns, i.e. 100 picoseconds.
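As a small generic illustration of the directive and the delays it governs:

```verilog
`timescale 1ns / 100ps
module delay_demo;
  reg a;
  initial begin
    a = 0;
    #5 a = 1;     // 5 time units = 5 ns
    #1.25 a = 0;  // 1.25 ns, representable at the 100 ps precision
    $finish;
  end
endmodule
```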
Verilog test benches use a standard that contains a description of the C-language
procedural interface, better known as the Programming Language Interface (PLI). We can
treat PLI as a standardized simulator API (Application Program Interface) for routines
written in C or C++. The most recent extensions to PLI are known as the Verilog
Procedural Interface (VPI).
Before writing the test bench, it is important to understand the design specifications of
the design, and create a list of all possible test cases.
You can view all the signals and check to see if the signal values are correct, in the
waveform viewer.
When designing the test bench, you can set breakpoints at certain times, or do
simulation in a single-step fashion; one can also have time-related breakpoints (for
example: execute the simulation for 10 ns and then stop).
To test the design further, it is good to have randomized simulation. Random
simulation is nothing but supplying random combinations of valid inputs to the
simulation tool and running it for a long time. When a random simulation runs long
enough, it can cover many corner cases and we can hope that it will emulate real system
behavior. You can create random stimulus in the test bench by using the $random
system function.
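For instance, random stimulus could drive the counter's load input (the signal names here are assumed to match the load_count task, not taken verbatim from the tutorial code):

```verilog
integer i;
initial begin
  for (i = 0; i < 100; i = i + 1) begin
    @(negedge clk_50);
    count_in = $random;  // lower 4 bits of a random 32-bit value
  end
end
```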
The following is an example of a simple read, write, state machine design and a test
bench to test the state machine.
State Machine:
module state_machine(sm_in,sm_clock,reset,sm_out);
endmodule
// instantiations
state_machine #(idle_state,
read_state,
write_state,
wait_state) st_mac (
.sm_in (in1),
.sm_clock (clk),
.reset (reset),
.sm_out (data_mux)
);
// monitor section
always @ (st_mac.current_state)
case (st_mac.current_state)
idle_state : state_message = "idle";
read_state : state_message = "read";
write_state: state_message = "write";
wait_state : state_message = "wait";
endcase
// clock declaration
initial clk = 1'b0;
always #50 clk = ~clk;
// tasks
task reset_cct;
  begin
    @(posedge clk);
    message = " reset";
    reset = 1'b1;   // assert reset for one clock (active-high polarity assumed)
    @(posedge clk);
    reset = 1'b0;
  end
endtask
task change_in1_to;
input a;
begin
message = "change in1 task";
@ (posedge clk);
in1 = a;
end
endtask
endmodule
How do you simulate your design to get the real system behavior?
The following are two methods with which it is possible to achieve real system behavior
and verify it.
4.0 Introduction
The Design Compiler is a synthesis tool from Synopsys Inc. In this tutorial you will
learn how to perform hardware synthesis using Synopsys design compiler. In simple
terms, we can say that the synthesis tool takes an RTL [Register Transfer Level] hardware
description [written in either Verilog or VHDL] and a standard cell library as input,
and produces a technology-dependent gate-level netlist as output. The gate-level
netlist is nothing but a structural representation of standard cells, based on the
cells in the standard cell library. The synthesis tool internally performs many steps, which
are listed below, along with the flowchart of the synthesis process.
[Figure: Synthesis flow — read libraries and netlist, link against the link library (if gate-level), apply SDC constraints, map to the target library and optimize, write out the optimized netlist]
While running DC, it is important to monitor/check the log files, reports, scripts, etc., to
identify issues which might affect the area, power, and performance of the design.
For additional documentation, please refer to the location below, where you can get more
information on the 90nm Standard Cell Library, Design Compiler, Design Vision,
DesignWare Libraries, etc.
There are four important parameters that should be setup before one can start
using the tool. They are:
• search_path
This parameter specifies all the paths that the synthesis tool should search
when looking for a synthesis technology library to reference during synthesis.
• target_library
This parameter specifies the file that contains all the logic cells that should be used for
mapping during synthesis. In other words, during synthesis the tool maps a design to the
logic cells present in this library.
• symbol_library
This parameter points to the library that contains the “visual” information on the logic
cells in the synthesis technology library. All logic cells have a symbolic representation
and information about the symbols is stored in this library.
• link_library
This parameter points to the library that contains information on the logic gates in the
synthesis technology library. The tool uses this library solely for reference but does not
use the cells present in it for mapping as in the case of target_library.
An example on use of these four variables from a .synopsys_dc.setup file is given below.
search_path = “. /synopsys/libraries/syn/cell_library/libraries/syn”
target_library = class.db
link_library = class.db
symbol_library = class.db
Once these variables are setup properly, one can invoke the synthesis tool at the
command prompt using any of the commands given for the two interfaces.
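With a standard Synopsys installation, the two interfaces are typically invoked as follows (the command names here are assumed from common Synopsys usage, not reproduced from this tutorial):

```
dc_shell        # command-line interface
design_vision   # graphical interface (GUI)
```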
Design: It corresponds to the circuit description that performs some logical function. The
design may be stand-alone or may include other sub-designs. Although a sub-design may
be part of the design, it is treated as another design by Synopsys.
Cell: It is the instantiated name of the sub-design in the design. In Synopsys terminology,
there is no differentiation between the cell and instance; both are treated as cell.
Reference: This is the definition of the original design to which the cell or instance refers.
For example, a leaf cell in the netlist must be referenced from the link library, which
contains the functional description of the cell. Similarly, an instantiated sub-design must
be referenced in the design, which contains the functional description of the instantiated
sub-design.
Ports: These are the primary inputs, outputs or IO’s of the design.
Pin: It corresponds to the inputs, outputs or IO’s of the cells in the design. (Note the
difference between port and pin)
Net: These are the signal names, i.e., the wires that hook up the design together by
connecting ports to pins and/or pins to each other.
Clock: The port or pin that is identified as a clock source. The identification may be
internal to the library or it may be done using dc_shell commands.
Library: Corresponds to the collection of technology specific cells that the design is
targeting for synthesis; or linking for reference.
Design Entry
Before synthesis, the design must be entered into the Design Compiler (referred to as DC
from now on) in the RTL format. DC provides the following two methods of design
entry:
read command
analyze & elaborate commands
The analyze & elaborate commands are two different commands, allowing designers to
initially analyze the design for syntax errors and RTL translation before building the
generic logic for the design. The generic logic or GTECH components are part of
Synopsys generic technology independent library. They are unmapped representation of
boolean functions and serve as placeholders for the technology dependent library.
The analyze command also stores the result of the translation in the specified design
library so that it may be used later. So a design analyzed once need not be analyzed again
and can be merely elaborated, thus saving time. Conversely, the read command performs
the function of the analyze and elaborate commands but does not store the analyzed
results, therefore making the process slow by comparison.
One other major difference between the two methods is that, in analyze and elaborate
design entry of a design in VHDL format, one can specify different architectures during
elaboration for the same analyzed design. This option is not available in the read
command.
The commands used for both the methods in DC are as given below:
Read command:
dc_shell> read -format <format> <list of file names>
“-format” option specifies the format in which the input file is in, e.g. VHDL
A sample command for reading an “adder.vhd” file in VHDL format is given below.
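Based on the syntax above, sample commands (assuming the design unit is named adder) would look like:

```
dc_shell> read -format vhdl adder.vhd
```

Or, using the analyze & elaborate method:

```
dc_shell> analyze -format vhdl adder.vhd
dc_shell> elaborate adder
```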
Technology libraries contain the information that the synthesis tool needs to generate a
netlist for a design based on the desired logical behavior and constraints on the design.
The tool referring to the information provided in a particular library would make
appropriate choices to build a design. The libraries contain not only the logical function
of an ASIC cell, but the area of the cell, the input-to-output timing of the cell, any
constraints on fanout of the cell, and the timing checks that are required for the cell.
The target_library, link_library, and symbol_library parameters in the startup file are
used to set the technology library for the synthesis tool.
The following are some guidelines which, if followed, might improve the performance
of the synthesized logic and produce a cleaner design that is suited for automating the
synthesis process.
• Clock logic including clock gating and reset generation should be kept in one block –
to be synthesized once and not touched again. This helps in a clean specification of
the clock constraints. Another advantage is that the modules that are being driven by
the clock logic can be constrained using the ideal clock specifications.
• No glue logic at the top: The top block is to be used only for connecting modules
together. It should not contain any combinational glue logic. This removes the time
consuming top-level compile, which can now be simply stitched together without
undergoing additional synthesis.
• The module name should be the same as the file name, and one should avoid describing
more than one module or entity in a single file. This avoids any confusion while
compiling the files and during synthesis.
• While coding finite state machines, the state names should be described using the
enumerated types. The combinational logic for computing the next state should be in
its own process, separate from the state registers. Implement the next-state
combinational logic with a case statement. This helps in optimizing the logic much
better and results in a cleaner design.
• Incomplete sensitivity lists must be avoided as this might result in simulation
mismatches between the source RTL and the synthesized logic.
• Memory elements, latches and flip-flops: A latch is inferred when an incomplete if
statement with a missing else part is specified. A flip-flop, or a register, is inferred
when an edge-sensitive condition is specified in the always statement for Verilog or
the process statement for VHDL. A latch is more troublesome than a flip-flop, as it
complicates static timing analysis on designs containing latches. So designers try to
avoid latches and prefer flip-flops.
• Multiplexer Inference: A case statement is used for implementing multiplexers. To
prevent latch inferences in case statements the default part of the case statement
should always be specified. On the other hand an if statement is used for writing
priority encoders. Multiple if statements with multiple branches result in the creation
of a priority encoder structure.
Ex: always @ (A or B or C)
begin
if (A == 0) D = B;
if (A == 1) D = C;
end
The same code can be written using an if statement along with else branches to cover all
possible cases.
Three-state buffers: A tri-state buffer is inferred whenever a high impedance (Z) is
assigned to an output. Tri-state logic is generally not recommended because it
reduces testability and is difficult to optimize, since it cannot be buffered.
Signals versus Variables in VHDL: Signal assignments are order independent, i.e. the
order in which they are placed within the process statement does not have any effect on
the order in which they are executed as all the signal assignments are done at the end of
the process. The variable assignments on the other hand are order dependent. The signal
assignments are generally used within the sequential processes and variable assignments
are used within the combinational processes.
A designer, in order to achieve optimum results, has to methodically constrain the design,
by describing the design environment, target objectives and design rules. The constraints
contain timing and/or area information, usually derived from the design specifications.
The synthesis tool uses these constraints to perform synthesis and tries to optimize the
design with the aim of meeting target objectives.
Design attributes set the environment in which a design is synthesized. The attributes
specify the process parameters, I/O port attributes, and statistical wire-load models. The
most common design attributes and the commands for their setting are given below:
Load: Each output can specify the drive capability that determines how many loads can
be driven within a particular time. Each input can have a load value specified that
determines how much it will slow a particular driver. Signals that are arriving later than
the clock can have an attribute that specifies this fact. The load attribute specifies how
much capacitive load exists on a particular output signal. The load value is specified in
the units of the technology library in terms of picofarads or standard loads, etc... The
command for setting this attribute is given below:
set_load <value> <object_list>
e.g. dc_shell> set_load 1.5 x_bus
Design constraints specify the goals for the design. They consist of area and timing
constraints. Depending on how the design is constrained, DC/DA tries to meet the set
objectives. Realistic specification is important, because unrealistic constraints might
result in excess area, increased power and/or degraded timing. The basic commands to
constrain the design are:
set_max_area: This constraint specifies the maximum area a particular design should
have. The value is specified in units used to describe the gate-level macro cells in the
technology library.
e.g. dc_shell> set_max_area 0
Specifying an area of 0 causes the tool to try its best to make the design as small as
possible.
create_clock: This command is used to define a clock object with a particular period and
waveform. The –period option defines the clock period, while the –waveform option
controls the duty cycle and the starting edge of the clock. This command is applied to a
pin or port, object types.
Following example specifies that a port named CLK is of type “clock” that has a period
of 40 ns, with 50% duty cycle. The positive edge of the clock starts at time 0 ns, with the
falling edge occurring at 20 ns. By changing the falling edge value, the duty cycle of the
clock may be altered.
e.g. dc_shell> create_clock –period 40 –waveform {0 20} CLK
set_input_delay: It specifies the input arrival time of a signal in relation to the clock. It is
used at the input ports, to specify the time it takes for the data to be stable after the clock
edge. The timing specification of the design usually contains this information, as the
setup/hold time requirements for the input signals. From the top-level timing
specifications the sub-level timing specifications may also be extracted.
e.g. dc_shell> set_input_delay –max 23.0 –clock CLK {datain}
dc_shell> set_input_delay –min 0.0 –clock CLK {datain}
The CLK has a period of 30 ns with 50% duty cycle. For the above given specification of
max and min input delays for the datain with respect to CLK, the setup-time requirement
for the input signal datain is 7ns, while the hold-time requirement is 0ns.
set_output_delay: This command is used at the output port, to define the time it takes for
the data to be available before the clock edge. This information is usually is provided in
the timing specification.
e.g. dc_shell> set_output_delay – max 19.0 –clock CLK {dataout}
The CLK has a period of 30 ns with 50% duty cycle. For the above given specification of
max output delay for the dataout with respect to CLK, the data is valid for 11 ns after the
clock edge.
set_max_delay: It defines the maximum delay required, in time units, for a
particular path. In general it is used for blocks that contain combinational logic only.
However, it may also be used to constrain a block that is driven by multiple clocks, each
with a different frequency. This command has precedence over DC-derived timing
requirements.
e.g. dc_shell> set_max_delay 5 -from [all_inputs] -to [all_outputs]
set_min_delay: It defines the minimum delay required, in time units, for a
particular path. It is the opposite of the set_max_delay command. This command has
precedence over DC-derived timing requirements.
e.g. dc_shell> set_min_delay 3 -from [all_inputs] -to [all_outputs]
Setup
1. Write the Verilog Code. For the purpose of this tutorial, please consider the simple
verilog code for gray counter below.
module gray_counter (clk, reset_n, en_count, gcc_out);
// SIGNAL DECLARATIONS
input clk, reset_n, en_count;
output reg [2-1:0] gcc_out;
// Compute new gcc_out value based on current gcc_out value
always @(negedge reset_n or posedge clk) begin
if (~reset_n)
gcc_out <= 2'b00;
else begin // MUST be a (posedge clk) - don't need "else if (posedge clk)"
if (en_count) begin // check the count enable
case (gcc_out)
2'b00: begin gcc_out <= 2'b01; end
2'b01: begin gcc_out <= 2'b11; end
2'b11: begin gcc_out <= 2'b10; end
default: begin gcc_out <= 2'b00; end
endcase // of case
end // of if (en_count)
end // of else
end // of always loop for computing next gcc_out value
endmodule
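A small testbench sketch can be used to exercise the counter before synthesis (this is not part of the tutorial's sources; it assumes the counter module is named gray_counter with the ports used in the code above):

```verilog
`timescale 1ns/1ns
module tb_gray_counter;
  reg  clk = 0, reset_n = 0, en_count = 0;
  wire [1:0] gcc_out;

  gray_counter dut (.clk(clk), .reset_n(reset_n),
                    .en_count(en_count), .gcc_out(gcc_out));

  always #5 clk = ~clk;      // 10 ns clock period

  initial begin
    #12 reset_n = 1;         // release the asynchronous reset
    en_count = 1;
    repeat (6) begin
      @(negedge clk);        // sample when the outputs are stable
      $display("gcc_out = %b", gcc_out);
    end
    $finish;
  end
endmodule
```

The output should step through the gray sequence 01, 11, 10, 00, ... with only one bit changing per clock.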
2. As soon as you log into your engr account, at the command prompt, please type “csh
“as shown below. This changes the type of shell from bash to c-shell. All the commands
work ONLY in c-shell.
[hkommuru@hafez ]$csh
This creates the directory structure shown below. It will create a directory called
“asic_flow_setup”, under which it creates the following directories, namely
asic_flow_setup
src/ : for verilog code/source code
The “asic_flow_setup” directory will contain all generated content, including VCS
simulation output, synthesized gate-level Verilog, and the final layout. In this course we will always
try to keep generated content from the tools separate from our source RTL. This keeps
our project directories well organized, and helps prevent us from unintentionally
modifying the source RTL. There are subdirectories in the project directory for each
major step in the ASIC Flow tutorial. These subdirectories contain scripts and
configuration files for running the tools required for that step in the tool flow. For this
tutorial we will work exclusively in the vcs directory.
3. Please source “synopsys_setup.tcl” which sets all the environment variables necessary
to run the VCS tool.
Please source them at unix prompt as shown below
Please Note: You have to do all the three steps above every time you log in.
5. First we will learn how to run dc_shell manually, before we automate it with scripts.
Use the command below to invoke dc_shell:
[hkommuru@hafez.sfsu.edu] $ dc_shell-xg-t
Initializing...
dc_shell-xg-t>
Once you get the prompt above, you can run various commands to load verilog files,
libraries etc. To get more information on any command you can type “man
<command_name> at the prompt.
The command “lappend search_path” adds the verilog source code directory to the list of
paths the tool searches.
The next command, “define_design_lib”, creates a Synopsys work directory, and the
last two commands, “set link_library” and “set target_library”, point to the standard
technology libraries we will be using. The DB files contain wireload models [wire load
modeling allows the tool to estimate the effect of wire length and fanout on the resistance,
capacitance, and area of nets, and to calculate wire delays and circuit speeds], as well as
area and timing information for each standard cell. DC uses this information to optimize
the synthesis process. For more detailed information on optimization, please refer to the
DC manual.
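A minimal setup sketch along these lines (the directory path and .db file name are assumptions based on the generic 90nm kit used elsewhere in this tutorial; adjust them to your installation):

```tcl
lappend search_path ../src                   ;# where the Verilog source lives
define_design_lib WORK -path ./work          ;# Synopsys work directory
set target_library saed90nm_typ.db           ;# cells DC may map to
set link_library   [list * saed90nm_typ.db]  ;# * resolves already-mapped cells
```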
7. The next step is to load your Verilog/VHDL design into Design Compiler. The
commands to load verilog are “analyze” and “elaborate”. Executing these commands
results in a great deal of log output as the tool elaborates some Verilog constructs and
starts to infer some high-level components. Try executing the commands as follows.
You can see part of the analyze command in Figure 7.a below
Before DC optimizes the design, it uses the Presto Verilog Compiler [for verilog code] to
read in the designs; it also checks the code for correct syntax and builds a generic
technology (GTECH) netlist. DC uses this GTECH netlist to optimize the design. You
could also use the “read_verilog” command, which basically combines the analyze and
elaborate commands into one. You can use “read_verilog” as long as your design is not
parameterized; a parameterized design (for example, a register with a configurable
width) must go through analyze and elaborate so the parameter values can be set at
elaboration time.
For more information on the “elaborate” command, and how the synthesis tool infers
combinational and sequential elements, please refer to Presto HDL Compiler Reference
Manual found in the documentation area.
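For the gray counter used in this tutorial, the two commands might look as follows (the source file name and path are assumptions):

```tcl
dc_shell-xg-t> analyze -format verilog ../src/gray_counter.v
dc_shell-xg-t> elaborate gray_counter
```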
dc_shell-xg-t> check_design
Please go through the check_design errors and warnings. DC cannot compile the design
if there are any errors. Many of the warnings may not be an issue, but it is still useful to
skim through this output.
9. After the design check is clean, we need to give the tool the constraints before it
actually synthesizes. The tool needs to know the target frequency you want to synthesize
at. Take a look at the “create_clock” command below.
dc_shell-xg-t> create_clock clk -period 5
The above command tells the tool that the pin named clk is the clock and that your
desired clock period is 5 nanoseconds. We need to set the clock period constraint
carefully. If the period is unrealistically small, the tools will spend forever trying to
meet timing and ultimately fail. If the period is too large, the tools will have no
trouble, but you will get a very conservative implementation.
Similarly, you can define output constraints, which specify how much time signals
leaving the design spend outside the design before being captured by the same clk.
Set area constraints: setting the maximum allowed area to 0 ☺ simply instructs Design
Compiler to use as little area as possible.
Please refer to tutorial on “Basics of Static Timing Analysis” for more understanding of
concepts of STA and for more information on the commands used in STA, please refer to
the Primetime Manual and DC Compiler Manual at location /packages/synopsys/
10. Now we are ready to use the compile command to actually synthesize our design into
a gate-level netlist. Two of the most important options for the compile command are the
map effort and the area effort. Both of these can be set to low, medium, or
high. They specify how much time to spend on technology mapping and area reduction.
DC will attempt to synthesize your design while still meeting the constraints. DC
considers two types of constraints: user specified constraints and design rule constraints.
We looked at the user specified constraints in the previous step. Design rule constraints
are fixed constraints which are specified by the standard cell library. For example, there
are restrictions on the loads specific gates can drive and on the transition times of certain
pins. To get a better understanding of the standard cell library, please refer to Generic
90nm library documents in the below location which we are using in the tutorial.
/packages/process_kit/generic/generic_90nm/updated_Oct2008/SAED_EDK90nm/Digi
tal_Standard_Cell_Library/doc/databook/
Also, note that the compile command does not optimize across module boundaries. You
have to use the “set_flatten” command to enable inter-module optimization. For more
information on the compile command, consult the Design Compiler User Guide (dc-user-
guide.pdf) or use man compile at the DC shell prompt.
You can use the compile command more than once, for as many iterations as you want.
For example, in the first iteration you can optimize only timing, but this might come with
a high area cost; a second iteration can then optimize area, though this could cause the
design to no longer meet timing. There is no limit on the number of iterations; however,
each design is different, and you need to do a number of runs to decide how many
iterations it needs.
We can now use various commands to examine timing paths, display reports, and further
optimize the design. Using the shell directly is useful for finding out more information
about a specific command or playing with various options.
In addition to the actual synthesized gate-level netlist, the dc_synth.tcl also generates
several text reports. Reports usually have the rpt filename suffix. The following is a list
of the synthesis reports.
The synth_area.rpt report contains area information for each module in the design.
Figure 7.d shows a fragment from synth_area.rpt. We can use the synth_area.rpt report to
gain insight into how various modules are being implemented. We can also use the area
report to measure the relative area of the various modules.
You can find all these reports in the below location for your reference.
/packages/synopsys/setup/project_dc/synth/reports/
You can also look at command.log , in the synth directory, which will list all the
commands used in the current session.
Library(s) Used:
saed90nm_typ (File:
/packages/process_kit/generic/generic_90nm/updated_Oct2008/SAED_EDK90nm
/Digital_Standard_Cell_Library/synopsys/models/saed90nm_typ.db)
Number of ports: 5
Number of nets: 9
Number of cells: 5
Number of references: 3
The synth_cells.rpt contains the list of cells in the design, as you can see in Figure
4.e. From this report you can see the breakup of each cell area in the design.
Cell Count
-----------------------------------
Hierarchical Cell Count: 0
Hierarchical Port Count: 0
Leaf Cell Count: 5
-----------------------------------
Area
-----------------------------------
Combinational Area: 29.492001
Noncombinational Area: 64.512001
Net Area: 0.000000
-----------------------------------
Cell Area: 94.003998
Design Area: 94.003998
Design Rules
-----------------------------------
Total Number of Nets: 9
Nets With Violations: 0
-----------------------------------
Hostname: hafez.sfsu.edu
synth_timing.rpt - Contains critical timing paths
You can see below an example of a timing report dumped out from synthesis. As the last
line of Figure 7.f shows, this path meets timing. The report lists the critical path
of the design. The critical path is the slowest logic path between any two registers.
=> In the above example, the file will be empty since the gray counter
did not need any of the complex cells:
No implementations to report
No multiplexors to report
Below is the gate-level netlist output of the gray counter RTL code after synthesis.
AO22X1 U2 ( .IN1(gcc_out[1]), .IN2(n1), .IN3(en_count), .IN4(N8), .Q(n4) );
AO22X1 U3 ( .IN1(en_count), .IN2(n6), .IN3(N8), .IN4(n1), .Q(n5) );
INVX0 U4 ( .IN(en_count), .QN(n1) );
DFFARX1 \gcc_out_reg[0] ( .D(n5), .CLK(clk), .RSTB(reset_n), .Q(N8) );
DFFARX1 \gcc_out_reg[1] ( .D(n4), .CLK(clk), .RSTB(reset_n), .Q(gcc_out[1]),
.QN(n6) );
endmodule
## Give the path to the verilog files and define the WORK directory
## Create Constraints
create_clock clk -name ideal_clock1 -period 5
set_input_delay 2.0 -clock clk [remove_from_collection [all_inputs] clk]
set_output_delay 2.0 -clock clk [all_outputs]
set_max_area 0
## Compilation
## you can change medium to either low or high
compile -area_effort medium -map_effort medium
write_sdc const/gray_counter.sdc
exit
4A.0 Introduction
A fully optimized design is one which has met the timing requirements and occupies the
smallest area. The optimization can be done in two stages: one at the code level, the other
during synthesis. Optimization at the code level involves modifications to RTL code
that has already been simulated and tested for its functionality. This level of modification
to the RTL code is generally avoided, as it sometimes leads to inconsistencies between
simulation results before and after the modifications. However, there are certain standard
model optimization techniques that might lead to a better synthesized
design.
Model optimizations are important to a certain level, as the logic that is generated by the
synthesis tool is sensitive to the RTL code that is provided as input. Different RTL codes
generate different logic. Minor changes in the model might result in an increase or
decrease in the number of synthesized gates and also change its timing characteristics. A
logic optimizer reaches different endpoints for best area and best speed depending on the
starting point provided by a netlist synthesized from the RTL code. The different starting
points are obtained by rewriting the same HDL model using different constructs. Some of
the optimizations, which can be used to modify the model for obtaining a better quality
design, are listed below.
if A = '1' then
E := B + C;
else
E := B + D;
end if;

if A = '1' then
temp := C; -- A temporary variable introduced.
else
temp := D;
end if;
E := B + temp;
It is clear from the figure that one ALU has been removed, with a single ALU being
shared for both addition operations. However, a multiplexer is introduced at the inputs of
the ALU that contributes to the path delay. Earlier the timing path of the select signal
went through the multiplexer alone, but after resource sharing it goes through both the
multiplexer and the ALU.
B := R1 + R2;
…..
C <= R3 – (R1 + R2);
if (test)
A <= B & (C + D);
else
J <= (C + D) | T;
end if;
In the above code the common factor C + D can be placed outside the if statement, which
might result in the tool generating only one adder instead of two. Such minor changes,
if made by the designer, can cause the tool to synthesize better logic
and also enable it to concentrate on optimizing more critical areas.
Moving Code
In certain cases an expression might be placed within a for/while loop statement whose
value does not change through any iteration of the loop. Typically a synthesis tool
handles a for/while loop statement by unrolling it the specified number of times. In
such cases redundant code might be generated for that particular expression, causing
additional logic to be synthesized. This can be avoided if the expression is moved
outside the loop, thus optimizing the design. Such optimizations performed at a higher
level reduce the amount of redundant logic synthesized.
C := A + B;
…………
for c in 0 to 5 loop
……………
T := C – 6;
-- Assumption: C is not assigned a new value within the loop, thus the above expression
remains constant on every iteration of the loop.
……………
end loop;
The above code would generate six subtracters for the expression when only one is
necessary. Thus by modifying the code as given below we could avoid the generation of
unnecessary logic.
C := A + B;
…………
temp := C – 6; // A temporary variable is introduced
for c in 0 to 5 loop
……………
T := temp;
-- Assumption: C is not assigned a new value within the loop, thus the above expression
remains constant on every iteration of the loop.
……………
end loop;
Ex:
C := 4;
….
Y := 2 * C;
Computing the value of Y as 8 and assigning it directly within your code avoids the
unnecessary computation above. This method is called constant folding. The other
optimization, dead code elimination, removes those sections of code which are never
executed.
Ex.
A := 2;
….
if A = 3 then
B := C;
end if;
The above if statement would never be executed and thus should be eliminated from the
code. The logic optimizer performs these optimizations by itself, but nevertheless if the
designer optimizes the code accordingly, the tool's optimization time is reduced,
resulting in faster tool run times.
The usage of parentheses is critical to the design as the correct usage might result in
better timing paths.
Ex.
Result <= R1 + R2 - P + M;
The hardware generated for the above code is as given below in Figure 4 (a).
If the expression has been written using parentheses as given below, the hardware
synthesized would be as given in Figure 4 (b).
Result <= (R1 + R2) – (P - M);
For the optimization of design, to achieve minimum area and maximum speed, a lot of
experimentation and iterative synthesis is needed. The process of analyzing the design for
speed and area to achieve the fastest logic with minimum area is termed – design space
exploration.
For the sake of optimization, changing the HDL code may impact other blocks in the
design or test benches. For this reason, changing the HDL code to help synthesis is less
desirable and generally avoided. It is then the designer's responsibility to minimize the
area and meet the timing requirements through synthesis and optimization. The latter can
be accomplished using the compilation strategies described below.
The DC has three different compilation strategies. It is up to user discretion to choose the
most suitable compilation strategy for a design.
a) Top-down hierarchical compile method.
b) Time-budget compile method.
c) Compile-characterize-write-script-recompile (CCWSR) method.
Top-down hierarchical compile
Advantages
Only top level constraints are needed.
Better results due to optimization across entire design.
Disadvantages
Long compile time.
Incremental changes to the sub-blocks require complete re-synthesis.
Does not perform well, if design contains multiple clocks or generated clocks.
Time-budgeting compile
This process is best for properly partitioned designs with timing specifications
defined for each sub-block. Because timing requirements are specified for each block,
multiple synthesis scripts, one for each individual block, are produced. The synthesis is
usually performed bottom-up, i.e., starting at the lowest level and going up to the
top-most level. This method is useful for medium to very large designs and does not
require large amounts of memory.
Advantages
Design easier to manage due to individual scripts.
Incremental changes to sub-blocks do not require complete re-synthesis.
Compile-Characterize-Write-Script-Recompile
This is an advanced synthesis approach, useful for medium to very large designs that do
not have good inter-block specifications defined. It requires constraints to be applied at
the top level of the design, with each sub-block compiled beforehand. The sub-blocks are
then characterized using the top-level constraints. This in effect propagates the required
timing information from the top level to the sub-blocks. Performing a write_script on
the characterized sub-blocks generates the constraint file for each sub-block.
The constraint files are then used to re-compile each block of the design.
Advantages
Less memory intensive.
Good quality of results because of optimization between sub-blocks of the design.
Produces individual scripts, which may be modified by the user.
Disadvantages
The generated scripts are not easily readable.
It is difficult to achieve convergence between blocks.
Changes in lower-level blocks might require complete re-synthesis of the entire design.
Ex: Let's say moduleA has been synthesized. Now moduleB, which has two instantiations
of moduleA as U1 and U2, is being compiled. The compilation will stop with an error
message stating that moduleA is instantiated 2 times in moduleB. There are two methods
of resolving this problem.
You can set a dont_touch attribute on moduleA before synthesizing moduleB, or
uniquify moduleB. uniquify, a dc_shell command, creates unique definitions of multiple
instances. So for the above case it generates moduleA_u1 and moduleA_u2,
corresponding to instances U1 and U2 respectively.
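As a sketch, the two alternatives can be expressed in dc_shell as follows (module names as in the example above):

```tcl
# Option 1: prevent re-synthesis of the already-compiled sub-design
set_dont_touch moduleA

# Option 2: create unique copies of each moduleA instance inside moduleB
current_design moduleB
uniquify
```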
Flattening
Flattening reduces the design logic into a two-level, sum-of-products form, with few
logic levels between the input and output. This results in faster logic. It is recommended
for unstructured designs with random logic. The flattened design can then be structured
before final mapping optimization to reduce area. This is important, as flattening has a
significant impact on the area of the design. In general one should compile the design
using default settings (flatten and structure are set as false). If timing objectives are not
met, flattening and structuring should be employed. If the design is still failing its goals,
then just flatten the design without structuring it. The command for flattening is given
below:
dc_shell> set_flatten true
If the design is not timing critical and you want to minimize for area only, then set the
area constraints (set_max_area 0) and perform Boolean optimization. For all other cases
structure with respect to timing only.
Removing hierarchy
DC by default maintains the original hierarchy that is given in the RTL code. The
hierarchy is a logic boundary that prevents DC from optimizing across it.
Unnecessary hierarchy leads to cumbersome designs and synthesis scripts and also limits
DC optimization to within each boundary, preventing optimization across the hierarchy.
To allow DC to optimize across the hierarchy, one can use the following command:
dc_shell> ungroup -flatten -all
This allows DC to optimize the logic separated by boundaries as one logic block, resulting
in better timing and a more optimal solution.
DC by default tries to optimize for timing. Designs that are not timing critical but area
intensive can be optimized for area. This can be done by initially compiling the design
with a specification of area requirements but no timing constraints. In addition, by using
the dont_touch attribute on the high drive strength gates (which are larger in size and
used by default to improve timing), one can eliminate them, thus reducing the area
considerably. Once the design is mapped to gates, the timing and area constraints should
again be specified (normal synthesis) and the design re-compiled incrementally. The
incremental compile ensures that DC maintains the previous structure and does not bloat
the logic unnecessarily. The following points can be kept in mind for further area
optimization:
There are two kinds of timing issues that are important in a design: setup and hold timing
violations.
Setup Time: It indicates the time before the clock edge during which the data should be
valid i.e. it should be stable during this period and should not change. Any change during
this period would trigger a setup timing violation. Figure 4A.b illustrates an example with
setup time equal to 2 ns. This means that signal DATA must be valid 2 ns before the
clock edge; i.e. it should not change during this 2ns period before the clock edge.
Hold Time: It indicates the time after the clock edge during which the data should be
held valid i.e. it should not change but remain stable. Any change during this period
would trigger a hold timing violation. Figure 4A.b illustrates an example with hold time
equal to 1 ns. This means that signal DATA must be held valid 1 ns after the clock edge;
i.e. it should not change during the 1 ns period after the clock edge.
The synthesis tool automatically runs its internal static timing analysis engine to check
for setup and hold time violations on the paths that have timing constraints set on them.
It mostly uses the following two equations to check for the violations:
Tprop + Tdelay + Tsetup <= Tclk (setup check)
Tprop + Tdelay >= Thold (hold check)
Here Tprop is the propagation delay from the input clock to the output of the launching
device (mostly a flip-flop); Tdelay is the propagation delay across the combinational
logic through which the data arrives; Tsetup is the setup time requirement of the
capturing device; Thold is its hold time requirement; and Tclk is the clock period.
When the synthesis tool reports timing violations the designer needs to fix them. There
are three options for the designer to fix these violations.
1) Optimization using the synthesis tool: this is the easiest of the three options. A few of
the techniques have been discussed in the Optimization Techniques section above.
Register balancing
This command is particularly useful for designs that are pipelined. The command
reshuffles the logic from one pipeline stage to another. This allows extra logic to be
moved away from overly constrained pipeline stages to less constrained ones with
additional slack. The command is simply balance_registers.
The implementation type sim is only for simulation. Implementation types rpl, cla, and
clf are for synthesis; clf is the fastest implementation, followed by cla, the slowest being
rpl. If a compile with map_effort low is performed, the designer can manually set the
implementation using the set_implementation command; otherwise the selection will not
change from the current choice. If the map_effort is set to medium, Design Compiler
automatically chooses the appropriate implementation depending upon the
optimization algorithm. A map_effort of medium is suitable for better
optimization, or a manual setting can be used for better performance results.
Balancing heavy loading
Designs generally have certain nets with heavy fanout, creating a heavy load at a certain
point. A large load is difficult for a single net to drive. This leads to unnecessary delays
and thus timing violations. The balance_buffers command comes in handy to solve such
problems: this command makes Design Compiler create buffer trees to drive the large
fanout and thus balance the heavy load.
Microarchitectural Tweaks
Consider Figure 4A.c. Assuming a critical path exists from A to Q2, logic optimization
on combinational logic X, Y, and Z is difficult because X is shared by Y and Z.
We can duplicate the logic X as shown in Figure 4A.d. In this case Q1 and Q2 have
independent paths, and the path for Q2 can be better optimized by the tool to
ensure better performance.
Logic duplication can also be used in cases where a module has one signal arriving late
compared to other signals. The logic can be duplicated in front of the fast-arriving
signals such that the timing of all the signals is balanced. Figures 4A.e and 4A.f illustrate
this quite well. The signal Q might generate a setup violation, as it might be delayed by
the logic in front of it.
Figure 4A.f: Logic Duplication for balancing the timing between signals
When a designer knows for sure that a particular input signal arrives late, priority
encoding is a good bet. The signals arriving earlier can be given more priority
and thus encoded before the late-arriving signals.
It can be designed using five AND gates, with A and B at the first gate. The output of the
first gate is ANDed with C, the output of the second gate with D, and so on. This ensures
proper performance if signal F is the latest arriving and A is the earliest. If the
propagation delay of each AND gate were 1 ns, the output signal Q would
be valid only 5 ns after A is valid, but only 1 ns after signal F is valid. Multiplex decoding
is useful if all the input signals arrive at the same time, ensuring that the output
is valid at a faster rate. Thus multiplex decoding is faster than priority decoding when
all input signals arrive at the same time. In this case, for the boolean equation above,
each pair of inputs would be ANDed in parallel in the form A.B, C.D and E.F; these
outputs would then be ANDed again to get the final output. This ensures Q would
be valid in about 2 ns after A is valid.
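The two decoding structures can be sketched in Verilog as follows (a 6-input AND stands in for the boolean equation discussed above; the module name is illustrative):

```verilog
module decode_styles (input a, b, c, d, e, f,
                      output q_priority, q_parallel);
  // Priority (chained) form: one 2-input AND per added input, so the
  // delay grows linearly -- good when a..e arrive early and f arrives late.
  assign q_priority = ((((a & b) & c) & d) & e) & f;

  // Parallel (tree) form: pairs are ANDed simultaneously, then combined,
  // so the depth is logarithmic -- good when all inputs arrive together.
  assign q_parallel = (a & b) & ((c & d) & (e & f));
endmodule
```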
Since it is very difficult for the synthesis tool to find hardware with exact delays, all
absolute and relative timing declarations are ignored by the tools. Also, all signals are
assumed to be of maximum strength (strength 7). Boolean operations on x and z are not
permitted. The constructs are classified as follows.
<identifiers>
<continuous assignment>
>>,<<,?:,{}
assign (procedural and declarative), begin, end, case, casex, casez, endcase
default
disable
function, endfunction
if, else, else if
input, output, inout
wire, wand, wor, tri
integer, reg
macromodule, module
parameter
supply0, supply1
task, endtask
Constrained Constructs
*, /, % : supported only when both operands are constants, or the second operand is a power of 2
always : only edge-triggered events are supported
for : loop bounds must be static; the loop variable may only be indexed using + or -
Ignored Constructs
Unsupported constructs
<assignment with variable used as bit select on LHS of assignment>
<global variables>
===, !==
cmos,nmos,rcmos,rnmos,pmos,rpmos
deassign
defparam
event
force
fork,join
forever,while
initial
pullup,pulldown
release
repeat
rtran, tran, tranif0, tranif1
Synopsys provides a GUI front-end to Design Compiler called Design Vision which we
will use to analyze the synthesis results. You should avoid using the GUI to actually
perform synthesis since we want to use scripts for this. To launch Design Vision and read
in the synthesized design, move into the /project_dc/synth/ working directory and use the
following commands. The command “design_vision-xg” will open up a GUI.
% design_vision-xg
design_vision-xg> read_file -format ddc output/gray_counter.ddc
You can browse your design with the hierarchical view. Right click on the gray_counter
module and choose the Schematic View option [Figure 8.a], the tool will display a
schematic of the synthesized logic corresponding to that module. Figure 8.b shows the
schematic view for the gray counter module. You can see synthesized flip-flops in the
schematic view.
In the current gray_count design, there are no submodules. If there are submodules in the
design, it is sometimes useful to examine the critical path through a single submodule. To
do this, right click on the module in the hierarchy view and use the Characterize option.
Check the timing, constraints, and connections boxes and click OK. Now choose the
module from the drop-down list box on the toolbar (called the Design List). Choosing
Timing → Report Timing will provide information on the critical path through that
submodule given the constraints of the submodule within the overall design's context.
For more information on Design Vision, consult the Design Vision User Guide.
Why do we normally do Static Timing Analysis and not Dynamic Timing Analysis?
What is the difference between them?
Timing analysis can be done in both ways: static as well as dynamic. Dynamic
timing analysis requires a comprehensive set of input vectors to check the timing
characteristics of the paths in the design. Basically, it determines the full behavior of the
circuit for a given set of input vectors. Dynamic simulation can verify the functionality of
the design as well as its timing requirements. For example, if we have 100 inputs, then we
need 2 to the power of 100 simulation vectors to complete the analysis. The amount of
analysis is astronomical compared to static analysis.
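The vector-count blow-up mentioned above is easy to see with a quick sketch (one vector per input combination):

```python
# Exhaustive dynamic analysis needs one vector per input combination,
# so the vector count grows as 2**n for n inputs.
for n in (4, 16, 100):
    print(n, 2 ** n)
```

For 100 inputs this is already over 10^30 vectors, which is why exhaustive dynamic analysis is impractical for real designs.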
Static timing analysis checks every path in the design for timing violations without
checking the functionality of the design. This way, one can do timing and functional
analysis at the same time, but separately. It is faster than dynamic timing simulation because
there is no need to generate any kind of test vectors. That is why STA is the most popular
way of doing timing analysis.
The different kinds of paths when checking the timing of a design are as follows:
1. Input pin/port → Sequential Element
2. Sequential Element → Sequential Element
3. Sequential Element → Output pin/port
4. Input pin/port → Output pin/port
The static timing analysis tool performs the timing analysis in the following way:
1. STA Tool breaks the design down into a set of timing paths.
2. Calculates the propagation delay along each path.
3. Checks for timing violations (depending on the constraints e.g. clock) on the
different paths and also at the input/output interface.
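To make steps 2 and 3 concrete, here is a minimal sketch of a setup check on a single flop-to-flop path; all delay numbers are invented for illustration, not taken from any real library:

```python
# Setup check for one flop-to-flop path (all numbers invented for
# illustration; real values come from the cell library and extraction).
clock_period = 10.0                 # ns
clk_to_q = 0.5                      # launch flop clock-to-Q delay
gate_delays = [1.2, 0.8, 1.5]       # combinational gate delays on the path
net_delays = [0.3, 0.2, 0.4]        # interconnect (net) delays on the path
setup_time = 0.6                    # capture flop setup requirement

arrival = clk_to_q + sum(gate_delays) + sum(net_delays)
required = clock_period - setup_time
slack = required - arrival          # negative slack means a setup violation
print(arrival, required, slack)
```

The STA tool performs essentially this bookkeeping for every path in the design, using library-characterized delays instead of fixed numbers.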
You can learn about CTS (Clock Tree Synthesis) in more detail in the Physical Design part of this tutorial.
Clock Network Delay: A set of buffers are added in between the source of the clock to
the actual clock pin of the sequential element. This delay due to the addition of all these
buffers is defined as the Clock Network Delay. [Clock Network Delay is added to clock
period in Primetime]
Path Delay: When calculating path delay, the following has to be considered:
Clock Network Delay+ Clock-Q + (Sum of all the Gate delays and Net delays)
Global Clock Skew: It is defined as the difference between the smallest and the largest
clock network delay in the design.
Zero Skew: When the clock tree is designed such that the skew is zero, it is called
zero skew.
Local Skew: It is defined as the skew between the launch and capture flops. The worst
such skew is taken as the local skew.
Useful Skew: When delays are added only to specific clock paths such that the setup
time or hold time is improved, it is called useful skew.
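As a toy illustration of the skew definitions above, assuming made-up clock network delays per flop:

```python
# Clock network delay to each flop's clock pin (invented values, in ns).
clock_network_delay = {"ff1": 1.0, "ff2": 1.25, "ff3": 0.75, "ff4": 1.5}

# Global skew: longest minus shortest clock network delay in the design.
global_skew = max(clock_network_delay.values()) - min(clock_network_delay.values())

# Local skew: skew between a particular launch/capture flop pair.
local_skew = abs(clock_network_delay["ff1"] - clock_network_delay["ff2"])

print(global_skew, local_skew)
```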
What kind of model does the tool use to calculate the delay?
The tool uses a wire-load model. It is a statistical model: it consists of a table
which gives the capacitance and resistance of a net as a function of its fanout.
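A wire-load model can be pictured as a small fanout-indexed lookup table; the sketch below uses invented resistance and capacitance values:

```python
# Hypothetical wire-load table: fanout -> (net resistance in ohms,
# net capacitance in fF). Real tables come from the technology library.
wire_load_table = {1: (10.0, 2.0), 2: (18.0, 3.5), 3: (25.0, 5.0)}

def estimate_rc(fanout):
    """Look up estimated net R and C, clamping to the largest
    characterized fanout in the table."""
    key = min(fanout, max(wire_load_table))
    return wire_load_table[key]

print(estimate_rc(2))    # uses the fanout-2 entry
print(estimate_rc(10))   # clamps to the fanout-3 entry
```

Real library wire-load models also typically carry an area coefficient and a slope for extrapolating beyond the characterized fanouts.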
For more information please refer to the Primetime User Manual in the
packages/synopsys/ directory.
PrimeTime (PT) is a sign-off quality static timing analysis tool from Synopsys. Static
timing analysis or STA is without a doubt the most important step in the design flow. It
determines whether the design works at the required speed. PT analyzes the timing delays
in the design and flags violations that must be corrected.
PT, similar to DC, provides a GUI interface along with the command-line interface. The
GUI interface contains various windows that help analyze the design graphically.
Although the GUI interface is a good starting point, most users quickly migrate to using the command-line interface.
PT is a stand-alone tool that is not integrated under the DC suite of tools. It is a separate
tool, which works alongside DC. Both PT and DC have consistent commands, generate
similar reports, and support common file formats. In addition PT can also generate timing
assertions that DC can use for synthesis and optimization. PT’s command-line interface is
based on the industry standard language called Tcl. In contrast to DC’s internal STA
engine, PT is faster, takes up less memory, and has additional features.
6.6.2 Pre-Layout
After successful synthesis, the netlist obtained must be statically analyzed to check for
timing violations. The timing violations may consist of either setup and/or hold-time
violations. The design was synthesized with emphasis on maximizing the setup-time,
therefore you may encounter very few setup-time violations, if any. However, hold-time
violations will generally occur at this stage. This is due to the data arriving too fast
at the inputs of sequential cells with respect to the clock.
If the design is failing setup-time requirements, then you have no other option but to re-
synthesize the design, targeting the violating path for further optimization. This may
involve grouping the violating paths or over constraining the entire sub-block, which had
violations. However, if the design is failing hold-time requirements, you may either fix
these violations at the pre-layout level, or may postpone this step until after layout. Many
designers prefer the latter approach for minor hold-time violations (also used here), since
the pre-layout synthesis and timing analysis uses the statistical wire-load models and
fixing the hold-time violations at the pre-layout level may result in setup-time violations
for the same path, after layout. However, if the wire-load models truly reflect the post-
routed delays, then it is prudent to fix the hold-time violations at this stage. In any case, it
must be noted that gross hold-time violations should be fixed at the pre-layout level, in
order to minimize the number of hold-time fixes, which may result after the layout.
In the pre-layout phase, the clock tree information is absent from the netlist. Therefore, it
is necessary to estimate the post-route clock-tree delays upfront, during the pre-layout
phase in order to perform adequate STA. In addition, the estimated clock transition
should also be defined in order to prevent PT from calculating false delays (usually large)
for the driven gates. The cause of large delays is usually attributed to the high fanout
normally associated with the clock networks. The large fanout leads to slow input
transition times computed for the clock driving the endpoint gates, which in turn results
in PT computing unusually large delay values for the endpoint gates. To prevent this
situation, it is recommended that a fixed clock transition value be specified at the source.
The following commands may be used to define the clock, during the prelayout phase of
the design.
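A sketch of such a clock definition, written with standard SDC commands and the values described in the next paragraph (port name CLK assumed), would be:

```tcl
# Sketch only: standard SDC commands matching the values discussed below.
create_clock -period 20 [get_ports CLK]
set_clock_latency 2.5 [get_clocks CLK]
set_clock_transition 0.2 [get_clocks CLK]
set_clock_uncertainty -setup 1.2 [get_clocks CLK]
set_clock_uncertainty -hold 0.5 [get_clocks CLK]
```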
The above commands specify the port CLK as type clock having a period of 20ns, the
clock latency as 2.5ns, and a fixed clock transition value of 0.2ns. The clock latency
value of 2.5ns signifies that the clock delay from the input port CLK to all the endpoints
is fixed at 2.5ns. In addition, the 0.2ns value of the clock transition forces PT to use the
0.2ns value, instead of calculating its own. The clock skew is approximated with 1.2ns
specified for the setup-time, and 0.5ns for the hold-time. Using this approach during pre-
layout yields a realistic approximation to the post-layout clock network results.
[hkommuru@hafez.sfsu.edu] $ csh
[hkommuru@hafez.sfsu.edu] $ cd /asic_flow_setup/primetime
[hkommuru@hafez.sfsu.edu] $ source /packages/synopsys/setup/synopsys_setup.tcl
2. PT may be invoked in the command-line mode using the command pt_shell or in the
GUI mode through the command primetime as shown below.
Command-line mode:
> pt_shell
GUI-mode:
> primetime
[hkommuru@hafez.sfsu.edu]$ vi scripts/pre_layout_pt.tcl
3. Just like DC setup, you need to set the path to link_library and search_path
7. Now we can do the analysis of the design, as discussed in the beginning of this chapter.
In general, four types of analysis are performed on the design, as follows:
• From primary inputs to all flops in the design.
• From flop to flop.
• From flop to primary output of the design.
• From primary inputs to primary outputs of the design.
All four types of analysis can be accomplished by using the following commands:
pt_shell> report_timing -from [all_inputs] -max_paths 20 \
-to [all_registers -data_pins]
pt_shell> report_timing -from [all_registers -clock_pins] -max_paths 20 \
-to [all_registers -data_pins]
pt_shell> report_timing -from [all_registers -clock_pins] -max_paths 20 \
-to [all_outputs]
pt_shell> report_timing -from [all_inputs] \
-to [all_outputs] -max_paths 20
8. Reporting setup time and hold time. PrimeTime by default reports the setup time. You
can report the setup or hold time by specifying the -delay_type option as shown in the
figure below.
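For reference, the two variants of the command are (max selects the setup check, min the hold check):

```tcl
pt_shell> report_timing -delay_type max   ;# setup check (the default)
pt_shell> report_timing -delay_type min   ;# hold check
```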
11. You can save your session and come back later if you choose to.
Note: If the timing is not met, you need to go back to synthesis and re-synthesize to make
sure the timing is clean before you proceed to the next step of the flow, which is Physical
Implementation.
8.1 Introduction
As you have seen in the beginning of the ASIC tutorial, after getting an optimized
gate-level netlist, the next step is physical implementation. Before we actually go into the
details of IC Compiler, which is the physical implementation tool from Synopsys, this chapter
covers the basic concepts needed to do physical implementation. Also, below
you can see a more detailed flowchart of the ASIC flow.
8.2 Floorplanning
At the floorplanning stage, we have a netlist which describes the design and the various
blocks of the design and the interconnection between the different blocks. The netlist is
the logical description of the ASIC and the floorplan is the physical description of the
ASIC. Therefore, by doing floorplanning, we are mapping the logical description of the
design to the physical description. The main objectives of floorplanning are to minimize
a. Area
Floorplanning is a major step in the physical implementation process. The final timing and
quality of the chip depend on the floorplan design. The three basic elements of a chip are:
1. Standard Cells: The design is made up of standard cells.
2. I/O Cells: These cells carry signals to and from the chip.
3. Macros (Memories): Storing information using individual sequential elements takes
up a lot of area; a single flip-flop could take 15 to 20 transistors to store one bit.
Therefore special memory elements are used which store data efficiently and
occupy comparatively little space on the chip. These memory cells
are called macros. Examples of memory cells include 6T SRAM (Static Random
Access Memory) and DRAM (Dynamic Random Access Memory).
The above figure shows a basic floorplan. The following is the basic floorplanning steps
(and terminology):
1. Aspect Ratio (AR): It is defined as the ratio of the width to the length of the chip.
From the figure, we can say that the aspect ratio is x/y. In essence, it is the shape of
the rectangle used for placement of the cells and blocks. The aspect ratio should take
into account the number of routing resources available. If there are more
After Floorplanning is complete, check for DRC (Design Rule check) violations. Most
of the pre-route violations are not removed by the tool. They have to be fixed manually.
I/O Cells in the Floorplan: The I/O cells are nothing but the cells which interact in
between the blocks outside of the chip and to the internal blocks of the chip. In a
floorplan these I/O cells are placed in between the inner ring (core) and the outer ring
(chip boundary). These I/O cells are responsible for providing voltage to the cells in the
core. For example: the voltage inside the chip for 90nm technology is about 1.2 Volts.
The regulator supplies the voltage to the chip (Normally around 5.5V, 3.3V etc).
The next question which comes to mind is: why is this voltage higher than the voltage
inside the chip?
The regulator is placed on the board and supplies voltage to several other chips on the
board. There are many resistances and capacitances present on the board, and due to the
resulting voltage drops, the supply voltage needs to be higher. If the outside voltage were
exactly what the chip needs inside, the standard cells inside the chip would receive less
voltage than they actually need and the chip might not run at all.
So the next question is: how can chips communicate at different voltages?
The answer lies in the I/O cells. These I/O cells are level shifters, which convert a
voltage from one level to another. The input I/O cells reduce the voltage coming from the
outside to the voltage needed inside the chip.
Most of the time, the verilog netlist is in hierarchical form. By hierarchical, I mean
that the design is modeled on the basis of hierarchy: the design is broken down into different
sub-modules, and the sub-modules can be divided further. This makes it easier for the logic
designer to design the system. It is good to have a hierarchical netlist only until physical
implementation; during placement and routing, it is better to have a flattened netlist.
Flattening the netlist means that the various sub-blocks of the model have been
opened up and there are no more sub-blocks, just one top block. After you flatten
the netlist you cannot differentiate between the various sub-blocks, but the logical
hierarchy of the whole design is maintained. The reasons to do this are:
In a flat design flow, the placement and routing resources are always visible and
available.
Physical Design Engineers can perform routing optimization and can avoid congestion
to achieve a good quality design optimization. If the conventional hierarchical flow is
used, then it can lead to sub-optimal timing for critical paths traveling through the blocks
and for critical nets routed around the blocks.
The following gives an example of a netlist in hierarchical form as well as in
flattened form:
module sub1 (a, out1);
input a;
output out1;
wire n1;
endmodule

module sub2 (b, outb);
input b;
output outb;
endmodule

module top (in1, out1);
input in1;
output out1;
wire topn1;
sub1 U1 (.a(in1), .out1(topn1));
sub2 U2 (.b(topn1), .outb(out1));
endmodule
In verilog, the instance name of each module is unique. In the flattened netlist, an
instance name becomes the top-level instance name/lower-level instance name, and so on.
Also, the input and output ports of the sub-modules get lost. In the above example, the
sub-module ports a, out1, b, and outb are lost.
The above hierarchical model, when converted to the flattened netlist, will look like this:
module top (in1, out1);
input in1;
output out1;
wire topn1;
wire \U1/n1 ;
endmodule
Placement is a step in the physical implementation process where each standard cell
is assigned a position in a row. Space is set aside for the interconnect to
each logic/standard cell. After placement, we can obtain accurate estimates of the
capacitive load each standard cell must drive. The tool places these cells based on the
algorithms it uses internally. It is the process of placing the design cells in the
floorplan in the most optimal way.
What does the placement algorithm want to optimize?
The main aims of the placement algorithm are:
1. Making the chip as dense as possible (area constraint)
2. Minimizing the total wire length (reducing the length of critical nets)
Min-Cut Algorithm
This is the most popular algorithm for placement. This method uses successive
partitioning of the block. It performs the following steps:
1. Cuts the placement area into two pieces, called bins, and counts the number of
nets crossing the cut line. This count is the cost function to be optimized: the
lower the cost function, the more optimal the solution.
2. Swaps the logic cells between these bins to minimize the cost function.
3. Repeats the process from step 1, cutting smaller pieces, until all the logic cells are
placed and the best placement option is found.
The cost function depends not only on the number of crossings but also on various
other factors, such as the length of each net, congestion, signal integrity issues, etc.
The size of the bin can vary from a bin size equal to the base cell to a bin size
that would hold several logic cells. We can start with a large bin size to get a rough
placement, and then reduce the bin size to get the final placement.
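The steps above can be sketched as a toy script; the four-cell netlist and the simple greedy swap below are hypothetical stand-ins for what a real placer does at much larger scale:

```python
# Toy sketch of one min-cut partitioning level (invented 4-cell netlist).
# The cost function is the number of nets crossing the cut.

def crossings(nets, bin_a):
    """Cost function: count nets that have cells in both bins."""
    return sum(
        1
        for net in nets
        if any(c in bin_a for c in net) and any(c not in bin_a for c in net)
    )

def min_cut_step(cells, nets):
    """Bisect the cells, then greedily swap cell pairs across the cut
    while the number of crossing nets decreases."""
    bin_a = set(cells[: len(cells) // 2])
    improved = True
    while improved:
        improved = False
        for a in sorted(bin_a):
            for b in sorted(set(cells) - bin_a):
                trial = (bin_a - {a}) | {b}
                if crossings(nets, trial) < crossings(nets, bin_a):
                    bin_a = trial          # accept the improving swap
                    improved = True
                    break
            if improved:
                break
    bin_b = [c for c in cells if c not in bin_a]
    return sorted(bin_a), sorted(bin_b)

cells = ["u1", "u2", "u3", "u4"]
nets = [("u1", "u3"), ("u2", "u4")]
print(min_cut_step(cells, nets))   # connected cells end up in the same bin
```

Here the initial cut separates u1 from u3 and u2 from u4 (two crossings); one swap brings each net inside a single bin, driving the cost to zero.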
8.5 Routing
After the floorplanning and placement steps in the design, routing needs to be done.
Routing connects the various blocks in the chip with one another; until now, the blocks
were only placed on the chip. Routing is split into two steps:
1. Global routing: It basically plans the overall connections between all the blocks
and the nets. Its main aim is to minimize the total interconnect length, minimize
the critical path delay. It determines the track assignments for each interconnect.
The chip is divided into small blocks. These small blocks are called routing
bins. The size of the routing bin depends on the algorithm the tool uses. Each
routing bin is also called a gcell. The size of this gcell depends on the tool.
San Francisco State University Nano-Electronics & Computing Research Center 100
Each gcell has a finite number of horizontal and vertical tracks. Global routing
assigns nets to specific gcells but it does not define the specific tracks for each
of them. The global router connects two different gcells from the centre point
of each gcell.
Track Assignment: The global router keeps track of how many
interconnections are going in each direction; this is the routing
demand. The number of routing layers available depends on the design,
and the larger the die size, the greater the number of routing tracks. Each routing
layer has a minimum width and spacing rule, and its own routing capacity.
For example: in a 5-metal-layer design, if Metal 1, 4, and 5 are partially used up for
inter-cell connections, pin, VDD, and VSS connections, the only layers which are
100% routable are Metal 2 and Metal 3. If the routing demand goes over the
routing supply, it causes congestion. Congestion leads to DRC errors and
slow runtime.
2. Detailed Routing: In this step, the actual connection between all the nets takes
place. It creates the actual via and metal connections. The main objective of
detailed routing is to minimize the total area, wire length, delay in the critical
paths.
It specifies the specific tracks for the interconnection; each layer has its
own routing grid, rules. During the final routing, the width, layer, and exact
location of the interconnection are decided.
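The congestion condition described under global routing reduces to a per-gcell comparison of routing demand against supply; a minimal sketch with invented numbers:

```python
# Tracks available (supply) vs. nets assigned (demand) per gcell;
# a gcell is congested when demand exceeds supply. Numbers are invented.
supply = {"g1": 8, "g2": 8, "g3": 8}
demand = {"g1": 5, "g2": 11, "g3": 8}

congested = [g for g in supply if demand[g] > supply[g]]
print(congested)   # only g2 is over capacity
```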
After detailed routing is complete, the exact length and position of each interconnect
for every net in the design are known. The parasitic capacitance and resistance can now
be extracted to determine the actual delays in the design. The parasitic extraction is done by
extraction tools. This information is back-annotated, and the timing of the design is then
calculated using the actual delays by the static timing analysis tool.
After timing is met and all other verification, such as LVS, is performed, the design is
sent to the foundry to manufacture the chip.
8.6 Packaging
Depending on the type of packaging of the chip, the I/O cells, pad cells are designed
differently during the Physical Implementation. There are two types of Packaging style:
a. Wire-bond: The connections in this technique are real wires. The underside of the
die is first fixed in the package cavity. A mixture of epoxy and a metal (aluminum,
silver or gold) is used to ensure a low electrical and thermal resistance between the
die and the package. The wires are then bonded one at a time to the die and the
package. Below is an illustration of wire-bond packaging.
b. Flip-Chip: Flip-chip describes the method of electrically connecting the die to the
package carrier. This is a direct chip-attach technology which accommodates dies that
have bond pads placed anywhere on the top surface. Solder balls are
deposited on the die bond pads, usually while the dies are still on the wafer, and at
corresponding locations on the board substrate. The upside-down die (flip-chip) is then
aligned to the substrate. The advantages of this type of packaging are very short
connections (low inductance) and high package density. The picture below is an
illustration of flip-chip packaging.
Figure 8.6.b : Flip Chip Example
8.7.1 Introduction
The physical design stage of the ASIC design flow is also known as the “place and route”
stage. This is based upon the idea of physically placing the circuits, which form logic
gates and represent a particular design, in such a way that the circuits can be fabricated.
This is a generic, high level description of the physical design (place/route) stage. Within
the physical design stage, a complete flow is implemented as well. This flow will be
described more specifically, and as stated before, several EDA companies provide
software or CAD tools for this flow. Synopsys® software for the physical design process
is called IC Compiler. The overall goal of this tool/software is to combine the inputs of a
gate-level netlist and standard cell library along with timing constraints to create a placed
and routed layout. This layout can then be fabricated, tested, and implemented into the
overall system that the chip was designed for.
This layout view or depiction of the logical function contains the drawn mask layers
required to fabricate the design properly. However, the place and route tool does not
require such level of detail during physical design. Only key information such as the
location of metal and input/output pins for a particular logic function is needed. This
representation used by ICC is considered to be the abstract version of the layout. Every
desired logic function in the standard cell library will have both a layout and abstract
view. Most standard cell libraries will also contain timing information about the function,
such as cell delay and input pin capacitance, which is used to calculate output loads. This
timing information comes from detailed parasitic analysis of the physical layout of each
function at different process, voltage, and temperature (PVT) points. This data is
contained within the standard cell library and is in a format that is usable by ICC. This
allows ICC to be able to perform static timing analysis during portions of the physical
design process. It should be noted that the physical design engineer may or may not be
involved in the creation of the standard cell library, including the layout, abstract, and
timing information. However, the physical design engineer is required to understand what
common information is contained within the libraries and how that information is used
during physical design. Other common information about standard cell libraries is the
fact that the height of each cell is constant among the different functions. This common
height will aid in the placement process since they can now be linked together in rows
across the design. This concept will be explained in detail during the placement stage of
physical design.
3. The third of the main inputs into ICC are the design constraints. These constraints are
identical to those which were used during the front-end logic synthesis stage prior to
physical design. These constraints are derived from the system specifications and
implementation of the design being created. Common constraints among most designs
include clock speeds for each clock in the design as well as any input or output delays
associated with the input/output signals of the chip. These same constraints used during
logic synthesis are used by ICC so that timing will be considered during each stage of
place and route. The constraints are specific for the given system specification of the
design being implemented.
In the IC Compiler tutorial example below, we will place and route the fifo design
synthesized earlier.
STEPS
1. As soon as you log into your engr account, at the command prompt, please type “csh
“as shown below. This changes the type of shell from bash to c-shell. All the commands
work ONLY in c-shell.
[hkommuru@hafez ]$csh
This creates the directory structure shown below. It will create a directory called
“asic_flow_setup”, under which it creates the following directories:
asic_flow_setup
src/ : for verilog code/source code
vcs/ : for vcs simulation for counter example
synth_graycounter/ : for synthesis of graycounter example
synth_fifo/ : for fifo synthesis
pnr_fifo/ : for Physical design of fifo design example
extraction/: for extraction
pt/: for primetime
verification/: final signoff check
The “asic_flow_setup” directory will contain all generated content including, VCS
simulation, synthesized gate-level Verilog, and final layout. In this course we will always
try to keep generated content from the tools separate from our source RTL. This keeps
our project directories well organized, and helps prevent us from unintentionally
modifying the source RTL. There are subdirectories in the project directory for each
major step in the ASIC Flow tutorial. These subdirectories contain scripts and
configuration files for running the tools required for that step in the tool flow. For this
part of the tutorial we will work mainly in the pnr_fifo directory.
3. Please source “synopsys_setup.tcl”, which sets all the environment variables necessary
to run the Synopsys tools.
Please source it at the unix prompt as shown below.
Please note: You have to do all three steps above every time you log in.
At the unix prompt type “icc_shell “, it will open up the icc window.
[hkommuru@hafez.sfsu.edu] $icc_shell
Next, to open the GUI, type “gui_start”; it opens up the GUI window as shown on the next page.
icc_shell > gui_start
Before a design can be placed and routed within ICC, the environment for the design
needs to be created. The goal of the design setup stage in the physical design flow is to
prepare the design for floorplanning. The first step is to create a design library. Without a
design library, the physical design process using will not work. This library contains all
of the logical and physical data that will need. Therefore the design library is also
referenced as the design container during physical design. One of the inputs to the design
library which will make the library technology specific is the technology file.
4.a Setting up the logical libraries. The below commands will set the logical libraries and
define VDD and VSS
4.b Creating the Milkyway database
icc_shell> create_mw_lib
4.d Read in the gate-level synthesized verilog netlist. This opens up a layout window, which
contains the layout information as shown below. You can see all the cells in the design at
the bottom, since we have not initialized the floorplan or done any placement yet.
4.e Uniquify the design by using the uniquify_fp_mw_cel command. The Milkyway
format does not support multiply instantiated designs. Before saving the design in
Milkyway format, you must uniquify the design to remove multiple instances.
icc_shell> uniquify_fp_mw_cel
4.f Link the design by using the link command (or by choosing File > Import > Link
Design in the GUI).
icc_shell> link
4.g Read the timing constraints for the design by using the read_sdc command (or by
choosing File > Import > Read SDC in the GUI).
FLOORPLANNING
Open file /scripts/floorplan_icc.tcl
You can see the floorplan size and shape in the layout window. Since we are still in the
floorplan stage, all the cells in the design are outside of the floorplan. You can change the
above options to play around with the floorplan size.
6. Connect the power and ground pins with the commands below:
derive_pg_connection -power_net $MW_POWER_NET -power_pin $MW_POWER_PORT -ground_net $MW_GROUND_NET -ground_pin $MW_GROUND_PORT
derive_pg_connection -power_net $MW_POWER_NET -ground_net $MW_GROUND_NET -tie
7. If no pin constraints are given, ICC automatically places the pins evenly around the
boundary of the floorplan. You can constrain the pins around the boundary [the blue
pins in the above figure] using a TDF file; see the file /const/fifo.tdf.
In the tutorial example, the tool is placing the pins automatically.
8. Power Planning : First we need to create rectangular power ring around the floorplan .
Create VSS power ring
icc_shell>create_rectangular_rings -nets {VSS} -left_offset 0.5 -left_segment_layer
M6 -left_segment_width 1.0 -extend_ll -extend_lh -right_offset 0.5 -right_segment_layer
M6 -right_segment_width 1.0 -extend_rl -extend_rh -bottom_offset 0.5 -
bottom_segment_layer M7 -bottom_segment_width 1.0 -extend_bl -extend_bh -
top_offset 0.5 -top_segment_layer M7 -top_segment_width 1.0 -extend_tl -extend_th
10. Save the design.
PLACEMENT
open /scripts/place_icc.tcl
During the optimization step, the place_opt command introduces buffers and inverters
to fix timing and DRC violations. However, this buffering strategy is local to some critical
paths. The buffers and inverters that are inserted can become excess later because critical
paths change during the course of optimization. You can reduce the excess buffer and
inverter counts after place_opt by using the set_buffer_opt_strategy command.
11. Go to the Layout window, Placement → Core Placement and Optimization. A new
window opens up as shown below. There are various options; you can select whichever
options you want and click OK, and the tool will do the placement. Alternatively, you can
run the command in icc_shell. Below is an example with the congestion option.
When you want to add area recovery, execute :
# place_opt -area_recovery -effort low
# When the design has congestion issues, you have following choices :
# place_opt -congestion -area_recovery -effort low # for medium effort congestion
removal
# place_opt -effort high -congestion -area_recovery # for high eff cong removal
## What commands do you need when you want to reduce leakage power ?
# set_power_options -leakage true
# place_opt -effort low -area_recovery -power
## What commands do you need when you want to reduce dynamic power ?
# set_power_options -dynamic true -low_power_placement true
# read_saif -input < saif file >
# place_opt -effort low -area_recovery -power
# Note : option -low_power_placement enables the register clumping algorithm in
# place_opt, whereas the option -dynamic enables the
# Gate Level Power Optimization (GLPO)
## When you want to do scan opto, leakage opto, dynamic opto, and you have congestion
issues,
## use all options together :
# read_def < scan def file >
# set_power_options -leakage true -dynamic true -low_power_placement true
# place_opt -effort low -congestion -area_recovery -optimize_dft -power -num_cpus
12. After the placement is done, all the cells are placed in the design, and it will look
like the window below.
13. You can report the following information after the placement stage.
### Report
icc_shell>report_placement_utilization > output/fifo_place_util.rpt
icc_shell>report_qor_snapshot > output/fifo_place_qor_snapshot.rpt
icc_shell>report_qor > output/fifo_place_qor.rpt
After placement, if you look at fifo_cts.setup.rpt and fifo_cts.hold.rpt in the reports
directory, you will see that the design meets timing.
CLOCK TREE SYNTHESIS
Before doing the actual CTS, you can set various optimization options. In the Layout
window, click on "Clock"; you will see various options, any of which you can set before
running CTS. Click on Clock > Core CTS and Optimization. By default it runs at low
effort, so you need to change it to high effort.
## clock_opt -only_psyn
## clock_opt -sizing
## hold_time fix
## clock_opt -only_hold_time
ROUTING
open scripts/route_icc.tcl
14. In the Layout window, click on Route > Core Routing and Optimization; a new
window will open, as shown below.
You can select various options: run all the routing steps in one go, or do global
routing first, then detail routing, then the optimization steps. It is up to you.
icc_shell> route_opt
The command below does not include the optimization steps. You can, however, do an
incremental optimization by selecting the incremental mode in the above window after
route_opt has completed. In the view of the window after routing is complete, you can
see that there are no DRC violations, that is, the routing is clean.
POST ROUTE OPTIMIZATION STEPS
16. Go to the Layout window, Route > Verify Layout. A new window opens, as shown
below; click OK.
The results are clean, as you can see in the window below.
If the results are not clean, you may have to do post-route optimization steps, such as
incremental routing followed by another verify and clean-up pass.
EXTRACTION
9.0 Introduction
In general, almost all layout tools are capable of extracting the layout database using
various algorithms. These algorithms define the granularity and the accuracy of the
extracted values. Depending upon the chosen algorithm and the desired accuracy, the
following types of information may be extracted:
1. Detailed parasitics in DSPF or SPEF format.
2. Reduced parasitics in RSPF or SPEF format.
3. Net and cell delays in SDF format.
4. Net delay in SDF format + lumped parasitic capacitances.
The DSPF (Detailed Standard Parasitic Format) contains RC information of
each segment (multiple R’s and C’s) of the routed netlist. This is the most accurate form
of extraction. However, due to long extraction times on a full design, this method is not
practical. This type of extraction is usually limited to critical nets and clock trees of the
design.
The RSPF (Reduced Standard Parasitic Format) represents RC delays in terms of a pi
model (2 C’s and 1 R). The accuracy of this model is less than that of DSPF, since it does
not account for multiple R’s and C’s associated with each segment of the net. Again, the
extraction time may be significant, thus limiting the usage of this type of information.
Target applications are critical nets and small blocks of the design. Both detailed and
reduced parasitics can be represented by OVI’s (Open Verilog International) Standard
Parasitic Exchange Format (SPEF). The last two (number 3 and 4) are the most common
types of extraction used by designers. Both utilize the SDF format. However, there is a
major difference between the two: number 3 uses the SDF to represent both the cell and
net delays, whereas number 4 uses the SDF to represent only the net delays. The lumped
parasitic capacitances are generated separately. Some layout tools generate the lumped
parasitic capacitances in the Synopsys set_load format, thus facilitating direct back
annotation to DC or PT.
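The practical difference between detailed and lumped parasitics can be seen in a small calculation. The sketch below is illustrative only: the R and C values are invented, and real extractors use more sophisticated models. It computes the Elmore delay of a net modeled as a distributed RC ladder (the DSPF-style view, with an R and C per segment) and compares it against a single lumped-RC estimate.

```python
# Illustrative only: the R/C values below are invented, not from a real extraction.

def elmore_delay(segments):
    """Elmore delay of an RC ladder: each capacitor contributes
    (resistance between it and the driver) * (its capacitance)."""
    delay = 0.0
    upstream_r = 0.0
    for r, c in segments:
        upstream_r += r          # total resistance from the driver to this node
        delay += upstream_r * c
    return delay

# Four segments of a routed net: (R in ohms, C in farads) per segment.
segments = [(50.0, 5e-15)] * 4

detailed = elmore_delay(segments)        # distributed, DSPF-like view
total_r = sum(r for r, _ in segments)
total_c = sum(c for _, c in segments)
lumped = total_r * total_c               # pessimistic single lumped-RC view

print(f"distributed Elmore delay: {detailed * 1e12:.1f} ps")   # 2.5 ps
print(f"lumped RC estimate:       {lumped * 1e12:.1f} ps")     # 4.0 ps
```

The distributed model gives a smaller (and more realistic) delay than the pessimistic lumped estimate, which is one reason detailed extraction is reserved for critical nets.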
A.1 Test Techniques
A.1.1 Issues faced during testing
1. Consider a combinational circuit which has N inputs. To validate the circuit, we need
to exhaustively apply all possible input test vectors and observe the responses. Therefore,
for N inputs we need to generate 2^N test vectors. If the number of inputs is large, we
can still manage it, but it takes a long time.
Test vector: A set of input vectors to test the system.
2. Now consider a sequential circuit. The output of a sequential circuit depends on the
inputs as well as the state value. To test such a circuit exhaustively would take an
extremely long time and is practically impossible.
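The blow-up described in point 1 is easy to make concrete: N inputs require 2^N exhaustive vectors. A short illustrative sketch (not part of any tool flow):

```python
from itertools import product

def exhaustive_vectors(n_inputs):
    """All 2**n_inputs input combinations for a combinational circuit."""
    return list(product([0, 1], repeat=n_inputs))

# A 3-input circuit needs 2**3 = 8 vectors -- easy.
print(len(exhaustive_vectors(3)))   # 8

# A 64-input circuit would need 2**64 vectors, which is hopeless:
print(2 ** 64)                      # 18446744073709551616
```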
To overcome the above issue, a scan-based methodology was introduced.
It is easy to test combinational circuits using the above set of rules. For sequential
circuits to be testable in the same way, we need to replace the flip-flops (FF) with
'Scan-FFs'. These scan flip-flops are a special type of flip-flop; they can control and
also observe the logic of the circuit. There are two methodologies:
1. Partial Scan: only some flip-flops are changed to Scan-FFs.
2. Full Scan: all flip-flops in the circuit are changed to these special Scan-FFs.
This means that we can test the circuit 100%.
Typical designs might have hundreds of scan chains...
The number of scan chains depends on the clock domains. (Normally it is not preferable
to mix different clocks within one scan chain.)
Example: if there are 10 clock domains, then the minimum number of scan chains is 10.
Scan chains test the logic of the combinational gates.
Scan chains test the sequential flip-flops through the vectors that pass through them.
Any mismatch in the value of the vectors can be found.
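Conceptually, each scan chain turns its flip-flops into one long shift register: a vector is shifted in, one capture cycle exercises the combinational logic, and the captured values are shifted out and compared with the expected response. The toy Python model below illustrates that sequence; the three-gate "logic" function is invented for the example, and real scan testing is of course done in hardware by the tester.

```python
def scan_test(comb_logic, stimulus, expected):
    """Toy scan test: shift a stimulus into the chain, run one capture
    cycle of the combinational logic, shift out and compare."""
    flops = list(stimulus)          # shift-in: the chain now holds the stimulus
    captured = comb_logic(flops)    # capture: logic outputs are latched back
    return captured == expected     # shift-out: tester compares the response

# Invented combinational block driven by three scan flip-flops.
def logic(ffs):
    a, b, c = ffs
    return [a ^ b, b & c, a | c]

print(scan_test(logic, [1, 0, 1], [1, 0, 1]))   # True  -> chip passes
print(scan_test(logic, [1, 1, 0], [1, 0, 1]))   # False -> mismatch found
```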
Manufacturing tests: These include functional test and performance test. Functional
test checks whether the manufactured chip is working according to what is actually
designed and performance test checks the performance of the design (speed).
After the chip comes back from the foundry, it sits on a load board to be tested. The
equipment that tests these chips is called Automatic Test Equipment (ATE).
The socket is connected to the board, and the chip is placed on the socket by a
mechanical arm.
A test program tells the ATE what kind of vectors need to be loaded. Once the
vectors are loaded, the logic is computed and the output vectors can be observed on the
screen.
The output pattern should match the expected output.
When an error is found, you can figure out exactly which flip-flop output is not right.
Through the Automatic Test Pattern Generator (ATPG), we can point out the erroneous FF.
Failure analysis investigates why the chip failed; it is a whole different field.
Fault Coverage: tells us, for a given set of vectors, how many faults are covered.
Test Coverage: the coverage of the testable logic, meaning how much of the testable
circuit can be tested.
The latest upcoming technology tests the chips at the wafer level (called wafer sort,
done with probes).
2. Model Checking: Verifies that the implementation satisfies the properties of the
design. Model checking is used early in the design creation phase to uncover functional
bugs.
Compare Point (Cadence nomenclature): defined as any input of a sequential element and
any primary output port.
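At each compare point, equivalence checking proves that two versions of the logic compute the same function. For a small combinational cone this can even be done by brute force; the sketch below (illustrative Python, not an EDA tool) shows a NAND-only implementation of XOR matching the reference by enumerating all inputs:

```python
from itertools import product

def xor_ref(a, b):
    """Reference (RTL-level) function."""
    return a ^ b

def xor_nand(a, b):
    """The same function implemented only with 2-input NAND gates."""
    nand = lambda x, y: 1 - (x & y)
    t = nand(a, b)
    return nand(nand(a, t), nand(b, t))

# Exhaustive comparison at the compare point (the single output):
equivalent = all(xor_ref(a, b) == xor_nand(a, b)
                 for a, b in product([0, 1], repeat=2))
print(equivalent)   # True
```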
In the EDA industry, a library is defined as a collection of cells (gates). These cells
are called standard cells. There are different kinds of libraries, used at different
steps in the whole ASIC flow. All the libraries used contain standard cells. The libraries
contain the description of these standard cells: number of inputs, logic functionality,
propagation delay, etc. The representation of each standard cell in each library is
different; no single library describes all aspects of the standard cells. For example,
the library used for timing analysis contains the timing information of each of the
standard cells, while the library used during physical implementation contains the
geometry rules.
Standard Cell Libraries determine the overall performance of the synthesized logic. The
library usually contains multiple implementations of the same logic-function, differing by
area and speed. For example, a few of the basic standard cells could be a NOR gate, a
NAND gate, an AND gate, etc. A standard cell could also be a combination of the basic
gates. Each library can be designed to meet specific design constraints. The following
are some library formats used in the ASIC flow:
SPICE is a circuit simulator used to analyze the behavior of circuits. It is mostly used
in the design of analog and mixed-signal ICs. The input to SPICE is basically a netlist;
the tool analyzes the netlist information and performs the requested analyses.
(INTERCONNECT A.INV8.OUT B.DFF1.Q (:0.6:) (:0.6:))))
In this example the rising and falling delay is 60 ps (equal to 0.6 units multiplied by
the time scale of 100 ps per unit specified in a TIMESCALE construct). The delay is
specified between the output port of an inverter with instance name A.INV8 in block A
and the Q input port of a D flip-flop (instance name B.DFF1) in block B.
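The arithmetic in the example is simply the SDF delay value times the timescale. A minimal sketch (the 0.6-unit value and 100 ps timescale are the ones stated above):

```python
def sdf_delay_ps(units, timescale_ps):
    """Convert an SDF delay value, given in TIMESCALE units, to picoseconds."""
    return units * timescale_ps

# (:0.6:) with TIMESCALE 100ps -> 0.6 * 100 = 60 ps
print(f"{sdf_delay_ps(0.6, 100):.0f} ps")   # 60 ps
```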