Compilers: Basic Compiler Functions
[Fig. 1 Parse tree for the statement READ (VALUE): the non-terminal < id-list > appears above the id {VALUE}.]
GRAMMARS
A grammar for a programming language is a formal description of the syntax of programs and individual statements written in the language. The grammar does not describe the semantics or meaning of the various statements. To differentiate between syntax and semantics, consider two assignment statements that are syntactically identical, such as I := J + K and X := Y + Z: if the variables in the first statement are integers and those in the second are reals, the two statements have quite different meanings and require different object code, even though their syntax is the same.
Example: ALPHA is an < id-list > that consists of a single id (see rule 6 of the grammar in fig. 5). ALPHA, BETA is an < id-list > that consists of another < id-list > (ALPHA), followed by a comma, followed by an id (BETA).
Tree: Also called a parse tree or syntax tree. It is convenient to display the analysis of a source statement in terms of a grammar as a tree.
Example: READ (VALUE)
Grammar rule: < read > ::= READ ( < id-list > )
Example: Assignment statements:
SUM := 0 ;
SUM := SUM + VALUE ;
SUM := SUM - VALUE ;
[Fig. 4 Parse trees for assignment statements: parts (a)-(d) show the trees for the statements above, with id and ( < exp > ) among the alternatives for < factor >.]
For the statement VARIANCE := SUMSQ DIV 100 - MEAN * MEAN ; the parse tree is drawn using the simplified Pascal grammar shown in fig. 5.
1. < prog > ::= PROGRAM < prog-name > VAR < dec-list > BEGIN < stmt-list > END.
2. < prog-name > ::= id
3. < dec-list > ::= < dec > | < dec-list > ; < dec >
4. < dec > ::= < id-list > : < type >
5. < type > ::= INTEGER
6. < id-list > ::= id | < id-list > , id
7. < stmt-list > ::= < stmt > | < stmt-list > ; < stmt >
8. < stmt > ::= < assign > | < read > | < write > | < for >
9. < assign > ::= id := < exp >
10. < exp > ::= < term > | < exp > + < term > | < exp > - < term >
11. < term > ::= < factor > | < term > * < factor > | < term > DIV < factor >
12. < factor > ::= id | int | ( < exp > )
13. < read > ::= READ ( < id-list > )
14. < write > ::= WRITE ( < id-list > )
15. < for > ::= FOR < index-exp > DO < body >
16. < index-exp > ::= id := < exp > TO < exp >
17. < body > ::= < stmt > | BEGIN < stmt-list > END
Fig. 5 Simplified Pascal Grammar
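Such a rule set can be encoded directly as data. The following Python sketch (illustrative only, not part of the original text) shows one possible encoding of a few of the rules of fig. 5; each non-terminal maps to a list of alternatives, and each alternative is a list of symbols.

# Terminals are plain strings; non-terminals are wrapped in <...>.
GRAMMAR = {
    "<prog>":      [["PROGRAM", "<prog-name>", "VAR", "<dec-list>",
                     "BEGIN", "<stmt-list>", "END."]],
    "<prog-name>": [["id"]],
    "<dec-list>":  [["<dec>"], ["<dec-list>", ";", "<dec>"]],
    "<dec>":       [["<id-list>", ":", "<type>"]],
    "<type>":      [["INTEGER"]],
    "<id-list>":   [["id"], ["<id-list>", ",", "id"]],
    "<stmt-list>": [["<stmt>"], ["<stmt-list>", ";", "<stmt>"]],
    "<stmt>":      [["<assign>"], ["<read>"], ["<write>"], ["<for>"]],
    "<read>":      [["READ", "(", "<id-list>", ")"]],
}

# Example: the single right-hand side that derives a READ statement.
print(GRAMMAR["<read>"])   # [['READ', '(', '<id-list>', ')']]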
[Parse tree for the entire program of fig. 1: the root < prog > expands to PROGRAM < prog-name > VAR < dec-list > BEGIN < stmt-list > END. , with subtrees for the declarations and for each statement, including the < assign > for {VARIANCE} and the < read > for READ (VALUE).]
[Parse tree for the < id-list > ALPHA, BETA, GAMMA: the < id-list > expands left-recursively, adding one id at each step.]
2. Draw parse trees, according to the grammar in fig. 5, for the following < exp >s:
(a) ALPHA + BETA
[The answer parse trees, built from < exp >, < term >, < factor > and the ids {ALPHA}, {BETA}, {GAMMA}, are not reproduced here.]
3. Suppose the rules of the grammar for < exp > and < term > were interchanged, as follows:
< exp > ::= < term > | < exp > * < term > | < exp > DIV < term >
< term > ::= < factor > | < term > + < factor > | < term > - < factor >
Draw the parse trees for the following:
(a) A1 + B1 (b) A1 - B1 * G1 (c) A1 DIV (B1 + G1) - D1
[Answer parse trees for (a), (b) and (c), drawn according to these interchanged rules, are not reproduced here.]
Token   :=   +    -    *    DIV   (    )
Code    15   16   17   18   19    20   21

Token   id   int
Code    22   23

Fig. 7 Token Coding Scheme
For a keyword or an operator, the token coding scheme gives sufficient information. In the case of an identifier, it is also necessary to supply the particular identifier name that was scanned. The same is true for integers, floating-point values, character-string constants, etc. A token specifier can be associated with the type code for such tokens. This specifier gives the identifier name, integer value, etc., that was found by the scanner.
Some scanners enter identifiers directly into a symbol table. The token specifier for an identifier may then be a pointer to the symbol-table entry for that identifier.
The functions of a scanner are:
The entire program is not scanned at one time; the scanner operates as a procedure that is called by the parser when it needs another token.
The scanner is responsible for reading the lines of the source program and possibly for printing the source listing.
The scanner ignores comments, except for printing them in the output listing.
The scanner must take the characteristics of the language into account.
Example: FORTRAN : Columns 1 - 5 Statement number
: Column 6 Continuation of line
: Columns 7 - 72 Program statement
PASCAL : Blanks function as delimiters for tokens
: Statements can be continued freely
: End of statement is indicated by ; (semicolon)
Scanners should also observe the rules for the formation of tokens.
Example: 'READ' should not be considered a keyword, as it is within quotes; that is, a string within quotes is not scanned for keywords, and blanks are significant within a quoted string.
Blanks play different roles in different languages.
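A minimal scanner along these lines can be sketched in Python. This is an illustrative sketch, not the book's algorithm of fig. 10: the token codes for := + - * DIV ( ) id int come from fig. 7, the code READ = 8 is used later in the text, and the remaining keyword codes are assumptions.

import re

KEYWORDS = {"PROGRAM": 1, "VAR": 2, "BEGIN": 3, "END": 4, "INTEGER": 6,
            "FOR": 7, "READ": 8, "WRITE": 9, "TO": 10, "DO": 11}
OPERATORS = {";": 12, ":": 13, ",": 14, ":=": 15, "+": 16, "-": 17,
             "*": 18, "DIV": 19, "(": 20, ")": 21}
ID, INT = 22, 23

def scan(source):
    """Yield (code, token specifier) pairs; blanks delimit tokens."""
    # ':=' is matched before a lone ':'.
    for lexeme in re.findall(r":=|[A-Za-z]+|\d+|[;:,+\-*()]", source):
        if lexeme in KEYWORDS:
            yield KEYWORDS[lexeme], None
        elif lexeme in OPERATORS:        # includes the word operator DIV
            yield OPERATORS[lexeme], None
        elif lexeme.isdigit():
            yield INT, int(lexeme)       # token specifier: the value
        else:
            yield ID, lexeme             # token specifier: the name

print(list(scan("READ ( VALUE )")))
# [(8, None), (20, None), (22, 'VALUE'), (21, None)]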
[Fig. 8 Graphical notation for finite automata: states, final states and transitions.]
Example: A finite automaton to recognize tokens is given in fig. 9; the corresponding algorithm is given in fig. 10.
[Fig. 9 Finite automaton for recognizing identifiers (a letter A-Z followed by letters or digits) and integers (a sequence of digits 0-9): state 1 is the start state; states 2 and 3 are final states.]
[Fig. 10 (the token-recognition algorithm) is not reproduced here. Fig. 11 is the precedence matrix for the grammar of fig. 5: for each ordered pair of terminal symbols (PROGRAM, VAR, BEGIN, END, INTEGER, FOR, READ, WRITE, TO, DO, ; , : , , , := , + , - , * , DIV , ( , ) , id, int) the entry gives the precedence relation ⋖, ≐ or ⋗, or is blank if the pair can never appear together in a legal program.]
[Fig. 12 Operator-precedence parse of the statement READ (VALUE): parts (a)-(d).]
According to the grammar, id may be considered a < factor > (rule 12), a < prog-name > (rule 2) or an < id-list > (rule 6). In the operator-precedence phase, however, it is not necessary to indicate which non-terminal symbol is being recognized; the id is simply interpreted as some non-terminal < N1 >. The new version is shown in fig. 12(b).
An operator-precedence parser generally uses a stack to save tokens that have been scanned but not yet parsed, so that it can reexamine them in this way. Precedence relations hold only between terminal symbols, so < N1 > is not involved in this process, and a relationship is determined between ( and ).
READ ( < N1 > ) corresponds to rule 13 of the grammar. This rule is the only one that could be applied in recognizing this portion of the program. However, the sequence is simply interpreted as some non-terminal < N2 >; fig. 12(c) shows this interpretation. The complete parse tree is given in fig. 12(d).
Note: (1) The parse trees in fig. 1 and fig. 12(d) are the same except for the names of the non-terminal symbols involved.
(2) The names of the non-terminals are arbitrarily chosen.
Example: VARIANCE := SUMSQ DIV 100 - MEAN * MEAN
(i) . . . id1 := id2 DIV . . . The relations ⋖ ≐ ⋖ ⋗ identify id2 {SUMSQ} as the portion to be recognized; it is interpreted as < N1 >.
(ii) . . . id1 := < N1 > DIV int - . . . The int {100} is recognized and interpreted as < N2 >.
(iii) . . . id1 := < N1 > DIV < N2 > - . . . The sequence < N1 > DIV < N2 > is recognized and interpreted as < N3 >.
(iv)-(vii) In the same way, id3 {MEAN} and id4 {MEAN} are interpreted as < N4 > and < N5 >, the sequence < N4 > * < N5 > as < N6 >, and < N3 > - < N6 > as < N7 >.
(viii) . . . id1 := < N7 > The relations ⋖ ≐ ⋗ now cause id1 := < N7 > to be recognized.
(ix) . . . < N8 > The entire statement has been reduced to < N8 >, and the parse is complete.
1. . . . BEGIN READ ( id ) . . .   Shift BEGIN              Stack: BEGIN
2. . . . BEGIN READ ( id ) . . .   Shift READ               Stack: BEGIN READ
3. . . . BEGIN READ ( id ) . . .   Shift (                  Stack: BEGIN READ (
4. . . . BEGIN READ ( id ) . . .   Shift id                 Stack: BEGIN READ ( id
5. . . . BEGIN READ ( id ) . . .   Reduce id to < id-list > Stack: BEGIN READ ( < id-list >
6. . . . BEGIN READ ( id ) . . .   Shift )                  Stack: BEGIN READ ( < id-list > )
Explanation
1. The parser shifts (pushing the current token onto the stack) when it encounters BEGIN.
2 to 4. Shifts push the next three tokens onto the stack.
5. The reduce action is invoked; it converts the token on top of the stack into a non-terminal symbol from the grammar.
6. The shift pushes ) onto the stack, to be reduced later as part of the READ statement.
Note: Shift roughly corresponds to the action taken by an operator-precedence parser when it encounters the relations ⋖ and ≐. Reduce roughly corresponds to the action taken when an operator-precedence parser encounters the relation ⋗.
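The shift-reduce mechanism just described can be made concrete with a short Python sketch. This is an illustrative skeleton, not the full parser: the tiny precedence table covers only the tokens of READ ( id ), not the complete matrix of fig. 11.

def top_terminal(stack):
    # Precedence relations hold only between terminals, so the
    # non-terminals <Ni> on the stack are skipped over.
    return next(sym for sym, _ in reversed(stack) if not sym.startswith("<N"))

def parse(tokens, prec):
    stack = [("$", None)]      # pairs of (symbol, relation it was shifted with)
    stream = tokens + ["$"]
    i, n = 0, 0
    while True:
        top, tok = top_terminal(stack), stream[i]
        if top == "$" and tok == "$":
            return [sym for sym, _ in stack[1:]]         # parse complete
        rel = prec[(top, tok)]
        if rel in ("<", "="):                            # shift
            stack.append((tok, rel))
            i += 1
        else:                                            # rel == ">": reduce
            handle = []
            while True:
                sym, r = stack.pop()
                handle.insert(0, sym)
                if not sym.startswith("<N") and r == "<":
                    break
            if stack[-1][0].startswith("<N"):            # non-terminal in handle
                handle.insert(0, stack.pop()[0])
            n += 1
            print("reduce", handle, "to", f"<N{n}>")
            stack.append((f"<N{n}>", None))

PREC = {("$", "READ"): "<", ("READ", "("): "=", ("(", "id"): "<",
        ("id", ")"): ">", ("(", ")"): "=", (")", "$"): ">"}
print(parse(["READ", "(", "id", ")"], PREC))
# reduce ['id'] to <N1>
# reduce ['READ', '(', '<N1>', ')'] to <N2>
# ['<N2>']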
RECURSIVE DESCENT PARSING
Recursive descent is a top-down parsing technique. A recursive-descent parser is made up of a procedure for each non-terminal symbol in the grammar. When a procedure is called, it attempts to find a substring of the input, beginning with the current token, that can be interpreted as the non-terminal with which the procedure is associated. During this process it may call other procedures, or call itself recursively, to search for other non-terminals. If the procedure finds the non-terminal that is its goal, it returns an indication of success to its caller and advances the current-token pointer past the substring it has just recognized. If the procedure is unable to find a substring that can be interpreted as the desired non-terminal, it returns an indication of failure.
Example: < read > ::= READ ( < id-list > )
The procedure for < read > in a recursive-descent parser first examines the next two input tokens, looking for READ and (. If these are found, the procedure for < read > then calls the procedure for < id-list >. If that procedure succeeds, the < read > procedure examines the next input token, looking for ). If all these tests are successful, the < read > procedure returns an indication of success; otherwise it returns failure.
There are problems in writing a complete set of procedures for the grammar of fig. 5.
Example: The procedure for < id-list >, corresponding to rule 6, would be unable to decide between its alternatives, since both alternatives begin with id:
< id-list > ::= id | < id-list > , id
If the procedure somehow decided to try the second alternative, it would immediately call itself recursively to find an < id-list >; this would cause an unending chain of recursive calls. Top-down parsers cannot be used directly with a grammar that contains this kind of immediate left recursion.
A similar problem occurs for rules 3, 7, 10 and 11. Fig. 13 therefore shows rules 3, 6, 7, 10 and 11 in modified form.
3 < dec-list > ::= < dec > { ; < dec > }
6 < id-list > ::= id { , id }
7 < stmt-list > ::= < stmt > { ; < stmt > }
10 < exp > ::= < term > { + < term > | - < term > }
11 < term > ::= < factor > { * < factor > | DIV < factor > }
Fig. 13 Modified Grammar Rules (the braces { } denote zero or more repetitions)
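The point of the modification is that each {...} repetition becomes a simple while loop, exactly as in the IDLIST procedure of fig. 15. The following Python sketch (illustrative; the token codes follow fig. 7, and the token-stream plumbing is an assumption) shows modified rule 6 implemented this way.

# Recursive-descent recognizer for  <id-list> ::= id { , id }
# TOKENS is a list of (code, specifier) pairs; 14 = ',' and 22 = id.
class Parser:
    def __init__(self, tokens):
        self.tokens, self.pos = tokens, 0

    def token(self):
        return self.tokens[self.pos][0] if self.pos < len(self.tokens) else None

    def advance(self):
        self.pos += 1

    def id_list(self):
        if self.token() != 22:          # the list must begin with an id
            return False
        self.advance()
        while self.token() == 14:       # { , id }  -- iteration, not recursion
            self.advance()
            if self.token() != 22:      # a comma must be followed by an id
                return False
            self.advance()
        return True

# ALPHA , BETA , GAMMA
p = Parser([(22, "ALPHA"), (14, None), (22, "BETA"), (14, None), (22, "GAMMA")])
print(p.id_list())   # True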
Fig. 14 illustrates a recursive-descent parse of the READ statement READ (VALUE). The modified grammar of fig. 13 is used in the procedures for the non-terminals < read > and < id-list >. It is assumed that TOKEN contains the type of the next input token.
PROCEDURE READ
begin
   FOUND := FALSE
   if TOKEN = 8 {READ} then
      begin
         advance to next token
         if TOKEN = 20 { ( } then
            begin
               advance to next token
               if IDLIST returns success then
                  if TOKEN = 21 { ) } then
                     begin
                        FOUND := TRUE
                        advance to next token
                     end {if )}
            end {if (}
      end {if READ}
   if FOUND = TRUE then
      return success
   else
      return failure
end {READ}
Fig. 14
Procedure IDLIST
begin
   FOUND := FALSE
   if TOKEN = 22 {id} then
      begin
         FOUND := TRUE
         advance to next token
         while (TOKEN = 14 { , }) and (FOUND = TRUE) do
            begin
               advance to next token
               if TOKEN = 22 {id} then
                  advance to next token
               else
                  FOUND := FALSE
            end {while}
      end {if id}
   if FOUND = TRUE then
      return success
   else
      return failure
end {IDLIST}
Fig. 15
The IDLIST procedure of fig. 15 signals an error if a comma is not followed by an id; it indicates this failure in its return value. Note that if a sequence of tokens such as "id, id" followed by something else could be a legal construct according to the grammar, this recursive-descent technique would not work properly.
Fig. 16 shows a graphic representation of the recursive-descent parsing process for the statement being analyzed.
(i) In this part, the READ procedure has been invoked and has examined the tokens READ and ( from the input stream (indicated by the dashed lines).
(ii) In this part, READ has called IDLIST (indicated by the solid line), which has examined the token id.
(iii) In this part, IDLIST has returned to READ, indicating success; READ has then examined the input token ).
Note that the sequence of procedure calls and token examinations has completely defined the structure of the READ statement. The parse tree was constructed beginning at the root; hence the term top-down parsing.
[Fig. 16 Recursive-descent parse of READ (VALUE): parts (i)-(iii) show READ and IDLIST examining the tokens READ, (, id {VALUE} and ).]
Fig. 17 illustrates a recursive-descent parse of the assignment statement
VARIANCE := SUMSQ DIV 100 - MEAN * MEAN
It shows the procedures for the non-terminal symbols that are involved in parsing this statement.
Procedure ASSIGN
begin
   FOUND := FALSE
   if TOKEN = 22 {id} then
      begin
         advance to next token
         if TOKEN = 15 { := } then
            begin
               advance to next token
               if EXP returns success then
                  FOUND := TRUE
            end {if :=}
      end {if id}
   if FOUND = TRUE then
      return success
   else
      return failure
end {ASSIGN}
Procedure EXP
begin
   FOUND := FALSE
   if TERM returns success then
      begin
         FOUND := TRUE
         while ((TOKEN = 16 { + }) or (TOKEN = 17 { - }))
               and (FOUND = TRUE) do
            begin
               advance to next token
               if TERM returns failure then
                  FOUND := FALSE
            end {while}
      end {if TERM}
   if FOUND = TRUE then
      return success
   else
      return failure
end {EXP}
Procedure TERM
begin
   FOUND := FALSE
   if FACTOR returns success then
      begin
         FOUND := TRUE
         while ((TOKEN = 18 { * }) or (TOKEN = 19 { DIV }))
               and (FOUND = TRUE) do
            begin
               advance to next token
               if FACTOR returns failure then
                  FOUND := FALSE
            end {while}
      end {if FACTOR}
   if FOUND = TRUE then
      return success
   else
      return failure
end {TERM}
Procedure FACTOR
begin
   FOUND := FALSE
   if (TOKEN = 22 { id }) or (TOKEN = 23 { int }) then
      begin
         FOUND := TRUE
         advance to next token
      end {if id or int}
   else
      if TOKEN = 20 { ( } then
         begin
            advance to next token
            if EXP returns success then
               if TOKEN = 21 { ) } then
                  begin
                     FOUND := TRUE
                     advance to next token
                  end {if )}
         end {if (}
   if FOUND = TRUE then
      return success
   else
      return failure
end {FACTOR}
Fig. 17 Recursive-Descent Parse of an Assignment Statement
[Fig. 18 Step-by-step representation of the recursive-descent parse of VARIANCE := SUMSQ DIV 100 - MEAN * MEAN: parts (i)-(viii) show ASSIGN calling EXP, which calls TERM and FACTOR to recognize in turn id2 {SUMSQ}, int {100}, id3 {MEAN} and id4 {MEAN}, and to combine them with DIV, * and -.]
Using the rules of the grammar, the parser recognizes at each step the leftmost substring of the input that can be interpreted. In an operator-precedence parse, the recognition occurs when a substring of the input is reduced to some non-terminal < Ni >. In a recursive-descent parse, the recognition occurs when a procedure returns to its caller, indicating success. Thus the parser first recognizes the id VALUE as an < id-list >, and then recognizes the complete statement as a < read >.
The symbolic representation of the object code to be generated for the READ statement is shown in fig. 19(b). This code consists of a call to a subroutine XREAD, which would be part of a standard library associated with the compiler. Any program that wants to perform a READ operation can call XREAD. XREAD is linked together with the generated object program by a linking loader or a linkage editor. This technique is commonly used for the compilation of statements that perform relatively complex functions. The use of a subroutine avoids the repetitive generation of large amounts of in-line code, which keeps the object program smaller.
The parameter list for XREAD is defined immediately after the JSUB that calls it. The first word is the number of variables that will be assigned values by the READ; the following words give the addresses of these variables.
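Fig. 19(b) itself is not reproduced in this copy; from the description above, and from the object code of fig. 26, it presumably reads:
      +JSUB  XREAD
      WORD   1
      WORD   VALUE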
Fig. 19(c) shows the routines that might be used to accomplish the code
generation.
1. < id-list > ::= id
      add ST (id) to list
      add 1 to LIST_COUNT
2. < id-list > ::= < id-list > , id
      add ST (id) to list
      add 1 to LIST_COUNT
3. < read > ::= READ ( < id-list > )
      generate [ +JSUB XREAD ]
      record external reference to XREAD
      generate [ WORD LIST_COUNT ]
      for each item on list do
         begin
            remove ST (ITEM) from list
            generate [ WORD ST (ITEM) ]
         end
      LIST_COUNT := 0
Fig. 19(c) Routines for READ Code Generation
The first two routines, (1) and (2), correspond to the alternative structures for < id-list >, that is, < id-list > ::= id | < id-list > , id. In each case the token specifier ST (id) for the new identifier being added to the < id-list > is inserted into a list used by the code-generation routines, and LIST_COUNT is updated to reflect the insertion. After the entire < id-list > has been parsed, the list contains the token specifiers for all the identifiers that are part of the < id-list >. When the < read > statement is recognized, these token specifiers are removed from the list and used to generate the object code for the READ.
id1        :=  id2      DIV  int    -  id3     *  id4
{VARIANCE}     {SUMSQ}       {100}     {MEAN}     {MEAN}
Fig. 20
The parser first recognizes the id SUMSQ as a < factor > and a < term >; then it recognizes the int 100 as a < factor >; then it recognizes SUMSQ DIV 100 as a < term >, and so forth. The order in which the parts of the statement are recognized is the same as the order in which the calculations are to be performed. A code-generation routine is called as each portion of the statement is recognized.
Example: For the rule < term >1 ::= < term >2 * < factor >, code must be generated. The subscripts are used to distinguish between the two occurrences of < term >.
The code-generation routines perform all arithmetic operations using register A, so the result of the multiplication < term >2 * < factor > will be left in register A. Before the multiplication, one of the operands must be located in register A, and the result will be left there afterwards. We therefore need to keep track of the result left in register A by each segment of code that is generated. This is accomplished by extending the token-specifier idea to the non-terminal nodes of the parse tree.
The node specifier ST (< term >1) would be set to rA, indicating that the result of the computation is in register A. The variable REGA is used to indicate the highest-level node of the parse tree whose value is left in register A by the code generated so far; clearly there can be only one such node at any point in the code-generation process. If the value corresponding to a node is not in register A, the specifier for the node is similar to a token specifier: either a pointer to a symbol-table entry for the variable that contains the value, or an integer constant.
Fig. 21 shows the code-generation routines, considering the A register of the machine.
1. < assign > ::= id := < exp >
      GETA (< exp >)
      generate [ STA ST (id) ]
      REGA := null
2. < exp > ::= < term >
      ST (< exp >) := ST (< term >)
      if ST (< exp >) = rA then
         REGA := < exp >
3. < exp >1 ::= < exp >2 + < term >
      if ST (< exp >2) = rA then
         generate [ ADD ST (< term >) ]
      else if ST (< term >) = rA then
         generate [ ADD ST (< exp >2) ]
      else
         begin
            GETA (< exp >2)
            generate [ ADD ST (< term >) ]
         end
      ST (< exp >1) := rA
      REGA := < exp >1
4. < exp >1 ::= < exp >2 - < term >
      if ST (< exp >2) = rA then
         generate [ SUB ST (< term >) ]
      else
         begin
            GETA (< exp >2)
            generate [ SUB ST (< term >) ]
         end
      ST (< exp >1) := rA
      REGA := < exp >1
5. < term > ::= < factor >
      ST (< term >) := ST (< factor >)
      if ST (< term >) = rA then
         REGA := < term >
6. < term >1 ::= < term >2 * < factor >
      if ST (< term >2) = rA then
         generate [ MUL ST (< factor >) ]
      else if ST (< factor >) = rA then
         generate [ MUL ST (< term >2) ]
      else
         begin
            GETA (< term >2)
            generate [ MUL ST (< factor >) ]
         end
      ST (< term >1) := rA
      REGA := < term >1
Fig. 21 Code-Generation Routines
If the node specifier for either operand is rA, the corresponding value is already in register A, so the routine simply generates a MUL instruction; the node specifier for the other operand gives the operand address for this MUL. Otherwise, the procedure GETA is called. The GETA procedure is shown in fig. 22.
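The body of fig. 22 is missing from this copy; based on the description that follows, the GETA procedure is presumably along these lines (a reconstruction in the pseudo-code style of fig. 21, with Tempi standing for the next unused temporary variable):

GETA (NODE)
begin
   if REGA = null then
      generate [ LDA ST (NODE) ]
   else if ST (NODE) ≠ rA then
      begin
         create a new working variable Tempi
         generate [ STA Tempi ]
         record forward reference to Tempi
         ST (REGA) := Tempi
         generate [ LDA ST (NODE) ]
      end
   ST (NODE) := rA
   REGA := NODE
end {GETA}
Fig. 22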
The procedure GETA generates an LDA instruction to load the value associated with a node into register A. Before loading the value, it checks whether REGA is null. If REGA is not null, it first generates an STA instruction to save the contents of register A in a temporary variable; there can be any number of temporary variables, Temp1, Temp2, etc. The temporary variables used during a compilation are assigned storage locations at the end of the object program. The node specifier for the node associated with the value previously in register A, indicated by REGA, is reset to indicate the temporary variable used.
After the necessary instructions are generated, the code-generation routine sets ST (< term >1) and REGA to indicate that the value corresponding to < term >1 is now in register A. This completes the code-generation action for the * operation.
The code-generation routine for the + operation is the same as that for *. The routines for DIV and - are similar, except that for these operations the first operand must be in register A. The code generation for < assign > consists of bringing the value to be assigned into register A (using GETA) and then generating an STA instruction.
The remaining rules in fig. 21 do not require the generation of any instructions, since no computation or data movement is involved.
The object code generated for the assignment statement is shown in fig. 22(b).
      LDA   SUMSQ
      DIV   #100
      STA   TEMP1
      LDA   MEAN
      MUL   MEAN
      STA   TEMP2
      LDA   TEMP1
      SUB   TEMP2
      STA   VARIANCE
Fig. 22(b) Object Code for the Assignment Statement
For the rule < prog >, the code-generation routine is shown in fig. 23. When < prog > is recognized, storage locations are assigned to any temporary (Temp) variables that have been used. Any references to these variables are then fixed in the object code using the same process performed for forward references by a one-pass assembler. The compiler also generates any modification records required to describe external references to library subroutines.
< prog > ::= PROGRAM < prog-name > VAR < dec-list > BEGIN < stmt-list > END.
      generate [ LDL RETADR ]
      generate [ RSUB ]
      for each Temp variable used do
         generate [ Tempi RESW 1 ]
Fig. 23
The < prog-name > rule generates header information in the object program that is similar to that created by the START and EXTREF assembler directives. It also generates instructions to save the return address and to jump to the first executable instruction in the compiled program. Fig. 24 shows the code-generation routine for the rule < prog-name >.
< prog-name > ::= id
      generate [ START 0 ]
      generate [ EXTREF XREAD, XWRITE ]
      generate [ STL RETADR ]
      add 3 to LC {leave room for jump to first executable instruction}
      generate [ RETADR RESW 1 ]
Fig. 24
< for > ::= FOR < index-exp > DO < body >
      POP JUMPADDR from stack {address of jump out of loop}
      POP ST (INDEX) from stack {index variable}
      POP LOOPADDR from stack {beginning address of loop}
      generate [ LDA ST (INDEX) ]
      generate [ ADD #1 ]
      generate [ J LOOPADDR ]
      insert [ JGT LC ] at location JUMPADDR
< index-exp > ::= id := < exp >1 TO < exp >2
      GETA (< exp >1)
      push LC onto stack {beginning address of loop}
      push ST (id) onto stack {index variable}
      generate [ STA ST (id) ]
      generate [ COMP ST (< exp >2) ]
      push LC onto stack {address of jump out of loop}
      add 3 to LC {leave room for jump instruction}
      REGA := null
Fig. 25
No instructions are generated for the rules
< type > ::= INTEGER
< stmt-list > ::= {either alternative}
< stmt > ::= {any alternative}
< body > ::= {either alternative}
For the Pascal program in fig. 1 the complete code-generation process is shown in
fig. 26.
       COMP  #100
       JGT   {L2}
9      +JSUB XREAD        {READ (VALUE)}
       WORD  1
       WORD  VALUE
10     LDA   SUM          {SUM := SUM + VALUE}
       ADD   VALUE
       STA   SUM
11     LDA   VALUE        {SUMSQ := SUMSQ + VALUE * VALUE}
       MUL   VALUE
       ADD   SUMSQ
       STA   SUMSQ
       LDA   I            {end of FOR loop}
       ADD   #1
       J     {L1}
13 {L2} LDA  SUM          {MEAN := SUM DIV 100}
       DIV   #100
       STA   MEAN
14     LDA   SUMSQ        {VARIANCE := SUMSQ DIV 100 - MEAN * MEAN}
       DIV   #100
       STA   TEMP1
       LDA   MEAN
       MUL   MEAN
       STA   TEMP2
       LDA   TEMP1
       SUB   TEMP2
       STA   VARIANCE
15     +JSUB XWRITE       {WRITE (MEAN, VARIANCE)}
       WORD  2
       WORD  MEAN
       WORD  VARIANCE
       LDL   RETADR
       RSUB
TEMP1  RESW  1            {working variables used}
TEMP2  RESW  1
       END
Fig. 26 Object Code Generated for the Pascal Program
At this stage the source statements have been completely analyzed, but the actual translation into machine code has not yet been performed. It is easier to analyze and manipulate this intermediate code than to perform the operations on either the source program or the machine code. The intermediate form used in a compiler is not strictly dependent on the machine for which the compiler is designed.
8.1.1 INTERMEDIATE FORM OF THE PROGRAM
The intermediate form discussed here represents the executable instructions of the program with a sequence of quadruples. Each quadruple is of the form
operation, op1, op2, result
where
operation is some function to be performed by the object code,
op1 and op2 are the operands for this operation, and
result designates where the resulting value is to be placed.
Example 1: SUM := SUM + VALUE could be represented as
+    SUM   VALUE   i1
:=   i1            SUM
The entry i1 designates an intermediate result (SUM + VALUE); the second quadruple assigns the value of this intermediate result to SUM. Assignment is treated as a separate operation (:=).
Example 2: VARIANCE := SUMSQ DIV 100 - MEAN * MEAN
DIV   SUMSQ   #100   i1
*     MEAN    MEAN   i2
-     i1      i2     i3
:=    i3             VARIANCE
Note: The quadruples appear in the order in which the corresponding object code instructions are to be executed. This greatly simplifies the task of analyzing the code for purposes of optimization, and it also makes the quadruples easy to translate into machine instructions.
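The translation of an expression into quadruples can be sketched as a small tree walk. The following Python is illustrative only (the tuple representation of the expression tree is an assumption); it produces the same quadruples as Example 2 above.

# Each quadruple is (operation, op1, op2, result).  Intermediate results
# are named i1, i2, ... as in the examples above.
quads, counter = [], 0

def emit(op, a, b):
    """Append a quadruple and return the name of its intermediate result."""
    global counter
    counter += 1
    result = f"i{counter}"
    quads.append((op, a, b, result))
    return result

def gen(node):
    """node is either an operand string or a tuple (op, left, right)."""
    if isinstance(node, str):
        return node
    op, left, right = node
    return emit(op, gen(left), gen(right))

# VARIANCE := SUMSQ DIV #100 - MEAN * MEAN
tree = ("-", ("DIV", "SUMSQ", "#100"), ("*", "MEAN", "MEAN"))
quads.append((":=", gen(tree), None, "VARIANCE"))
for q in quads:
    print(q)
# ('DIV', 'SUMSQ', '#100', 'i1')
# ('*', 'MEAN', 'MEAN', 'i2')
# ('-', 'i1', 'i2', 'i3')
# (':=', 'i3', None, 'VARIANCE')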
For the source program in Pascal shown in fig. 1, the corresponding quadruples are shown in fig. 27. The READ and WRITE statements are represented with a CALL operation, followed by PARM quadruples that specify the parameters of the READ or WRITE. The JGT operation in quadruple 4 of fig. 27 compares the values of its two operands and jumps to quadruple 15 if the first operand is greater than the second. The J operation in quadruple 14 jumps unconditionally to quadruple 4.
1.  :=   #0   SUM         {SUM := 0}
2.  :=   #0   SUMSQ       {SUMSQ := 0}
3.  :=   #1   I           {FOR I := 1 TO 100}
4.  JGT  I    #100  (15)
A basic block is a sequence of quadruples with one entry point, which is at the beginning of the block, one exit point, which is at the end of the block, and no jumps within the block. Since procedure calls can have unpredictable effects on register contents, a CALL operation is usually considered to begin a new basic block. The assignment and use of registers within a basic block can proceed as described previously. When control passes from one basic block to another, all values currently held in registers are saved in temporary variables.
For the program in fig. 27, the quadruples can be divided into five blocks. They are:
Block A - Quadruples 1 - 3
Block B - Quadruple 4
Block C - Quadruples 5 - 14
Block D - Quadruples 15 - 20
Block E - Quadruples 21 - 23
Fig. 28
Fig. 28 shows the basic blocks and the flow graph for the quadruples in fig. 27. An arrow from one block to another indicates that control can pass directly from one block to the other. This kind of representation is called a flow graph.
Even though the value of i2 is in the register, it is not possible to perform the subtraction immediately: it is necessary to store the value of i2 in another temporary variable T2, and then load the value of i1 from T1 into register A, before performing the subtraction.
An optimizing compiler could rearrange the quadruples so that the second operand of the subtraction is computed first. This eliminates two memory accesses. Fig. 29 shows the rearrangement.
*     MEAN    MEAN   i2
DIV   SUMSQ   #100   i1
-     i1      i2     i3
:=    i3             VARIANCE

LDA   MEAN
MUL   MEAN
STA   T1
LDA   SUMSQ
DIV   #100
SUB   T1
STA   VARIANCE
Fig. 29 Rearrangement of Quadruples for Code Optimization
Characteristics and instructions of the target machine: There may be special loop-control instructions or addressing modes that can be used to create more efficient object code. On some computers there are high-level machine instructions that can perform complicated functions such as calling procedures and manipulating data structures in a single operation.
Some computers have multiple functional units. The source code must be rearranged to use all (or most of) these units concurrently; this is possible if the result produced by one unit does not depend on the result of another. On some systems, data can also flow between units without the intermediate values being stored in a register. An optimizing compiler for such a machine could rearrange object code instructions to take advantage of these properties.
Machine Independent Compiler Features
This section describes methods for handling structured variables such as arrays, and discusses the problems involved in compiling a block-structured language, indicating some possible solutions.
3.1 STRUCTURED VARIABLES
The structured variables discussed here are arrays, records, strings and sets. The primary considerations are the allocation of storage for such variables and the generation of code to reference them.
Arrays: In Pascal, an array declaration is written as follows.
(i) Single-dimension array: A : ARRAY [ 1 .. 10 ] OF INTEGER
If each integer variable occupies one word of memory, then we require 10 words of memory to store this array. In general, an array declared as ARRAY [ l .. u ] OF INTEGER requires
(u - l + 1) words of memory.
(ii) Two-dimension array: B : ARRAY [ 0 .. 3, 1 .. 3 ] OF INTEGER
Here the first subscript takes 4 values (0 to 3) and the second takes 3 values (1 to 3), so 4 x 3 = 12 words of memory are required. In general, ARRAY [ l1 .. u1, l2 .. u2 ] OF INTEGER requires
(u1 - l1 + 1) * (u2 - l2 + 1) words of memory.
The data can be stored in memory in two different ways: row-major and column-major order. If all array elements that have the same value of the first subscript are stored in contiguous locations, this is called row-major order; it is shown in fig. 30(a). Another way of looking at this is to scan the words of the array in sequence and observe the subscript values: in row-major order, the rightmost subscript varies most rapidly.
0,1  0,2  0,3  0,4  0,5  1,1  1,2  1,3  1,4  1,5  2,1  2,2  2,3  2,4  2,5 ...
A : ARRAY [ 1 .. 10 ] OF INTEGER
   .
   .
A [ I ] := 5
(1)  -    I    #1   i1
(2)  *    i1   #3   i2
(3)  :=   #5        A [ i2 ]
Fig. 31 Code Generation for a One-Dimensional Array Reference (the value i2 is used as the index)
(2) Multi-dimensional array: For multi-dimensional arrays we assume row-major order. To access element B[2, 3] of the array B : ARRAY [ 0 .. 3, 1 .. 6 ], we must skip over two complete rows (rows 0 and 1) before arriving at the beginning of row 2. Each row contains 6 elements, so we must skip 6 x 2 = 12 array elements. We then skip over the first two elements of row 2 to arrive at B[2, 3]. This makes a total of 12 + 2 = 14 elements between the beginning of the array and element B[2, 3]. If each element occupies 3 bytes as in SIC, then B[2, 3] is located at relative address 14 x 3 = 42 within the array.
In general, a two-dimensional array is declared as
B : ARRAY [ l1 .. u1, l2 .. u2 ] OF INTEGER
The code to perform such an array reference is shown in fig. 32.
B : ARRAY [ 0 .. 3, 1 .. 6 ] OF INTEGER
   .
   .
B [ I, J ] := 5
(1)  *    I    #6   i1
(2)  -    J    #1   i2
(3)  +    i1   i2   i3
(4)  *    i3   #3   i4
(5)  :=   #5        B [ i4 ]
Fig. 32 Code Generation for a Two-Dimensional Array Reference
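The subscript arithmetic of fig. 32 generalizes directly to any number of dimensions. The following Python helper is an illustrative sketch (not from the text); it reproduces the B[2, 3] = 42 example above.

def row_major_offset(subscripts, bounds, element_size=3):
    """Relative address of an element; bounds is a list of (lower, upper)."""
    offset = 0
    for s, (low, high) in zip(subscripts, bounds):
        width = high - low + 1        # number of values in this dimension
        offset = offset * width + (s - low)
    return offset * element_size

# B : ARRAY [0 .. 3, 1 .. 6] OF INTEGER; element B[2, 3]
print(row_major_offset([2, 3], [(0, 3), (1, 6)]))   # 42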
The symbol-table entry for an array usually specifies the following:
the type of the elements in the array,
the number of dimensions declared, and
the lower and upper limits for each subscript.
This information is sufficient for the compiler to generate the code required for an array reference. In some languages, like FORTRAN 90, the values of bounds such as ROWS and COLUMNS may not be known at compilation time, so the compiler cannot directly generate this code. Instead, the compiler creates a descriptor, called a dope vector, for the array. The descriptor includes space for storing the lower and upper bounds for each array subscript. When storage is allocated for the array, the values of these bounds are computed and stored in the descriptor. The generated code for an array reference uses the values from the descriptor to calculate relative addresses as required. The descriptor may also include the number of dimensions of the array, the type of the array elements and a pointer to the beginning of the array. This information can be useful if the allocated array is passed as a parameter to another procedure.
The compilation of other structured variables, such as records, strings and sets, requires the same kinds of storage allocation. The compiler must store information concerning the structure of the variable and use this information to generate code to access components of the structure, and it must construct a descriptor for situations in which the required information is not known at compilation time.
Consider, for example, the statement FOR I := 1 TO 10 DO X [ I, 2*J - 1 ] := Y [ I, 2*J ] (fig. 33(a)); its quadruples, fig. 33(b), are:
1.  :=   #1   I          [Loop initialization]
2.  JGT  I    #10  (20)
3.  -    I    #1   i1    [Subscript calculation for x]
4.  *    i1   #10  i2
5.  *    #2   J    i3
6.  -    i3   #1   i4
7.  -    i4   #1   i5
8.  +    i2   i5   i6
9.  *    i6   #3   i7
10. -    I    #1   i8    [Subscript calculation for y]
11. *    i8   #10  i9
12. *    #2   J    i10
13. -    i10  #1   i11
14. +    i9   i11  i12
15. *    i12  #3   i13
16. :=   y[i13]    x[i7] [Assignment operation]
17. +    #1   I    i14   [End of loop]
18. :=   i14  I
19. J    (2)
20. {Next statement}
1.  :=   #1   I          [Loop initialization]
2.  JGT  I    #10  (16)
3.  -    I    #1   i1    [Subscript calculation for x]
4.  *    i1   #10  i2
5.  *    #2   J    i3
6.  -    i3   #1   i4
7.  -    i4   #1   i5
8.  +    i2   i5   i6
9.  *    i6   #3   i7
10. +    i2   i4   i12   [Subscript calculation for y]
11. *    i12  #3   i13
12. :=   y[i13]    x[i7] [Assignment operation]
13. +    #1   I    i14   [End of loop]
14. :=   i14  I
15. J    (2)
16. {Next statement}
Fig. 34
The names i1, i2, ... have been left unchanged, except for the substitutions just described, to make the comparison with fig. 33(b) easier. The optimized code has only 15 quadruples, and hence the time taken is reduced.
Another method of code optimization is the removal of loop invariants. These are sub-expressions within a loop whose values do not change from one iteration of the loop to the next, so they can be calculated once, before the loop is entered, rather than being recalculated for each iteration. In the example shown in fig. 33(a), the loop-invariant computation is the term 2 * J [quadruple 5 of fig. 34]. The result of this computation depends only on the operand J, whose value does not change during the execution of the loop. Thus we can move quadruple 5 of fig. 34 to a point immediately before the loop is entered. A similar argument applies to quadruples 6 and 7. Fig. 35 shows the sequence of quadruples that results from these modifications.
The total number of quadruples remains the same as in fig. 34; however, the number of quadruples within the body of the loop has been reduced from 14 to 11. These modifications have reduced the total number of quadruples executed for one execution of the FOR statement from 181 [fig. 33(b)] to 114 [fig. 35], which saves a substantial amount of time.
1.  *    #2   J    i3    {Computation of invariants}
2.  -    i3   #1   i4
3.  -    i4   #1   i5
4.  :=   #1   I          {Loop initialization}
5.  JGT  I    #10  (16)
6.  -    I    #1   i1    {Subscript calculation for x}
7.  *    i1   #10  i2
8.  +    i2   i5   i6
9.  *    i6   #3   i7
10. +    i2   i4   i12   {Subscript calculation for y}
11. *    i12  #3   i13
12. :=   y[i13]    x[i7] {Assignment operation}
13. +    #1   I    i14   {End of loop}
14. :=   i14  I
15. J    (5)
16. {Next statement}
Fig. 35
T1 := 2 * J ;
T2 := T1 - 1 ;
FOR I := 1 TO 10 DO
   X [ I, T2 ] := Y [ I, T1 ]
Fig. 36(b)
This would achieve only a part of the benefit realized by the optimization process just described. Sometimes the statement in fig. 36(a) is preferable because it is clearer than the modified version involving T1 and T2. An optimizing compiler should allow the programmer to write source code that is clear and easy to read, and it should compile such a program into machine code that is efficient to execute.
Another source of code optimization is the substitution of a more efficient operation for a less efficient one.
Example: The FORTRAN statements
      DO 10 I = 1, 20
 10   TABLE(I) = 2 ** I
calculate the first 20 powers of 2 and store them in TABLE.
In each iteration of the loop, the constant 2 is raised to the power I. The quadruples are shown in fig. 37(a); exponentiation is represented with the operation EXP.
This computation can be performed much more efficiently. In each iteration of the loop, the value of I is incremented by 1, so the value of 2 ** I for the current iteration can be found by multiplying the value for the previous iteration by 2. This method of computing 2 ** I is much more efficient than performing a series of multiplications or using a logarithm technique. The technique, often called reduction in strength, is shown in fig. 37(b).
1.  :=   #1   I          {Loop initialization}
2.  EXP  #2   I    i1    {Calculation of 2 ** I}
3.  -    I    #1   i2    {Subscript calculation}
4.  *    i2   #3   i3
5.  :=   i1        TABLE[i3]  {Assignment operation}
6.  +    I    #1   i4    {End of loop}
7.  :=   i4   I
8.  JLE  I    #20  (2)
Fig. 37(a)
1.  :=   #1   i1         {Initialize temporaries}
2.  :=   #(-3)     i3
3.  :=   #1   I          {Loop initialization}
4.  *    i1   #2   i1    {Calculation of 2 ** I}
5.  +    i3   #3   i3    {Subscript calculation}
6.  :=   i1        TABLE[i3]  {Assignment operation}
7.  +    I    #1   i4    {End of loop}
8.  :=   i4   I
9.  JLE  I    #20  (4)
Fig. 37(b)
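The effect of this strength reduction can be demonstrated directly in Python. This is an illustrative sketch of the transformation, not generated compiler code: the exponentiation of fig. 37(a) is replaced by one doubling per iteration, as in fig. 37(b).

# Before: one exponentiation per iteration (as in fig. 37(a)).
table = [2 ** i for i in range(1, 21)]

# After reduction in strength (as in fig. 37(b)): the previous value
# is simply doubled, so each iteration costs one multiplication.
reduced, power = [], 1
for i in range(1, 21):
    power = power * 2          # 2 ** i  ==  (2 ** (i - 1)) * 2
    reduced.append(power)

print(table == reduced)        # True: both loops fill TABLE identically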
STORAGE ALLOCATION
Program-defined variables and temporary variables, including the location used to save the return address, can use the simplest type of storage assignment, called static allocation. When procedures are called recursively, however, static allocation cannot be used. This is explained with an example. Fig. 38(a) shows the operating system calling the program MAIN; the return address from register L is stored at a static memory location RETADR within MAIN.
[Fig. 38 Recursive calls with static storage for return addresses: (a) the operating system calls MAIN; (b) MAIN calls SUB, which stores its return address at the fixed location RETADR within SUB; (c) SUB calls itself recursively, overwriting RETADR.]
In fig. 38(b), MAIN has called the procedure SUB, and the return address for the call has been stored at a fixed location within SUB (invocation 2). If SUB now calls itself recursively, as shown in fig. 38(c), a problem occurs: SUB stores the return address for invocation 3 into RETADR from register L, destroying the return address for invocation 2. As a result, there is no possibility of ever making a correct return to MAIN.
There is also no provision for saving local values. When the recursive call is made, variables within SUB may be set to new values, destroying the values they held before the call; however, those previous values may be needed by invocation 2 of SUB after the return from the recursive call. Hence it is necessary to preserve the previous values of any variables used by SUB, including parameters, temporaries, return addresses, register save areas, etc., when a recursive call is made. This is accomplished with a dynamic storage allocation technique. In this technique, each procedure call creates an activation record that contains storage for all the variables used by the procedure. If the procedure is called recursively, another activation record is created. Each activation record is associated with a particular invocation of the procedure, not with the procedure itself. An activation record is not deleted until a return has been made from the corresponding invocation.
Activation records are typically allocated on a stack, with the current record at the top of the stack, as shown in fig. 39(a), which corresponds to fig. 38(a). The procedure MAIN has been called, and its activation record appears on the stack. The base register B has been set to indicate the starting address of this current activation record. The first word in an activation record would normally contain a pointer PREV to the previous record on the stack; since this record is the first, the pointer value is null. The second word of the activation record contains a pointer NEXT to the first unused word of the stack, which will be the starting address for the next activation record created. The third word contains the return address for this invocation of the procedure, and the remaining words contain the values of the variables used by the procedure.
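A minimal Python model of this scheme (the names PREV and RETADR follow the text; everything else is assumed for illustration) shows why each invocation, including a recursive one, keeps its own return address:

class ActivationRecord:
    def __init__(self, prev, ret_addr, variables):
        self.prev = prev             # pointer to the previous record (PREV)
        self.ret_addr = ret_addr     # return address for this invocation
        self.vars = dict(variables)  # a fresh copy of locals per invocation

stack = []           # the run-time stack; the top record is the current one

def call(ret_addr, local_names):
    """Prologue: create a new record on every call, even a recursive one."""
    record = ActivationRecord(stack[-1] if stack else None, ret_addr,
                              {name: 0 for name in local_names})
    stack.append(record)
    return record

def ret():
    """Epilogue: delete the current record; resume at its return address."""
    return stack.pop().ret_addr

call("OS", ["x"])            # the operating system invokes MAIN
call("MAIN+1", ["n"])        # MAIN calls SUB (invocation 2)
call("SUB+7", ["n"])         # SUB calls itself; invocation 2's data survives
print(ret(), ret(), ret())   # SUB+7 MAIN+1 OS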
[Fig. 39(a) The stack after the operating system calls MAIN: one activation record, containing PREV (null), NEXT, RETADR and the variables for MAIN, with register B pointing to it.]
[Fig. 39(b) The stack after MAIN calls SUB: a second activation record, containing PREV, NEXT, RETADR and the variables for SUB, with register B pointing to it.]
In fig. 39(b), MAIN has called the procedure SUB. A new activation record has been created on the top of the stack, with register B set to indicate this new current record. The pointers PREV and NEXT in the two records have been set as shown.
[Fig. 39(c) SUB calls itself recursively: a third activation record, for the second invocation of SUB, is created on top of the stack.]
When a procedure is called, code at its beginning creates and initializes a new activation record as shown in fig. 39; this code is often called a prologue for the procedure. At the end of the procedure, there must be code to delete the current activation record, resetting pointers as needed; this code is called an epilogue.
Example: In FORTRAN 90,
ALLOCATE (MATRIX (ROWS, COLUMNS))
allocates storage for the dynamic array MATRIX with the specified dimensions, and
DEALLOCATE (MATRIX)
releases the storage assigned to MATRIX by a previous ALLOCATE.
In Pascal,
NEW (P)
allocates storage for a variable and sets the pointer P to indicate the variable just created, and
DISPOSE (P)
releases the storage that was previously assigned to the variable pointed to by P.
In C,
MALLOC (SIZE)
allocates a block of the specified size, and
FREE (P)
frees the storage indicated by the pointer P.
A variable that is dynamically allocated in this way does not occupy a fixed location in an activation record, so it cannot be referenced directly using base-relative addressing. Such a variable is usually accessed using indirect addressing through a pointer variable P. Since P does occupy a fixed location in the activation record, it can be addressed in the usual way.
The mechanism for allocating storage to such a variable can work in any of the following ways:
A NEW or MALLOC statement may be translated into a request to the operating system for an area of storage of the required size.
The allocation may be handled through a run-time support procedure associated with the compiler. With this method, a large block of free storage called a heap is obtained from the operating system at the beginning of the program, and allocations of storage from the heap are managed by the run-time procedure.
In some systems, the program need not explicitly free storage: a run-time garbage-collection procedure scans the pointers in the program and reclaims areas of the heap that are no longer used.
Each procedure corresponds to a block. Note that blocks can be nested within other blocks: for example, procedures B and D are nested within procedure A, and procedure C is nested within procedure B. Each block may contain declarations of variables. A block may also refer to variables that are defined in any block that contains it, provided the same names are not redefined in the inner block. Variables cannot be used outside the block in which they are declared.
In compiling a program written in a block-structured language, it is convenient to number the blocks as shown in fig. 40. As the beginning of each new block is recognized, it is assigned the next block number in sequence. The compiler can then construct a table that describes the block structure, as illustrated in fig. 41. The block-level entry gives the nesting depth for each block: the outermost block is at level 1, and each inner block has a level number that is one greater than that of the surrounding block.
PROCEDURE A ;
   VAR X, Y, Z : INTEGER ;
   :
   PROCEDURE B ;
      VAR W, X, Y : REAL ;
      :
      PROCEDURE C ;
         VAR V, W : INTEGER ;
         :
      END { C } ;
      :
   END { B } ;
   :
   PROCEDURE D ;
      VAR X, Z : CHAR ;
      :
   END { D } ;
   :
END { A } ;
Fig. 40 Nested Blocks in a Program
Block Name   Block Number   Block Level   Surrounding Block
A            1              1             --
B            2              2             1
C            3              3             2
D            4              2             1
Fig. 41
Since a name can be declared more than once in a program (by different blocks), each symbol-table entry for an identifier must contain the number of the declaring block. A declaration of an identifier is legal if there has been no previous declaration of that identifier by the current block, so there can be several symbol-table entries for the same name. The entries that represent declarations of the same name by different blocks can be linked together in the symbol table with a chain of pointers.
When a reference to an identifier appears in the source program, the compiler must first check the symbol table for a definition of that identifier by the current block. If no such definition is found, the compiler looks for a definition by the block that surrounds the current one, then by the block that surrounds that one, and so on. If the outermost block is reached without finding a definition of the identifier, then the reference is an error.
The search process just described can easily be implemented within a symbol
table that uses hashed addressing. The hashing function is used to locate one definition of
the identifier. The chain of definitions for that identifier is then searched for the
appropriate entry.
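This lookup discipline can be sketched in Python. The sketch is illustrative (the dictionary stands in for the hashed chain of definitions); the block numbers and declarations follow figs. 40 and 41.

# surrounding[b] gives the block that contains block b (fig. 41);
# symtab maps each name to its definitions, keyed by declaring block.
surrounding = {1: None, 2: 1, 3: 2, 4: 1}            # A=1, B=2, C=3, D=4
symtab = {"X": {1: "INTEGER", 2: "REAL", 4: "CHAR"},
          "W": {2: "REAL", 3: "INTEGER"}}

def lookup(name, block):
    """Search the current block, then each surrounding block in turn."""
    while block is not None:
        if block in symtab.get(name, {}):
            return symtab[name][block]
        block = surrounding[block]
    raise NameError(f"no visible declaration of {name}")   # reference error

print(lookup("X", 3))   # REAL     (B's declaration of X hides A's inside C)
print(lookup("W", 3))   # INTEGER  (C's own declaration of W hides B's)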
Most block-structured languages make use of automatic storage allocation: the variables that are defined by a block are stored in an activation record that is created each time the block is entered. If a statement refers to a variable that is declared within the current block, this variable is present in the current activation record, so it can be accessed in the usual way. However, it is also possible to refer to a variable that is declared in some surrounding block; in that case, the most recent activation record for that block must be located in order to access the variable.
[Fig. 42 Use of a display: (a) the stack and display after A calls B, which calls C: the display contains pointers to the activation records for A, B and C. (b) the stack and display after C calls itself recursively: the display pointer for C indicates the most recent activation record for C.]
A data structure called a display is used to access variables in surrounding blocks. The display contains pointers to the most recent activation records for the current block and for all blocks that surround the current one in the source program. When a block refers to a variable that is declared in some surrounding block, the generated object code uses the display to find the activation record that contains this variable.
Example:
When a procedure calls itself recursively, a new activation record is created on the stack as a result of the call. Assume that procedure C calls itself recursively; as shown in fig. 42(b), a new record for C is created on the stack. Any reference to a variable declared by C should use this most recent activation record, and the display pointer for C is changed accordingly. The variables that correspond to the previous invocation of C are not accessible for the moment, so there is no display pointer to that activation record.
[Fig. 42(c) The stack and display after procedure C calls procedure D: the stack holds activation records for A, B, C, C and D, but the display contains pointers only to the records for D and A.]
Now suppose procedure C calls procedure D. The resulting stack and display are illustrated in fig. 42(c): an activation record for D has been created in the usual way and added to the stack. Note that the display now contains only two pointers, one each to the activation records for D and A. This is because procedure D cannot refer to variables in B or C, except through parameters that are passed to it, even though it is called from C. According to the rules for the scope of names in a block-structured language, procedure D can refer only to variables that are declared by D or by some block that contains D in the source program.
8.4 COMPILER DESIGN OPTIONS
Compiler design options are briefly discussed in this section, beginning with the division into single-pass and multi-pass compilers.
4.1 COMPILER PASSES
A one-pass compiler for a subset of the Pascal language was discussed in section 1. In that design the parsing process drove the compiler: the lexical scanner was called when the parser needed another input token, and a code-generation routine was invoked as the parser recognized each language construct. The code-optimization techniques discussed above cannot be applied in full to a one-pass compiler, since there is no intermediate code to analyze; a one-pass compiler is, however, efficient in terms of compilation speed.
A one-pass compiler cannot be used to translate all languages. FORTRAN and Pascal programs declare their variables at the beginning of the program, and any variable that is not declared is assigned characteristics by default, so one pass suffices.
A one-pass compiler can fix up forward-reference jump instructions without difficulty, just as a one-pass assembler does. But it is difficult to handle the case, allowed in some programming languages, in which the declaration of an identifier appears after the identifier has been used in the program.
Example: X := Y * Z
If all the variables X, Y and Z are of type INTEGER, the object code for this statement might consist of a simple integer multiplication followed by storage of the result. If the variables are a mixture of REAL and INTEGER types, one or more conversion operations will need to be included in the object code, and floating-point arithmetic instructions may be used. Obviously the compiler cannot decide what machine instructions to generate for this statement unless information about the operands is available. The statement may even be illegal for certain combinations of operand types. Thus a language that allows forward references to data items cannot be compiled in one pass.
Some programming languages require more than two passes. For example, ALGOL 68 requires at least three passes.
There are a number of factors that should be considered in deciding between one-pass and multi-pass compiler designs.
(1) One-pass compilers: used where speed of compilation is important. Computers running student jobs tend to spend a large amount of time performing compilations; the resulting object code is usually executed only once or twice for each compilation, and these test runs are normally very short. In such an environment, an improvement in the speed of compilation can lead to significant benefits in system performance and job turnaround time.
(2) Multi-pass compilers: if programs are executed many times for each compilation, or if they process large amounts of data, then speed of execution becomes more important than speed of compilation. In such a case, we might prefer a multi-pass compiler design that could incorporate sophisticated code-optimization techniques.
Multi-pass compilers are also used when the amount of memory, or other system resources, is severely limited. The requirements of each pass can be kept smaller if the work of compilation is divided into several passes.
Other factors may also influence the design of the compiler. If a compiler is divided into several passes, each pass becomes simpler and therefore easier to understand, write and test. Different passes can be assigned to different programmers and can be written and tested in parallel, which shortens the overall time required for compiler construction.
INTERPRETERS
An interpreter processes a source program written in a high-level language, just as a compiler does. The main difference is that an interpreter executes a version of the source program directly, instead of translating it into machine code.
An interpreter performs lexical and syntactic analysis functions just like a compiler, and then translates the source program into an internal form; the internal form may be, for example, a sequence of quadruples.
After translating the source program into an internal form, the interpreter executes the operations specified by the program. During this phase, the interpreter can be viewed as a set of subroutines; the internal form of the program drives the execution of these subroutines.
The major differences between interpreters and compilers are:

   Interpreters                              Compilers
1) The process of translating a source      The translation of a source program
   program into the internal form is        into machine code is slower.
   simple and fast.
2) Execution of the translated program      Execution of the machine code is
   is much slower.                          much faster.
3) Debugging facilities can easily be       Providing debugging facilities is
   provided.                                difficult and complicated.
4) During execution the interpreter can     The compiler does not produce
   produce symbolic dumps of data values    symbolic dumps of data values;
   and traces of program execution          separate debugging tools are
   related to the source statements.        required to trace the program.
5) Program testing can be done              Testing is more difficult, as the
   effectively, since operations on         compiled program yields only its
   different data can be traced.            final results.
6) Dynamic scoping is easy to handle.       Dynamic scoping is difficult to
                                            handle.
Most programming languages can be either compiled or interpreted successfully. However, some languages are particularly well suited to the use of an interpreter. Compilers usually generate calls to library routines to perform functions such as I/O and complex conversion operations; in such cases an interpreter might be preferred because of its speed of translation, since most of the execution time for the program would be consumed by the standard library routines anyway. These routines would be the same regardless of whether a compiler or an interpreter is used.
In some languages the type of a variable can change during the execution of a program. Some languages also use dynamic scoping, in which the variables that can be referred to by a function or a subroutine are determined by the sequence of calls made during execution, not by the nesting of blocks in the source program. It is difficult to compile such languages efficiently while allowing for dynamic changes in the types of variables and the scope of names. These features are more easily handled by an interpreter, which provides delayed binding of symbolic variable names to data types and locations.
4.3 P-CODE COMPILERS
P-code compilers, also called bytecode compilers, are very similar in concept to interpreters. In a P-code compiler, the intermediate form is the machine language for a hypothetical computer, often called a pseudo-machine or P-machine. The process of using such a P-code compiler is shown in fig. 43.
The main advantage of this approach is the portability of software. It is not necessary for the compiler to generate different code for different computers, because the P-code object program can be executed on any machine that has a P-code interpreter. Even the compiler itself can be transported if it is written in the language that it compiles: to accomplish this, the source version of the compiler is compiled into P-code, and this P-code can then be interpreted on another machine. In this way a P-code compiler can be used without modification on a wide variety of systems, provided a P-code interpreter is written for each different machine.
[Fig. 43 Translation and execution using a P-code compiler: the source program is compiled into a P-code object program, which is then executed by a P-code interpreter on the target machine.]
The design of a P-machine and the associated P-code is often related to the requirements of the language being compiled. For example, the P-code for a Pascal compiler might include single P-instructions that:
perform array-subscript calculations,
handle the details of procedure entry and exit, and
perform elementary operations on sets.
This simplifies the code-generation process, leading to a smaller and more efficient compiler.
The P-code object program is often much smaller than a corresponding machine-code program, which is particularly useful on machines with severely limited memory size.
The interpretive execution of a P-code program may be much slower than the execution of equivalent machine code. Many P-code compilers are designed for a single user running on a dedicated micro-computer system; in that case the speed of execution may be relatively insignificant, because the limiting factor in system performance may be the response time and "think time" of the user.
If execution speed is important, some P-code compilers support the use of machine-language subroutines. By rewriting a small number of commonly used routines in machine language, rather than P-code, it is often possible to improve performance substantially. Of course, this approach sacrifices some of the portability associated with the use of P-code compilers.
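As an illustration of interpretive execution, here is a toy P-machine interpreter in Python. The instruction set (a small stack machine with PUSH/LOAD/ADD/MUL/STORE) is an assumption made for this sketch, not the actual P-code of any Pascal compiler.

def interpret(pcode):
    """Execute P-code instructions on a simple stack machine."""
    stack, memory = [], {}
    for op, *args in pcode:
        if op == "PUSH":                  # push a constant
            stack.append(args[0])
        elif op == "LOAD":                # push a variable's value
            stack.append(memory[args[0]])
        elif op == "ADD":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "MUL":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
        elif op == "STORE":               # pop the top of stack into a variable
            memory[args[0]] = stack.pop()
    return memory

# SUM := 2 + 3 * 4, compiled to this hypothetical P-code:
program = [("PUSH", 2), ("PUSH", 3), ("PUSH", 4),
           ("MUL",), ("ADD",), ("STORE", "SUM")]
print(interpret(program))   # {'SUM': 14}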
8.4.2 COMPILER-COMPILERS
A compiler-compiler is a software tool that can be used to help construct compilers: it takes as input a description of the lexical and syntactic rules of a language, together with semantic routines, and produces a compiler. Some compiler-compilers generate scanners and parsers directly. Others create tables for use by standard table-driven scanning and parsing routines that are supplied by the compiler-compiler.
[Fig. 44 Automated compiler construction using a compiler-compiler: the lexical rules, the grammar and the semantic routines are given to the compiler-compiler, which produces a compiler consisting of a scanner, a parser and a code generator.]