TOC Final PDF


Group 1

Question#1

Consider the following grammar:


S → T⊣
T → TaPb ∣ TbMa ∣ ε
P → PaPb ∣ ε
M → MbMa ∣ ε
a) Construct the automaton needed for the DK-test with lookahead
b) Would you conclude that the grammar is unambiguous? State why in terms of
consistency rules.
c) Write down the language generated by the above grammar.
Solution:
a) DK-Automaton

b)
The DK automaton in the figure shows that the grammar passes the test and is unambiguous.
Observe that every final state contains exactly one completed rule (having the dot at the
end) and no other rule with a terminal symbol (a or b) following a dot.
c)
The grammar generates the following language:
Question#2

(a) Is this grammar deterministic or not?


S→T⊣
T → T (T) ∣ ɛ
State the reason.
(b) Prove it with the DK test.
(c) Prove it by reduction, considering the following string.
Solution:
(a) Yes, this grammar is deterministic because it is unambiguous: rule R1 ends with the endmark ⊣, and R2 is the only rule for T, so the handle is unique for any given string that belongs to this DCFG.
(b)

(c)

Group 2
Question#1
Design a Turing machine M that decides A = {0^(2^n) | n ≥ 0}, the language consisting of all strings of 0s whose length is a power of 2.
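The standard decider sweeps across the tape, crossing off every other 0, and rejects whenever a sweep finds an odd number of 0s greater than one; the string is accepted when a single 0 remains. A minimal Python sketch of that halving argument (the function name is mine, and an explicit counter stands in for the tape sweeps):

```python
def is_power_of_two_zeros(s):
    """Decide A = {0^(2^n) | n >= 0}: repeatedly halve the count of
    0s, as one crossing-off sweep of the TM would; reject if an odd
    count greater than 1 ever appears."""
    if s == "" or set(s) != {"0"}:
        return False
    count = len(s)
    while count > 1:
        if count % 2 == 1:      # odd number of 0s (> 1): not a power of 2
            return False
        count //= 2             # one sweep crosses off every other 0
    return True
```

Each sweep halves the count, so the machine accepts exactly when the original length was a power of 2.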

Question#2
a) Write down the formal definition of a Turing machine.
Answer:

b) Design the state diagram of the Turing Machine that accepts the language: L = {w#w | w ∈ {a,b}*}
Answer:
If we take a string w = abaab, then w#w = abaab#abaab. The following would be the state diagram of the Turing machine that accepts the strings belonging to this language.
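The TM zig-zags across the #, matching the i-th symbol on each side and crossing both off. This Python sketch (function name mine) checks exactly the membership condition the machine decides, rather than simulating the tape:

```python
def accepts_w_hash_w(s):
    """Decide L = {w#w | w in {a,b}*}: exactly one #, only a/b
    elsewhere, and identical halves on both sides of the #."""
    if s.count("#") != 1:
        return False
    left, right = s.split("#")
    if set(left + right) - {"a", "b"}:   # any symbol outside {a,b}
        return False
    return left == right                 # the zig-zag comparison
```

For example, abaab#abaab is accepted while ab#ba is rejected.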
Group 3
Question#1

Q1: Design a two-tape Turing machine for {0^n 1^n | n ≥ 0}. Give its formal description and show how we can simulate it using a single-tape Turing machine.
Answer:
Two-Tape Turing Machine for {0^n 1^n | n ≥ 0}:
Formal Description:
M = (Q, Σ, Γ, δ, q1, qaccept, qreject), where
Σ = {0,1}
Γ = {0,1,x,␣}
q1 = Start state
qaccept = Accept state
δ function is given below:
δ(q1, 0␣) = (q2, 0x, RR)
δ(q2, 0␣) = (q2, 00, RR)
δ(q2, 1␣) = (q3, 1␣, SL)
δ(q3, 10) = (q3, 10, RL)
δ(q3, 1x) = (q4, 1x, RS)
δ(q4, ␣x) = (qaccept, ␣x, LS)
qreject = To simplify the figure, we don’t show the reject state or the transitions going to
the reject state. Those transitions occur implicitly whenever a state lacks an outgoing
transition for a particular symbol.
Tape Condition:

Simulation on Single Tape TM:

S = “On input w = 000111:


1. First S puts its tape into the format that represents both the tapes of M. The formatted tape contains
#000111#␣#···#.

2. To simulate a single move, S scans its tape from the first #, which marks the left-hand end, to the third #, which
marks the right-hand end, in order to determine the symbols under the virtual heads.

#•000111 #•␣# ···#.

3. Then S makes a second pass to update the tapes according to the way that M’s transition function dictates.
4. If at any point S moves one of the virtual heads to the right onto a #, this action signifies that M has moved the
corresponding head onto the previously unread blank portion of that tape. So, S writes a blank symbol on this
tape cell and shifts the tape contents, from this cell until the rightmost #, one unit to the right.

5. Then it continues the simulation as before.”
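Step 1's tape format can be illustrated with a small Python helper (the name and the dot-before-symbol convention are my assumptions; the dot marks each virtual head):

```python
def format_tapes(*tapes):
    """Lay the contents of M's tapes on S's single tape, delimited
    by #, with a dot marking each virtual head (which starts on the
    leftmost cell). An empty tape is shown as a single blank ␣."""
    parts = []
    for t in tapes:
        cells = list(t) if t else ["␣"]
        cells[0] = "•" + cells[0]      # dot the head position
        parts.append("".join(cells))
    return "#" + "#".join(parts) + "#"
```

For w = 000111 with a blank second tape, this produces #•000111#•␣#, matching the picture in step 2.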


Question#2

Q2: Let A be the language consisting of all strings representing undirected graphs that are connected. Give an example of an undirected graph and its encoding. Also give a high-level description of a TM M that decides A.
A ={⟨G⟩|G is a connected undirected graph}.
Answer:
Example of G :

Encoding of G:

High Level Description of Turing Machine:


The following is a high-level description of a TM M that decides A.

M = “On input ⟨G⟩, the encoding of a graph G:

1. Select the first node of G and mark it.


2. Repeat the following stage until no new nodes are marked:
3. For each node in G, mark it if it is attached by an edge to a node that is already marked.
4. Scan all the nodes of G to determine whether they all are marked. If they are, accept; otherwise, reject.”
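The four stages translate almost line for line into Python; this sketch (names mine) represents G as a node list and an edge list:

```python
def decides_connected(nodes, edges):
    """High-level TM M as Python: mark the first node, then
    repeatedly mark any node joined by an edge to a marked node;
    accept iff every node ends up marked."""
    if not nodes:
        return True
    marked = {nodes[0]}                  # stage 1: mark the first node
    changed = True
    while changed:                       # stage 2: repeat until stable
        changed = False
        for u, v in edges:               # stage 3: spread the marks
            if u in marked and v not in marked:
                marked.add(v); changed = True
            if v in marked and u not in marked:
                marked.add(u); changed = True
    return marked == set(nodes)          # stage 4: all marked?
```

Each pass of the while loop corresponds to one iteration of stage 2, and it terminates because a pass that marks nothing new ends the loop.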

Group 4
Question#1

Prove that the language L = {a^n b^n | n ≥ 0} is a non-regular language and can be recognized by a Turing machine.
Solution:
Since all regular languages can be modelled by finite state machines (FSMs), if the above language were regular we should be able to construct a DFA or NFA for it; in reality, that is not possible. If we attempt to find a DFA that recognizes L, we discover that the machine seems to need to remember how many a's have been seen so far as it reads the input. Because the number of a's isn't limited, the machine would have to keep track of an unlimited number of possibilities, but it cannot do so with any finite number of states.
A Turing machine, however, can recognize the strings of this language by repeatedly matching each a with a corresponding b.
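The zig-zag Turing machine idea can be sketched in Python (function name mine; a list stands in for the tape):

```python
def tm_recognize_anbn(s):
    """Zig-zag TM sketch for {a^n b^n}: on each pass, cross off the
    leftmost a and the rightmost b; accept when the tape is empty,
    reject if a pass finds the wrong symbol at either end."""
    tape = list(s)
    while tape:
        if tape[0] == "a" and tape[-1] == "b":
            tape = tape[1:-1]     # cross off one a and one b
        else:
            return False
    return True
```

Each pass removes one matched a/b pair, so the input is accepted exactly when it has the form a^n b^n.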

Question#2
Question # 2: Construct a Turing machine for the given CFG to show whether the given grammar is decidable, and also describe the relationship among classes of languages.
S → ABA
A → 0
B → 1B ∣ ε

Solution:
S → ABA
A → 0
B → 1B ∣ ε
Step 1: The language represented by the given CFG is 01*0,

where
V = {S, A, B} (the variables),
Σ = {0, 1} (the terminals),
R = the set of rules above, and
S = the start variable.
Step 2: Turing machine for
L = 01*0

This machine accepts all strings that are in this language and rejects all others.

Part 2:
Relationship among classes of languages
→ All regular languages are context free, but not all context-free languages are regular. Regular languages are a subset of context-free languages.
→ A language is called decidable if there is a Turing machine that accepts and halts on every input string w.
→ Turing recognizable:
A language is recognizable if there is a Turing machine that halts and accepts exactly the strings in the language; for strings not in the language, the TM either rejects or does not halt at all. The Turing-recognizable languages form a broader class than the decidable ones.

Group 5
Question#1

Q No. 1: Prove the Undecidability of the language

ATM = {⟨M, w⟩| M is a TM and M accepts w}.

We assume that ATM is decidable and obtain a contradiction. Suppose that H is a decider
for ATM. On input ⟨M, w⟩, where M is a TM and w is a string, H halts and accepts if M
accepts w. Furthermore, H halts and rejects if
M fails to accept w. In other words, we assume that H is a TM, where

Now we construct a new Turing machine D with H as a subroutine. This new TM calls H
to determine what M does when the input to M is its own description ⟨M⟩. Once D has
determined this information, it does the opposite. That is, it rejects if M accepts and
accepts if M does not accept. The following is a description of D.
D = “On input ⟨M⟩, where M is a TM:
1. Run H on input ⟨M,⟨M⟩⟩.
2. Output the opposite of what H outputs. That is, if H accepts, reject; and if H rejects,
accept.”

Don’t be confused by the notion of running a machine on its own description! That is
similar to running a program with itself as input, something that does occasionally occur in
practice. For example, a compiler is a program that translates other programs. A compiler
for the language Python may itself be written in Python, so running that program on itself
would make sense.

In summary,
What happens when we run D with its own description ⟨D⟩ as input? In that
case, we get

No matter what D does, it is forced to do the opposite, which is obviously a contradiction.


Thus, neither TM D nor TM H can exist.

Let’s review the steps of this proof. Assume that a TM H decides ATM. Use H to build a
TM D that takes an input ⟨M⟩, where D accepts its input ⟨M⟩ exactly when M does not
accept its input ⟨M⟩. Finally, run D on itself. Thus, the machines take the following
actions, with the last line being the contradiction.
• H accepts ⟨M, w⟩ exactly when M accepts w.
• D rejects ⟨M⟩ exactly when M accepts ⟨M⟩.
• D rejects ⟨D⟩ exactly when D accepts ⟨D⟩.

Question#2

Q No. 2: Let B be the set of all infinite sequences over {0,1}. Show that B is
uncountable using a proof by diagonalization.
Each element in B is an infinite sequence (b1, b2, b3,…), where each bi ∈ {0,1}. Suppose
B is countable. Then we can define a correspondence f between N = {1,2,3,...} and B.
Specifically, for n ∈ N, let f(n) = (bn1, bn2, bn3, ...), where bni is the ith bit in the nth
sequence, i.e.,
Now define the infinite sequence c = (c1, c2, c3, c4, c5, ...) ∈ B, where ci = 1−bii for each i ∈ N. In other words, the
ith bit in c is the opposite of the ith bit in the ith sequence. For example, if

then we would define c = (1,1,0,0,...). Thus, for each n = 1,2,3,..., note that c ∈ B differs from the nth sequence in
the nth bit, so c does not equal f(n) for any n, which is a contradiction. Hence, B is uncountable.
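The construction of c can be demonstrated on any finite prefix of a purported enumeration; a hypothetical example in Python, where f is a list holding the first n sequences truncated to their first n bits:

```python
def diagonal(f, n):
    """Build the first n bits of c, where c_i = 1 - b_ii: the i-th
    bit of c is the opposite of the i-th bit of the i-th sequence,
    so c differs from every listed sequence at the diagonal."""
    return [1 - f[i][i] for i in range(n)]
```

Since c differs from f(i) in bit i for every i, no enumeration can list it, which is the contradiction.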

Group 6
Question#1
Q1: Prove that HALTTM is undecidable using reducibility.
Ans: Let’s assume for the purpose of obtaining a contradiction that TM “R” decides HALTTM. We construct
TM “S” to decide ATM, with S operating as follows.
S = “On input ⟨M,w⟩, an encoding of a TM “M” and a string w:
1. Run TM R on input ⟨M,w⟩.
2. If R rejects, reject .
3. If R accepts, simulate M on w until it halts.
4. If M has accepted, accept; if M has rejected, reject .”
Clearly, if R decides HALTTM, then S decides ATM. Because ATM is undecidable, HALTTM also must be
undecidable.
Question#2
Q2: a) Let M be an LBA; exactly how many distinct configurations does it have?
b) How do we know when an LBA is in a loop?
c) In the proof that "ALLCFG is undecidable", why is the representation of computation histories modified, and how is it modified?
Ans: a) Let M be an LBA with q states and g symbols in the tape alphabet. There are exactly q·n·g^n distinct configurations of M for a tape of length n.
b) The idea for detecting when M is looping is that as M computes on w, it goes from configuration to configuration. If M ever repeats a configuration, it would go on to repeat this configuration over and over again and thus be in a loop. Since M has only q·n·g^n distinct configurations, a computation running for more than q·n·g^n steps must have repeated one, so the simulating machine can reject at that point.
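The configuration count is simple arithmetic; a small helper (name mine) makes the bound concrete:

```python
def lba_configurations(q, g, n):
    """Distinct configurations of an LBA with q states and g tape
    symbols on a tape of length n: q states, n head positions, and
    g**n possible tape contents."""
    return q * n * g ** n
```

A simulator can therefore reject any run of M on w that exceeds this many steps, since some configuration must have repeated by then.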
c) Why: The reason is that when Ci is popped off the stack, it is in reverse order and not suitable for
comparison with Ci+1. That’s why the representation is modified.
How: Every other configuration appears in reverse order. The odd positions remain written in the forward
order, but the even positions are written backward. Thus, an accepting computation history would appear
as shown below:
# C1 # C2(Reverse) # C3 # C4(Reverse) # ... # Cl #

Group 7
Question#1

Q1: Prove that Post Corresponding Problem (PCP) is undecidable.


PCP = {⟨P⟩| P is an instance of the Post Correspondence Problem with a match}
MPCP = {⟨P⟩| P is an instance of the Post Correspondence Problem with a match that starts with the first
domino}
Question#2

Q2: Prove that EQTM is neither Turing-recognizable nor co-Turing-recognizable.


Proof
ATM = {⟨M, w⟩ | M is a TM and M accepts w}
EQTM = {⟨M1, M2⟩ | M1 and M2 are TMs and L(M1) = L(M2)}
First, we show that EQTM is not Turing-recognizable. We do so by showing that ATM is reducible to the complement of EQTM. As we know, ATM is undecidable yet Turing-recognizable, so its complement is not Turing-recognizable; the reduction then carries this over to EQTM. The reducing function f works as follows.
F = “On input ⟨M, w⟩, where M is a TM and w a string:
1. Construct the following two machines, M1 and M2.
M1 = “On any input:
1. Reject.”
M2 = “On any input:
1. Run M on w. If it accepts, accept.”
2. Output ⟨M1, M2⟩.”
Here, M1 accepts nothing. If M accepts w, M2 accepts everything, and so the two machines are not
equivalent. Conversely, if M doesn't accept w, M2 accepts nothing, and they are equivalent. Thus, f reduces ATM to the complement of EQTM, as desired.
To show that the complement of EQTM is not Turing-recognizable, we give a reduction from ATM to the complement of that language, namely EQTM itself. Hence, we show that ATM ≤m EQTM. The following TM G computes the reducing function g.
G = “On input ⟨M, w⟩, where M is a TM and w a string:
1. Construct the following two machines, M1 and M2.
M1 = “On any input:
1. Accept.”
M2 = “On any input:
1. Run M on w.
2. If it accepts, accept.”
2. Output ⟨M1, M2⟩.”
The only difference between f and g is in machine M1. In f, machine M1 always rejects, whereas in g it
always accepts. In both f and g, M accepts w iff M2 always accepts. In g, M accepts w iff M1 and M2 are
equivalent. That is why g is a reduction from ATM to EQTM.

Group 8
Group 9
Question#1

Q1. A triangle in an undirected graph is a 3-clique. Show that TRIANGLE ∈ P, where


TRIANGLE = { (G) | G contains a triangle }.
Answer: Let G = (V, E) be a graph with a set V of vertices and a set E of edges.
We enumerate all triples (u, v, w) with vertices u, v, w ∈ V and u < v < w, and then check whether or not all three edges (u, v), (v, w), and (u, w) exist in E. Enumeration of all triples requires O(|V|^3) time. Checking whether or not all three edges belong to E takes O(|E|) time. Thus, the overall time is O(|V|^3 |E|), which is polynomial in the length of the input (G). Therefore, TRIANGLE ∈ P.
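The enumeration argument is easy to make concrete; a Python sketch (names mine), treating edges as unordered pairs:

```python
from itertools import combinations

def has_triangle(V, E):
    """Brute-force polynomial algorithm for TRIANGLE: enumerate all
    triples u < v < w and check that all three edges are present.
    O(|V|^3) triples, each checked against the edge set."""
    edges = {frozenset(e) for e in E}     # undirected: ignore order
    return any(
        {frozenset((u, v)), frozenset((v, w)), frozenset((u, w))} <= edges
        for u, v, w in combinations(sorted(V), 3)
    )
```

With the edge set stored as a hash set, each triple is checked in O(1) expected time, which only improves on the O(|E|) bound used above.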
Remark: Note that for TRIANGLE, we are looking for a clique of fixed size 3, so even
though the 3 is in the exponent of the time bound, the exponent is a constant, so the time
bound is polynomial. We could modify the above algorithm for TRIANGLE to work for
CLIQUE = {(G, k) | G is an undirected graph with a k-clique} by enumerating all collections
of k vertices, where k is the size of the clique desired. But the number of such collections is

so the time bound is O(|V|^k · k · |E|), which is exponential in k. Because k is part of the input (G, k), the time bound is no longer polynomial. Hence, we cannot use this algorithm to show that CLIQUE ∈ P. Nor does it show that CLIQUE ∉ P, since we've only shown that one algorithm doesn't have polynomial runtime; there might be another algorithm for CLIQUE that does run in polynomial time. However, at this time it is unknown whether CLIQUE ∈ P or CLIQUE ∉ P. Because CLIQUE is NP-complete, this question will be answered if anyone solves the P vs. NP problem, which is still unresolved.

Question#2

Q4)a. Show that P is closed under Union.

Answer: Suppose that language L1 ∈ P and language L2 ∈ P. Thus, there are


polynomial-time TMs M1 and M2 that decide L1 and L2, respectively.
Specifically, suppose that M1 has running time O(n^k1) and M2 has running time O(n^k2), where n is the length of the input w, and k1 and k2 are constants. A Turing machine M3 that decides L1 ∪ L2 is the following:
M3 = “On input w:
1. Run M1 with input w. If M1 accepts, accept.
2. Run M2 with input w. If M2 accepts, accept.
Otherwise, reject.”
Thus, M3 accepts w if and only if either M1 or M2 (or both) accepts w. The running time of M3 is O(n^k1) + O(n^k2) = O(n^max(k1,k2)), which is still polynomial in n; i.e., the sum of two polynomials is also a polynomial. Thus, the overall running time of M3 is also polynomial.
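The construction of M3 can be sketched as a higher-order function, where m1 and m2 stand for the deciders of L1 and L2 (names mine):

```python
def union_decider(m1, m2):
    """Build a decider for L1 ∪ L2 from deciders m1 and m2.
    Polynomial runtimes add, so the result stays polynomial."""
    def m3(w):
        if m1(w):        # step 1: run M1; accept if it accepts
            return True
        return m2(w)     # step 2: otherwise the answer is M2's
    return m3
```

For instance, combining a decider for even-length strings with one for strings starting with "a" yields a decider for the union of those two languages.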
(b) Show that P is closed under complementation.
Answer: Suppose that language L1 ∈ P, so there is a polynomial-time TM M1 that decides L1. A Turing machine M2 that decides the complement of L1 is the following:

M2 = “On input w:
1. Run M1 with input w.
If M1 accepts, reject ; otherwise, accept.”

The TM M2 just outputs the opposite of what M1 outputs, so M2 decides the complement of L1 and runs in the same polynomial time as M1.

Group 10
Question#1
a) What is a clique?

• A clique is a subgraph of an undirected graph that is complete (every two of its nodes are connected by an edge).
• A k-clique contains k nodes.
• Example: 5-clique

b) Convert this Boolean formula into a graph, using the clique theorem.

SOLUTION:
Question#2

What is the difference between NP-hard and NP-complete?

NP: problems whose solutions can be verified in polynomial time.

NP-hard: a problem to which every problem in NP can be reduced (i.e., a problem at least as hard as any NP problem) is called NP-hard.

NP-complete vs NP-hard

1. NP-complete: every problem in NP can be reduced to an NP-complete problem, and NP-complete ⊂ NP.
2. NP-hard: every problem in NP can be reduced to an NP-hard problem, but NP-hard ⊄ NP and NP ⊄ NP-hard.
3. NP-complete ⊂ NP
4. NP-complete ⊂ NP-hard
5. NP-complete = NP ∩ NP-hard

Group 11
Question#1
Question 1: To prove that HAMPATH is an NP-complete problem, show that the given Boolean formula is reducible to HAMPATH.
SOLUTION:
Question#2
Q2: Use following graph to prove that UHAMPATH is NP-complete.
UHAMPATH in G’ is : S, u2in, u2mid, u2out, u1in, u1mid, u1out, u3in, u3mid, u3out, u4in, u4mid, u4out, u5in,
u5mid, u5out, u6in, u6mid, u6out, T
As the given directed graph G is reduced to the undirected graph G', this shows that UHAMPATH is NP-complete.

Group 12
Question#1
What is the yieldability problem in Savitch's theorem, and how can we solve it?
ANSWER:
Given two configurations c1 and c2 of an NTM (nondeterministic Turing machine), together with a number t, we test whether the NTM can get from c1 to c2 within t steps using only f(n) space. This problem is called the yieldability problem.
c1 is the start configuration,
c2 is the accept configuration, and
t is the maximum number of steps the nondeterministic machine may take.
For this purpose, we define a recursive function CAN_YIELD(c1, c2, t) that checks whether c1 can yield c2 in at most t steps, as follows:
Function CAN_YIELD(c1, c2, t):
1. If t = 1, test whether c1 = c2 or whether c1 yields c2 in one step using the rules of NTM N. Accept if either test succeeds; reject otherwise.
2. If t > 1, then for each configuration cm of N on w using f(n) space:
a. Run CAN_YIELD(c1, cm, t/2)
b. Run CAN_YIELD(cm, c2, t/2)
c. If both accept, accept.
3. If we haven't accepted yet, reject.
We modify N a bit, and define some terms:
• We modify N so that when it accepts, it clears the tape and moves the tape head to
leftmost cell. We denote such a configuration c(accept)
• Let c(start) = start configuration of N on w
• Select a constant d such that N has at most 2^(d·f(n)) configurations (which gives an upper bound on N's running time)
Based on this new N, there exists a DTM M that simulates N as follows:
M = “On input w:
1. Output the result of
CAN_YIELD(c(start), c(accept), 2^(d·f(n)))”
• When CAN_YIELD invokes itself recursively, it needs to store c1, c2, t, and the
configuration it is testing (so that these values can be restored upon return from the
recursive call)
• Each level of recursion thus uses O(f(n)) space
• Height of the recursion: log 2^(d·f(n)) = d·f(n) = O(f(n))
• Total space: O(f(n)) per level over O(f(n)) levels = O(f^2(n))
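Under the assumption of a finite configuration set and a one-step yield relation `step`, CAN_YIELD can be sketched directly in Python (parameter names mine; a real implementation would enumerate the configurations rather than receive them):

```python
def can_yield(step, c1, c2, t, configs):
    """Savitch-style recursive test: can configuration c1 reach c2
    in at most t steps? step(a, b) says whether a yields b in one
    move; configs is the finite set of candidate midpoints. The
    recursion depth is O(log t), which is the source of the
    quadratic space bound."""
    if t <= 1:
        return c1 == c2 or step(c1, c2)
    half = t // 2
    for cm in configs:                          # try every midpoint
        if (can_yield(step, c1, cm, half, configs) and
                can_yield(step, cm, c2, t - half, configs)):
            return True
    return False
```

With unit steps i → i+1 on {0,1,2,3}, reaching 3 from 0 needs three steps, so the test succeeds for t = 3 but fails for t = 2.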
Question#2
Explain the relationship between P, NP, PSPACE, NPSPACE, and EXPTIME with a hierarchy diagram.
ANSWER:
Group 13
Question#1

Why are complete problems so important for PSPACE-completeness?


Ans: Complete problems are important because they are examples of the most difficult problems in a complexity class. A complete problem is the most difficult because any other problem in the class is easily reduced to it. So if we find an easy way to solve the complete problem, we can easily solve all other problems in the class. The reduction must be easy, relative to the complexity of typical problems in the class, for this reasoning to apply. If the reduction itself were difficult to compute, an easy solution to the complete problem wouldn't necessarily yield an easy solution to the problems reducing to it.

Question#2

Write the algorithm for TQBF (true quantified Boolean formulas).


• Answer: T = “On input ⟨φ⟩, a fully quantified Boolean formula:

• 1. If φ contains no quantifiers, then it is an expression with only constants, so evaluate φ and accept if it is true; otherwise, reject.

• 2. If φ equals ∃x ψ, recursively call T on ψ, first with 0 substituted for x and then with 1 substituted for x. If either result is accept, then accept; otherwise, reject.

• 3. If φ equals ∀x ψ, recursively call T on ψ, first with 0 substituted for x and then with 1 substituted for x. If both results are accept, then accept; otherwise, reject.”
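The three cases map directly onto a recursive evaluator. In this sketch the encoding is my assumption: a formula is either a tuple ('E', x, ψ) for ∃x ψ, a tuple ('A', x, ψ) for ∀x ψ, or a quantifier-free body given as a function of the current assignment:

```python
def tqbf(phi, env=None):
    """Evaluate a fully quantified Boolean formula, mirroring the
    algorithm T above: strip one quantifier per recursive call,
    trying both 0 and 1 for its variable. Each recursion level
    stores only one variable binding, matching TQBF ∈ PSPACE."""
    env = env or {}
    if callable(phi):                    # case 1: no quantifiers left
        return phi(env)
    q, x, psi = phi
    results = [tqbf(psi, {**env, x: b}) for b in (0, 1)]
    return any(results) if q == "E" else all(results)   # cases 2 and 3
```

For example, ∀x ∃y (x ⊕ y) evaluates to true, while ∃x ∀y (x ∧ y) evaluates to false.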

Group 14
Question#1
Q 1 (a) Describe the Classes L and NL.

Answer: We examine smaller, sublinear space bounds. In time complexity,


sublinear bounds are insufficient for reading the entire input, so we don’t consider
them here. In sublinear space complexity, the machine is able to read the entire
input but it doesn’t have enough space to store the input.
We introduce a Turing machine with two tapes: a read-only input tape, and a
read/write work tape. On the read-only tape, the input head can detect symbols
but not change them. The input head must remain on the portion of the tape
containing the input. The work tape may be read and written in the usual way. Only
the cells scanned on the work tape contribute to the space complexity of this type
of Turing machine.
L is the class of languages that are decidable in logarithmic space on a deterministic
Turing machine. In other words,
L = SPACE(log n)
NL is the class of languages that are decidable in logarithmic space on a
nondeterministic Turing machine. In other words,
NL = NSPACE(log n)
(b) The language A = {0^k 1^k | k ≥ 0} is a member of L.
Answer: A Turing machine can decide A by zig-zagging back and forth across the input, crossing off the 0s and 1s as they are matched. That algorithm uses linear space to record which positions have been crossed off, but it can be modified to use only log space.
The log space TM for A cannot cross off the 0s and 1s that have been matched on the input
tape because that tape is read-only. Instead, the machine counts the number of 0s and,
separately, the number of 1s in binary on the work tape. The only space required is that used to
record the two counters. In binary, each counter uses only logarithmic space and hence the
algorithm runs in O(log n) space. Therefore, A ∈ L.
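The two-counter idea can be sketched in Python; the integers below play the role of the O(log n)-bit binary counters on the work tape (function name mine):

```python
def in_A(s):
    """Log-space idea for A = {0^k 1^k | k >= 0}: instead of
    crossing off symbols on the read-only input, count the 0s and
    the 1s in two counters and compare them at the end."""
    zeros = ones = 0          # the two O(log n)-bit counters
    seen_one = False
    for ch in s:
        if ch == "0":
            if seen_one:      # a 0 after a 1: wrong shape, reject
                return False
            zeros += 1
        elif ch == "1":
            seen_one = True
            ones += 1
        else:
            return False      # symbol outside {0,1}
    return zeros == ones
```

Only the two counters (and one flag) are kept, which is the log-space budget; the input itself is read but never written.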
Question#2

Q2: Prove that NL EQUALS coNL.

Answer: NL = coNL.
We show that the complement of PATH is in NL, and thereby establish that every problem in coNL is also in NL, because PATH is NL-complete. The NL algorithm M that we present for the complement of PATH must have an accepting computation whenever the input graph G does not contain a path from s to t. First, let's tackle an easier problem. Let c be the
number of nodes in G that are reachable from s. We assume that c is provided as
an input to M and show how to use c to solve the complement of PATH. Later we show how to
compute c. Given G, s, t, and c, the machine M operates as follows. One by one,
M goes through all the m nodes of G and nondeterministically guesses whether
each one is reachable from s. Whenever a node u is guessed to be reachable, M
attempts to verify this guess by guessing a path of length m or less from s to u. If a
computation branch fails to verify this guess, it rejects. In addition, if a branch
guesses that t is reachable, it rejects. Machine M counts the number of nodes
that have been verified to be reachable. When a branch has gone through all of
G’s nodes, it checks that the number of nodes that it verified to be reachable from
s equals c, the number of nodes that actually are reachable, and rejects if not.
Otherwise, this branch accepts. In other words, if M nondeterministically selects
exactly c nodes reachable from s, not including t, and proves that each is
reachable from s by guessing the path, M knows that the remaining nodes,
including t, are not reachable, so it can accept. Next, we show how to calculate c,
the number of nodes reachable from s. We describe a nondeterministic log space
procedure whereby at least one computation branch has the correct value for c
and all other branches reject. For each i from 0 to m, we define Ai to be the
collection of nodes that are at a distance of i or less from s (i.e., that have a path
of length at most i from s). So A0 = {s}, each Ai ⊆ Ai+1, and Am contains all nodes
that are reachable from s. Let ci be the number of nodes in Ai. We next describe a
procedure that calculates ci+1 from ci. Repeated application of this procedure
yields the desired value of c = cm. We calculate ci+1 from ci, using an idea similar
to the one presented earlier in this proof sketch. The algorithm goes through all
the nodes of G, determines whether each is a member of Ai+1, and counts the
members. To determine whether a node v is in Ai+1, we use an inner loop to go
through all the nodes of G and guess whether each node is in Ai. Each positive
guess is verified by guessing the path of length at most i from s. For each node u
verified to be in Ai, the algorithm tests whether (u, v) is an edge of G. If it is an
edge, v is in Ai+1. Additionally, the number of nodes verified to be in Ai is
counted. At the completion of the inner loop, if the total number of nodes
verified to be in Ai is not ci, all Ai have not been found, so this computation
branch rejects. If the count equals ci and v has not yet been shown to be in Ai+1,
we conclude that it isn’t in Ai+1. Then we go on to the next v in the outer loop.
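The inductive computation of the counts c_i is easiest to see deterministically. This Python sketch (names mine) computes the sets A_i directly instead of guessing them, which is exactly the part the NL machine performs nondeterministically in log space:

```python
def reachable_count(nodes, edges, s):
    """Deterministic illustration of inductive counting: A_i is the
    set of nodes within distance i of s, and c_i = |A_i|. After m
    rounds (m = number of nodes), A_m holds every reachable node,
    so the final count is c = c_m."""
    A = {s}                                      # A_0 = {s}
    for _ in range(len(nodes)):                  # compute A_1 .. A_m
        A = A | {v for (u, v) in edges if u in A}
    return len(A)                                # c = c_m
```

Here edges are directed pairs (u, v); each round extends A_i to A_{i+1} by following one more edge, mirroring the inner loop of the proof.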

PDA related Questions


Q1: Construct Pushdown Automata for following languages and give their
formal descriptions.

(i) E = {a^i b^j c^k | i, j, k ≥ 0 and i + j = k}


(ii) D = {a^i b^j c^k | i, j, k ≥ 0, and i = j or j = k}
Answer: (i)
The state diagram for the PDA that recognizes E = {a^i b^j c^k | i, j, k ≥ 0 and i + j = k} is given below:
For every a and b read in the first part of the string, the PDA pushes an x onto the stack. Then it must
read a c for each x popped off the stack.

Formal description:
P1 = (Q, Σ, Γ, δ, q1, F), where

Q = {q1, q2, q3, q4, q5},

Σ= {a, b, c},

Γ= {x, $},

F ={q5},

and δ is given below; all transitions not listed are ∅:

δ(q1, ε, ε) = {(q2, $)}
δ(q2, a, ε) = {(q2, x)}    δ(q2, ε, ε) = {(q3, ε)}
δ(q3, b, ε) = {(q3, x)}    δ(q3, ε, ε) = {(q4, ε)}
δ(q4, c, x) = {(q4, ε)}    δ(q4, ε, $) = {(q5, ε)}
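Since the stack only ever holds x's above the bottom marker $, it behaves as a counter. A Python check of E's membership condition (the a*b*c* shape plus the count test; function name mine):

```python
import re

def in_E(s):
    """Membership test for E = {a^i b^j c^k | i + j = k}: the PDA
    pushes an x for every a or b and pops one x for every c, so a
    string is accepted iff it has shape a* b* c* with i + j = k."""
    m = re.fullmatch(r"(a*)(b*)(c*)", s)
    if not m:
        return False                 # letters out of order
    i, j, k = (len(g) for g in m.groups())
    return i + j == k                # stack empties exactly
```

For example, abcc is in E (1 + 1 = 2) while abc is not (1 + 1 ≠ 1).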

(ii)
The state diagram for the PDA that recognizes D = {a^i b^j c^k | i, j, k ≥ 0, and i = j or j = k} is given below:

The PDA has a nondeterministic branch at q1. If the string is a^i b^j c^k with i = j, then the PDA takes the branch from q1 to q2. If the string is a^i b^j c^k with j = k, then the PDA takes the branch from q1 to q5.

Formal Description:
P2 = (Q, Σ, Γ, δ, q1, F), where
Q = {q1, q2, q3, q4, q5, q6, q7, q8},

Σ= {a, b, c},

Γ= {a, b, $},

F = {q4, q8},

and δ is given below; all transitions not listed are ∅:

δ(q1, ε, ε) = {(q2, $), (q5, $)}
δ(q2, a, ε) = {(q2, a)}    δ(q2, ε, ε) = {(q3, ε)}
δ(q3, b, a) = {(q3, ε)}    δ(q3, ε, $) = {(q4, ε)}
δ(q4, c, ε) = {(q4, ε)}
δ(q5, a, ε) = {(q5, ε)}    δ(q5, ε, ε) = {(q6, ε)}
δ(q6, b, ε) = {(q6, b)}    δ(q6, ε, ε) = {(q7, ε)}
δ(q7, c, b) = {(q7, ε)}    δ(q7, ε, $) = {(q8, ε)}

Q2: (i) Prove the Lemma: “If a pushdown automaton recognizes some language,
then it is context free”.
(ii) Convert the following pushdown automaton to an equivalent CFG according to the procedure given in the proof of the above lemma.

Answer: (i)
Proof: We have a PDA P, and we want to make a CFG G that generates all the strings that P accepts.
Say that P = (Q, Σ, Γ, δ, q0, {qaccept}) and construct G.

• First, we simplify our task by modifying P slightly to give it the following three features.
1. It has a single accept state, qaccept.
2. It empties its stack before accepting.
3. Each transition either pushes a symbol onto the stack (a push move) or pops one off the
stack (a pop move), but it does not do both at the same time.
• The variables of G are {Apq| p, q ∈ Q}.
• The start variable is Aq0, qaccept.
• Now we describe G’s rules in three parts.
1. For each p, q, r, s ∈ Q, u ∈ Γ, and a, b ∈ Σε, if δ(p, a, ε) contains (r, u) and δ(s, b, u)
contains (q, ε), put the rule Apq → aArsb in G.
2. For each p, q, r ∈Q, put the rule Apq →Apr Arq in G.
3. Finally, for each p ∈Q, put the rule App → ε in G.

(ii) Transform PDA to CFG:
