AI Unit-1

ARTIFICIAL INTELLIGENCE
What is Artificial Intelligence?
• It is a branch of Computer Science that pursues creating the
computers or machines as intelligent as human beings.
• It is the science and engineering of making intelligent machines,
especially intelligent computer programs.
• It is related to the similar task of using computers to understand
human intelligence, but AI does not have to confine itself to methods
that are biologically observable.
Definition: Artificial Intelligence is the study of how to make computers do things,
which, at the moment, people do better.
According to the father of Artificial Intelligence, John McCarthy, it is “The science
and engineering of making intelligent machines, especially intelligent computer
programs”.
Artificial Intelligence is a way of making a computer, a computer-controlled robot,
or a software system think intelligently, in a manner similar to how intelligent humans think.
AI is accomplished by studying how the human brain thinks and how humans learn,
decide, and work while trying to solve a problem, and then using the outcomes of
this study as a basis for developing intelligent software and systems.
From a business perspective AI is a set of very powerful tools, and methodologies
for using those tools to solve business problems.
From a programming perspective, AI includes the study of symbolic programming,
problem solving, and search.
AI Vocabulary
1. Intelligence
2. Intelligent behaviour
3. Science based goals of AI
4. Engineering based goals of AI
5. AI Techniques
6. Learning
7. Applications of AI
Problems of AI
• Intelligence does not imply perfect understanding; every intelligent being has
limited perception, memory and computation. Many points on the spectrum of
intelligence versus cost are viable, from insects to humans. AI seeks to
understand the computations required for intelligent behaviour and to produce
computer systems that exhibit intelligence. Aspects of intelligence studied by AI
include perception, communication using human languages, reasoning,
planning, learning and memory.
The following questions are to be considered before we can step forward:
1. What are the underlying assumptions about intelligence?
2. What kinds of techniques will be useful for solving AI problems?
3. At what level can human intelligence be modelled?
4. How will we know when an intelligent program has been built?
Branches of AI
• Logical AI
• Search
• Pattern Recognition
• Representation
• Inference
• Common sense knowledge and Reasoning
• Learning from experience
• Planning
• Epistemology
• Ontology
• Heuristics
AI Technique

Artificial Intelligence research during the last three decades has concluded that
intelligence requires knowledge. Knowledge, however, possesses some less
desirable properties:
A. It is huge.
B. It is difficult to characterize correctly.
C. It is constantly varying.
D. It differs from data by being organized in a way that corresponds to its
application.
E. It is complicated.
An AI technique is a method that exploits knowledge
that is represented so that:
• It captures generalizations: situations that share properties are grouped
together rather than being represented separately.
• It can be understood by the people who must provide it, even though for many
programs the bulk of the data comes automatically from readings.
• It can easily be modified to correct errors and to reflect changes in real conditions.
• It can be used even if it is incomplete or inaccurate.
• It can be used to help overcome its own sheer bulk by helping to narrow the
range of possibilities that must usually be considered.
Tic-Tac-Toe

The first approach (simple)


The Tic-Tac-Toe board is represented by a nine-element vector called BOARD,
corresponding to squares 1 to 9 in three rows.
• An element contains the value 0 for blank, 1 for X and 2 for O.
• A MOVETABLE vector of 19,683 (3^9) elements is needed, where each
element is itself a nine-element vector.
• The contents of the vector are chosen specifically to support the algorithm.
The algorithm makes moves as follows:
1. View BOARD as a ternary number and convert it to a decimal number.
2. Use the decimal number as an index into MOVETABLE and access the vector stored there.
3. Set BOARD to this vector, indicating how the board looks after the move.
This approach is efficient in time, but it has several disadvantages:
it takes a great deal of space, and it requires considerable effort to compute the table entries.
The method is also specific to this game and cannot be generalized.
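The ternary-to-decimal conversion in step 1 can be sketched in a few lines. This is an illustrative sketch, not the textbook's code; the function name is assumed for the example.

```python
def board_to_index(board):
    """Interpret a nine-element board (0 = blank, 1 = X, 2 = O) as a
    ternary number and convert it to a decimal MOVETABLE index."""
    index = 0
    for cell in board:          # most significant ternary digit first
        index = index * 3 + cell
    return index

print(board_to_index([0] * 9))   # empty board -> index 0
print(board_to_index([2] * 9))   # all O's -> 3**9 - 1 = 19682
```

The index ranges over all 19,683 possible board encodings, which is why MOVETABLE needs exactly that many entries.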
The second approach
The structure of the data is as before, but we use 2 for a blank, 3 for an X and 5 for
an O.
A variable called TURN indicates the move number: 1 for the first move and 9 for the last.
The algorithm uses three procedures:
MAKE2: returns 5 if the centre square is blank;
otherwise it returns any blank non-corner square, i.e. 2, 4, 6 or 8.
POSSWIN(p): returns 0 if player p cannot win on the next move, and otherwise
returns the number of the square that gives a winning move.
It checks each line using products: 3*3*2 = 18 means X can win on that line,
5*5*2 = 50 means O can win, and the winning move is the blank square in the line.
GO(n): makes a move to square n, setting BOARD[n] to 3 or 5.
This algorithm is more involved and takes longer, but it is more efficient in storage,
which compensates for its longer running time. Its quality depends on the programmer's skill.
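The product trick behind POSSWIN can be sketched as follows. This is a hedged illustration of the idea, not the textbook's code; the line list and sample board are assumptions for the example.

```python
# Blank = 2, X = 3, O = 5, as in the second approach.
def posswin(board, lines, player):
    """Return the square that completes a win for player (3 for X, 5 for O),
    or 0 if no winning move exists.  board is 1-indexed via a dummy 0th cell."""
    target = player * player * 2      # 3*3*2 = 18 for X, 5*5*2 = 50 for O
    for line in lines:
        values = [board[i] for i in line]
        if values[0] * values[1] * values[2] == target:
            return line[values.index(2)]   # the blank square is the winning move
    return 0

# The eight winning lines: rows, columns, diagonals.
LINES = [(1, 2, 3), (4, 5, 6), (7, 8, 9), (1, 4, 7),
         (2, 5, 8), (3, 6, 9), (1, 5, 9), (3, 5, 7)]
# X occupies squares 1 and 2; square 3 completes the top row.
board = [0, 3, 3, 2, 2, 2, 2, 2, 2, 2]
print(posswin(board, LINES, 3))   # 3
```

Because 2, 3 and 5 are distinct primes, the product of a line uniquely identifies its contents, so the single comparison against 18 or 50 suffices.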
The final approach
The data structure consists of BOARD (a nine-element vector as before),
a list of the board positions that could result from the next move, and a number
estimating how likely each position is to lead to an ultimate win for
the player to move.
This algorithm looks ahead to decide which move is most promising
at each stage, and selects it:
Consider all possible moves and all possible replies that can be made.
Continue this process for as long as time permits, until a winner emerges, and then
choose the move that leads to the computer program winning, if possible in the
shortest time.
This approach is the most difficult of the three to program, but the technique
extends to any game. It also places relatively fewer demands on the programmer's
knowledge of the particular game.
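The look-ahead idea of the final approach can be sketched as a generic minimax routine. This is a hedged sketch, not the textbook's code; the toy game tree, and the `successors` and `utility` names, are assumptions for the example.

```python
def minimax(state, maximizing, successors, utility):
    """Look ahead over all moves and replies; return the value of the best
    outcome the player to move can force, assuming optimal play by both."""
    children = successors(state)
    if not children:                      # terminal position: score it
        return utility(state)
    values = [minimax(c, not maximizing, successors, utility) for c in children]
    return max(values) if maximizing else min(values)

# Toy two-ply game tree given as a dict; the leaves carry payoffs.
tree = {'root': ['a', 'b'], 'a': ['a1', 'a2'], 'b': ['b1', 'b2']}
payoff = {'a1': 3, 'a2': 5, 'b1': 2, 'b2': 9}
value = minimax('root', True, lambda s: tree.get(s, []), payoff.get)
print(value)   # max(min(3, 5), min(2, 9)) = 3
```

In a real game the recursion is cut off when time runs out and an estimate of the position is returned instead of an exact utility.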
Question Answering

• Let us consider Question Answering systems that accept input in English and
provide answers also in English. This problem is harder than the previous one as it
is more difficult to specify the problem properly. Another area of difficulty
concerns deciding whether the answer obtained is correct, or not, and further
what is meant by ‘correct’. For example, consider the following situation:
1.1 Text
• Rani went shopping for a new Coat.
• She found a red one she really liked. When she got home, she found that it went
perfectly with her favourite dress.
1.2 Question
1. What did Rani go shopping for?
2. What did Rani find that she liked?
3. Did Rani buy anything?
Method 1
Data Structures:
A set of templates that match common questions and produce patterns used to
match against the input text. Templates and patterns are paired so that a template
that matches a given question is associated with a corresponding pattern for finding
the answer in the text. For example, the template 'Who did X Y?' generates the
pattern 'X Y Z'; if a match occurs, Z is the answer to the question. The given text
and the question are both stored as strings.
Algorithm:
Answering a question requires the following steps to be followed:
• Compare the template against the questions and store all successful matches to
produce a set of text patterns.
• Pass these text patterns through a substitution process to change the person or
voice and produce an expanded set of text patterns.
• Apply each of these patterns to the text; collect all the answers and then print them.
Example
In question 1 we use the template WHAT DID X Y, which generates the pattern
'Rani go shopping for Z'; after substitution we get 'Rani goes shopping for Z' and
'Rani went shopping for Z', and matching against the text gives Z = 'a new coat'.

In question 2 we need a very large number of templates, and also a scheme to
allow the insertion of 'find' before 'that she liked', the insertion of 'really' in the
text, and the substitution of 'she' for 'Rani'; this gives the answer 'a red one'.
Question 3 cannot be answered by this method.
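The template-and-pattern mechanism of Method 1 can be sketched with regular expressions. This is a minimal illustration; the `answer` function and the single template pair are assumptions, not the system described in the text.

```python
import re

def answer(question, text, templates):
    """Match a question against each template; a matching template yields a
    text pattern whose capture group marks where the answer should appear."""
    for q_pat, t_pat in templates:
        m = re.match(q_pat, question, re.IGNORECASE)
        if m:
            found = re.search(t_pat.format(*m.groups()), text, re.IGNORECASE)
            if found:
                return found.group(1)
    return None   # the question cannot be answered from the text

# One hypothetical template pair for questions like "What did X go shopping for?"
templates = [(r"what did (\w+) go shopping for\?",
              r"{0} went shopping for (a [\w ]+?)\.")]
text = "Rani went shopping for a new Coat."
print(answer("What did Rani go shopping for?", text, templates))   # a new Coat
```

As the text notes, this approach needs a separate template for nearly every question form, which is why question 2 already strains it and question 3 defeats it.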


Method 2
A structure called English consists of a dictionary, a grammar and some semantics
covering the vocabulary we are likely to come across.
This data structure provides the knowledge needed to convert English text into a
storable internal form, and also to convert the response back into English.
The structured representation of the text is a processed form that defines the
context of the input text by making explicit all references such as pronouns.
There are three types of such knowledge representation systems: production rules
of the form 'if x then y', slot-and-filler systems, and statements in mathematical
logic. The system used here is the slot-and-filler system.
Algorithm
• Convert the question to a structured form using the English knowledge structure,
and use a marker to indicate the substring of the structure (such as 'who' or 'what')
that should be returned as the answer. If a slot-and-filler system is used, a special
marker can be placed in more than one slot.
• The answer is found by matching this structured form against the structured text:
the segments matching the marked parts of the question are returned.
Method 3
Data Structures
A world model contains knowledge about the objects, actions and situations
described in the input text. This structure is used to create an integrated
representation of the text. The diagram shows how the system's knowledge of
shopping might be represented and stored; this information is known as a script,
in this case a shopping script.
Algorithm
Convert the question to a structured form using both the knowledge contained in
Method 2 and the world model; the world model resolves any ambiguities that may
occur, and generates even more possible structures, since more knowledge is being
used. Sometimes filters are introduced to prune the possible answers.
The structured form is then matched against the text, and the requested segments
of the question are returned.
PROBLEMS, PROBLEM SPACES AND SEARCH

To solve a problem with AI techniques:
1. Define the problem precisely, including detailed specifications of what
constitutes a suitable solution.
2. Analyse the problem carefully, since some features may have a central effect on
the choice of solution method.
3. Isolate and represent the background knowledge needed to solve
the problem.
4. Choose the best problem-solving technique and apply it to the problem.
Problem solving is a process of generating solutions from observed data.
• a ‘problem’ is characterized by a set of goals,
• a set of objects, and
• a set of operations.
These could be ill-defined and may evolve during problem solving.
A ‘problem space’ is an abstract space.
• A problem space encompasses all valid states that can be generated by the
application of any combination of operators on any combination of objects.
• The problem space may contain one or more solutions. A solution is a
combination of operations and objects that achieves the goals.
A ‘search’ refers to the search for a solution in a problem space.
• Search proceeds with different types of ‘search control strategies’.
• Depth-first search and breadth-first search are the two most common search
strategies.
The Water Jug Problem

• We use two jugs called FOUR and THREE; FOUR holds a maximum of four gallons
of water and THREE a maximum of three gallons.
• How can we get exactly two gallons of water into the four-gallon jug?
The state space is the set of ordered pairs giving the number of gallons of water
in the two jugs at any time, i.e., (four, three) where four = 0, 1, 2, 3 or 4 and
three = 0, 1, 2 or 3.
The start state is (0, 0) and the goal state is (2, n), where n may be any value
from 0 to 3, since the problem does not specify how much water should be in the
three-gallon jug. The major production rules for solving this problem are listed below.
Production Rules for the Water Jug Problem
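The production rules (fill a jug, empty a jug, pour one jug into the other) can be combined with a breadth-first search over (four, three) states to find a shortest solution. This is an illustrative sketch under the standard rule set, not the textbook's program.

```python
from collections import deque

def water_jug(goal=2):
    """Breadth-first search over (four, three) states using the production
    rules: fill either jug, empty either jug, or pour one into the other."""
    def successors(four, three):
        return {
            (4, three), (four, 3),            # fill a jug to capacity
            (0, three), (four, 0),            # empty a jug onto the ground
            # pour FOUR into THREE until THREE is full or FOUR is empty
            (four - min(four, 3 - three), three + min(four, 3 - three)),
            # pour THREE into FOUR until FOUR is full or THREE is empty
            (four + min(three, 4 - four), three - min(three, 4 - four)),
        }
    start = (0, 0)
    frontier, parents = deque([start]), {start: None}
    while frontier:
        state = frontier.popleft()
        if state[0] == goal:                  # two gallons in the four-gallon jug
            path = []
            while state is not None:
                path.append(state)
                state = parents[state]
            return path[::-1]
        for nxt in successors(*state):
            if nxt not in parents:
                parents[nxt] = state
                frontier.append(nxt)
    return None

print(water_jug())   # a shortest 6-move plan, e.g. ending in (2, 0) or (2, 3)
```

Because BFS expands states layer by layer, the first goal state reached is guaranteed to lie on a shortest sequence of rule applications.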
PRODUCTION SYSTEMS

Production systems provide appropriate structures for performing and
describing search processes. A production system has four basic components,
as enumerated below.
• A set of rules each consisting of a left side that determines the applicability of
the rule and a right side that describes the operation to be performed if the
rule is applied.
• A database of current facts established during the process of inference.
• A control strategy that specifies the order in which the rules will be compared
with the facts in the database, and how to resolve conflicts when several rules
match at once.
• A rule firing module.
Control Strategies

The word ‘search’ refers to the search for a solution in a problem space.
• Search proceeds with different types of ‘search control strategies’.
• A strategy is defined by picking the order in which the nodes expand.
The Search strategies are evaluated along the following dimensions:
Completeness, Time complexity, Space complexity, Optimality (the search-
related terms are first explained, and then the search algorithms and control
strategies are illustrated next).
Search-related terms
Performance of an algorithm depends on internal and external factors.

Internal factors:
• Time required to run
• Space (memory) required to run
External factors:
• Size of the input to the algorithm
• Speed of the computer
• Quality of the compiler
Complexity is a measure of the performance of an algorithm. Complexity
measures the internal factors, usually in time rather than space.
Computational complexity
It is the measure of resources in terms of Time and Space.
If A is an algorithm that solves a decision problem f, then the run-time of A is the
number of steps it takes on an input of length n.
Time Complexity T(n) of a decision problem f is the run-time of the ‘best’
algorithm A for f. Space Complexity S(n) of a decision problem f is the amount of
memory used by the ‘best’ algorithm A for f.
HEURISTIC SEARCH
TECHNIQUES
Search Algorithms
Many traditional search algorithms are used in AI applications.
For complex problems, the traditional algorithms are unable to find the
solutions within some practical time and space limits.
Consequently, many special techniques are developed, using heuristic
functions.
Algorithms that use heuristic functions are called heuristic
algorithms.
• Heuristic algorithms are not really intelligent; they appear to be intelligent
because they achieve better performance.
• Heuristic algorithms are more efficient because they take advantage of
feedback from the data to direct the search path.

Uninformed search algorithms, or brute-force algorithms, search through
the search space for all possible candidate solutions, checking whether
each candidate satisfies the problem's statement.

Informed search algorithms use heuristic functions that are specific to the
problem, applying them to guide the search through the search space in order
to reduce the amount of time spent searching.
A good heuristic can make an informed search dramatically outperform any
uninformed search when the goal is to find a good solution rather than
the best solution.
Some prominent intelligent search algorithms are stated below
1. Generate and Test Search
2. Best-first Search
3. Greedy Search
4. A* Search
5. Constraint Search
6. Means-ends analysis
There are some more algorithms. They are either improvements or combinations of these.
• Hierarchical Representation of Search Algorithms: A Hierarchical representation of most
search algorithms is illustrated below. The representation begins with two types of search:
• Uninformed Search: Also called blind, exhaustive or brute-force search, it uses no
information about the problem to guide the search and therefore may not be very
efficient.
• Informed Search: Also called heuristic or intelligent search, this uses information about
the problem to guide the search—usually an estimate of the distance to a goal state—and is
therefore efficient, but such a search may not always be possible.
Different Search Algorithms
• Requirements of a search technique:
The first requirement is that it causes motion.
In a game-playing program, motion means making a move on the board; in the
water jug problem, it means filling or emptying the jugs.
The second requirement is that it is systematic.
This corresponds to the need for global motion as well as local motion. Clearly
it would be irrational to fill a jug and empty it repeatedly, and equally pointless
to move a piece round and round the board in a cyclic way in a game.
Breadth-first search
• A Search strategy, in which the highest layer of a decision tree is
searched completely before proceeding to the next layer is called
Breadth-first search (BFS).
• In this strategy, no viable solutions are omitted and therefore it is
guaranteed that an optimal solution is found.
• This strategy is often not feasible when the search space is large.
Algorithm
1. Create a variable called LIST and set it to the starting state.
2. Loop until a goal state is found or LIST is empty, Do:
a. Remove the first element from LIST and call it E. If LIST is
empty, quit.
b. For every way that each rule can match the state E, Do:
(i) Apply the rule to generate a new state.
(ii) If the new state is a goal state, quit and return this state.
(iii) Otherwise, add the new state to the end of LIST.
Advantages
1. Guaranteed to find an optimal solution (in terms of shortest number of steps to
reach the goal).
2. Can always find a goal node if one exists (complete).
Disadvantages
1. High storage requirement: exponential with tree depth.
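The LIST-based algorithm above can be sketched with a FIFO queue. This is an illustrative sketch; the toy graph and function names are assumptions for the example.

```python
from collections import deque

def breadth_first_search(start, goal_test, successors):
    """BFS as in the algorithm above: LIST is a FIFO queue, so the
    shallowest unexpanded state is always removed first."""
    visited = {start}
    queue = deque([start])
    while queue:
        state = queue.popleft()           # remove the first element, call it E
        for new_state in successors(state):
            if goal_test(new_state):
                return new_state          # goal found: quit and return it
            if new_state not in visited:
                visited.add(new_state)
                queue.append(new_state)   # add to the end of LIST
    return None

# Toy graph: find node 6 starting from node 1.
graph = {1: [2, 3], 2: [4], 3: [5], 4: [6], 5: [], 6: []}
print(breadth_first_search(1, lambda s: s == 6, lambda s: graph[s]))   # 6
```

The `visited` set prevents re-expanding states, but the queue itself is what grows exponentially with depth, as noted in the disadvantage above.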
Depth-first search
• A search strategy that extends the current path as far as possible
before backtracking to the last choice point and trying the next
alternative path is called Depth-first search (DFS).
• This strategy does not guarantee that the optimal solution has been
found.
• In this strategy, search reaches a satisfactory solution more rapidly
than breadth first, an advantage when the search space is large.
Algorithm
• Depth-first search applies operators to each newly generated state, trying to drive
directly toward the goal.
1. If the starting state is a goal state, quit and return success.
2. Otherwise, do the following until success or failure is signalled:
a. Generate a successor E to the starting state. If there are no more successors,
then signal failure.
b. Call Depth-first Search with E as the starting state.
c. If success is returned signal success; otherwise, continue in the loop.
Advantages
1. Low storage requirement: linear with tree depth.
2. Easily programmed: function call stack does most of the work of maintaining
state of the search.
Disadvantages
1. May find a sub-optimal solution (one that is deeper or more costly than the
best solution).
2. Incomplete: without a depth bound, may not find a solution even if one exists.
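The recursive algorithm above can be sketched directly, with the function call stack maintaining the state of the search as noted in the advantages. The toy graph and names are assumptions for the example.

```python
def depth_first_search(state, goal_test, successors):
    """Recursive DFS as in the algorithm above: drive toward the goal,
    backtracking when a branch is exhausted.  Returns a path or None."""
    if goal_test(state):
        return [state]                       # success: path ends here
    for succ in successors(state):           # generate successors in turn
        path = depth_first_search(succ, goal_test, successors)
        if path is not None:
            return [state] + path            # success propagates upward
    return None                              # failure: no successor worked

# Toy graph (a tree, so no cycle check is needed in this sketch).
graph = {1: [2, 3], 2: [4], 3: [5], 4: [], 5: [6], 6: []}
print(depth_first_search(1, lambda s: s == 6, lambda s: graph[s]))   # [1, 3, 5, 6]
```

On a graph with cycles or infinite depth this sketch would need a visited set or a depth bound, which is exactly the problem the bounded depth-first search below addresses.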
Bounded depth-first search
Depth-first search can spend much time (perhaps infinite time) exploring a very
deep path that does not contain a solution, when a shallow solution exists. An
easy way to solve this problem is to put a maximum depth bound on the
search. Beyond the depth bound, a failure is generated automatically without
exploring any deeper.
Problems:
1. It’s hard to guess how deep the solution lies.
2. If the estimated depth is too deep (even by 1), the computer time used
increases dramatically, by roughly a factor of b (the branching factor) for each
extra level.
3. If the estimated depth is too shallow, the search fails to find a solution; all
that computer time is wasted.
Heuristics
A heuristic is a method that improves the efficiency of the search process.
These are like tour guides: they may not find the best solution every time,
but they usually find a good solution in a reasonable time.
They are particularly useful for tough and complex problems whose exact
solutions would require impractically long computation, far longer than a
lifetime, and which cannot be solved in any other way.
Heuristic search
To find a solution in proper time rather than a complete solution in unlimited
time we use heuristics. ‘A heuristic function is a function that maps from
problem state descriptions to measures of desirability, usually represented as
numbers’.
These heuristic search methods use heuristic functions to evaluate the next
state towards the goal state.
• Finding a route from one city to another city is an example of a search
problem in which different search orders and the use of heuristic knowledge
are easily understood.

1. State: The current city in which the traveller is located.


2. Operators: Roads linking the current city to other cities.
3. Cost Metric: The cost of taking a given road between cities.
4. Heuristic information: The search could be guided by the direction of the
goal city from the current city, or we could use airline distance as an estimate
of the distance to the goal.
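The airline-distance heuristic mentioned in point 4 can be sketched in a few lines. The city coordinates here are hypothetical, chosen only to illustrate the computation.

```python
import math

# Hypothetical city coordinates; the heuristic value of a city is its
# straight-line ("airline") distance to the goal city.
cities = {'A': (0, 0), 'B': (3, 4), 'Goal': (6, 8)}

def airline_distance(city, goal='Goal'):
    (x1, y1), (x2, y2) = cities[city], cities[goal]
    return math.hypot(x2 - x1, y2 - y1)

print(airline_distance('A'))   # 10.0
print(airline_distance('B'))   # 5.0
```

Since roads can never be shorter than the straight line, this heuristic underestimates the true driving cost, which (as the next section notes) is exactly the admissibility property.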
Characteristics of heuristic search
• Heuristics are knowledge about domain, which help search and reasoning in
its domain.
• Heuristic search incorporates domain knowledge to improve efficiency over
blind search.
• A heuristic is a function that, when applied to a state, returns a value
estimating the merit of that state with respect to the goal.
Heuristics may (for various reasons) underestimate or overestimate the merit of a
state with respect to the goal.
Heuristics that underestimate the cost are desirable and are called admissible.
• A heuristic evaluation function estimates the likelihood of a given state leading
to the goal state.
• A heuristic search function estimates the cost from the current state to the goal,
presuming the function is efficient to compute.
Generate and Test Strategy
Generate-And-Test Algorithm
The generate-and-test search algorithm is a very simple algorithm that
guarantees to find a solution, if one exists, when carried out
systematically.
Algorithm: Generate-And-Test
1.Generate a possible solution.
2.Test to see if this is the expected solution.
3.If the solution has been found quit else go to step 1.
Potential solutions that need to be generated vary depending on the kinds of
problems. For some problems the possible solutions may be particular points
in the problem space and for some problems, paths from the start state.
Generate-and-test, like depth-first search, requires that complete solutions be generated for testing. In its most
systematic form it is simply an exhaustive search of the problem space. Solutions can also be generated randomly, but
then finding a solution is not guaranteed. This random approach is known as the British Museum algorithm: finding an
object in the British Museum by wandering randomly.
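The systematic form of the algorithm can be sketched as a loop over a candidate generator. The toy sorting problem is an assumption chosen only to illustrate the generate/test split.

```python
import itertools

def generate_and_test(candidates, is_solution):
    """Systematically generate candidates and test each one,
    stopping at the first that passes the test."""
    for candidate in candidates:      # step 1: generate a possible solution
        if is_solution(candidate):    # step 2: test it
            return candidate          # step 3: found -> quit
    return None                       # generator exhausted without success

# Toy problem: find an ordering of [3, 1, 2] that is sorted.
solution = generate_and_test(itertools.permutations([3, 1, 2]),
                             lambda p: list(p) == sorted(p))
print(solution)   # (1, 2, 3)
```

With a systematic generator (here, `itertools.permutations`) every candidate is eventually produced, which is what makes the completeness guarantee hold.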
Hill Climbing
• Hill Climbing is heuristic search used for mathematical optimization problems in the field
of Artificial Intelligence .
• Given a large set of inputs and a good heuristic function, it tries to find a sufficiently good
solution to the problem. This solution may not be the global optimum.
 In the above definition, 'mathematical optimization problems' means that hill climbing
solves problems where we need to maximize or minimize a given real function by
choosing values from the given inputs. Example: the travelling salesman problem, where
we need to minimize the distance travelled by the salesman.
 ‘Heuristic search’ means that this search algorithm may not find the optimal solution to
the problem. However, it will give a good solution in reasonable time.
A heuristic function is a function that will rank all the possible alternatives at any
branching step in search algorithm based on the available information. It helps the
algorithm to select the best route out of possible routes.
• Features of Hill Climbing
1. Variant of generate and test algorithm: It is a variant of the generate and test
algorithm, which is as follows:
1. Generate a possible solution.
2. Test to see if this is the expected solution.
3. If the solution has been found, quit; else go to step 1.
Hill climbing is called a variant of generate and test because it takes feedback
from the test procedure; this feedback is then used by the generator in deciding
the next move in the search space.
2. Uses the greedy approach: At any point in the state space, the search moves
only in the direction that optimizes the cost function, with the hope of
finding the optimal solution at the end.
1. Simple hill climbing: It examines the neighbouring nodes one by one and selects
the first neighbouring node that improves the current cost as the next node.
Algorithm for simple hill climbing:
Step 1: Evaluate the initial state. If it is a goal state, stop and return success.
Otherwise, make the initial state the current state.
Step 2: Loop until a solution state is found or there are no new operators
that can be applied to the current state:
a) Select an operator that has not yet been applied to the current state and apply
it to produce a new state.
b) Evaluate the new state:
i. If the new state is a goal state, stop and return success.
ii. If it is better than the current state, make it the current state and proceed
further.
iii. If it is not better than the current state, continue in the loop until a solution
is found.
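Simple hill climbing can be sketched as follows: take the first improving neighbour, and stop when none exists. The toy objective function is an assumption for the example.

```python
def simple_hill_climbing(state, value, neighbours):
    """Move to the first neighbour that improves on the current state;
    stop when no neighbour is better (a local maximum)."""
    while True:
        for nxt in neighbours(state):
            if value(nxt) > value(state):   # first improvement wins
                state = nxt
                break
        else:
            return state                    # no better neighbour: stop here

# Maximize f(x) = -(x - 3)**2 over the integers, stepping by +/- 1.
f = lambda x: -(x - 3) ** 2
result = simple_hill_climbing(0, f, lambda x: [x - 1, x + 1])
print(result)   # 3
```

On this single-peaked function the climb reaches the global maximum; on a function with several peaks it would stop at whichever local maximum it climbed first, which is exactly the limitation the state-space-diagram section below discusses.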
Steepest-ascent hill climbing: It first examines all the neighbouring nodes and then
selects the node closest to the solution state as the next node.
Step 1: Evaluate the initial state. If it is a goal state, exit; otherwise make it the
current state.
Step 2: Repeat these steps until a solution is found or the current state does not
change:
i. Let 'target' be a state such that any successor of the current state will be better
than it;
ii. for each operator that applies to the current state:
a. apply the operator and create a new state,
b. evaluate the new state,
c. if this state is a goal state, quit; otherwise compare it with 'target',
d. if this state is better than 'target', set this state as 'target';
iii. if 'target' is better than the current state, set the current state to 'target'.
• Stochastic hill climbing: It does not examine all the neighbouring nodes before
deciding which node to select. It simply selects a neighbouring node at random, and
decides (based on the amount of improvement in that neighbour) whether to
move to that neighbour or to examine another.
State Space diagram for Hill Climbing
The state space diagram is a graphical representation of the set of states our search
algorithm can reach plotted against the value of our objective function (the function
we wish to maximize).
X-axis: denotes the state space, i.e. the states or configurations our algorithm may reach.
Y-axis: denotes the value of the objective function corresponding to a particular
state. The best solution is the state where the objective function has its
maximum value (the global maximum).
• Different regions in the state space diagram
1. Local maximum: a state that is better than its neighbouring states, although
there exists a state that is better still (the global maximum). It is better because
the value of the objective function is higher there than at its neighbours.
2. Global maximum: the best possible state in the state space diagram, where the
objective function attains its highest value.
3. Plateau / flat local maximum: a flat region of the state space where neighbouring
states have the same value.
4. Ridge: a region that is higher than its neighbours but itself has a slope. It is a
special kind of local maximum.
5. Current state: the region of the state space diagram where we are currently
present during the search.
6. Shoulder: a plateau that has an uphill edge.
Best First Search (Informed Search)
• In BFS and DFS, when we are at a node, we can consider any of the
adjacent nodes as the next node.
• Both BFS and DFS thus blindly explore paths without considering any
cost function.
• The idea of Best First Search is to use an evaluation function to decide
which adjacent node is most promising, and explore that one first.
• Best First Search falls under the category of Heuristic Search or
Informed Search.
Algorithm:
Best-First-Search(Graph g, Node start)
1) Create an empty PriorityQueue pq
2) Insert start into pq: pq.insert(start)
3) Until pq is empty:
     u = pq.deleteMin()
     if u is the goal:
         exit
     else:
         for each neighbour v of u:
             if v is unvisited:
                 mark v visited
                 pq.insert(v)
     mark u examined
Example
1. We start from source "S" and search for goal "I" using given costs and Best First
search.
2. pq initially contains S
3. We remove s from and process unvisited neighbors of S to pq.
4. pq now contains {A, C, B} (C is put before B because C has lesser cost)
5. We remove A from pq and process unvisited neighbors of A to pq. pq now
contains {C, B, E, D} .
6. We remove C from pq and process unvisited neighbors of C to pq.
7. pq now contains {B, H, E, D}
8. We remove B from pq and process unvisited neighbors of B to pq.
9. pq now contains {H, E, D, F, G}
10. We remove H from pq.
11. Since our goal "I" is a neighbor of H, we return.
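The walk-through above can be sketched with a heap-based priority queue. The graph and the heuristic costs below are assumptions, chosen so that the expansion order (S, A, C, B, H, then the goal I) matches the example; the figure with the real costs is not reproduced in this text.

```python
import heapq

def best_first_search(graph, costs, start, goal):
    """Best-first search: a priority queue ordered by each node's
    heuristic cost; always expand the most promising node next."""
    pq = [(costs[start], start)]
    visited = {start}
    while pq:
        _, u = heapq.heappop(pq)          # deleteMin: lowest-cost node
        if u == goal:
            return True
        for v in graph[u]:
            if v not in visited:          # mark v visited when first seen
                visited.add(v)
                heapq.heappush(pq, (costs[v], v))
    return False

# Hypothetical graph and costs echoing the example's expansion order.
graph = {'S': ['A', 'B', 'C'], 'A': ['D', 'E'], 'B': ['F', 'G'],
         'C': ['H'], 'D': [], 'E': [], 'F': [], 'G': [], 'H': ['I'], 'I': []}
costs = {'S': 0, 'A': 3, 'B': 6, 'C': 5, 'D': 9, 'E': 8,
         'F': 12, 'G': 14, 'H': 7, 'I': 5}
print(best_first_search(graph, costs, 'S', 'I'))   # True
```

Note that, unlike A*, this greedy form orders the queue by the heuristic alone, so it can reach the goal quickly but makes no optimality guarantee.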
A* Search Algorithm

• A* is a type of search algorithm. Some problems can be solved by representing the
world in the initial state, and then for each action we can perform on the world we
generate states for what the world would be like if we did so. If you do this until
the world is in the state that we specified as a solution, then the route from the
start to this goal state is the solution to your problem.
• The A* search algorithm (pronounced "Ay-star") is a tree search algorithm that
finds a path from a given initial node to a given goal node (or one passing a given
goal test). It employs a "heuristic estimate" which ranks each node by an estimate
of the best route that goes through that node. It visits the nodes in order of this
heuristic estimate.
• From A* we note that f = g + h, where
• g is a measure of the distance/cost from the initial node to the current node, and
• h is an estimate of the distance/cost from the current node to the solution.
• Thus f is an estimate of the total cost of a path from the initial node to the
solution that passes through the current node.
Algorithm
1. Initialize: Set OPEN = {S}; CLOSED = { };
   g(S) = 0, f(S) = h(S).
2. Fail: If OPEN = { }, terminate and fail.
3. Select: Select the minimum-cost state, n, from OPEN; save n in CLOSED.
4. Terminate: If n ∈ G, terminate with success and return f(n).
5. Expand: For each successor m of n:
   a) If m ∉ [OPEN ∪ CLOSED]:
      set g(m) = g(n) + c(n, m),
      set f(m) = g(m) + h(m),
      insert m in OPEN.
   b) If m ∈ [OPEN ∪ CLOSED]:
      set g(m) = min{ g(m), g(n) + c(n, m) },
      set f(m) = g(m) + h(m);
      if f(m) has decreased and m ∈ CLOSED, move m back to OPEN.
6. Loop: Go to step 2.
Description:
• A* begins at a selected node. Applied to this node is the "cost" of entering this node (usually
zero for the initial node). A* then estimates the distance to the goal node from the current
node. This estimate and the cost added together are the heuristic which is assigned to the
path leading to this node. The node is then added to a priority queue, often called "open".
• The algorithm then removes the next node from the priority queue (because of the way a
priority queue works, the node removed will have the lowest heuristic). If the queue is empty,
there is no path from the initial node to the goal node and the algorithm stops. If the node is
the goal node, A* constructs and outputs the successful path and stops.
• If the node is not the goal node, new nodes are created for all admissible adjoining nodes; the
exact way of doing this depends on the problem at hand. For each successive node, A*
calculates the "cost" of entering the node and saves it with the node. This cost is calculated
from the cumulative sum of costs stored with its ancestors, plus the cost of the operation
which reached this new node.
• The algorithm also maintains a 'closed' list of nodes whose adjoining nodes have been
checked. If a newly generated node is already in this list with an equal or lower cost, no
further processing is done on that node or with the path associated with it. If a node in the
closed list matches the new one, but has been stored with a higher cost, it is removed from
the closed list, and processing continues on the new node.
• Next, an estimate of the new node's distance to the goal is added to the cost to
form the heuristic for that node. This is then added to the 'open' priority queue,
unless an identical node is found there.
• Once the above three steps have been repeated for each new adjoining node, the
original node taken from the priority queue is added to the 'closed' list. The next
node is then popped from the priority queue and the process is repeated
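The description above can be condensed into a compact sketch. This is an illustrative implementation under the usual assumptions (non-negative step costs, an admissible heuristic); the grid problem and names are chosen for the example, and for simplicity a better `g` simply causes a node to be re-queued rather than maintaining an explicit closed list.

```python
import heapq

def a_star(start, goal, successors, h):
    """A* search: f(n) = g(n) + h(n).  The open list is a priority queue
    on f; best_g records the cheapest known cost to each node."""
    open_heap = [(h(start), 0, start, [start])]
    best_g = {start: 0}
    while open_heap:
        f, g, node, path = heapq.heappop(open_heap)   # lowest f first
        if node == goal:
            return g, path                            # optimal with admissible h
        for succ, cost in successors(node):
            g2 = g + cost
            if g2 < best_g.get(succ, float('inf')):   # found a cheaper route
                best_g[succ] = g2
                heapq.heappush(open_heap, (g2 + h(succ), g2, succ, path + [succ]))
    return None                                       # no path exists

# Example: 4-directional moves on a 3x3 grid with unit step cost,
# using Manhattan distance as the (admissible) heuristic.
def successors(p):
    x, y = p
    return [((x + dx, y + dy), 1)
            for dx, dy in [(1, 0), (-1, 0), (0, 1), (0, -1)]
            if 0 <= x + dx < 3 and 0 <= y + dy < 3]

goal = (2, 2)
h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
cost, path = a_star((0, 0), goal, successors, h)
print(cost)   # 4
```

Because the Manhattan heuristic never overestimates the remaining cost, the first time the goal is popped its `g` value is the optimal path cost.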
AO* Search: (And-Or) Graph
• The depth-first and breadth-first searches given earlier for OR trees or
graphs can easily be adapted to AND-OR graphs. The main difference lies in the
way termination conditions are determined, since all goals following an AND
node must be realized, whereas a single goal node following an OR node will do.
For this purpose we use the AO* algorithm.
• Like the A* algorithm, it uses two lists and a heuristic function.
OPEN: contains the nodes that have been traversed but not yet marked
solvable or unsolvable.
CLOSE: contains the nodes that have already been processed.
Algorithm
• Step 1: Place the starting node into OPEN.
• Step 2: Compute the most promising solution tree say T0.
• Step 3: Select a node n that is both on OPEN and a member of T0. Remove it from
OPEN and place it in CLOSE
• Step 4: If n is a terminal goal node, label n as solved and propagate the label to
the ancestors of n that are thereby solved. If the starting node is marked as
solved, return success and exit.
• Step 5: If n is not a solvable node, mark n as unsolvable. If the starting node is
marked as unsolvable, return failure and exit.
• Step 6: Expand n. Find all its successors, compute their h(n) values, and push
them into OPEN.
• Step 7: Return to Step 2.
• Step 8: Exit.
MEANS - ENDS ANALYSIS

• Most search strategies reason either forward or backward; often, however,
a mixture of the two directions is appropriate. Such a mixed strategy
makes it possible to solve the major parts of a problem first and then go
back and solve the smaller problems that arise when combining the parts
together. Such a technique is called "Means - Ends Analysis".
• The means-ends analysis process centres around finding the difference
between the current state and the goal state. The problem space of means-ends
analysis has an initial state and one or more goal states, a set of operators
with preconditions for their application, and a difference function that
computes the difference between two states s(i) and s(j). A problem is
solved using means-ends analysis by:
1. Comparing the current state s1 to the goal state s2 and computing their
difference D12.
2. Selecting an operator OP relevant to reducing the difference D12, and
satisfying its preconditions.
3. Applying the operator OP if possible. If it is not, the current state is
saved, a subgoal of satisfying the preconditions is created, and means-ends
analysis is applied recursively to this subgoal.
4. When the subgoal is solved, the saved state is restored and work resumes
on the original problem.
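The core loop of steps 1 to 3, without the recursive precondition handling, can be sketched on a toy numeric problem. This simplified version, the integer states, and the operator set are all assumptions chosen to illustrate difference reduction.

```python
def means_ends(state, goal, operators):
    """Repeatedly apply the operator that most reduces the difference
    between the current state and the goal; stop when they match."""
    steps = []
    while state != goal:
        diff = abs(goal - state)                      # step 1: compute D12
        op = min(operators, key=lambda f: abs(goal - f(state)))   # step 2
        if abs(goal - op(state)) >= diff:
            return None        # stuck: no operator reduces the difference
        state = op(state)                             # step 3: apply OP
        steps.append(state)
    return steps

# Toy problem: reach 10 from 0 with the operators +3 and +1.
ops = [lambda x: x + 3, lambda x: x + 1]
print(means_ends(0, 10, ops))   # [3, 6, 9, 10]
```

A full means-ends system would also check each operator's preconditions and recurse on them as subgoals, as steps 3 and 4 above describe.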
