AAI - Intro Lec 7 8
2
Difference Between Informed and
Uninformed Search
3
Best-First Search
Idea: use an evaluation function for each node
– an estimate of desirability; expand the most desirable unexpanded node first
Special cases:
Uniform Cost Search (uninformed)
Greedy (best-first) Search (informed)
A* Search (informed)
4
Evaluation Function
Evaluation function f(n) = g(n) + h(n)
– g(n) = exact cost so far to reach n
– h(n) = estimated cost to goal from n
– f(n) = estimated total cost of cheapest path through n to goal
• Special cases:
– Uniform Cost Search: f(n) = g(n)
– Greedy (best-first) Search: f(n) = h(n)
– A* Search: f(n) = g(n) + h(n)
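The three special cases differ only in the evaluation function, so one implementation can cover all of them. A minimal sketch (the toy graph, step costs, and heuristic values below are hypothetical, not from the slides):

```python
import heapq

def best_first_search(start, goal, neighbors, f):
    """Generic best-first search: always expand the frontier node with
    the lowest f-value. Different choices of f give UCS, greedy, and A*."""
    frontier = [(f(start, 0), 0, start, [start])]
    explored = set()
    while frontier:
        _, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g
        if node in explored:
            continue
        explored.add(node)
        for nxt, step in neighbors(node):
            if nxt not in explored:
                g2 = g + step
                heapq.heappush(frontier, (f(nxt, g2), g2, nxt, path + [nxt]))
    return None, float("inf")

# Toy graph and heuristic values (hypothetical numbers).
graph = {"A": [("B", 1), ("C", 4)], "B": [("C", 2), ("D", 5)],
         "C": [("D", 1)], "D": []}
h = {"A": 4, "B": 3, "C": 1, "D": 0}
neighbors = lambda n: graph.get(n, [])

ucs    = lambda n, g: g          # Uniform Cost Search: f(n) = g(n)
greedy = lambda n, g: h[n]       # Greedy best-first:   f(n) = h(n)
astar  = lambda n, g: g + h[n]   # A*:                  f(n) = g(n) + h(n)

path, cost = best_first_search("A", "D", neighbors, astar)
# path = ['A', 'B', 'C', 'D'], cost = 4
```

Swapping `astar` for `ucs` or `greedy` changes only which node gets expanded next; here greedy follows the heuristic straight through C and returns the costlier path A, C, D.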
5
Romania - Step Costs in KM
6
Greedy Best-First Search
Evaluation function h(n) (heuristic)
Estimated cost of the cheapest path from n to a goal node
7
Greedy Best-First search example
8
Greedy Best-First search example
9
Properties of Greedy Best-First search
Complete? No – can get stuck in loops, e.g., with Oradea as goal and
starting from Iasi:
Iasi → Neamt → Iasi → Neamt → …
Complete in finite space with repeated-state checking
Time? O(b^m) (b = branching factor, m = maximum depth of the search
space), but a good heuristic can give dramatic improvement
Space? O(b^m) – keeps all nodes in memory
Optimal? No.
10
A* Search
Idea: Avoid expanding paths that are already expensive
Evaluation function f(n) = g(n) + h(n)
g(n) = exact cost so far to reach n
h(n) = estimated cost to goal from n
f(n) = estimated total cost of cheapest path through n to goal
A* search uses an admissible heuristic:
h(n) ≤ h*(n), where h*(n) is the true cost from n to the nearest goal
Also h(n) ≥ 0, and h(G)=0 for any goal G
E.g., hSLD(n) is an admissible heuristic because it doesn’t
overestimate the actual road distance.
11
A* Search
If we are trying to find the cheapest solution, a reasonable thing to
try first is the node with the lowest value of g(n) + h(n)
This strategy is more than just reasonable
Provided that h(n) is admissible, A* using TREE search is both
complete and optimal.
12
A* search example
13
A* search example
14
A* search example
15
A* search example
16
Optimality of A* (proof)
Suppose some suboptimal goal G2 has been generated and is in the
fringe. Let n be an unexpanded node in the fringe such that n is on a
shortest path to an optimal goal G.
– f(G2) = g(G2) since h(G2) = 0
– g(G2) > g(G) since G2 is non-optimal
– f(G) = g(G) since h(G) = 0
– f(G2) > f(G) from above
17
Optimality of A* (proof)
Suppose some suboptimal goal G2 has been generated and is in the
fringe. Let n be an unexpanded node in the fringe such that n is on a
shortest path to an optimal goal G.
– f(G2) > f(G) from above
– h(n) ≤ h*(n) since h is admissible
– g(n) + h(n) ≤ g(n) + h*(n), so f(n) ≤ g(n) + h*(n)
– f(n) ≤ f(G), since g(n) + h*(n) = g(G) = f(G) when n is on an optimal path to G
– Hence f(G2) > f(n), and A* will never select G2 for expansion
18
Problem of Repeated States
19
Graph Search (instead of Tree Search)
It is all very well to prune the search space by ignoring repeated states,
but graph search can end up discovering suboptimal solutions:
– Basically, a loop means that there may be more than one path to a node
– Once graph search discovers a path to a node, any other path to the same
node is ignored
– However, the first path discovered is not necessarily the optimal one
– Hence, suboptimal solutions can be returned.
21
Graph-Search problem
• Solution?
• We can adopt a uniform-cost approach, in which we keep track of all the
paths generated so far to a node
• Then, we retain only the path with the least cost
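The uniform-cost fix can be sketched as follows: record the cheapest known cost to each node, and replace a recorded path whenever a cheaper one turns up instead of ignoring it. The graph below is a made-up example where the first path generated to B is not the cheapest:

```python
import heapq

def uniform_cost_graph_search(start, goal, neighbors):
    """Graph search that tracks the cheapest known path to each node,
    so a later, cheaper path to the same node is not ignored."""
    best_g = {start: 0}
    frontier = [(0, start, [start])]
    while frontier:
        g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g                    # first pop of the goal is optimal
        if g > best_g.get(node, float("inf")):
            continue                          # stale entry: cheaper path already known
        for nxt, step in neighbors(node):
            g2 = g + step
            if g2 < best_g.get(nxt, float("inf")):
                best_g[nxt] = g2              # cheaper path found: replace, don't ignore
                heapq.heappush(frontier, (g2, nxt, path + [nxt]))
    return None, float("inf")

# Hypothetical graph: S->B (cost 5) is generated before S->A->B (cost 2).
graph = {"S": [("A", 1), ("B", 5)], "A": [("B", 1)], "B": [("G", 1)]}
path, cost = uniform_cost_graph_search("S", "G", lambda n: graph.get(n, []))
# path = ['S', 'A', 'B', 'G'], cost = 3
```

A naive graph search that discards every second path to B would commit to the cost-5 route and return a suboptimal answer; the `best_g` check above is what prevents that.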
22
Problem with A* Proof
23
Consistent heuristics
25
Properties of A*
26
Local Search Algorithms
In many optimization problems, the path to the goal is irrelevant; the
goal state itself is the solution
State space = set of configurations
Find a configuration satisfying your constraints, e.g., n-queens
27
Optimization Problems
Local search algorithms are useful for solving optimization problems
Find the best possible state according to a given objective function
E.g., the objective may only be observable indirectly: all that is seen is
whether the user is buying more products (or not).
28
Hill-Climbing
29
Hill-Climbing
30
Hill-Climbing
Hill-Climbing Algorithm
1. Pick a random point in the search space
2. Consider all the neighbors of the current state
3. Choose the neighbor with the best quality and move to that state
4. Repeat steps 2–3 until all the neighboring states are of lower quality
5. Return the current state as the solution state.
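The five steps above can be sketched for the n-queens configuration problem mentioned earlier (a minimal illustration; the state encoding and the neighbor move are my own choices, not prescribed by the slides):

```python
import random

def conflicts(state):
    """Quality measure: number of attacking queen pairs (lower is better).
    state[c] = row of the queen in column c."""
    n = len(state)
    return sum(1 for i in range(n) for j in range(i + 1, n)
               if state[i] == state[j] or abs(state[i] - state[j]) == j - i)

def hill_climb(n, seed=0):
    rng = random.Random(seed)
    # 1. Pick a random point in the search space.
    state = [rng.randrange(n) for _ in range(n)]
    while True:
        # 2. Consider all neighbors (move one queen within its column).
        neighbors = [state[:c] + [r] + state[c + 1:]
                     for c in range(n) for r in range(n) if r != state[c]]
        # 3. Choose the neighbor with the best quality.
        best = min(neighbors, key=conflicts)
        # 4. Stop once no neighbor is strictly better.
        if conflicts(best) >= conflicts(state):
            return state              # 5. Current state is the solution state.
        state = best

solution = hill_climb(8)
```

Because the search stops at the first state with no strictly better neighbor, the result is guaranteed to be a local optimum but not necessarily a zero-conflict solution, which is exactly the weakness the next slide discusses.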
31
Hill-Climbing Problems
• Unfortunately, hill-climbing
– Can get stuck in local maxima
– Can be stuck by ridges (a series of local maxima that occur close
together)
– Can be stuck by plateaux (a flat area in the state space landscape)
• Shoulder: the flat area rises uphill later on
• Flat local maximum: no uphill rise exists
32
Improvements
33
Simulated annealing search
34
Simulated annealing search
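Simulated annealing is hill climbing that occasionally accepts a worse neighbor, with probability exp(-ΔE/T) that shrinks as the temperature T is cooled; this lets it escape local optima. A minimal sketch (the toy energy function, neighbor move, and cooling schedule are illustrative assumptions):

```python
import math
import random

def simulated_annealing(energy, neighbor, x0, t0=1.0, cooling=0.995,
                        steps=5000, seed=0):
    """Accept any improving move; accept a worsening move of size dE
    with probability exp(-dE / T), where T decays each step."""
    rng = random.Random(seed)
    x, e = x0, energy(x0)
    best_x, best_e = x, e
    t = t0
    for _ in range(steps):
        x2 = neighbor(x, rng)
        e2 = energy(x2)
        if e2 < e or rng.random() < math.exp((e - e2) / t):
            x, e = x2, e2             # move accepted
            if e < best_e:
                best_x, best_e = x, e
        t *= cooling                  # cool: bad moves become rarer over time
    return best_x, best_e

# Illustrative 1-D energy landscape with several local minima.
energy = lambda x: x * x + 10 * math.sin(3 * x)
step = lambda x, rng: x + rng.uniform(-0.5, 0.5)
x, e = simulated_annealing(energy, step, x0=4.0)
```

At high T almost any move is accepted (random walk); as T approaches zero the acceptance rule degenerates to plain hill climbing, so early exploration gives way to late exploitation.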
35
Properties of simulated annealing search
36
Questions
37