

Advanced Artificial Intelligence

Lecture 7-8: Informed Search

Instructor: Dr. Sakeena Javaid


Assistant Professor,
Computer Science Department,
KICSIT Kahuta
Agenda of the Lecture

 Difference between uninformed and informed search


 Special cases

2
Difference Between Informed and
Uninformed Search

BASIS FOR COMPARISON | INFORMED SEARCH                    | UNINFORMED SEARCH
---------------------+------------------------------------+----------------------------------
Basic                | Uses knowledge to find the steps   | No use of knowledge
                     | to the solution                    |
Efficiency           | Highly efficient, as it consumes   | Comparatively less efficient
                     | less time and cost                 |
Cost                 | Low                                | Comparatively high
Performance          | Finds a solution more quickly      | Slower than informed search
Algorithms           | Heuristic depth-first search,      | Depth-first search, breadth-first
                     | heuristic breadth-first search,    | search, and lowest-cost-first
                     | and A* search                      | search

3
Best-First Search
 Idea: use an evaluation function for each node
 Estimate of desirability

 Expand most desirable unexpanded node


 Implementation:
 Fringe is a priority queue sorted in decreasing order of desirability

 Special cases:
 Uniform Cost Search (uninformed)
 Greedy (best-first) Search (informed)
 A* Search (informed)
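The sorted-fringe idea can be sketched in Python (my sketch, not the lecture's code; the function and graph names are illustrative). The evaluation function f is passed in, so the three special cases above differ only in the f supplied:

```python
import heapq

def best_first_search(start, goal, neighbors, f):
    """Generic best-first search: repeatedly expand the most desirable
    unexpanded node, i.e., the one with the lowest f-value."""
    # Fringe entries: (f-value, cost so far, node, path to node).
    fringe = [(f(0, start), 0, start, [start])]
    expanded = set()
    while fringe:
        _, g, node, path = heapq.heappop(fringe)
        if node == goal:
            return path, g
        if node in expanded:
            continue
        expanded.add(node)
        for nxt, step in neighbors(node):
            g2 = g + step
            heapq.heappush(fringe, (f(g2, nxt), g2, nxt, path + [nxt]))
    return None, float("inf")

# Uniform-cost search is the special case f(g, n) = g.
graph = {'A': [('B', 1), ('C', 4)], 'B': [('C', 1)], 'C': []}
path, cost = best_first_search('A', 'C', lambda n: graph[n], lambda g, n: g)
# path == ['A', 'B', 'C'], cost == 2
```

Passing f(g, n) = h(n) instead gives greedy best-first search, and f(g, n) = g + h(n) gives A*.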

4
Evaluation Function
 Evaluation function f(n) = g(n) + h(n)
– g(n) = exact cost so far to reach n
– h(n) = estimated cost to goal from n
– f(n) = estimated total cost of cheapest path through n to goal

• Special cases:
– Uniform Cost Search: f(n) = g(n)
– Greedy (best-first) Search: f(n) = h(n)
– A* Search: f(n) = g(n) + h(n)

5
Romania - Step Costs in KM

6
Greedy Best-First Search
 Evaluation function h(n) (heuristic)
 Estimated cost of the cheapest path from n to a goal node

 E.g., hSLD(n) = straight-line distance from n to Bucharest


 Greedy search expands the node that appears to be closest to goal.

7
Greedy Best-First search example

8
Greedy Best-First search example

9
Properties of Greedy Best-First search
 Complete? No – can get stuck in loops, e.g., with Oradea as goal and
start from Iasi: Iasi 🡪 Neamt 🡪 Iasi 🡪 Neamt 🡪 ...
 Complete in finite space with repeated-state checking
 Time? O(b^m), but a good heuristic can give dramatic improvement
 Space? O(b^m) – keeps all nodes in memory
 Optimal? No.
10
A* Search
 Idea: Avoid expanding paths that are already expensive
 Evaluation function f(n) = g(n) + h(n)
 g(n) = exact cost so far to reach n
 h(n) = estimated cost to goal from n
 f(n) = estimated total cost of cheapest path through n to goal
 A* search uses an admissible heuristic:
 h(n) ≤ h*(n), where h*(n) is the true cost from n to the goal
 Also h(n) ≥ 0, and h(G)=0 for any goal G
 E.g., hSLD(n) is an admissible heuristic because it doesn’t
overestimate the actual road distance.
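A* itself can be sketched as best-first search keyed on f(n) = g(n) + h(n) (a minimal sketch, not the lecture's exact pseudocode; the road costs and straight-line distances below are the standard textbook Romania figures):

```python
import heapq

def a_star(start, goal, neighbors, h):
    """A* search: expand the fringe node with the lowest f(n) = g(n) + h(n).
    With an admissible h, the first goal popped off the fringe is optimal."""
    fringe = [(h(start), 0, start, [start])]   # (f, g, node, path)
    best_g = {start: 0}                        # cheapest known cost to each node
    while fringe:
        f, g, node, path = heapq.heappop(fringe)
        if node == goal:
            return path, g
        for nxt, step in neighbors(node):
            g2 = g + step
            if g2 < best_g.get(nxt, float("inf")):  # keep only improving paths
                best_g[nxt] = g2
                heapq.heappush(fringe, (g2 + h(nxt), g2, nxt, path + [nxt]))
    return None, float("inf")

# Fragment of the Romania road map (step costs in km) and hSLD to Bucharest.
roads = {
    'Arad': [('Sibiu', 140)],
    'Sibiu': [('Arad', 140), ('Fagaras', 99), ('Rimnicu Vilcea', 80)],
    'Fagaras': [('Sibiu', 99), ('Bucharest', 211)],
    'Rimnicu Vilcea': [('Sibiu', 80), ('Pitesti', 97)],
    'Pitesti': [('Rimnicu Vilcea', 97), ('Bucharest', 101)],
    'Bucharest': [],
}
hsld = {'Arad': 366, 'Sibiu': 253, 'Fagaras': 176,
        'Rimnicu Vilcea': 193, 'Pitesti': 100, 'Bucharest': 0}
path, cost = a_star('Arad', 'Bucharest', lambda n: roads[n], lambda n: hsld[n])
# path == ['Arad', 'Sibiu', 'Rimnicu Vilcea', 'Pitesti', 'Bucharest'], cost == 418
```

Note how the Fagaras route (cost 450) is generated first but never returned: the Pitesti route's lower f-value reaches the fringe before Bucharest is popped.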

11
A* Search
 If we are trying to find the cheapest solution, a reasonable thing to
try first is the node with the lowest value of g(n) + h(n)
 This strategy is more than just reasonable
 Provided that h(n) satisfies certain conditions, A* using TREE
search is both complete and optimal.

12
A* search example

13
A* search example

14
A* search example

15
A* search example

16
Optimality of A* (proof)
 Suppose some suboptimal goal G2 has been generated and is in the
fringe. Let n be an unexpanded node in the fringe such that n is on a
shortest path to an optimal goal G.
– f(G2) = g(G2) since h(G2) = 0
– g(G2) > g(G) since G2 is non-optimal
– f(G) = g(G) since h(G) = 0
– f(G2) > f(G) from above

17
Optimality of A* (proof)
 Suppose some suboptimal goal G2 has been generated and is in the
fringe. Let n be an unexpanded node in the fringe such that n is on a
shortest path to an optimal goal G.
– f(G2) > f(G) from above
– h(n) ≤ h*(n) since h is admissible
– f(n) = g(n) + h(n) ≤ g(n) + h*(n) = g(G), since n lies on an optimal path to G
– Hence f(n) ≤ f(G)
– Hence f(G2) > f(n), and A* will never select G2 for expansion

18
Problem of Repeated States

 Failure to detect repeated states can turn a linear problem into an
exponential one!
 We don’t want to expand a node that has already been expanded

19
Graph Search (instead of Tree Search)

 Maintain a closed list containing those nodes that have already been
expanded. Then, if a node is encountered that is already in the closed
list, it is simply ignored.
 This guarantees that no loops are generated, and essentially converts
the graph into a tree
20
Graph-Search problem

 It is all very well to prune the search space by ignoring repeated states
 But the problem is that graph search can end up discovering sub-optimal
solutions
– Basically, a loop means that there might be more than one path to a node
– Once we discover a path to a node, any other paths to the same node are
ignored by graph search
– However, the first path discovered is not necessarily the optimal one
– Hence, sub-optimal solutions can be returned.

21
Graph-Search problem

• Solution?
• We can adopt a uniform-cost approach, in which we keep track of all
the paths generated so far
• Then, we select only the path which has the least cost

22
Problem with A* Proof

 This proof can break down with graph search
 A* can return sub-optimal solutions if we don’t apply the uniform-cost
approach
 However, this is really messy and expensive
 A much better solution is to ensure that the heuristic you have
selected is consistent, i.e., that it obeys the triangle inequality

23
Consistent heuristics

 A heuristic is consistent if, for every node n and every successor
n' of n generated by any action a, h(n) ≤ c(n,a,n') + h(n')
 If h is consistent, we have
 f(n') = g(n') + h(n')
– = g(n) + c(n,a,n') + h(n')
– ≥ g(n) + h(n)
– = f(n)

 i.e., f(n) is non-decreasing along any path.


 Theorem: If h(n) is consistent, A* using GRAPH-SEARCH
is optimal
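Consistency is easy to check mechanically over a finite graph; a small sketch (my own helper, with hypothetical edge data):

```python
def is_consistent(h, edges):
    """Check the triangle inequality h(n) <= c(n, n') + h(n')
    for every directed edge (n, n', cost)."""
    return all(h[n] <= cost + h[n2] for n, n2, cost in edges)

# A chain A -> B -> C -> G with unit step costs; true costs to G are 3, 2, 1, 0.
edges = [('A', 'B', 1), ('B', 'C', 1), ('C', 'G', 1)]
consistent_h = {'A': 3, 'B': 2, 'C': 1, 'G': 0}
# Admissible (never overestimates) but NOT consistent:
# h drops by 2 across the A->B edge, whose cost is only 1.
inconsistent_h = {'A': 3, 'B': 1, 'C': 1, 'G': 0}
# is_consistent(consistent_h, edges)   -> True
# is_consistent(inconsistent_h, edges) -> False
```

The second heuristic illustrates that every consistent heuristic is admissible, but not vice versa.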
24
Optimality of A*

25
Properties of A*

 Complete? Yes (unless there are infinitely many nodes with f ≤ f(G))
 Time? Exponential
 Space? Keeps all nodes in memory
 Optimal? Yes

26
Local Search Algorithms
 In many optimization problems, the path to the goal is irrelevant; the
goal state itself is the solution
 State space = set of configurations
 Find a configuration satisfying your constraints, e.g., n-queens

 In such cases, we can use local search algorithms


 Keep a single "current" state, and then shift states, but don’t keep track of
paths.
 Use very limited memory
 Find reasonable solutions in large state spaces

27
Optimization Problems
 Local search algorithms are useful for solving optimization problems
 Find the best possible state according to a given objective function

 Optimize the number of products purchased by an E-Commerce user

 State: Action taken by the user plus the resulting page-view

 No track is kept of the path costs between the states

 All that is seen is whether the user is buying more products (or not).

28
Hill-Climbing

• "Like climbing Everest in thick fog with amnesia”


• A loop that continually moves in the direction of increasing value,
i.e., uphill
• Terminates when it reaches a peak where no neighbor has a higher
value
• Fog with Amnesia: Doesn’t look ahead beyond the immediate
neighbors of the current state.

29
Hill-Climbing

30
Hill-Climbing
 Hill-Climbing Algorithm
1. Pick a random point in the search space
2. Consider all the neighbors of the current state
3. Choose the neighbor with the best quality and move to that state
4. Repeat steps 2 and 3 until all the neighboring states are of lower quality
5. Return the current state as the solution state.
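The five steps above can be sketched directly (a minimal sketch with illustrative names; `neighbors` and `value` stand in for the problem-specific pieces):

```python
def hill_climb(initial, neighbors, value):
    """Steepest-ascent hill climbing: repeatedly move to the
    highest-valued neighbor; stop at a peak where no neighbor is better."""
    current = initial
    while True:
        best = max(neighbors(current), key=value)   # step 2-3: best neighbor
        if value(best) <= value(current):
            return current                          # step 5: at a (local) peak
        current = best                              # step 4: move uphill

# Maximize a one-dimensional objective with a single peak at x = 3,
# starting from x = 0 with neighbors x-1 and x+1.
peak = hill_climb(0, lambda x: [x - 1, x + 1], lambda x: -(x - 3) ** 2)
# peak == 3
```

With a multi-peaked objective the same loop stops at whichever local maximum is uphill from the start, which is exactly the failure mode discussed on the next slide.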

31
Hill-Climbing Problems

• Greedy local search: grabs a good neighbor state without thinking
about where to go next
– However, greedy algorithms generally make good progress towards the solution

• Unfortunately, hill-climbing
– Can get stuck in local maxima
– Can be stuck by ridges (a series of local maxima that occur close
together)
– Can be stuck by plateaux (a flat area in the state-space landscape)
• Shoulder: the flat area rises uphill later on
• Flat local maximum: no uphill rise exists

32
Improvements

• Stochastic hill climbing: chooses at random from amongst the uphill
moves, based on a probability distribution

• First-choice hill climbing: implements stochastic hill climbing by
generating successors randomly until one is generated that is better
than the current state

• Random-restart hill climbing: restarts hill climbing from randomly
chosen initial states until a solution is found.

33
Simulated annealing search

 Idea: escape local maxima by allowing some "bad" moves, but
gradually decrease their frequency

34
Simulated annealing search

 A "bad" move with ΔE = Value[Next] − Value[Current] < 0 is accepted
with probability e^(ΔE/T)
 If Value[Next] is close to Value[Current], the move is more likely to
be accepted
 If the temperature T is high, the exponent is close to zero, and the
probability is close to 1
 As the temperature approaches zero, the exponent approaches −∞, and the
probability approaches zero
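One common way to write the acceptance test (a sketch; the function name is mine, and T = 0 must be avoided since the rule divides by T):

```python
import math
import random

def accept_move(delta_e, T):
    """Simulated-annealing acceptance rule: always take improving moves;
    take a worsening move (delta_e < 0) with probability exp(delta_e / T)."""
    if delta_e >= 0:
        return True
    return random.random() < math.exp(delta_e / T)

# The exponent delta_e / T drives the behaviour described above:
# high temperature -> exponent near 0    -> acceptance probability near 1
# low temperature  -> exponent very neg. -> acceptance probability near 0
p_hot = math.exp(-1 / 100.0)    # bad move at T=100: accepted almost always
p_cold = math.exp(-1 / 0.01)    # same bad move at T=0.01: almost never
```

A full annealing loop would call accept_move inside the hill-climbing skeleton while lowering T on a schedule.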

35
Properties of simulated annealing search

 One can prove:
 If T decreases slowly enough, then simulated annealing search will find a
global optimum with probability approaching 1
 Widely used in VLSI layout, airline scheduling, etc.

36
Questions

37
