Problem-Solving: Solving Problems by Searching


PROBLEM-SOLVING

CHAPTER 3
SOLVING PROBLEMS BY SEARCHING

Prepared by

Ali Hussain Aya Mazin Nadia Mahmuod

Supervised by
Prof. Dr. Abbas-ALbakri
Solving Problems By Searching 1
Introduction
• Simple reflex agents directly map states to actions.

• Therefore, they cannot operate well in environments where this mapping is too large to store or would take too long to learn.

• Goal-based agents can succeed by considering future actions and the desirability of their outcomes.

• A problem-solving agent is a goal-based agent that decides what to do by finding sequences of actions that lead to desirable states.

Solving Problems By Searching 2


Outline
Problem-solving Agents
Example Problems
Searching for Solutions
Uninformed Search Strategies

Solving Problems By Searching 3


Problem solving agents

 • Intelligent agents are supposed to maximize their performance measure.
 • This can be simplified if the agent can adopt a goal and aim at satisfying it.
 • Goals help organize behavior by limiting the objectives that the agent is trying to achieve.
 • Goal formulation, based on the current situation and the agent’s performance measure, is the first step in problem solving.
 • A goal is a set of states; the agent’s task is to find out which sequence of actions will get it to a goal state.
 • Problem formulation is the process of deciding what sorts of actions and states to consider, given a goal.

Solving Problems By Searching 4


Problem solving agents

 • An agent with several immediate options of unknown value can decide what to do by first examining different possible sequences of actions that lead to states of known value, and then choosing the best sequence.
 • Looking for such a sequence is called search.
 • A search algorithm takes a problem as input and returns a solution in the form of an action sequence.
 • Once a solution is found, the actions it recommends can be carried out; this is the execution phase.

Solving Problems By Searching 5


Problem solving agents
 • “Formulate, search, execute” design for the agent (a sketch of this loop follows below).
 • After formulating a goal and a problem to solve, the agent calls a search procedure to solve it.
 • It then uses the solution to guide its actions, doing whatever the solution recommends as the next thing to do (typically the first action in the sequence), and then removing that step from the sequence.
 • Once the solution has been executed, the agent formulates a new goal.

Solving Problems By Searching 6
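The loop on this slide can be sketched in a few lines of Python. This is a minimal illustration, assuming helper functions update_state, formulate_goal, formulate_problem and search are supplied elsewhere; none of these names come from the slides.

```python
# A minimal sketch of the "formulate, search, execute" loop. The helpers
# passed in (update_state, formulate_goal, formulate_problem, search) are
# placeholders assumed to be supplied elsewhere; they are not defined here.
def problem_solving_agent_step(percept, state, seq, update_state,
                               formulate_goal, formulate_problem, search):
    """Return the next action to execute and the remaining plan."""
    state = update_state(state, percept)
    if not seq:                                  # no plan left: formulate and search
        goal = formulate_goal(state)
        problem = formulate_problem(state, goal)
        seq = search(problem) or []              # solution = sequence of actions
    if not seq:
        return None, seq                         # search failed: no action possible
    action, seq = seq[0], seq[1:]                # do the first recommended action,
    return action, seq                           # then remove it from the sequence
```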


Well-defined problems and solutions
A problem is defined formally by four components:
1. Initial state: the state that the agent starts in. For example, the initial state for our agent in Romania might be described as In(Arad).
2. Actions or successor function: from the state In(Arad), the successor function for the Romania problem would return
{ (Go(Sibiu), In(Sibiu)), (Go(Timisoara), In(Timisoara)), (Go(Zerind), In(Zerind)) }
3. Goal test: determines whether a given state is a goal state. For example, in chess the goal is to reach a state called "checkmate", where the opponent's king is under attack and cannot escape.
4. Path cost: a function that assigns a numeric cost to each path. The problem-solving agent chooses a cost function that reflects its own performance measure.
• A solution is a sequence of actions leading from the initial state to a goal state. Solution quality is measured by the path cost function, and an optimal solution has the lowest path cost among all solutions. (A Python sketch of these components follows below.)

Solving Problems By Searching 7
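The four components can be written down almost directly as data plus methods. The sketch below uses an illustrative RouteProblem class and a partial ROMANIA road map (distances as in the usual Romania example); both names are assumptions for this illustration, not part of the slide.

```python
# Minimal sketch of a formally defined problem (illustrative names and data).
ROMANIA = {  # partial road map: city -> {neighbor: distance in km}
    "Arad": {"Zerind": 75, "Sibiu": 140, "Timisoara": 118},
    "Sibiu": {"Arad": 140, "Fagaras": 99, "Rimnicu Vilcea": 80},
    "Fagaras": {"Sibiu": 99, "Bucharest": 211},
    "Rimnicu Vilcea": {"Sibiu": 80, "Pitesti": 97},
    "Pitesti": {"Rimnicu Vilcea": 97, "Bucharest": 101},
    "Bucharest": {},
}

class RouteProblem:
    def __init__(self, initial, goal, graph):
        self.initial, self.goal, self.graph = initial, goal, graph

    def actions(self, state):            # successor function: Go(city)
        return list(self.graph[state])

    def result(self, state, action):     # state reached by Go(action)
        return action

    def goal_test(self, state):          # are we at the goal city?
        return state == self.goal

    def step_cost(self, state, action):  # road distance = path-cost increment
        return self.graph[state][action]

problem = RouteProblem("Arad", "Bucharest", ROMANIA)
print(problem.actions("Arad"))           # ['Zerind', 'Sibiu', 'Timisoara']
```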


Example Problems :
 Toy problems
 Vacuum world
 8-puzzle
 8-queens

 Real-world problems
 Route-finding
 Touring
 VLSI
 Robot navigation
 Internet searching
Solving Problems By Searching 8
Example: The agent is driving to Bucharest from Arad.

Solving Problems By Searching 9


Example: The agent is driving to Bucharest from Arad.

• On holiday in Romania; currently in Arad


• Non-refundable ticket to fly out of Bucharest tomorrow
 Formulate goal (perf. evaluation):
• be in Bucharest before the flight
 Formulate problem:
• states: various cities
• actions: drive between cities
 Find solution:
• sequence of cities, e.g., Arad, Sibiu, Fagaras, Bucharest

Solving Problems By Searching 10


Single-state problem formulation
A problem is defined by four items:
1. Initial state, e.g., "at Arad"
2. Actions or successor function
• S(x) = set of action-state pairs
• e.g., S(Arad) = { <Arad → Zerind, Zerind>, … }
3. Goal test, which can be
• explicit, e.g., x = "at Bucharest"
• implicit, e.g., Checkmate(x)
4. Path cost function (additive)
• e.g., sum of distances, number of actions executed, etc.
• c(x, a, y) is the step cost, assumed to be ≥ 0
• A solution is a sequence of actions leading from the initial state to a goal state

Solving Problems By Searching 11


Example :Vacuum world state space graph

 states: the robot location and the dirt locations
   – The agent is in one of two locations, each of which might or might not contain dirt: 2 × 2² = 8 possible states
 Initial state: any state
 actions: Left, Right, Suck
 goal test: no dirt at any location
 path cost: 1 per action
Solving Problems By Searching 12
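A minimal sketch of this vacuum-world formulation, assuming a state is encoded as (agent location, dirt in A, dirt in B); two locations with two dirt flags give the 8 states mentioned above.

```python
# Sketch of the two-location vacuum world (8 states); states are assumed to be
# encoded as (agent_location, dirt_in_A, dirt_in_B) with location in {"A","B"}.
ACTIONS = ["Left", "Right", "Suck"]

def result(state, action):
    loc, dirt_a, dirt_b = state
    if action == "Left":
        return ("A", dirt_a, dirt_b)
    if action == "Right":
        return ("B", dirt_a, dirt_b)
    if action == "Suck":                      # cleans only the current square
        return (loc, False if loc == "A" else dirt_a,
                     False if loc == "B" else dirt_b)

def goal_test(state):                         # no dirt at any location
    _, dirt_a, dirt_b = state
    return not dirt_a and not dirt_b

s = ("A", True, True)
for a in ["Suck", "Right", "Suck"]:           # cost: 1 per action, 3 in total
    s = result(s, a)
print(goal_test(s))                           # True
```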
Example: The 8-puzzle

 states: locations of tiles


 Initial state: any state
 actions: move blank left, right, up,
down
 goal test: goal state (given)
 path cost: 1 per move

Solving Problems By Searching 13
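A sketch of the 8-puzzle formulation, assuming a state is a 9-tuple read row by row with 0 standing for the blank; the goal layout chosen here is just one common convention.

```python
# Sketch of the 8-puzzle formulation (assumed state encoding: tuple of 9 tiles,
# 0 = blank, read row by row); actions move the blank Left/Right/Up/Down.
GOAL = (0, 1, 2, 3, 4, 5, 6, 7, 8)            # assumed goal layout

MOVES = {"Left": -1, "Right": +1, "Up": -3, "Down": +3}

def actions(state):
    i = state.index(0)                        # position of the blank
    acts = []
    if i % 3 > 0:  acts.append("Left")
    if i % 3 < 2:  acts.append("Right")
    if i // 3 > 0: acts.append("Up")
    if i // 3 < 2: acts.append("Down")
    return acts

def result(state, action):
    i = state.index(0)
    j = i + MOVES[action]                     # square the blank swaps with
    s = list(state)
    s[i], s[j] = s[j], s[i]
    return tuple(s)

def goal_test(state):
    return state == GOAL                      # path cost: 1 per move

print(actions(GOAL))                          # ['Right', 'Down']
```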


Example: The 8-puzzle

Solving Problems By Searching 14


Example: 8-queens problem
 States: any arrangement of 0 to 8 queens on the board is a state
 Initial state: no queens on the board
 Actions: add a queen to any empty
square
 goal test: 8 queens are on the board,
none attacked
 Path cost : 1 per move

Solving Problems By Searching 15


Example: robotic assembly
 states: real-valued coordinates of the robot joint angles and of the parts of the object to be assembled
 actions: continuous motions
of robot joints
 goal test: complete assembly

 path cost: time to execute

Solving Problems By Searching 16


Real-world problems
Route-finding problem: defined in terms of specified locations and transitions along links between them. It is used in:

GPS, Google Maps, airline route maps, VLSI routing

Consider the airline travel-planning problem:
 States: each state is represented by a location (e.g., an airport) and the current time.
 Initial state: specified by the problem.
 Successor function: returns the states resulting from taking any scheduled flight.
 Goal test: are we at the destination by some prespecified time?
 Path cost: depends on monetary cost, waiting time, flight time, seat quality, time of day, type of airplane, and so on.

Solving Problems By Searching 17
Searching for Solutions
• Basic idea of tree search algorithms:
Simulated exploration of the state space by generating successors of already-explored states (expanding states).
 The root node corresponds to the initial state.
 Expanding the current state means applying each legal action to the current state and generating the resulting successor states.
 Variants of the general tree-search algorithm differ primarily in how they choose which state to expand next: the so-called search strategy. (A sketch of the loop follows below.)
Solving Problems By Searching 18
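A minimal sketch of the general tree-search loop described above, assuming a problem object that exposes initial, actions, result and goal_test (as in the RouteProblem sketch earlier). The order in which nodes leave the frontier is exactly the search strategy; the FIFO order used here gives breadth-first behaviour.

```python
from collections import deque

def tree_search(problem):
    """General tree search: the frontier order defines the strategy (FIFO here,
    i.e. breadth-first); assumes the problem exposes initial, actions(s),
    result(s, a) and goal_test(s)."""
    frontier = deque([(problem.initial, [])])       # node = (state, actions so far)
    while frontier:
        state, path = frontier.popleft()            # choose which node to expand next
        if problem.goal_test(state):
            return path                             # solution: a sequence of actions
        for action in problem.actions(state):       # expand: apply each legal action
            frontier.append((problem.result(state, action), path + [action]))
    return None                                     # failure: frontier exhausted
```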
Search Tree Example: The agent is driving to Bucharest from Arad.

Solving Problems By Searching 19
Search Tree Example: The agent is driving to Bucharest from Arad.

Solving Problems By Searching 20


Search Tree Example: The agent is driving to Bucharest from Arad.

Solving Problems By Searching 21


Search Tree Data Structures
Nodes: the data structures from which the search tree is constructed.

Queue: the data structure used to store the frontier of unexpanded nodes:

 • Make-Queue(element, …) creates a queue with the given elements
 • Empty?(queue) returns true if the queue is empty
 • Pop(queue) returns the first element and removes it from the queue
 • Insert(element, queue) inserts the element into the queue and returns the queue
Solving Problems By Searching 22
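The four operations above can be sketched as thin wrappers around Python's collections.deque; the function names simply mirror the slide's pseudocode and are not a real library API.

```python
# The four frontier operations, sketched over collections.deque (FIFO order).
from collections import deque

def make_queue(*elements):            # Make-Queue(element, ...)
    return deque(elements)

def is_empty(queue):                  # Empty?(queue)
    return len(queue) == 0

def pop(queue):                       # Pop(queue): first element, removed
    return queue.popleft()

def insert(element, queue):           # Insert(element, queue): returns the queue
    queue.append(element)
    return queue

q = make_queue("Arad")
insert("Sibiu", q)
print(pop(q), is_empty(q))            # Arad False
```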
Search Strategies
Strategies are evaluated along the following dimensions :
 Completeness: does it always find a solution if one exists?
 Time complexity: number of nodes generated
 Space complexity: maximum number of nodes in memory
 Optimality: does it always find a least-cost solution?

Time and space complexity are measured in terms of

 b: maximum branching factor of the search tree


 d: depth of the least-cost solution
 m: maximum depth of the state space (may be ∞)
Solving Problems By Searching 23
Uninformed Search Strategies
 • Uninformed (blind) search strategies use only the information available in the problem definition.

• Breadth-first search (BFS)


• Uniform-cost search
• Depth-first search (DFS)
• Depth-limited search
• Iterative deepening search

Solving Problems By Searching 24


Breadth-First Search (BFS)
 • The root node is expanded first
 • Then all successors of the root node are expanded
 • Then all their successors
… and so on
 • In general, all the nodes of a given depth are expanded before
any node of the next depth is expanded.
 • Uses a standard FIFO queue as its frontier data structure (a sketch follows below).

Solving Problems By Searching 25
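A minimal BFS sketch on an unweighted graph given as adjacency lists; the small Romania fragment is illustrative. The frontier stores whole paths for simplicity, and the goal test is applied when a node is dequeued.

```python
# Minimal breadth-first search on an unweighted graph (adjacency lists assumed);
# returns the shallowest path to the goal.
from collections import deque

def breadth_first_search(graph, start, goal):
    frontier = deque([[start]])                 # FIFO queue of paths
    explored = {start}
    while frontier:
        path = frontier.popleft()
        node = path[-1]
        if node == goal:                        # goal test when dequeued
            return path
        for neighbor in graph.get(node, []):    # expand shallowest node first
            if neighbor not in explored:
                explored.add(neighbor)
                frontier.append(path + [neighbor])
    return None

graph = {"Arad": ["Zerind", "Sibiu", "Timisoara"],
         "Sibiu": ["Fagaras", "Rimnicu Vilcea"],
         "Fagaras": ["Bucharest"], "Rimnicu Vilcea": ["Pitesti"],
         "Pitesti": ["Bucharest"]}
print(breadth_first_search(graph, "Arad", "Bucharest"))
# ['Arad', 'Sibiu', 'Fagaras', 'Bucharest']
```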


Breadth-first search (BFS)

Solving Problems By Searching 26


Breadth-First Search Properties
 BFS is complete (it always finds a goal if one exists).
 BFS finds the shallowest goal node. If multiple goal nodes exist, BFS finds the one reached by the shortest path (fewest actions).
 If the tree/graph edges have weights, BFS does not necessarily find the least-cost path.
 If the shallowest solution is at depth d and the goal test is done when each node is generated, then BFS generates b + b^2 + b^3 + … + b^d = O(b^d) nodes, i.e., it has a time complexity of O(b^d).
 If the goal test is done when each node is expanded, the time complexity of BFS is O(b^(d+1)).
 The space complexity (frontier size) is also O(b^d). This is the biggest drawback of BFS.

Solving Problems By Searching 27


Uniform-Cost Search (UCS)
 A modification of BFS gives uniform-cost search, which works with any non-negative step-cost function (edge weights/costs).
 UCS expands the node n with the lowest summed path cost g(n).
 To do this, the frontier is stored as a priority queue (a sorted list, or better, a heap data structure). A sketch follows below.
 The goal test is applied to a node when it is selected for expansion (not when it is generated).
 A test is also added in case a better path is found to a node already on the frontier.

Solving Problems By Searching 28
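A sketch of uniform-cost search using Python's heapq module as the priority queue, ordered by the summed path cost g(n); the weighted Romania fragment is illustrative data.

```python
# Uniform-cost search with a heap-based priority queue ordered by g(n).
import heapq

def uniform_cost_search(graph, start, goal):
    frontier = [(0, start, [start])]            # entries: (g, state, path)
    best_g = {start: 0}
    while frontier:
        g, state, path = heapq.heappop(frontier)
        if state == goal:                       # goal test at expansion time
            return g, path
        if g > best_g.get(state, float("inf")): # a cheaper copy was already expanded
            continue
        for succ, cost in graph.get(state, {}).items():
            new_g = g + cost
            if new_g < best_g.get(succ, float("inf")):  # better path found
                best_g[succ] = new_g
                heapq.heappush(frontier, (new_g, succ, path + [succ]))
    return None

graph = {"Arad": {"Sibiu": 140, "Timisoara": 118, "Zerind": 75},
         "Sibiu": {"Fagaras": 99, "Rimnicu Vilcea": 80},
         "Rimnicu Vilcea": {"Pitesti": 97}, "Fagaras": {"Bucharest": 211},
         "Pitesti": {"Bucharest": 101}}
print(uniform_cost_search(graph, "Arad", "Bucharest"))
# (418, ['Arad', 'Sibiu', 'Rimnicu Vilcea', 'Pitesti', 'Bucharest'])
```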


Uniform-Cost Search
 • Uniform-cost search is similar to Dijkstra's algorithm.
 • It requires that all step costs are non-negative.
 • It may get stuck if there is a path with an infinite sequence of zero-cost steps.
 • Otherwise it is complete.
[Figure: UCS example with summed path costs 0, 80, 99, 177, 278, 310]

Solving Problems By Searching 29


Depth-First Search (DFS)
 DFS always expands the deepest node in the current frontier of the search tree.
 It uses a stack (LIFO queue: last in, first out).
 DFS is frequently programmed recursively; the program call stack then serves as the LIFO queue.
 DFS is complete if the graph is finite (a sketch follows below).
 The tree-search version of DFS is complete on a finite state space only if a test is included for whether a node has already been visited.
 DFS is incomplete on infinite trees or graphs.

Solving Problems By Searching 30
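A sketch of DFS with an explicit LIFO stack and a visited set (the graph-search variant mentioned above); the small graph is illustrative.

```python
# Depth-first search with an explicit stack; the visited set makes it
# complete on finite graphs, as noted above.
def depth_first_search(graph, start, goal):
    stack = [[start]]                          # LIFO: last path in, first out
    visited = set()
    while stack:
        path = stack.pop()                     # always expand the deepest node
        node = path[-1]
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        for neighbor in graph.get(node, []):
            if neighbor not in visited:
                stack.append(path + [neighbor])
    return None

graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": ["G"]}
print(depth_first_search(graph, "A", "G"))     # ['A', 'C', 'D', 'G'] (one valid path)
```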


Depth-First Search Properties
• Complete? No: it fails in infinite-depth spaces and in spaces with loops.

• Time? O(b^m): bad if m is much larger than d.

• Space? O(bm), i.e., linear in the maximum depth.

• Optimal? No.

where m is the maximum depth of the state space.

Solving Problems By Searching 31


Depth vs. Breadth

Solving Problems By Searching 32


Depth-Limited Search
 • The failure of DFS in infinite search spaces can be prevented by giving it a depth limit l.
 • This approach is called depth-limited search (a sketch follows below).
 • Unfortunately, it is not complete if we choose l < d, where d is the depth of the shallowest goal node.
 • This happens easily, because d is unknown.
 • Depth-limited search has time complexity O(b^l).
 • It has space complexity O(bl).
 • However, in some applications we do know a depth limit (e.g., the number of nodes in a graph, the maximum diameter, …).
Solving Problems By Searching 33
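A recursive depth-limited sketch that distinguishes failure (no solution anywhere below the limit) from cutoff (the limit was reached), which is the distinction the completeness remark above relies on; the graph and the "cutoff" marker are illustrative choices.

```python
# Recursive depth-limited search; returns the path, None for failure, or the
# string "cutoff" when the depth limit was hit.
def depth_limited_search(graph, node, goal, limit):
    if node == goal:
        return [node]
    if limit == 0:
        return "cutoff"                        # ran out of depth, not of nodes
    cutoff_occurred = False
    for child in graph.get(node, []):
        result = depth_limited_search(graph, child, goal, limit - 1)
        if result == "cutoff":
            cutoff_occurred = True
        elif result is not None:
            return [node] + result
    return "cutoff" if cutoff_occurred else None

graph = {"A": ["B", "C"], "B": ["D"], "C": [], "D": ["G"]}
print(depth_limited_search(graph, "A", "G", 2))   # 'cutoff' (l < d)
print(depth_limited_search(graph, "A", "G", 3))   # ['A', 'B', 'D', 'G']
```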
Iterative Deepening Search (IDS)
• The iterative deepening search algorithm repeatedly applies depth-limited search with increasing limits.

• It terminates when a solution is found or when depth-limited search returns failure, meaning that no solution exists.

• This search is also frequently called depth-first iterative deepening (DFID).

Solving Problems By Searching 34
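A self-contained sketch of iterative deepening; the inner dls helper mirrors the depth-limited sketch on the previous slide, and the small graph is illustrative.

```python
# Iterative deepening: repeat depth-limited search with limits 0, 1, 2, ...
def dls(graph, node, goal, limit):
    if node == goal:
        return [node]
    if limit == 0:
        return "cutoff"
    cutoff = False
    for child in graph.get(node, []):
        r = dls(graph, child, goal, limit - 1)
        if r == "cutoff":
            cutoff = True
        elif r is not None:
            return [node] + r
    return "cutoff" if cutoff else None

def iterative_deepening_search(graph, start, goal):
    limit = 0
    while True:                                  # stop on solution or on failure
        result = dls(graph, start, goal, limit)
        if result != "cutoff":
            return result                        # path found, or None = no solution
        limit += 1

graph = {"A": ["B", "C"], "B": ["D"], "C": [], "D": ["G"]}
print(iterative_deepening_search(graph, "A", "G"))   # ['A', 'B', 'D', 'G']
```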


Iterative Deepening Search (IDS)

Solving Problems By Searching 35


Iterative Deepening Search (IDS)

Solving Problems By Searching 36


Iterative Deepening Search Properties
Repeatedly visiting the same nodes seems like a waste of time (not space). How costly is this?
It depends heavily on the branching factor b and the solution depth d of the search tree.
With the goal test done at generation, the number of nodes generated is
N(IDS) = (d+1)*b^0 + d*b^1 + (d-1)*b^2 + … + 1*b^d,
compared with b^0 + b^1 + … + b^d for a single full pass to depth d.
Assume b = 2, d = 10, full binary tree; then
• N(IDS) = 11*2^0 + 10*2^1 + 9*2^2 + … + 1*2^10 = 4,083
• N(DFS) = 1*2^0 + 1*2^1 + 1*2^2 + … + 1*2^10 = 2,047
• i.e., IDS generates at most about twice the number of nodes of DFS or BFS.
Assume b = 10, d = 10, full 10-ary tree; then
• N(IDS) = 11*10^0 + 10*10^1 + 9*10^2 + … + 1*10^10 = 12,345,679,011
• N(DFS) = 1*10^0 + 1*10^1 + 1*10^2 + … + 1*10^10 = 11,111,111,111
• i.e., IDS generates only about 11% more nodes than DFS or BFS.
Solving Problems By Searching 37
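The two sums can be checked with a few lines of Python (the function names are only for this illustration):

```python
# Sanity check of the node counts above, assuming the goal test is done when
# each node is generated (so the root pass contributes (d+1)*b^0, etc.).
def n_ids(b, d):
    # (d+1)*b^0 + d*b^1 + (d-1)*b^2 + ... + 1*b^d
    return sum((d + 1 - k) * b**k for k in range(d + 1))

def n_single_pass(b, d):
    # b^0 + b^1 + ... + b^d  (one full BFS/DFS pass to depth d)
    return sum(b**k for k in range(d + 1))

print(n_ids(2, 10), n_single_pass(2, 10))     # 4083 2047
print(n_ids(10, 10), n_single_pass(10, 10))   # 12345679011 11111111111
```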
Bidirectional search
• The idea behind bidirectional search is to run two simultaneous searches, one forward from the initial state and the other backward from the goal, hoping that the two searches meet in the middle (a sketch follows below).

Advantage: it delays exponential growth by cutting the exponent of the time and space complexity in half: roughly O(b^(d/2)) instead of O(b^d).

Disadvantage: the goal state must be known in advance, and the algorithm needs an efficient way to check whether the two search frontiers intersect.

Solving Problems By Searching 38
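A sketch of bidirectional breadth-first search on a small undirected graph, assuming the goal state is known explicitly and adjacency lists are available; the two frontiers are expanded in turn until some node appears in both parent maps, at which point the two half-paths are stitched together.

```python
# Bidirectional BFS: two frontiers grow until they intersect.
from collections import deque

def bidirectional_search(graph, start, goal):
    if start == goal:
        return [start]
    frontiers = {start: deque([start]), goal: deque([goal])}
    parents = {start: {start: None}, goal: {goal: None}}
    while frontiers[start] and frontiers[goal]:
        for root in (start, goal):                  # expand one node per side in turn
            node = frontiers[root].popleft()
            for nb in graph.get(node, []):
                if nb not in parents[root]:
                    parents[root][nb] = node
                    frontiers[root].append(nb)
                if nb in parents[start] and nb in parents[goal]:
                    forward = []                    # stitch the two half-paths
                    x = nb
                    while x is not None:
                        forward.append(x)
                        x = parents[start][x]
                    backward = []
                    x = parents[goal][nb]
                    while x is not None:
                        backward.append(x)
                        x = parents[goal][x]
                    return list(reversed(forward)) + backward
    return None

g = {"A": ["B"], "B": ["A", "C"], "C": ["B", "D"], "D": ["C", "E"], "E": ["D"]}
print(bidirectional_search(g, "A", "E"))   # ['A', 'B', 'C', 'D', 'E']
```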


Comparing Uninformed Search Strategies

• Evaluation of tree-search strategies: b is the branching factor, d is the depth of the shallowest solution, m is the maximum depth of the search tree, and l is the depth limit.

• BFS: complete (a), time O(b^d), space O(b^d), optimal (c)
• Uniform-cost: complete (a, b), time and space O(b^(1+⌊C*/ε⌋)) where C* is the optimal path cost, optimal
• DFS: not complete, time O(b^m), space O(bm), not optimal
• Depth-limited: not complete, time O(b^l), space O(bl), not optimal
• Iterative deepening: complete (a), time O(b^d), space O(bd), optimal (c)
• Bidirectional: complete (a, d), time O(b^(d/2)), space O(b^(d/2)), optimal (c, d)

 • (a) complete, if b is finite;
 • (b) complete, if step costs ≥ ε > 0;
 • (c) optimal, if step costs are all identical;
 • (d) if both directions use BFS.

Solving Problems By Searching 39


THANK YOU

Solving Problems By Searching 40
