

Chapter 3

Problem solving by searching


Chapter outline
Types of agents that solve problems by searching
Problem and goal formulation
Techniques of search strategies
Blind search strategies
 Heuristic search strategies
Searching in agents
Searching is a method by which an agent finds a sequence of
actions that achieves its goal when no single action will do.
Simple reflex agents cannot search for solutions to problems
because they have no goals.
Goals help organize behavior by limiting the objectives
that the agent is trying to achieve and hence the actions it
needs to consider.
Goal formulation, based on the current situation and the
agent’s performance measure, is the first step in problem
solving.
Searching agents
A goal is a set of world states – exactly those states in
which the goal is satisfied.
The agent’s task is to find out how to act, now and in
the future, so that it reaches a goal state.
Problem formulation is the process of deciding what
actions and states to consider, to achieve a given goal.
Problem, goal and action
 Problem: defined formally by five components, which together determine
the states the agent may be in at any time (a minimal sketch follows this list)
Initial state: the state that the agent starts in, e.g. in(location1)
Actions: a description of the possible actions applicable in a state s, e.g.
{go(location2), go(location3)}
Transition model: a description of what each action does, e.g.
RESULT(in(location1), go(location2)) = in(location2). The initial
state, actions, and transition model together implicitly define the state space
Goal test: determines whether a given state is a goal state, e.g.
in(location3)
Path cost: a function that assigns a non-negative numeric cost to each path.
The step cost of taking action a in state s to reach state s′ is denoted by
c(s, a, s′)
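To make the five components concrete, here is a minimal Python sketch of one possible problem interface; the class and method names (Problem, actions, result, goal_test, step_cost) are illustrative choices for this sketch, not notation from the chapter.

class Problem:
    """One possible generic formulation of a search problem (illustrative sketch)."""

    def __init__(self, initial, goal=None):
        self.initial = initial        # initial state, e.g. in(location1)
        self.goal = goal              # goal state, e.g. in(location3)

    def actions(self, state):
        """Possible actions applicable in `state`, e.g. {go(location2), go(location3)}."""
        raise NotImplementedError

    def result(self, state, action):
        """Transition model: the state that results from doing `action` in `state`."""
        raise NotImplementedError

    def goal_test(self, state):
        """Goal test: whether `state` is a goal state."""
        return state == self.goal

    def step_cost(self, s, a, s_prime):
        """c(s, a, s'): cost of taking action a in state s to reach s'."""
        return 1                      # unit step cost unless a subclass overrides this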
Toy problem: vacuum cleaner
States: The state is determined by both the agent location
and the dirt locations: 2 × 2² = 8 possible world states.
Initial state: Any state can be designated as the initial
state.
Actions: In this simple environment, each state has just
three actions: Left, Right, and Suck.
Transition model: The actions have their expected effects
or no effect.
Goal test: This checks whether all the squares are clean.
Path cost: Let’s say each step costs 1, so the path cost is
the number of steps in the path.
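As an illustration, the vacuum world above might be encoded as follows; the state representation (agent location plus a set of dirty squares) and the square names 'A' and 'B' are assumptions of this sketch.

# Vacuum-world sketch: a state is (agent_location, frozenset_of_dirty_squares).
# The two squares are called 'A' (left) and 'B' (right); these names are assumed.

def actions(state):
    return ['Left', 'Right', 'Suck']          # every state offers the same three actions

def result(state, action):
    loc, dirt = state
    if action == 'Left':
        return ('A', dirt)                    # move to A (no effect if already there)
    if action == 'Right':
        return ('B', dirt)                    # move to B (no effect if already there)
    if action == 'Suck':
        return (loc, dirt - {loc})            # cleaning removes dirt from the current square
    return state

def goal_test(state):
    return not state[1]                       # goal: no dirty squares left

initial = ('A', frozenset({'A', 'B'}))        # agent in A, both squares dirty
s = result(result(result(initial, 'Suck'), 'Right'), 'Suck')
print(goal_test(s))                           # True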
State space

Figure: the state space of the vacuum-cleaner toy problem, which has discrete
locations, discrete dirt, reliable cleaning, and never gets any dirtier.
Toy problems: 8-puzzle
States: a state specifies the location of each of the eight tiles
and the blank; 9!/2 = 181,440 states are reachable
Initial state: Any state can be designated as the initial
state.
Actions: movements of the blank space Left, Right, Up,
or Down.
Transition model: Given a state and action, returns the
resulting state;
Goal test: checks whether the state matches the goal
Path cost: if a step costs 1, the path cost is the number of
steps in the path.
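A possible sketch of the 8-puzzle transition model, assuming a state is a tuple of nine integers in row-major order with 0 standing for the blank; the goal configuration below is also an assumption of the sketch.

# 8-puzzle sketch: a state is a tuple of 9 integers (row-major 3x3 board), 0 = blank.
MOVES = {'Up': -3, 'Down': 3, 'Left': -1, 'Right': 1}
GOAL = (0, 1, 2, 3, 4, 5, 6, 7, 8)            # assumed goal configuration

def actions(state):
    """Movements of the blank that keep it on the board."""
    blank = state.index(0)
    acts = []
    if blank >= 3: acts.append('Up')          # blank not in the top row
    if blank <= 5: acts.append('Down')        # blank not in the bottom row
    if blank % 3 != 0: acts.append('Left')    # blank not in the left column
    if blank % 3 != 2: acts.append('Right')   # blank not in the right column
    return acts

def result(state, action):
    """Transition model: swap the blank with the neighbouring tile."""
    blank = state.index(0)
    target = blank + MOVES[action]
    board = list(state)
    board[blank], board[target] = board[target], board[blank]
    return tuple(board)

def goal_test(state):
    return state == GOAL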
Toy problems: 8-queens problem
Problem: to place eight queens on a chessboard such
that no queen attacks any other
States: Any arrangement of 0 to 8 queens on the board
is a state.
Initial state: No queens on the board.
Actions: Add a queen to any empty square.
Transition model: Returns the board with a queen
added to the specified square.
Goal test: 8 queens are on the board, none attacked.
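A sketch of the goal test for this formulation, assuming a state is represented as a tuple where state[i] gives the column of the queen placed in row i; that representation is an assumption of the sketch.

# 8-queens sketch: state[i] is the column of the queen placed in row i.

def no_attacks(state):
    """True if no two queens share a column or a diagonal (rows differ by construction)."""
    for i in range(len(state)):
        for j in range(i + 1, len(state)):
            if state[i] == state[j]:                  # same column
                return False
            if abs(state[i] - state[j]) == j - i:     # same diagonal
                return False
    return True

def goal_test(state):
    return len(state) == 8 and no_attacks(state)

print(goal_test((0, 4, 7, 5, 2, 6, 1, 3)))            # a known solution -> True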
Real world problems
ROUTE-FINDING PROBLEM
TOURING PROBLEM
TRAVELING SALESMAN PROBLEM
VLSI LAYOUT
ROBOT NAVIGATION
AUTOMATIC ASSEMBLY SEQUENCING
Search for solutions
SEARCH TREE – the tree formed as the search proceeds, with the initial state at its root
EXPANDING – applying each legal action to a state, generating new states
PARENT NODE – a node from which one or more child nodes are generated
CHILD NODE – a node generated from a parent node
LEAF NODE – a node with no child nodes
FRONTIER – the set of all leaf nodes available for
expansion
Search for solutions
Search algorithms require a data structure to keep track of
the search tree that is being constructed. For each node n of
the tree, we have a structure that contains four components:
n(STATE): the state in the state space to which the node
corresponds;
n(PARENT): the node in the search tree that generated this
node;
n(ACTION): the action that was applied to the parent to
generate the node;
n(PATH-COST): the cost, traditionally denoted by g(n), of the
path from the initial state to the node, as indicated by the parent
pointers
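A minimal sketch of such a node structure with these four components, reusing the Problem interface sketched earlier; the field and method names are illustrative.

class Node:
    """Search-tree node: a state plus parent, action, and path cost g(n)."""

    def __init__(self, state, parent=None, action=None, path_cost=0):
        self.state = state            # the state this node corresponds to
        self.parent = parent          # the node that generated this node
        self.action = action          # the action applied to the parent
        self.path_cost = path_cost    # g(n): cost of the path from the initial state

    def child(self, problem, action):
        """Expand this node by one action, using the problem's transition model."""
        next_state = problem.result(self.state, action)
        cost = self.path_cost + problem.step_cost(self.state, action, next_state)
        return Node(next_state, self, action, cost)

    def solution(self):
        """Follow parent pointers back to recover the action sequence."""
        node, acts = self, []
        while node.parent is not None:
            acts.append(node.action)
            node = node.parent
        return list(reversed(acts))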
Queue: the data structure used to store the frontier
Three operations:
isEmpty(Queue)
POP(Queue)
PUSH(node, Queue)
Three variants:
FIFO queue
LIFO queue (stack)
Priority queue
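A sketch of the three frontier variants using Python's standard library (collections.deque for FIFO, a plain list for LIFO, heapq for priority); the node names 'n1' and 'n2' are placeholders.

import heapq
from collections import deque

fifo = deque()                        # FIFO queue: used by breadth-first search
fifo.append('n1'); fifo.append('n2')
print(fifo.popleft())                 # n1: oldest node comes out first

stack = []                            # LIFO queue (stack): used by depth-first search
stack.append('n1'); stack.append('n2')
print(stack.pop())                    # n2: newest node comes out first

pq = []                               # priority queue: uniform-cost, greedy, and A* search
heapq.heappush(pq, (5, 'n1'))         # entries are (priority, node)
heapq.heappush(pq, (2, 'n2'))
print(heapq.heappop(pq)[1])           # n2: lowest priority value comes out first

# isEmpty(Queue) corresponds to checking `not fifo`, `not stack`, or `not pq`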
Solution performance measure
Completeness: Is the algorithm guaranteed to find a
solution when there is one?
Optimality: Does the strategy find the optimal
solution?
Time complexity: How long does it take to find a
solution?
Space complexity: How much memory is needed to
perform the search?
Search strategies
Uninformed search (Blind search)
Strategies that have no additional information about
states beyond that provided in the problem definition.

Informed search (Heuristic search)
Strategies that know whether one non-goal state is “more
promising” than another are called informed search or
heuristic search strategies.
Uninformed search
Breadth-first search
Uniform-cost search
Depth-first search
Depth-limited search
Iterative deepening depth-first search
Bidirectional search
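As one example of a blind strategy, here is a sketch of breadth-first search built on the Problem and Node sketches given earlier in this chapter; it is a sketch under those assumptions, not the chapter's pseudocode.

from collections import deque

def breadth_first_search(problem):
    """Expand the shallowest unexpanded node first (FIFO frontier).

    Complete, and optimal when all step costs are equal.
    """
    node = Node(problem.initial)
    if problem.goal_test(node.state):
        return node
    frontier = deque([node])                      # FIFO queue of leaf nodes
    explored = set()                              # states that have been expanded
    while frontier:
        node = frontier.popleft()
        explored.add(node.state)
        for action in problem.actions(node.state):
            child = node.child(problem, action)
            if child.state not in explored and all(child.state != n.state for n in frontier):
                if problem.goal_test(child.state):    # test at generation time
                    return child
                frontier.append(child)
    return None                                   # no solution exists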
Complexity
INFORMED (HEURISTIC) SEARCH
STRATEGIES
Greedy best-first search
A* Search
Memory-bounded heuristic search
IDA* (Iterative Deepening A*)
RBFS (Recursive Best-First Search)
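A sketch covering greedy best-first search (f(n) = h(n)) and A* (f(n) = g(n) + h(n)) with a single best-first routine; it assumes the Problem and Node sketches from earlier and a caller-supplied heuristic function h.

import heapq
from itertools import count

def best_first_search(problem, f):
    """Expand the node with the lowest f(n) first (priority-queue frontier)."""
    node = Node(problem.initial)
    tie = count()                                 # tie-breaker so nodes are never compared
    frontier = [(f(node), next(tie), node)]
    best_g = {node.state: 0}                      # cheapest known path cost per state
    while frontier:
        _, _, node = heapq.heappop(frontier)
        if problem.goal_test(node.state):
            return node
        for action in problem.actions(node.state):
            child = node.child(problem, action)
            if child.path_cost < best_g.get(child.state, float('inf')):
                best_g[child.state] = child.path_cost
                heapq.heappush(frontier, (f(child), next(tie), child))
    return None

def greedy_best_first_search(problem, h):
    return best_first_search(problem, lambda n: h(n))                 # f(n) = h(n)

def astar_search(problem, h):
    return best_first_search(problem, lambda n: n.path_cost + h(n))   # f(n) = g(n) + h(n)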
