Solving Problems by Searching


Chapter 3

Solving Problems by Searching


In which we see how an agent can find a sequence of actions that achieves its
goals when no single action will do.
 In Chapter 2 we discussed the simplest agents, reflex agents.
 Such agents cannot operate well in environments for which the
mapping from percepts to actions would be too large to store and
would take too long to learn.
 Goal-based agents, on the other hand, consider future actions and
the desirability of their outcomes.
Cont'd
This chapter describes one kind of goal-based agent, called a problem-solving agent.
Problem Solving Agents
Intelligent agents are supposed to maximize their performance measure.
Imagine an agent in the city of Arad, Romania, enjoying a touring holiday.
The agent’s performance measure contains many factors. It wants to:
 improve its suntan
 improve its Romanian
 take in the sights
 enjoy the nightlife (such as it is)
 avoid hangovers, and so on.
Cont'd
Now, suppose the agent has a nonrefundable ticket to fly out of
Bucharest the following day.
In that case, it makes sense for the agent to adopt the goal of getting
to Bucharest.
Goal formulation, based on the current situation and the agent’s
performance measure, is the first step in problem solving.
The agent’s task is then to find how to act, now and in the future, so
that it reaches a goal state.
Problem formulation is the process of deciding what actions
and states to consider, given a goal.
Our agent has now adopted the goal of driving to Bucharest and is
considering where to go from Arad.
Three roads lead out of Arad, one toward Sibiu, one to Timisoara,
and one to Zerind. None of these achieves the goal.
Suppose that the agent has a map of Romania.
Cont'd
The agent can use this information to consider subsequent stages of a
hypothetical journey via each of the three towns, trying to find a
journey that eventually gets to Bucharest.
In general, an agent with several immediate options of unknown value
can decide what to do by first examining future actions that
eventually lead to states of known value.

For now we assume that the environment is observable, so the agent
always knows the current state. (For the driving agent, each city on
the map might have a sign announcing its name to arriving drivers.)
Cont'd
We also assume the environment is discrete, so at any given state
there are only finitely many actions to choose from.
We will assume the environment is known, so the agent knows which
states are reached by each action.
Finally, we assume that the environment is deterministic, so each
action has exactly one outcome.
Under ideal conditions, this is true for the agent in Romania.
Under these assumptions, the solution to any problem is a fixed
sequence of actions.
Cont'd
The process of looking for a sequence of actions that reaches the goal
is called search.
A search algorithm takes a problem as input and returns a solution in
the form of an action sequence.
Once a solution is found, carrying out the actions it recommends is
called the execution phase.
After formulating a goal and a problem to solve, the agent calls a
search procedure to solve it.
Once the solution has been executed, the agent will formulate a new
goal.
Cont'd
Notice that while the agent is executing the solution sequence it
ignores its percepts when choosing an action because it knows in
advance what they will be.
Well-defined problems and solutions
 A problem can be defined formally by five components:
The initial state that the agent starts in. For example, the initial state for our
agent in Romania might be described as In(Arad).
Cont'd
A description of the possible actions available to the agent. Given a
state s, ACTIONS(s) returns the set of actions that can be executed in s.
We say that each of these actions is applicable in s. For example,
from the state In(Arad), the applicable actions are {Go(Sibiu),
Go(Timisoara), Go(Zerind)}.
A description of what each action does; the formal name for this is
the transition model, specified by a function RESULT(s, a) that returns
the state that results from doing action a in state s.
We also use the term successor to refer to any state reachable from a
given state by a single action.
 For example, we have RESULT(In(Arad),Go(Zerind)) = In(Zerind) .
Cont'd
Together, the initial state, actions, and transition model implicitly
define the state space of the problem, the set of all states reachable
from the initial state by any sequence of actions.
The state space forms a directed network or graph in which the
nodes are states and the links between nodes are actions.
A path in the state space is a sequence of states connected by a
sequence of actions.
The goal test, which determines whether a given state is a goal state.
The agent’s goal in Romania is the singleton set {In(Bucharest )}.
Cont'd
A path cost function that assigns a numeric cost to each path.
The problem-solving agent chooses a cost function that reflects its
own performance measure.
The preceding elements define a problem and can be gathered into a
single data structure that is given as input to a problem-solving
algorithm.
A solution to a problem is an action sequence that leads from the
initial state to a goal state.
Solution quality is measured by the path cost function, and an
optimal solution has the lowest path cost among all solutions.
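The five components above can be collected into a single data structure. Here is a minimal Python sketch, using a small fragment of the Romania road map from the example (the distances shown are illustrative):

```python
# A sketch of a problem as its five components, on a small fragment of
# the Romania road map. Distances are illustrative.
ROADS = {
    ("Arad", "Sibiu"): 140,
    ("Arad", "Timisoara"): 118,
    ("Arad", "Zerind"): 75,
    ("Sibiu", "Fagaras"): 99,
    ("Fagaras", "Bucharest"): 211,
}
# Roads can be driven in both directions, so make the map symmetric.
ROADS.update({(b, a): d for (a, b), d in list(ROADS.items())})

class RouteProblem:
    def __init__(self, initial, goal, roads):
        self.initial, self.goal, self.roads = initial, goal, roads

    def actions(self, s):
        # Applicable actions in state s: drive to any neighboring city.
        return [b for (a, b) in self.roads if a == s]

    def result(self, s, a):
        # Transition model: driving to city a puts us in city a.
        return a

    def goal_test(self, s):
        return s == self.goal

    def step_cost(self, s, a):
        # Path cost: sum of road distances, reflecting the agent's
        # performance measure (shorter drives are better).
        return self.roads[(s, a)]

problem = RouteProblem("Arad", "Bucharest", ROADS)
print(problem.actions("Arad"))   # -> ['Sibiu', 'Timisoara', 'Zerind']
```

This data structure is exactly what a problem-solving algorithm takes as input.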
Cont'd
Formulating problems
In the preceding section we proposed a formulation of the problem
of getting to Bucharest in terms of the initial state, actions, transition
model, goal test, and path cost.
Example Problems
Toy problems
The 8-puzzle
States: A state description specifies the location of each of the eight tiles and the blank
in one of the nine squares.
 Initial state: Any state can be designated as the initial state. Note that any given goal
can be reached from exactly half of the possible initial states (Exercise 3.4).
Actions: The simplest formulation defines the actions as movements of the blank space
Left, Right, Up, or Down. Different subsets of these are possible depending on where
the blank is.
 Transition model: Given a state and action, this returns the resulting state; for example,
if we apply Left to the start state in Figure 3.4, the resulting state has the 5 and the
blank switched.
Goal test: This checks whether the state matches the goal configuration shown in Figure
3.4. (Other goal configurations are possible.)
Path cost: Each step costs 1, so the path cost is the number of steps in the path.
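The formulation above can be sketched in Python. The representation chosen here (a state as a tuple of nine entries read row by row, with 0 standing for the blank) is one of several reasonable choices:

```python
# A sketch of the 8-puzzle formulation. A state is a tuple of 9 entries
# read row by row; 0 marks the blank. Actions move the blank space.
GOAL = (0, 1, 2, 3, 4, 5, 6, 7, 8)  # one possible goal configuration

def actions(state):
    # The applicable actions depend on where the blank is.
    i = state.index(0)
    moves = []
    if i % 3 > 0: moves.append("Left")
    if i % 3 < 2: moves.append("Right")
    if i >= 3:    moves.append("Up")
    if i < 6:     moves.append("Down")
    return moves

def result(state, action):
    # Transition model: swap the blank with the neighboring tile.
    i = state.index(0)
    delta = {"Left": -1, "Right": 1, "Up": -3, "Down": 3}[action]
    j = i + delta
    s = list(state)
    s[i], s[j] = s[j], s[i]
    return tuple(s)

def goal_test(state):
    return state == GOAL

# Each step costs 1, so the path cost is the number of steps taken.
```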
The 8-queens problem
States: Any arrangement of 0 to 8 queens on the board is a state.
Initial state: No queens on the board.
Actions: Add a queen to any empty square.
Transition model: Returns the board with a queen added to the
specified square.
Goal test: 8 queens are on the board, none attacked.
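The formulation above can be sketched in Python. This sketch uses a common refinement of it: queens are added row by row, and only squares not attacked by an already-placed queen are offered as actions:

```python
# A sketch of an incremental 8-queens formulation. A state is a tuple of
# column positions, one queen per row placed so far.
def conflicted(state, row, col):
    # Would a queen at (row, col) be attacked by a queen already placed?
    return any(c == col or abs(c - col) == abs(r - row)
               for r, c in enumerate(state))

def actions(state):
    # Add a queen to the next row, in any non-attacked column.
    row = len(state)
    return [col for col in range(8) if not conflicted(state, row, col)]

def result(state, col):
    return state + (col,)

def goal_test(state):
    # 8 queens placed; by construction, none attacks another.
    return len(state) == 8
```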
Measuring problem-solving performance

Completeness: Is the algorithm guaranteed to find a solution when
there is one?
Optimality: Does the strategy find the optimal solution?
Time complexity: How long does it take to find a solution?
Space complexity: How much memory is needed to perform the
search?
Searching for solutions
Having formulated some problems, we now need to solve them.
A solution is an action sequence, so search algorithms work by
considering various possible action sequences.
The possible action sequences starting at the initial state form a
search tree with the initial state at the root.
The branches are actions and the nodes correspond to states in the
state space of the problem.
Cont'd
Infrastructure for search algorithms
Search algorithms require a data structure to keep track of the search
tree that is being constructed.
For each node n of the tree, we have a structure that contains four
components:
 n.STATE: the state in the state space to which the node corresponds.
 n.PARENT: the node in the search tree that generated this node.
n.ACTION: the action that was applied to the parent to generate the node.
n.PATH-COST: the cost, traditionally denoted by g(n), of the path from the
initial state to the node, as indicated by the parent pointers.
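The four components above can be sketched as a small Python class, together with a helper that follows the parent pointers back to the root to recover the action sequence:

```python
# A sketch of the search-tree node structure described above.
class Node:
    def __init__(self, state, parent=None, action=None, path_cost=0):
        self.state = state          # state in the state space
        self.parent = parent        # node that generated this node
        self.action = action        # action applied to the parent
        self.path_cost = path_cost  # g(n): cost from the initial state

def solution(node):
    # Follow parent pointers back to the root to recover the actions.
    actions = []
    while node.parent is not None:
        actions.append(node.action)
        node = node.parent
    return list(reversed(actions))

root = Node("Arad")
child = Node("Sibiu", parent=root, action="Go(Sibiu)", path_cost=140)
print(solution(child))   # -> ['Go(Sibiu)']
```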
Cont'd
UNINFORMED SEARCH STRATEGIES
Also called blind search.
The term means that the strategies have no additional information
about the states beyond that provided in the problem definition.
All strategies are distinguished by the order in which nodes are
expanded.
Strategies that know whether one non-goal state is “more promising”
than another are called informed search or heuristic search strategies.
1. Breadth-first search
Breadth-first search is a simple strategy in which the root node is
expanded first, then all the successors of the root node are expanded
next, then their successors, and so on.
All the nodes are expanded at a given depth in the search tree
before any nodes at the next level are expanded.
Breadth-first search is an instance of the general graph-search
algorithm in which the shallowest unexpanded node is chosen
for expansion.
This is achieved very simply by using a FIFO queue for the
frontier.
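The FIFO-queue idea can be sketched in a few lines of Python, shown here on a tiny hypothetical graph rather than a full problem structure:

```python
# A sketch of breadth-first graph search with a FIFO frontier.
from collections import deque

def breadth_first_search(initial, goal_test, successors):
    # successors(state) returns the states reachable in one action.
    if goal_test(initial):
        return [initial]
    frontier = deque([[initial]])      # FIFO queue of paths
    explored = {initial}
    while frontier:
        path = frontier.popleft()      # shallowest path first
        for s in successors(path[-1]):
            if s not in explored:
                if goal_test(s):
                    return path + [s]
                explored.add(s)
                frontier.append(path + [s])
    return None                        # no solution exists

# Usage on a tiny hypothetical graph:
graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
print(breadth_first_search("A", lambda s: s == "D", graph.__getitem__))
# -> ['A', 'B', 'D']
```

Because the shallowest node is always expanded first, the first solution found uses the fewest steps.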
Cont'd
2. Depth-first Search
Depth-first search always expands the deepest node in the current
frontier of the search tree.
The search proceeds immediately to the deepest level of the search
tree, where the nodes have no successors.
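Depth-first search has the same structure as the breadth-first sketch, but with a LIFO stack as the frontier, so the most recently generated (deepest) node is expanded first:

```python
# A sketch of depth-first graph search with a LIFO stack as the frontier.
def depth_first_search(initial, goal_test, successors):
    frontier = [[initial]]             # LIFO stack of paths
    explored = set()
    while frontier:
        path = frontier.pop()          # deepest path first
        state = path[-1]
        if goal_test(state):
            return path
        if state not in explored:
            explored.add(state)
            for s in successors(state):
                frontier.append(path + [s])
    return None

# Usage on the same tiny hypothetical graph:
graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
print(depth_first_search("A", lambda s: s == "D", graph.__getitem__))
# -> ['A', 'C', 'D']
```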
Cont'd
