Search Strategies


Hill Climbing Algorithm in Artificial Intelligence

o Hill climbing algorithm is a local search algorithm which
continuously moves in the direction of increasing elevation/value to
find the peak of the mountain, i.e. the best solution to the problem.
It terminates when it reaches a peak where no neighbor has a
higher value.

o Hill climbing is a technique used for optimizing mathematical
problems. One widely discussed example of the hill climbing
algorithm is the Traveling Salesman Problem, in which we need to
minimize the total distance traveled by the salesman.

o It is also called greedy local search, as it only looks at its
immediate neighbor states and not beyond them.

o A node of the hill climbing algorithm has two components: state
and value.

o Hill Climbing is mostly used when a good heuristic is available.

o In this algorithm, we don't need to maintain a search tree or
graph, as it only keeps a single current state.

Features of Hill Climbing:

Following are some main features of Hill Climbing Algorithm:

o Generate and Test variant: Hill climbing is a variant of the
Generate and Test method. The Generate and Test method produces
feedback which helps decide which direction to move in the
search space.

o Greedy approach: Hill climbing search moves in the direction
which optimizes the cost.

o No backtracking: It does not backtrack in the search space, as it
does not remember previous states.

State-space Diagram for Hill Climbing:

The state-space landscape is a graphical representation of the
hill climbing algorithm: a plot of the objective function/cost against
the various states of the algorithm.

On the y-axis we plot the function, which can be an objective function
or a cost function, and the state space on the x-axis. If the y-axis
shows cost, the goal of the search is to find the global minimum
(while avoiding getting stuck in a local minimum). If the y-axis
shows an objective function, the goal of the search is to find the
global maximum (while avoiding getting stuck in a local maximum).

Different regions in the state space landscape:

Local Maximum: A local maximum is a state which is better than its
neighbor states, but there is another state in the landscape which is
higher than it.

Global Maximum: The global maximum is the best possible state in the
state-space landscape. It has the highest value of the objective
function.

Current state: The state in the landscape diagram where the agent is
currently present.

Flat local maximum: A flat region of the landscape where all the
neighbor states of the current state have the same value.

Shoulder: It is a plateau region which has an uphill edge.

Types of Hill Climbing Algorithm:

o Simple hill climbing

o Steepest-ascent hill climbing

o Stochastic hill climbing

1. Simple Hill Climbing:

Simple hill climbing is the simplest way to implement a hill climbing
algorithm. It evaluates only one neighbor node state at a time, and
selects the first one which improves the current cost, setting it as the
current state. It checks only one successor state; if that successor is
better than the current state, it moves there, else it stays in the same
state. This algorithm has the following features:

o Less time consuming

o Less optimal solution and the solution is not guaranteed

Algorithm for Simple Hill Climbing:

o Step 1: Evaluate the initial state. If it is the goal state, then
return success and stop.

o Step 2: Loop until a solution is found or there is no new operator
left to apply.

o Step 3: Select and apply an operator to the current state.

o Step 4: Check new state:

a. If it is the goal state, then return success and quit.

b. Else, if it is better than the current state, then assign the new
state as the current state.

c. Else, if it is not better than the current state, then return to
step 2.

o Step 5: Exit.
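The loop above can be sketched in Python as follows; the quadratic objective and the integer neighbor generator are hypothetical, chosen only for illustration:

```python
def simple_hill_climbing(objective, start, neighbors):
    """Examine neighbors one at a time and move to the FIRST one
    that improves the objective; stop at a local maximum."""
    current = start
    while True:
        moved = False
        for candidate in neighbors(current):
            if objective(candidate) > objective(current):
                current = candidate      # first improving neighbor wins
                moved = True
                break
        if not moved:                    # no neighbor is better: stop
            return current

# Hypothetical objective: f peaks at x = 3 over integer states.
f = lambda x: -(x - 3) ** 2
step = lambda x: [x - 1, x + 1]
print(simple_hill_climbing(f, 0, step))   # -> 3
```

Note that the loop stops at the first peak it reaches, which may be a local rather than a global maximum.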

2. Steepest-Ascent hill climbing:

The steepest-ascent algorithm is a variation of the simple hill climbing
algorithm. This algorithm examines all the neighboring nodes of the
current state and selects the one neighbor node which is closest to the
goal state. It consumes more time, as it evaluates multiple neighbors.

Algorithm for Steepest-Ascent hill climbing:

o Step 1: Evaluate the initial state. If it is the goal state, then
return success and stop; else make the initial state the current
state.

o Step 2: Loop until a solution is found or the current state does not
change.

a. Let SUCC be a state such that any successor of the current
state will be better than it.

b. For each operator that applies to the current state:

a. Apply the new operator and generate a new state.

b. Evaluate the new state.

c. If it is the goal state, then return it and quit; else compare
it to SUCC.

d. If it is better than SUCC, then set new state as SUCC.

e. If SUCC is better than the current state, then set the
current state to SUCC.

o Step 3: Exit.
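A minimal Python sketch of the loop above, assuming the same hypothetical integer-state setup as before:

```python
def steepest_ascent(objective, start, neighbors):
    """Examine ALL neighbors each round; move to the best one (SUCC)
    only if it beats the current state, else stop."""
    current = start
    while True:
        succ = max(neighbors(current), key=objective)  # best successor
        if objective(succ) <= objective(current):
            return current                 # current state is a peak
        current = succ

# Hypothetical objective: f peaks at x = 3 over integer states.
f = lambda x: -(x - 3) ** 2
print(steepest_ascent(f, 10, lambda x: [x - 1, x + 1]))   # -> 3
```

Unlike simple hill climbing, this version always moves to the single best neighbor rather than the first improving one.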
3. Stochastic hill climbing:

Stochastic hill climbing does not examine all of its neighbors before
moving. Instead, this search algorithm selects one neighbor node at
random and decides whether to move to it or to examine another.
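A minimal sketch of this variant, again using a hypothetical integer-state objective; the fixed step budget is an assumption added so the loop terminates:

```python
import random

def stochastic_hill_climbing(objective, start, neighbors, max_steps=10_000):
    """Pick ONE neighbor at random each step; move only if it improves."""
    current = start
    for _ in range(max_steps):
        candidate = random.choice(neighbors(current))
        if objective(candidate) > objective(current):
            current = candidate
    return current

random.seed(0)                              # reproducible run
f = lambda x: -(x - 3) ** 2
print(stochastic_hill_climbing(f, 0, lambda x: [x - 1, x + 1]))   # -> 3
```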

Problems in Hill Climbing Algorithm:

1. Local Maximum: A local maximum is a peak state in the landscape
which is better than each of its neighboring states, but there is
another state in the landscape which is higher still.

Solution: A backtracking technique can be a solution to the local
maximum problem. Maintain a list of promising paths so that the
algorithm can backtrack through the search space and explore other
paths as well.

2. Plateau: A plateau is a flat area of the search space in which all the
neighbor states of the current state have the same value; because of
this, the algorithm cannot find a best direction to move. A hill-climbing
search might get lost in the plateau area.

Solution: The solution to a plateau is to take big steps (or very small
steps) while searching. For example, randomly select a state far away
from the current state, so that the algorithm may land in a non-plateau
region.
3. Ridges: A ridge is a special form of local maximum. It is an area
which is higher than its surrounding areas but which itself has a slope,
and it cannot be climbed in a single move.

Solution: Using bidirectional search, or moving in several different
directions at once, can mitigate this problem.

Simulated Annealing:

A hill-climbing algorithm which never makes a move toward a lower
value is guaranteed to be incomplete, because it can get stuck on a
local maximum. And if the algorithm applies a pure random walk, moving
to a successor chosen at random, it may be complete but is not
efficient. Simulated annealing is an algorithm which yields both
efficiency and completeness. In metallurgy, annealing is the process of
heating a metal or glass to a high temperature and then cooling it
gradually, which allows the material to settle into a low-energy
crystalline state. The same idea is used in simulated annealing: the
algorithm picks a random move instead of the best move. If the random
move improves the state, it is accepted. Otherwise, the algorithm
accepts the downhill move with a probability less than 1, which
decreases as the temperature cools, allowing it to escape local maxima.
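A minimal sketch of this idea, assuming a geometric cooling schedule; the temperature parameters and the objective below (a local maximum near x = 0 and the global maximum at x = 6) are invented for illustration:

```python
import math
import random

def simulated_annealing(objective, start, neighbors,
                        temp=10.0, cooling=0.95, min_temp=1e-3):
    """Pick a random move each step. Improvements are always accepted;
    a downhill move is accepted with probability exp(delta / T),
    which shrinks toward zero as the temperature T cools."""
    current = start
    while temp > min_temp:
        candidate = random.choice(neighbors(current))
        delta = objective(candidate) - objective(current)
        if delta > 0 or random.random() < math.exp(delta / temp):
            current = candidate
        temp *= cooling                      # gradual (geometric) cooling
    return current

random.seed(1)
# Hypothetical objective: local maximum near x = 0, global maximum at x = 6.
f = lambda x: -(abs(x) - 6) ** 2 if x > 2 else -x * x - 9
print(simulated_annealing(f, 0, lambda x: [x - 1, x + 1]))
```

Early on, the high temperature lets the search accept downhill moves and escape the local maximum; as the temperature falls, it behaves more and more like plain hill climbing.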

Dijkstra's Algorithm

Dijkstra's algorithm is an algorithm we can use to find shortest
distances or minimum costs, depending on what is represented in the
graph. You're basically working backwards from the end to the beginning,
finding the shortest leg each time. The steps of this algorithm are as
follows:

Step 1: Start at the ending vertex by marking it with a distance of 0, because it's 0
units from the end. Call this vertex your current vertex, and put a circle around it
indicating as such.

Step 2: Identify all of the vertices that are connected to the current vertex with an
edge. Calculate their distance to the end by adding the weight of the edge to the
mark on the current vertex. Mark each of the vertices with their corresponding
distance, but only change a vertex's mark if it's less than a previous mark. Each time
you mark the starting vertex with a mark, keep track of the path that resulted in that
mark.

Step 3: Label the current vertex as visited by putting an X over it. Once a vertex is
visited, we won't look at it again.
Step 4: Of the vertices you just marked, find the one with the smallest mark, and
make it your current vertex. Now, you can start again from step 2.

Step 5: Once you've labeled the beginning vertex as visited - stop. The distance of
the shortest path is the mark of the starting vertex, and the shortest path is the path
that resulted in that mark.
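These steps can be sketched with a min-priority queue (Python's heapq). The three edge weights from Divya's house match the example below; the remaining edges and the connections to "house" are hypothetical:

```python
import heapq

def dijkstra(graph, end):
    """Work backwards from `end`: repeatedly visit the unvisited
    vertex with the smallest mark and relax its neighbors."""
    dist = {v: float("inf") for v in graph}
    dist[end] = 0                      # Step 1: mark the end with 0
    visited = set()
    heap = [(0, end)]
    while heap:
        d, current = heapq.heappop(heap)
        if current in visited:
            continue
        visited.add(current)           # Step 3: mark as visited
        for neighbor, weight in graph[current]:
            mark = d + weight          # Step 2: distance via current
            if mark < dist[neighbor]:  # keep only the smaller mark
                dist[neighbor] = mark
                heapq.heappush(heap, (mark, neighbor))
    return dist

# Edge weights from Divya's house match the example; the rest are hypothetical.
graph = {
    "divya":   [("movies", 4), ("grocery", 5), ("gas", 6)],
    "movies":  [("divya", 4), ("house", 7)],
    "grocery": [("divya", 5), ("house", 3)],
    "gas":     [("divya", 6), ("house", 5)],
    "house":   [("movies", 7), ("grocery", 3), ("gas", 5)],
}
print(dijkstra(graph, "divya")["house"])   # prints 8 (via the grocery store)
```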

Let's now consider finding the shortest path from your house to Divya's house to
illustrate this algorithm.

Application

First, we start at the ending vertex (Divya's house). Mark it with a zero and call this
vertex the current vertex, putting a circle around it, as you can see on the graph:

The next step is to mark any vertices that are connected to Divya's house by an
edge with the distance from the end. Let's quickly calculate these distances while we
look at a visual representation of what's going on using our previous graph:

 Movie theater = 0 + 4 = 4
 Grocery store = 0 + 5 = 5
 Gas station = 0 + 6 = 6

We're now done with Divya's house, so we label it as visited with an X. We see that
the smallest marked vertex is the movie theater, with a mark of 4. This is our new
current vertex, and we start again at step 2.
Now, we mark any vertices connected by an edge to the movie theater, by adding
the weight of the connecting edge to the movie theater's mark. Since Divya's house
is marked as visited, we don't need to consider this vertex.

Introduction to Depth Limited Search

Depth limited search is an uninformed search algorithm. The unbounded tree
problem appears in the depth-first search algorithm, and it can be fixed by
imposing a limit on the depth of the search domain. We call this limit the
depth limit; it makes the DFS search strategy more refined and organized,
bounding it to a finite depth. We denote this limit by l, and it provides the
solution to the infinite path problem that arises in the DFS algorithm. Thus,
depth limited search can be seen as an extended and refined version of the DFS
algorithm. In a nutshell, to avoid infinite loops during execution, the depth
limited search algorithm is executed only to a finite depth called the depth
limit.

Algorithm

This algorithm essentially follows a similar set of steps as in the DFS algorithm.

1. The start node, node 1, is added to the top of the stack.
2. It is marked as visited, and if node 1 is not the goal node, we push
node 2 on top of the stack.
3. Next, we mark node 2 as visited and check whether it is the goal node.
4. If node 2 is not the goal node, then we push node 4 on top of the
stack.
5. We continue within the same depth limit, moving depth-wise to check
for the goal node.
6. If node 4 is also not the goal node and the depth limit has been
reached, then we retrace back to the nearest nodes that remain unvisited
or unexplored.
7. Then we push them onto the stack and mark them visited.
8. We repeat these steps iteratively until the goal node is reached or
until all nodes within the depth limit have been explored.

Comparing the above steps with DFS, we find that DLS, like DFS, is
implemented using the stack data structure. In addition, the level (depth) of
each node needs to be tracked so it can be checked against the depth limit on
the path from the source node toward the goal node.

Depth-limited search terminates under two conditions:

1. When the goal node is found.

2. When there is no solution within the given depth limit.

Example of DLS Process

If we fix the depth limit to 2, DLS can be carried out similarly to DFS until
the goal node is found within the search domain of the tree.
Algorithm of the example

1. We start by finding and fixing a start node.
2. Then we search depth-wise using the DFS algorithm.
3. At each node, we check whether the current node is the goal node.

If the answer is no: we do nothing.

If the answer is yes: we return.

4. We then check whether the current node lies within the depth limit
specified earlier.

If the answer is no: we do nothing.

If the answer is yes: we explore the node further and save all of its
successors onto a stack.
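The process above can be sketched recursively; the small tree below is hypothetical, loosely following the node numbering used in the earlier steps:

```python
def depth_limited_search(tree, start, goal, limit):
    """DFS that refuses to expand any node at depth == limit."""
    def recurse(node, depth):
        if node == goal:                 # goal test on the current node
            return True
        if depth == limit:               # depth limit reached: cut off
            return False
        return any(recurse(child, depth + 1)
                   for child in tree.get(node, []))
    return recurse(start, 0)

# Hypothetical tree for illustration.
tree = {1: [2, 3], 2: [4, 5], 3: [6, 7]}
print(depth_limited_search(tree, 1, 7, limit=2))   # -> True (goal at depth 2)
print(depth_limited_search(tree, 1, 7, limit=1))   # -> False (cut off)
```

The second call illustrates the failure mode: a solution exists, but not within the given depth limit.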

Iterative Deepening Depth-first Search:

The iterative deepening algorithm is a combination of the DFS and BFS
algorithms. This search algorithm finds the best depth limit by
gradually increasing the limit until a goal is found.
It performs depth-first search up to a certain "depth limit",
and it keeps increasing the depth limit after each iteration until the
goal node is found.

This search algorithm combines the benefits of breadth-first search's
fast search and depth-first search's memory efficiency.

The iterative deepening search algorithm is a useful uninformed search
when the search space is large and the depth of the goal node is
unknown.

Advantages:

o It combines the benefits of the BFS and DFS search algorithms in
terms of fast search and memory efficiency.

Disadvantages:

o The main drawback of IDDFS is that it repeats all the work of the
previous phase.

Example:

The following tree structure shows the iterative deepening depth-first
search. The IDDFS algorithm performs several iterations until it finds
the goal node. The iterations proceed as follows:

1st iteration -----> A
2nd iteration ----> A, B, C
3rd iteration -----> A, B, D, E, C, F, G
4th iteration -----> A, B, D, H, I, E, C, F, K, G

In the fourth iteration, the algorithm finds the goal node.
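A minimal sketch of IDDFS that returns the path to the goal; the tree below is built to match the iteration listing above, with K as a child of F:

```python
def iddfs(tree, start, goal, max_depth=50):
    """Repeat depth-limited DFS with limit 0, 1, 2, ... until the
    goal is found, combining BFS completeness with DFS memory use."""
    def dls(node, limit):
        if node == goal:
            return [node]
        if limit == 0:
            return None                    # cut off at this depth
        for child in tree.get(node, []):
            path = dls(child, limit - 1)
            if path is not None:
                return [node] + path
        return None
    for limit in range(max_depth + 1):     # iteration 1 explores depth 0, ...
        path = dls(start, limit)
        if path is not None:
            return path
    return None

# Tree matching the iterations above (goal K at depth 3, under F).
tree = {"A": ["B", "C"], "B": ["D", "E"], "C": ["F", "G"],
        "D": ["H", "I"], "F": ["K"]}
print(iddfs(tree, "A", "K"))   # -> ['A', 'C', 'F', 'K']
```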

Completeness:

This algorithm is complete if the branching factor is finite.

Time Complexity:

Let's suppose b is the branching factor and d is the depth; then the
worst-case time complexity is O(b^d).

Space Complexity:

The space complexity of IDDFS will be O(bd).

Optimal:
The IDDFS algorithm is optimal if the path cost is a non-decreasing
function of the depth of the node.

Bidirectional Search Algorithm:

The bidirectional search algorithm runs two simultaneous searches:
one from the initial state, called the forward search, and one from
the goal node, called the backward search, to find the goal node.
Bidirectional search replaces one single search graph with two
small subgraphs: one starts the search from the initial
vertex and the other starts from the goal vertex. The search stops
when these two graphs intersect each other.

Bidirectional search can use search techniques such as BFS, DFS,
DLS, etc.

Advantages:

o Bidirectional search is fast.

o Bidirectional search requires less memory

Disadvantages:

o Implementation of the bidirectional search tree is difficult.

o In bidirectional search, one should know the goal state in
advance.

Example:

In the search tree below, the bidirectional search algorithm is applied.
This algorithm divides one graph/tree into two sub-graphs. It starts
traversing from node 1 in the forward direction and from goal node 16 in
the backward direction.
The algorithm terminates at node 9, where the two searches meet.
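Using BFS in both directions, this can be sketched as two frontiers expanded level by level. The chain graph below is a hypothetical stand-in for the 16-node example; like the example, the two searches happen to meet at node 9:

```python
def bidirectional_search(graph, start, goal):
    """Expand two BFS frontiers, one forward from `start` and one
    backward from `goal`, stopping at a vertex where they meet."""
    if start == goal:
        return start
    front, back = {start}, {goal}
    seen_f, seen_b = {start}, {goal}
    while front and back:
        # Expand the forward frontier by one level.
        front = {n for v in front for n in graph[v] if n not in seen_f}
        seen_f |= front
        if front & seen_b:
            return (front & seen_b).pop()   # meeting vertex
        # Expand the backward frontier by one level.
        back = {n for v in back for n in graph[v] if n not in seen_b}
        seen_b |= back
        if back & seen_f:
            return (back & seen_f).pop()
    return None

# Hypothetical 16-node chain graph 1-2-...-16.
graph = {i: [j for j in (i - 1, i + 1) if 1 <= j <= 16]
         for i in range(1, 17)}
print(bidirectional_search(graph, 1, 16))   # -> 9
```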

Completeness: Bidirectional search is complete if we use BFS in both
searches.

Time Complexity: The time complexity of bidirectional search using BFS
is O(b^(d/2)).

Space Complexity: The space complexity of bidirectional search is O(b^(d/2)).

Optimal: Bidirectional search is optimal when BFS is used in both directions.
