Unit 2
Completeness:
Uniform-cost search is complete: if a solution exists, UCS will find it.
Time Complexity:
Let C* be the cost of the optimal solution, and let ε be the smallest cost of
any single step. Then the number of levels explored is 1 + ⌊C*/ε⌋; the +1
appears because the search starts at depth 0 and ends at depth ⌊C*/ε⌋.
Hence, the worst-case time complexity of uniform-cost search is O(b^(1 + ⌊C*/ε⌋)).
Space Complexity:
The same logic applies to space, so the worst-case space complexity of
uniform-cost search is also O(b^(1 + ⌊C*/ε⌋)).
Optimal:
Uniform-cost search is optimal, because it always expands the node with the
lowest path cost first.
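The behaviour described above can be sketched in a few lines of Python. This is a minimal illustration, not a production implementation; the small graph and its edge costs are assumptions chosen for the example.

```python
import heapq

def uniform_cost_search(graph, start, goal):
    """Expand the cheapest frontier node first; return (cost, path) or None."""
    frontier = [(0, start, [start])]          # priority queue ordered by path cost g(n)
    explored = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:                      # goal test on expansion keeps optimality
            return cost, path
        if node in explored:
            continue
        explored.add(node)
        for nbr, step in graph.get(node, []):
            if nbr not in explored:
                heapq.heappush(frontier, (cost + step, nbr, path + [nbr]))
    return None

# assumed toy graph: the direct-looking route S->A->G costs more than S->B->G
graph = {'S': [('A', 1), ('B', 5)], 'A': [('G', 9)], 'B': [('G', 2)]}
print(uniform_cost_search(graph, 'S', 'G'))   # (7, ['S', 'B', 'G'])
```

Note that the goal test happens when a node is popped, not when it is generated; testing earlier could return a more expensive path first.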
Iterative deepening depth-first Search
The iterative deepening algorithm is a blend of DFS and BFS algorithms. This
search technique determines the appropriate depth limit by gradually raising it
until a goal is discovered.
This algorithm searches in depth first up to a specific "depth limit," then
increases the depth limit for each iteration until the objective node is
discovered.
This search algorithm combines the speed of breadth-first search with the
memory efficiency of depth-first search.
When the search space is huge and the depth of the goal node is unknown, the
iterative search technique is useful for uninformed search.
Advantages
In terms of quick search and memory efficiency, it combines the advantages of
the BFS and DFS search algorithms.
Disadvantages:
The biggest disadvantage of IDDFS is that it duplicates all of the preceding
phase's work.
Example
The iterative deepening depth-first search is illustrated by the tree structure
below. The IDDFS algorithm performs iterations until it finds the goal node.
The algorithm's iterations are described as follows:
1st Iteration ----> A
2nd Iteration ----> A, B, C
3rd Iteration ----> A, B, D, E, C, F, G
4th Iteration ----> A, B, D, H, I, E, C, F, K, G
The method will find the goal node in the fourth iteration.
Completeness:
If the branching factor is finite, this procedure is complete.
Time Complexity:
Let b be the branching factor and d the depth of the shallowest goal; the
worst-case time complexity is then O(b^d).
Space Complexity:
The space complexity of IDDFS will be O(bd).
Optimal:
If path cost is a non-decreasing function of node depth, the IDDFS algorithm
is optimal.
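A minimal Python sketch of IDDFS follows, using the example tree from this section (the parent-child structure is inferred from the iteration lists above, so treat it as an assumption).

```python
def depth_limited_search(graph, node, goal, limit):
    """DFS that never descends more than `limit` edges below `node`."""
    if node == goal:
        return [node]
    if limit == 0:
        return None
    for child in graph.get(node, []):
        result = depth_limited_search(graph, child, goal, limit - 1)
        if result is not None:
            return [node] + result
    return None

def iddfs(graph, start, goal, max_depth=10):
    """Raise the depth limit one level at a time until the goal is found."""
    for depth in range(max_depth + 1):
        result = depth_limited_search(graph, start, goal, depth)
        if result is not None:
            return result
    return None

# tree assumed from the iteration lists: A -> B,C; B -> D,E; C -> F,G; D -> H,I; F -> K
tree = {'A': ['B', 'C'], 'B': ['D', 'E'], 'C': ['F', 'G'],
        'D': ['H', 'I'], 'F': ['K']}
print(iddfs(tree, 'A', 'K'))   # ['A', 'C', 'F', 'K']
```

The repeated work mentioned under disadvantages is visible here: every call to `iddfs` with a larger limit re-expands the shallow nodes all over again.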
Bidirectional Search Algorithm
To discover the goal node, the bidirectional search algorithm does two
simultaneous searches, one from the initial state (forward-search) and the
other from the goal node (backward-search). Bidirectional search splits a
single search graph into two small subgraphs, one starting from a beginning
vertex and the other from the destination vertex. When these two graphs
intersect, the search comes to an end.
BFS, DFS, DLS, and other search algorithms can be used in bidirectional
search.
Advantages
Searching in both directions is quick.
It takes less memory to do a bidirectional search.
Disadvantages
The bidirectional search tree is challenging to implement.
In bidirectional search, the objective state should be known ahead of time.
Example
The bidirectional search technique is used in the search tree below. One
graph/tree is divided into two sub-graphs using this approach. In the forward
direction, it begins at node 1 and in the reverse direction, it begins at goal node
16.
The process comes to a halt at node 9, where the two searches meet.
Completeness: If we use BFS in both searches, we get a complete bidirectional
search.
Time Complexity: The time complexity of bidirectional search using BFS
is O(b^(d/2)), since each of the two searches only needs to reach half the depth.
Space Complexity: The space complexity of bidirectional search is likewise O(b^(d/2)).
Optimal: Bidirectional search is Optimal.
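A sketch of bidirectional BFS in Python is given below. It assumes an undirected graph (the adjacency map lists neighbours in both directions); the tiny chain graph is an assumption for illustration, not the 16-node example above.

```python
from collections import deque

def bidirectional_search(graph, start, goal):
    """Run two BFS frontiers at once; stop when they intersect."""
    if start == goal:
        return [start]
    fwd, bwd = {start: [start]}, {goal: [goal]}   # node -> path from its root
    qf, qb = deque([start]), deque([goal])
    while qf and qb:
        # expand one layer of the forward frontier
        for _ in range(len(qf)):
            node = qf.popleft()
            for nbr in graph.get(node, []):
                if nbr in bwd:                    # frontiers intersect: join paths
                    return fwd[node] + [nbr] + bwd[nbr][-2::-1]
                if nbr not in fwd:
                    fwd[nbr] = fwd[node] + [nbr]
                    qf.append(nbr)
        # expand one layer of the backward frontier
        for _ in range(len(qb)):
            node = qb.popleft()
            for nbr in graph.get(node, []):
                if nbr in fwd:
                    return fwd[nbr] + bwd[node][::-1]
                if nbr not in bwd:
                    bwd[nbr] = bwd[node] + [nbr]
                    qb.append(nbr)
    return None

# assumed undirected chain 1 - 2 - 3 - 4
graph = {1: [2], 2: [1, 3], 3: [2, 4], 4: [3]}
print(bidirectional_search(graph, 1, 4))   # [1, 2, 3, 4]
```

The bookkeeping needed to stitch the two half-paths together is exactly the implementation difficulty the disadvantages list refers to.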
Informed Search Algorithms
So far, we've discussed uninformed search algorithms that scoured the search
space for all possible answers to the problem without having any prior
knowledge of the space. However, an informed search algorithm uses knowledge
such as how far we are from the goal, the path cost so far, and how to reach
the goal node. This knowledge allows agents to explore less of the search
space and find the goal node more quickly.
For huge search spaces, the informed search algorithm is more useful. Because
the informed search algorithm is based on the concept of heuristics, it is also
known as heuristic search.
Heuristics function: Informed Search employs a heuristic function to
determine the most promising path. It takes the agent's current state as input
and outputs an estimate of how near the agent is to the goal. The heuristic
method, on the other hand, may not always provide the optimum solution, but
it guarantees that a good solution will be found in a fair amount of time. A
heuristic function measures how close a state is to the goal state. It
estimates the cost of an optimal path between two states and is represented by
h(n). The heuristic function's value is always positive.
Admissibility of the heuristic function is given as:
h(n) <= h*(n)
Here h(n) is the heuristic estimate, and h*(n) is the true cost of the cheapest
path from n to the goal. A heuristic is admissible when it never overestimates:
the estimated cost must be less than or equal to the true cost.
Pure Heuristic Search
The simplest type of heuristic search algorithm is pure heuristic search. It
expands nodes in order of their heuristic value h(n). It maintains two lists:
an OPEN list and a CLOSED list. Nodes that have already been expanded go in
the CLOSED list, and nodes that have not yet been expanded go in the OPEN list.
Each iteration, the lowest heuristic value node n is extended, and all of its
successors are generated, and n is added to the closed list. The algorithm keeps
running until a goal state is discovered.
We shall cover two main algorithms in the informed search, which are listed
below:
Best First Search Algorithm(Greedy search)
A* Search Algorithm
Best-first Search Algorithm (Greedy Search)
The greedy best-first search algorithm always chooses the path that appears to
be the most appealing at the time. It's the result of combining depth-first and
breadth-first search algorithms. It makes use of the heuristic function as well
as search. We can combine the benefits of both methods with best-first search.
At each step, we can use best-first search to select the most promising node.
In the best-first search method we expand the node that appears closest to the
goal node, and the closeness is estimated using a heuristic function, i.e.
f(n) = h(n)
where h(n) = estimated cost from node n to the goal.
The greedy best-first algorithm is implemented using a priority queue.
Best first search algorithm:
Stage 1: Place the starting node into the OPEN list.
Stage 2: If the OPEN list is empty, Stop and return failure.
Stage 3: Remove the node n from the OPEN list which has the lowest value of
h(n), and place it in the CLOSED list.
Stage 4: Expand the node n, and generate the successors of node n.
Stage 5: Check each of node n's descendants to see if any of them is a goal
node. Return success and end the search if any successor node is a goal node;
otherwise, proceed to Stage 6.
Stage 6: The algorithm looks for the evaluation function f(n) for each
successor node, then determines if the node has been in the OPEN or
CLOSED list. Add the node to the OPEN list if it isn't already on both lists.
Stage 7: Return to Stage 2.
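The stages above can be sketched as a short Python function. For brevity this sketch tests the goal when a node is popped rather than when successors are generated (a common simplification of Stage 5); the graph and heuristic values are assumptions for illustration.

```python
import heapq

def greedy_best_first(graph, h, start, goal):
    """Always expand the OPEN node with the smallest heuristic value h(n)."""
    open_list = [(h[start], start, [start])]       # Stage 1: seed the OPEN list
    closed = set()
    while open_list:                               # Stage 2: fail if OPEN empties
        _, node, path = heapq.heappop(open_list)   # Stage 3: lowest h(n)
        if node == goal:
            return path
        closed.add(node)
        for nbr in graph.get(node, []):            # Stage 4: generate successors
            if nbr not in closed:                  # Stage 6: skip expanded nodes
                heapq.heappush(open_list, (h[nbr], nbr, path + [nbr]))
    return None

# assumed toy problem: h pulls the search toward B even though A also leads on
graph = {'S': ['A', 'B'], 'A': ['C'], 'B': ['G'], 'C': []}
h = {'S': 5, 'A': 3, 'B': 2, 'C': 4, 'G': 0}
print(greedy_best_first(graph, h, 'S', 'G'))   # ['S', 'B', 'G']
```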
Advantages
By combining the benefits of both algorithms, best first search may transition
between BFS and DFS.
This method outperforms the BFS and DFS algorithms in terms of efficiency.
Disadvantages:
In the worst-case scenario, it can act like an unguided depth-first search.
As with DFS, it's possible to get stuck in a loop.
This algorithm is not optimal.
Example:
Consider the search problem below, which we'll solve with greedy best-first
search. Each node is extended at each iteration using the evaluation function
f(n)=h(n), as shown in the table below.
We're going to use two lists in this example: OPEN and CLOSED Lists. The
iterations for traversing the aforementioned example are listed below.
Only the nodes with the lowest value of f(n) are extended at each point in the
search space, and the procedure ends when the goal node is located.
Algorithm of A* search
Stage 1: Place the starting node in the OPEN list.
Stage 2: If the OPEN list is empty, return failure and stop.
Stage 3: Select the node from the OPEN list which has the smallest value of
the evaluation function (g+h). If node n is the goal node, return success and
stop; otherwise
Stage 4: Expand node n, generate all of its successors, and put n into the
CLOSED list. For each successor n', check whether n' is already in the OPEN or
CLOSED list; if not, compute the evaluation function for n' and place it into
the OPEN list.
Stage 5: Else, if node n' is already in OPEN or CLOSED, attach it to the back
pointer which reflects the lowest g(n') value.
Stage 6: Return to Stage 2.
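The stages above can be sketched in Python as follows. The small graph, step costs, and heuristic values are assumptions chosen to be consistent with the worked example later in this section; the back-pointer update of Stage 5 is handled by keeping the best-known g value per node.

```python
import heapq

def a_star(graph, h, start, goal):
    """Expand the OPEN node with the smallest f(n) = g(n) + h(n)."""
    open_list = [(h[start], 0, start, [start])]   # entries: (f, g, node, path)
    best_g = {start: 0}                           # cheapest g(n') found so far
    while open_list:
        f, g, node, path = heapq.heappop(open_list)
        if node == goal:
            return g, path
        for nbr, step in graph.get(node, []):
            g2 = g + step
            if g2 < best_g.get(nbr, float('inf')):   # keep only the cheapest route
                best_g[nbr] = g2
                heapq.heappush(open_list, (g2 + h[nbr], g2, nbr, path + [nbr]))
    return None

# assumed edge costs and heuristic, matching the f-values in the worked example
graph = {'S': [('A', 1), ('G', 10)], 'A': [('B', 2), ('C', 1)],
         'B': [('D', 5)], 'C': [('D', 3), ('G', 4)]}
h = {'S': 5, 'A': 3, 'B': 4, 'C': 2, 'D': 6, 'G': 0}
print(a_star(graph, h, 'S', 'G'))   # (6, ['S', 'A', 'C', 'G'])
```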
Advantages
The A* search algorithm performs better than the other search algorithms
discussed so far.
The A* search algorithm is optimal and complete.
This method can solve very complex problems.
Disadvantages
Because it is primarily reliant on heuristics and approximation, it does not
always yield the shortest path.
The A* search algorithm has some concerns with complexity.
The fundamental disadvantage of A* is that it requires a lot of memory
because it maintains all created nodes in memory, which makes it unsuitable
for a variety of large-scale issues.
Example
We'll use the A* method to explore the given graph in this example. We'll
calculate the f(n) of each state using the formula f(n)= g(n) + h(n), where g(n)
is the cost of reaching any node from the start state.
We'll use the OPEN and CLOSED lists here.
Solution
Initialization: {(S, 5)}
Iteration1: {(S--> A, 4), (S-->G, 10)}
Iteration2: {(S--> A-->C, 4), (S--> A-->B, 7), (S-->G, 10)}
Iteration3: {(S--> A-->C--->G, 6), (S--> A-->C--->D, 11), (S--> A-->B, 7),
(S-->G, 10)}
Iteration 4 will give the final result, as S--->A--->C--->G it provides the
optimal path with cost 6.
Strategies
The A* algorithm returns the first path found, and it does not search
all remaining paths.
The efficiency of the A* algorithm depends on the quality of the heuristic.
The A* algorithm expands all nodes which satisfy the condition f(n) <= C*,
where C* is the cost of the optimal solution.
Complete: A* algorithm is complete as long as:
The branching factor is finite.
Every action has a fixed, positive cost.
Optimal: A* search algorithm is optimal if it follows below two conditions:
Admissible: The first requirement for optimality is that h(n) be an admissible
heuristic for A* tree search. An admissible heuristic is optimistic: it never
overestimates the cost to reach the goal.
Consistency: For A* graph search only, the second required condition is
consistency.
A* tree search will always find the least-cost path if the heuristic
function is admissible.
Time Complexity: The time complexity of the A* search algorithm depends on the
heuristic function; in the worst case the number of nodes expanded is
exponential in the depth of the solution d. So, where b is the branching
factor, the time complexity is O(b^d).
Space Complexity: The space complexity of the A* search algorithm is also
O(b^d), since it keeps all generated nodes in memory.
Heuristic Functions in Artificial Intelligence
Heuristic Functions in AI: As we've already seen, an informed search makes
use of heuristic functions in order to get closer to the goal node. As a result,
there are various ways to get from the present node to the goal node in a
search tree. It is undeniably important to choose a decent heuristic function.
The usefulness of a heuristic function is determined by its efficiency: the
more information it carries about the problem, the less of the search space
needs to be explored.
A heuristic function can help solve some toy problems more efficiently, such
as 8-puzzle, 8-queen, tic-tac-toe, and so on. Let's have a look at how:
Consider the eight-puzzle issue below, which has a start and a target state. Our
goal is to slide the tiles from the current/start state into the goal state in the
correct order. There are four possible movements: left, right, up, and down.
There are various ways to transform the current/start state to the desired state,
but we can solve the problem more efficiently by using the heuristic function
h(n).
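One common heuristic for the 8-puzzle is the number of misplaced tiles, sketched below. The particular start board is an assumption chosen so that h(n) = 3, matching the reduction from 3 to 0 described next.

```python
def misplaced_tiles(state, goal):
    """h(n) = number of tiles not in their goal position (blank, 0, excluded)."""
    return sum(1 for s, g in zip(state, goal) if s != 0 and s != g)

# boards are row-major 3x3 tuples; 0 marks the blank
start = (1, 2, 3, 4, 5, 0, 6, 7, 8)   # assumed start state: tiles 6, 7, 8 misplaced
goal  = (1, 2, 3, 4, 5, 6, 7, 8, 0)
print(misplaced_tiles(start, goal))   # 3
print(misplaced_tiles(goal, goal))    # 0
```

This heuristic is admissible: every misplaced tile needs at least one move, so the count never overestimates the true solution length.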
The heuristic value decreases from h(n) = 3 at the start state to h(n) = 0 at
the goal state, as seen in the state
space tree above. However, depending on the requirement, we can design and
employ a number of heuristic functions. A heuristic function h(n) can
alternatively be defined as the knowledge needed to solve a problem more
efficiently, as shown in the previous example. The information can be related
to the nature of the state, the cost of changing states, the characteristics of
target nodes, and so on, and is stated as a heuristic function.
Properties of a Heuristic search Algorithm
The following qualities of a heuristic search algorithm result from the use of
heuristic functions in a heuristic search algorithm:
Admissible Condition: An algorithm is said to be admissible if it is
guaranteed to return an optimal solution.
Completeness: If an algorithm ends with a solution, it is said to be complete (if
the solution exists).
Dominance Property: If A1 and A2 are both admissible heuristic algorithms
with heuristic functions h1 and h2, then A1 is said to dominate A2 if
h1(n) >= h2(n) for every node n.
Optimality Property: If an algorithm is complete, admissible, and dominates
the other candidate algorithms, it is the best and will almost always produce
the optimal result.
Local Search Algorithms and Optimization Problem
The informed and uninformed search expands the nodes in two ways: by
remembering different paths and selecting the best suitable path, which leads
to the solution state required to reach the destination node. But, in addition to
these "classical search algorithms," there are some "local search algorithms"
that ignore path cost and focus just on the solution-state required to reach the
destination node.
Instead of remembering multiple paths, a local search algorithm works by
moving from the current node to one of its neighbours, keeping track of only
the current state.
Although local search algorithms are not systematic, still they have the
following two advantages:
Because they keep only a single current state, local search algorithms
use very little, usually a constant, amount of memory.
In huge or infinite state spaces, where classical or systematic
algorithms fail, they frequently discover a suitable solution.
Is it possible to use a local search algorithm to solve a pure optimization
problem?
Yes, the local search technique works for pure optimization problems. In a
pure optimization problem every node can be a solution; according to the
objective function, the goal is to discover the best state out of all of
them. The difficulty is that a pure optimization formulation does not
describe a path for getting from the current state to the goal state.
In various contexts of optimization issues, an objective function is a function
whose value is either minimised or maximised. An objective function in
search algorithms can be the path cost to the goal node, for example.
Working of a Local search algorithm
Let's look at how a local search algorithm works with the help of an example:
Consider the state-space landscape below, which includes both:
Location: It is defined by the state.
Elevation: The value of the objective function or heuristic cost function
defines it.
Simulated Annealing
Simulated annealing is a variant of the hill climbing algorithm. It works on
the current state but, instead of choosing the best move, it chooses a
random move. If the move improves the existing situation, it is always
accepted as a step toward the solution state; otherwise, the move is accepted
with some probability smaller than one, and that probability shrinks as the
"temperature" falls. This search method was first utilised to tackle VLSI
layout challenges in 1980. It is also used for plant layout and
other large-scale optimization projects.
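The acceptance rule described above can be sketched as follows. The cost function, neighbour generator, and cooling schedule here are all assumptions chosen for a tiny one-dimensional demonstration.

```python
import math
import random

def simulated_annealing(cost, neighbor, state, t0=10.0, cooling=0.95, steps=500):
    """Accept worse moves with probability exp(-delta/T); T decays each step."""
    t = t0
    for _ in range(steps):
        nxt = neighbor(state)
        delta = cost(nxt) - cost(state)
        # always accept improvements; accept worse moves with prob < 1
        if delta < 0 or random.random() < math.exp(-delta / t):
            state = nxt
        t = max(t * cooling, 1e-9)            # cool the temperature
    return state

random.seed(0)  # deterministic demo
best = simulated_annealing(cost=lambda x: (x - 3) ** 2,          # minimum at x = 3
                           neighbor=lambda x: x + random.uniform(-1, 1),
                           state=0.0)
print(round(best, 2))
```

Early on, the high temperature lets the search escape local minima; as T shrinks, the rule degenerates into plain hill climbing.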
Local Beam Search
Local beam search is not the same as random-restart search: instead of a
single state, it keeps track of k states. It begins with k randomly generated
states and expands all of them at each step. The search ends with success if
any state is a goal state; otherwise, it chooses the best k successors from
the entire list and repeats the procedure. Whereas each search process runs
independently in random-restart search, in local beam search useful
information is shared among the k parallel search threads.
Disadvantages of Local Beam search
The absence of diversity among the k states may make this search difficult.
It's a more expensive version of the hill climbing search.
Note: Stochastic beam search is a variant of local beam search that chooses k
successors at random, with probability proportional to their value, rather
than taking the best k successors.
There are 2^8 = 256 possible belief states, but only 12 reachable belief states.
Cryptarithmetic Problem
Cryptarithmetic Problem is a form of constraint satisfaction problem in which
the puzzle is about digits and their unique substitution with letters or
other symbols. In a cryptarithmetic problem, the digits (0-9) are replaced by
letters or symbols, and the goal is to assign a digit to each letter so that
the resulting arithmetic statement is correct.
The following are the rules or constraints for a cryptarithmetic problem:
Each letter must be substituted by a unique digit.
The outcome must adhere to the predetermined arithmetic rules, such
as 2+2 = 4, and nothing else.
Only digits from 0 to 9 may be used.
When performing the addition, the carry into any column can be at
most 1.
The problem can be approached from either the left-hand side (L.H.S)
or the right-hand side (R.H.S).
Let's use an example to better grasp the cryptarithmetic problem and its
constraints:
S E N D + M O R E = M O N E Y is a cryptarithmetic problem.
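A brute-force solver for this puzzle can be sketched as below. It exploits one deduction up front: M must be 1, because the carry into the new leading column of MONEY can be at most 1, and leading letters cannot be 0.

```python
from itertools import permutations

def solve_send_more_money():
    """Try every digit assignment for SEND + MORE = MONEY, with M fixed to 1."""
    letters = 'SENDORY'                        # M is handled separately
    digits = [0, 2, 3, 4, 5, 6, 7, 8, 9]      # 1 is reserved for M
    for perm in permutations(digits, len(letters)):
        d = dict(zip(letters, perm))
        d['M'] = 1
        if d['S'] == 0:                        # leading letters cannot be zero
            continue
        send = 1000*d['S'] + 100*d['E'] + 10*d['N'] + d['D']
        more = 1000*d['M'] + 100*d['O'] + 10*d['R'] + d['E']
        money = 10000*d['M'] + 1000*d['O'] + 100*d['N'] + 10*d['E'] + d['Y']
        if send + more == money:
            return send, more, money
    return None

print(solve_send_more_money())   # (9567, 1085, 10652)
```

The unique solution is 9567 + 1085 = 10652. A real CSP solver would instead propagate the column constraints to prune most assignments before trying them.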
k-Consistency
Local Search for CSPs
Tree-Structured CSPs
For the map-coloring problem, part of the search tree was constructed using
simple backtracking.
Forward checking
Forward checking is a technique for making better use of constraints during
search. When a variable X is assigned, the forward checking procedure
examines each unassigned variable Y that is linked to X by a constraint and
deletes any value from Y's domain that is incompatible with the value chosen
for X. The progress of a map-coloring search with forward checking is shown
in the diagram below.
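The pruning step described above can be sketched directly. The three-region map is an assumption (a small triangle of mutually adjacent regions, in the spirit of the map-coloring problem); each assignment deletes the chosen value from every constrained neighbour's domain.

```python
def forward_check(domains, neighbors, var, value):
    """Assign var=value, then delete that value from each unassigned
    neighbor's domain; return the pruned domains, or None on a wipe-out."""
    pruned = {v: set(dom) for v, dom in domains.items()}   # copy, don't mutate
    pruned[var] = {value}
    for nbr in neighbors[var]:
        pruned[nbr].discard(value)        # value is incompatible with var's colour
        if not pruned[nbr]:               # empty domain: dead end detected early
            return None
    return pruned

# assumed toy map: three mutually adjacent regions
neighbors = {'WA': ['NT', 'SA'], 'NT': ['WA', 'SA'], 'SA': ['WA', 'NT']}
domains = {v: {'red', 'green', 'blue'} for v in neighbors}
after = forward_check(domains, neighbors, 'WA', 'red')
print(sorted(after['NT']))   # ['blue', 'green']
```

Returning None as soon as some neighbour's domain empties is what lets forward checking cut off doomed branches before backtracking search descends into them.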
Example Explanation:
MAX has 9 possible moves from the start because he is the first player.
Both players alternately place x and o until we reach a leaf node where
one player has three in a row or all squares are filled.
Both players will compute the best possible utility versus an optimum
adversary for each node, called the minimax value.
Assume that both players are well-versed in tic-tac-toe and are playing
their best game. Each player is trying everything he can to keep the
other from winning. In the game, MIN is working against Max.
So, in the game tree, we have a Max layer, a MIN layer, and each layer
is referred to as Ply. The game proceeds to the terminal node, with
Max placing x and MIN placing o to prevent Max from winning.
Either MIN or MAX wins, or the game ends in a tie. This game tree
represents the entire search space of possibilities in which MIN and
MAX play tic-tac-toe, taking turns alternately.
As a result, the process for adversarial Search for the Minimax is as follows:
Its goal is to figure out the best way for MAX to win the game.
It employs a depth-first search strategy.
The ideal leaf node in the game tree could appear at any level of the
tree.
Minimax values should be propagated up the tree until the terminal
node is found.
The optimal strategy in a particular game tree can be determined by looking at
the minimax value of each node, which can be expressed as MINIMAX (n). If
MAX prefers to move to a maximum value state and MIN prefers to move to a
minimum value state, then:
Mini-Max Algorithm
In decision-making and game theory, the mini-max algorithm is a
recursive or backtracking method. It suggests the best move for the
player, provided that the opponent is likewise playing well.
The Mini-Max algorithm searches the game tree using recursion.
In AI, the Min-Max algorithm is mostly employed for game play.
Chess, checkers, tic-tac-toe, go, and other two-player games are
examples. This Algorithm calculates the current state's minimax
choice.
The game is played by two players, one named MAX and the other
named MIN, in this algorithm.
The two players are adversaries: each tries to secure the greatest
benefit for himself, which leaves the smallest benefit for the opponent.
Both players in the game are adversaries, with MAX selecting the
maximum value and MIN selecting the minimum value.
For the investigation of the entire game tree, the minimax method uses
a depth-first search strategy.
The minimax algorithm descends all the way to the tree's terminal
node, then recursively backtracks the tree.
Pseudo-code for Min-Max Algorithm

function minimax(node, depth, maximizingPlayer) is
    if depth == 0 or node is a terminal node then
        return static evaluation of node
    if maximizingPlayer then                 // for Maximizer player
        maxEva = -infinity
        for each child of node do
            eva = minimax(child, depth-1, false)
            maxEva = max(maxEva, eva)        // maximum of the values
        return maxEva
    else                                     // for Minimizer player
        minEva = +infinity
        for each child of node do
            eva = minimax(child, depth-1, true)
            minEva = min(minEva, eva)        // minimum of the values
        return minEva

Initial call:
minimax(rootNode, 3, true)
Working of Min-Max Algorithm
A simple example can be used to explain how the minimax algorithm
works. We've included an example of a game-tree below, which
represents a two-player game.
There are two players in this scenario, one named Maximizer and the
other named Minimizer.
Maximizer will strive for the highest possible score, while Minimizer
will strive for the lowest possible score.
Because this algorithm uses DFS, we must go all the way through the
leaves to reach the terminal nodes in this game-tree.
The terminal values are given at the terminal node, so we'll compare them and
retrace the tree till we reach the original state. The essential phases in solving
the two-player game tree are as follows:
Step-1: The algorithm constructs the full game-tree in the first phase, then
applies the utility function to obtain the utility values for the terminal states.
Let's assume A is the tree's initial state in the diagram below. Assume that the
maximizer takes the first turn with a worst-case initial value of -infinity, and
the minimizer takes the second turn with a worst-case initial value of +infinity.
Step 2: Now we find the utility values for the Maximizer. Its initial value
is -∞, so each terminal value is compared with that initial value to
determine the upper nodes' values. It selects the best option from all of
them.
For node D: max(-1, -∞) => max(-1, 4) = 4
For node E: max(2, -∞) => max(2, 6) = 6
For node F: max(-3, -∞) => max(-3, -5) = -3
For node G: max(0, -∞) => max(0, 7) = 7
Step 3: In the next step it is the Minimizer's turn, so it will compare all
node values with +∞ and determine the third-layer node values.
For node B= min(4,6) = 4
For node C= min (-3, 7) = -3
Step 4: Now it's Maximizer's turn, and it'll choose the maximum value of all
nodes and locate the root node's maximum value. There are only four layers in
this game tree, so we can go to the root node right away, but there will be
more layers in real games.
For node A max(4, -3)= 4
That was the complete workflow of the minimax two player game.
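The worked example above can be checked with a compact Python version of the pseudocode, where the game tree is encoded as nested lists with the terminal values at the leaves:

```python
def minimax(node, maximizing):
    """Nested-list game tree: a number is a terminal node's static evaluation."""
    if isinstance(node, (int, float)):
        return node
    values = [minimax(child, not maximizing) for child in node]
    return max(values) if maximizing else min(values)

# tree from the worked example: A -> (B, C), B -> (D, E), C -> (F, G)
tree = [[[-1, 4], [2, 6]],     # D = max(-1, 4) = 4,  E = max(2, 6) = 6,  B = min = 4
        [[-3, -5], [0, 7]]]    # F = max(-3, -5) = -3, G = max(0, 7) = 7, C = min = -3
print(minimax(tree, True))     # 4, matching max(4, -3) at the root A
```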
Properties of Mini-Max algorithm
Complete- The Min-Max algorithm is complete. It will definitely find a
solution (if one exists) in a finite search tree.
Optimal- The Min-Max algorithm is optimal if both opponents play optimally.
Time complexity- Since it performs DFS on the game tree, the time complexity
of the Min-Max algorithm is O(b^m), where b is the branching factor of the
game tree and m is the maximum depth of the tree.
Space Complexity- The space complexity of the Min-Max algorithm is similar to
DFS, which is O(bm).
Limitation of the minimax Algorithm
The biggest disadvantage of the minimax algorithm is that it becomes
extremely slow while playing complex games like chess or go. This style of
game contains a lot of branching, and the player has a lot of options to choose
from.
Alpha-Beta Pruning
A modified variant of the minimax method is alpha-beta pruning. It's a way
for improving the minimax algorithm.
The number of game states that the minimax search algorithm must investigate
grows exponentially with the depth of the tree, as we saw with the minimax
search method. We can't get rid of the exponent, but we can reduce it by half.
As a result, there is a technique known as pruning that allows us to compute
the correct minimax choice without having to inspect every node of the game
tree. It is named alpha-beta pruning because it involves two threshold
parameters, alpha and beta, which bound future expansion. It is also called
the Alpha-Beta Algorithm.
Alpha-beta pruning can be done at any depth in a tree, and it can sometimes
prune the entire sub-tree as well as the tree leaves.
The two-parameter can be defined as:
a. Alpha: The best (highest-value) choice we have found so far at
any point along the path of Maximizer. The initial value of
alpha is -∞.
b. Beta: The best (lowest-value) choice we have found so far at
any point along the path of Minimizer. The initial value of beta
is +∞.
Applying alpha-beta pruning to a standard minimax algorithm produces the same
result as the plain algorithm, but it removes nodes that cannot affect the
final decision and would only slow down the procedure. Pruning these nodes
therefore speeds up the search.
Condition for Alpha-beta pruning
The main condition required for alpha-beta pruning is:
α>=β
Key points about alpha-beta pruning
Only the value of alpha will be updated by the Max player.
Only the beta value will be updated by the Min player.
Instead of alpha and beta values, node values will be sent to upper nodes while
retracing the tree.
Only the alpha and beta values will be passed to the child nodes.
Stage 2: At node D, the value of α will be calculated, as it is Max's turn.
The value of α is compared first with 2 and then with 3; max(2, 3) = 3 will be
the value of α at node D, and the node value will also be 3.
Stage 3: The algorithm now backtracks to node B, where the value of β will
change, as it is Min's turn. β = +∞ is compared with the value returned by
node D, i.e. min(∞, 3) = 3; hence at node B, α = -∞ and β = 3.
In the next step, the algorithm traverses the next successor of node B, which
is node E, and the values α = -∞ and β = 3 are passed down.
Stage 4: At node E, Max will take its turn, and the value of alpha will change.
The current value of alpha will be compared with 5, so max (-∞, 5) = 5, hence
at node E α= 5 and β= 3, where α>=β, so the right successor of E will be
pruned, and algorithm will not traverse it, and the value at node E will be 5.
Stage 5: Next, the algorithm backtracks from node B to node A. At node A the
value of alpha changes: the maximum available value is 3, as max(-∞, 3) = 3,
and β = +∞. These two values are now passed to the right successor of A,
which is node C.
At node C, α=3 and β= +∞, and the same values will be passed on to node F.
Stage 6: At node F, the value of α is compared with the left child, which is
0, giving max(3, 0) = 3, and then with the right child, which is 1, giving
max(3, 1) = 3. α remains 3, but the node value of F becomes 1.
Stage 7: Node F returns the value 1 to node C. At C, α = 3 and β = +∞; here
the value of beta changes: comparing with 1 gives min(∞, 1) = 1. Now at C,
α = 3 and β = 1, which again satisfies the condition α >= β, so the next
child of C, which is G, is pruned, and the algorithm does not compute the
entire sub-tree of G.
Stage 8: C now returns the value 1 to A, where the best value for A is
max(3, 1) = 3. The final game tree shows the nodes that were computed and the
nodes that were never computed. Hence the optimal value for the maximizer is
3 in this example.
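The staged walkthrough above can be reproduced in Python. The leaves shown in the stages are 2 and 3 under D, 5 under E, and 0 and 1 under F; the pruned leaves (E's second child and all of G) are never stated, so the values 9, 7, and 5 below are assumptions that do not affect the result:

```python
def alphabeta(node, maximizing, alpha=float('-inf'), beta=float('inf'), visited=None):
    """Minimax with alpha-beta cut-offs; `visited` records evaluated leaves."""
    if isinstance(node, (int, float)):
        if visited is not None:
            visited.append(node)
        return node
    if maximizing:
        value = float('-inf')
        for child in node:
            value = max(value, alphabeta(child, False, alpha, beta, visited))
            alpha = max(alpha, value)
            if alpha >= beta:            # cut-off: prune remaining children
                break
    else:
        value = float('inf')
        for child in node:
            value = min(value, alphabeta(child, True, alpha, beta, visited))
            beta = min(beta, value)
            if alpha >= beta:            # cut-off: prune remaining children
                break
    return value

# leaves: D = (2, 3), E = (5, 9), F = (0, 1), G = (7, 5); 9, 7, 5 are assumed
tree = [[[2, 3], [5, 9]], [[0, 1], [7, 5]]]
leaves = []
print(alphabeta(tree, True, visited=leaves))   # 3
print(leaves)   # [2, 3, 5, 0, 1]: E's second leaf and all of G were pruned
```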
The next step is to learn how to make good decisions. Naturally, we want to
pick the move that leads to the best position. In a game with chance nodes,
however, positions do not have definite minimum and maximum values. Instead,
we can only calculate the expected value of a position: the average over all
possible outcomes of the chance nodes.
As a result, for games with chance nodes we generalise the deterministic
minimax value to an expected-minimax value. Terminal nodes, and MAX and MIN
nodes for which the dice roll is already known, work exactly the same way as
before. For chance nodes, the expected value is the sum over all outcomes,
each weighted by the probability of that chance event.
EXPECTIMINIMAX(s) = Σ_r P(r) · EXPECTIMINIMAX(RESULT(s, r)) for chance nodes,
where r is a possible dice roll (or other random event) and RESULT(s, r)
denotes the same state as s, but with the addition that the result of the
dice roll is r.
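The expectiminimax recursion described above can be sketched as follows. The tree encoding and the toy coin-flip example are assumptions for illustration: each node is either a number (terminal), a ('max'/'min', children) pair, or a ('chance', [(probability, child), ...]) pair.

```python
def expectiminimax(node):
    """Return the expected-minimax value of a tagged game-tree node."""
    if isinstance(node, (int, float)):        # terminal node: its utility
        return node
    kind, children = node
    if kind == 'max':
        return max(expectiminimax(c) for c in children)
    if kind == 'min':
        return min(expectiminimax(c) for c in children)
    # chance node: probability-weighted sum over the random outcomes r
    return sum(p * expectiminimax(c) for p, c in children)

# assumed toy tree: MAX picks between a risky fair coin and a safe one
tree = ('max', [('chance', [(0.5, 10), (0.5, 0)]),   # expected value 5.0
                ('chance', [(0.5, 4), (0.5, 4)])])   # expected value 4.0
print(expectiminimax(tree))   # 5.0
```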