Artificial Intelligence and Machine Learning (21CS54) Module - 1
There are three ways to do this: through introspection—trying to catch our own thoughts as
they go by; through psychological experiments—observing a person in action; and through
brain imaging—observing the brain in action. Once we have a sufficiently precise theory of
the mind, it becomes possible to express the theory as a computer program. If the program’s
input–output behavior matches corresponding human behavior, that is evidence that some of
the program’s mechanisms could also be operating in humans. For example, Allen Newell
and Herbert Simon, who developed GPS, the "General Problem Solver" (Newell and Simon,
1961), were not content merely to have their program solve problems correctly. They were
more concerned with comparing the trace of its reasoning steps to traces of human subjects
solving the same problems. The interdisciplinary field of cognitive science brings together
computer models from AI and experimental techniques from psychology to construct precise
and testable theories of the human mind.
Thinking rationally: The "laws of thought" approach
The Greek philosopher Aristotle was one of the first to attempt to codify "right thinking," that
is, irrefutable reasoning processes. His syllogisms provided patterns for argument structures
that always yielded correct conclusions when given correct premises; for example, "Socrates
is a man; all men are mortal; therefore, Socrates is mortal." These laws of thought were
supposed to govern the operation of the mind; their study initiated the field called logic.
Acting rationally: The rational agent approach
An agent is just something that acts (agent comes from the Latin agere, to do). Of course, all
computer programs do something, but computer agents are expected to do more: operate
autonomously, perceive their environment, persist over a prolonged time period, adapt to
change, and create and pursue goals. A rational agent is one that acts so as to achieve the best
outcome or, when there is uncertainty, the best expected outcome.
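As a minimal illustration of an agent that acts to achieve the best outcome, the sketch below implements a simple reflex agent for the classic two-square vacuum world. The scenario and all names here are standard textbook illustrations, not defined in this passage:

```python
# A minimal sketch of the agent abstraction described above: the agent
# perceives its environment and chooses the action with the best outcome.
# The two-square vacuum world (squares "A" and "B") is an assumed example.

class ReflexVacuumAgent:
    """A tiny rational agent for a two-square vacuum world."""

    def act(self, percept):
        location, status = percept
        if status == "Dirty":
            return "Suck"          # cleaning the current square is always best
        # otherwise move toward the square that may still be dirty
        return "Right" if location == "A" else "Left"

agent = ReflexVacuumAgent()
print(agent.act(("A", "Dirty")))   # Suck
print(agent.act(("A", "Clean")))   # Right
```

Even this trivial agent satisfies the definition above: it perceives its environment (the percept) and selects the action expected to yield the best outcome.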
THE FOUNDATIONS OF ARTIFICIAL INTELLIGENCE
• Philosophy
• Mathematics
• Economics
• Neuroscience
• Psychology
• Computer engineering
• Control theory and cybernetics
• Linguistics
Philosophy
• Can formal rules be used to draw valid conclusions?
• How does the mind arise from a physical brain?
• Where does knowledge come from?
• How does knowledge lead to action?
Mathematics
• What are the formal rules to draw valid conclusions?
• What can be computed?
• How do we reason with uncertain information?
Economics
• How should we make decisions so as to maximize payoff?
• How should we do this when others may not go along?
• How should we do this when the payoff may be far in the future?
Neuroscience
• How do brains process information?
Psychology
• How do humans and animals think and act?
Computer engineering
• How can we build an efficient computer?
Linguistics
• How does language relate to thought?
Year 1950: Alan Turing, an English mathematician and a pioneer of machine intelligence,
published "Computing Machinery and Intelligence", in which he proposed a test of a
machine's ability to exhibit intelligent behavior equivalent to human intelligence, now
called the Turing test.
The birth of Artificial Intelligence (1952-1956)
Year 1955: Allen Newell and Herbert A. Simon created the "first artificial intelligence
program", named "Logic Theorist". The program proved 38 of 52 mathematical theorems
and found new, more elegant proofs for some of them.
Year 1956: The term "Artificial Intelligence" was first adopted by American computer
scientist John McCarthy at the Dartmouth Conference. For the first time, AI was coined as
an academic field.
Around that time, high-level computer languages such as FORTRAN, LISP, and COBOL
were invented, and enthusiasm for AI was very high.
The golden years-Early enthusiasm (1956-1974)
Year 1966: Researchers emphasized developing algorithms that could solve mathematical
problems. Joseph Weizenbaum created the first chatbot, named ELIZA, in 1966.
Year 1972: The first intelligent humanoid robot, named WABOT-1, was built in Japan.
The first AI winter (1974-1980)
The period between 1974 and 1980 was the first AI winter. An AI winter refers to a period
in which computer scientists dealt with a severe shortage of government funding for AI
research.
During AI winters, public interest in artificial intelligence decreased.
A boom of AI (1980-1987)
Year 1980: After the AI winter, AI came back with "expert systems": programs that
emulate the decision-making ability of a human expert.
In the same year, the first national conference of the American Association of Artificial
Intelligence was held at Stanford University.
The second AI winter (1987-1993)
The duration between the years 1987 to 1993 was the second AI Winter duration.
Investors and governments again stopped funding AI research because of the high cost and
inefficient results. Even successful expert systems such as XCON proved expensive to maintain.
The emergence of intelligent agents (1993-2011)
Year 1997: IBM's Deep Blue beat world chess champion Garry Kasparov, becoming the
first computer to beat a reigning world chess champion.
Year 2002: For the first time, AI entered the home in the form of Roomba, a vacuum cleaner.
Year 2006: By this year, AI had entered the business world. Companies like Facebook,
Twitter, and Netflix started using AI.
Deep learning, big data and artificial general intelligence (2011-present)
Year 2011: IBM's Watson won Jeopardy!, a quiz show in which it had to answer complex
questions as well as riddles. Watson proved that it could understand natural language and
solve tricky questions quickly.
Year 2012: Google launched the Android app feature "Google Now", which could provide
predictive information to the user.
Year 2014: The chatbot "Eugene Goostman" won a competition based on the famous
"Turing test."
Year 2018: IBM's "Project Debater" debated complex topics with two master debaters and
performed extremely well.
Google demonstrated "Duplex", an AI virtual assistant that booked a hairdresser
appointment over the phone, and the person on the other end did not notice she was
talking to a machine.
AI has now developed to a remarkable level. Concepts such as deep learning, big data, and
data science are booming. Companies like Google, Facebook, IBM, and Amazon are
working with AI and creating remarkable systems. The future of Artificial Intelligence is
inspiring and promises ever higher intelligence.
Types of Artificial Intelligence:
Artificial Intelligence can be divided into various types. There are mainly two
categorizations: one based on the capabilities of AI and one based on the functionality of
AI. The following flow diagram explains the types of AI.
3. Super AI:
Super AI is a level of system intelligence at which machines could surpass human
intelligence and perform any task better than a human, with cognitive properties. It is an
outcome of general AI.
Key characteristics of strong AI include the ability to think, reason, solve puzzles, make
judgments, plan, learn, and communicate on its own.
Super AI is still a hypothetical concept of Artificial Intelligence; developing such systems
in the real world remains a world-changing task.
4. Self-Awareness
Self-aware AI is the future of Artificial Intelligence. These machines will be super
intelligent and will have their own consciousness, sentiments, and self-awareness.
They will be smarter than the human mind.
Self-aware AI does not yet exist in reality; it is a hypothetical concept.
PROBLEM-SOLVING AGENTS
In Artificial Intelligence, search techniques are universal problem-solving methods.
Rational agents, or problem-solving agents, in AI mostly use these search strategies or
algorithms to solve a specific problem and provide the best result. Problem-solving agents
are goal-based agents that use an atomic representation. In this topic, we will learn various
problem-solving search algorithms.
Search Algorithm Terminologies:
Search: Searching is a step-by-step procedure to solve a search problem in a given search
space. A search problem is defined by the following factors:
Search Space: The set of all possible solutions that a system may have.
Start State: The state from which the agent begins the search.
Goal test: A function that observes the current state and returns whether the goal state has
been reached.
Search tree: A tree representation of the search problem. The root of the search tree is the
root node, which corresponds to the initial state.
Actions: A description of all the actions available to the agent.
Transition model: A description of what each action does.
Path Cost: A function that assigns a numeric cost to each path.
Solution: An action sequence that leads from the start node to the goal node.
Optimal Solution: A solution that has the lowest path cost among all solutions.
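The terminology above can be collected into a small Python sketch. The class name, graph, and costs below are illustrative assumptions, not from the text:

```python
# Illustrative sketch of a search problem: start state, goal test,
# transition model (successors), and path cost, as defined above.

class SearchProblem:
    def __init__(self, start, goals, graph, costs):
        self.start = start            # start state
        self.goals = goals            # set of goal states
        self.graph = graph            # transition model: state -> successor states
        self.costs = costs            # (state, successor) -> numeric step cost

    def actions(self, state):
        return self.graph.get(state, [])   # actions available to the agent

    def goal_test(self, state):
        return state in self.goals         # has the goal state been reached?

    def path_cost(self, path):
        # sum of step costs along the path
        return sum(self.costs[(a, b)] for a, b in zip(path, path[1:]))

# A tiny assumed search space: two routes from S to G with different costs.
problem = SearchProblem(
    start="S",
    goals={"G"},
    graph={"S": ["A", "G"], "A": ["G"]},
    costs={("S", "A"): 1, ("A", "G"): 1, ("S", "G"): 5},
)
print(problem.goal_test("G"))              # True
print(problem.path_cost(["S", "A", "G"]))  # 2
```

Here ["S", "A", "G"] is the optimal solution: its path cost (2) is lower than that of the direct path ["S", "G"] (5).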
Properties of Search Algorithms:
Following are the four essential properties of search algorithms to compare the efficiency of
these algorithms:
Completeness: A search algorithm is said to be complete if it is guaranteed to return a
solution whenever at least one solution exists for any input.
Optimality: If the solution found by an algorithm is guaranteed to be the best solution
(lowest path cost) among all solutions, then it is said to be an optimal solution.
Time Complexity: Time complexity is a measure of the time an algorithm takes to
complete its task.
Space Complexity: It is the maximum storage space required at any point during the
search, as a function of the complexity of the problem.
Types of search algorithms
Based on whether they use problem-specific knowledge, search algorithms can be
classified into uninformed (blind) search and informed (heuristic) search algorithms.
Uninformed/Blind Search:
Uninformed search does not use any domain knowledge, such as closeness or the location
of the goal. It operates in a brute-force way, as it only includes information about how to
traverse the tree and how to identify leaf and goal nodes. Because the search tree is
searched without any information about the search space, such as the initial state,
operators, and goal test, it is also called blind search. It examines each node of the tree
until it reaches a goal node.
UNINFORMED SEARCH STRATEGIES
Uninformed search is a class of general-purpose search algorithms that operate in a
brute-force way. Uninformed search algorithms have no additional information about the
state or search space other than how to traverse the tree, so they are also called blind search.
1. Breadth-first Search
Example:
In the tree structure below, we show the traversal of the tree using the BFS algorithm from
the root node S to the goal node K. The BFS algorithm traverses in layers, so it will follow
the path shown by the dotted arrow, and the traversed path will be:
S ---> A ---> B ---> C ---> D ---> G ---> H ---> E ---> F ---> I ---> K
Time Complexity: The time complexity of BFS is given by the number of nodes traversed
until the shallowest goal node:
T(b) = 1 + b + b^2 + b^3 + ... + b^d = O(b^d)
where d = depth of the shallowest solution and b = branching factor (the number of
successors of each node).
Space Complexity: The space complexity of BFS is given by the memory size of the
frontier, which is O(b^d).
Completeness: BFS is complete, which means if the shallowest goal node is at some finite
depth, then BFS will find a solution.
Optimality: BFS is optimal if path cost is a non-decreasing function of the depth of the node.
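The layer-by-layer traversal described above can be sketched in Python. The adjacency list below is an illustrative assumption, not the tree from the figure:

```python
# A minimal breadth-first search over an adjacency-list graph: a FIFO
# queue expands all nodes at one depth before moving to the next, so the
# shallowest goal node is found first.

from collections import deque

def bfs(graph, start, goal):
    """Return the first path found from start to goal, or None."""
    frontier = deque([[start]])        # FIFO queue of paths (the frontier)
    visited = {start}
    while frontier:
        path = frontier.popleft()
        node = path[-1]
        if node == goal:               # goal test
            return path
        for child in graph.get(node, []):
            if child not in visited:
                visited.add(child)
                frontier.append(path + [child])
    return None

# Illustrative graph: K sits two levels below B.
graph = {"S": ["A", "B"], "A": ["C"], "B": ["D"], "D": ["K"]}
print(bfs(graph, "S", "K"))            # ['S', 'B', 'D', 'K']
```

Because the FIFO queue never expands a deeper node before a shallower one, the returned path reaches the goal at the minimum depth, matching the completeness and optimality properties above.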
2. Depth-first Search
Depth-first search is a recursive algorithm for traversing a tree or graph data structure.
It is called depth-first search because it starts from the root node and follows each path to
its greatest depth before moving to the next path.
DFS uses a stack data structure for its implementation.
The process of the DFS algorithm is similar to the BFS algorithm.
Note: Backtracking is an algorithm technique for finding all possible solutions using
recursion.
Advantage:
DFS requires less memory, as it only needs to store the stack of nodes on the path from the
root node to the current node.
It takes less time than the BFS algorithm to reach the goal node (if it traverses the right
path).
Disadvantage:
There is a possibility that many states keep re-occurring, and there is no guarantee of
finding a solution.
The DFS algorithm searches deep down and may sometimes enter an infinite loop.
Example:
In the search tree below, we show the flow of depth-first search, which follows the order:
Root node ---> left node ---> right node.
It starts searching from root node S and traverses A, then B, then D and E. After traversing
E, it backtracks the tree, as E has no other successor and the goal node has not yet been
found. After backtracking, it traverses node C and then G, where it terminates, as it has
found the goal node.
Completeness: The DFS algorithm is complete within a finite state space, as it will expand
every node within a finite search tree.
Time Complexity: The time complexity of DFS is equivalent to the number of nodes
traversed by the algorithm. It is given by:
T(b) = 1 + b + b^2 + b^3 + ... + b^m = O(b^m)
where m = the maximum depth of any node, which can be much larger than d (the depth of
the shallowest solution).
Space Complexity: The DFS algorithm needs to store only a single path from the root
node, hence the space complexity of DFS is equivalent to the size of the fringe set, which
is O(bm).
Optimality: The DFS algorithm is non-optimal, as it may generate a large number of steps
or a high-cost path to reach the goal node.
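The depth-first behaviour described above can be sketched in Python with an explicit stack. The adjacency list below is an illustrative assumption, not the tree from the figure:

```python
# A minimal depth-first search using an explicit LIFO stack: each branch
# is followed to its greatest depth before the search backtracks.

def dfs(graph, start, goal):
    """Return a path from start to goal found in depth-first order, or None."""
    stack = [[start]]                  # LIFO stack of paths
    visited = set()
    while stack:
        path = stack.pop()
        node = path[-1]
        if node == goal:               # goal test
            return path
        if node in visited:
            continue
        visited.add(node)
        # push children in reverse so the leftmost child is explored first
        for child in reversed(graph.get(node, [])):
            if child not in visited:
                stack.append(path + [child])
    return None

# Illustrative graph echoing the traversal above: A's subtree (B, D, E)
# is fully explored and backtracked before C leads to the goal G.
graph = {"S": ["A", "C"], "A": ["B", "E"], "B": ["D"], "C": ["G"]}
print(dfs(graph, "S", "G"))            # ['S', 'C', 'G']
```

Note that the returned path has length 2, even though a shallower goal could exist elsewhere; this is exactly why DFS is complete only in finite spaces and is not optimal.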