
Introduction to Artificial Intelligence

23ETC15A

Module 1
Introduction
Contents
• What is AI?
• History of AI.
• Agents and Environments, Structure of Agents.
• Types of Agents: Simple reflex agents, Model-based reflex
agent, Goal-based agents, Utility-based agents, Learning
agents.
What is AI?

Definitions of AI fall into four categories: Thinking Humanly, Acting Humanly, Thinking Rationally, and Acting Rationally.

Thinking Humanly
Definition:
1. Machines with minds.
2. Create a machine or system that can perform human activities like thinking, decision making, problem solving, and learning.

Thinking humanly means making a system or program think like a human.
Ask a person to explain how his/her brain connects different things during the thinking process: he/she will probably close both eyes and try to trace how he/she thinks, but he/she cannot explain or interpret the process.
Ways to catch the thoughts and see how they flow:
• Observe a person in action.
• Observe a person's brain in action.
Using these methods, if we are able to capture the human brain's actions and state them as a theory, then we can convert that theory into a computer program. If the input/output of the computer program matches human behavior, then it may be possible that a part of the program behaves like a human brain.
Acting Humanly
Definition:
1. When a task is performed by a human, certain intelligence is exhibited; the same intelligence is expected of a machine created for that task.
2. The study of how to make computers do things like people, or better than people.

To act humanly, a system would need the following capabilities:
• Natural Language Processing: to enable the system to communicate successfully in English.
• Knowledge Representation: to store what it knows or hears.
• Automated Reasoning: to use the stored information to answer questions and to draw conclusions.
• Machine Learning: to adapt to new circumstances.
Thinking Rationally
The "laws of thought" approach.
In AI, thinking rationally means thinking correctly: if the premises are true, then the conclusions drawn from them must also be true.
THINK RATIONALLY – LAWS OF THOUGHT
Thinking rationally means thinking:
• Perfectly
• In the right direction
• Without any mistakes
• With strong logic

Example (a classical syllogism from logical science):
    All men are warriors.
    Arjuna is a man.
    Therefore, Arjuna is a warrior.
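As a small illustration of the laws-of-thought idea, here is a minimal Python sketch (our own encoding, not a standard library) that derives the conclusion of the syllogism by simple forward chaining over a rule:

    # Facts and rules as (predicate, subject) / (premise -> conclusion) pairs.
    facts = {("man", "Arjuna")}
    rules = [("man", "warrior")]  # "All men are warriors"

    def infer(facts, rules):
        derived = set(facts)
        for premise, conclusion in rules:
            for predicate, subject in list(derived):
                if predicate == premise:
                    derived.add((conclusion, subject))
        return derived

    print(("warrior", "Arjuna") in infer(facts, rules))  # True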
THINK RATIONALLY – LAWS OF THOUGHT
(Diagram: thinking and reasoning rationally relate to cognitive science, computer science, and logical science.)

This approach has certain disadvantages:

1. It is difficult to convert informal language into formal logical terms.
2. There is a big difference between solving a problem in principle (theoretically) and solving it in practice.
3. The approach requires 100% knowledge of all the domains involved, which is rarely available.
Acting Rationally
The rational agent approach.
Definition:
1. Design of intelligent agents.
2. Intelligent behaviour.

A traditional computer program blindly executes the code it is given. It neither acts on its own nor adapts to change based on the outcome.
An agent program or system is expected to do more than a traditional computer program: it is expected to create and pursue goals, change state, and operate autonomously.

A rational agent is an agent that acts to achieve its best performance for a given task.
An agent is something that acts; it acts rightly, or behaves rightly, by maximizing the expected performance.
History of AI
Maturation of Artificial Intelligence (1943-1952):

Year 1943: Warren McCulloch and Walter Pitts proposed a model of artificial neurons.

Year 1949: Donald Hebb demonstrated an updating rule for modifying the
connection strength between neurons. His rule is now called Hebbian
learning.

Year 1950: Alan Turing proposed a test that can check a machine's ability to exhibit intelligent behavior equivalent to human intelligence, called the Turing test.
The birth of Artificial Intelligence (1952-1956):

Year 1955: Allen Newell and Herbert A. Simon created the first artificial intelligence program, which was named the "Logic Theorist". This program proved 38 of 52 mathematics theorems and found new, more elegant proofs for some of them.

Year 1956: The term "Artificial Intelligence" was first adopted by American computer scientist John McCarthy at the Dartmouth Conference. For the first time, AI was coined as an academic field.

At that time, high-level computer languages such as FORTRAN, LISP, and COBOL were being invented, and enthusiasm for AI was very high.
The golden years-Early enthusiasm (1956-1974)
Year 1966: AI algorithms were developed that could solve mathematical problems, and the first chatbot, named ELIZA, was created.

Year 1972: The first intelligent humanoid robot, named WABOT-1, was built in Japan.
The first AI winter (1974-1980)

• The period between 1974 and 1980 was the first AI winter. "AI winter" refers to a time period in which computer scientists dealt with a severe shortage of government funding for AI research.
• During AI winters, public interest in artificial intelligence declined.

A boom of AI (1980-1987)

• Year 1980: After the AI winter, AI came back with "Expert Systems". Expert systems were programs that emulate the decision-making ability of a human expert.
• In the year 1980, the first national conference of the American Association of Artificial Intelligence was held at Stanford University.
The second AI winter (1987-1993)

• The period between 1987 and 1993 was the second AI winter.
• Investors and governments again stopped funding AI research due to high costs and inefficient results; even expert systems such as XCON proved too expensive to maintain relative to their benefits.

The emergence of intelligent agents (1993-2011)

•Year 1997: IBM's Deep Blue beat world chess champion Garry Kasparov, becoming the first computer to beat a reigning world chess champion.
•Year 2002: AI entered the home in the form of Roomba, a robot vacuum cleaner.
•Year 2006: Companies like Facebook, Twitter, and Netflix also started using AI.
Deep learning, big data and artificial general intelligence
(2011-present)

• Year 2011: IBM's Watson won the quiz show Jeopardy!, where it had to solve complex questions as well as riddles. Watson proved that it could understand natural language and solve tricky questions quickly.
• Year 2012: Google launched the Android app feature "Google Now", which was able to provide information to the user as predictions.
• Year 2014: The chatbot "Eugene Goostman" won a competition in the famous "Turing test".
• Year 2018: IBM's "Project Debater" debated complex topics with two master debaters and performed extremely well.
• Google demonstrated an AI program, "Duplex", a virtual assistant that booked a hairdresser appointment over the phone; the person on the other end did not notice she was talking to a machine.
Agents and Environments
Agent:
An agent can be viewed as anything that perceives its environment through sensors
and acts upon that environment through actuators.
For example, human beings perceive their surroundings through their sensory organs, known as sensors, and take actions using their hands, legs, etc., known as actuators.
Agents interact with the environment
through sensors and actuators.

Terminologies:
Percept: Agent’s perceptual inputs at
any given instant.
Percept sequence: complete history of everything the agent has ever perceived.
Behavior of an Agent

Mathematically, an agent's behavior can be described by:
• Agent Function: a mathematical function that maps a sequence of percepts to an action.
• Agent Program: the agent function of an artificial agent is implemented by a program called the agent program. The agent program is the concrete implementation running within some physical system.

• The perception capability is usually called a sensor.
• The actions can depend on the most recent percept or on the entire history (percept sequence).
• The part of the agent that takes an action is called an actuator.
Vacuum cleaner world problem: Example
There are two rooms and one vacuum cleaner.
There is dirt in both rooms.
The vacuum cleaner is present in one of the rooms.
Goal: clean both rooms.

Representation:
    Room 1 (dirt)  |  Room 2 (dirt)
The vacuum cleaner is the agent.
Possible actions: Move Left, Move Right, Clean Dirt.
Continuing the example, there are 8 possible states: 2 agent locations × dirty/clean for Room 1 × dirty/clean for Room 2 = 2 × 2 × 2 = 8.
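To make the state count concrete, here is a minimal Python sketch (the state representation is our own choice) that enumerates the 8 states:

    from itertools import product

    # 2 agent locations x dirt/no-dirt in Room 1 x dirt/no-dirt in Room 2 = 8.
    states = [
        {"agent_at": loc, "dirt": {"Room1": d1, "Room2": d2}}
        for loc, d1, d2 in product(["Room1", "Room2"], [True, False], [True, False])
    ]
    print(len(states))  # 8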
Concept of Rationality
Rational Agent: A rational agent is one that can take the right decision in every
situation.
An ideal rational agent is the one, which is capable of doing expected actions to
maximize its performance measure, on the basis of −
• Its percept sequence
• Its built-in knowledge base
Rationality of an agent depends on the following −
• The performance measures, which determine the degree of success.
• Agent’s Percept Sequence till now.
• The agent’s prior knowledge about the environment.
• The actions that the agent can carry out.
• A rational agent always performs the right action, where the right action is the one that causes the agent to be most successful for the given percept sequence.
Omniscient Agent: An omniscient agent is an agent which knows the
actual outcome of its action in advance. However, such agents are
impossible in the real world.

Rational agents are different from omniscient agents because a rational agent tries to get the best possible outcome with its current perception, which leads to imperfection. A chess AI is a good example of a rational agent because, with the current action, it is not possible to foresee every possible outcome, whereas a tic-tac-toe AI is omniscient, as it always knows the outcome in advance.
Nature of Environment
Environment is the place where the agent is going to work.
To perform a task in an environment, the following important parameters need to be considered:
PEAS stands for Performance measure, Environment, Actuators, and Sensors.
• Performance measures: These are the parameters used to measure the
performance of the agent. How well the agent is carrying out a particular
assigned task.
• Environment: It is the task environment of the agent. The agent interacts
with its environment. It takes perceptual input from the environment and
acts on the environment using actuators.
• Actuators: These are the means of performing calculated actions on the
environment.
• Sensors: These are the means of taking the input from the environment.
Agent Type       | Sensors                         | Actuators
Human Agent      | Eyes, ears, nose, ...           | Hands, joints, legs, vocal cord, ...
Robotic Agent    | Cameras, IR sensors, ...        | Motors
Software Agent   | Keystrokes, file contents, ...  | Writing files, displaying on screen, ...
PEAS example — Autonomous taxi:
• Performance measure: safe, fast, legal, comfortable
• Environment: roads, traffic, customers
• Actuators: steering, accelerator, brake, horn
• Sensors: camera, GPS, odometer, keyboard, microphone, ...
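As a small illustration (the field names are our own choice, not a standard API), the taxi's PEAS description can be written down as a plain data structure:

    # Hypothetical PEAS record for the autonomous taxi (illustrative only).
    taxi_peas = {
        "performance": ["safe", "fast", "legal", "comfortable"],
        "environment": ["roads", "traffic", "customers"],
        "actuators":   ["steering", "accelerator", "brake", "horn"],
        "sensors":     ["camera", "GPS", "odometer", "keyboard", "microphone"],
    }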
Properties of task environments
1. Fully observable vs Partially observable:
If an agent's sensors give it access to the complete state of the environment at each point in time, the task environment is fully observable; otherwise it is only partially observable. If the agent has no sensors at all, the environment is unobservable.
Chess is fully observable: a player gets to see the whole board.
Poker is partially observable: a player gets to see only his own cards, not the cards of the other players.
2. Single Agent vs Multi-Agent:
An agent operating by itself in an environment is a single agent.
A multi-agent environment is one in which other agents are present.
For example, an agent solving a crossword puzzle by itself is clearly in a single-agent environment, whereas an agent playing chess is in a two-agent environment.
• A person left alone in a maze is
an example of the single-agent
system.
• An environment involving more
than one agent is a multi-agent
environment.
• The game of football is multi-
agent as it involves 11 players in
each team.
3. Competitive vs Collaborative
• An agent is said to be in a competitive environment when it
competes against another agent to optimize the output.
• The game of chess is competitive as the agents compete with each
other to win the game which is the output.
• An agent is said to be in a collaborative environment when multiple
agents cooperate to produce the desired output.
• When multiple self-driving cars are found on the roads, they
cooperate with each other to avoid collisions and reach their
destination which is the output desired.
4. Deterministic vs Stochastic
If the next state of the environment is completely determined by the
current state and the actions of the agent, then the environment is
deterministic; otherwise it is non-deterministic / stochastic.

Deterministic environment: the Tic-Tac-Toe game.
Non-deterministic (stochastic) environment: self-driving vehicles.
5. Episodic vs Sequential (Non-episodic):
In an episodic task environment, the agent's experience is divided into atomic incidents or episodes (each episode consists of the agent perceiving and then performing a single action).
There is no dependency between current and previous incidents. In each incident, an agent receives input from the environment and then performs the corresponding action.
Example: Consider a pick-and-place robot used to detect defective parts on a conveyor belt. The robot (agent) makes each decision based on the current part alone, i.e., there is no dependency between current and previous decisions.
In a Sequential environment, the previous decisions can affect all future
decisions. The next action of the agent depends on what action agent has taken
previously and what action agent is supposed to take in the future.
Example : Checkers- Where the previous move can affect all the following
moves.
(Figure: a pick-and-place robot.)
6. Dynamic vs Static

An environment that keeps changing while the agent is acting is said to be dynamic.
An idle environment with no change in its state is called a static environment. If the environment does not change while an agent is acting, then it is static.
Dynamic example: a football game; the other players make it dynamic, as every action produces a new reaction.
7. Discrete vs Continuous
If an environment consists of a finite number of actions that can be performed in it to obtain the output, it is said to be a discrete environment.
• The game of chess is discrete as it has only a finite number of
moves. The number of moves might vary with every game, but
still, it’s finite.
An environment in which the actions cannot be numbered, i.e. one that is not discrete, is said to be continuous.
• Self-driving cars are an example of continuous environments as
their actions are driving, parking, etc. which cannot be numbered.
8. Known vs Unknown
In a known environment, the outcomes for all possible actions are given.
In an unknown environment, the agent has to learn how the environment works in order to make good decisions.
Example
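As a worked example, here is a minimal Python sketch (the encoding is our own) classifying two classic task environments along the properties above, following the usual textbook analysis:

    # Our own encoding of environment properties for two example tasks.
    environments = {
        "crossword puzzle": {
            "observable": "fully", "agents": "single", "deterministic": True,
            "episodic": False, "static": True, "discrete": True,
        },
        "taxi driving": {
            "observable": "partially", "agents": "multi", "deterministic": False,
            "episodic": False, "static": False, "discrete": False,
        },
    }
    for task, props in environments.items():
        print(task, props)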
Structure of Agents
The main goal of AI is to design an agent program that implements the agent function (the mapping from percepts to actions).
The agent program runs on a computing device with physical sensors and actuators; this device is called the architecture:

    Agent = architecture + program
          = physical sensors and actuators + program
Agent programs
The agent program takes the current percept as input from the sensors and returns an action to the actuators.
Note the difference: the agent program takes the current percept as input, while the agent function takes the entire percept history; if the agent's actions need to depend on the entire percept sequence, the agent will have to remember the percepts.
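To make this distinction concrete, here is a minimal Python sketch in the spirit of the classic table-driven agent (the table entries below are hypothetical, written for the two-room vacuum world): the program receives one percept at a time, but the action is looked up from the entire remembered percept sequence.

    # Table-driven agent sketch: action depends on the whole percept sequence.
    percepts = []  # remembered percept history

    # Hypothetical lookup table from percept sequences to actions.
    table = {
        (("Room1", "Dirty"),): "Clean",
        (("Room1", "Clean"),): "Move Right",
        (("Room1", "Clean"), ("Room2", "Dirty")): "Clean",
    }

    def table_driven_agent(percept):
        percepts.append(percept)
        return table.get(tuple(percepts), "NoOp")

    print(table_driven_agent(("Room1", "Clean")))   # Move Right
    print(table_driven_agent(("Room2", "Dirty")))   # Clean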
Types of Agents
Agents can be grouped into five classes based on their degree of perceived
intelligence and capability :
• Simple Reflex Agents
• Model-Based Reflex Agents
• Goal-Based Agents
• Utility-Based Agents
• Learning Agent
Simple reflex agents
• Simple reflex agents are the simplest agents. They take decisions on the basis of the current percept and ignore the rest of the percept history.
• These agents succeed only in a fully observable environment. In a partially observable environment, the agent can enter infinite loops, which can be escaped only by randomizing its actions.
• A simple reflex agent does not consider any part of the percept history during its decision and action process.
• A simple reflex agent works on condition-action rules, which map the current state directly to an action: if the condition is true, the action is taken; otherwise not.
Simple reflex agents
Function Simple-Reflex-Agent(percept) returns action
    static: rules                      /* condition-action rules */
    state  <- Interpret_Input(percept)
    rule   <- Rule_Match(state, rules)
    action <- Rule_Action(rule)
    return(action)

The vacuum agent is a simple reflex agent because its decision is based only on the current location and whether that location contains dirt.
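A minimal Python sketch of this idea for the two-room vacuum world (the percept encoding is our own assumption):

    # Simple reflex vacuum agent: decision uses only the current percept.
    def simple_reflex_vacuum_agent(percept):
        location, status = percept      # e.g. ("Room1", "Dirty")
        if status == "Dirty":
            return "Clean"
        elif location == "Room1":
            return "Move Right"
        else:
            return "Move Left"

    print(simple_reflex_vacuum_agent(("Room1", "Dirty")))  # Clean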
Model-based reflex agents
A model-based agent can handle partially observable environments.
It consists of two important factors, which are Model and Internal State.
Model provides knowledge that helps in understanding how different things occur in the environment, so that the current situation can be studied, a condition can be created, and appropriate actions can then be performed by the agent.

Internal State uses the percept history to represent the current state of the world. The agent keeps track of this internal state and adjusts it with each percept. The internal state is stored by the agent to describe the unseen part of the world.
The state of the agent can be updated by gaining information about how the
environment evolves and how the agent's action affects the environment.
Model-based reflex agents
Function Reflex-Agent-With-State(percept) returns action
    static: state                      /* description of the current world state */
            rules                      /* set of condition-action rules */
    state  <- Update_State(state, percept)
    rule   <- Rule_Match(state, rules)
    action <- Rule_Action(rule)
    state  <- Update_State(state, action)
    return(action)
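A minimal Python sketch, again assuming the two-room vacuum world (the internal state here simply records what is known about each room; the encoding is our own):

    # Model-based reflex vacuum agent: keeps an internal model of the world.
    model = {"Room1": "Unknown", "Room2": "Unknown"}  # internal state

    def model_based_vacuum_agent(percept):
        location, status = percept
        model[location] = status                      # update state from percept
        if status == "Dirty":
            return "Clean"
        if all(s == "Clean" for s in model.values()):
            return "NoOp"                             # model: everything is clean
        return "Move Right" if location == "Room1" else "Move Left"

    print(model_based_vacuum_agent(("Room1", "Clean")))  # Move Right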
Goal-based agents
These kinds of agents take decisions based on how far they currently are from their goal.
Every action is intended to reduce the distance to the goal.
This allows the agent a way to choose among multiple possibilities,
selecting the one which reaches a goal state.
The knowledge that supports its decisions is represented explicitly and
can be modified, which makes these agents more flexible. They usually
require search and planning.
It is an improvement over the model-based agent, as information about the goal is also included. This is because it is not always sufficient to know just the current state; knowledge of the goal is a more beneficial approach.
Goal-based agents
Function Goal-Based-Agent(percept) returns action
    static: state                      /* description of the current world state */
            rules                      /* set of condition-action rules */
            goal                       /* set of specific success states */
    state  <- Update_State(state, percept)
    rule   <- Rule_Match(state, rules)
    action <- Rule_Action(rule)
    state  <- Update_State(state, action)
    if (state in goal) then
        return(action)
    else
        percept <- Obtain_Percept(state, goal)
        return(Goal-Based-Agent(percept))
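A minimal Python sketch of the goal-based idea in a toy grid world of our own construction: the agent predicts the successor state of each action and chooses the one that most reduces its distance to the goal.

    # Goal-based agent sketch: pick the action whose predicted successor
    # state is closest to the goal state.
    goal = (0, 0)  # hypothetical target cell in a grid

    def predict(state, action):
        x, y = state
        moves = {"up": (x, y + 1), "down": (x, y - 1),
                 "left": (x - 1, y), "right": (x + 1, y)}
        return moves[action]

    def distance_to_goal(state):
        return abs(state[0] - goal[0]) + abs(state[1] - goal[1])

    def goal_based_agent(state):
        return min(["up", "down", "left", "right"],
                   key=lambda a: distance_to_goal(predict(state, a)))

    print(goal_based_agent((2, 3)))  # "down": one step closer to (0, 0)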
Utility-based agents
• A utility-based agent uses building blocks which help in taking the best actions and decisions when multiple alternatives are present.
• It is an improvement over the goal-based agent, as it considers not only the goal but also the way the goal can be achieved, so that the goal is achieved in a quicker, safer, or cheaper way.
• When there are multiple possible alternatives, utility-based agents are used to decide which one is best. They choose actions based on a preference (utility) for each state. Utility describes how "happy" the agent is.
• Because of the uncertainty in the world, a utility agent chooses the action that maximizes the expected utility. A utility function maps a state onto a real number that describes the associated degree of happiness (success).
Utility-based agents
Function Utility-Based-Agent(percept) returns action
    static: state                      /* description of the current world state */
            rules                      /* set of condition-action rules */
            goal                       /* set of specific success states */
    state  <- Update_State(state, percept)
    rule   <- Rule_Match(state, rules)
    action <- Rule_Action(rule)
    state  <- Update_State(state, action)
    score  <- Obtain_Score(state)
    if (state in goal) and Best_Score(score) then
        return(action)
    else
        percept <- Obtain_Percept(state, goal)
        return(Utility-Based-Agent(percept))
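A minimal Python sketch of the expected-utility idea, using toy numbers of our own: the agent weighs each action's utility by its probability of success and picks the maximum.

    # Utility-based agent sketch: maximize expected utility over alternatives.
    actions = {"highway": 0.9, "backroad": 0.6}  # hypothetical P(success) per route

    def utility(action, succeeded):
        # Utility maps an outcome onto a real number (degree of "happiness").
        rewards = {"highway": 10.0, "backroad": 8.0}  # faster route scores higher
        return rewards[action] if succeeded else 0.0

    def expected_utility(action):
        p = actions[action]
        return p * utility(action, True) + (1 - p) * utility(action, False)

    def utility_based_agent():
        return max(actions, key=expected_utility)

    print(utility_based_agent())  # "highway": 0.9 * 10.0 > 0.6 * 8.0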
Learning Agent
A learning agent, as the name suggests, has the capability to learn from past experiences and takes actions or decisions based on what it has learned.
It starts with basic knowledge and is then able to act and adapt automatically through learning.
A learning agent has mainly four conceptual components, which are:
• Learning element: It is responsible for making improvements by
learning from the environment
• Critic: The learning element takes feedback from critics which describes
how well the agent is doing with respect to a fixed performance standard.
• Performance element: It is responsible for selecting external action
• Problem Generator: This component is responsible for suggesting
actions that will lead to new and informative experiences.
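A minimal structural sketch in Python (all names and logic are our own, purely illustrative) showing how the four components fit together:

    # Skeleton of a learning agent (illustrative only).
    class LearningAgent:
        def __init__(self):
            self.rules = {}  # knowledge, improved over time by the learning element

        def performance_element(self, percept):
            # Selects the external action using the current knowledge.
            return self.rules.get(percept, "explore")

        def critic(self, percept, action):
            # Scores the action against a fixed performance standard (stub).
            return 1.0 if action != "explore" else 0.0

        def learning_element(self, percept, action, feedback):
            # Uses the critic's feedback to improve the rules.
            if feedback > 0:
                self.rules[percept] = action

        def problem_generator(self):
            # Suggests actions that lead to new and informative experiences.
            return "try-something-new"

    agent = LearningAgent()
    action = agent.problem_generator()                 # explore
    agent.learning_element("dirty", action, feedback=1.0)
    print(agent.performance_element("dirty"))          # now returns the learned action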
Question Bank
1. Describe the four categories of definitions of Artificial Intelligence.
2. Explain Alan Turing test used to exhibit the intelligence of the system.
3. Describe the following: Percept, Percept sequence, Sensors, Actuators, agent
function and agent program.
4. Explain rationality of the agent using the laws of thought approach.
5. Briefly explain the history of AI.
6. Define Agent. Explain the behaviour of the agent.
7. Explain vacuum cleaner world problem with the state diagram.
8. Differentiate rational agent and the omniscient agent with an example.
9. Define environment of an agent. Explain the different parameters to be used for
performing the task by the agent.
10. List and explain the properties of the environment.
11. Explain the structure of the agents.
12. List and explain 5 different types of the agents.
Thank You
