UNIT 1: ARTIFICIAL INTELLIGENCE
DR VIKAS KHARE
Ph.D., M.Tech, MBA, B.E., FSASS
ASSOCIATE PROFESSOR, STME, NMIMS, INDORE
CERTIFIED DATA ANALYST (IIT MADRAS)
CERTIFIED ENERGY MANAGER, BUREAU OF ENERGY EFFICIENCY INDIA
ARTIFICIAL INTELLIGENCE
Artificial intelligence (AI) is a wide-ranging branch of computer
science concerned with building smart machines capable of
performing tasks that typically require human intelligence.
Artificial intelligence is the simulation of human intelligence
processes by machines, especially computer systems. Specific
applications of AI include expert systems, natural language
processing, speech recognition and machine vision.
Can machines think? – Alan Turing, 1950
"The exciting new effort to make computers think ... machines with minds, in the full and literal sense" (Haugeland, 1985)
"The study of mental faculties through the use of computational models" (Charniak and McDermott, 1985)
"The art of creating machines that perform functions that require intelligence when performed by people" (Kurzweil, 1990)
"The study of how to make computers do things at which, at the moment, people are better" (Rich and Knight, 1991)
"The study of the computations that make it possible to perceive, reason, and act" (Winston, 1992)
"The branch of computer science that is concerned with the automation of intelligent behavior" (Luger and Stubblefield, 1993)
Human approach:
Systems that think like humans
Systems that act like humans
Ideal approach:
Systems that think rationally
Systems that act rationally
Acting humanly: The Turing Test approach
Types of AI:
• Reactive Machines
• Limited Memory
• Theory of Mind
• Self-Awareness
ENVIRONMENT AND PROPERTIES OF TASK ENVIRONMENT
Fully Observable vs Partially Observable
• When an agent's sensors can access the complete state of the environment at each point in time, the environment is said to be fully observable; otherwise it is partially observable.
• A fully observable environment is easy to maintain, as there is no need to keep track of the history of the surroundings.
• An environment is called unobservable when the agent has no sensors at all.
Examples:
• Chess – the board is fully observable, and so are the opponent’s moves.
• Driving – the environment is partially observable because what’s around the
corner is not known.
Deterministic vs Stochastic
• When the next state of the environment is completely determined by the agent's current state and action, the environment is said to be deterministic.
• A stochastic environment is random in nature: the next state is not unique and cannot be completely determined by the agent.
Examples:
• Chess – at any given state there are only a limited number of possible moves for a piece, and each move leads to a determined next state.
• Self-driving cars – the outcomes of a self-driving car's actions are not unique; they vary from time to time.
Competitive vs Collaborative
• An environment is competitive when the agent competes against another agent to optimize its output, as in a game of chess.
• An environment is collaborative when multiple agents cooperate to produce the desired output, as when several self-driving cars share the road.
PEAS Representation
PEAS is a model on which an AI agent works. When we define an AI
agent or rational agent, we can group its properties under the
PEAS representation model. It is made up of four terms:
o P: Performance measure
o E: Environment
o A: Actuators
o S: Sensors
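The four PEAS components can be illustrated with a concrete agent. Below is a minimal sketch for a hypothetical vacuum-cleaner robot; all of the entries are illustrative assumptions, not part of any standard API.

```python
# PEAS description of a hypothetical vacuum-cleaner agent.
# Each component is filled with example values for illustration only.
peas_vacuum_cleaner = {
    "Performance measure": ["cleanliness", "efficiency", "battery life"],
    "Environment": ["room", "carpet", "obstacles"],
    "Actuators": ["wheels", "brushes", "vacuum pump"],
    "Sensors": ["camera", "dirt detector", "bump sensor"],
}

for component, examples in peas_vacuum_cleaner.items():
    print(f"{component}: {', '.join(examples)}")
```

Writing out a PEAS table like this is usually the first step in designing a rational agent, since the performance measure and sensors determine what "acting rationally" can even mean for that agent.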
TYPES OF AGENTS
• Agents can be grouped into five classes based on their degree of perceived intelligence
and capability. All of these agents can improve their performance and generate better
actions over time. They are given below:
• Simple Reflex Agent
• Model-based reflex agent
• Goal-based agents
• Utility-based agent
• Learning agent
Simple Reflex Agents
• Simple reflex agents are the simplest agents. They take
decisions on the basis of the current percepts and ignore the rest of the
percept history.
• These agents only succeed in a fully observable environment.
• The simple reflex agent does not consider any part of the percept history
during its decision and action process.
• The simple reflex agent works on the condition-action rule, which maps
the current state to an action. For example, a room-cleaner agent
works only if there is dirt in the room.
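The condition-action rule above can be sketched as a single function. This is a minimal sketch of the room-cleaner example, assuming a hypothetical two-square world with locations "A" and "B"; the percept format and action names are illustrative assumptions.

```python
# A simple reflex agent: maps the current percept directly to an action,
# with no memory of the percept history.
def simple_reflex_agent(percept):
    """percept is a (location, status) pair observed right now."""
    location, status = percept
    if status == "Dirty":
        return "Suck"        # condition: dirt present -> action: clean
    elif location == "A":
        return "MoveRight"   # nothing to clean here, try the other square
    else:
        return "MoveLeft"

print(simple_reflex_agent(("A", "Dirty")))  # Suck
print(simple_reflex_agent(("A", "Clean")))  # MoveRight
```

Note that the function keeps no state between calls, which is exactly why such an agent fails in partially observable environments.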
Problems with the simple reflex agent design approach:
• They have very limited intelligence.
• They have no knowledge of the non-perceptual
parts of the current state.
• Their condition-action rule tables are mostly too big to generate and store.
• They are not adaptive to changes in the environment.
Model-based reflex agent
•The Model-based agent can work in a partially observable environment, and
track the situation.
•A model-based agent has two important factors:
• Model: It is knowledge about "how things happen in the world," so it is called
a Model-based agent.
• Internal State: It is a representation of the current state based on percept
history.
•These agents have the model, "which is knowledge of the world" and based on the
model they perform actions.
•Updating the agent state requires information about:
• How the world evolves
• How the agent's action affects the world.
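The internal state described above can be sketched with the same vacuum-cleaner world used earlier. In this minimal sketch, the agent's model records what it believes about each square based on percept history, so it can act sensibly even when it cannot see the whole environment; the two-square world and action names are illustrative assumptions.

```python
# A model-based reflex agent: keeps an internal state (the believed status
# of each square) that is updated from the percept history.
class ModelBasedVacuum:
    def __init__(self):
        # Internal state: believed status of each square (None = unknown).
        self.model = {"A": None, "B": None}

    def agent(self, percept):
        location, status = percept
        self.model[location] = status   # update the model from the percept
        if status == "Dirty":
            return "Suck"
        # Use the model, not just the current percept, to decide what to do.
        other = "B" if location == "A" else "A"
        if self.model[other] != "Clean":
            return "MoveRight" if location == "A" else "MoveLeft"
        return "NoOp"                   # model says everything is clean

vac = ModelBasedVacuum()
print(vac.agent(("A", "Dirty")))  # Suck
print(vac.agent(("A", "Clean")))  # MoveRight (B's status is still unknown)
```

Unlike the simple reflex agent, this one stops moving once its model says both squares are clean, even though it can only perceive one square at a time.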
Goal-based agents
•Knowledge of the current state of the environment is not always
sufficient for an agent to decide what to do.
•The agent needs to know its goal which describes desirable
situations.
•Goal-based agents expand the capabilities of the model-based agent
by having the "goal" information.
•They choose an action, so that they can achieve the goal.
•These agents may have to consider a long sequence of possible
actions before deciding whether the goal is achieved or not. Such
consideration of different scenarios is called searching and planning,
which makes an agent proactive.
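The searching described above can be sketched with a breadth-first search over a tiny state space: the agent considers sequences of actions until it finds one that reaches the goal. The graph of states and the action names are illustrative assumptions.

```python
# A goal-based agent's planner: search for an action sequence that
# reaches the goal state, using breadth-first search.
from collections import deque

# Hypothetical state space: state -> {action: next_state}
transitions = {
    "Start": {"left": "A", "right": "B"},
    "A": {"forward": "Goal"},
    "B": {"forward": "DeadEnd"},
}

def goal_based_plan(start, goal):
    """Return a sequence of actions leading from start to goal, or None."""
    frontier = deque([(start, [])])   # (state, actions taken so far)
    visited = {start}
    while frontier:
        state, plan = frontier.popleft()
        if state == goal:
            return plan
        for action, nxt in transitions.get(state, {}).items():
            if nxt not in visited:
                visited.add(nxt)
                frontier.append((nxt, plan + [action]))
    return None   # no action sequence achieves the goal

print(goal_based_plan("Start", "Goal"))  # ['left', 'forward']
```

The key difference from a reflex agent is that the action chosen now ("left") is justified only by where the whole sequence eventually leads.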
Utility-based agents
•These agents are similar to the goal-based agent but provide an
extra component of utility measurement which makes them
different by providing a measure of success at a given state.
•Utility-based agents act based not only on goals but also on the best
way to achieve them.
•The Utility-based agent is useful when there are multiple
possible alternatives, and an agent has to choose in order to
perform the best action.
•The utility function maps each state to a real number to check
how efficiently each action achieves the goals.
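The utility function described above can be sketched as follows: when several actions all achieve the goal, the agent picks the one whose resulting state has the highest utility. The route names and utility numbers are illustrative assumptions.

```python
# A utility-based agent: chooses among alternatives by comparing the
# utility (a real number) of the state each action leads to.
def utility(state):
    """Map each state to a real number measuring how desirable it is."""
    scores = {"fast_route": 0.9, "scenic_route": 0.6, "toll_route": 0.4}
    return scores.get(state, 0.0)

def utility_based_agent(possible_actions):
    """Pick the action whose resulting state has maximum utility."""
    return max(possible_actions, key=lambda a: utility(possible_actions[a]))

# Hypothetical alternatives: action -> resulting state
actions = {"take_highway": "fast_route",
           "take_backroads": "scenic_route",
           "take_tollway": "toll_route"}
print(utility_based_agent(actions))  # take_highway
```

All three routes reach the destination (the goal), so a goal-based agent could not choose between them; the utility function is what breaks the tie.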
Learning Agents
•A learning agent in AI is a type of agent that can learn from its past experiences;
that is, it has learning capabilities.
•It starts with basic knowledge and is then able to act and adapt automatically
through learning.
•A learning agent has mainly four conceptual components, which are:
• Learning element: It is responsible for making improvements by learning from
environment
• Critic: The learning element takes feedback from the critic, which describes how
well the agent is doing with respect to a fixed performance standard.
• Performance element: It is responsible for selecting external actions.
• Problem generator: This component is responsible for suggesting actions that
will lead to new and informative experiences.
•Hence, learning agents are able to learn, analyze their performance, and look for new
ways to improve it.
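The four conceptual components above can be sketched as methods of one class. This is a minimal sketch: the "learning" is a toy rule update driven by the critic's feedback, and the task, state names, and reward scheme are all illustrative assumptions.

```python
# A learning agent with the four conceptual components: performance
# element, critic, learning element, and problem generator.
import random

class LearningAgent:
    def __init__(self):
        self.rules = {}   # knowledge improved over time by the learning element

    def performance_element(self, state):
        """Select an external action using the rules learned so far."""
        return self.rules.get(state, "explore")

    def critic(self, state, action, reward):
        """Judge the agent against a fixed performance standard."""
        return reward > 0

    def learning_element(self, state, action, good):
        """Improve the rules based on the critic's feedback."""
        if good:
            self.rules[state] = action

    def problem_generator(self, actions):
        """Suggest actions that lead to new and informative experiences."""
        return random.choice(actions)

agent = LearningAgent()
# One illustrative experience: act, get feedback, learn from it.
good = agent.critic("dirty_room", "clean", reward=1)
agent.learning_element("dirty_room", "clean", good)
print(agent.performance_element("dirty_room"))  # clean
```

The separation matters: the performance element alone is just a reflex agent, and it is the critic and learning element feeding back into its rules that make the whole system improve over time.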